Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit


Session at a glance: summary, key points, and speakers overview

Summary

The session focused on how artificial intelligence can be integrated with India’s digital public infrastructure (DPI) to unlock scale and new opportunities [94-99]. Pramod Varma opened by noting that the recent event showed a shift from an elite, exclusive audience in Paris to a broader, democratized participation of students, children, and young entrepreneurs [10-13]. He argued that India is uniquely positioned to diffuse AI because a decade of digital investment has brought a billion people into the formal system through identity, banking, and paperless transactions [28-33]. Key DPI components such as Aadhaar, eSign, DigiLocker, UPI, GST invoicing, and FastTag generate machine-readable, cryptographically signed data that can be accessed via APIs [38-44][45]. These programmable data trails, combined with the DPDP Act, which gives individuals ownership of their data, create a verifiable ecosystem for AI applications [46-48]. Varma predicted that countries that layer AI on top of robust DPI will achieve ten- to fifty-fold economic gains compared with those lacking such infrastructure [49-50]. He emphasized that India’s political will, regulatory support, and ready infrastructure have converged in the past decade, making it an ideal testbed for AI diffusion [50]. A further advantage, he said, is the presence of young, adventurous entrepreneurs who are eager to tackle the nation’s many problems [51-55]. The startup ecosystem has exploded from about 1,000 firms in 2016 to roughly 100,000 today, with a projection of one million by 2035, illustrating the scale of potential AI-driven innovation [81-84]. Varma cautioned that while not every venture will succeed, the willingness to attempt bold solutions is essential for leveraging AI and DPI together [85-86]. He concluded by handing over to a panel that would discuss the combinatorial power of DPI and AI and the exponential impact of their integration [88-90].
The moderator then outlined the panel’s agenda, asking how DPI architecture can mitigate new AI risks, what opportunities and threats arise, and whether new products and ecosystems can emerge [95-99]. Overall, the discussion highlighted India’s strategic advantage in using its extensive, programmable digital infrastructure to accelerate AI adoption and drive inclusive growth, while recognizing the need to manage emerging risks [49-50][95-99].


Key points

India is positioning itself as a model for the democratization of AI, moving from an “elite, exclusive” past to broad participation by students, children, and young entrepreneurs, a shift highlighted as a national achievement. [10-14]


A robust Digital Public Infrastructure (DPI) underpins this AI push, built over the last decade through initiatives such as Aadhaar, e-Sign, DigiLocker, UPI, GST, FastTag, and GPI, all of which are API-driven, cryptographically secured, and generate machine-readable, verifiable data trails. [32-45]


The synergy of DPI with AI is framed as a competitive economic advantage, with programmability, composability, and citizen-controlled data (via the DPDP Act) expected to deliver “10x or 50x” growth for countries that combine AI with strong DPI foundations. [46-50]


India’s vibrant entrepreneurial ecosystem is seen as the engine for diffusing AI, illustrated by the rapid rise from ~1,000 startups in 2016 to ~100,000 today and a projection of 1 million startups by 2035, leveraging DPI to tackle the nation’s myriad problems. [51-55][81-84]


The upcoming panel will explore the opportunities, risks, and product-market implications of embedding AI into DPI, asking how DPI architecture can mitigate new risks and what novel ecosystems might emerge. [94-99]


Overall purpose/goal:


The keynote sets the stage for a panel discussion on “AI and Digital Public Infrastructure” by showcasing India’s unique readiness (its inclusive AI democratization, mature DPI, and entrepreneurial vigor) and by framing the strategic question of how AI-enabled DPI can unlock scalable benefits while managing associated risks.


Overall tone:


The speaker adopts an enthusiastic, celebratory tone, repeatedly emphasizing India’s achievements and future potential. The mood is optimistic and confident, with occasional rhetorical emphasis on the country’s “lot of problems” (repeated for effect). By the end, the tone shifts slightly toward a more formal, invitational stance as the speaker hands over to the panel, maintaining the underlying optimism while acknowledging the need to address risks.


Speakers

Pramod Varma – Dr.; Co-founder & Chief Architect, NFH India; expertise in open-source, scalable digital systems, decentralized networks, AI and digital public infrastructure. [S1]


Speaker 1 – Moderator/host of the session; specific role or title not specified. [S2]


Additional speakers:


(none)


Full session report: comprehensive analysis and detailed insights

Speaker 1 opened the session, highlighted the expertise of Pramod Varma and invited him to deliver the keynote [1-4].


Pramod Varma thanked the audience, congratulated the Government of India, the Ministry of Electronics and Information Technology and the Ministry of External Affairs for a “fantastic week,” and observed that the audience had shifted from an “elite, exclusive” crowd in Paris to a diverse mix of students, children, entrepreneurs and young entrepreneurs – a visible sign of AI democratisation in India [5-13].


He praised the Prime Minister as a strong supporter of AI diffusion [15-16] and outlined two strategic arguments for India’s advantage: (1) the pursuit of a sovereign large-language model (LLM) and (2) the view that AI extends far beyond LLMs, a perspective rooted in his own academic work on AI dating back to 1989 [19-27].


He then linked these arguments to India’s decade-long digital investment, noting that a billion people have been brought “from invisible to the system” through universal identity, bank accounts and paperless transactions [28-34].


Varma described the bold, audacious projects of that period as partly “lucky” and emphasized that Indians have embraced these tools at population scale [35-37].


Beyond identity, India has digitised business processes: the Goods and Services Tax (GST) system now generates billions of machine-readable, cryptographically protected, digitally signed invoices [38-40][C1]; FastTag creates a verifiable transport record in the form of an e-way bill [42-43][C2]; and GPI further formalises financial inclusion [44-45].


All of these public services (Aadhaar, eSign, DigiLocker, UPI, GST invoices, FastTag, GPI) are exposed via APIs, making them programmable and composable [45-48].


The recently enacted Digital Personal Data Protection (DPDP) Act gives individuals the right to control their own data, ensuring that “data belongs to the people, data belongs to the small businesses” [46-48][C3].


Varma argued that the combination of programmable APIs with massive, verifiable data trails creates an ideal substrate for AI [45-48][C4].


He forecast that countries that layer AI on top of robust DPI could achieve ten- to fifty-fold higher economic growth than those without such infrastructure, attributing this potential to the right place, political will, regulatory push and infrastructure built within a single decade [49-52].


Turning to entrepreneurship, Varma highlighted India’s dynamism, citing the rise from roughly 1,000 start-ups in 2016 to about 100,000 today and a projection of 1 million start-ups by 2035 [81-84][C5]. He stressed that, while not every venture will succeed, attempting bold solutions to the nation’s many problems is essential for AI diffusion [85-86].


Concluding his remarks, Varma introduced the panel on the “combinatorial power of DPI and AI,” inviting discussion on exponential benefits, risks and new market ecosystems [87-90][92-93].


Speaker 1 then framed the panel’s agenda, asking how AI integration can unlock scalable benefits, what opportunities and risks arise, how DPI architecture might mitigate those risks, and whether AI-enabled DPI can spawn novel products, services and ecosystems [94-99][C6].


Session transcript: complete transcript of the session
Speaker 1

…infrastructure in the country. He’s a prominent expert on open source, scalable digital systems and decentralized networks. It is now my honor to call upon Pramod to take the stage to give his keynote address. Thank you.

Pramod Varma

Friday evening can be really hard. It’s tiring right after a long week. So thank you for having me here and I don’t want to take up too much of your time. First of all, I want to congratulate Government of India, MeitY, MEA. What a fantastic week. And compared to last time in Paris, we heard actually from many people who attended that last time it was elite, exclusive people attending it. This is true democratization. You can see the number of students, children, entrepreneurs, young entrepreneurs walking in. It just tells you that… India can definitely demonstrate what it means to democratize and diffuse AI. And our prime minister is, I think, a mastermind at it. So he’s a great supporter of it.

But what I wanted to give you about five minutes or so is that why India is peculiarly in advantage of diffusing AI. Now, we have two arguments we can make. Our own LLM. I think much of our discussions and today AI discussions are all about sovereign LLM, big LLM. How are we going to build our own LLM? LLM is only one part of it. There’s so much more there to AI, especially for the people who have lived. My master’s was in AI. I was in 89. So. AI has been there for a while. I think now it’s all coming together. But AI spans much beyond LLMs and why India is peculiarly set up to succeed is because of the serendipity, but it is because of the investment we made in the last decade, digital investment.

And people who have not looked at the macro picture, it’s very important to understand India over the last decade brought a billion people from invisible to the system. They were invisible to the system to being visible to the system. And we formalized a billion people by giving everyone an identity, everyone a bank account, everyone can transact. Make payments, paperless signature. So we built Aadhaar, begin with. Of course we built in 2000, I remember 2014 was seminal for us because I was actually architecting eSign, DigiLocker and UPI at the same time. And who knew they were all going to play out. But I think brave people are also lucky. I think when we attempt something bold and audacious, sometimes luck comes in the way and Indians have truly embraced all this into actually at population scale, in one sense going beyond what we can.

And it did not stop there though. We actually digitized businesses through GST. India is the only country where we have billions of invoices, actual proof of purchase, in machine-readable, cryptographically protected, digitally signed fashion. That’s like a goldmine. That’s each of those steps we made. Or FastTag. When FastTag gets done on the road, there’s a proof of transport, an e-way bill. Each of them is again machine-readable, cryptographically signed and usable by the next layer of innovation. So what we did with GPI by formalizing is one inclusion story. It was a brilliant inclusion story, to get everyone into the formal system. But it also, you know, serendipitously set up the two most powerful ingredients for AI: data and programmability. Every one of our infrastructure components, DPI components, are API based. Every one of them. This is why we have fun pay, the road and grow, and everyone else building applications and workflows using this underlying digital public infrastructure: identity APIs, verification APIs, DigiLocker document verification APIs, eSign for paperless signature, UPI and mandates for recurring payments and other collections or payments. Each of them is programmable. Combining that with the data that gets generated: a billion people, billion plus people, in India generate a verifiable data trail.

And that’s beautiful. But even more beautiful when it is controlled and owned by the individuals, which is what our DPDP Act is actually giving you. Our privacy bill is giving us the right to control our own data. And India has truly demonstrated that the data belongs to the people, data belongs to the small businesses, using which now they can create a virtuous cycle. So I think AI’s two biggest ingredients, programmability and composability, combined with data, a verifiable data trail, allow India, and this is a bold prediction I’m making: 10 years later, when you compare countries’ economic progress and growth, countries who have invested in DPI and combined AI on top of DPI would have done 10x or 50x better than countries who have no underlying infrastructure.

So I think India is lucky: right place, right political will, right regulatory push, right infrastructure readiness, all in the last decade, all in one decade. But my favorite part of all that is that India is also blessed with young adventurous entrepreneurs. Entrepreneurs who have no inhibition at all. At least a few of you came to meet me outside saying I’m starting a company. It’s just music to our ears because India’s problems are a plethora. As you know, we are a country of problems. Anywhere you look, we see problems. Energy sector, agriculture. We have a lot of problems. We have a lot of problems.

We have access to capital, access to investment, access to the right products, not solved.

We have much to solve. And if you combine our infrastructure and diffuse AI, but diffuse AI through entrepreneurship, the way we diffused DPI through entrepreneurship: we went from 1,000 companies in 2016 to 100,000 startups today. And the prediction is that we’ll get 1 million startups by 2035. It’s a very high chance we’ll get there. Doesn’t mean all of them will succeed. But attempting matters. I think young people have to attempt, audacious attempts, bold attempts to solve problems. And India is beautifully set up. And we have a wonderful panel. I don’t want to take up too much of time. Wonderful panel talking about the combinatorial power of DPI and AI, combining both, what can be really an exponential power, and why countries who are investing, and they’re all global experts, are deeply investing into DPI.

So I give my floor to them. Thank you. Thank you to all of you too, even if so many people coming and sitting, really appreciate it, much appreciate and a wonderful weekend and keep imagining and keep building and keep solving. Thank you so much.

Speaker 1

Thank you so much for setting that context. Now we will have the panel on AI and digital public infrastructure. The session will explore how integrating AI into DPI can unlock new benefits at scale while also discussing the challenges and risks of such an integration. How can DPI architecture mitigate new risks that emerge as AI becomes embedded in foundational digital systems? What are the opportunities and risks that emerge as a result of integrating AI into DPI? And could integrating AI into DPI enable the development of new products, services and market ecosystems?

Related Resources: knowledge base sources related to the discussion topics (14)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high)

“India has digitised business processes: the GST system now generates billions of machine‑readable, cryptographically protected, digitally signed invoices.”

The knowledge base states that India is the only country with billions of invoices that are machine-readable, cryptographically protected, and digitally signed [S1] and [S58].

Confirmed (high)

“A billion people have been brought ‘from invisible to the system’ through universal identity, bank accounts and paperless transactions.”

Digital public infrastructure in India provides over 1.3 billion digital identities and has enabled massive inclusion, confirming the scale of bringing people into the system [S63] and [S12].

Additional Context (medium)

“All of these public services (Aadhaar, eSign, DigiLocker, UPI, GST invoices, FastTag, GPI) are exposed via APIs, making them programmable and composable.”

The knowledge base highlights that APIs serve as “ports-of-entry” for cloud resources and that Aadhaar, UPI and other services are built on API-driven digital public infrastructure [S72] and [S73].

Confirmed (high)

“The recently enacted Digital Personal Data Protection (DPDP) Act gives individuals the right to control their own data.”

India’s Digital Personal Data Protection Bill has been passed, establishing legal rights for individuals to control personal data [S69].

Confirmed (high)

“India is the only country where we have billions of invoices, actual proof of purchase in machine‑readable, cryptographically protected, digitally signed fashion.”

The source explicitly notes India’s unique position with billions of such invoices, matching the claim [S1] and [S58].

External Sources (76)
S1
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — -Moderator: Session moderator (no specific expertise, role, or title mentioned beyond moderating the discussion) …inf…
S2
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S3
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S5
AI startups in Silicon Valley rethink VC funding with leaner teams and strategic growth — In Silicon Valley, a notable trend is emerging as AI startups achieve significant revenue with leaner teams, challenging t…
S6
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S7
Designing Indias Digital Future AI at the Core 6G at the Edge — Radhakant acknowledges strong governmental backing for 6G, citing support from the Prime Minister, the VARA 6G Alliance,…
S8
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — I’ll take a much higher level approach. You know, I think there’s a sort of consensus around AI regulation that’s kind o…
S9
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S10
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — India’s approach to Digital Public Infrastructure (DPI) emphasizes the importance of civil society and citizen engagemen…
S11
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI gover…
S12
Empowering People with Digital Public Infrastructure — Rene Saul: Everything is connected. This AI, energy, infrastructure, where are you going to put it, where are you goin…
S13
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S14
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — ## Core Discussion Themes ## Participants and Perspectives ## Practical Applications ### Regional Implementation and …
S15
Panel Discussion: 01 — in building this healthy and fair ecosystem to boost the innovation on artificial intelligence.
S16
Open Forum #66 the Ecosystem for Digital Cooperation in Development — Tale Jordbakke: First of all, thank you for having NORAD in this panel. In NORAD, we believe that achieving the SDGs can…
S17
Welcome Address — This comment introduces a major policy position that distinguishes India’s approach from other major powers. It shifts t…
S18
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — “The democratization of AI with inclusion, which I touched upon in my keynote at the EIFGO Global Summit in Geneva last …
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S20
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S21
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — The analysis of the provided statements highlights several key points from all speakers. One main argument is that digit…
S22
The future of Digital Public Infrastructure for environmental sustainability — Ensuring that DPI development is harmonised with these goals is vital for a fair and secure digital landscape. In summat…
S23
Sticking with Start-ups / DAVOS 2025 — This comment provides valuable context on the rapid growth of India’s startup ecosystem, offering a global perspective o…
S24
WS #257 Emerging Norms for Digital Public Infrastructure — Benefits and Risks of DPI Jyoti Panday: Good morning, everyone. As Professor Muller introduced me, I’m Jyoti Pandey, …
S25
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S26
AI for agriculture Scaling Intelegence for food and climate resiliance — By creating interoperable networks based on open protocols like Beacon, by collaborating with each other, one of us is b…
S27
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI gover…
S28
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Amandeep Singh Gill:Yes, if I may jump in quickly, I think building on Eileen’s point, I think the foundations are essen…
S29
How AI Drives Innovation and Economic Growth — High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) sugg…
S30
Secure Finance Risk-Based AI Policy for the Banking Sector — And one of them could do much, much better than the guys who you think are at the cutting edge today. So this is an emer…
S31
How AI Drives Innovation and Economic Growth — The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterize…
S32
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — There was unexpected consensus that fear about AI is widespread across different age groups and demographics, but this f…
S33
Omnipresent Smart Wireless: Deploying Future Networks at Scale — Finally, the analysis concludes by asserting the need for regulations at national, regional, and international levels. T…
S34
Day 0 Event #171 Legalization of data governance — The speakers generally agreed on the importance of comprehensive data governance frameworks, the need to balance securit…
S35
Opening of the session — A thread of concern weaves through the discussion on artificial intelligence (AI) and data protection: the call is for A…
S36
AI as critical infrastructure for continuity in public services — The discussion revealed that data sovereignty encompasses more than simple data localization. As Pramod noted, true sove…
S37
Current Developments in DNS Privacy | IGF 2023 — Data ownership and privacy are complex issues that need to be considered in the context of data protection. Encouraging …
S38
Diplo & GIP at Big data, big problems? Challenges and opportunities in the context of data ownership, privacy and protection — DiploFoundation and the Geneva Internet Platform will participate in the event ‘Big Data, Big Problems? Challenges and O…
S39
Big Data, Big Problems? Challenges and Opportunities in the context of Data Ownership, Privacy and Protection — Challenges of big data: What chalenges does the use of big data pose? Do the benefits outweight these challenges? The e…
S40
Networking Session #60 Risk & impact assessment of AI on human rights & democracy — Adopted by the Council of Europe, includes modules for risk analysis, stakeholder engagement, impact assessment, and mit…
S41
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — Analysis of context-context is crucial, I will say in two words why. Then we go to stakeholder engagement, and this morn…
S42
A Digital Future for All (afternoon sessions) — AI governance should focus on both opportunities and risks, not just existential risks. There is a need to balance innov…
S43
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — Varma begins by acknowledging this is a “Friday evening” and expressing consideration for the audience’s time. He congra…
S44
AI Innovation in India — This comment energized the discussion by providing a grand vision that contextualized all the individual innovations wit…
S45
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S46
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — “The democratization of AI with inclusion, which I touched upon in my keynote at the EIFGO Global Summit in Geneva last …
S47
The Global Power Shift India’s Rise in AI & Semiconductors — The panellists addressed fundamental changes in how knowledge is acquired and applied in the AI era. Singh emphasised th…
S48
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S49
Digital Public Infrastructure, Policy Harmonization, and Digital Cooperation — Marie Ndé Sene Ahouantchede explains that ECOWAS views public digital infrastructure as built on three pillars: payment …
S50
Creating digital public infrastructure that empowers people | IGF 2023 Open Forum #168 — Countries around the world have made investments into digital public infrastructure (DPI) that supports vital society-wi…
S51
Internet Governance Forum 2024 — The potential benefits of DPI were widely recognised, with examples such as Brazil’s PIX payments system demonstrating h…
S52
The future of Digital Public Infrastructure for environmental sustainability — Ensuring that DPI development is harmonised with these goals is vital for a fair and secure digital landscape. In summat…
S53
The Foundation of AI Democratizing Compute Data Infrastructure — And so with, and India has been looking at this data empowerment and protection architecture, which is on that lines. An…
S54
Sticking with Start-ups / DAVOS 2025 — The startup ecosystem is evolving rapidly, with India emerging as a major player
S55
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Amandeep Singh Gill:Yes, if I may jump in quickly, I think building on Eileen’s point, I think the foundations are essen…
S56
Open Forum #30 High Level Review of AI Governance Including the Discussion — Abhishek Singh: Thank you. Thank you, Yoichi, and thank you for highlighting this very, very important issue of AI gover…
S57
30th Annual FIRST Conference — featured invited and keynote presentations
S58
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-dr-pramod-varma-co-founder-chief-architect-nfh-india-ai-impact-summit — And it did not stop there though. We actually digitized businesses through GST. India is the only country where we have …
S59
Connecting the Unconnected in the field of Education Excellence, Cyber Security & Rural Solutions and Women Empowerment in ICT — Ninad S. Deshpande: Thank you, Ash. That’s a round of applause for India’s achievements. Without more ado, I would like …
S60
Open Internet Inclusive AI Unlocking Innovation for All — And India really has a competitive advantage. In fact, we’ve been looking for startups we could find, fund that would ba…
S61
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S62
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — Rather than viewing India’s complexity as a challenge, Raghavan presented it as the country’s greatest competitive advan…
S63
WS #98 Universal Principles Local Realities Multistakeholder Pathways for DPI — Assessment of financial inclusion benefits through DPI India’s DPI development was driven by the need to provide public…
S64
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Nandan Nilekani: Thank you, Mr. Nandan, and it’s really a great honor and privilege to be speaking at the ITF 2025 in No…
S65
Nepal Engagement Session — The speakers emphasized how these tools have achieved remarkable scale and adoption. Uttar Pradesh successfully onboarde…
S66
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Awesome. Great question, Midu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technolo…
S67
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rishad-premji — India’s advantage will come from developing talent at scale, not just people trained on AI, but people who can apply it …
S68
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — Krishna Moorthy:Okay, thank you. So as I said earlier, it’s not something that we can come to a conclusion immediately t…
S69
The Challenges of Data Governance in a Multilateral World — India has received praise for its positive approach to technology and digitization. The country recently passed the Digi…
S70
https://dig.watch/event/india-ai-impact-summit-2026/safe-and-responsible-ai-at-scale-practical-pathways — Thank you. entities and the 3 ,000 entities actually manage 5 million new compliances in a year. They have those kind of…
S71
Towards inclusive digital innovation ecosystems – do’s and don’ts and what next? — Ms. Yolanda Martinez:Hello everyone and thank you for the invitation to be part of this panel and I would like to build …
S72
NIST Special Publication 500-317 (DRAFT) — 1. Accessible Ports-of-Entry : APIs can significantly leverage the ability to develop fullyaccessible “ports-of-entry” t…
S73
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Success of Aadhaar, UPI, and data layer implementations that enabled various sector applications to be built on top
S74
Data act: member states agree common position on fair access to and use of data — The EU member states’ representatives (Coreper) reached acommon position(negotiating mandate), allowing the Council to e…
S75
New consumer data privacy law signed in the US state of Delaware — Governor John Carneysigned the Delaware Personal Data Privacy Act (DPDPA). This makes Delaware the 12th state in the US …
S76
Ethiopian Parliament passes digital data protection legislation — The Personal Data Protection Proclamation (PDPP) passed by Ethiopia’s Federal House of Representatives on April 4, 2024,…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Pramod Varma
8 arguments · 146 words per minute · 1200 words · 490 seconds
Argument 1
Broad participation shows AI is no longer elite‑only (Pramod Varma)
EXPLANATION
Pramod highlights that the recent AI event attracted a wide range of participants, including students, children, and young entrepreneurs, contrasting with previous gatherings that were limited to elite attendees. This shift signals a democratization of AI access and interest across society.
EVIDENCE
He compared the current event to the previous one in Paris, noting that the earlier conference was attended by elite, exclusive people, whereas now a large number of students, children, and entrepreneurs are walking in, demonstrating broader participation [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel remarks emphasize the need to broaden accessibility and inclusion, mentioning multi-modality and human-in-the-loop approaches to widen participation in AI-enabled DPI [S13].
MAJOR DISCUSSION POINT
Democratization of AI participation
Argument 2
Strong political backing from the Prime Minister accelerates AI adoption (Pramod Varma)
EXPLANATION
Pramod asserts that the Indian Prime Minister is a key champion of AI, providing strategic support that speeds up the country’s AI initiatives. This political endorsement is presented as a critical factor for rapid diffusion of AI technologies.
EVIDENCE
He described the Prime Minister as a mastermind for AI and a great supporter, indicating strong governmental backing for AI efforts [15-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Prime Minister’s explicit support for advanced technology programmes such as 6G illustrates high-level political endorsement for AI initiatives [S7]; his public stance on AI-related job concerns further underscores governmental backing [S9].
MAJOR DISCUSSION POINT
Political support for AI
Argument 3
India built a suite of API‑based public services (Aadhaar, eSign, DigiLocker, UPI, GST, FastTag) that generate machine‑readable, cryptographically signed data (Pramod Varma)
EXPLANATION
Pramod outlines how India has created a comprehensive digital public infrastructure (DPI) consisting of APIs for identity, signatures, document storage, payments, tax invoicing, and transport verification. These services produce large volumes of verifiable, machine‑readable data.
EVIDENCE
He listed the construction of Aadhaar, eSign, DigiLocker, UPI, GST invoicing, and FastTag, each producing cryptographically signed, machine-readable data, and noted that billions of invoices and transport records are now digitally captured [32-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s digital public infrastructure, notably the GST system that produces billions of machine-readable, cryptographically protected invoices, is cited as a concrete example of API-driven public services [S1].
MAJOR DISCUSSION POINT
API‑centric digital public infrastructure
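The “machine-readable, cryptographically signed” property described in this argument can be illustrated with a minimal sketch. Real systems such as GST e-invoicing and DigiLocker rely on PKI-based digital signatures; the HMAC used below is a deliberately simplified stand-in, and every field name, key, and record shown is hypothetical.

```python
import hashlib
import hmac
import json

SECRET_KEY = b"demo-signing-key"  # stand-in for a real issuer's private key


def sign_record(record: dict) -> dict:
    """Attach a signature over the canonical JSON form of a record."""
    payload = json.dumps(record, sort_keys=True, separators=(",", ":")).encode()
    signature = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return {"record": record, "signature": signature}


def verify_record(signed: dict) -> bool:
    """Recompute the signature over the record and compare in constant time."""
    payload = json.dumps(signed["record"], sort_keys=True, separators=(",", ":")).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signed["signature"])


# A hypothetical machine-readable invoice, loosely modeled on a GST e-invoice.
invoice = {"invoice_id": "INV-001", "seller_gstin": "XXGSTINXX", "amount": 1250.00}
signed = sign_record(invoice)
print(verify_record(signed))          # True: the data trail is intact
signed["record"]["amount"] = 9999.00  # tamper with the record
print(verify_record(signed))          # False: tampering is detected
```

Because the signature covers a canonical serialization of the record, any downstream consumer, including an AI pipeline, can check integrity before trusting the data.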
Argument 4
The API‑centric, programmable nature of DPI enables composable AI applications (Pramod Varma)
EXPLANATION
He argues that because all DPI components expose programmable APIs, developers can layer AI services on top of them, creating modular and composable applications. This programmability is a key enabler for innovative AI solutions.
EVIDENCE
He described how every infrastructure component (identity, verification, e-sign, UPI, etc.) is API-based and programmable, allowing data and services to be combined for AI-driven workflows [45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of DPI as a connective layer that facilitates integration of AI services highlights its programmable, composable nature for building new workflows [S12].
MAJOR DISCUSSION POINT
Programmability of DPI for AI composability
AGREED WITH
Speaker 1
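The composability claim in this argument can be sketched as a chain of API-shaped functions. All names and behaviors below are hypothetical placeholders, not actual DPI endpoints; real services (e-KYC, DigiLocker, UPI) expose their own documented interfaces.

```python
# Hypothetical stand-ins for API-based DPI building blocks.
def verify_identity(user_id: str) -> bool:
    return user_id.startswith("user-")  # placeholder identity check


def fetch_documents(user_id: str) -> list[dict]:
    # Placeholder document store returning verified records.
    return [{"type": "income_certificate", "verified": True}]


def ai_credit_assessment(documents: list[dict]) -> str:
    # Placeholder for an AI service layered on top of verified data.
    return "eligible" if all(d["verified"] for d in documents) else "needs review"


def loan_workflow(user_id: str) -> str:
    """Compose identity, document, and AI layers into one workflow."""
    if not verify_identity(user_id):
        return "identity check failed"
    return ai_credit_assessment(fetch_documents(user_id))


print(loan_workflow("user-42"))  # each step is swappable because it is API-shaped
```

The point of the sketch is the shape, not the logic: because each layer is a callable interface, an AI step can be slotted into an existing workflow without rebuilding the layers beneath it.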
Argument 5
The DPDP Act gives individuals control over their data, turning personal data into a trusted resource for AI (Pramod Varma)
EXPLANATION
Pramod notes that India’s Digital Personal Data Protection (DPDP) Act empowers citizens to own and manage their personal data, which can then be safely leveraged for AI development. This legal framework builds trust in data usage.
EVIDENCE
He explained that the DPDP Act and privacy bill grant individuals the right to control their own data, ensuring that data belongs to people and small businesses, thereby creating a trusted data pool for AI [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The DPDP Act’s provisions granting individuals ownership and control over their data are presented as creating a trusted data pool for AI development [S1].
MAJOR DISCUSSION POINT
Data ownership and privacy for AI
AGREED WITH
Speaker 1
Argument 6
Verifiable, large‑scale data trails from DPI constitute a “goldmine” for training and deploying AI models (Pramod Varma)
EXPLANATION
He emphasizes that the massive, cryptographically secured data generated by DPI—such as billions of GST invoices and FastTag transport records—provides a rich, high‑quality dataset for AI model training and deployment.
EVIDENCE
He called the billions of machine-readable, cryptographically protected invoices a “goldmine” and highlighted that each transaction (e.g., FastTag road tolls) creates a verifiable data trail usable for AI innovation [39-41][45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The existence of massive, verifiable, cryptographically secured transaction records, such as GST invoices and FastTag toll logs, is highlighted as a rich resource for AI model training [S1].
MAJOR DISCUSSION POINT
Data as AI training resource
Argument 7
Young, adventurous entrepreneurs are rapidly launching AI‑driven startups, growing from 1,000 in 2016 to 100,000 today, with a target of 1 million by 2035 (Pramod Varma)
EXPLANATION
Pramod points out the vibrant entrepreneurial ecosystem in India, where a new generation of risk‑taking founders is creating AI‑focused companies at scale. He cites rapid growth in startup numbers and an ambitious future target.
EVIDENCE
He mentioned the presence of enthusiastic entrepreneurs eager to start companies and then provided statistics that the number of startups rose from 1,000 in 2016 to 100,000 today, with a goal of reaching one million by 2035 [51-55][81-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rapid expansion of India’s AI startup ecosystem from 1,000 to 100,000 firms, with a goal of one million by 2035, is documented in the keynote summary [S1]; parallel trends of lean AI startups are noted in Silicon Valley analyses [S5].
MAJOR DISCUSSION POINT
Startup surge in AI
Argument 8
Combining DPI with AI is predicted to deliver 10×–50× higher economic growth for countries that have such infrastructure (Pramod Varma)
EXPLANATION
He makes a bold forecast that nations which integrate AI on top of robust DPI will experience ten to fifty times greater economic progress compared to those lacking such foundations. This underscores the strategic value of DPI‑AI synergy.
EVIDENCE
He stated that countries investing in DPI and layering AI on top would achieve 10x or 50x better economic outcomes over a ten-year horizon, citing the combination of programmability, composability, and verifiable data as the drivers [49-50].
MAJOR DISCUSSION POINT
Economic impact of DPI‑AI synergy
AGREED WITH
Speaker 1
Speaker 1
2 arguments · 135 words per minute · 132 words · 58 seconds
Argument 1
The upcoming panel will examine how DPI architecture can mitigate new risks as AI becomes embedded in core systems (Speaker 1)
EXPLANATION
Speaker 1 introduces the panel discussion, asking how the design of digital public infrastructure can address emerging risks when AI is integrated into foundational systems. The focus is on risk mitigation and resilience.
EVIDENCE
He posed the question, “How can DPI architecture mitigate new risks that emerge as AI becomes embedded in foundational digital systems?” during the session introduction [97-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel discussions raise concerns about integrity, privacy, and emerging risks in AI-enabled DPI, calling for governance and mitigation strategies [S13]; broader AI governance frameworks across the tech stack are referenced [S8].
MAJOR DISCUSSION POINT
Risk mitigation in AI‑enabled DPI
DISAGREED WITH
Pramod Varma
Argument 2
The panel will explore how AI‑infused DPI can spawn new products, services, and market ecosystems (Speaker 1)
EXPLANATION
Speaker 1 outlines that the panel will discuss the opportunities for novel offerings and ecosystem development that arise when AI is integrated with digital public infrastructure. This highlights potential economic and innovation benefits.
EVIDENCE
He asked whether integrating AI into DPI could enable the development of new products, services, and market ecosystems as part of the session agenda [96-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Commentary highlights DPI’s role in connecting technologies and enabling AI-driven service innovation, positioning AI as an optimization layer atop existing digital infrastructure [S12][S14].
MAJOR DISCUSSION POINT
Innovation opportunities from AI‑DPI integration
Agreements
Agreement Points
AI integration with DPI can unlock new products, services, and market ecosystems
Speakers: Pramod Varma, Speaker 1
Combining DPI with AI is predicted to deliver 10×–50× higher economic growth for countries that have such infrastructure (Pramod Varma) The API‑centric, programmable nature of DPI enables composable AI applications (Pramod Varma) The upcoming panel will explore how AI‑infused DPI can spawn new products, services and market ecosystems (Speaker 1)
Both speakers highlight that layering AI on top of India’s digital public infrastructure creates opportunities for novel products, services and economic growth, thanks to the programmable, API-based nature of DPI and its large-scale data assets [32-44][45][49-50][96-99].
POLICY CONTEXT (KNOWLEDGE BASE)
This view echoes IGF 2023’s description of DPI as a foundation for new digital services and the recognized potential of AI-enabled interoperable networks to create market ecosystems [S25][S26][S29].
Risk mitigation and governance are essential when AI is embedded in core digital public systems
Speakers: Pramod Varma, Speaker 1
The upcoming panel will examine how DPI architecture can mitigate new risks as AI becomes embedded in foundational digital systems (Speaker 1) The DPDP Act gives individuals control over their data, turning personal data into a trusted resource for AI (Pramod Varma)
Both speakers agree that integrating AI into DPI raises new risks that must be addressed through architecture and legal safeguards such as the DPDP Act, which ensures data ownership and privacy [46-48][97-99].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for governance mirrors the AI governance frameworks advocated by OECD and UN bodies and the risk-based approaches discussed for critical sectors such as finance and public services [S27][S30][S40][S41][S42].
Similar Viewpoints
Both see the DPI‑AI combination as a catalyst for significant economic and innovation gains, envisioning new market ecosystems built on programmable public APIs [49-50][96-99].
Speakers: Pramod Varma, Speaker 1
Combining DPI with AI is predicted to deliver 10×–50× higher economic growth for countries that have such infrastructure (Pramod Varma) The upcoming panel will explore how AI‑infused DPI can spawn new products, services and market ecosystems (Speaker 1)
Both stress that governance, privacy and risk‑mitigation mechanisms are crucial for trustworthy AI deployment on DPI platforms [46-48][97-99].
Speakers: Pramod Varma, Speaker 1
The upcoming panel will examine how DPI architecture can mitigate new risks as AI becomes embedded in foundational digital systems (Speaker 1) The DPDP Act gives individuals control over their data, turning personal data into a trusted resource for AI (Pramod Varma)
Unexpected Consensus
Alignment on the need for strong data‑privacy/legal frameworks despite Pramod’s largely optimistic narrative
Speakers: Pramod Varma, Speaker 1
The upcoming panel will examine how DPI architecture can mitigate new risks as AI becomes embedded in foundational digital systems (Speaker 1) The DPDP Act gives individuals control over their data, turning personal data into a trusted resource for AI (Pramod Varma)
While Pramod’s keynote focuses on opportunities and growth, he still foregrounds the DPDP Act as a cornerstone for safe AI use, unexpectedly matching Speaker 1’s explicit call for risk mitigation and governance [46-48][97-99].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for robust privacy and legal safeguards reflect the consensus on comprehensive data-governance and sovereignty articulated in multiple IGF sessions and UN-aligned discussions [S34][S35][S36][S37][S38].
Overall Assessment

The speakers converge on two main fronts: (1) the strategic value of coupling AI with India’s API‑driven digital public infrastructure to spur innovation and economic growth, and (2) the necessity of embedding risk‑mitigation, privacy and governance mechanisms (e.g., the DPDP Act) when AI becomes part of core public services.

Moderate to high consensus – both speakers explicitly acknowledge the opportunities of AI‑DPI synergy and the parallel need for safeguards, suggesting a balanced agenda for the upcoming panel that will likely emphasize both innovation potential and responsible deployment.

Differences
Different Viewpoints
Optimistic economic impact of AI‑DPI synergy versus concern about emerging risks
Speakers: Pramod Varma, Speaker 1
Combining DPI with AI is predicted to deliver 10×‑50× higher economic growth for countries that have such infrastructure (Pramod Varma) The upcoming panel will examine how DPI architecture can mitigate new risks as AI becomes embedded in core systems (Speaker 1)
Pramod stresses a bold forecast that AI layered on India’s digital public infrastructure will generate ten to fifty times greater economic growth, portraying the integration as a largely positive, transformative force [49-50]. In contrast, Speaker 1 frames the discussion around risk mitigation, asking how DPI design can address new hazards that arise when AI is embedded in foundational digital systems, signalling a more cautious stance [97-99].
POLICY CONTEXT (KNOWLEDGE BASE)
While reports such as the World Bank’s highlight AI-DPI’s growth potential, they also flag emerging risks, mirroring the tension between optimism and caution noted in recent policy dialogues [S29][S31][S42].
Unexpected Differences
Data ownership and privacy versus implicit need for further safeguards
Speakers: Pramod Varma, Speaker 1
The DPDP Act gives individuals control over their data, turning personal data into a trusted resource for AI (Pramod Varma) How can DPI architecture mitigate new risks as AI becomes embedded in foundational digital systems? (Speaker 1)
Pramod presents the DPDP Act as a solution that already ensures trustworthy, individual-controlled data for AI, suggesting the privacy and governance challenge is largely resolved [46-48]. Speaker 1, however, raises a broader risk-mitigation question that implicitly includes privacy and governance concerns, indicating that the data-ownership issue may still require additional safeguards beyond the Act, an angle not anticipated by Pramod’s confident claim.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over data ownership and privacy aligns with longstanding discussions on big-data governance and the call for stronger safeguards in international regulatory proposals [S37][S38][S39][S33][S34][S35].
Overall Assessment

The discussion reveals a primary tension between Pramod Varma’s highly optimistic view of AI‑DPI integration—emphasizing massive economic gains, democratization, and already‑solved data‑ownership issues—and Speaker 1’s more cautious framing that foregrounds risk mitigation, governance, and the need to address emerging security and privacy challenges. While both agree on the strategic importance of coupling AI with digital public infrastructure, they diverge on the balance between opportunity and risk.

Moderate to high disagreement: the speakers are aligned on the overarching goal but differ substantially on the assessment of risks and the adequacy of existing safeguards, which could shape policy priorities and implementation strategies for AI‑enabled DPI.

Partial Agreements
Both speakers share the goal of leveraging AI together with digital public infrastructure to generate broad societal benefits. Pramod highlights the democratization of AI through wide participation and diffusion, while Speaker 1 emphasizes the potential of AI‑DPI integration to create new benefits, but the former focuses on optimism and diffusion, whereas the latter foregrounds a structured exploration of opportunities and risks [14-15][96-98].
Speakers: Pramod Varma, Speaker 1
India can definitely demonstrate what it means to democratize and diffuse AI (Pramod Varma) The session will explore how integrating AI into DPI can unlock new benefits at scale (Speaker 1)
Takeaways
Key takeaways
AI is moving from an elite, exclusive domain to a democratized, widely participated ecosystem in India.
Strong political backing, especially from the Prime Minister, is accelerating AI diffusion.
India’s Digital Public Infrastructure (DPI) – Aadhaar, eSign, DigiLocker, UPI, GST, FastTag, etc. – is API-centric, machine-readable, cryptographically signed and thus provides a programmable foundation for AI applications.
The DPDP Act gives individuals ownership and control over their data, turning personal data into a trusted resource for AI development.
Verifiable, large-scale data trails generated by DPI constitute a “goldmine” for training and deploying AI models.
The combination of programmability, composability, and abundant data positions India to achieve 10×–50× higher economic growth compared with nations lacking such infrastructure.
A vibrant, risk-tolerant entrepreneurial ecosystem is rapidly scaling AI-driven startups (from ~1,000 in 2016 to ~100,000 today, with a target of 1 million by 2035).
The upcoming panel will explore how integrating AI into DPI can unlock new products, services, and market ecosystems while addressing emerging risks.
Resolutions and action items
None identified
Unresolved issues
Specific mechanisms for DPI architecture to mitigate new risks introduced by embedding AI in foundational digital systems.
Detailed governance and regulatory frameworks needed to balance AI innovation with privacy and security under the DPDP Act.
How to ensure equitable access to AI benefits across diverse population segments, especially marginalized groups.
Practical pathways for scaling AI-infused DPI services while maintaining data integrity and trust.
Metrics and benchmarks to evaluate the predicted 10×–50× economic impact of AI-enabled DPI.
Suggested compromises
None identified
Thought Provoking Comments
India can definitely demonstrate what it means to democratize and diffuse AI.
Highlights a shift from elite, exclusive AI gatherings to inclusive participation, framing AI diffusion as a democratic process rather than a niche technology.
Sets the tone for the entire keynote, positioning India’s AI strategy as a people‑first approach and prompting the audience to consider inclusivity as a core metric for AI success.
Speaker: Pramod Varma
AI spans much beyond LLMs; the real advantage lies in the digital public infrastructure (DPI) we built—Aadhaar, eSign, DigiLocker, UPI—each API‑based, cryptographically signed, and generating a verifiable data trail.
Broadens the conversation from the hype around large language models to the foundational, programmable layers that enable AI at scale, emphasizing data quality, provenance, and interoperability.
Redirects the discussion from model‑centric debates to the importance of underlying infrastructure, leading listeners to appreciate the systemic assets that make AI deployment feasible at national scale.
Speaker: Pramod Varma
Our DPDP Act gives individuals the right to control their own data, ensuring that data belongs to the people and small businesses.
Introduces a nuanced view of data sovereignty that balances governmental data collection with individual privacy rights, a rare perspective in policy‑driven AI talks.
Adds a regulatory dimension to the conversation, prompting the audience to think about how privacy legislation can coexist with, and even empower, AI innovation.
Speaker: Pramod Varma
Countries that have invested in DPI and layered AI on top of it could be 10x to 50x more economically successful than those without such infrastructure.
Provides a bold, quantifiable prediction that links infrastructure investment directly to macro‑economic outcomes, challenging listeners to reassess the ROI of digital public goods.
Creates a turning point by moving from descriptive history to forward‑looking economic forecasting, encouraging policymakers and entrepreneurs to view DPI as a strategic economic lever.
Speaker: Pramod Varma
We went from 1,000 companies in 2016 to 100,000 startups today, and we predict 1 million startups by 2035; attempting bold solutions matters even if not all succeed.
Emphasizes the scale of entrepreneurial activity as a catalyst for AI diffusion, while acknowledging failure as an essential part of innovation ecosystems.
Shifts the narrative toward the role of grassroots entrepreneurship, inspiring the upcoming panel to explore how startups can leverage DPI and AI to solve India’s myriad problems.
Speaker: Pramod Varma
The combinatorial power of DPI and AI can create exponential impact; integrating both can unlock new products, services, and market ecosystems.
Synthesizes earlier points into a concise vision of synergy, framing the integration as a multiplier rather than a simple addition.
Serves as a bridge to the panel discussion, explicitly framing the upcoming conversation around opportunities, risks, and ecosystem development arising from DPI‑AI integration.
Speaker: Pramod Varma
Overall Assessment

Pramod Varma’s keynote strategically layered several high‑impact ideas—democratization of AI, the primacy of programmable digital public infrastructure, data ownership through the DPDP Act, bold economic forecasts, and the explosive growth of entrepreneurship. Each comment acted as a pivot, moving the audience from a narrow focus on large language models to a holistic view of how India’s unique DPI, regulatory framework, and entrepreneurial vigor can together generate exponential economic and societal benefits. These insights set the agenda for the subsequent panel, steering the conversation toward concrete opportunities, systemic risks, and the ecosystem dynamics of AI‑enabled public services.

Follow-up Questions
How can DPI architecture mitigate new risks that emerge as AI becomes embedded in foundational digital systems?
Sets the agenda for the panel to address security, governance, and resilience challenges of embedding AI into core digital infrastructure.
Speaker: Speaker 1
What are the opportunities and risks that emerge as a result of integrating AI into DPI?
Seeks to explore both positive outcomes (innovation, efficiency) and potential downsides (bias, privacy, systemic failures) of AI‑enabled public services.
Speaker: Speaker 1
Could integrating AI into DPI enable the development of new products, services and market ecosystems?
Aims to identify novel business models and economic value that could arise from AI‑augmented digital public infrastructure.
Speaker: Speaker 1
What empirical evidence can validate the claim that countries investing in DPI combined with AI will achieve 10‑50× higher economic growth compared to those without such infrastructure?
Calls for rigorous cross‑country analysis to test the bold prediction about DPI+AI driving outsized economic performance.
Speaker: Pramod Varma
How can the massive, machine‑readable, cryptographically protected GST invoice data be leveraged safely for AI training and innovation?
Highlights a unique data asset that could fuel AI applications while raising questions about privacy, data quality, and governance.
Speaker: Pramod Varma
What will be the impact of India’s DPDP Act (privacy bill) on data ownership, accessibility for AI development, and individual rights?
Points to the need to study how the new privacy framework balances user control with the data needs of AI systems.
Speaker: Pramod Varma
What factors will drive the projected growth to 1 million Indian startups by 2035, and how will AI diffusion influence startup success rates?
Suggests research into entrepreneurship dynamics, funding ecosystems, and the role of AI in scaling new ventures.
Speaker: Pramod Varma
How do India’s API‑first digital public infrastructure components (identity, eSign, DigiLocker, UPI, etc.) enable or constrain the development of AI‑powered services?
Calls for technical and policy analysis of the programmability and composability of existing APIs as foundations for AI integration.
Speaker: Pramod Varma
What comparative lessons can be drawn from other nations’ investments in DPI and AI to inform India’s strategy and global competitiveness?
Encourages international benchmarking to understand best practices and avoid pitfalls in building AI‑enhanced public infrastructure.
Speaker: Pramod Varma

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From Innovation to Impact: Bringing AI to the Public


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel discussion focused on how artificial intelligence can reshape India’s economy and social institutions, with Vijay Shekhar Sharma outlining the opportunities and challenges while Harinder Takhar raised follow-up questions [1][23]. Sharma argued that AI should be viewed as a growth catalyst rather than a job-killer, noting that AI-enhanced productivity could enable a single shopkeeper to run multiple outlets and lift GDP per capita [9-11]. He highlighted India’s current $2.5–3.5 trillion economy and projected that an additional $2 trillion could be generated over the next decade, creating a “bull case” for AI-driven expansion [12]. Responding to queries about building indigenous capabilities, Sharma asserted that India must develop its own foundation models in both English and Hindi to move up the value chain beyond the services-oriented IT sector [34-38]. He emphasized that home-grown models are essential to embed Indian cultural knowledge and mitigate biases that arise from training on predominantly Western internet data [199-205].


The conversation then turned to concrete vertical use-cases, with Sharma noting that AI can eliminate hidden biases in loan decisions and provide unbiased credit recommendations to low-income users such as auto-rickshaw drivers [99-103][108-115]. He illustrated personal health monitoring using AI-driven wearables and chat-based advice, showing how AI can augment medical decision-making without replacing doctors [170-184]. When asked whether banks or schools would become obsolete, Sharma replied that their core functions, credit provision and social learning, remain indispensable, though AI will reshape front-end interactions and make services more accessible [138-146][151-158].


On broader societal impact, he claimed AI is an inclusive technology that can narrow the rich-poor gap by giving anyone a powerful decision-making tool, while acknowledging that new risks will emerge and must be managed [339-342][353-356]. He emphasized that AI distribution will follow a “terminal” model, with low-cost devices and cloud-based agents delivering capabilities to small merchants and rural users [389-401][365-371]. The panel also discussed the future role of AI agents, suggesting that agents will communicate with each other and act on behalf of users across services such as ride-hailing [227-236]. In conclusion, the participants agreed that India’s AI strategy should combine indigenous model development, targeted vertical applications, and responsible deployment to harness AI as a catalyst for inclusive economic growth [34-38][99-103][389-401].


Keypoints


Major discussion points


India must develop its own foundation and specialized AI models – Sharma argues that building a home-grown foundation model is essential for moving India out of a “services-only” economy and proving Indian capability on the global stage. He cites the launch of Sarvam’s model as a start and calls for many more such models, including retrained, bias-filtered versions for specific Indian contexts. [34-36][46-49][53-56]


AI will dramatically boost productivity and drive economic growth – By adopting AI-first products, even a small shopkeeper can run multiple stores, raising per-person productivity and GDP. Specific use-cases are highlighted in finance (bias-free loan decisions, personalized wealth advice), agriculture, healthcare, and micro-merchant operations. [9-12][99-105][118-124]


Addressing bias and the need for Indian-centric data – Sharma stresses that existing global models inherit biases from the predominantly Western internet corpus. An Indian-built model can incorporate local knowledge, cultural nuances, and correct historical biases, making the AI more trustworthy for Indian users. [50-56][199-205][211-214]


Traditional institutions (banks, schools) will evolve, not disappear – He explains that banks will continue to provide core services like credit and safe deposits, but their delivery will shift to AI-driven interfaces and agents. Similarly, schools will retain their social and experiential value while embracing AI-enhanced, non-iconic learning tools. [132-146][151-158][221-229]


AI as a potential leveler of inequality, with manageable risks – The speaker views AI as an inclusive technology that can reduce the gap between rich and poor by offering native-language, low-skill access. He acknowledges risks (e.g., over-reliance, safety of payments) but frames them as comparable to everyday risks that can be mitigated through design and regulation. [339-342][353-357][389-401]


Overall purpose / goal


The discussion aims to persuade Indian stakeholders (entrepreneurs, policymakers, and technologists) that the country should urgently invest in building its own AI foundation models and ecosystem. By doing so, India can shift up the value chain, harness AI to accelerate productivity across sectors, ensure culturally relevant and unbiased AI, and prepare its workforce and institutions for an AI-driven future.


Overall tone


The conversation is largely optimistic and visionary, with Sharma delivering energetic, confidence-building statements about AI’s transformative power. When addressing challenges (bias, data scarcity, potential job displacement), the tone becomes cautiously pragmatic, acknowledging risks but emphasizing mitigation and the inevitability of change. The shift from broad enthusiasm to nuanced reflection occurs around the middle of the dialogue (e.g., when discussing bias and the role of banks/schools). Throughout, the tone remains constructive and forward-looking.


Speakers

Vijay Shekhar Sharma – Founder/CEO of Paytm (implied) – Expertise: Digital payments, fintech, AI for public sector and national AI strategy. [S1][S2]


Harinder Takhar – (role not specified in the provided sources) – Expertise: AI applications, cloud infrastructure, technology policy. [S3][S4][S5]


Audience – Various participants (e.g., Yuv from Senegal, Professor Charu, Dr. Nazar) – No specific title; represent the general audience members asking questions. [S6][S7][S8]


Additional speakers:


– None


Full session report: comprehensive analysis and detailed insights

The session opened with Vijay Shekhar Sharma positioning artificial intelligence not as a threat to employment but as a catalyst for India’s economic ascent. He argued that AI-first products can lift individual productivity so dramatically that a single shopkeeper could operate several outlets, expanding GDP per capita and driving the country’s growth from its current $2.5-3.5 trillion base toward an additional $2 trillion over the next decade, an optimistic outlook for the Indian economy [9-12].
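The growth claim above implies a fairly modest compound rate. As a back-of-the-envelope check (using only the illustrative figures from the talk: a $3.5 trillion base gaining $2 trillion over ten years), the implied compound annual growth rate can be computed directly:

```python
# Back-of-the-envelope CAGR implied by growing a $3.5T economy to $5.5T
# (an additional $2T) over 10 years, figures taken from the talk.
def implied_cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate that takes `start` to `end` in `years`."""
    return (end / start) ** (1 / years) - 1

rate = implied_cagr(3.5, 5.5, 10)
print(f"implied CAGR: {rate:.1%}")  # roughly 4.6% per year
```

A sustained ~4.6% real growth rate is ambitious but within the range India has historically achieved, which is consistent with the speaker framing it as a "bull case" rather than a fantasy.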


Sharma then made the case that India must build its own foundation models in both English and Hindi to break out of a services-only paradigm and climb the value chain. He praised Sarvam’s recent launch as a proof-of-concept and called for dozens of home-grown models, including specialised, bias-filtered versions, to demonstrate that Indian engineers can create world-class AI and to embed Indian cultural knowledge that global models lack [34-38][46-49][53-56][199-205][211-214].


When Harinder Takhar asked whether a ₹10,000-crore fund is a prerequisite for such endeavours, Vijay Shekhar Sharma downplayed the capital barrier, insisting that talent, a viable business model, and cost-effective training matter more than the size of the purse [57-66][60-61]. This tension highlighted a broader debate about the scale of investment required for indigenous AI development.


The discussion moved to concrete vertical applications. In finance, Sharma illustrated how AI can detect and remove hidden biases in loan approvals, offering unbiased credit decisions to low-income users such as auto-rickshaw drivers and providing personalised wealth-building advice (e.g., suggesting fixed deposits or sovereign gold funds) that would otherwise be unavailable to them [98-105][108-115][118-124]. In agriculture, audience members pointed to AI-driven weather and market forecasts that could help farmers choose crops and avoid spoilage [85-89]. In healthcare, Sharma shared a personal example where a language model suggested a better timing for his mother’s medication, a recommendation that was later endorsed by her doctor, demonstrating AI’s potential to augment medical decision-making without replacing clinicians [170-184].


He emphasized that Indian efforts should prioritize domain-specific LLMs of a few-billion-token size (e.g., 4-5 billion or 20 billion tokens) rather than a single massive 200-billion-token model, arguing that smaller vertical-problem models can be trained more efficiently and address local needs directly [227-236].


Addressing concerns that banks or schools might become obsolete, Sharma clarified that their core functions (custodial safety, credit provision, and the social experience of learning) will persist. AI will reshape front-end interactions, delivering services through chatbots or agents rather than physical branches or traditional classrooms, but the underlying institutions will remain essential [138-146][151-158][132-146].


An audience member asked whether stock-brokerage platforms will become fully AI-native; Vijay replied that AI agents will become a primary interface but the underlying brokerage services will still exist, reinforcing the shift toward “agent-first” interfaces without eliminating the sector itself [255-259].


A recurring theme was the shift toward “agent-first” interfaces. Sharma described a future where conversational agents communicate directly with one another (e.g., an AI agent requesting an Uber ride without human login), eliminating icon-centric UI and enabling token-based payments. He urged developers to design applications around dialogue rather than static icons, signalling a fundamental redesign of digital ecosystems [221-229][255-259].
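The "agent-first" pattern described above can be made concrete with a toy sketch: a personal agent interprets a user utterance and messages a service agent directly, with no icon-based UI in the loop. All names here (PersonalAgent, RideServiceAgent, RideRequest) are hypothetical illustrations; no real Uber or Paytm API is involved.

```python
# Toy sketch of an "agent-first" interaction: a personal agent talks
# directly to a service agent on the user's behalf. All class names are
# hypothetical; no real ride-hailing API is used.
from dataclasses import dataclass

@dataclass
class RideRequest:
    pickup: str
    drop: str

class RideServiceAgent:
    """Stands in for a provider-side agent that would quote and dispatch."""
    def handle(self, req: RideRequest) -> str:
        return f"Ride confirmed: {req.pickup} -> {req.drop}"

class PersonalAgent:
    """Stands in for the user's conversational agent."""
    def __init__(self, service: RideServiceAgent):
        self.service = service

    def on_user_intent(self, utterance: str) -> str:
        # Crude keyword matching stands in for an LLM's intent parsing.
        if "leaving" in utterance.lower():
            return self.service.handle(RideRequest("Home", "Airport"))
        return "No action taken."

agent = PersonalAgent(RideServiceAgent())
print(agent.on_user_intent("I have to go now and I am leaving"))
# -> Ride confirmed: Home -> Airport
```

The design point mirrors the talk: the user never logs into an app or taps an icon; the dialogue itself triggers an agent-to-agent exchange.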


On inclusivity, Sharma portrayed AI as an inherently democratic technology: native-language support and low entry barriers allow anyone to wield a “super-power” that narrows the rich-poor gap. He cautioned, however, that existing power structures may seek to preserve exclusivity, potentially limiting the egalitarian impact of AI [353-357][365-371].


Education was another focal point. When asked how a tier-3/4 student can succeed in AI, Vijay advised focusing on curiosity, using AI-driven questions to augment learning, and treating AI as a productivity tool rather than requiring deep programming expertise. He encouraged students from tier-3/4 regions to leverage cloud-based agents to satisfy curiosity and enhance domain expertise, thereby gaining a competitive edge regardless of formal technical training [261-270][285-293][473-485].


Risk mitigation and governance were also discussed. While AI can reduce bias, Sharma warned that models inherit the biases of the data they are trained on, especially when that data is dominated by Western sources. He called for Indian-specific training corpora and continuous retraining to ensure cultural relevance and fairness, and noted that many regulators operate sandbox programmes that allow controlled data sharing for sector-specific model development [50-56][199-205][315-322][323-327].
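The mechanism Sharma describes, a model coming to "believe" whichever claim its corpus repeats most often, can be illustrated with a deliberately simplified toy. This mimics the frequency-weighting intuition from the talk, not actual LLM training:

```python
# Toy illustration of corpus-frequency bias: a "model" that believes
# whichever claim its training corpus repeats most often. This mimics
# the mechanism described in the talk, not real LLM training.
from collections import Counter

def majority_belief(corpus: list[str]) -> str:
    """Return the most frequently repeated claim in the corpus."""
    return Counter(corpus).most_common(1)[0][0]

# A Western-dominated web corpus repeats claim-A more often.
web_corpus = ["claim-A"] * 7 + ["claim-B"] * 3
print(majority_belief(web_corpus))  # claim-A

# Adding under-represented local sources can flip the learned belief.
augmented = web_corpus + ["claim-B"] * 5
print(majority_belief(augmented))  # claim-B
```

The second call shows why the talk argues for India-specific corpora: augmenting training data with under-represented local knowledge shifts what the model treats as the default answer.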


An audience member observed that large language models tend to be overly eager to please users; Vijay explained that this behaviour stems from instruction-tuning and prompt engineering and can be managed through careful model design [389-401].


To ensure widespread diffusion, Sharma proposed a “terminal” model akin to the smartphone rollout: any internet-connected device could access cloud-based AI services, while a low-cost “AI sound box” – a small, always-connected speaker that streams cloud-based models – would bring sophisticated capabilities to micro-merchants and rural entrepreneurs. This approach aims to democratise AI as a public good, mirroring Paytm’s earlier fintech diffusion strategy [389-401][365-371].


He warned that AI should never be granted unrestricted authority to initiate payments from a user’s bank account, emphasizing the need for safeguards against full autonomous control over payments [425-433].


The discussion highlighted several recurring themes: the need for multiple indigenous foundation models, the importance of domain-specific, smaller-scale LLMs, the shift toward agent-centric interfaces, and the preservation of core institutional roles in banking and education. Participants also underscored the potential of AI to boost productivity and economic growth while recognizing challenges around funding, inequality, and governance.


Session transcript: complete transcript of the session
Vijay Shekhar Sharma

that can take you farther and farther ahead in the world economy. So I see it not as job reduction; I see it as an opportunity for India to become a globally AI-dominant nation. That does not mean it is easy to do. But at the same time, most of us have to have the will, and we should commit to leveraging AI first. For example, even a small shopkeeper today cannot imagine not having a smartphone. They get payments through Paytm and run much of their life on it. They assume there is a smartphone, which is fair. Now, if you believe that entrepreneurs will build business models where, as a customer, you use AI-first products, the productivity will be dramatically higher.

So a person who was running one shop can run multiple shops, and GDP growth will automatically come at a different scale because productivity per person will be higher. That is the thing India has been lacking. The question is whether India will be that bigger market or not, and I want to tell you this is a very lucky moment we are all sitting in. It is tough to grow from 20 billion to 25 billion; that is 25% growth, and 25% growth is very tough. But luckily, India is right now a 2.5, 3, 3.5 trillion dollar economy, and I am comfortable saying that 2 trillion dollars can be added, compounding, over the next 7 to 10 years. So if you are sitting in this economy in 2026 and you believe the next 2 trillion is coming in the next 10 years, I don't have to tell you that out of those 2 trillion dollars, many of you can build million-dollar or billion-dollar entities or capture business benefit for yourself. That is why India is the bull-case scenario, and AI on top of it is the absolute bull scenario. I don't think there is any challenge to that. Someone asked me this question in another audience, so I said: in the Ramcharitmanas it is said that life may go, but one's word does not go; similarly, the job may go, but AI does not go. Well, jobs will not really go, because people will have methods to discover themselves and do more productive work.

So, in my opinion, there is the traditional kind of work (I am using "job" metaphorically here) versus the AI kind of work. And that shift will happen. I completely agree, and I want to express it in my own way. To build any system, to do any job, inside a company or anywhere else, there is always a bottleneck somewhere. And increasingly, you will figure out that that particular bottleneck gets eliminated. What that basically means is that the road has become wider; the problem has shifted somewhere else. Problems do not disappear, they just move to another place.

The roadblocks on the Outer Ring Road tell us the same thing: earlier the jam used to form at Nehru Place, now it forms near the airport.

Harinder Takhar

Very good, I understood, and I hope everyone else did too. Okay. So there was one more related question: the opportunity is huge. And when we look at the exhibition hall, someone is making a chip, someone is making a data center, someone is making a foundation model. So should we all make foundation models in India? Should we make our own chips? Should we make our own applications?

Vijay Shekhar Sharma

Very good. So first I want to say one thing: we will speak in English and Hindi, okay? And you should use AI to live-translate. No, it's okay; I understand, and we will do it. So in my opinion, India has to build a foundation model. This is a no-compromise statement. Not because we can or cannot make a better financial foundation model, but because we as a country have to move on from a services culture. There is nothing wrong with the IT services business, nothing wrong with BPO and business services, but it is an obligation on us to move up the value chain. It's like growing up in life.

Otherwise you do not move up the value chain; you just continuously make something for someone else. Most founders and capable technology people in Silicon Valley include a significant mix of Indian people. So as a people, we are able and capable. So can't our country allocate enough resources to those capable individuals so that they can make a foundation model? We have to do it and we have to prove it. First of all, I applaud Sarvam for launching what I would call an awesome foundation model from India. And I want many of us to do it, so that this race does not look like there was none or only one.

There should be tens of foundation models to prove to the world that Indians can do it and Indians are doing it in India. That is why we need a foundation model. And now to the question of whether it is about ego or about use cases: the advantage of an India-made, sovereign model is about the nuances and biases that creep in. What is a foundation model? It is an aggregation of the knowledge you feed it. Now, obviously, each of us has our own firm understanding of what is correct or not. Many of us would know this example: giving a small kid a banana or curd at night is believed to give them a cold or something similar the next day. But when you go on the internet, the internet is divided half and half; it both believes it and disbelieves it, and you are totally confused. Then when you go to Ayurveda, it says something else, following the vata, pitta, and kapha kind of philosophy. So the answer is not as straightforward as we believe; it has an answer framed one way there, versus a different answer in Western medicine. Now, I'm not saying Western medicine is right or wrong. I'm saying that, in my opinion, I need the culture and the knowledge that we have inherited to be extended to the next generation. And I want that to happen through the intelligence that we will query, which means it can only be done by somebody who is building for it.

If we don't build for it, all our compounded historical knowledge will be missing in the next generation; instead of adding on top of it, we will not be able to take it further. So it is a case for India to build not just a strong foundation model but also retrained models: you remove the biases it has, you trim it to the ability you want, and you make it for the purpose you want. And there, I believe, India's opportunity is even bigger than just the foundation model.

Harinder Takhar

So I have a two-part follow-up. I believe this is probably just a way to discourage people, though it may be more than that. We often hear, mostly on Twitter or from the large foundation-model companies, that if you don't have ₹10,000 crore, why are you even in this space? What is your viewpoint on that?

Vijay Shekhar Sharma

I mean, literally, at Paytm both of us put more than ₹10,000 crore, in fact ₹25,000 crore, on the table to make this humble QR code which is everywhere now. So there is ambition and there is commitment. So whether you put a billion or two or four or five is not the question. I think the question should be whether there will be enough of a business model for it. Will we have the skill to market it? And remember, the world right now is not just made in America: there is a European model and there is a China set of models. We need our models. And the proof point that there is now, I would say, enough knowledge of model creation is that it is not literally about a billion dollars or two or ten.

It is also about the kind of smartness you can apply to model creation: RL and so on and so forth. There is a lot of chemistry and math, or, to put it a better way, a lot of physics-and-gravity-handling capability that has now been discovered, so that you can build a model at much lower cost. Another question a couple of us were asked while I was roaming around was: why don't you build an LLM? So actually, now I should ask you: what kind of LLM are we, or you, building when we say there should be a model made in India?

Audience

Yeah, this was going to be my question to him, but it's always fun to answer. I think models are just not what we know them to be today: question-and-answer knowledge-sharing sessions. There are models that have capabilities for reasoning, for problem solving, for building agents, so you want to give them agency, the ability to act, and so on. To answer your question of whether we should build or not: I believe there is much more than a question-and-answer chatbot model, and we believe we should build models that actually solve problems, that actually have the committed, confident agency to take actions on behalf of others.

Vijay Shekhar Sharma

Okay, so you're saying, and this is for everybody to know, we are making models, but they are models for a vertical problem, not for a horizontal scope. So instead of talking about a 200-billion-token model, we would be making a four- or five-billion-token model, or a 20-billion-token one. Just in case: we should open the questions to the audience as well, because some of you may have them and we will not have time after this line item. So while we are talking, you can raise your hand and I'll allow it. Okay, now you are literally asking: what are good vertical problems to solve?

Audience

Oh, it's easy. Imagine the classic triangle of Maslow's hierarchy. The first is the financial foundation: you need to solve for financial services, so there are risk and fraud-control models that some of us are building, and we have built them. Then you go on top of it to the food chain, which is agri and food processing. If you want to produce high-quality food outcomes at higher yield, there is a tremendous amount of data that gets generated, including visual data, and farmers need access to it and its nuances. We just heard how the Prime Minister was able to ask a question: maybe a cow is not able to say "I am ill", but you may have the ability to discover that the cow is ill.

Maybe a tree or a plant may not say "I need this mineral or nutrient", and maybe you can read it. So there is a tremendous amount of vertical opportunity: I just gave you the examples of finance, agri, and husbandry; you can go toward industrial problems, or toward the problem of pollution, which we are sitting in, in this country and this city. There are many vertical solutions where we can build for a specific use case.

Vijay Shekhar Sharma

So let me help you think of it another way. Mr. Benz invented an engine, and that engine then percolated out to be made by many. LLMs are like an engine; they are literally called intelligence engines, just in case. Now, if you had access to an engine, could you make an engine? You should, so that you can build your own vehicle use case: you can make a small car, a big car, a truck, a trailer; there are so many use cases of transport. Think of it from that viewpoint. Then you might say, well, do I need to make a small car in India, because I heard a small car is being made elsewhere? No, you will need to make it, I can confirm that. And just as in the automobile industry making an engine is an art, I would say making the engine equivalent, the LLM, is an art. Yes, but many of us will have it, and many of us are making them.

Audience

Yeah, finance is one of the few things that is vertical but is also horizontal: every industry requires and has a financial department. Food and finance, both.

Vijay Shekhar Sharma

Yes, so all of humanity needs these foundations, so you are into both vertical and horizontal at one go.

Audience

Yeah, that’s right.

Vijay Shekhar Sharma

So, perfect. Now, a question for you. The classic financial use case: considering what we just heard, what kind of use cases do you believe LLMs will solve for the problems you see in the financial industry, and what do you believe could fundamentally be solved for the wider financial industry that we may or may not have solved yet?

Audience

I think that the best favor or the best value we can add is to remove biases in decision making that we already see in our financial system, which are actually a complete antithesis to the whole inclusion aim that we all have.

Vijay Shekhar Sharma

Beautiful. Give me an example of it. So, a very good starting point would be detecting whether a particular transaction should go through or not. There is a lot of bias today in the rules that we set, in the checks and balances that we set in allowing a transaction through. And you can invariably say that if there is a loan officer deciding whether you should get a loan, a lot of bias creeps into it. We are not able to measure that today, but when you ask a machine to make that decision, you actually remove those biases. The person who may not present themselves very nicely is unlikely to get a loan, but the machine doesn't care about how you present yourself.

So you are saying that when the financial industry or system makes a decision, there may be known and unknown biases, and the unknown biases can be removed even if you want to keep the known biases. The good thing is that the machine actually helps us identify the biases that used to exist, which we were very used to. And speaking of the financial industry and going beyond, one of the things I want all of us to know: classic financial inclusion is not just about payments inclusion or bank-account inclusion, which I would say India has solved well. It will continue to grow toward the next requirement: most of us need access to credit, access to insurance, access to classic wealth solutions.

Right now, wealth management in India exists only when you have a crore or more, somewhere around that amount. But a normal auto-rickshaw driver who has 20,000, 50,000, 2 lakh, or 5 lakh in savings also deserves a financial wealth model. And that person, poor fellow, may by hearsay invest or put their capital at risk. So access to financial services can become dramatically more scalable once you add the power of AI to it. For example, take the auto-rickshaw driver I just described. Imagine that person has 2 or 3 lakh rupees. You can suggest: based on your risk and time horizon, you should put it in an FD, in gold or a sovereign gold fund, or in, let's say, some index fund.

Because you're talking about money 15 years out, for your house, your family, a daughter's or kids' education, or something like that. Those things cannot be told to them by a commoner around them. And then you can make it available in their language, answering questions the way ChatGPT today answers them, in the language people speak and the language they want the answer in. So in my opinion, AI will enable financial inclusion at the next level, because it is not just about access to a smartphone: the questions and answers you get from the smartphone can become far more native and continuously available. So let's say this guy is busy the whole day; he can start in the evening, while he is waiting for customers, talking to it, and then take a decision.

Now at least he has a second opinion and can act on it. Similarly in healthcare there is a huge amount of capability, because many of us literally just need someone to say: you should check whether there are more symptoms or not, and what the blood-test report says. So this dramatic inclusion in education, finance, and healthcare will have such a catapulting impact on society.

Audience

Yes. So do you think the whole banking system will become redundant? Because today if I have to make a transaction, I'll use Paytm; if I have to invest, I'll use Paytm or a Zerodha. So why do banks exist? Because the whole human interface is becoming very difficult; they don't solve our problems. And second, is school going to become redundant, because what students are learning will not stay valid, will have no shelf life at all?

Vijay Shekhar Sharma

Okay. I have an opinion on schools; I will get to it. So it's a very easy thing: if people start to make food at home, will restaurants become redundant? The demand may be of a different kind and the need may be of a different kind, but none of these things will become redundant, because they do not offer literally just the verbatim function we stated in that sentence; they offer much more beyond that. For example, banking is inherently about extending credit. The ability to have a well-regulated place where you can deposit money, where that deposited money is taken care of so that it is available when you need it, and available as credit when the economy needs it, which is the business model of a bank, is an ability that will never go away.

I mean, it is an obligation for them to become even more able and capable in both parts: the security and safety of your deposit, and the ability to disburse credit to the needy. Now, that part is called a bank. If we treat the bank as a bank branch, and your point is that you may not walk into the branch, fair. That may be the case, because you will not need a branch to make banking reach places. It can be perfectly extended using a smartphone, and now through a chatbot instead of just an app, and that is the beauty of the inclusion I am describing. The core machine of a financial institution is needed, and needed even more, because an even bigger load will be coming in.

The bank, which is served through, let's say, a branch, extended itself through ATMs when ATMs happened, and through apps when apps happened. You can have a third-party app; that's not a problem, but the core activity of the bank still belongs to the bank. And similarly, as you were saying about schools: schools are not only a place for going into the classroom. How many of us did not attend the class when the teacher was in the class? And I'm talking about college days; I'm one of them. Now, that does not mean college was a bad experience. Rather, college is a social experience of meeting like-minded people, of understanding and self-discovery beyond the syllabus in the class.

And I definitely agree with you that the single method of teaching, on a blackboard or whiteboard, where the teacher teaches, people write it down, and reproduce it in an exam, is changing. Then you have different MBA institutes; Harvard, for example, is known for case-based education. Those methods can now be extended into the classroom of a common, mass-level school. That is what the power of AI is. I am not yet a believer that you will not need the institution. Homeschooling is good, work from home is good, but going to an office has its own use case, just as we saw in the COVID days when everybody was working from home. You can be selfish and say, I want to work from home because I have ten other things to do; fine, sort that out. But ultimately there is value in going to a place, and that, in my opinion, will perpetually remain, whether it is a school, a bank, or any other such institution. On your question about education: no, I am not going to say that. The philosophy that the bank lives in a branch definitely changes; the idea that there will be, let's say, a bank manager or somebody to approve a loan will evolve; but the core work of the bank will remain. So DeFi and the like are very different cultures, or I would say they are all technology, nothing wrong with them; but again, the core philosophy, that you store money in a place and can demand it back, is banking.

Harinder Takhar

I completely agree on the school front. School is not just education; it is also the social experience, and we've all benefited from that. So nothing more to add. But I do find an interesting theme across finance, healthcare, and education: AI gives you more access, and more personalized access. Your doctor is your doctor, not the generic doctor of the paracetamol-for-everyone example. The same goes for your teacher. And I think that makes a very radical impact.

Vijay Shekhar Sharma

It's amazing. My mother had heart disease, which then ended up in a stroke. And now I leverage the power of AI to keep track: she has a Fitbit that generates a feed that goes to my agent, and it triggers a notification telling me when all is well or when I should check in. Now, that possibility, the kind of nuanced care I am talking about, could not have been possible before, even with doctor support. So I'll give you another example, and this is something you should check out if you have a situation like this. The doctor gave her some medicine for certain rhythm control and for suppressing beats, and so on and so forth, as the case was.

But some of them ended up taking her palate away, making her not feel like eating, and she was becoming weaker. So I talked to ChatGPT, I gave it the prescription, and I said: she's not eating, is there a problem here? ChatGPT then told me: I think this combination at this hour, which is pre-lunch, creates a situation where she may not have a tendency to eat. If you move this one to this time, and move this one earlier, there will be enough of a window for her to eat. And in any case, from the heartbeat data I'm seeing, the beat suppression that was needed is not required at this time.

So it was just suggesting that I do it, and obviously it put a disclaimer that I should talk to the doctor, and so on. So basically, the same medicines on a different time schedule could potentially solve it. And I sent this to the doctor, and, the good thing I want to tell you, the doctor replied in three words: "Yes, you can." I shared it very frankly, because doctors, or any person skilled in a domain, may not like you bringing in cocky input. So I said: sir, my mother has this condition, and I understand it is tough to go through this nuance.

I tried asking about it; it is suggesting this, and here is a brief PDF if you want to read it, but the net output is that instead of taking it before lunch, she can take it after lunch. He said, okay, yeah, you can do it. "Yes, you can": those are the three words of life. And I think after that she became better, because she was already taking the medicines, just on a new schedule. Now, what we are saying is that medicine, education, finance, and agriculture will catapult into a very different stage and age. I can promise you this today. The new Gen Z generation, used to a smartphone, must be asking: really? How were you living life before?

So I can promise you one thing. By the time 2027 passes, and we are now in 2025, so some of us are already seeing it, and I am literally putting a deadline of 2027, which means two years: every new generation will think, oh, are you saying that you used to search, then click on every page, then scroll down to read, then decipher with your own cognitive load what was happening, and how did you decide whether it was correct or not, and did you not have an alternate opinion, and so on? The 2030s will be so shockingly different that the 2010s and 2020s will look archaic. That I can definitely promise.

Audience

A question for you. Your mother's example made me believe that these days ChatGPT is probably more humane than a doctor himself, because this kind of conclusion might not be given by a doctor. My question, since you are giving such wonderful use cases, is about the curd-and-banana example. I would like you to elaborate on that. Do you want an India-specific model for the purpose, or are you talking about Indian-context data, or is it that only Indian children get affected by the banana and curd?

Vijay Shekhar Sharma

Yeah, very good, I like it. A classic future mother, maybe. Perfect. Number one: the model gets trained on what is digitized and available on the internet, because it literally browsed and scrolled the internet and brought that in. And then it gives weight to the common knowledge: if more people said something happened, it believes that, because there was nobody answering every question for it. How does it come to believe what I just described? Because it learned from the number of times something was said. It is a very surprising method of adding bias to a model. For example, say we have a very contentious statement of history: if more people tell it to the model, the model will start believing it. Did you know this? It is exactly how social media works: viral news, sometimes wrong, comes to be believed as true, because that is how the brain thinks too. The more of something you see ("why are you saying this? I feel I can eat a banana or a laddoo at night"), the more you start believing it; that is the thesis the model operates on. Now, do you fundamentally believe the internet is filled with domestic, or let's say Indian, background, history, culture, and richness? The answer is probably not, because the Western, English-speaking civilization has produced a lot of content in English, and there is much less of our content generated, translated into English, repeated many times, and discussed. So it is an inherent limitation that our own knowledge did not propagate onto the internet. Who decided that whatever is written on the internet is the ultimate truth, versus what is written in the books? That gap creates this gap in the model's knowledge.

Can an international model do it? Yes, if you can tell it, “no, no, on this topic, this is the truth.” But the problem for the international labs making these models is: how do they know this is the truth? They can’t, and a lot of people would need to say it before they treat it as true. So there is a limitation for the international labs. Can this be done by an Indian lab? An Indian lab can respectfully say, “why don’t you learn what is written here first, and treat this as your definitive knowledge.” And there goes the obligation to build in India.

Harinder Takhar

And I’ll add just one more thing. There is absolutely a very strong risk of bias from the inputs that you give. But the model has the advantage of also knowing you, the person asking the question, and it can reason through all the inputs and still give you a balanced answer.

Vijay Shekhar Sharma

So this is a very interesting thing I want to tell you. My model, because I have started giving it a lot of context, the way many of you do with Claude and so on, has started to say, “oh, you are in Delhi, this is what Indians do.” It literally said that. I was discussing a product feature with it, about a wealth system, Paytm Money, and I asked it to give me the cultural nuances Indians have in managing money, which I could add to my feature. The kind of feature ideas it gave me, I thought: that’s good, I would never have thought like that as a purely logical person. So yes, it can.

You had a question? ChatGPT versus... that’s very good, very cute. Amitabh is reading the essay again and again, but it’s a good thing. Sometime later we will do this: our agent will talk to your agent, and then they will decide whether we should talk to each other or not. This is coming, very seriously. If you order an Uber, your agent will call Uber for you. This is expected: I say, “I have to go now and I am leaving,” and then PJ will pop up and ask, “should I call an Uber?”

“Yes, call it.” “Should I call the same kind as last time?” “I am not getting one; I will make it quick.” All those things that we all experience will be so dramatically different. Guys, it is real: you will be represented by your agent, and that is the race all these labs are running. That is why the agent came, and that is why the agent is hyped up at this time, because your agent could be based on OpenAI or Gemini or Claude or whomever. That is why they are talking about it, and your agent will be working with you. There is a question right there at the back; over to you.

Audience

So my question is: do you believe the future of the stock brokerage industry will be AI-native from the ground up, where an AI agent can be the primary interface, or do you think AI will be just a feature in the stock brokerage platform?

Vijay Shekhar Sharma

And I’m not saying that it is the perfect way to do it, but I’m saying that we are all committed to agent-first interfaces. “Vijay, I have a follow-up question around these agent-first interfaces. When an agent goes to the Uber app on my behalf, will it see ads? Will it have to log in with Google?” Well, nerd question. And for every one of you, the point he is trying to make is: will every other part of the stack also change or not? Well, that is the beauty. Agent will talk to agent. Uber will also become an agent. And instead of trying to authenticate my login and so on, my agent will say, “I am coming, and I am an agent acting on behalf of someone.”

Just like ambassadors used to say: “I come from the king, from his court, and he has sent this message.” That is how it will be. “I come from Harinder Takhar’s court, and he wants an Uber.” “What kind of Uber does he want?” “He wants an Uber that can take him this fast.” “And how will he pay? With a token? Later? No, take this money now.” And then the Uber is called. It will happen like this.
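The ambassador analogy, where an agent proves whose behalf it acts on instead of logging in, resembles delegated authorization. A minimal sketch with an HMAC-signed “mandate” (the token format, names, and shared secret are hypothetical, not any real Uber or agent-platform API):

```python
import hashlib
import hmac
import json

SECRET = b"shared-secret-between-user-and-platform"  # illustrative only

def issue_mandate(principal, scope):
    """The user signs a mandate: 'my agent may act for me within this scope'."""
    payload = json.dumps({"principal": principal, "scope": scope}, sort_keys=True)
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "sig": sig}

def verify_mandate(mandate):
    """The service side checks the signature instead of asking the agent to log in."""
    expected = hmac.new(SECRET, mandate["payload"].encode(), hashlib.sha256).hexdigest()
    if hmac.compare_digest(expected, mandate["sig"]):
        return json.loads(mandate["payload"])
    return None  # forged or tampered mandate: refuse the agent

mandate = issue_mandate("harinder", "book-one-ride")
claims = verify_mandate(mandate)
print(claims)  # the ride service now knows who sent the agent and for what
```

In practice such delegation would use public-key signatures, scopes, and expiry times rather than a shared secret; the sketch only shows the shape of “agent talks to agent” without a login screen.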

And this is real. I am telling you, you are laughing, I am laughing. One day someone said, “I will stand here, press a button, and a car will come,” and people laughed at him. Before that, “I will stand here, make a call, and talk to someone far away,” and people laughed at that. Then, “I will stand here, make a video, and my video will be seen in London,” and people laughed at that too. All these technologies now feel native and normal to us. So this is not a trick question: in the near, incremental future, the way things work is going to be so dramatically different that to those who are making apps I have only one suggestion: build an app with a non-icon-based interface.

Yes,

Audience

One question here: for the students who are going through their degrees right now and are not in the data, tech, or AI space, but in functional domains like finance and accounting, HR, and so on, what do those poor people do? How should they think? Because the education system they are running through is old and outdated. What is the future for them?

Vijay Shekhar Sharma

So it’s a very interesting question. A lot of people ask me whether it means we should study only certain things. Some days back we used to say we should teach programming to our kids, the way IIT preparation starts from class five. It’s a joke, a stretched joke, but you can imagine what I’m trying to say: you reach class six, start IIT preparation. At this rate, children would be handed a programming-language book at birth. That will end, right? Because soon there is no more programming. So we are in a flux, yes, ma’am, as we speak, in ’24, ’25, ’26. In ’24 few people were doing it, in ’25 people began to realize it, in ’26 most of us will go through it.

Your question has come in ’26; this question existed in ’24 too. So we will remain in a flux about whether this is needed or not. Some will do it out of their own intent and commitment to learn, and some will do it out of obligation. We always assumed that education would give us a job, and now the job is disappearing, so the discomfort is this: do you study out of vocation, interest, and passion, or for the outcome of a job? If it is for the outcome of a job: I am not skilled in, let’s say, humanities or art, but I can now produce art that sounds logical.

One of my teammates was travelling around Kerala and wanted to figure out a good Ayurveda treatment for some problem. He simply used Claude to generate questions, nuances, formulations, and so on, and then went to the Ayurveda practitioners in Kerala, and they said, “yes, this is very good; how did you think of this? You have come very well prepared.” Now the poor fellow doesn’t even have the skill of Ayurveda, and he is talking to masters of Ayurveda as if he were skilled. So there goes the question: did you need an Ayurveda education to become that intelligent?

The answer is: not necessarily. You were able to use the tool of AI. So to whoever it is, whether a programming and computer science student or a humanities student, I’m not going to say stop doing what you’re doing. I’d rather say: definitely learn how to use it. Just as people once said, “learn computers, son, they will be very useful,” in the same way I want to say: leverage AI and make your work AI-first, because then you will stay relevant. And this is not only a question for those graduating right now, because some things will remain for some more time; think about 2030 onward, when all of this is prevalent.

Think of how painting used to be. You must have seen the Sholay poster the studio made; it was painted by hand. Making posters used to be an art in Bombay; the posters of Mehboob Studio and RK Studio are called legendary. Now where are they? Today they are made on a laptop, just like this, the way Ghibli-style art is made now. So does it mean we don’t need new artists? We will need ones who work with a computer, who can say, “I want this kind of nuance,” and that will become an art fashion. And if an artist today works only with, say, paint, it will be so unique and rare that that person will have their own value.

“My God, handmade art!” Like today there is a premium on cold-pressed oil, precisely because you are not doing it industrially. “Oh my God, this is perfectly hand-pressed sugarcane juice.” So that is it.

Audience

Thank you, Vijay, a fantastic and energetic talk. A little while ago you said that LLMs, foundation models, should be built here. The thing is, both while making a foundation model and during inference, a lot of data is needed, and in an industry like finance there is a lot of regulation, with more to come. In such a constrained data environment, how do you make a good LLM and run inference? This could apply to any industry that does not have a lot of data: how do we make a good LLM for that industry?

Vijay Shekhar Sharma

The short answer is that you work with that industry’s players, whether it is regulated or not; this applies to everyone. You find all the stakeholders and persuade them by showing what you can bring to the table, and people understand the need for it; if not all, some will. In my opinion, it helps if you can articulate well what you are training and why. And progress is an interesting thing: regulation is not against progress; regulation is there to keep the system from sliding and falling. So remember, regulation is also for progress. That is always the case. Regulators are the reason we have such a vibrant financial system in this country.

At the same time, what they protect us from is the system falling apart, so there is respect and value in what they do. If you talk to different industry stakeholders, you will get access to insights and data, and different regulators allow it, starting with their sandbox programs and so on. And I’ll just add one more thing: double-click into that data. Only some kinds of data, like my personal information, are something nobody wants shared; the regulator doesn’t want it, you don’t want it. But outside of that, there is plenty. There is plenty.

Audience

Yes. As you mentioned, with AI and all these forthcoming things, like the Uber example and such interfaces: as AI grows fast, do you think there will be more inequality, or will it remove inequality? Will power get concentrated in a few hands?

Vijay Shekhar Sharma

Perfect, this is a very interesting question. Inequality: I will take it from the money perspective, because I am not taking it from other perspectives. AI, for the first time, is a technology that is easier for everyone to use, because you are talking in your native language, and you can speak even if you do not know how to type or write. So it is a very inclusive technology in itself, and its outcomes are profoundly powerful. Say you are fighting a battle against someone rich in money or able in skill: with AI you can very comfortably match that person. AI is the horse, or supercar, or rocket ship that you can ride easily, and then you can go ahead of anybody in a zero-sum business; and if you are in a positive-sum business, you can expand yourself. So AI is not only inclusive; AI is actually the superpower that would reduce the gap between rich and poor and be more inclusive.

And that is what I’m trying to say: it is not a technology of the rich. The rich have a fear. You must have seen it written on social media: sell education to parents, health to the old, beauty to women; such business models are described there. I want to say those models are written to sell the rich safety. Because the rich want safety, security, exclusivity. Why? Because they don’t want others to take away what they have. So they want to stay disconnected from the world; that’s why the rich person’s lounge is different from the normal people’s lounge.

Audience

But sir, having said so, there is also a risk with AI which we are underestimating at this point of time. What do you think that potential risk could be?

Vijay Shekhar Sharma

I think there is risk in driving a car and coming here, and also in using a phone. So I will not just say “it is risky”; the gauge of how much risk matters more, because the generic line “it is a risk” is not a complete statement. The question is whether it is a risk that every common person can manage. For example, kids are not allowed to cross the road alone, but as an adult you are expected to cross the road alone, and there is a risk; that is why you are supposed to check left and right. AI, surprisingly, is at such a low level of risk that even a commoner can handle it. That is what I am trying to say. Yes, yes, ma’am?

Audience

My model is right now in beta and we are using it. As was mentioned, your infectious enthusiasm for driving this technology is very commendable; thank you so much for that, and you are in the driving seat, so to speak. My concern is always about how to solve the distribution problem of AI, and how to make it a public good.

Vijay Shekhar Sharma

Yeah, that’s right, and I like that you put it that way. I want to tell you that technology distribution happens on a terminal. As you remember, there was a time when the computer was not with people, the laptop was not with people, and the smartphone was not with people. So governments used to run free-laptop and free-computer programs; then free-smartphone and free-tablet programs also came, and politicians put them in the manifestos of different states. Today, anyone who has an internet-connected compute terminal, in what is said to be a data-positive country like India, has access. If you have access to a computing device with an internet connection, you have full AI access.

The good thing is that it is not installed on your device; it is not a version of software on your device, and it does not require a high amount of compute or memory on the device. I mean, we created an AI soundbox, which is a natively small device, and it is as capable as even Sam Altman’s computing device. That is the beauty of it. So AI is far more inclusive and very easy to diffuse. I’m glad to hear that you, being in the driving seat, think of it as a problem. That’s very nice.

Audience

And I have one more thing, about this agentic AI. It has this very human trait of trying to please you.

Vijay Shekhar Sharma

Yeah, because they were written to behave like this: “oh, you are asking about this thing; I think you should look left and right.” Agents are not human agents; they are untrained beasts of ability. You prefix them with instructions and they behave accordingly; it is literally an instruction. So the risk, just as ma’am was asking: what risk can there be? It is like having someone behind you who always says, “let him do this.” That kind of thing.

Audience

Yeah, yeah. So Paytm played a very important role in diffusing fintech to the masses. What is the plan for diffusing AI to our country?

Vijay Shekhar Sharma

I fundamentally looked at it as: do you diffuse it to consumers, or do you diffuse it to small businesses? On the consumer side, I think it is tough to fight the three or four gorillas you are seeing. So I started working on small-business AI, small-merchant AI. The vegetable vendor should know, when he goes out in the morning: “brother, tomatoes are plentiful these days, but tomatoes will get spoiled because it is going to rain, so bring tinda, bring potatoes.” From core business knowledge like that, down to problems like “why didn’t I get my money? Who deducted it?”

And “will I get a loan or not?” These are questions and answers they don’t get from anyone, and trying to handle them over phone calls does not scale. Reaching individuals at that level is not possible even with a very sophisticated computing system; AI brings it. So here we are: we are building India’s AI for small and micro merchants and distributing it to them. My belief is that big people will be helped by big people; small people will be helped by us.

Audience

Sir, one more question, thank you. About the AI soundbox, sir: our education system was designed for the industrial era. In the AI era, do we still need to follow this education system? Because in AI...

Vijay Shekhar Sharma

Look, an opinion on the education system... my father is a teacher, so I won’t give one to his face, because my father will beat me up; I will do it from a distance. “So, you still haven’t understood it? You didn’t study in class.” I was a topper, by the way. Just in case. That’s when I was beaten. I think the question of what a good education system looks like will evolve as an answer over the next five years and beyond. We are literally at the beginning. In other words, it is like landing at the airport right now and asking, “do you have Indian food here?” Go into the city, stay for two days, then ask where the Indian food is.

So it is a problem to be solved, and it will evolve much later. Yes, your question. Oh no, no, the lady in the back, sorry. The visual identity gives it away: are you a doctor? Ah, it looks like it.

Audience

I have a question. What is the one thing you would never allow AI to do in the payments domain? Specifically in the payments domain.

Vijay Shekhar Sharma

Oh, easy. You would not want it to have full control of your bank account to make payments, because if it does something stupid, you are the one who allowed it. It is like handing over a blanket standing instruction: “do what you think is right.” We will not do that. So full control of your bank account should never be given. That is the thing.
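The “never give full control” rule can be enforced mechanically: let the agent pay on its own only under a hard per-transaction cap, and require explicit human approval above it. A minimal sketch (the limit, names, and flow are illustrative, not Paytm’s actual design):

```python
def authorize_payment(amount, per_txn_limit, human_approved=False):
    """Agent-initiated payments pass only within a bounded standing mandate;
    anything above the cap needs an explicit human yes."""
    if amount <= per_txn_limit:
        return True  # small, pre-authorized spend
    return human_approved  # large spend: the agent alone cannot approve it

# The agent can buy lunch, but cannot empty the account unasked.
print(authorize_payment(250, per_txn_limit=500))        # True
print(authorize_payment(50_000, per_txn_limit=500))     # False
print(authorize_payment(50_000, per_txn_limit=500, human_approved=True))  # True
```

The design choice matches Sharma’s point: the agent keeps convenience for routine spends while the account holder retains veto power over anything consequential.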

Audience

But if we go a little deeper into the technical side, when we are talking about ISO 20022, or maybe RTR payments... I really liked that you were talking to the nerds. ... Actually, sir, I have a job in Canada; you won’t believe it, RTR is going to come there, and it is going to be front-line. I feel very proud there when they hear that I’m from India and we use Paytm. So kudos. Thank you so much, sir. Namaste, sir. So my question is this. What is the minimum…

Vijay Shekhar Sharma

I’m serious, man, I love you guys. The problem with Gen Z is that you have to think a little. “Oh my God, should I ask this guy? This is the risk.” Because if he says something wrong one day, you go home, fight, drop this friend, and then you won’t believe anything he says. Look, before us, people said about Gen Z: they’re on the computer, they’re on the screen, they don’t go out to play, they don’t read the newspaper. “Go to the beach. Go out.” That is what we were told. Now we’ll tell you: what are you doing, outsourcing all your questions like this? Use your brain.

But your t-shirt will say: “I don’t use my brain, I use tokens.”

Harinder Takhar

Sir, one more question.

Vijay Shekhar Sharma

I will take them quickly. I will make one question out of four questions and answer it. Okay, okay.

Audience

Sir, what is the minimum effective strategy a tier-3 or tier-4 student can follow so that he can do very well in AI?

Vijay Shekhar Sharma

Tier 3 or tier 4 students. What does tier mean here? Class 3? “No, no, sir, city.” Yes, city, like a rural area. Okay, I am also tier 3, from Aligarh. So, what can they do so that they can do very well in AI and its opportunities. I got it. And what is your question? Go ahead. “When farms become greener, air becomes cleaner, and productivity becomes higher, what will be left for the labor market?” Oh, this was asked earlier; he had raised his hand. So he is asking: when farms become greener, air becomes cleaner, and productivity becomes higher, what will we be doing?

Okay, I got it.

Audience

Do you see AI as a leveler in the fintech segment, and how will you compete with your peers if they are also adopting AI and AGI?

Vijay Shekhar Sharma

No problem, I get it. Okay. Any other questions? I’ll answer these together. Go ahead. If you speak up, I will look your way. Sure, go ahead.

Audience

I would like to go back to your point that agents will do the talking for us humans. I have seen…

Vijay Shekhar Sharma

“How will I talk?” Okay, go ahead. No, no, that’s a different question entirely.

Audience

My question is that agents have started forming their own websites. If you search…

Vijay Shekhar Sharma

Yeah. So, what is the question that you have?

Audience

My question is: can we integrate agents into the human economy that we have currently, and how can that economy succeed?

Vijay Shekhar Sharma

Okay, perfect. So the inherent question from the tier-3 school kid, which is really everyone’s question, is whether this will be inclusive or not. I have a very simple answer. Consider this: for using the internet and the computer, you had to get a QWERTY keyboard and then learn programming. With AI, just try to get all your work done with AI, and then leverage it as an extension of whatever your education allows. If you study engineering, if you study art, then ask it: “tell me the comparison between Shakespeare’s English and what was going on in India at that time, and how they were writing in Hindi.”

Ask questions and enhance your curiosity as a student. You could be in a tier-3 city or a tier-1 city, but curiosity, and then fulfilling it using AI, will give you a superpower that nobody else will have. So enhance your curiosity, because you have access to AI.

No more questions, please. And your question was on the labor market. The labor market answer is very simple: I don’t think labor means less for us; we are all labor too, and I am not treating only physical labor as labor. AI’s ability is that whatever is digital, it can perform superbly. So ask yourself: are you mechanically doing exactly what keyboard typing is, or are you also thinking while you work? You become more productive, and the work market becomes even richer and more fulfilling for you. Your job will become fulfilling, and businesses will be able to expand into places where they otherwise could not have expanded. Thank you, thank you, and please exit from the left. Thank you.

Thank you. Thank you. Thank you.

Related Resources: knowledge base sources related to the discussion topics (15)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Vijay Shekhar Sharma positioned artificial intelligence not as a threat to employment but as a catalyst for India’s economic ascent, emphasizing job evolution rather than displacement.”

The knowledge base records Sharma emphasizing “Job Evolution Rather Than Displacement,” confirming his framing of AI as a catalyst rather than a threat [S1].

Confirmed (high confidence)

“Sharma praised Sarvam’s recent launch as a remarkable, impressive proof‑of‑concept for Indian AI models.”

Panel excerpts note Sharma describing the Sarvam model launch as a “remarkable announcement” and “really impressive,” corroborating his praise of the model as a proof-of-concept [S69] and [S70].

Additional Context (medium confidence)

“India must build its own foundation models in English and Hindi to break out of a services‑only paradigm and embed Indian cultural knowledge.”

The broader discussion of India’s strategic positioning in AI and semiconductors highlights a national push for indigenous AI capabilities, providing context for the call to develop home-grown English and Hindi foundation models [S67].

Additional Context (medium confidence)

“AI can detect and remove hidden biases in loan approvals, offering unbiased credit decisions to low‑income users such as auto‑rickshaw drivers.”

Research on AI-enabled credit analytics notes that while AI can help surface bias, it can also exacerbate bias if not carefully managed, adding nuance to the claim that AI will simply “remove” hidden biases [S77].

External Sources (77)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S2
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Audience – Vijay Shekhar Sharma- Harinder Takhar
S3
From Innovation to Impact_ Bringing AI to the Public — – Vijay Shekhar Sharma- Harinder Takhar
S4
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — By all means. The second layer is one of the layer is the serving layer when you build these applications. How do you do…
S5
https://dig.watch/event/india-ai-impact-summit-2026/transforming-health-systems-with-ai-from-lab-to-last-mile — Last we saw was in G20. Hopefully, it brings back memories. Yes. Happy ones. I’d like to keep it that way. She has had e…
S6
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S7
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S8
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S9
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S10
AI: The Great Equaliser? — Striking this delicate balance allows for progress within the technology framework. It is worth noting that the analysis…
S11
Shaping the Future AI Strategies for Jobs and Economic Development — It’s not an obstacle. It’s not an obstacle for the innovation. So in order to do that, we need to build trust also, and …
S12
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Sharma identifies compute resources and research talent as the main barriers, suggesting regulatory issues are less sign…
S13
Secure Finance Risk-Based AI Policy for the Banking Sector — “Yet, inclusion cannot be assumed”[73]. “If harnessed responsibly, AI can convert this expanding digital footprint into …
S14
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Additionally, generative AI can democratise financial services by allowing all participants to easily access the service…
S15
https://dig.watch/event/india-ai-impact-summit-2026/from-innovation-to-impact_-bringing-ai-to-the-public — And that is what I’m trying to say, that it is not the technology of rich. Rich has a fear. You must have seen, you must…
S16
MedTech and AI Innovations in Public Health Systems — And I’m going to do that. be within the delivery thing, not as a layer on top. And if you then, if we focus on, you know…
S17
AI for Social Good Using Technology to Create Real-World Impact — So I think I have to answer this in two parts. The first part is how do we basically leverage what Nandan refers to as t…
S18
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — So I think it’s going to be a force for good. If I look at banking, I don’t think the core of banking is going to change…
S19
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Kumar made the provocative observation that India needs “fewer, smarter people”—engineers with systems thinking and rese…
S20
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S21
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller challenges the idea of “cleaning” biased data, arguing that historical data inherently reflects past bias…
S22
Building Trustworthy AI Foundations and Practical Pathways — “India has scale, India has linguistic diversity, but India also has a lot of different things.”[63]. “In many regions o…
S23
Technology Rewiring Global Finance: A Panel Discussion Summary — Traditional banking will evolve significantly with decreased need for physical branches, but banks won’t disappear as th…
S24
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Examples include children with disabilities being provided with non-inclusive educational materials, political participa…
S25
Empowering Workers in the Age of AI — Verick emphasised that the benefits of AI adoption are similarly unequal, with the global north positioned to capture mo…
S26
Open Forum: A Primer on AI — In summary, the widespread adoption of AI presents opportunities and challenges. While it can boost equality, address cl…
S27
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S28
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S29
From Innovation to Impact_ Bringing AI to the Public — Perhaps the most compelling argument presented centres on India’s need to develop its own foundation models. Sharma fram…
S30
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Building indigenous foundation models and sector‑specific LLMs Sharma stresses that India must create its own foundatio…
S31
Artificial intelligence (AI) – UN Security Council — Furthermore, rushed regulations could inadvertently favor large corporations over smaller entities. Another speaker poin…
S32
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Concentration of health data can lead to concentration of economic power, potentially exacerbating market inequalities. …
S33
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S34
Open Forum: A Primer on AI — In summary, the widespread adoption of AI presents opportunities and challenges. While it can boost equality, address cl…
S35
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S36
Conversational AI in low income & resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S37
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Chami argues that when building community-driven AI, it’s important to connect with other small and medium enterprises i…
S38
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S39
AI for Good – food and agriculture — Dongyu Qu: Excellencies, ladies, gentlemen, good morning. A year ago, we all gathered for the Previous AI for Good Summi…
S40
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — The business model for AI in farming can be particularly challenging, especially for smallholder farmers in emerging eco…
S41
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S42
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Sociocultural Georgieva describes AI’s impact on labor markets as dramatic and uneven, affecti…
S43
AI for Social Empowerment_ Driving Change and Inclusion — Inequality and broader socio‑economic effects She warns that AI is exacerbating inequality by increasing capital concen…
S44
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Create stronger partnerships between educational institutions (especially community colleges) and businesses to align tr…
S45
Interim Report: — 67. A new mechanism (or mechanisms) is required to facilitate access to data, compute, and talent in order to develop, d…
S46
Secure Finance Risk-Based AI Policy for the Banking Sector — Compliance functions increasingly rely on automated pattern recognition, while adaptive cybersecurity models respond to …
S47
eTrade for all leadership roundtable: The role of partnership for a more inclusive and sustainable digital future — These entities possess the advantage of agility, risk-tolerance, and innovation, making them valuable contributors to po…
S48
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “qu…
S49
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — Larry Wade: Yeah, I can take that one. And just before I dive in there, something Judith said, and you said it as well, …
S50
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Additionally, generative AI can democratise financial services by allowing all participants to easily access the service…
S51
The rise of AI in financial services: balancing opportunities and challenges — According to industry executives, AI is increasingly seen as a game-changer in the financial services sector, offering sig…
S52
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — Building indigenous foundation models and sector‑specific LLMs Sharma stresses that India must create its own foundatio…
S53
From Innovation to Impact_ Bringing AI to the Public — The conversation highlights India’s advantageous position as a $2.5-3.5 trillion economy with potential to add another $…
S54
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S55
AI: The Great Equaliser? — It is worth noting that the analysis acknowledges that AI technology may not significantly reduce job numbers. Instead, …
S56
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S57
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S58
WS #205 Contextualising Fairness: AI Governance in Asia — Milton Mueller challenges the idea of “cleaning” biased data, arguing that historical data inherently reflects past bias…
S59
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:I can take that, no worries. Thank you, Abhishek. The floor is yours. You can give your question. Yeah, t…
S60
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Addressing bias in data and algorithms Gender-inclusive data actively identifies and addresses biases in data and algor…
S61
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — Monica Lopez: Okay, yes. So, can you hear me okay? Yes? All right. Well, first of all, thank you for the forum organiz…
S62
Technology Rewiring Global Finance: A Panel Discussion Summary — Traditional banking will evolve significantly with decreased need for physical branches, but banks won’t disappear as th…
S63
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Examples include children with disabilities being provided with non-inclusive educational materials, political participa…
S64
AI/Gen AI for the Global Goals — Christopher P. Lu: Yeah, I mean, look, AI is new, but it’s actually really not that new. I mean, we’ve been having this …
S65
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Building trust in digital systems and expanding participation in AI decision-making are essential for successful impleme…
S66
Artificial intelligence (AI) and cyber diplomacy — The speaker argued for balanced attention across short-term, mid-term, and long-term AI risks, cautioning against fixati…
S67
The Global Power Shift India’s Rise in AI & Semiconductors — This panel discussion focused on India’s strategic positioning in artificial intelligence and semiconductor technologies…
S68
Multi-stakeholder Discussion on issues about Generative AI — He believes these applications have the potential to improve society and drive economic development.
S69
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — He’s the man. I think for the last couple of days, one of the remarkable announcements was the launch of Sarvam’s new mo…
S70
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — He’s the man. I think for the last couple of days, one of the remarkable announcements was the launch of Sarvam’s new mo…
S71
https://dig.watch/event/india-ai-impact-summit-2026/ai-algorithms-and-the-future-of-global-diplomacy — I think the counselor did allude to industrial AI. That’s a fantastic use case of cooperation where you and India could …
S72
Keynote by Uday Shankar Vice Chairman_JioStar India — But our ability to translate our abundant ambition into reality has also been constrained by a few structural factors. C…
S73
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — While the panel focused heavily on Global South inclusion, an audience member challenged this narrow focus by highlighti…
S74
Policy Network on Artificial Intelligence | IGF 2023 — Furthermore, the extraction of resources for technology development is shown to have significant negative impacts on ind…
S75
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The discussion revealed tensions around funding approaches. Anthony identified that “historical donor-led funding approa…
S76
Fireside Conversation: 01 — And I think if all the investments in AI are going to deliver the value to society, not just to individuals, we’ll have …
S77
A Global AI in Financial Services Survey — Figure 9.2: Data sources used for AI-enabled credit analytics — 9.2. Will the Usage of AI in Credit Analytics Exacerbat…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vijay Shekhar Sharma
19 arguments · 188 words per minute · 7835 words · 2490 seconds
Argument 1
AI will dramatically increase individual productivity, enabling small shopkeepers to run multiple outlets and driving higher GDP per capita (Vijay Shekhar Sharma)
EXPLANATION
Sharma argues that AI adoption will boost the productivity of individuals, allowing even small entrepreneurs to scale their operations, which in turn will raise overall GDP per capita for India.
EVIDENCE
He points to the ubiquity of smartphones among shopkeepers and how AI-first products can multiply productivity, enabling a single shopkeeper to manage several shops and thereby increase GDP growth through higher per-person output [6-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sharma’s claim aligns with observations that AI-driven productivity gains can raise GDP per capita, as noted in the Innovation to Impact discussion, which highlights higher per-person output and potential $2 trillion growth for India [S2], and with broader findings that AI boosts productivity across sectors [S9] and increases economic growth without reducing jobs [S10].
MAJOR DISCUSSION POINT
AI as an Economic Growth Engine for India
Argument 2
AI will not eliminate jobs but will shift bottlenecks, creating new, more productive work opportunities (Vijay Shekhar Sharma)
EXPLANATION
Sharma contends that while AI removes existing bottlenecks in processes, it does not erase work; instead, problems move elsewhere, leading to new kinds of tasks and opportunities.
EVIDENCE
He explains that every system has bottlenecks, and as AI eliminates them, the problems simply relocate to other parts of the system, analogous to traffic moving from one congested road to another [16-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The view that AI shifts bottlenecks rather than eliminates jobs is supported by analyses that AI increases productivity without large job losses and creates new work, as discussed in the ‘Great Equaliser’ report [S10] and in the Shaping the Future panel on infrastructure bottlenecks and job evolution [S11, S1].
MAJOR DISCUSSION POINT
AI as an Economic Growth Engine for India
Argument 3
India must build its own foundation models to move up the value chain and reduce dependence on service‑only economy (Vijay Shekhar Sharma)
EXPLANATION
Sharma asserts that creating indigenous foundation models is essential for India to transition from a services‑centric economy to one that adds higher‑value AI capabilities.
EVIDENCE
He states that building a foundation model is a non-negotiable requirement for moving up the value chain, citing the launch of Sarvam’s model as a positive example and urging the creation of many such models to prove Indian capability [34-39][44-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of indigenous foundation models for moving up the value chain is echoed in multiple sources that call for India to build its own LLMs to reduce reliance on foreign services and to showcase capability [S2, S1].
MAJOR DISCUSSION POINT
Need for Indigenous Foundation Models and AI Infrastructure
AGREED WITH
Audience
Argument 4
Indigenous models are essential to capture Indian cultural knowledge and mitigate bias inherent in globally trained models (Vijay Shekhar Sharma)
EXPLANATION
Sharma explains that global models are trained predominantly on Western internet content, which can embed cultural biases, so Indian‑specific models are needed to reflect local truth and nuance.
EVIDENCE
He describes how models learn from the frequency of statements on the internet, leading to bias, and argues that Indian labs should prioritize learning from Indian sources to create a ‘truth model’ that respects Indian culture and history [199-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about cultural bias in globally trained models and the need for Indian‑specific models are documented in the Innovation to Impact transcript and the Trusted AI keynote, which stress preserving Indian cultural knowledge and mitigating Western‑centric bias [S2, S1].
MAJOR DISCUSSION POINT
Need for Indigenous Foundation Models and AI Infrastructure
AGREED WITH
Audience
Argument 5
The cost barrier (₹10,000‑₹25,000 crore) is less important than having skilled talent and cost‑effective training methods (Vijay Shekhar Sharma)
EXPLANATION
Sharma downplays the significance of massive capital outlays, emphasizing that the decisive factor is the availability of talent and smart, efficient ways to train models.
EVIDENCE
He references Paytm’s investment of ₹10,000–25,000 crore in QR technology as proof of commitment, then argues that the real question is whether there is a viable business model and skilled people, not the size of the budget [57-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Barriers to AI development are framed primarily around talent and compute rather than capital, as highlighted in the Global South adoption report, which identifies talent and compute resources as key constraints [S12].
MAJOR DISCUSSION POINT
Need for Indigenous Foundation Models and AI Infrastructure
DISAGREED WITH
Harinder Takhar
Argument 6
AI can eliminate hidden biases in financial decisions such as loan approvals, fostering greater financial inclusion (Vijay Shekhar Sharma)
EXPLANATION
Sharma claims that AI can detect and remove both known and unknown biases in financial decision‑making, leading to fairer outcomes and broader inclusion.
EVIDENCE
He gives the example of using AI to decide whether a transaction should proceed, noting that machine-based decisions avoid the personal biases a human loan officer might introduce [98-104].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s role in reducing hidden bias in financial decisions and promoting inclusion is reflected in the Secure Finance policy paper that notes AI can broaden fair access to financial services [S13] and in the generative AI impact report on democratising finance via chatbots [S14].
MAJOR DISCUSSION POINT
Vertical AI Use Cases (Finance, Agriculture, Healthcare, etc.)
AGREED WITH
Audience
Argument 7
AI‑driven personalized financial advice can serve low‑income individuals, expanding access to wealth‑building products (Vijay Shekhar Sharma)
EXPLANATION
Sharma illustrates how AI can provide tailored investment recommendations to people with modest savings, helping them make informed decisions about deposits, gold, or index funds.
EVIDENCE
He narrates a scenario where an auto-rickshaw driver with ₹2-5 lakh receives AI-generated suggestions on suitable financial products, delivered in the driver’s native language [108-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Personalised AI financial advice for low‑income users aligns with findings that AI can democratise financial services and improve inclusion, as described in the Secure Finance policy and the generative AI impact study [S13, S14].
MAJOR DISCUSSION POINT
Vertical AI Use Cases (Finance, Agriculture, Healthcare, etc.)
Argument 8
AI can analyze health data (e.g., medication timing) to provide actionable insights, improving patient outcomes (Vijay Shekhar Sharma)
EXPLANATION
Sharma shares a personal case where AI helped adjust his mother’s medication schedule, demonstrating AI’s potential to augment clinical decision‑making.
EVIDENCE
He recounts using ChatGPT to evaluate a prescription that caused his mother to lose appetite, receiving a recommendation to shift dosing time, which the doctor approved, leading to improved health [170-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Use of AI for health data analysis and patient outcomes is supported by discussions on AI in public health systems and digital‑health stacks, which highlight AI‑driven insights for medication and care pathways [S16, S17].
MAJOR DISCUSSION POINT
Vertical AI Use Cases (Finance, Agriculture, Healthcare, etc.)
Argument 9
Banks will remain essential for custodial and credit functions; AI will enhance front‑end services but not replace core banking roles (Vijay Shekhar Sharma)
EXPLANATION
Sharma maintains that while AI can streamline interfaces and automate routine tasks, the fundamental responsibilities of banks—deposit safety and credit provision—will persist.
EVIDENCE
He explains that banks’ core activities of safeguarding deposits and extending credit cannot be eliminated, even as branches give way to apps and agents, and that AI will augment but not replace these functions [138-151].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The assertion that banks will retain core custodial and credit roles while AI transforms front‑end services matches observations that banks’ core functions remain unchanged but customer experience will be reshaped [S2, S18].
MAJOR DISCUSSION POINT
Future of Traditional Institutions (Banks, Schools)
DISAGREED WITH
Audience
Argument 10
Schools will continue to provide social and holistic learning experiences; teaching methods will evolve but the institution’s purpose endures (Vijay Shekhar Sharma)
EXPLANATION
Sharma argues that education’s value lies in social interaction and personal development, which technology will augment rather than eradicate.
EVIDENCE
He cites the role of schools in fostering social experiences, case-based learning, and personal discovery beyond textbook content, emphasizing that these aspects remain vital despite digital tools [151-160].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The continued relevance of schools for social learning despite AI-driven changes is mentioned in the Innovation to Impact session, which notes schools will evolve rather than disappear [S2] and in broader commentary on education reform needs [S1].
MAJOR DISCUSSION POINT
Future of Traditional Institutions (Banks, Schools)
Argument 11
AI agents will communicate directly with other agents (e.g., Uber), removing the need for manual log‑ins and enabling seamless service orchestration (Vijay Shekhar Sharma)
EXPLANATION
Sharma envisions a future where autonomous agents act on behalf of users, negotiating with other agents and handling transactions without human‑level authentication steps.
EVIDENCE
He describes a scenario where an AI agent contacts Uber’s agent, bypasses traditional login, and negotiates ride details, illustrating agent-to-agent interaction [227-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The vision of AI agents interacting directly with other agents mirrors the agent-first paradigm discussed in the Trusted AI keynote, which describes personal agents and inter-agent communication [S1].
MAJOR DISCUSSION POINT
Agent‑First Interfaces and Inter‑Agent Communication
AGREED WITH
Audience
Argument 12
Designing applications around conversational agents requires moving away from icon‑centric UI toward dialogue‑driven experiences (Vijay Shekhar Sharma)
EXPLANATION
Sharma suggests that future app design should prioritize natural language interaction rather than visual icons, aligning with the rise of agent‑first interfaces.
EVIDENCE
He advises developers to create non-icon-based interfaces, emphasizing conversational design as the new norm for AI-driven applications [254-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from icon-centric to dialogue-driven UI is highlighted in the Trusted AI keynote, urging developers to adopt conversational design for agent-first applications [S1].
MAJOR DISCUSSION POINT
Agent‑First Interfaces and Inter‑Agent Communication
Argument 13
Mastery of programming is less critical than the ability to leverage AI tools to amplify one’s domain expertise (Vijay Shekhar Sharma)
EXPLANATION
Sharma claims that future relevance will depend more on skillfully using AI rather than on traditional coding proficiency.
EVIDENCE
He references past emphasis on early programming education, argues that the paradigm is shifting, and highlights that using AI as an extension of one’s knowledge is now more valuable [262-270].
MAJOR DISCUSSION POINT
Education and Skill Development for the AI Era
Argument 14
Students from tier‑3/4 regions can succeed by using AI to satisfy curiosity and solve problems, regardless of formal technical training (Vijay Shekhar Sharma)
EXPLANATION
Sharma encourages students in less‑privileged areas to adopt AI as a learning aid, leveraging it to ask questions, explore subjects, and enhance productivity.
EVIDENCE
He advises tier-3/4 students to treat AI as a tool for inquiry, giving examples of asking AI to compare Shakespeare with Hindi literature, and stresses that curiosity combined with AI yields a “super-power” [505-514].
MAJOR DISCUSSION POINT
Education and Skill Development for the AI Era
Argument 15
The current education system, built for the industrial era, will need substantial reform within the next five years to stay relevant (Vijay Shekhar Sharma)
EXPLANATION
Sharma predicts that the traditional schooling model will evolve as AI becomes pervasive, requiring a re‑thinking of curricula and delivery methods.
EVIDENCE
He remarks that the question of a suitable education system will be answered in the coming five years, likening the present situation to early internet navigation where solutions are still being discovered [415-419].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for a rapid overhaul of the education system to match AI‑driven realities are echoed in the Trusted AI keynote’s remarks on re‑thinking curricula and in broader analyses of AI’s impact on schooling [S1, S9].
MAJOR DISCUSSION POINT
Education and Skill Development for the AI Era
Argument 16
AI is inherently inclusive (native‑language support, low entry barrier) and can act as a “super‑power” that narrows the rich‑poor gap (Vijay Shekhar Sharma)
EXPLANATION
Sharma posits that AI democratizes capability, allowing anyone to leverage sophisticated tools, thereby reducing economic inequality.
EVIDENCE
He likens AI to a super-car that anyone can ride, stating that it enables poorer individuals to compete with richer ones, and argues that AI is not a technology of the rich alone [339-342].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inclusive potential of AI is highlighted in the ‘Great Equaliser’ discussion and in commentary that AI is not solely a technology of the rich, emphasizing its capacity to level economic disparities [S10, S15, S14].
MAJOR DISCUSSION POINT
Risks, Bias, and Inequality Concerns
AGREED WITH
Harinder Takhar
DISAGREED WITH
Audience
Argument 17
Risks include over‑reliance, potential erroneous actions, and the danger of granting AI full control over payments (Vijay Shekhar Sharma)
EXPLANATION
Sharma warns that while AI offers benefits, giving it unrestricted authority—especially over financial transactions—poses serious safety concerns.
EVIDENCE
He explicitly states that AI should never have full control of a bank account to avoid disastrous mistakes, using the analogy of not handing a standing order to an untrusted entity [427-433]; earlier he also mentions generic risk of misuse [353-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risk‑focused analyses note the need for safeguards when AI handles financial transactions, as outlined in the Secure Finance risk‑based policy and related governance discussions [S13, S14].
MAJOR DISCUSSION POINT
Risks, Bias, and Inequality Concerns
Argument 18
Prioritize AI solutions for micro‑ and small merchants, delivering context‑specific insights (e.g., inventory, loan eligibility) (Vijay Shekhar Sharma)
EXPLANATION
Sharma proposes focusing AI deployment on the vast segment of micro‑ and small‑scale traders to boost their productivity and access to finance.
EVIDENCE
He describes scenarios where a vegetable vendor receives AI-driven advice on inventory based on weather, and where merchants can query AI about loan eligibility, illustrating the targeted use-cases for small businesses [389-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Targeting AI for micro‑ and small‑scale traders to boost productivity is supported by observations that AI can drive entrepreneurship and efficiency for small businesses in emerging economies [S9, S2].
MAJOR DISCUSSION POINT
Strategy for Diffusing AI Across India
AGREED WITH
Audience
Argument 19
AI access can be democratized through any internet‑connected device, similar to the smartphone rollout, making it a public good (Vijay Shekhar Sharma)
EXPLANATION
Sharma asserts that once a user has a device with internet connectivity, AI services become universally reachable without heavy local compute requirements.
EVIDENCE
He draws parallels with the diffusion of laptops and smartphones, notes that an AI sound box can run on minimal hardware, and emphasizes that any internet-connected terminal provides full AI capability [364-371].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The analogy between AI diffusion and the smartphone rollout is drawn in the Innovation to Impact session, which stresses that any internet‑connected device can deliver AI services [S2, S9].
MAJOR DISCUSSION POINT
Strategy for Diffusing AI Across India
Audience
4 arguments · 174 words per minute · 1285 words · 441 seconds
Argument 1
There should be tens of foundation models to prove to the world that Indians can do it and Indians are doing it in India (Audience)
EXPLANATION
The audience stresses the need for multiple indigenous foundation models rather than a single flagship, to showcase breadth of capability.
EVIDENCE
During the discussion, audience members state that having many foundation models will demonstrate Indian competence and prevent the perception that only one model exists [48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multiple indigenous foundation models are echoed in the Trusted AI keynote and the Innovation to Impact discussion, both urging a breadth of Indian LLMs to demonstrate capability and reduce reliance on foreign models [S2, S1].
MAJOR DISCUSSION POINT
Need for Indigenous Foundation Models and AI Infrastructure
AGREED WITH
Vijay Shekhar Sharma
Argument 2
AI can deliver data‑rich recommendations to farmers, optimizing crop choices and yields (Audience)
EXPLANATION
The audience highlights agriculture as a vertical where AI can process large datasets to give farmers actionable insights for better yields.
EVIDENCE
They mention that farmers generate massive visual data and need AI to interpret it for decisions such as crop selection, thereby improving productivity [85-89].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential for AI to boost agricultural productivity through data-rich recommendations is mentioned in broader sector-wide analyses of AI’s impact on emerging markets [S9].
MAJOR DISCUSSION POINT
Vertical AI Use Cases (Finance, Agriculture, Healthcare, etc.)
AGREED WITH
Vijay Shekhar Sharma
Argument 3
AI will act as an augmenting layer rather than a replacement, enabling agents to handle routine interactions (Audience)
EXPLANATION
The audience suggests AI will supplement existing institutions, allowing agents to manage routine tasks while the core institution remains.
EVIDENCE
A participant notes that AI will provide a layer that handles routine interactions, implying that banks or schools will still exist but be enhanced [92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The notion that AI augments rather than replaces existing institutions aligns with the ‘Great Equaliser’ report and Secure Finance policy, which describe AI as a layer that enhances services while core functions remain [S10, S13].
MAJOR DISCUSSION POINT
Future of Traditional Institutions (Banks, Schools)
AGREED WITH
Vijay Shekhar Sharma
Argument 4
Model bias originates from internet data dominance; Indian‑specific models are needed to reflect local truth and cultural nuance (Audience)
EXPLANATION
The audience points out that globally trained models inherit biases from predominantly Western internet content, necessitating Indian‑focused training data.
EVIDENCE
They explain that models learn from the frequency of statements online, leading to bias, and argue that Indian labs should first ingest Indian-origin content to create a more accurate “truth model” [199-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about bias from Western‑centric internet data and the need for Indian‑focused models are reiterated in the Innovation to Impact transcript and the Trusted AI keynote [S2, S1].
MAJOR DISCUSSION POINT
Risks, Bias, and Inequality Concerns
Agreements
Agreement Points
India should develop multiple indigenous foundation models to demonstrate capability and reduce reliance on foreign models
Speakers: Vijay Shekhar Sharma, Audience
India must build its own foundation models to move up the value chain and reduce dependence on a service‑only economy (Vijay Shekhar Sharma)
Indigenous models are essential to capture Indian cultural knowledge and mitigate bias inherent in globally trained models (Vijay Shekhar Sharma)
There should be tens of foundation models to prove to the world that Indians can do it and Indians are doing it in India (Audience)
Both Sharma and the audience stress that India needs to create several home-grown foundation models, not just a single flagship, to prove Indian AI competence and to embed local cultural knowledge, thereby avoiding dependence on Western-centric models [34-39][44-49][48-49].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with India’s digital sovereignty agenda calling for indigenous foundation models and sector-specific LLMs, as highlighted in multiple policy-level discussions (e.g., keynote remarks on building trusted AI and industrial innovation bridges) [S27][S28][S30].
AI can eliminate hidden biases in financial decision‑making and thereby increase financial inclusion
Speakers: Vijay Shekhar Sharma, Audience
AI can eliminate hidden biases in financial decisions such as loan approvals, fostering greater financial inclusion (Vijay Shekhar Sharma)
I think the best favor or the best value we can add is to remove biases in decision making that we already see in our financial system (Audience)
Sharma and the audience agree that AI-driven systems can detect and remove both known and unknown biases in credit and transaction decisions, leading to fairer outcomes for a broader population [98-104][97].
POLICY CONTEXT (KNOWLEDGE BASE)
Regulators see AI as a tool to reduce bias in credit scoring and expand access, reflected in banking sector risk-based AI policy and calls for democratizing financial services through AI-driven chatbots [S46][S50].
Prioritising AI solutions for micro‑ and small‑scale merchants and farmers to boost productivity and access to services
Speakers: Vijay Shekhar Sharma, Audience
Prioritize AI solutions for micro‑ and small merchants, delivering context‑specific insights (e.g., inventory, loan eligibility) (Vijay Shekhar Sharma)
AI can deliver data‑rich recommendations to farmers, optimizing crop choices and yields (Audience)
Both parties highlight the need to focus AI on the vast segment of small traders and agricultural producers, providing them with actionable insights such as weather-based inventory advice or crop-selection recommendations [389-401][85-89].
POLICY CONTEXT (KNOWLEDGE BASE)
Prioritizing AI for micro-enterprises and smallholder farmers is echoed in agritech initiatives and ‘small AI’ strategies that stress low-resource solutions for the Global South [S38][S39][S40][S37].
AI is an inclusive technology that can narrow the rich‑poor gap by providing low‑entry‑barrier, native‑language tools
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI is inherently inclusive (native‑language support, low entry barrier) and can act as a “super‑power” that narrows the rich‑poor gap (Vijay Shekhar Sharma)
it allows you to have more access and more personalized access… across finance, healthcare, education (Harinder Takhar)
Sharma’s claim that AI democratises capability and Takhar’s observation that AI enables more personalised access across key sectors converge on the view that AI can act as a leveller of inequality [339-342][165-168].
POLICY CONTEXT (KNOWLEDGE BASE)
The inclusive potential of AI, especially native-language interfaces, is documented in inclusive AI dialogues and low-resource conversational AI projects aiming to bridge the digital divide [S35][S36][S34][S50].
Future applications will be built around AI agents that interact directly with other agents, moving beyond traditional UI paradigms
Speakers: Vijay Shekhar Sharma, Audience
AI agents will communicate directly with other agents (e.g., Uber), removing the need for manual log‑ins and enabling seamless service orchestration (Vijay Shekhar Sharma) AI will act as an augmenting layer rather than a replacement, enabling agents to handle routine interactions (Audience)
Both Vijay and the audience envision an “agent-first” future where conversational agents replace icon-centric interfaces and negotiate directly with other agents, streamlining services [227-236][92].
Similar Viewpoints
Both stress the strategic necessity of multiple indigenous foundation models for India’s AI leadership [34-39][44-49][48-49].
Speakers: Vijay Shekhar Sharma, Audience
India must build its own foundation models… There should be tens of foundation models…
Both agree that AI can serve as a tool to eradicate bias in financial services, enhancing inclusion [98-104][97].
Speakers: Vijay Shekhar Sharma, Audience
AI can eliminate hidden biases in financial decisions… I think the best value we can add is to remove biases in decision making…
Both advocate targeting AI at the grassroots economic actors—small traders and farmers—to improve productivity and access to finance [389-401][85-89].
Speakers: Vijay Shekhar Sharma, Audience
Prioritize AI solutions for micro‑ and small merchants… AI can deliver data‑rich recommendations to farmers…
Both see AI as a democratizing force that expands personalised access across sectors, potentially reducing inequality [339-342][165-168].
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI is inherently inclusive… it allows you to have more access and more personalized access…
Both predict a shift toward agent‑first architectures where AI agents handle routine tasks and interact with each other, supplanting traditional UI models [227-236][92].
Speakers: Vijay Shekhar Sharma, Audience
AI agents will communicate directly with other agents… AI will act as an augmenting layer rather than a replacement…
Unexpected Consensus
Agreement that AI can act as a leveller of inequality despite concerns about concentration of power
Speakers: Vijay Shekhar Sharma, Harinder Takhar
AI is inherently inclusive… (Vijay Shekhar Sharma) it allows you to have more access and more personalized access… (Harinder Takhar)
Harinder, primarily a moderator, echoed Vijay’s inclusive narrative, highlighting that AI’s native-language and low-barrier nature can broaden personalised access across finance, health and education, an alignment not explicitly anticipated from his role [339-342][165-168].
POLICY CONTEXT (KNOWLEDGE BASE)
While AI can level disparities, scholars warn about data concentration and capital concentration that may reinforce inequality, a tension noted in several policy analyses [S32][S33][S34][S43].
Overall Assessment

The discussion shows strong convergence among speakers on four core themes: (1) the strategic imperative for multiple indigenous foundation models; (2) AI’s capacity to remove bias and expand financial inclusion; (3) targeting AI for micro‑merchants and farmers; (4) an agent‑first, inclusive future where AI narrows inequality. These points are reinforced across Vijay’s detailed arguments and audience/Harinder reflections, indicating a high degree of consensus.

High consensus – the participants largely reinforce each other’s visions, suggesting a unified policy direction for India’s AI agenda that prioritises indigenous model development, inclusive deployment, and agent‑centric design.

Differences
Different Viewpoints
Scale of investment required to build foundation models
Speakers: Vijay Shekhar Sharma, Harinder Takhar
The cost barrier (₹10,000‑₹25,000 crore) is less important than having skilled talent and cost‑effective training methods (Vijay Shekhar Sharma) If you don’t have ₹10,000 crore, why are you even in this place? (Harinder Takhar)
Vijay Shekhar Sharma downplays the need for massive capital, arguing that the decisive factor is talent and a viable business model rather than the size of the budget [57-66]. Harinder Takhar, echoing a common industry refrain, suggests that without a fund on the order of ₹10,000 crore, participation in foundation-model development is unrealistic [60-61]. This reflects a clash over whether large financial commitments are a prerequisite or a secondary concern.
POLICY CONTEXT (KNOWLEDGE BASE)
The required scale of investment for building foundation models is a recurring theme in policy briefs on sovereign AI capability, emphasizing substantial public and private funding commitments [S27][S30].
Whether AI will widen or narrow socioeconomic inequality
Speakers: Vijay Shekhar Sharma, Audience
AI is inherently inclusive (native‑language support, low entry barrier) and can act as a “super‑power” that narrows the rich‑poor gap (Vijay Shekhar Sharma) Will AI increase inequality by concentrating power in a few hands? (Audience)
Vijay Shekhar Sharma argues that AI democratises capability, allowing poorer users to compete with richer ones and thus reducing inequality [339-342]. An audience member raises the counter-concern that rapid AI diffusion might instead concentrate power and exacerbate inequality [338-342]. The two positions diverge on AI’s net distributional impact.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over AI’s impact on socioeconomic inequality features in academic and policy forums, with arguments that AI can both narrow and widen gaps depending on deployment and governance [S33][S34][S42][S43].
Extent to which AI should replace traditional institutions (banks, schools)
Speakers: Vijay Shekhar Sharma, Audience
Banks will remain essential for custodial and credit functions; AI will enhance front‑end services but not replace core banking roles (Vijay Shekhar Sharma) Will the whole banking system become redundant? (Audience)
The audience questions whether banks (and later schools) will become obsolete in an AI-driven world [125-131]. Vijay Shekhar Sharma counters that core banking activities (deposit safety and credit provision) will persist, with AI only augmenting user interfaces [138-151]. This reflects a disagreement on the depth of AI-driven disruption to legacy institutions.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on AI replacing traditional institutions reference banking AI adoption guidelines and education sector AI integration studies, highlighting cautious approaches to substitution [S46][S33][S51].
Unexpected Differences
Risk tolerance for AI control over payments
Speakers: Vijay Shekhar Sharma, Audience
Risks include over‑reliance and the danger of granting AI full control over payments (Vijay Shekhar Sharma) What is the one thing that will never allow AI to perform in the payments domain? (Audience)
While the audience seeks a concrete technical limitation for AI in payments, Vijay answers with a broad principle (never give AI full autonomous control of bank accounts), highlighting a mismatch between a specific technical query and a policy-level response [425-433][353-354].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk tolerance for AI-controlled payments is addressed in sector-specific risk-based AI policies and broader risk-tolerance assessments that differentiate between enterprise and government expectations [S46][S48].
Overall Assessment

The discussion reveals moderate disagreement primarily around the scale of investment needed for indigenous AI models, the distributional impact of AI on inequality, and the depth of AI‑driven disruption to legacy institutions such as banks and schools. While participants converge on the overarching goal of building Indian AI capability and using AI for sectoral inclusion, they diverge on how much capital is essential, whether AI will level or polarise society, and how far AI should replace traditional structures.

Medium – the disagreements are substantive but do not fracture the overall consensus on AI’s strategic importance for India. They signal the need for clearer policy guidance on funding models, inclusive design, and the scope of AI integration with existing institutions.

Partial Agreements
Both parties share the goal of creating indigenous foundation models to showcase Indian AI capability, but Vijay emphasises the strategic necessity for value‑chain migration while the audience stresses the symbolic importance of multiple models [34-39][48-49].
Speakers: Vijay Shekhar Sharma, Audience
India must build its own foundation models to move up the value chain (Vijay Shekhar Sharma) There should be tens of foundation models to prove Indian capability (Audience)
Both agree on leveraging AI for sector‑specific inclusion (finance, agriculture), differing only in the vertical focus—Vijay highlights bias removal in lending, while the audience points to agronomic decision support [98-104][85-89].
Speakers: Vijay Shekhar Sharma, Audience
AI can eliminate hidden biases in financial decisions, fostering greater financial inclusion (Vijay Shekhar Sharma) AI should provide data‑rich recommendations to farmers and other verticals (Audience)
Takeaways
Key takeaways
– AI will act as a major productivity and economic growth engine for India, enabling small entrepreneurs to scale and boosting GDP per capita.
– India must develop its own foundation models and AI infrastructure to move up the value chain, capture cultural knowledge, and reduce bias from globally trained models.
– Multiple foundation models are needed to serve diverse verticals (finance, agriculture, healthcare, etc.) rather than relying on a single flagship model.
– AI can eliminate hidden biases in financial decisions, provide personalized financial advice for low‑income users, and deliver domain‑specific insights for farmers and patients.
– Traditional institutions such as banks and schools will not disappear; AI will augment their services while core functions (custody, credit, social learning) remain essential.
– Future applications will be built around “agent‑first” conversational interfaces, with agents communicating directly with other agents (e.g., Uber) and reducing reliance on icon‑based UI.
– Education should shift from rote programming to AI‑augmented problem solving; students from any background can succeed by leveraging AI tools.
– AI is inherently inclusive (native‑language support, low entry barrier) and can narrow the rich‑poor gap, but risks include over‑reliance, erroneous actions, and giving AI full control over payments.
– Diffusion strategy should prioritize micro‑ and small merchants, leveraging any internet‑connected device as a terminal, similar to the smartphone rollout.
Resolutions and action items
– Commit to building Indian foundation models (multiple) and encourage other teams to develop their own models.
– Support and scale Sarvam’s foundation model initiative as a proof‑of‑concept.
– Collaborate with industry stakeholders and regulators to obtain domain data and create vertical AI solutions (finance, agri, health).
– Develop and deploy AI‑first, agent‑centric interfaces, moving away from icon‑centric UI designs.
– Create and distribute AI tools (e.g., AI sound box) for small and micro merchants to provide real‑time business insights.
– Promote AI literacy across all education levels, emphasizing AI‑augmented workflows over pure programming skills.
– Establish guidelines to prevent AI from having full autonomous control over payment accounts.
Unresolved issues
– Exact funding mechanisms and scale needed for large‑scale foundation model training (whether billions of rupees are required).
– Detailed governance framework for bias mitigation and validation of Indian‑specific models.
– Implementation roadmap for integrating AI agents across existing platforms (e.g., authentication, payment flows).
– Specific reforms needed in the education system and the timeline for their rollout.
– Comprehensive risk management strategies for AI‑driven decision making in finance and healthcare.
– How to ensure equitable access and prevent new forms of inequality as AI capabilities expand.
Suggested compromises
– Treat the cost barrier (₹10,000‑₹25,000 crore) as secondary to talent and efficient training methods, emphasizing skill over massive capital outlay.
– Encourage multiple foundation models rather than a single national model to foster competition and avoid a monopoly.
– Maintain core functions of banks and schools while allowing AI to augment front‑end services, preserving social and custodial roles.
– Balance AI‑driven automation with human oversight, especially in high‑risk domains like payments and medical advice.
Thought Provoking Comments
I see AI not as job reduction but as an opportunity for India to become a global AI‑dominant nation, boosting productivity and GDP growth.
Reframes the dominant narrative of AI as a threat to jobs into a growth engine for the country, setting an optimistic macro‑economic frame for the whole discussion.
Established the overarching theme of the session; subsequent speakers referenced productivity, GDP, and India’s ‘bull case’ scenario, steering the conversation toward opportunities rather than fears.
Speaker: Vijay Shekhar Sharma
India has to build its own foundation model because otherwise our cultural knowledge and biases will be lost; we need an Indian‑made model to capture our history, language and nuance.
Introduces the strategic imperative of indigenous AI models, linking technical work to cultural preservation and bias mitigation—a perspective not previously raised.
Prompted a deeper dive into model bias, the need for vertical (domain‑specific) models, and sparked audience questions about building foundation models versus using foreign ones.
Speaker: Vijay Shekhar Sharma
The best value we can add is to remove biases in decision making that we already see in our financial system.
Identifies a concrete, socially impactful use‑case of AI—bias reduction in finance—moving the discussion from abstract potential to tangible benefit.
Led Vijay to elaborate on loan‑approval bias, illustrating how AI can increase financial inclusion; the thread expanded to cover broader inclusion in finance, healthcare and education.
Speaker: Audience member (financial bias comment)
I used ChatGPT to check my mother’s medication schedule; the model suggested moving a dose to avoid loss of appetite, and the doctor approved the change.
Provides a personal, real‑world example of AI augmenting medical decision‑making, demonstrating practical utility and trust in AI‑generated advice.
Shifted the conversation toward healthcare applications, reinforcing the theme of AI as an assistive tool rather than a replacement, and inspired further questions about AI in medicine.
Speaker: Vijay Shekhar Sharma
Banks and schools will not become redundant; their core functions remain while the interface changes (e.g., agents, apps, AI chat).
Challenges the common fear that AI will eliminate established institutions, offering a nuanced view that separates core services from delivery mechanisms.
Calmed audience anxieties, redirected dialogue to how AI can enhance existing systems, and set up later discussion on agent‑first interfaces and UI redesign.
Speaker: Vijay Shekhar Sharma
AI is an inclusive technology that will reduce the gap between rich and poor; it is a super‑power that anyone can ride.
Directly addresses concerns about inequality, positioning AI as a democratizing force rather than a tool for the elite.
Prompted participants to consider AI’s role in social equity, leading to follow‑up questions on distribution, public‑good models, and the risk of concentration of power.
Speaker: Vijay Shekhar Sharma
We should move to agent‑first interfaces: agents will talk to agents, eliminating the need for human logins and enabling seamless transactions.
Introduces a forward‑looking interaction paradigm that reimagines the entire digital ecosystem, moving beyond UI/UX to autonomous agent communication.
Generated a cascade of questions about authentication, token payments, and the future design of apps; it became a pivot point toward discussing the architecture of AI‑driven services.
Speaker: Vijay Shekhar Sharma
Leverage AI as an extension of your education; be curious, ask the model to fill gaps, and use it to become super‑productive regardless of your background.
Offers actionable advice for students, especially from tier‑3/4 areas, linking AI adoption to personal empowerment and future employability.
Shifted the discussion toward education reform, inspired audience queries about how under‑served students can enter AI, and reinforced the inclusive narrative.
Speaker: Vijay Shekhar Sharma
I completely agree on the school front; AI allows more personalized access—your doctor is your doctor, your teacher is your teacher—making a radical impact.
Synthesizes earlier points into a concise statement about personalization across sectors, highlighting AI’s transformative potential.
Validated Vijay’s earlier claims, reinforced the personalization theme, and helped transition the conversation toward sector‑specific examples (finance, healthcare, education).
Speaker: Harinder Takhar
Overall Assessment

The discussion was driven by a handful of pivotal remarks that repeatedly shifted the focus from abstract AI hype to concrete, India‑centric strategies and societal impacts. Vijay Shekhar Sharma’s opening framing of AI as a growth catalyst set a positive tone, while his insistence on building indigenous foundation models anchored the technical conversation in cultural and bias considerations. Audience inputs about bias removal and financial inclusion introduced tangible use‑cases, prompting deeper exploration of AI’s role in finance, health, and education. The introduction of ‘agent‑first’ interfaces reoriented the dialogue toward future system architecture, and the repeated emphasis on inclusivity—both in terms of socioeconomic equity and educational access—served to counter fears of AI‑driven inequality. Collectively, these comments steered the conversation from speculative concerns to actionable pathways for India’s AI ecosystem, highlighting both opportunities and responsible implementation.

Follow-up Questions
Should India build its own foundation model, chips, and applications?
Seeks strategic direction on whether India should develop indigenous AI infrastructure versus relying on external solutions.
Speaker: Harinder Takhar
Is a large capital (e.g., 10,000 crore) necessary to develop AI models in India?
Questions the perceived high financial barrier for AI model creation and its impact on participation.
Speaker: Harinder Takhar
Should India develop an Indian‑specific foundation model rather than using global models?
Requests clarification on the need for culturally and contextually tailored AI models for India.
Speaker: Audience
Will the stock brokerage industry become AI‑native with agents as the primary interface, or will AI remain just a feature?
Explores the depth of AI integration in financial services and its implications for user experience.
Speaker: Audience
What is the future for graduates in non‑technical fields (finance, HR, etc.) in an AI‑driven world?
Looks for guidance on career relevance and skill development for those outside core AI/tech domains.
Speaker: Audience
Will banks become redundant due to AI and digital interfaces?
Raises concern about the core role of traditional banking institutions in an AI‑enabled financial ecosystem.
Speaker: Audience
Will schools become redundant due to AI‑driven education?
Questions the long‑term relevance of conventional schooling in the face of AI‑based learning tools.
Speaker: Audience
What is the minimum effective strategy for tier‑3/4 students to excel in AI?
Seeks actionable steps for students from less‑privileged backgrounds to succeed in AI.
Speaker: Audience
How can we build effective LLMs for regulated industries that have limited data?
Looks for methods to develop high‑quality language models under data scarcity and regulatory constraints.
Speaker: Audience
Will AI act as a leveler in fintech, and how can we compete if peers also adopt AI/AGI?
Investigates AI’s potential to democratize fintech and competitive strategies in an AI‑saturated market.
Speaker: Audience
What is the one thing that should never allow AI to have full control over payments?
Identifies a critical safety boundary for AI in financial transaction processing.
Speaker: Audience
What are the potential risks of AI that are currently underestimated?
Calls for a deeper assessment of hidden or emerging hazards associated with AI deployment.
Speaker: Audience
How can AI agents be integrated into the existing human economy?
Seeks a framework for seamless interaction between autonomous AI agents and traditional economic actors.
Speaker: Audience
How can AI be distributed as a public good to ensure wide accessibility?
Highlights the challenge of making AI technology broadly available beyond early adopters.
Speaker: Vijay Shekhar Sharma
How can AI reduce inequality and become inclusive for all socioeconomic groups?
Explores AI’s role in bridging the wealth and opportunity gap across society.
Speaker: Audience
How can bias be removed from financial decision‑making using AI?
Looks for techniques to detect and eliminate hidden biases in lending and other financial processes.
Speaker: Audience
How can we ensure Indian AI models reflect cultural relevance and avoid bias from predominantly Western internet data?
Calls for research into curating Indian‑specific training data to produce culturally accurate models.
Speaker: Vijay Shekhar Sharma
How can vertical AI solutions (finance, agriculture, healthcare) be built effectively with limited industry data?
Seeks strategies for developing domain‑specific AI applications despite data constraints.
Speaker: Audience

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

National Disaster Management Authority


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by highlighting the rising frequency, intensity and complexity of disasters worldwide and the parallel surge in artificial-intelligence capabilities, framing the question of how India can develop AI-driven resilience models [1-5]. Moderators emphasized that the next frontier in disaster risk reduction is not merely better algorithms but the institutionalisation of AI within national resilience architectures [7].


The discussion then turned to policy reforms, with Mauritius’s Minister Avinash Ramtohul stressing that disasters now affect both the physical and virtual realms, including cyber-attacks, and that governance must bridge these domains [26-32]. He advocated creating digital-twin representations of critical infrastructure to enable emergency services to locate people and assets in real time, and argued that such digital maps should be accessible to authorised responders as part of reform [33-36]. Ramtohul also warned that fully automated decision-making can be dangerous, insisting on a “human-in-the-loop” approach for AI-based early-warning alerts, a stance reflected in Mauritius’s policy to require human-verified messages in its cell-broadcast system [45-55].
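Ramtohul’s human-in-the-loop policy amounts to a hard gate between model output and the broadcast channel. A minimal sketch of such a gate (all names here are hypothetical; the actual Mauritian cell-broadcast system is not public code):

```python
from dataclasses import dataclass

@dataclass
class Alert:
    hazard: str
    region: str
    message: str
    ai_confidence: float  # score from the early-warning model

def broadcast(alert: Alert, human_approved: bool) -> str:
    """Gate AI-generated alerts behind explicit human verification.

    No alert reaches the cell-broadcast channel on model output alone,
    however high the model's confidence.
    """
    if not human_approved:
        return f"HELD for review: {alert.hazard} warning for {alert.region}"
    return f"BROADCAST: {alert.message}"

draft = Alert("flood", "Port Louis", "Flood warning: move to higher ground.", 0.97)
print(broadcast(draft, human_approved=False))  # held despite 0.97 confidence
print(broadcast(draft, human_approved=True))
```

The point of the sketch is that approval is a separate, explicit input: the confidence score alone never triggers a broadcast.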


Beth Woodhams from the UK Met Office described the agency’s strategy of developing hybrid weather models that blend physics-based forecasts with machine-learning outputs, proceeding incrementally while co-developing benchmarks with partners such as India [65-73]. She noted that building trust requires aligning evaluation metrics with user needs and that joint development of both models and their testing frameworks is essential for operational adoption [72-74].


Som Satsangi highlighted India’s current shortfall in high-performance computing, noting that the country’s supercomputers total roughly 40-100 petaflops compared with the exaflop-scale systems used in the United States for real-time AI analytics [92-100]. He argued that closing this gap in core infrastructure, whose systems can cost hundreds of millions of dollars each, requires public-private partnerships, and that power and cooling requirements are equally critical for deploying large-scale AI clusters [106-124].
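The scale of that shortfall is simple arithmetic on the figures cited in the session (40-100 petaflops in India versus 1-2 exaflops in the United States):

```python
PETA = 1e15  # floating-point operations per second
EXA = 1e18

india_low, india_high = 40 * PETA, 100 * PETA
us_low, us_high = 1 * EXA, 2 * EXA

# Best case: smallest US system vs largest Indian capacity -> 10x.
# Worst case: largest US system vs smallest Indian capacity -> 50x.
min_gap = us_low / india_high
max_gap = us_high / india_low
print(f"capacity gap: {min_gap:.0f}x to {max_gap:.0f}x")  # 10x to 50x
```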


Pankaj Shukla outlined a five-layer AI architecture (infrastructure, operating system, platform services, models and applications) and stressed the need for a central “living intelligence” that can be synchronised with edge or air-gapped devices to deliver actionable insights even in disconnected, high-risk settings [136-144][145-152]. He explained that today’s hyperscaler clouds can be extended on-premises in a zero-trust fashion, allowing rugged devices to run distilled models locally for rapid response [150-152].
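The central-plus-edge pattern Shukla describes reduces to a simple routing rule: query the central model when connectivity allows, and fall back to the locally deployed distilled model when the device is offline or air-gapped. The model stand-ins below are hypothetical placeholders, not any vendor’s API:

```python
from typing import Callable

def assess(query: str, online: bool,
           central_model: Callable[[str], str],
           edge_model: Callable[[str], str]) -> str:
    """Route to the central 'living intelligence' when connected,
    otherwise to the distilled model on the rugged edge device."""
    return central_model(query) if online else edge_model(query)

# Hypothetical stand-ins for the two model tiers:
central = lambda q: f"central-model answer to: {q}"
edge = lambda q: f"edge-model answer to: {q}"

print(assess("flood risk near the dam?", online=False,
             central_model=central, edge_model=edge))
```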


Startup founder Nikhilesh Kumar added that effective disaster-risk platforms must integrate four layers (modeling, asset, people and workflow) and that AI can transform scattered, unstructured data from satellites, social media and agency records into near-real-time nowcasts, such as the dam-level forecasts demonstrated for thousands of Indian reservoirs [155-166]. He further pointed out that AI can extract hazard information from news and other unstructured sources to build location-specific risk databases that support insurance and mitigation planning [169-171].
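The unstructured-to-structured step Kumar describes can be illustrated with a deliberately simple keyword match; a production system would use named-entity recognition or LLM extraction, so treat this purely as a sketch of the output data shape:

```python
import re

HAZARD_TERMS = {"flood", "cyclone", "landslide", "drought", "heatwave"}

def extract_hazard_records(headline: str, known_places: set) -> list:
    """Turn an unstructured headline into structured (hazard, location)
    rows suitable for a location-specific risk database."""
    tokens = re.findall(r"[A-Za-z]+", headline)
    hazards = {t.lower() for t in tokens} & HAZARD_TERMS
    places = set(tokens) & known_places
    return [{"hazard": h, "location": p}
            for h in sorted(hazards) for p in sorted(places)]

rows = extract_hazard_records(
    "Flood alert issued for Guwahati after heavy rain",
    known_places={"Guwahati", "Pune"},
)
print(rows)  # [{'hazard': 'flood', 'location': 'Guwahati'}]
```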


Dr. Mrutyunjay Mohapatra reiterated the global “early warning for all” agenda, emphasizing that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity, and suggested low-cost GPU-based box models as a viable solution for resource-constrained nations [185-194][203-207]. Finally, Dr. Krishna Vatsa described India’s expanding observational networks and the pressing need to develop processing capacity and clear data-center architectures to turn the growing data streams into reliable, citizen-focused early warnings, concluding that coordinated investment, partnership and governance are essential to realise AI-enabled disaster resilience at scale [220-229][230-247].


Keypoints

Major discussion points


Policy & governance: bridging the physical and virtual worlds – The Minister of IT (Mauritius) emphasized that disaster risk must cover both physical hazards and cyber-threats, calling for a “digital twin” that links real-world assets to virtual models and insisting that critical AI-driven alerts remain human-verified and “human-in-the-loop” to avoid fully automated decisions that could cause harm[26-33][34-42][45-47][52-55].


Hybrid AI-physical modelling and co-development – The UK Met Office highlighted that AI will augment, not replace, traditional physics-based weather models through blended or hybrid approaches, and stressed the need for joint benchmarking and evaluation frameworks with partners (including low-resource countries) to build trust in AI-generated forecasts[65-71][72-74].


National-scale infrastructure and resource constraints – Hewlett Packard Enterprise’s Som Satsangi pointed out that India’s current super-computing capacity (≈ 28 petaflops) is far below the exaflop-scale systems used elsewhere, making the cost, power, and cooling requirements for AI-driven early-warning platforms a major barrier; he called for public-private partnerships to acquire the necessary sovereign data infrastructure[92-100][106-108][109-126].


Cloud-edge architecture for low-connectivity, high-risk environments – Google Cloud’s Pankaj Shukla described a layered architecture (infrastructure, operating system, services, models) that creates a central “living intelligence” while enabling edge-deployed, zero-trust, rugged devices to operate even when disconnected, ensuring real-time analytics and safe dissemination of warnings[136-152].


Start-up driven DPIs/DPGs and workflow translation – Vassar Labs’ Nikhilesh Kumar outlined four AI-enabled layers (modeling, asset/people, workflow, DPI/DPG) and illustrated how startups can integrate scattered agency data, generate near-real-time nowcasts for millions of water bodies, and convert unstructured news into structured risk datasets that feed insurance and mitigation systems[155-167][168-172].


Overall purpose / goal of the discussion


The panel was convened to explore how Artificial Intelligence can be institutionalized within national disaster risk reduction (DRR) frameworks, especially for India, by examining policy reforms, technical integration, infrastructure needs, and collaborative models (government, industry, academia, and startups) that together can build scalable, trustworthy, and inclusive early-warning and resilience systems.


Tone of the discussion


– The conversation began with a formal, forward-looking tone, framing AI as the “next frontier” in disaster governance.


– It then shifted to a technical and pragmatic tone, with speakers detailing concrete challenges (cyber-security, super-computing gaps, data quality) and realistic constraints.


– As the dialogue progressed, the tone became collaborative and solution-oriented, emphasizing partnerships, co-development, and actionable road-maps.


– The session concluded on a constructive, call-to-action tone, urging coordinated effort across agencies and sectors to translate AI advances into operational resilience.


Speakers

Moderator – Role: Moderator of the panel discussion.


Avinash Ramtohul – Minister for Information Technology, Communication and Innovation, Republic of Mauritius [S9]; expertise: AI-enabled early warning systems, digital twins, cybersecurity, policy reforms for disaster risk governance.


Beth Woodhams – Senior Manager, UK Met Office [S1]; expertise: disaster risk reduction, weather forecasting, integration of machine-learning models with physical weather models, international co-development of AI-driven meteorological services.


Som Satsangi – Former SVP & Managing Director, Hewlett Packard Enterprise India [S12]; expertise: AI deployment in geospatial and climate analytics, sovereign data architectures, large-scale infrastructure for real-time disaster alerts.


Pankaj Shukla – Head of Customer Engineering, Google Cloud India [S14]; expertise: AI-driven analytics, hazard mapping, predictive analytics, cloud and edge infrastructure for low-connectivity, high-risk environments.


Nikhilesh Kumar – CEO & Co-founder, Vassar Labs [S11]; expertise: AI solutions for disaster risk reduction, modeling layers, data integration, startup-driven platforms for population-scale early warning.


Dr. Mrutyunjay Mohapatra – Director General, India Meteorological Department (IMD) [S7]; expertise: AI-enhanced weather forecasting, hybrid physical-AI models, early warning systems at national scale.


Dr. Krishna Vatsa – Head of Department, National Disaster Management Authority (NDMA) [S3]; expertise: data integration, AI for early warning precision, building scalable disaster-risk information systems.


Additional speakers:


Mr. Martin – Mentioned in the discussion but did not speak; no role or expertise provided.


Full session report: comprehensive analysis and detailed insights

The panel opened by underscoring that disasters are becoming more frequent, intense and complex worldwide, while advances in artificial-intelligence (AI) are occurring at an unprecedented pace[1-5]. The moderator framed the central challenge: “How can India develop an AI-enabled model for resilience?” and argued that the next frontier in disaster risk reduction (DRR) is the institutionalisation of AI within national resilience architectures[1-5][7].


Policy and governance – bridging the physical and virtual worlds


Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber-attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones[30-32]. He advocated the creation of digital-twin representations of critical infrastructure so that emergency services can locate people and assets in real time, and insisted that these virtual maps be accessible to authorised responders (fire, medical, etc.) as part of a broader reform agenda[34-36]. Crucially, he warned against fully automated decision-making, calling for a human-in-the-loop approach and for all early-warning messages to be human-verified before broadcast, a policy already being piloted in Mauritius’s cell-broadcast system[45-55].
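The human-verification policy described above amounts to a gate: no AI-drafted warning reaches the broadcast channel without a named reviewer's sign-off. A minimal sketch of such a gate follows; the class and function names are illustrative, not taken from any system discussed in the session.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DraftAlert:
    """An AI-generated early-warning message awaiting human review."""
    hazard: str
    area: str
    text: str
    approved_by: Optional[str] = None  # set only after human sign-off

def approve(alert: DraftAlert, reviewer: str) -> DraftAlert:
    """Record the human reviewer who verified the message."""
    alert.approved_by = reviewer
    return alert

def broadcast(alert: DraftAlert) -> str:
    """Refuse to transmit any alert that has not been human-verified."""
    if alert.approved_by is None:
        raise PermissionError("alert not human-verified; broadcast blocked")
    return f"[{alert.hazard.upper()} / {alert.area}] {alert.text}"
```

The point of the sketch is structural: full automation is impossible by construction, because `broadcast` fails for any message a human has not explicitly approved.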


Hybrid AI-physical modelling – the Met Office perspective


Beth Woodhams explained that the UK Met Office is developing machine-learning weather models that will augment, not replace, physics-based forecasts. The agency plans a gradual rollout in which AI outputs are blended with traditional model results, creating hybrid forecasts that increase confidence as the AI component matures[65-71]. To build trust, the Met Office is co-developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization, emphasizing that co-development and joint benchmarking are essential to operationalise AI-driven forecasts in low-resource settings[71-74].
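The gradual blending described here can be illustrated with a toy linear blend in which the machine-learning weight is raised step by step as confidence grows; the function and weights below are hypothetical, not the Met Office's actual scheme.

```python
def blended_forecast(physics_value: float, ml_value: float, ml_weight: float) -> float:
    """Blend one forecast variable (e.g. rainfall in mm) from a physics-based
    model and a machine-learning model. ml_weight starts near 0 and is
    increased incrementally as trust in the ML component is established."""
    if not 0.0 <= ml_weight <= 1.0:
        raise ValueError("ml_weight must lie in [0, 1]")
    return (1.0 - ml_weight) * physics_value + ml_weight * ml_value
```

For example, with a 25% ML weight, a physics forecast of 10.0 mm and an ML forecast of 14.0 mm blend to 11.0 mm; setting the weight to 0 recovers the pure physics forecast.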


Computational infrastructure – India’s current shortfall


Som Satsangi highlighted that India’s super-computing capacity (≈ 40 petaflops) is far below the exaflop-scale systems (1-2 exaflops) used in the United States for real-time AI analytics[92-100]. He noted that each exaflop-class machine costs US$400 million to $1 billion and requires massive power and water-cooling resources, making it unaffordable for a single government entity[106-108][109-126]. Consequently, he called for public-private partnerships to acquire and operate the necessary sovereign data infrastructure[106-108][109-110].
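The scale of this gap reduces to simple unit arithmetic (1 exaflop = 1,000 petaflops). The sketch below uses the figures quoted on the panel, roughly 40 petaflops nationally versus single US systems of 1-2 exaflops, purely to make the multiplier concrete.

```python
PETAFLOPS_PER_EXAFLOP = 1_000  # 1 exaflop = 1,000 petaflops

def growth_factor(current_petaflops: float, target_exaflops: float) -> float:
    """How many times current capacity must multiply to match a target
    expressed in exaflops."""
    return target_exaflops * PETAFLOPS_PER_EXAFLOP / current_petaflops
```

On the panel's numbers, matching a single 1-exaflop system means a 25-fold increase over ~40 petaflops, and a 2-exaflop system a 50-fold increase.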


Interoperability, sovereign data and governance


Satsangi stressed that AI systems must be built on sovereign-compatible data architectures and that clear governance mechanisms are needed for life-saving decisions, especially in a federal context such as India where multiple state and central agencies must interoperate[80-81][46-55]. The moderator asked about “standards of explainability” for AI-driven alerts, but the panel did not reach a concrete consensus on specific metrics[7].


Cloud-edge architecture for low-connectivity, high-risk settings


Pankaj Shukla described a five-layer AI stack – infrastructure, operating system, platform services, models and applications – that creates a central “living intelligence” while allowing distilled models to run on edge or air-gapped rugged devices. This architecture enables real-time analytics even when connectivity is lost, supports zero-trust security, and mitigates misinformation by delivering verified alerts directly to field teams[136-144][148-152].
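The edge behaviour in this architecture, falling back to an on-device distilled model when the link to the central “living intelligence” drops, can be sketched as follows. The classes and names are illustrative stand-ins, not Google Cloud APIs.

```python
from typing import Callable, Optional

class EdgeNode:
    """A rugged field device holding a small distilled model, with optional
    connectivity to a central model."""

    def __init__(self, local_model: Callable[[str], str]) -> None:
        self.local_model = local_model
        self.connected = True

    def infer(self, observation: str,
              central_model: Optional[Callable[[str], str]] = None) -> str:
        # Prefer the central "living intelligence" while the link holds;
        # fall back to the on-device distilled model when it is lost.
        if self.connected and central_model is not None:
            return central_model(observation)
        return self.local_model(observation)
```

The design choice this illustrates is that loss of connectivity degrades capability gracefully rather than disabling the node, which is what makes the architecture usable in low-connectivity, high-risk settings.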


Start-up contribution – DPIs and DPGs


Nikhilesh Kumar outlined a four-layer framework (modeling, asset/people, workflow, DPI/DPG) that startups can use to turn fragmented, unstructured data into actionable insights, with digital public infrastructure (DPI) and digital public goods (DPGs) serving as the delivery mechanisms. He gave the example of nowcasting for nearly one million water bodies by fusing 30-minute satellite imagery and radar data, then translating the output into hydraulic forecasts for thousands of dams in real time[155-166]. He also demonstrated how AI can extract hazard-specific information from news and social media to build location-specific risk databases that support insurance and mitigation planning[169-172].
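The news-mining step can be caricatured with a rule-based extractor that turns an unstructured headline into a structured hazard record. A real system would use an NER or LLM pipeline; the hazard list, regex, and record shape here are purely illustrative.

```python
import re
from typing import Optional

HAZARDS = ("flood", "landslide", "cyclone", "earthquake")

def extract_hazard_record(headline: str) -> Optional[dict]:
    """Pull a hazard type and a capitalised place name out of a headline,
    returning a row suitable for a location-specific risk database."""
    lowered = headline.lower()
    for hazard in HAZARDS:
        if hazard in lowered:
            place = re.search(r"\bin ([A-Z][a-z]+)", headline)
            return {"hazard": hazard,
                    "location": place.group(1) if place else None}
    return None  # no known hazard mentioned
```

Rows accumulated this way, hazard by hazard and location by location, are what would give the insurance and mitigation use cases their frequency and intensity statistics.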


National implementation – hybrid models and low-cost alternatives


Dr Mrutyunjay Mohapatra linked the discussion to the UN “Early Warnings for All” agenda, noting that hybrid AI-physical models improve forecast precision but are limited by data quality and computing capacity[185-194]. He pointed out that only about 5% of satellite data is currently usable, and that improving data quality with AI benefits both AI and physics models[203-207]. For resource-constrained contexts, he promoted a GPU-based “box-model” that can deliver acceptable forecasts without the need for exaflop supercomputers, offering an affordable pathway for small island states and low-income regions[207-208].


Observational networks and processing bottlenecks


Dr Krishna Vatsa (NDMA) described India’s ambitious plan to quadruple seismometers, install automated weather stations in every village and expand landslide sensors, thereby generating massive new data streams[226-229]. However, he highlighted a critical gap: the lack of processing capacity and a clear data-centre architecture to turn this raw data into citizen-focused early warnings[230-247]. He called for a coordinated roadmap that incrementally builds AI-processing capability while ensuring that data centres meaningfully serve early-warning agencies[238-247].


Areas of consensus


All participants concurred that AI should be embedded within a coherent national resilience framework, that human oversight is essential for life-saving alerts, and that hybrid AI-physics models are the preferred technical approach. They also agreed on the necessity of large-scale computing resources, interoperable sovereign data architectures, and strong cross-sector collaboration (government, industry, academia, startups)[7][46-55][65-71][80-81][136-144][155-162].


Key points of disagreement


A tension emerged between the emphasis on sovereign-compatible data architectures (Satsangi) and the Met Office’s advocacy for open co-development and shared benchmarking (Woodhams), reflecting differing priorities in balancing security with collaborative innovation[80-81][71-74]. A second divergence concerned computational scale: Satsangi argued that India must acquire exaflop-scale supercomputers to support real-time AI alerts, whereas Mohapatra suggested that GPU-based box models provide a viable, low-cost alternative for nations lacking such resources[92-100][207-208].


Conclusions and actionable recommendations


The panel distilled several key takeaways: AI must be institutionalised, human-in-the-loop, and blended with physics models; India needs to close a substantial computing-capacity gap while exploring affordable GPU solutions; a five-layer cloud-edge architecture should underpin a central living intelligence; startups can deliver DPIs/DPGs that integrate hazard, asset, and population data; and expanding observational networks must be matched with clear data-centre strategies.


Proposed action items:


1. Develop integrated digital-twin and hybrid forecasting platforms for critical infrastructure and meteorological services (Ramtohul, Woodham).


2. Create a sovereign-data architecture with defined governance and audit standards for AI-driven decisions (Satsangi).


3. Pursue public-private partnership models to fund either exaflop-scale or GPU-based compute clusters, incorporating sustainable power and cooling solutions (Satsangi).


4. Deploy the five-layer AI stack and ensure edge-ready, zero-trust devices for last-mile dissemination (Shukla).


5. Encourage startup-led DPIs/DPGs that translate multi-agency data into actionable workflows and risk databases (Kumar).


6. Promote GPU-based box-model forecasting as an interim solution for low-resource settings while larger infrastructure is built (Mohapatra).


7. Accelerate the rollout of automated weather stations, seismic sensors and landslide monitors, coupled with a roadmap for integrating these streams into AI-enabled early-warning pipelines (Vatsa).


Unresolved issues include financing the required supercomputing capacity, finalising standards for AI explainability, safeguarding AI-driven alerts against cyber-threats, and defining a governance model that balances sovereign data protection with collaborative benchmarking. Addressing these challenges will be essential for realising a scalable, trustworthy, and inclusive AI-enabled disaster resilience system across India and comparable jurisdictions[45-55][65-74][92-100][107-110][136-152][155-172][185-194][203-207][220-247].


Session transcript: complete transcript of the session
Moderator

defining moment for disaster risk governance. Around the world, the frequency, intensity, and complexity of disasters are increasing. Climate variability is compounding existing vulnerabilities. Urbanization is concentrating risk, and cascading hazards are challenging traditional response models. At the same time, we are witnessing unprecedented advances in AI. So, at this point of time, how does India bring or develop a model with AI for resilience? We believe that the next frontier in DRR is not better algorithms alone; it is institutionalizing AI within national resilience architecture, moving from pilot projects to national and global resilience systems. Before we start the discussion, let me invite and call on the stage for the panel discussion His Excellency Dr.

Avinash Ramtohul, the Minister for Information Technology, Communication and Innovation from the Republic of Mauritius. Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South-South cooperation. I would like to invite Ms. Beth Woodhams, Senior Manager from the UK Met Office. She is a specialist in disaster risk reduction via forecasting innovations and AI explorations for prediction. Welcome. I would like to invite Mr. Som Satsangi, former SVP and Managing Director for Hewlett Packard Enterprise India, with industry insights on AI deployment in geospatial and climate analytics. Welcome, Mr. Som. I would like to now call upon Mr. Nikhilesh Kumar, CEO and co-founder of Vassar Labs. He is an innovator in leveraging AI for DRR.

Welcome, Nikhilesh. And lastly, Mr. Pankaj Shukla, Head of Customer Engineering, Google Cloud India, for Practical AI Applications, Hazard Mapping, Predictive Analytics, and EWS Scale-Up. Thank you. We will focus on double integration during this panel discussion. So my question is first to the Minister for IT, Communication and Innovation, Republic of Mauritius. Minister, the small island developing states face existential climatic threats. From your perspective, what policy reforms are required to institutionalize AI-enabled early warning and alerting systems within national governance frameworks? And how can countries with limited resources ensure sustainability in such ventures?

Avinash Ramtohul

Thank you and good morning, everybody. Thank you for the opportunity to be here amongst you. First of all, I would like to say a couple of points before I get into the actual response. Today, just like we have the physical world, we have a virtual world as well in which we all live. And that virtual world is so much bigger than the physical world we can see here in front of us at the moment. And just like disaster can strike the physical world, and that is the scope of the discussion, disaster can also strike the virtual world. And as we grow in dependency on the virtual world, on our digital systems, we should be well aware that disaster is not just the flood, the cyclone, the drought.

Disaster can also be the cybersecurity attacks that can actually create havoc in our lives. Therefore, it is very important that the scope of the discussions, when we look at disaster, be also extended to the virtual world and cybersecurity attacks. Now, having said this, in terms of policy reform, it is very important that we also create this bridge between the physical world and the virtual world. And I will explain myself. Just imagine, as we speak here, there is a big fire that breaks out in one organization. And because it broke out there, there are, you know, automated connections that go to the fire services, to the medical services. They will proactively now start driving to this place. But when they come to this place, how would they know where the people are? Because their main objective is to save the lives of the people, secondary the material. How would they know where the people are? Do they have a plan, a structural plan, of this space? Do they know where the pipes cross? Now I'm talking about a digital twin. It is really important that we create that digital twin, which will be the bridge between the physical world and the virtual world, and the architectural map of that digital twin should be accessible to a certain set of operators: the medical, the fire services. Now this is part of the reform that we are looking at. And in a small administration, it becomes easier to do it, as opposed to a huge administration like India.

Now, there’s one more thing in there. As, let’s say, we also have the structural plan, how do we know where the people are? Can we have heartbeat indication? Can we have the thermal map of the place so that we know wherever there’s 37, 38, 39 degrees (well, 37, 38 is better)? Do we know where the people are located so that when the fire services come, they go straight to that spot? So this is very important. And another reform that it is important we be aware of is that when there is some kind of a pandemic which is contagious, there is human-to-human virus transfer. Now, we are all very excited about artificial intelligence.

But we are also aware that there is this possibility of virus infecting systems, right? And just like virus infects people, virus also infects systems, and virus gets contagious in computers as well. We all know that. Therefore, we need to also have mechanisms to protect. Because if we have a message that goes through an early warning system to people, this already creates an alert in the minds of people; the adrenaline surge starts already. But if that message is infected, it can create a lot of disruption in our daily lives, and this we need to be very careful of. Therefore, in terms of reform, the decision-making process, and I think somebody mentioned this in the previous panel, the decision-making process is automated now. And 100% automation in the field of AI, where it concerns the lives of people, can be dangerous.

Therefore, human in the loop or human on the loop is critical in these kinds of environments. And this is also part of what we are looking at in Mauritius. Yes, it’s true that as a small island developing state, we call it SIDS, we have our own set of flash floods that can actually occur. Within a couple of hours, we can have flash floods and we can see cars floating around already. And this has happened in the country. And we don’t want that to happen again. Therefore, there are early warning systems that we are deploying, like cell broadcast systems, which we have planned to deploy. Now, again, the message that goes into that system should be a message that is human verified.

That is, decisions like these that are sensitive, highly sensitive, cannot be 100% automated. That's part of our policy as well, and we want to ensure that humans are involved, because machines cannot decide for humans; humans decide for machines. This is critical and needs to be given the attention that it deserves. And I believe our Prime Minister Modi ji also mentioned in his intervention yesterday that there is a great necessity to ensure that humans remain part of the decision-making process in the application of AI for disaster management. So these are a few points I wanted to mention. Thank you.

Moderator

Thank you, Minister, for your insights in developing resilient governance frameworks which are scalable across nations. This is the way to go: a resilient system which is resilient even to cyber attacks, and which is sustainable and meaningful, not producing, say, fatigue from alerts, which is also very much vital for a robust system to be effective across all disasters. So we now come to the second panelist, Ms. Woodhams. My question to you is: the national meteorological agencies play a crucial role in operational forecasting and early warning delivery. From the perspective of the UK Met Office, how can AI complement physical weather and climatic models to improve forecast lead time and impact, basically impact-based warnings, to gain public trust?

And what institutional partnerships are necessary to ensure that AI-driven meteorological insights translate into actionable decisions and actions at national and local levels, with special emphasis on low-resource countries?

Beth Woodhams

Hello? Yeah. Right. Thank you. Thank you for your question. Thank you, it’s a real honour to be part of this panel. So at the Met Office we are currently developing machine learning weather models and we absolutely do not see these as a replacement for our physical models. Our plan over the coming years is to step by step implement these models through blending. This could be hybrid models, physics based and machine learning based. It could be blending the output from both of these models after they’ve run. The truth is we don’t know what the answer to this solution is yet but in order to build the trust amongst the users of our models, the customers of our data we’re certainly not going to have a complete shift.

We are going to do this step by step, increasing our blending as we become more confident with the data. From this conference, you know, it's very clear that companies from the private sector are developing these models; in the public sector, of course, we're developing them too, and sovereign capability remains really important. But for public sectors we really need to have that co-development. At the Met Office we have a long history of co-developing with partners like India, so through WCSSP India and through WISER Asia-Pacific we have these partnerships. We've co-developed physics-based models and we really want to do the same with machine learning models as well. At the Met Office we're starting to standardize our benchmarking and evaluation. We really want to make sure that when we're doing comparisons between machine learning and physics-based models, we're focused on the same thing.

There’s a lot of metrics we can look at that show machine learning models are doing well, but are these the metrics that are most important to users? Therefore, not only do we want to co-develop the actual models with partners, we want to co-develop the benchmarking and the tests that we do on these models. Thank you.

Moderator

Thank you, Beth, for giving your insight into how the national meteorological agencies may plan to use AI in their systems. So now we move towards the technologies: how do we really create resilient systems for forecasting? My first question is to Mr. Som Satsangi. Private sector innovation has advanced rapidly. So the first and foremost question, I think, which comes to my mind is: how can technology providers design AI systems that are interoperable with sovereign data architectures? Because that is the crucial issue to be cracked. We have to design AI systems that are interoperable with sovereign data architectures and compatible with diverse governance ecosystems, for a country like India with a federal government and the state governments. So this is a very vital nut to be cracked from the technology's point of view. And similarly, what standards of explainability are necessary when AI informs life-saving decisions?

Som Satsangi

Thanks, Manish. Really a great question, and probably in this room I'll be calling out something which is very, very important, because just when I walked in I heard Mr. Martin, and he spoke a couple of points which are so important and critical for a country like India. He spoke about the government. He spoke about the procurement policies and the scale. So all these three things are so important and critical when we look from India's standpoint with 1.2 billion plus citizens. I've been the managing director of Hewlett Packard Enterprise for the last nine years, and I know I've been involved in almost all large critical infrastructure projects, whether it's UIDAI or any kind of transaction, COVID, all applications.

And we know that all these things we have developed at this scale and delivered. Probably, when we look, the most important aspect is the human life, with the climate change and the disasters which are happening across the world and the length and breadth of India on the coastal side. But somehow, are we ready to do it? I don't think we are ready. And I'll give you some pointers which are very important on why it's not happening, to Mr. Martin's point. India has had a very ambitious plan for the National Supercomputing Mission way back in 2015, where India said, okay, we'll be investing 4,500 crore to develop some supercomputers which will be high class.

But in the last 10 years, what we have developed is some 37 supercomputers with just 40 petaflops of capacity. Is that sufficient? Now we are planning, okay, we'll add another 50 petaflops. But look at the global level, the kind of infrastructure needed if we have to manage this alert and warning system in real time, and I'll give you one or two examples in the United States. The top systems which have been developed and deployed to do these things have a capacity of almost one to two exaflops. And one exaflop is close to around a thousand petaflops. In the whole of India we don't have even today 100 petaflops. And in the US, there are multiple systems which provide this real-time information, and each of them: one is El Capitan, which is 1.8 exaflops; then the Frontier system, which has been deployed by the Oak Ridge National Laboratory, has 1.3 exaflops of

capacity. Aurora, which has recently been deployed by the Argonne National Laboratory, has one exaflop of power and capability. So these are the kinds of systems which are deployed so that they can actually take the power of AI in a real-time environment, whether it's geospatial data or satellite information data or any kind of live information, analyze these things with the help of AI in a real-time environment, and provide the alert much ahead. Somehow we are not able to provide this. So in India, if we want this early warning system to be done, I think our main focus needs to be how we can have the core infrastructure which will meet this requirement.

And in the last couple of days, in every discussion, this is what is coming out with the global CIOs and CEOs: that probably India needs the core infrastructure, which we have not developed. Now, we might say, okay, we are doing 10,000 crore to the AI and all, but that is getting distributed to a large number of tech and SMB guys who are developing the applications. But what government needs, because it's sovereign data, is that government needs to buy this kind of infrastructure. But I know each of these systems will cost anything between 400-500 million dollars to a billion dollars. Government may not be able to spend that kind of money. So probably that's a place where private partnership becomes very, very important and critical.

So my request is that probably the department should be looking at how the large global institutions and technology partners can bring the core infrastructure and technology, because today technology is not a barrier. It's the infrastructure and the scale and the procurement process and some of these policies. How the various data will be getting integrated is a problem. So if we can address these things, to the scale probably India has done on the DPI side, where we have implemented, and the best example at the global level where UIDAI is being used by almost more than 800-900 million citizens in the country, we can deliver that. We have got the capability, and with all this AI transformation which is happening, our Honorable Prime Minister already said that India is going to be leapfrogging on those things and going to be the global leader in the AI space, with all those technologies embedded along with the capability that India has got.

The only thing required is the infrastructure, but infrastructure will come with a huge cost. When you are going to get the infrastructure, another element that comes in is the power, energy and water. That's going to be very critical. So somebody has to look at all three aspects. You cannot get the infrastructure if you don't have the power. So we need to have the power which can help and power these kinds of systems. So alternative power resources are going to be very, very critical. They'll all be water-cooled systems, because they will have hundreds of thousands of GPUs and CPUs running together. They will require huge power and huge water capabilities. So we need to have that.

So India needs to start thinking on those lines, if we have to protect and get the right early warning alerts to save the lives of millions of citizens in the country. Thank you.

Moderator

Thank you. And definitely DRR also offers an opportunity for us to ponder. So taking forward from Mr. Som, I'll go to Mr. Pankaj Shuklaji, the Head of Customer Engineering at Google Cloud. Basically, cloud computing and AI platforms enable real-time analytics at scale. So what are the critical infrastructure investments which are essential to support AI deployment in low-connectivity and high-risk environments? That's very vital, looking at the geography of our nation. And how can AI-driven dissemination ensure last-mile inclusion while mitigating misinformation risk? So, your insights on that.

Pankaj Shukla

Good afternoon, everyone. So irrespective of the technology, when we talk of disaster management and resilience, essentially what we are trying to do is turn the chaotic reality on the ground into actionable intelligence. So, for example, the data fragmentation, which is sitting with multiple ministries, social media, various places: all of that first of all needs to be brought to one place, or at least you should have an ability to bring all of that data and turn it into a living intelligence. So once the data is there, which is structured as well as unstructured data, then we have the ability of our AI models today, which are multi-modal, to make sense of completely chaotic, noisy data into real intelligence at unimaginable speed. That is essentially what AI is all about. So when it comes to the real implementation of this entire architecture, and panelists spoke about multiple aspects of how we can use AI, how do we actually implement it on the ground? If you look at what we need, and essentially if we talk of AI broadly, it is there at five layers.

One is the infrastructure layer. Second is the operating system layer, which runs on top of infrastructure. I am not talking about just servers and data centers, but an operating system layer which scales from a central location to an edge location to multiple regional locations. Then, on top of that, the services which are required, platform services which are required to basically build the AI applications, make use of the right models, etc. Then you have got the models, the multi-modal ability of the models like Gemini and various others which hyperscalers provide, and, for example, under the IndiaAI mission, a lot of Indian providers are building models. Ma'am spoke about a lot of the models which, for example, other companies are building.

So the question is, how are we able to make use of a diverse set of models in a dynamic manner and use agentic AI on top of that to build applications and turn that into real action which can be disseminated at the places where we want, both proactively, during the response, as well as after the response? So the question is, how do we implement it? Implementation of this will require a framework or an architecture which essentially has a central living intelligence of all the data, on which you are basically experimenting and pre-training the models, tuning using different types of models, and building applications. The real application of that is going to happen at a place which might get completely disconnected from the central place.

So you should be able to actually build all of these AI applications, make use of that data, maintain a single source of truth centrally, but with the ability to send that intelligence back to a tactical location. Today it is exactly possible. So organizations, for example Google and many other organizations, are actually trying to build this: basically bring all the goodness of the hyperscaler cloud, for the entire infrastructure and managed-services layer as well as AI tooling, to on-prem, and then the ability to also run those in a completely disconnected, air-gapped environment in a zero-trust manner. So you have the security of your data and your applications, and then you should also have an ability to connect the edge locations in a federated manner to the central place. But if required, during a disaster, you should be able to carry a rugged device which basically has a basic small set of central intelligence sitting in it, with all the necessary models to basically take action on the ground. And that action could be related to either finding out where are your assets, where is the maximum impact which has happened, or how do you actually send the information to various places. So all of those things are absolutely possible today. And a huge amount of infrastructure in its own right is required to actually train models and build models.

So all of those things are absolutely possible today and while there is huge amount of infrastructure at its own set is required actually to train models, build models. but that’s happening across the country we should have an ability to bring all the good models, the models for the right thing to basically run it on prem at a smaller set of infrastructure and make a smaller set of that which can run in a tactical location which sits possibly in a very limited infrastructure and compute that is what we

Moderator

Thank you, Pankaj, I think, for giving Google's insight into building rugged systems at scale and deployment of AI solutions in low-cost and high-risk environments. We have another contributor to DRR, particularly AI deployment in DRR, which could be from the startups, and we have Mr. Nikhilesh Kumar, CEO and founder of Vassar Labs. Basically, Nikhilesh, you can enlighten us about how startups can contribute in developing a DPG at population scale for DRR, particularly for countries like India.

Nikhilesh Kumar

The modeling layer, which is transforming the data into various insights, hazards. The asset and the people layer, which is getting impacted, where we need to know today, in a personalized way and with precision, exactly what we talked about: if a flood is coming, which road, which houses; if a landslide is coming, which area. And the fourth, which is most important, is the workflows to translate the actions. And this is where we see a role for DPI and DPG, because all these four layers are not done by one person. They are looking at data scattered across various agencies. And we need to have DPIs and DPGs which are built across this data, right from different institutions which are bringing meteorological data, some institutions which are bringing water and other asset-related data, some institutions which are creating different layers, like Survey of India, in the case of India, on earthquake and various other layers.

Now we also see a role for AI today, and I will just give an example of that. As extreme events are happening, one of the first pressures is in the water sector, where we see extreme floods, a sudden gush of water coming into the large dams, and dams are one of the most extreme and vulnerable assets that we are all exposed to. And we would also see that for the large dams we have perhaps got a good handle to control them during a disaster, but we have a big number of dams in the country which are unregulated and scattered across in large numbers, and there is no forecast available for them.

So how do we churn out, in near real time, both hourly and at the scale of days, forecasts for close to 1 million water bodies, any of which can become vulnerable at any time? One solution we recently saw was to leverage AI to bridge the data gap: AI sitting on real-time satellite data at 30-minute intervals and on radar data, all currently available from IMD, and translating that into a nowcast. That nowcast layer was then translated through hydraulics to each of the dams, and in the cyclone month close to 5,000 dams were covered in real time. Such use cases show what becomes possible when data is connected, available in real time in an interoperable format, and there are players who can translate this data into actions.
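The nowcast-to-hydraulics step Kumar describes could be sketched roughly as follows. This is a minimal illustration with invented dam properties, a crude runoff coefficient, and invented alert thresholds; an operational system would use gridded satellite/radar nowcast fields and a calibrated hydraulic model per reservoir.

```python
# Hypothetical sketch: translate a rainfall nowcast over each dam's catchment
# into an inflow estimate (simple runoff hydraulics) and an alert flag.

def nowcast_to_inflow(rain_mm_per_hr, catchment_km2, runoff_coeff=0.5):
    """Convert a rainfall nowcast over a catchment into inflow (m^3/s).

    1 mm of rain over 1 km^2 is 1,000 m^3 of water; only a fraction
    (runoff_coeff, assumed here) reaches the reservoir within the hour.
    """
    volume_m3_per_hr = rain_mm_per_hr * catchment_km2 * 1000 * runoff_coeff
    return volume_m3_per_hr / 3600.0  # m^3/s

def dam_alerts(dams, nowcast):
    """Flag dams whose forecast inflow exceeds their safe-inflow threshold.

    dams: {dam_id: {"catchment_km2": ..., "safe_inflow_m3s": ...}}
    nowcast: {dam_id: rainfall (mm/hr) over that dam's catchment}
    """
    alerts = {}
    for dam_id, props in dams.items():
        inflow = nowcast_to_inflow(nowcast.get(dam_id, 0.0),
                                   props["catchment_km2"])
        alerts[dam_id] = {"inflow_m3s": round(inflow, 1),
                          "alert": inflow > props["safe_inflow_m3s"]}
    return alerts

# Invented example data: two dams, one about to receive heavy inflow.
dams = {
    "D1": {"catchment_km2": 120.0, "safe_inflow_m3s": 500.0},
    "D2": {"catchment_km2": 15.0,  "safe_inflow_m3s": 400.0},
}
nowcast = {"D1": 40.0, "D2": 30.0}  # mm/hr from the 30-minute nowcast
print(dam_alerts(dams, nowcast))
```

Run per nowcast cycle over all monitored water bodies, the same loop scales to the thousands of unregulated dams he mentions, since each dam needs only its catchment area and a threshold.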

I see contributions where such platforms are brought to national and state scale, and these use cases are packaged and made available for the different recipient departments to translate into actions. And one more thing I would like to add, taking advantage of this forum: risk assessment and risk reduction both have a very big gap when it comes to data. Across various events, take earthquakes, take other types of disasters, historic parametric measurements have not been available, and the location-specific frequency of these hazards has been lacking because you don't have a database. Now AI can play a very good role here, because the information is lying in lots of news reports, which contain unstructured information on the location, on the hazard that struck, and on the damages.

So AI can uncover this information and create structured data sets, hazard by hazard, and these will also feed into various DPGs that will further unlock the insurance sector, which will benefit from knowing the location-specific intensity and frequency of the risks. I will close with that.
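The news-mining idea Kumar closes with can be sketched in miniature. A real pipeline would use named-entity recognition or an LLM; here hypothetical keyword and regex patterns, with invented example headlines, illustrate the kind of structured hazard record he has in mind.

```python
# Hypothetical sketch: turn unstructured news text into structured,
# hazard-wise records of {hazard, location, year}.

import re

HAZARDS = ("flood", "landslide", "earthquake", "cyclone")

def extract_event(text):
    """Return a structured record {hazard, location, year} or None."""
    lower = text.lower()
    hazard = next((h for h in HAZARDS if h in lower), None)
    if hazard is None:
        return None  # not a hazard story
    # Naive heuristics: a capitalized word after in/near/at, and a 4-digit year.
    loc = re.search(r"\b(?:in|near|at)\s+([A-Z][a-zA-Z]+)", text)
    year = re.search(r"\b(19|20)\d{2}\b", text)
    return {"hazard": hazard,
            "location": loc.group(1) if loc else None,
            "year": int(year.group(0)) if year else None}

# Invented example headlines.
news = [
    "Severe flood in Chennai during 2015 damaged thousands of homes.",
    "A landslide near Wayanad in 2024 cut off several villages.",
    "Markets rallied on strong earnings this quarter.",
]
records = [r for r in (extract_event(t) for t in news) if r]
for r in records:
    print(r)
```

Aggregating such records by location gives exactly the hazard-frequency database he says the insurance sector lacks.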

Moderator

Thank you, Nikhilesh. You have aptly summarized that startups in this sector can definitely play a very vital role, particularly in developing rugged AI systems for India at population scale. We have now heard the panelists from the Indian perspective, since we are running large systems, so we would also like all the members to have the benefit of insights into how the national systems function in India and how technology is being deployed at scale for DRR. Firstly, I would like to get insights from Dr. Mrutyunjay Mohapatra, DG of IMD, who can elaborate on how robust AI-based systems can be, and are being, deployed at population scale in the Indian context.

Dr. Mrutyunjay Mohapatra

Namaskar. Good morning to all of you. Respected Dr. Komal Kishore sir, Adas Nand sir, our Krishnamurthy sir, and distinguished panelists, delegates, friends, and colleagues. At the outset, I congratulate NDMA for organizing this session, which has given a lot of thought to each of us represented here. I will start with the initiative taken up by the UN and the WMO: a clarion call was given in 2022 for Early Warnings for All. And when you go for early warning for all, it includes all the countries, all the people, all sectors, and all strata of society. When that call came, actually less than 50% of countries had early warning systems.

Now the number is increasing, but the time is short: by 2027 we have to achieve 100%. Early warning for all is a big goal, and if we review now, we find that during these last five years there has been a huge jump in technology, and AI is one such technology helping to extend early warning to all. Looking at the various components, you first need risk knowledge at each and every point, as our friend Nikhilesh told us. It is not possible, with the existing network of any country, to have risk knowledge at each and every location; but at the same time there are unstructured data, as was said, which can be utilized to create that knowledge, to create the risk, hazard, and vulnerability assessment. This historical knowledge can then be utilized in real time when you go for the prediction of any severe weather event. The next point is the early warning itself.

Yes, on the early warning aspects, you will see that there has also been a huge jump in recent years with the inclusion of many AI-based models. You will find that each and every large, established NMHS is utilizing AI. IMD is also utilizing AI for taking decisions with respect to early warning. At the same time, I will tell you, AI has come up as a hybrid approach alongside the physical models. We cannot do away with the physical models, because the physical models provide the physical understanding, the reasoning, and hence human knowledge gets into the picture with the help of these physical models. Therefore, AI has to be suitably connected with the physical models.

That is what everyone is doing, from the European Centre to the Indian centres, and to do that there have been many collaborations and integrations. After that, if you look at the basic backbone, which is the modeling: the modeling starts with the basic premise that weather forecasting is an initial value problem. You cannot give a weather forecast if you do not know the initial state of the Earth, ocean, and atmosphere. So the basic thing we are talking of now is already defined in the physical modeling system: unless you improve the initial data, with all types of observational tools and techniques, you cannot improve the weather forecast. Therefore, collecting or creating data with the help of AI will also go a long way in improving not only the AI models, but also the physical models and the hybrid models.
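The "initial value problem" point can be illustrated with the classic Lorenz-63 toy system (a textbook stand-in, not an operational model): two forecasts started from almost identical initial states diverge, which is why better initial data means better forecasts.

```python
# Toy illustration of forecast sensitivity to initial conditions,
# using the Lorenz-63 equations with a simple Euler integrator.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 system one Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def forecast(state, steps):
    """Integrate forward from an initial state (the 'analysis')."""
    for _ in range(steps):
        state = lorenz_step(state)
    return state

a = forecast((1.0, 1.0, 1.0), 2000)
b = forecast((1.0, 1.0, 1.000001), 2000)  # tiny initial-condition error
print(a)
print(b)  # diverges visibly from a after ~20 model time units
```

The same sensitivity is why data assimilation, feeding better observations into the initial state, matters as much as the model itself, for AI-driven and physics-based systems alike.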

Once you have good data, the quality of the data can also be improved with the help of AI. I will tell you: from satellites we get a lot of data, but only five percent of it is usable; the rest we cannot use because of quality. Further, there is the quantity: you cannot accommodate all types of data in the physical modeling system because, as our friends have been telling, you need infrastructure, and we do not have the computational infrastructure to utilize 100% of the satellite data. So yes, it is true that in India we do not have sufficient computing infrastructure. We now have at least 28 petaflops in IMD, and outside, of course, we have come up with the National Supercomputing Mission, but that is not sufficient, and therefore there is scope for public engagement in augmenting the computational and other digital infrastructures. At the same time, there is another opportunity because of AI: a box model has come up. A poor country, a small island nation, cannot venture or even dream of having a high-performance computing system, but it can go for an AI system, a box model that you can give to a small island nation, and there, with the help of a few GPU nodes, they can produce the forecast. That has come up, it will grow gradually, and we will have affordability of early warning with the help of these GPU-based, AI-driven or data-driven models.

After that comes the forecast. We have now come up to an AI consensus; the physical consensus plus the AI consensus then go into the final forecast. Then finally you go to the sectoral applications. There is a huge scope here, with the improvement of economic and societal conditions in every country, to improve our decision-making for each and every sector, and there AI and ML can play a role. So I urge all the industries, academia, R&D, and think tanks to collaborate with the NMHSs, especially with the India Meteorological Department and other organizations here, for a very authentic, specific, and judicious utilization of AI with the limited but reasonable resources available in the country.
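The satellite quality-control screening mentioned above, where only a small fraction of raw observations is usable before assimilation, could be sketched as follows. This is a minimal stand-in with synthetic values; a robust z-score screen takes the place of the ML-based quality flags a real assimilation system would use.

```python
# Hypothetical sketch: screen raw satellite observations, keeping only
# values consistent with the bulk of the sample (robust z-score on the
# median absolute deviation).

import statistics

def qc_filter(obs, z_max=3.0):
    """Keep observations within z_max robust standard deviations of the median."""
    med = statistics.median(obs)
    mad = statistics.median(abs(x - med) for x in obs) or 1e-9
    scale = 1.4826 * mad  # MAD -> standard deviation for Gaussian data
    return [x for x in obs if abs(x - med) / scale <= z_max]

# Synthetic brightness temperatures (K) with two corrupted readings.
raw = [288.1, 287.9, 288.4, 250.0, 288.2, 330.5, 288.0]
clean = qc_filter(raw)
print(clean)  # the 250.0 and 330.5 outliers are rejected
```

Operational QC is far richer (cloud screening, bias correction, channel-by-channel checks), but the shape is the same: most raw data is rejected before it touches the model.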

So thank you very much.

Moderator

Thank you, Dr. Mohapatra, for your valuable insight. Now, since NDMA is the apex national body, which will integrate all the varied systems into creating rugged AI systems, I would like the entire audience to get the benefit of the vision of NDMA from Member and HOD Dr. Krishna Vatsa. Sir, can you please elaborate on how NDMA intends to take this forward to create a sustainable, low-cost, at-scale model for the country?

Dr. Krishna Vatsa

Thank you very much for giving me this platform. I would like to mention that a huge amount of data already exists in relation to almost all the hazards. Look at earthquakes: we record all the micro-earthquakes for the entire country. The data that exists even for earthquakes below magnitude 3 can give us a very good indication of the kind of earthquakes we can experience in the Himalayas and other regions. And the availability of data is going to increase exponentially as we invest in observational networks. In almost every mitigation program we are doing, we have included a significant early warning component. In the next five years or so, every village in India will have an automated weather station.

We will have a large amount of instrumentation for measuring landslides. We are going to at least quadruple the seismometers and strong-motion accelerographs. So we will be investing a huge amount of money in improving our observational networks across the hazards, which means we will have access to a still larger amount of data. What is important is that we need the capacity to process that data, apply the AI models, and improve the precision of early warning. That is the sphere where we are struggling right now. It is one thing to set up the observational network; it is another to collect the data, process it, and generate information that can be used, more so when it comes to informing common citizens.

Scientists are one thing: they are getting a huge amount of data, but we are not doing this for the scientists. We are doing it for the people who get affected by disasters. So how do we go about it? The roadmap is not sufficiently clear, and I keep talking to all kinds of people. Somebody will come and say, set up a huge data centre. Okay, that's fine, great. But people also say: if you are setting up a huge data centre and you are not really empowering all the early warning agencies, how are you going to justify the investment in data centres? The data comes to individual agencies. How do the data centre and the individual early warning agencies interact so that we have a good model available?

And we don't have unlimited resources. So the point is, this is where we need more clarity: how do we go about using our existing networks to improve the precision of early warning and risk information, through a gradually incremental way of building capacities, which of course includes the data centre and should include improving our connection with LLM-based models. But it is also very, very important that we find a way of improving the overall architecture. That is one area where we are struggling and where we need some guidance. Thank you very much.

Moderator

Thank you, sir. I think we are coming to the close of the discussion. I will request Krishna Vatsa sir to please present the mementos to our panelists, and I would then request all our dignitaries in the front row, after the mementos, for a quick photograph before we vacate the room. I will also request the leadership from the states of Tamil Nadu, Andhra Pradesh, and Telangana to please come to the front for the photograph. We are very happy to note that most of the states are also represented through their State Disaster Management Authorities. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (medium confidence)

“AI advances are occurring at an unprecedented pace.”

The knowledge base notes that artificial intelligence is advancing rapidly and unpredictably, confirming the claim of unprecedented AI pace [S92] and [S93].

Confirmed (high confidence)

“Minister Avinash Ramtohul of Mauritius expanded the definition of disaster to include cyber‑attacks that can cripple digital systems as well as traditional hazards such as floods and cyclones.”

Sources describe Ramtohul outlining policy reforms that address cyber threats alongside conventional hazards, confirming his expanded disaster definition [S1] and [S100].

Additional Context (medium confidence)

“He advocated the creation of digital‑twin representations of critical infrastructure so that emergency services can locate people and assets in real time.”

While the report attributes the digital-twin idea to Ramtohul, the knowledge base discusses digital twins as a broader concept for climate-extreme management, providing additional context on the technology’s relevance [S20].

Confirmed (high confidence)

“He warned against fully automated decision‑making, calling for a human‑in‑the‑loop approach and for all early‑warning messages to be human‑verified before broadcast, a policy already being piloted in Mauritius’s cell‑broadcast system.”

The knowledge base records that Mauritius is planning cell-broadcast systems with human-verified messaging protocols to avoid misinformation during emergencies, confirming the human-in-the-loop policy [S1] and [S45-55].

Confirmed (high confidence)

“The UK Met Office is developing machine‑learning weather models that will augment, not replace, physics‑based forecasts.”

Met Office documentation outlines a strategic plan to integrate AI/ML with traditional physics-based weather and climate models, confirming the hybrid approach described in the report [S32].

Confirmed (medium confidence)

“The Met Office is co‑developing both the models and the benchmarking framework with partners such as India and the World Meteorological Organization.”

The knowledge base explicitly mentions co-development of benchmarking and testing frameworks with partner organisations, aligning with the report’s statement [S8].

Additional Context (low confidence)

“Calls for human‑in‑the‑loop AI systems echo broader concerns about preserving human agency in automated decision‑making.”

Other sources highlight similar warnings about over-reliance on algorithms and stress the need for human oversight, providing broader context to Ramtohul’s stance [S105] and [S108].

External Sources (109)
S1
National Disaster Management Authority — Beth Woodhams from the UK Met Office explained their approach of gradually blending machine learning models with traditi…
S2
Beneath the Shadows: Private Surveillance in Public Spaces | IGF 2023 — Beth Curley, a programme officer with the National Endowment for Democracy’s International Forum for Democracy, contribu…
S3
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa – Som Satsangi- Dr. Mrutyunjay Mohapatra- Dr. Krishna Vatsa
S4
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S5
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S6
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S7
National Disaster Management Authority — – Beth Woodhams- Dr. Mrutyunjay Mohapatra – Som Satsangi- Dr. Mrutyunjay Mohapatra- Dr. Krishna Vatsa
S8
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S9
National Disaster Management Authority — Minister Avinash Ramtohul from Mauritius provided a unique perspective by fundamentally expanding the conceptual framewo…
S10
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S11
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa
S12
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Welcome, sir. He is a key contributor to national strategies for AI in resilient infrastructure and South -South coopera…
S13
National Disaster Management Authority — – Som Satsangi- Dr. Krishna Vatsa
S14
National Disaster Management Authority — – Pankaj Shukla- Nikhilesh Kumar – Pankaj Shukla- Nikhilesh Kumar- Dr. Krishna Vatsa
S15
Open Forum #33 Building an International AI Cooperation Ecosystem — – Balancing technological development with national security interests Participant: ≫ Distinguished guests, dear friend…
S16
WS #31 Cybersecurity in AI: balancing innovation and risks — – Gladys Yiadom: Moderator AUDIENCE: So I was just going to add today about, if you look at the traffic on the intern…
S17
Steering the future of AI — – **Nicholas Thompson**: Moderator from The Atlantic Yann LeCun: reach human level intelligence or something approachin…
S18
Building the Next Wave of AI_ Responsible Frameworks & Standards — “The second most important element in this framework is to ensure these safety benchmarks are co -created with the indus…
S19
MedTech and AI Innovations in Public Health Systems — How can the data show to them that, this is the… key problem in this particular area. We’ve been talking about with An…
S20
Survival Tech Harnessing AI to Manage Global Climate Extremes — “It has to be a hybrid model which has to be connected with the physical systems of the various sensor fabric and the sa…
S21
UNSC meeting: Artificial intelligence, peace and security — Gabon:Thank you, Madam President. I thank the United Kingdom for organizing this debate on artificial intelligence at a …
S22
AI Without the Cost Rethinking Intelligence for a Constrained World — But I will pick which ones I need based on my input or dynamically. And that is called dynamic sparsity, right? So So I’…
S23
Shaping the Future AI Strategies for Jobs and Economic Development — “They are giving GPUs available at 65 rupees per month.”[119]. “so there are quite a few no no it’s public it’s all publ…
S24
From India to the Global South_ Advancing Social Impact with AI — Cross-sector movement of professionals between government, academia, and industry is essential for knowledge transfer
S25
From KW to GW Scaling the Infrastructure of the Global AI Economy — A central theme was India’s potential to become a global AI hub, with projections suggesting the country will scale from…
S26
The Global Power Shift India’s Rise in AI & Semiconductors — -Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital…
S27
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S28
Building Sovereign and Responsible AI Beyond Proof of Concepts — It wasn’t applying it the same to equal to everyone. And there was no agreed process for how you would escalate if there…
S29
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S30
How to make AI governance fit for purpose? — ### Chinese Perspective ### Singapore Perspective **Additional speakers:** Anne Bouverot: Thank you so much, Gabriela…
S31
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Elena Plexida:Thank you, Miapetra, hello, everyone. Indeed, ICANN coordinates the internet unique identifiers, the names…
S32
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — He explains that remote, low‑connectivity scenarios benefit from edge deployment, while most workloads run on the cloud.
S34
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S35
AI for food systems — Boogaard argued that AI can be transformative by connecting smallholders to help them grow, process, distribute, and acc…
S36
Policy Network on Artificial Intelligence | IGF 2023 — The panel discussion explored several crucial aspects of AI technology and its societal impact. One notable challenge hi…
S37
Conversational AI in low income & resource settings | IGF 2023 — Addressing the digital divide is important, as 2.6 billion people globally lack reliable internet access, hindering effe…
S38
Panel #3: « Gouverner les données : entre souveraineté, éthique et sécurité à l’ère de l’interconnexion » — Talla N’diaye Merci, merci beaucoup. Tout d’abord, je tiens à vous remercier, à remercier Henri et toute l’équipe de l’O…
S39
Operationalizing data free flow with trust | IGF 2023 WS #197 — Amid global health, financial and geopolitical crises that pose risks to the very functioning of a rules-based multilate…
S40
The Challenges of Data Governance in a Multilateral World — In conclusion, India’s progress in embracing technology and digitization, as demonstrated by the Digital Personal Data P…
S41
AI as critical infrastructure for continuity in public services — “Two, standardized API so that system -to -system communication will be smooth.”[24]. “And second, we also have a harmon…
S42
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Souhila Amazouz: Thank you. Good morning. Do you hear me? Yes, yes. Yes, good morning, everybody. And thank you, m…
S43
How to construct a global governance architecture for digital trade — Current governance arrangements that underpin data flows are incoherent and fragmented, reflecting conflicting private i…
S44
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — The analysis delves into various aspects of cross-border data, open data, and data protection. Shilpa, a researcher at t…
S45
WSIS Action Line C7 E-environment — This infrastructure enhancement will improve data sharing, forecasting accuracy, and integration with early warning syst…
S46
AI to improve forecasts and early warnings worldwide — The World Meteorological Organisation has highlighted the potential of AI toimprove weather forecastsand early warning s…
S47
AI model improves long-range space weather forecasts — Scientists from Southwest Research Institute and the National Center for Atmospheric Research, supported by the National…
S48
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S49
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — Governance must extend across the full AI lifecycle: pre-design, design, development, evaluation, testing, procurement, …
S50
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S51
HIGH LEVEL LEADERS SESSION IV — The analysis highlights several key points regarding the importance of a human rights-based approach to new technologies…
S52
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — As AI becomes integrated into IoT systems, proper governance frameworks are essential to ensure ethical and trustworthy …
S53
The State of Digital Fragmentation (Digital Policy Alert) — Disruption is required across various spaces. Global challenges require some form of disruption. Enforcement of laws an…
S54
Top digital policy developments in 2019: A year in review — But this potential cannot be fully exploited if the world continues to be split between those who have access to digital…
S55
Building a Digital Society, from Vision to Implementation — Small island developing states face common challenges and should work together
S56
High Level Dialogue: Strengthening the Resilience of Telecommunication Submarine Cables — ### Small Island States and Landlocked Countries Sandra Maximiano: So as we actually just listen here, many accidents h…
S57
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — The analysis also highlights the importance of knowledge-sharing in the context of cybersecurity. It suggests the creati…
S58
Cybersecurity emerges as policy topic — Cybersecurity emerges as a policy, technical, and diplomatic issue.
S59
Cybersecurity, cybercrime, and online safety — The analysis also recognises the importance of multistakeholder governance in ensuring a safer cybersecurity environment…
S60
National Disaster Management Authority — There was unexpected consensus on expanding the traditional definition of disasters to include cyber threats. This repre…
S61
AI Without the Cost Rethinking Intelligence for a Constrained World — So the energy savings come from the three orders of magnitude lower compute costs. We’ve done four presentations with NV…
S62
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S63
AI: Lifting All Boats / DAVOS 2025 — Dowidar mentioned ongoing work with UNDP on AI-powered early warning systems. Further research on implementation and sca…
S64
National Disaster Management Authority — This comment fundamentally expanded the scope of disaster risk reduction beyond traditional natural disasters to include…
S65
The Virtual Worlds we want: Governance of the future web | IGF 2023 Open Forum #45 — Elena Plexida:Thank you, Miapetra, hello, everyone. Indeed, ICANN coordinates the internet unique identifiers, the names…
S66
AI may reshape weather and climate modelling — The UK’s Met Office has laid out a strategicplanfor integrating AI, specifically machine learning (ML), with traditional…
S67
https://dig.watch/event/india-ai-impact-summit-2026/national-disaster-management-authority — Hello? Yeah. Right. Thank you. Thank you for your question. Thank you, it’s a real honour to be part of this panel. So a…
S68
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Vivek outlines India’s national supercomputing capability, noting the installed petaflop capacity and the large number o…
S69
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S70
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — He explains that remote, low‑connectivity scenarios benefit from edge deployment, while most workloads run on the cloud.
S71
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Fuad Siddiqui: Thank you. Good morning. Yeah, I’m delighted to be here. And it’s always great to be back in Saudi. I …
S72
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S73
How to make AI governance fit for purpose? — The discussion maintained a collaborative and optimistic tone throughout, despite representing different national perspe…
S74
Open Forum #30 High Level Review of AI Governance Including the Discussion — The discussion maintained a collaborative and constructive tone throughout, characterized by mutual respect and shared c…
S75
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S76
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S77
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S78
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S79
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S80
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S81
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S82
WSIS Action Line C7 E-learning — The discussion maintained a professional and collaborative tone throughout, with speakers demonstrating cautious optimis…
S83
High-Level Dialogue: The role of parliaments in shaping our digital future — The discussion maintained a tone of cautious optimism throughout. Speakers acknowledged significant challenges and risks…
S84
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S85
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S86
AI for equality: Bridging the innovation gap — The conversation maintained a consistently optimistic yet realistic tone throughout. Both speakers demonstrated enthusia…
S87
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S88
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S89
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — The tone began optimistically with audience engagement but became increasingly concerned and urgent as panelists reveale…
S90
AI for Good Technology That Empowers People — The tone was consistently optimistic and collaborative throughout, with speakers demonstrating genuine enthusiasm for so…
S91
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — The discussion maintained a consistently optimistic and action-oriented tone throughout. While speakers acknowledged ser…
S92
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S93
Opening — Pace of technological progress is accelerating unpredictably
S94
9821st meeting — 2. Creation of an International Scientific Panel on Artificial Intelligence Ecuador:Mr. President, I thank the United S…
S95
Agenda item 5: Day 2 Morning session — Vietnam:Thank you Chair. Again in our first intervention this week we would like to reaffirm our strong support for the …
S96
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So as you said, one size doesn’t fit all. Right. And I liked your coinage of bring your own AI. So let me qu…
S97
Resilient infrastructure for a sustainable world — – **Helen Ng** – Works at UNDRR (United Nations Office for Disaster Risk Reduction), focuses on resilient infrastructure…
S98
WS #49 Benefit everyone from digital tech equally & inclusively — – The need for national platforms to coordinate disaster risk reduction efforts. 3. Information Governance for Disaster…
S99
AI Meets Agriculture Building Food Security and Climate Resilien — These key comments fundamentally shaped the discussion by introducing critical frameworks that moved the conversation be…
S100
Agenda item 5 : Day 4 Morning session — Mauritius:Good morning, Chair. In an increasingly interconnected world where the threat landscape is constantly evolving…
S101
Protecting critical infrastructure in a fragile cyberspace — ‘Securing Critical Infrastructure in Cyber: Who and How?’ is the name of one of the main panels at IGF 2024 in Riyadh, w…
S102
Dynamic Coalition Collaborative Session — Development | Economic | Infrastructure Rajendra warns that without proper classification of certain technologies as di…
S103
Roundtable — His insights notably advance the ongoing discourse regarding the identification and protection of critical infrastructur…
S104
Host Country Open Stage — D Silva emphasized the transformative potential of sustainability reporting, stating that “transparency is not just abou…
S105
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S106
UN Secretary-General warns humanity cannot rely on algorithms — UN Secretary-General António Guterres has urged world leaders to act swiftly to ensure AI serves humanity rather than thre…
S107
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S108
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — Abbasi warns against over-reliance on algorithmic decision-making without proper human oversight. She argues that this a…
S109
JMA to test AI-enhanced weather forecasting — The Japan Meteorological Agency (JMA) is exploring the use of AI to improve the accuracy of weather forecasts, with a par…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Avinash Ramtohul
1 argument · 153 words per minute · 918 words · 358 seconds
Argument 1
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
EXPLANATION
Ramtohul stresses that disaster response must integrate a digital twin of physical infrastructure to guide emergency services, and that critical decisions should always involve a human element rather than full automation. He argues that this bridge between the virtual and physical worlds enhances situational awareness and safety during incidents such as fires.
EVIDENCE
He describes a scenario where a fire triggers automated alerts to fire and medical services, but stresses the need for a digital twin that provides structural plans, pipe locations, and real-time occupancy data so responders know exactly where people are (digital twin concept) [35-42]. He also emphasizes that decision-making must retain a human-in-the-loop to avoid dangerous 100% automation, especially for life-saving alerts, and cites Mauritius’ policy of human-verified messages in early warning systems [46-55].
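The alert flow Ramtohul describes can be sketched as a minimal Python pipeline. All names here (`TwinSnapshot`, `raise_fire_alert`) and the approval callback are illustrative assumptions, not anything presented in the session:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class TwinSnapshot:
    """Illustrative digital-twin view of one building."""
    structural_plan: str   # e.g. a link to floor plans
    pipe_locations: list   # water/gas lines for responders
    occupancy: int         # real-time head count

def raise_fire_alert(building_id: str,
                     twin: dict,
                     human_approves: Callable[[dict], bool]) -> Optional[dict]:
    """Automated detection enriches the alert with digital-twin data,
    but dissemination always waits for an explicit human decision."""
    snap = twin.get(building_id)
    alert = {
        "building": building_id,
        "occupancy": snap.occupancy if snap else None,
        "plan": snap.structural_plan if snap else "unavailable",
    }
    # Human-in-the-loop gate: no 100% automation for life-saving alerts.
    return alert if human_approves(alert) else None
```

A real system would replace the approval callback with an operator console, mirroring the human-verified early-warning messages he cites for Mauritius.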
MAJOR DISCUSSION POINT
Human‑in‑the‑loop & digital‑twin for emergency response
AGREED WITH
Beth Woodhams, Som Satsangi, Pankaj Shukla
Moderator
1 argument · 87 words per minute · 1167 words · 797 seconds
Argument 1
AI must be embedded in national resilience architecture, not just algorithms (Moderator)
EXPLANATION
The moderator argues that the next frontier in disaster risk reduction lies in institutionalising AI within the broader national resilience framework, rather than focusing solely on algorithmic improvements. Embedding AI at the governance level ensures systematic, scalable, and sustainable use across disaster management processes.
EVIDENCE
The opening remarks state that “the next frontier in DRR is not better algorithms alone, it is institutionalizing AI within national resilience architecture” [7].
MAJOR DISCUSSION POINT
AI must be embedded in national resilience architecture, not just algorithms
Beth Woodhams
2 arguments · 154 words per minute · 385 words · 149 seconds
Argument 1
Hybrid blending of AI and physics models, phased rollout (Beth Woodhams)
EXPLANATION
Woodhams explains that the Met Office will not replace physical weather models with AI but will gradually introduce machine‑learning components through hybrid blending. This phased approach allows confidence to build as AI outputs are combined with established physics‑based forecasts.
EVIDENCE
She outlines the plan to develop machine-learning weather models and to implement them step-by-step by blending physics-based and ML outputs, noting that the Met Office will increase blending as confidence grows [65-71].
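The phased blending described above can be sketched as a convex combination of forecast fields. The function name, weight schedule, and sample values are illustrative assumptions, not the Met Office’s actual scheme:

```python
import numpy as np

def blend_forecast(physics: np.ndarray, ml: np.ndarray, ml_weight: float) -> np.ndarray:
    """Hybrid forecast: a convex combination of physics-based and
    machine-learning fields. ml_weight starts near 0 and is raised
    only as verification confidence in the ML model grows."""
    w = float(np.clip(ml_weight, 0.0, 1.0))
    return (1.0 - w) * physics + w * ml

# Phased rollout on the same (toy) temperature grid, in kelvin.
physics = np.array([280.0, 281.5, 279.0])
ml = np.array([279.5, 281.0, 279.4])
early = blend_forecast(physics, ml, 0.1)  # low confidence: mostly physics
later = blend_forecast(physics, ml, 0.5)  # higher confidence: equal blend
```

Keeping the combination convex means the blended field can never leave the envelope spanned by the two inputs, which is one way a phased rollout preserves trust in the established physics baseline.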
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Met Office’s strategy of gradually blending machine-learning weather models with established physics-based forecasts is documented in the NDMA report on Beth Woodhams’ presentation [S1].
MAJOR DISCUSSION POINT
Hybrid blending of AI and physics models, phased rollout
Argument 2
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
EXPLANATION
Woodhams stresses the importance of co‑creating AI models and their evaluation frameworks with external partners, ensuring that performance metrics align with user needs. Joint benchmarking will help maintain transparency and trust in AI‑enhanced forecasts.
EVIDENCE
She describes existing co-development partnerships with India and other regions, and the Met Office’s effort to standardise benchmarking and evaluation of ML versus physics models, emphasizing metrics that matter to users [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-creation of safety benchmarks and joint evaluation frameworks with industry and academia is emphasized in the responsible AI framework discussion [S18] and reinforced by the NDMA summary of Woodhams’ remarks [S1].
MAJOR DISCUSSION POINT
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics
DISAGREED WITH
Som Satsangi
Dr. Mrutyunjay Mohapatra
3 arguments · 171 words per minute · 982 words · 344 seconds
Argument 1
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality (Dr. Mrutyunjay Mohapatra)
EXPLANATION
Mohapatra notes that AI should augment, not replace, physical weather models, creating hybrid systems that improve forecast accuracy. He also highlights that data quality—especially from satellites—is a limiting factor and that AI can help enhance both data and model performance.
EVIDENCE
He states that large NMHSs, including IMD, are using AI alongside physical models, and that AI must be suitably connected with physical models to retain physical reasoning [190-197]. He further points out that only about five percent of satellite data is usable due to quality issues, and that AI can improve data quality and increase usable observations [203-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A hybrid approach that links AI with physical sensor and satellite systems for resilient early-warning is advocated in the “Survival Tech Harnessing AI to Manage Global Climate Extremes” briefing [S20].
MAJOR DISCUSSION POINT
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality
Argument 2
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
EXPLANATION
Mohapatra introduces the concept of a low‑cost “box‑model” that runs on a few GPU nodes, enabling small or low‑resource countries to generate AI‑driven forecasts without massive supercomputing infrastructure. This approach democratises access to advanced forecasting capabilities.
EVIDENCE
He explains that a box-model using GPU-based AI can provide forecasts for poor or small island nations, allowing them to achieve early warning with limited hardware, and that such models are becoming increasingly affordable [207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of low-cost, GPU-based AI models using dynamic sparsity to reduce compute requirements is described in the “AI Without the Cost” talk [S22], and the need to democratize GPU access for innovators in India is noted in the AI scaling discussion [S23].
MAJOR DISCUSSION POINT
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations
DISAGREED WITH
Som Satsangi
Argument 3
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management (Dr. Mrutyunjay Mohapatra)
EXPLANATION
He urges industries, academia, research institutions, and think‑tanks to work together with national meteorological and disaster agencies to ensure authentic, judicious AI deployment. Collaborative effort is presented as essential for scaling AI benefits in disaster risk reduction.
EVIDENCE
He calls on all sectors-industry, academia, R&D, and think-tanks-to collaborate with NMHSs and other organisations to achieve effective AI utilisation in disaster management [213-214].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cross-sector movement of professionals between government, academia, and industry as essential for knowledge transfer is highlighted in the “From India to the Global South” report [S24].
MAJOR DISCUSSION POINT
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management
Som Satsangi
3 arguments · 147 words per minute · 983 words · 398 seconds
Argument 1
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
EXPLANATION
Satsangi compares India’s existing supercomputing capability (tens of petaflops) with the exaflop‑scale systems used in the United States for real‑time AI‑driven early warning. He argues that the gap in computational power hampers India’s ability to deliver timely alerts at national scale.
EVIDENCE
He cites the ~40 petaflops of capacity built under India’s 2015 National Supercomputing Mission, spread across the current 37 supercomputers, and contrasts it with US systems such as El Capitan (1.8 exaflops), Frontier (1.3 exaflops), and Aurora (1 exaflop) that support real-time AI analytics [92-100].
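The gap Satsangi describes is straightforward arithmetic (1 exaflop = 1,000 petaflops); a quick sketch using the figures quoted in the session:

```python
# Figures quoted in the session; 1 exaflop = 1000 petaflops.
india_pflops = 40  # ~40 PFLOPs across 37 systems
us_systems_pflops = {"El Capitan": 1800, "Frontier": 1300, "Aurora": 1000}

for name, pflops in us_systems_pflops.items():
    ratio = pflops / india_pflops
    print(f"{name} is about {ratio:.0f}x India's total capacity")
```

Even the smallest of the three US systems is roughly 25 times India’s aggregate capacity, which is the basis of his argument that real-time national-scale AI alerting needs a step change in infrastructure.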
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between India’s petaflop-scale supercomputers and the exaflop systems used for real-time AI alerts in the US is discussed in the AI infrastructure scaling overview for India [S25], and the need for massive capital investment is underscored in the public-private partnership analysis [S26].
MAJOR DISCUSSION POINT
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts
Argument 2
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential (Som Satsangi)
EXPLANATION
He highlights that building exaflop‑scale AI infrastructure would cost hundreds of millions to a billion dollars and would demand substantial electricity and water for cooling. Consequently, he advocates for private‑sector partnerships to share the financial and operational burden.
EVIDENCE
He notes that each exaflop-class system can cost $400-$500 million to $1 billion, and that such facilities need massive power, alternative energy sources, and water-cooling for hundreds of thousands of GPUs/CPUs [107-125].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The high financial, electricity, and water demands of exaflop-class AI data centres and the recommendation for private-public partnership models are detailed in the AI capital requirements briefing [S26] and the scaling-capacity projection for India [S25].
MAJOR DISCUSSION POINT
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential
Argument 3
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
EXPLANATION
Satsangi stresses that AI systems for disaster response must be built on architectures that respect sovereign data constraints and provide transparent, explainable outputs for critical decisions. He links this requirement to procurement policies and the need for interoperable, standards‑based solutions.
EVIDENCE
He references the necessity for AI systems to be interoperable with sovereign data architectures and compatible with diverse governance ecosystems, especially in a federal country like India, and calls for standards of explainability when AI informs life-saving decisions [80-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for sovereign-compatible AI architectures and transparent, explainable outputs for critical decisions are made in the “Building Sovereign and Responsible AI Beyond Proof of Concepts” discussion [S28] and reinforced by the UN Security Council AI governance summary on transparency [S29].
MAJOR DISCUSSION POINT
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
DISAGREED WITH
Beth Woodhams
Dr. Krishna Vatsa
2 arguments · 126 words per minute · 507 words · 240 seconds
Argument 1
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
EXPLANATION
Vatsa explains that while India is rapidly expanding its observational infrastructure (weather stations, seismometers, etc.), the country lacks sufficient data‑processing capacity and a coherent architecture to turn this data into actionable early warnings for the public. This gap limits the effectiveness of early‑warning systems.
EVIDENCE
He details the planned deployment of automated weather stations in every village, increased landslide instrumentation, and quadrupling of seismometers, while noting the current struggle to process the resulting data and deliver precise citizen-focused warnings [220-236]. He also points out the unclear roadmap for integrating data centres with early-warning agencies [238-247].
MAJOR DISCUSSION POINT
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
Argument 2
Massive investment in observational infrastructure (weather stations, seismometers) requires parallel development of data‑processing and AI capabilities (Dr. Krishna Vatsa)
EXPLANATION
Vatsa emphasizes that the large financial outlay for expanding observational networks must be matched by investments in computational infrastructure and AI tools to fully exploit the data. Without parallel development, the observational data cannot be transformed into high‑precision early‑warning information.
EVIDENCE
He mentions the upcoming nationwide rollout of automated weather stations, extensive landslide sensors, and a four-fold increase in seismometers, and stresses the need for processing capacity and AI models to improve early-warning precision [226-231].
MAJOR DISCUSSION POINT
Massive investment in observational infrastructure (weather stations, seismometers) requires parallel development of data‑processing and AI capabilities
Pankaj Shukla
2 arguments · 161 words per minute · 761 words · 283 seconds
Argument 1
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability (Pankaj Shukla)
EXPLANATION
Shukla outlines a five‑layer architecture for AI in disaster management, starting from the physical infrastructure up to AI‑driven applications. This layered approach creates a central “living intelligence” that can be extended to edge locations for real‑time decision support.
EVIDENCE
He describes the layers: infrastructure, operating system (central to edge), platform services for building AI applications, multi-modal models (e.g., Gemini), and applications that turn intelligence into action, emphasizing the need for a central living intelligence that feeds edge devices [136-144].
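The stack Shukla describes can be written down as a simple ordered structure. The layer descriptions below paraphrase the session and are not an official architecture:

```python
# Five-layer AI stack, bottom to top (paraphrased from the session).
AI_STACK = [
    ("infrastructure",    "compute, storage, network"),
    ("operating system",  "spans central data centres out to the edge"),
    ("platform services", "tooling for building AI applications"),
    ("models",            "multi-modal foundation models, e.g. Gemini"),
    ("applications",      "turn central 'living intelligence' into action"),
]

def layer_of(component: str) -> int:
    """Return the 1-based layer index for a named component."""
    for i, (name, _) in enumerate(AI_STACK, start=1):
        if name == component:
            return i
    raise KeyError(component)
```

The ordering matters in his framing: each layer depends only on the ones below it, which is what lets the upper layers (models, applications) be pushed out to edge devices while the "living intelligence" stays central.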
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A layered AI architecture that creates a central “living intelligence” and extends to edge devices is described in the “Building Trusted AI at Scale” keynote [S27].
MAJOR DISCUSSION POINT
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
Argument 2
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation (Pankaj Shukla)
EXPLANATION
He explains that AI solutions must operate in air‑gapped, low‑connectivity environments using rugged devices that maintain zero‑trust security. This capability ensures that critical alerts can be delivered at the last mile while protecting against misinformation and data breaches.
EVIDENCE
He details how AI applications can be packaged to run on rugged, disconnected devices with a small set of central intelligence, maintaining zero-trust security, and can still provide actionable information such as asset location and impact assessment during disasters [148-152].
MAJOR DISCUSSION POINT
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
Nikhilesh Kumar
3 arguments · 128 words per minute · 627 words · 293 seconds
Argument 1
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale (Nikhilesh Kumar)
EXPLANATION
Kumar describes a four-layer framework (hazard modeling, asset impact, population impact, and workflow translation) that integrates data from multiple agencies to generate actionable disaster-risk insights. This integrated approach is essential for scaling decision-support platforms (DPIs/DPGs).
EVIDENCE
He outlines the four layers: hazard modeling, asset impact, people impact, and workflows for action, noting that data is scattered across agencies (meteorological, water, survey) and must be combined into DPIs/DPGs for effective response [155-162].
MAJOR DISCUSSION POINT
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
Argument 2
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data (Nikhilesh Kumar)
EXPLANATION
He presents a use case where AI processes 30‑minute interval satellite and radar data to nowcast water levels for roughly one million water bodies, delivering real‑time alerts to thousands of dams during cyclone events. This demonstrates AI’s capacity for large‑scale, near‑real‑time hazard monitoring.
EVIDENCE
He explains that AI leverages real-time satellite and radar data to nowcast conditions for about one million water bodies, translating the nowcast into hydraulic models for roughly 5,000 dams during cyclone periods, providing timely alerts [163-166].
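One 30-minute cycle of the pipeline Kumar describes might look like the sketch below. The fusion rule, the threshold, and the stand-in for the hydraulic model are all assumptions for illustration:

```python
def nowcast_cycle(satellite_obs: dict, radar_obs: dict, dam_ids: list) -> dict:
    """One 30-minute nowcast cycle (illustrative). Fuses satellite- and
    radar-derived levels per water body, then flags dams whose upstream
    body exceeds a threshold, standing in for the hydraulic models
    mentioned in the session."""
    # Fusion rule (assumed): simple average, falling back to satellite only.
    levels = {
        wb: (satellite_obs[wb] + radar_obs.get(wb, satellite_obs[wb])) / 2
        for wb in satellite_obs
    }
    THRESHOLD = 0.8  # assumed fraction of capacity that triggers an alert
    return {dam: levels.get(dam, 0.0) > THRESHOLD for dam in dam_ids}
```

At the scale he cites (about one million water bodies feeding roughly 5,000 dams), the per-body step would run as a vectorised or distributed job rather than a Python dict comprehension, but the data flow is the same.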
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of hybrid AI models that ingest real-time satellite and radar observations for large-scale water-body nowcasting is presented in the “Survival Tech Harnessing AI to Manage Global Climate Extremes” briefing [S20].
MAJOR DISCUSSION POINT
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data
Argument 3
Extraction of structured risk information from unstructured news to support insurance and risk‑reduction efforts (Nikhilesh Kumar)
EXPLANATION
Kumar argues that AI can mine unstructured news reports to extract location‑specific hazard and damage information, creating structured datasets that feed into DPIs and insurance models. This enhances risk assessment and supports targeted risk‑reduction strategies.
EVIDENCE
He notes that AI can process large volumes of news containing unstructured location and hazard details, converting them into structured, hazard-wise datasets that can be used by DPIs and the insurance sector for location-specific risk intensity and frequency analysis [169-172].
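A toy version of this extraction step is sketched below. The keyword pattern and hazard list are illustrative; a production pipeline of the kind Kumar describes would use named-entity recognition or a language model rather than regular expressions:

```python
import re
from dataclasses import dataclass

@dataclass
class RiskRecord:
    """Structured (location, hazard) pair mined from free text."""
    location: str
    hazard: str

HAZARDS = ("flood", "cyclone", "landslide", "earthquake")

def extract_risk(news_item: str) -> list:
    """Naive keyword extraction of hazard mentions and the place
    names that follow them (pattern: '<hazard>... in <Location>')."""
    records = []
    for hazard in HAZARDS:
        for m in re.finditer(rf"{hazard}\w*\s+in\s+([A-Z][a-z]+)", news_item):
            records.append(RiskRecord(location=m.group(1), hazard=hazard))
    return records
```

Aggregating such records by location and hazard over many reports is what yields the location-specific intensity and frequency statistics he says the insurance sector can consume.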
MAJOR DISCUSSION POINT
Extraction of structured risk information from unstructured news to support insurance and risk‑reduction efforts
Agreements
Agreement Points
Hybrid AI‑physics models improve forecast accuracy and early warning
Speakers: Beth Woodhams, Dr. Mrutyunjay Mohapatra
Hybrid blending of AI and physics models, phased rollout
AI as a hybrid complement to physical models improves early‑warning accuracy; need better data quality
Both speakers emphasize that AI should augment, not replace, physical weather models, using a blended or hybrid approach to increase confidence and accuracy of forecasts and early warnings [65-71][190-197].
POLICY CONTEXT (KNOWLEDGE BASE)
The World Meteorological Organisation and the UK Met Office have highlighted AI-physics hybrid models as a way to boost forecast skill and early-warning lead times, urging public-private cooperation to deploy them [S46][S48][S45].
Substantial computational and data‑processing infrastructure is essential for AI‑driven disaster management
Speakers: Som Satsangi, Dr. Krishna Vatsa, Pankaj Shukla
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
All three highlight the need for large-scale computing resources, data-centre capacity and a layered AI architecture to process the massive data streams from expanded sensor networks and deliver real-time alerts [92-100][106-108][111-125][220-231][238-247][136-144].
POLICY CONTEXT (KNOWLEDGE BASE)
UN-endorsed frameworks treat AI as critical infrastructure, calling for standardized APIs and robust compute resources, while recent analyses stress the high energy and cost demands of large-scale models [S41][S61][S62].
Public‑private and cross‑sector collaboration is required to finance and implement AI for DRR
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra, Nikhilesh Kumar
Massive cost, power, and water requirements for high‑performance AI data centres; private‑public partnerships essential
Call for cross‑sector collaboration (industry, academia, R&D) to enhance AI use in disaster management
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
The speakers agree that the financial, technical and operational challenges of AI-enabled DRR can only be met through partnerships among government, industry, academia and startups, sharing costs and expertise [107-110][213-214][155-162].
POLICY CONTEXT (KNOWLEDGE BASE)
The WMO and UNDP have repeatedly called for joint financing and multi-stakeholder partnerships to scale AI-enabled early-warning systems [S46][S63][S36].
Interoperable, sovereign‑compatible data architectures and integrated data layers are critical
Speakers: Som Satsangi, Nikhilesh Kumar, Pankaj Shukla, Dr. Krishna Vatsa
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings
All underline the necessity of a clear, interoperable data governance framework that respects sovereign data constraints and links multiple hazard, asset and population data streams into a unified AI-driven decision support system [80-81][155-162][136-144][238-247].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at the IGF and policy papers on data sovereignty stress the need for interoperable, trust-based data flows that respect national sovereignty while enabling cross-border sharing [S38][S39][S40][S41].
Trust, explainability and human oversight are essential for AI‑driven alerts
Speakers: Avinash Ramtohul, Beth Woodhams, Som Satsangi, Pankaj Shukla
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
There is consensus that AI systems must retain human verification, provide transparent metrics, adhere to explainability standards and operate securely, especially when life-saving decisions are involved [46-55][71-74][80-81][148-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines from the UN and multistakeholder bodies emphasize explainability, transparency, and mandatory human oversight throughout the AI lifecycle for high-risk applications such as disaster alerts [S49][S50][S51][S52].
AI solutions must reach the last mile in low‑connectivity settings while avoiding alert fatigue and misinformation
Speakers: Pankaj Shukla, Nikhilesh Kumar, Moderator
Ability to run AI in disconnected, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation
AI‑driven nowcasting of millions of water bodies and dams using real‑time satellite and radar data
Resilient, scalable governance frameworks that withstand cyber attacks, avoid alert fatigue, and remain effective across all disasters (Moderator)
All three stress that AI-enabled early warning must be delivered to end-users in remote or disconnected areas, be designed to prevent alert fatigue and guard against misinformation, and remain robust across disaster types [148-152][163-166][56-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies on AI for food systems and the digital divide underline the importance of designing solutions that function in low-connectivity environments and reach underserved populations, warning against over-alerting and misinformation [S35][S37][S54].
Similar Viewpoints
All participants concur that AI should be institutionalised within a coherent national resilience framework, involving governance structures, interoperable architectures, human oversight and collaborative development to ensure effective, trustworthy disaster risk reduction [7][46-55][71-74][80-81][136-144][155-162].
Speakers: Moderator, Avinash Ramtohul, Beth Woodhams, Som Satsangi, Pankaj Shukla, Nikhilesh Kumar
AI must be embedded in national resilience architecture, not just algorithms (Moderator)
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
Five‑layer AI stack … enabling central “living intelligence” with edge capability (Pankaj Shukla)
Multi‑layer data integration … to produce actionable insights at scale (Nikhilesh Kumar)
Unexpected Consensus
Human‑verified alerts and digital‑twin bridging of physical and virtual worlds are needed even for small island states and large federal nations alike
Speakers: Avinash Ramtohul, Dr. Krishna Vatsa
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
Despite the difference in scale, both the Minister of Mauritius and the Indian Meteorological Director stress that disaster alerts must be grounded in accurate situational data (digital twin or sensor networks) and verified by humans before dissemination, revealing an unexpected alignment of priorities between a small island developing state and a large federal country [46-55][235-237].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs for Small Island Developing States highlight the need for digital-twin tools and verified alerts to enhance resilience, especially for critical infrastructure like submarine cables [S55][S56][S45].
Overall Assessment

There is strong, multi‑dimensional consensus among the moderator and all panelists that AI for disaster risk reduction must be hybrid, human‑centred, built on robust computational and data‑processing infrastructure, governed by interoperable sovereign‑compatible architectures, financed through public‑private partnerships, and delivered securely to end‑users. The shared emphasis on trust, explainability, and last‑mile accessibility underscores a unified vision for scalable, inclusive AI‑enabled resilience.

High consensus across technical, policy, financial and ethical dimensions, indicating that future initiatives are likely to focus on integrated hybrid models, capacity‑building infrastructure, collaborative governance frameworks and secure, human‑overseen deployment.

Differences
Different Viewpoints
Scale and cost of computing infrastructure for AI‑driven early warning
Speakers: Som Satsangi, Dr. Mrutyunjay Mohapatra
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
Som argues that India must acquire exaflop-scale supercomputers (costing $400-500 million to $1 billion each) and secure massive power and water resources, requiring private-public partnerships to meet real-time AI alert needs [92-100]. Mohapatra counters that a low-cost GPU-based “box-model” can deliver forecasts for small or low-resource countries without such massive infrastructure, presenting an alternative, affordable path [207-208].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research shows that next-generation AI models can reduce compute costs by orders of magnitude, yet financing large-scale infrastructure remains a challenge for many regions [S61][S62].
Data governance approach: sovereign‑data architectures and explainability vs open co‑development and benchmarking
Speakers: Som Satsangi, Beth Woodhams
Need sovereign‑data‑compatible architectures and explainability standards for life‑saving AI decisions (Som Satsangi)
Co‑development and joint benchmarking with partners to ensure user‑relevant metrics (Beth Woodhams)
Som stresses that AI systems must respect sovereign data constraints and provide transparent, explainable outputs for critical decisions, linking this to procurement and interoperability requirements [80-81]. Woodhams emphasizes collaborative model creation and shared benchmarking with external partners, focusing on user-centric performance metrics rather than explicit sovereignty safeguards [71-74].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates at the IGF contrast sovereign-centric data regimes with calls for open, benchmarked AI development, reflecting broader tensions in global data governance frameworks [S38][S39][S49].
Unexpected Differences
Inclusion of cyber‑security as a disaster domain
Speakers: Avinash Ramtohul, Other panelists (e.g., Beth Woodhams, Som Satsangi, Dr. Mohapatra)
Disaster can also strike the virtual world; cybersecurity attacks can create havoc (Avinash Ramtohul)
Other speakers focus exclusively on physical hazards (weather, floods, earthquakes) without mentioning cyber threats
Avinash expands the definition of disaster to include virtual-world incidents such as cyber-attacks and stresses the need for AI-enabled safeguards in that domain [30-33]. The remaining panelists discuss only physical hazards and AI for forecasting, response and infrastructure, showing an unexpected divergence in the scope of what constitutes a disaster in the AI-DRR context.
POLICY CONTEXT (KNOWLEDGE BASE)
National disaster management authorities and IGF sessions have begun to classify cyber incidents as disasters, urging integrated cybersecurity policies within resilience planning [S60][S57][S58].
Overall Assessment

The panel broadly concurs on the strategic importance of AI for disaster risk reduction, yet key tensions emerge around the scale and financing of computing infrastructure, the governance model for data (sovereign versus open co‑development), and the pace and architecture of AI deployment. A notable surprise is the differing view on whether cyber‑security incidents should be treated as disasters alongside traditional physical hazards.

The level of disagreement is moderate to high. While there is consensus on the goal of AI‑enhanced resilience, the disagreements on infrastructure investment, data governance, and scope of disaster definition could impede coordinated policy action unless reconciled. These divergences suggest the need for a hybrid policy framework that accommodates both high‑performance national infrastructure and low‑cost alternatives, aligns sovereign data requirements with collaborative benchmarking, and broadens disaster definitions to include cyber threats.

Partial Agreements
All speakers agree that AI should be integrated into disaster risk reduction to improve early warning and response, but they diverge on the preferred implementation pathway: Avinash calls for digital twins and human‑in‑the‑loop safeguards; Woodhams proposes a gradual hybrid blending of ML with physics models; Som pushes for large‑scale sovereign‑compatible supercomputing infrastructure; Mohapatra suggests low‑cost GPU box models; Pankaj outlines a layered architecture that can operate at edge locations; Nikhilesh stresses multi‑layer data integration across agencies; and Vatsa highlights the need to match expanding sensor networks with processing capacity. These differing methods reflect varied views on speed, cost, governance and technical architecture [46-55][65-71][92-100][207-208][136-144][155-162][226-236].
Speakers: Avinash Ramtohul, Beth Woodhams, Som Satsangi, Dr. Mrutyunjay Mohapatra, Pankaj Shukla, Nikhilesh Kumar, Dr. Krishna Vatsa
Human‑in‑the‑loop & digital‑twin for emergency response (Avinash Ramtohul)
Hybrid blending of AI and physics models, phased rollout (Beth Woodhams)
India’s current supercomputing capacity is far below exaflop needs for real‑time AI alerts (Som Satsangi)
“Box‑model” GPU‑based AI offers affordable forecasting for resource‑constrained nations (Dr. Mrutyunjay Mohapatra)
Five‑layer AI stack (infrastructure, OS, platform services, models, applications) enabling central “living intelligence” with edge capability (Pankaj Shukla)
Multi‑layer data integration (hazard, asset, people, workflow) to produce actionable insights at scale (Nikhilesh Kumar)
Expanding observational networks generate huge data; lack of processing capacity and clear architecture hampers citizen‑focused early warnings (Dr. Krishna Vatsa)
Takeaways
Key takeaways
AI must be embedded within national disaster risk governance structures, not treated as a stand‑alone technology.
Human‑in‑the‑loop and digital‑twin concepts are essential for safe, life‑saving AI decisions (Avinash Ramtohul).
Hybrid blending of AI‑based machine‑learning models with traditional physics‑based weather models is the preferred path; rollout should be incremental and benchmarked with partners (Beth Woodhams, Dr. Mrutyunjay Mohapatra).
India’s current high‑performance computing capacity (≈28 PFLOPS) is far below the exaflop scale required for real‑time, nation‑wide AI alerts; massive investment in infrastructure, power, and cooling is needed (Som Satsangi).
Sovereign‑data‑compatible architectures, clear explainability standards, and robust procurement policies are critical for interoperable AI systems across federal and state agencies (Som Satsangi).
A five‑layer AI stack (infrastructure, OS, platform services, models, applications) enables a central “living intelligence” with edge‑capable, zero‑trust, rugged devices for last‑mile dissemination and misinformation mitigation (Pankaj Shukla).
Start‑ups can add value by integrating multi‑layer data (hazard, asset, people, workflow), delivering now‑casts for millions of water bodies, and extracting structured risk information from unstructured news for insurance and risk‑reduction (Nikhilesh Kumar).
Box‑model GPU‑based AI forecasting offers an affordable path for resource‑constrained nations and can complement larger supercomputing efforts (Dr. Mrutyunjay Mohapatra).
Massive expansion of observational networks (weather stations, seismometers, landslide sensors) will generate huge data streams; processing capacity and a clear data‑center architecture are still lacking (Dr. Krishna Vatsa).
Cross‑sector collaboration (government, academia, industry, startups, international partners) is repeatedly called for to build, benchmark, and operationalize AI for DRR.
Resolutions and action items
Develop and maintain digital‑twin representations of critical infrastructure that are accessible to emergency services (suggested by Avinash Ramtohul).
Implement a phased hybrid AI‑physics modelling approach with joint benchmarking frameworks involving national meteorological agencies and international partners (Beth Woodhams).
Create a sovereign‑data architecture with defined explainability and audit standards for AI‑driven life‑saving decisions (Som Satsangi).
Pursue public‑private partnerships to fund and deploy exaflop‑scale computing resources, including power and water‑cooling solutions (Som Satsangi).
Design and roll out the five‑layer AI stack, ensuring edge‑ready, zero‑trust devices for disconnected environments (Pankaj Shukla).
Encourage startups to build Digital Public Goods (DPGs) that integrate hazard, asset, people, and workflow data, and to package these for state and national agencies (Nikhilesh Kumar).
Promote the box‑model GPU‑based forecasting approach for low‑resource settings as an interim solution while larger infrastructure is built (Dr. Mrutyunjay Mohapatra).
Accelerate deployment of automated weather stations and other sensors to achieve village‑level coverage, coupled with a roadmap for data‑center integration and AI processing capacity (Dr. Krishna Vatsa).
Establish a national coordination forum (e.g., under NDMA) to define architecture, data‑center roles, and incremental capacity‑building steps (Dr. Krishna Vatsa).
Unresolved issues
Funding mechanisms and timelines for acquiring exaflop‑scale supercomputing infrastructure and associated power/water resources.
Specific governance model for sharing and protecting sovereign data across ministries, states, and private partners.
Detailed standards for AI explainability and accountability in emergency decision‑making; no consensus reached.
Operational plan for integrating AI‑generated alerts with existing early‑warning channels (cell broadcast, SMS, sirens) while preventing misinformation.
Clear roadmap for scaling the central data‑center architecture to serve numerous state‑level early‑warning agencies.
Mechanisms to ensure cybersecurity of AI‑driven alert messages and to prevent malicious manipulation of warning systems.
Allocation of responsibilities among ministries, NDMA, and private sector for building and maintaining the five‑layer AI stack.
Suggested compromises
Adopt a gradual hybrid AI‑physics model rollout, blending outputs and increasing AI share as confidence grows (Beth Woodhams).
Maintain human verification for high‑impact alerts while allowing AI to automate lower‑risk data processing (Avinash Ramtohul).
Leverage public‑private partnerships to share the financial burden of high‑cost infrastructure, rather than relying solely on government spending (Som Satsangi).
Use affordable box‑model GPU clusters for immediate forecasting needs in low‑resource contexts, while continuing to develop larger supercomputing capacity (Dr. Mrutyunjay Mohapatra).
Implement incremental capacity‑building: first enhance observational networks, then develop processing pipelines, followed by full AI integration, rather than attempting a single large‑scale deployment (Dr. Krishna Vatsa).
Thought Provoking Comments
Disasters are not only physical (floods, cyclones) but also virtual – cyber‑attacks can cause havoc, and we need a bridge between the physical and virtual worlds via digital twins that map structures, utilities and even real‑time human presence.
He expanded the definition of disaster to include cybersecurity and introduced the concept of a digital twin as a critical policy reform, linking physical response with virtual data infrastructure.
Shifted the discussion from traditional DRR to a broader, integrated view that includes cyber resilience; prompted later speakers to consider data architecture, interoperability, and the need for human‑in‑the‑loop decision making.
Speaker: Avinash Ramtohul (Minister, Republic of Mauritius)
We will not replace physical weather models with AI; instead we will blend physics‑based and machine‑learning models, co‑develop benchmarks with partners, and ensure the metrics we use reflect what users actually need.
She highlighted a pragmatic, hybrid modelling approach and stressed co‑development and user‑centric evaluation, challenging any notion of AI as a silver‑bullet replacement.
Guided the conversation toward collaborative model development and the importance of trustworthy metrics, influencing subsequent remarks on standards, explainability, and partnership models.
Speaker: Beth Woodhams (Senior Manager, UK Met Office)
India lacks the exaflop‑scale supercomputing infrastructure needed for real‑time AI‑driven early warning; building such capacity costs billions, so private‑sector partnerships and massive power/water resources are essential.
He provided a stark reality check on India’s computational capacity, quantified the gap with global examples, and linked infrastructure to policy and procurement challenges.
Created a turning point focusing the panel on resource constraints and the role of public‑private collaboration; later speakers (Google, NDMA) addressed how to work around these limitations with edge and federated solutions.
Speaker: Som Satsangi (Former SVP, Hewlett Packard Enterprise India)
AI deployment requires a five‑layer architecture—infra, operating system, platform services, models, and applications—plus edge/federated capabilities that can run in air‑gapped, zero‑trust environments and even on rugged devices for disconnected disaster zones.
He articulated a concrete technical framework for scaling AI in low‑connectivity, high‑risk settings, moving the discussion from abstract policy to actionable system design.
Steered the conversation toward practical implementation strategies, influencing the startup perspective on modular platforms and prompting NDMA to consider integration of data centers with field operations.
Speaker: Pankaj Shukla (Head of Customer Engineering, Google Cloud India)
Four layers are needed for disaster‑risk platforms: hazard modeling, asset mapping, people mapping, and workflow translation. AI can turn unstructured news and satellite data into structured hazard databases, unlocking insurance and risk‑reduction opportunities.
He introduced a startup‑centric view that connects data ingestion, AI‑driven insight, and actionable workflows, emphasizing the role of AI in filling data gaps for risk assessment and insurance.
Expanded the dialogue to include private‑sector innovation and the importance of data pipelines, leading to acknowledgment of the need for interoperable formats and DPIs/DPGs by other panelists.
Speaker: Nikhilesh Kumar (CEO, Vassar Labs)
The UN’s ‘Early Warning for All’ goal demands hybrid AI‑physical models; AI can improve data quality (e.g., only 5 % of satellite data is usable) and even low‑resource nations can use box‑model GPU solutions instead of massive supercomputers.
He linked global policy targets with technical realities, highlighted AI’s role in data quality, and offered a scalable solution for poorer nations, reinforcing the earlier infrastructure concerns.
Reinforced the need for hybrid approaches and democratized AI access, influencing the conversation on affordable models and encouraging collaborative efforts across agencies.
Speaker: Dr. Mrutyunjay Mohapatra (Director General, India Meteorological Department)
We have massive observational data (e.g., micro‑earthquakes, automated weather stations) but lack the capacity to process it and deliver actionable warnings to citizens; the challenge is integrating data centers with early‑warning agencies in a cost‑effective, incremental way.
He pinpointed the bottleneck between data collection and actionable dissemination, emphasizing the need for clear architecture and incremental capacity building.
Served as a synthesis point, bringing together earlier themes of infrastructure, interoperability, and user‑focused delivery, and set the stage for concluding remarks on coordinated national strategy.
Speaker: Dr. Krishna Vatsa (Head of Department, NDMA)
Overall Assessment

The discussion evolved from a broad conceptualization of disaster risk (including cyber threats) to concrete challenges of infrastructure, data quality, and implementation. Key comments—especially the digital‑twin vision, the quantified supercomputing gap, the five‑layer AI architecture, and the hybrid AI‑physical modeling approach—acted as turning points that redirected the conversation toward practical, scalable solutions and highlighted the necessity of public‑private partnerships, interoperable standards, and user‑centric design. Collectively, these insights shaped a nuanced narrative: while AI offers transformative potential for DRR, realizing it at national scale demands coordinated policy reforms, robust yet affordable computational resources, and integrated data‑to‑action pipelines.

Follow-up Questions
How can a digital twin of critical infrastructure be created and made accessible to emergency services for real-time response?
Digital twin bridges physical and virtual worlds, essential for locating people and assets during disasters.
Speaker: Avinash Ramtohul
What cybersecurity safeguards are needed to protect AI-driven early warning messages from malicious code or virus infection?
Early warning messages could be compromised, leading to misinformation and panic.
Speaker: Avinash Ramtohul
What governance frameworks ensure human-in-the-loop oversight for AI decisions affecting lives in disaster management?
Full automation can be dangerous; human verification is critical.
Speaker: Avinash Ramtohul
Which performance metrics of AI weather models are most relevant to end‑users and how should they be benchmarked?
Need to build trust; metrics must reflect user needs, not just technical scores.
Speaker: Beth Woodhams
How can standardized benchmarking and evaluation protocols be co‑developed for hybrid AI‑physical weather forecasting models?
Consistent evaluation ensures comparability and trust across partners.
Speaker: Beth Woodhams
What cost‑effective strategies can India adopt to develop exaflop‑scale high‑performance computing infrastructure required for real‑time AI‑driven disaster alerts?
Current capacity far below needed; infrastructure is a bottleneck.
Speaker: Som Satsangi
What sustainable power and cooling solutions are required to support large AI supercomputers for disaster risk reduction?
High energy and water demand; need environmentally viable options.
Speaker: Som Satsangi
How can public‑private partnership models be structured to finance and operate AI infrastructure for national early warning systems?
Government alone cannot bear costs; private sector involvement essential.
Speaker: Som Satsangi
What technical standards are needed to ensure AI systems are interoperable with sovereign data architectures across federal and state levels?
Interoperability is crucial for unified DRR across jurisdictions.
Speaker: Som Satsangi
What explainability standards should be applied when AI informs life‑saving disaster response decisions?
Transparency needed for trust and accountability.
Speaker: Som Satsangi
How can AI pipelines be scaled to provide near‑real‑time nowcasting for millions of water bodies and dams using satellite and radar data?
Critical for flood management; requires handling massive data streams.
Speaker: Nikhilesh Kumar
What AI techniques can extract structured hazard and damage information from unstructured news and social media sources to build comprehensive risk databases?
Current lack of historic hazard frequency data; AI can fill gaps.
Speaker: Nikhilesh Kumar
How should Digital Public Infrastructures (DPIs) and Digital Public Goods (DPGs) be designed to translate multi‑agency data into actionable disaster response workflows?
Coordination across agencies is needed for effective action.
Speaker: Nikhilesh Kumar
What is the feasibility and performance of low‑cost GPU‑based ‘box model’ AI solutions for early warning in small island developing states?
Offers affordable alternative to supercomputers for resource‑constrained nations.
Speaker: Mrutyunjay Mohapatra
What mechanisms can engage the public or community to augment national computational infrastructure for AI‑driven DRR?
Leveraging broader resources could bridge infrastructure gaps.
Speaker: Mrutyunjay Mohapatra
What architectural model best links central data centers with distributed early warning agencies to ensure timely information flow?
Need clear integration to justify data center investments.
Speaker: Krishna Vatsa
What phased roadmap should be followed to incrementally build AI capacity for disaster risk reduction within limited resources?
Gradual capacity building needed to avoid over‑investment and ensure sustainability.
Speaker: Krishna Vatsa

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

MahaAI Building Safe Secure & Smart Governance


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by asserting that artificial intelligence is already reshaping governance, markets and geopolitics and that the central challenge is not whether AI will influence policy but how governance will shape AI itself [1-3]. Speakers described a “governance paradox” in which overly slow regulation risks harm while heavy regulation risks stagnation, prompting a call for “intelligent governance” that is human-centered, transparent, risk-based and globally coordinated [6-13]. Because AI transcends borders, they argued for interoperable frameworks, shared safety standards and adaptive policies that evolve with the technology [14-18].


The Maharashtra government presented its “Maha AI” initiative as a living laboratory, highlighting AI-powered crime-fighting tools such as the Mahak Crime OS and an intelligent cloud-native infrastructure called Mahaiti that supports smart recruitment, urban dashboards and flood-management pilots [47-53]. Minister Ashish Shelar emphasized five pillars (compute, data, state AI governance, interoperability standards and capacity building) as the foundation for safe, secure and smart governance [58]. Praveen Pardeshi stressed the need for green energy to power AI and for large-scale capacity-building through AI university courses, and warned that data monetisation without safeguards could undermine national interests, citing the development of a “Maha GPT” system for both officials and citizens [80-84][85-87][95-114].


Yashasvi Yadav outlined Maharashtra’s cyber-security project that leverages AI tools to monitor the dark web, freeze over ₹1,000 crore of fraud funds and protect more than 70 young women from sextortion, while also warning that quantum computing could soon break current encryption standards [119-136][137-150]. Suresh Sethi described how population-scale digital public infrastructures enable AI to move from static identity verification to dynamic eligibility and predictive welfare delivery, but insisted that AI decisions must be explainable, auditable and backed by human redress mechanisms [158-186][187-190]. Ranjit Goswami reinforced a holistic view that AI should serve welfare and happiness, urging integration of departmental data through a common Aadhaar-linked database to avoid siloed services [200-216]. Beena Sarkar highlighted ethical concerns, especially gender bias and the misuse of emerging devices such as smart glasses, calling for a dedicated safety institute to evaluate new technologies before market entry [221-258].


Amit Kapoor pointed out critical gaps in skill levels, broadband quality and infrastructure in Maharashtra, arguing that without rapid investment in education, connectivity and data centres, AI’s benefits for nutrition, education and employment in tier-2 and tier-3 cities will remain unrealised; he also contested the optimism about post-2020 job creation expressed earlier [266-310][311-317]. Across the discussion, participants converged on the need for transparent, accountable and adaptable AI policies that combine technical standards with human-centric safeguards to ensure inclusive prosperity [12-13][20-24]. The session concluded that governing intelligence with wisdom, through coordinated global norms and domestic capacity building, is essential to harness AI’s potential while mitigating its risks [32][33].


Keypoints


Major discussion points


Intelligent, adaptive AI governance is essential – AI is already reshaping governance, markets and geopolitics, creating a paradox where over-regulation stalls innovation and under-regulation risks harm; the solution is “intelligent governance” that is human-centered, transparent, risk-based, globally coordinated and continuously adaptive[1-13][14-18].


Maharashtra’s “Maha AI” vision and five-pillar framework – The state positions itself as a living laboratory, deploying AI-powered crime-prevention tools, a cloud-native “Mahaiti” infrastructure, and a governance stack built on compute, data, state AI governance, standards and capacity-building[46-58].


Operational pilots and data strategies – Initiatives highlighted include (a) large-scale green energy to power AI and staff up-skilling through an AI university[80-86]; (b) a state data authority that seeks to monetize India’s health and other public data rather than let it be exploited externally[96-102]; and (c) “Maha GPT”, a small-language-model interface that lets officials and citizens query complex government orders in real time[108-114].


AI in cyber-security and the looming quantum threat – The Maharashtra Cyber Security Project uses AI tools for dark-web monitoring, threat analysis and rapid response, reporting over ₹1,000 crore frozen and 70 lives saved in six months[119-136]; however, quantum computing could break current encryptions, prompting urgent preparation[145-150].


Ethical AI, bias mitigation and the need for infrastructure & skills – Women-for-Ethical-AI advocates stress evaluating new devices for gender-based risks and establishing safety institutes before deployment[221-257]; parallel concerns about uneven internet quality, low skill levels and under-employment in tier-2/3 cities underscore the requirement for education, broadband upgrades and affordable AI services to avoid widening inequality[266-306].


Overall purpose / goal of the discussion


The panel convened to articulate a shared vision for responsible AI governance, showcase Maharashtra’s concrete AI-driven public-service initiatives, surface emerging risks (cyber-security, quantum computing, ethical bias), and agree on concrete steps (capacity-building, data stewardship, infrastructure investment, and global cooperation) to ensure AI delivers safe, inclusive, and human-centred outcomes for the state and beyond.


Overall tone and its evolution


– The opening remarks are optimistic and visionary, emphasizing opportunity and the moral imperative of wise governance.


– As the panel moves into individual presentations, the tone becomes pragmatic and demonstrative, detailing specific projects, technical solutions, and capacity-building efforts.


– When cyber-security and quantum computing are discussed, the tone shifts to cautious and urgent, highlighting threats and the need for rapid preparedness.


– The later contributions on ethics, bias, and digital divide adopt a critical yet constructive tone, calling for safeguards, inclusive policies, and infrastructure upgrades.


– The session closes on a collaborative and hopeful note, reaffirming commitment to “smart, safe, secure governance” and thanking participants.


Overall, the conversation moves from high-level aspiration to concrete implementation, then to risk awareness, and finally to a collective call for responsible action.


Speakers

Mr. Virendra Singh – –


Ms. Beena Sarkar – Customer Success Executive, ServiceNow; Volunteer with Women for Ethical AI South Asia (UNESCO) – Ethical AI, gender bias, AI governance [S2]


Dr. Amit Kapoor – Chair, Institute for Competitiveness – Economic policy, competitiveness, workforce development [S3]


Mr. Ashish Shelar – Honorable Minister of IT and Cultural Affairs, Government of Maharashtra – Technology-driven governance [S4]


Mr. Devroop Dhar – Co-Founder & CEO, Primus Partners; Moderator of the panel – Business strategy and consulting, session moderation [S6]


Mr. Ranjeet Goswami – Head, Corporate Affairs, Tata Consultancy Services – Technology solutions and governance [S9]


Mr. Yashasvi Yadav – Additional Director General of Police, Maharashtra Cyber Department, Government of Maharashtra – Cyber security, law enforcement, AI applications in cyber [S10]


Moderator – Moderator – Session moderation [S12]


Mr. Praveen Pardeshi – –


Mr. Suresh Sethi – Managing Director & CEO, Protean eGov Technologies – Digital public infrastructure, AI in governance [S16]


Additional speakers: None


Full session report: Comprehensive analysis and detailed insights

The session opened with Mr Virendra Singh warning that artificial intelligence is already reshaping governance, markets and geopolitics, and that the real dilemma is not whether AI will influence policy but whether governance will shape AI itself. He described a “governance paradox”: moving too slowly risks harm, while overly heavy regulation risks stagnation, and argued that the answer is not control versus innovation but intelligent governance – a human-centred, transparent, risk-based and globally coordinated framework that must evolve as AI evolves because static policies cannot manage dynamic intelligence[1-3][8-10][4-6].


In the keynote, Shri Ashish Shelar, Minister of IT & Cultural Affairs, highlighted that the AI Impact Summit 2026 is the first AI summit hosted in the global south, bringing together 20 heads of state, 60 ministers and hundreds of AI leaders[11-15]. He presented “Maha AI” as Maharashtra’s living laboratory and outlined five pillars for a safe, secure and smart governance stack: (i) compute and cloud at scale, (ii) high-quality public data sets, (iii) a dedicated state AI governance body, (iv) interoperability and standards, and (v) systematic capacity-building[16-22]. Flagship projects include the AI-powered Mahak Crime OS, showcased by Microsoft’s Satya Nadella, which accelerates crime prevention, detection and investigation, and the Mahaiti cloud-native, API-driven platform that underpins smart recruitment, AI-based property mapping, real-time urban dashboards for traffic, weather and civic issues, as well as pilots in flood-management and smart mobility[23-30]. The minister articulated three sector-wide imperatives – safeguard digital sovereignty, adopt AI responsibly, and treat AI governance as strategic infrastructure[35-38], and warned of digital-health, disinformation, deep-fakes and cyber-fraud, proposing a combined response of robust cyber-security, digital-literacy and critical-thinking, and a hybrid verification ecosystem[39-42].


The panel then moved to AI initiatives and capacity-building. Mr Praveen Pardeshi described Maharashtra’s green-energy target of more than 19 GW of solar capacity to power AI workloads, and announced an AI university and the IGOT (online learning) platform to up-skill civil servants[43-48]. He explained that the State Data Authority is creating a single source of truth for health and other public datasets, aiming to monetise these assets for national benefit rather than allowing foreign exploitation[49-55]. Pardeshi also unveiled “Maha GPT”, a small-language-model interface that lets officials and citizens query over 150,000 government orders (GRs) in real time, untangling complex regulations[56-61].


Mr Yashasvi Yadav outlined the Maharashtra Cyber Security Project, which integrates state-of-the-art AI tools, dark-web monitoring, threat analysis and a 24/7 helpline (1930) staffed by more than 150 cyber consultants[62-70]. Within six months the initiative froze and returned over ₹1,000 crore to victims and rescued more than 70 young women from sextortion and cyber-bullying, effectively saving 70 lives[71-78]. Yadav warned that quantum computing threatens to break current encryption standards (including RSA, blockchain and banking systems) unless India accelerates its quantum research, noting the country’s $1 billion spend versus rivals’ $15-20 billion[135-150].


Suresh Sethi highlighted Maharashtra’s population-scale Digital Public Infrastructure (DPI), which provides the data backbone for AI-enabled identity, payment and welfare systems. By moving from static identity verification to machine-readable, verifiable credentials, AI can automatically determine dynamic eligibility for subsidies, reduce inclusion and exclusion errors, and enable predictive governance that anticipates distress and triggers timely benefits[79-92]. He stressed that such AI layers must be explainable, auditable and coupled with a clear human-redress pathway to preserve accountability and public trust[93-100].


Major Ranjit Goswami (TCS) argued that AI should be viewed not merely as a technical efficiency tool but as a means to deliver welfare and happiness to the community. He called for a holistic, cross-departmental data architecture where every citizen is seen as a citizen of the state rather than of a siloed department, advocating integration with the Aadhaar database and common data standards across ministries[101-110].


Beena Sarkar, representing Women for Ethical AI, warned that emerging hardware such as smart glasses can jeopardise privacy and gender safety if released without rigorous assessment. She proposed establishing an India Safety Institute to vet new technologies for potential threats to women and the broader public before market entry[111-124].


Amit Kapoor drew attention to socio-economic challenges: only about 20 % of Maharashtra’s 9-crore-strong workforce possesses advanced skill levels, while the remaining 80 % are at basic levels, creating a bottleneck for AI adoption[125-129]. He highlighted inadequate broadband speeds (averaging 58 Mbps in Mumbai) and insufficient data-centre capacity as further constraints on scaling AI services to tier-2 and tier-3 cities[130-136]. Kapoor warned that without deliberate investment in education, connectivity and affordable AI, the technology could become a “dumping” element that harms mental health, fuels doom-scrolling and deepens inequality, especially among children[137-144]. He also identified opportunities for AI to monitor nutrition, water, sanitation and education at granular geographic levels, potentially addressing persistent development gaps[145-150].


Across the panel, participants converged on four core themes: (i) AI governance must be intelligent, human-centred and adaptable; (ii) robust, population-scale data infrastructure is essential for AI-enabled public services; (iii) capacity-building, explainability and human oversight are non-negotiable safeguards; and (iv) emerging risks, including quantum computing, gender-biased hardware and the digital divide, require proactive, coordinated responses[150-165]. While optimism about AI’s transformative potential was widespread, disagreements emerged regarding the balance between dynamic, predictive AI services and the risk of societal “dumbing down”, as well as between monetising public data and protecting it from quantum-enabled threats[135-150]. The session closed with a collective call to “govern intelligence with wisdom”, urging coordinated global norms, domestic capacity-building and ethical safeguards to ensure that AI delivers safe, inclusive and sustainable prosperity for Maharashtra and beyond[150-165].


Session transcript: Complete transcript of the session
Mr. Virendra Singh

Artificial intelligence is real, and it is influencing governance, markets, public services and even geopolitics. The question before us is not whether AI will shape governance. The question is whether governance is going to shape artificial intelligence. It is transforming governance in fundamental ways today: through decision intelligence, public service delivery at scale, and national security and strategic stability. Moreover, the governance challenge becomes uniquely complex as AI introduces speed, opacity, scale, concentration, global reach and dual use. This creates a governance paradox. Regulate too slowly and risk harm. Regulate too heavily and risk stagnation. The answer is not control versus innovation. The answer is intelligent governance. Therefore, the principles of AI governance should necessarily include human-centered design, transparency and accountability, risk-based regulation, global cooperation, and adaptive policies.

AI does not recognize borders. We need interoperable frameworks, shared safety standards, and cooperative oversight mechanisms. Governance frameworks must evolve as artificial intelligence evolves. Static policies cannot manage dynamic intelligence. In this era, we move from individual, national-level policies to coordinated global norms, and that is the necessity today. History will not judge us by our sophisticated algorithms. It will judge us by the wisdom of our governance. The industrial revolution reshaped economies. The digital revolution reshaped communication. The AI revolution will reshape decision-making itself. With that power comes great responsibility. In the digital revolution we are undergoing today, we surely stand at a crossroads. One path leads to inequity, instability and uncontrolled disruption. The other leads to augmented human capability, smart governance and inclusive prosperity.

The difference between these futures will not be determined by machines. It will be determined by us: the policy makers, the people who use it, and each and every person involved in the process of AI governance. Therefore, we need to commit today to building AI systems that are safe, secure, transparent, equitable, sustainable, and aligned with core human values. And that is what we are going to discuss here today. Last but not least, the message that this panel discussion and the fireside chat is going to give is: let us govern intelligence with wisdom. Thank you so much.

Moderator

Thank you, sir. We are truly honored to have with us today a leader who has been at the forefront of technology-driven governance in Maharashtra. I now request Shri Ashish Shelar, Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, to grace us with a keynote address.

Mr. Ashish Shelar

Good morning to everybody. Respected guests, dignitaries, excellencies and all the policy makers, members of the media, dear friends, young challengers, ladies and gentlemen. Namaskar, Vande Mataram and a very good morning to all. India today is not merely hosting an AI summit; India is helping to write the operating system of the AI age. We meet at Bharat Mandapam under the banner Maha AI: building safe, secure and smart governance, as part of the AI Impact Summit 2026, the first in its global series to be hosted in the global south. Over 20 heads of state, 60 ministers and hundreds of AI leaders from industry and academia are here, reflecting a shared vision, a shared conviction that AI must be inclusive, responsible and resilient.

I hereby say that under the leadership of Chief Minister Devendra Fadnavis, Maharashtra has positioned itself as a living laboratory for AI in governance. Our partnership with global technology leaders, for example our AI-powered Mahak Crime OS, showcased by Microsoft’s Satya Nadellaji, has already transformed how we prevent, detect and investigate crime: faster response, shorter investigation cycles and more transparent processes. Simultaneously, our state digital agency, Mahaiti, is building what we call an intelligent government infrastructure: a cloud-native, modular, API-driven backbone that uses AI to integrate services, predict needs, and respond in real time. This spans smart recruitment, AI-based property mapping for urban local bodies, real-time urban dashboards for traffic, weather, and civic issues, and pilots in flood management and smart mobility.

The philosophy is simple: use AI not to distance the state from the citizens, but to make governance more human, faster, more responsive, and more inclusive. In other words, scale empathy through insight. Across the world, public sectors are wrestling with the same three imperatives: serve citizens better, safeguard digital sovereignty, and adopt AI responsibly. Many countries have realized that interoperable public data and robust AI governance are becoming strategic infrastructure on par with energy, transport or telecom. For Maharashtra, Maha AI is our response to this challenge. A safe, secure, smart governance stack must rest on five pillars: one, compute and cloud at scale; two, high-quality public data sets; three, state AI governance; four, interoperability and standards; and five, capacity building.

Smart governance is not only about deploying chatbots or dashboards. It is about building resilient, auditable, human-centered AI systems into the nerve systems of cities and states: transport, energy, public safety, urban planning, disaster response and welfare delivery. Without trustworthy AI governance, smart cities risk opacity, bias, security breaches and erosion of public trust. In Maharashtra, we see internet health as a core policy concern: just as physical health is essential for individuals, digital health is essential for societies. Disinformation, deepfakes, AI-generated fraud and cyberattacks can undermine democracies, markets and communities with unprecedented speed. Our response must be combined: robust cyber security, digital literacy and critical thinking, and a hybrid verification ecosystem. That is our response as far as our state is concerned.

I am really happy to be part of this summit and, at the same time, to give a response addressing the challenges and our ecosystem under the name of Maha AI. We are here to present our case for building safe, secure, smart governance, and we appeal to all the best technologies and platforms of the world to associate, coexist and work with us. Thank you so much.

Moderator

Thank you, sir. Your vision for a digitally empowered Maharashtra truly sets the tone for everything we will discuss today. And now, the highlight of today’s session. May I request all the panelists to join us on the stage, please: Shri Praveen Pardeshi, Shri Yashasvi Yadav, Dr. Anupam Chattopadhyay, Dr. Amit Kapoor, Mr. Suresh Sethi, Major Ranjit Goswami, Ms. Beena Sarkar, Mr. Devroop Dhar, Davinder Sandhu, Dr. Ganesh Ramakrishnan, Vikash Chandra Rastogi, and Rajesh Agarwal. Shri Yashasvi Yadav, Additional Director General of Police, Maharashtra Cyber Department, Government of Maharashtra; Dr. Anupam Chattopadhyay, Associate Professor, Nanyang Technological University, Singapore; Dr. Amit Kapoor, Chair, Institute for Competitiveness; Mr.

Suresh Sethi, Managing Director and CEO, Protean eGov Technologies; Major Ranjit Goswami, Head, Corporate Affairs, Tata Consultancy Services; Ms. Beena Sarkar, Customer Success Executive, ServiceNow; and moderating this conversation is Mr. Devroop Dhar, Co-Founder and CEO, Primus Partners. I now hand over to Mr. Devroop Dhar to moderate this session.

Mr. Devroop Dhar

Thank you, Aditi, and a warm welcome to all our panel members. I’ll start with Praveen Pardeshi, sir. So, sir, at Mitra, you have been experimenting a lot with AI; there are multiple AI initiatives which have been taken. If you could share your thoughts and your vision as we start this.

Mr. Praveen Pardeshi

So first, the most important thing about AI is getting some of the hard things right. One is energy, because, you know, remember President Trump just mentioned why should we be paying for Indians who are processing most of our answers. So I think we are pushing on green energy, more than 19,000 megawatts of that to come up at solar level. So that’s the fuel for AI in future. The second is capacity building. So all our staff from Mitra and a lot of our other departments, we went to this AI university and gave them a course. And we also hope that there will be online courses available on iGOT where government staff can become empowered to use AI.

Then we look at what is the impact on the economy. On the economy, mostly people are concerned about the jobs, and that’s a real issue. One analysis from NITI shows that from 1950 to 2020, all highly educated people with postgraduate degrees, engineers, they are the ones who had a 95% plus chance of getting jobs. But from 2020 till now, 0.65% is the rate at which physical jobs, that is, masons, bricklayers, home carers, their value and their employability is increasing vis-a-vis highly educated ones. So this is the impact of AI. So how should we do this better? One is, of course, as our Secretary IT mentioned, making it available in government, ensuring seamless access to services.

But other aspects which we don’t look at, which we are working through our state data authority, is how do we also encash the data at a large scale? Because otherwise we become… a sitting target for people to just use India’s data for monetizing their own values. So two big examples are pharmaceuticals. India has the largest population and the number of diseases, experiments, et cetera. Now, this is all health data, and this is very valuable for pharma companies. So the state data authority is working on issues wherein we can make it a single source of proof, make it available also, and if there is a commercial possibility, make those resources available to India, that is to our government, and not to be cashed for free.

So these are some of the applications. Government issues many, many orders. We have issued more than 150,000 orders, which are called government GRs. And it’s a maze through which it’s very difficult even for government officers, whose department it is, to understand what the latest position is in a complicated situation. So we are working with Professor Ganesh here from IIT Mumbai; Mitra and we are working together with them to disentangle all these orders through a small language model, not a large language model, so that you can query at two levels.

One, for the government officers. The government officers should be able to ask, in any complicated situation, what is the latest position on whether additional FSI or a building permit can be given in this situation or not, and what the Supreme Court orders are. And on the other hand, citizens should also be able to ask under those rules. So this is called Maha GPT, and hopefully this will be the first application which is available both to government officers and to citizens. I stop here.
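To make the two-level querying concrete, here is a minimal, hypothetical sketch (toy data, invented GR identifiers and function names; not the actual Maha GPT pipeline) of the retrieval step that would sit in front of such a small language model: keyword matching over a corpus of orders, returning the most recent relevant GRs for the model to summarize.

```python
from dataclasses import dataclass
from datetime import date

# Toy stand-in for a corpus of government resolutions (GRs).
# A real system would index ~150,000 documents, likely with embeddings;
# plain keyword overlap illustrates the retrieval step here.
@dataclass
class GR:
    gr_id: str
    issued: date
    text: str

CORPUS = [
    GR("GR-2018-011", date(2018, 3, 1), "building permit rules for coastal zones"),
    GR("GR-2021-204", date(2021, 7, 15), "additional FSI norms for building permit approvals"),
    GR("GR-2023-090", date(2023, 1, 10), "supreme court directions on building permit FSI limits"),
]

def latest_position(query: str, corpus=CORPUS, k=2):
    """Return the k most recent GRs sharing keywords with the query."""
    terms = set(query.lower().split())
    hits = [gr for gr in corpus if terms & set(gr.text.lower().split())]
    hits.sort(key=lambda gr: gr.issued, reverse=True)  # latest position first
    return hits[:k]

top = latest_position("can additional FSI be given with a building permit")
print([gr.gr_id for gr in top])  # → ['GR-2023-090', 'GR-2021-204']
```

The point of sorting by issue date is the one Pardeshi raises: among thousands of overlapping orders, what matters is the latest position, so recency ranking comes before any language-model summarization.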

Mr. Devroop Dhar

Thank you, sir, for sharing all these wonderful examples. I’ll come to Yashasvi Yadav, sir. Cyber is another large user of AI. So if you could share your examples, your experience around how cyber is using AI.

Mr. Yashasvi Yadav

Okay, so cyber security is one of the major concerns for law enforcement agencies all over the world. And I would like to draw your attention to the fact that, under the visionary leadership of our Chief Minister, Mr. Devendra Fadnavis, who had the foresight to use AI in law enforcement work about five years ago, we have launched and implemented this Maharashtra Cyber Security Project, which generously borrows AI tools, technologies and algorithms. And we are fighting real crime with these technologies. And the USP of this project is that the best and most state-of-the-art tools and technologies, experienced consultants in the field of cyber security, and experienced and professional police officers have all coalesced under one roof.

And there is a live police station as well. So this is how cyber security is being strengthened through this project. And the beauty is that dark web monitoring, threat analysis, social media monitoring, or any type of cyber crime, sextortion, ransomware, cyber bullying, and the many other forms which cyber crime takes, can all be handled, at one’s fingertips, through just one helpline number, 1930. So if any citizen has any kind of cyber issue, they can just dial this 1930 number and all the cyber solutions will be provided by more than 150 cyber consultants. So a lot of AI tools are being used, and seamlessly.

And the best part of this whole exercise is that in less than six months, more than 1,000 crore rupees, which would have gone into the hands of the scamsters, have been frozen and are being ultimately returned to the victims. What a big relief to the victims. And more than 70 young girls who were being subjected to intense cyberbullying, blackmailing and sextortion, and were on the verge of committing suicide, were, because of very efficient AI tracking, prevented from taking the extreme step. And 70 lives have been saved in less than six months of its operation. So that’s how AI is at the forefront of being the bulwark against cyber security concerns.

I would like to draw your attention to only one report that we generated, which is called Echoes of Pahalgam. In that case, while the Indian Army was fighting a conventional war with Pakistan because of the Pahalgam incident, more than one million cyber attacks were launched by nation-state actors, whom we call APT groups, from Indonesia, Pakistan, even Turkey, and so many other countries. And they were thwarted by such AI tools, which we call threat intelligence tools, like Luminar, Cognite, or Pathfinder, which are big data analytical tools. So in the dark net, we still find the traces of these cyber attacks. So cyber crime is now slowly progressing into cyber terrorism and cyber warfare.

So that is where we have to be very, very careful. And before I end this preliminary address, I would like to also draw your attention to what lies beyond AI. A big, big threat is lurking: it is called quantum computing. Now quantum computing can do processes in qubits, hundreds of millions of qubits, at speed, and it can solve in less than six seconds complex issues which the best of supercomputers would take more than 50 years to do. So quantum computing can break the best of encryptions, including the RSA encryption of the banking industry, including blockchain technology. Now if these encryptions are broken in less than a few minutes, the whole financial system can be upended and lots of money can flow to threat actors of whom we are not aware at all.

Bitcoin can be broken. Even credit card encryptions can be broken. Banking system encryptions can be broken. So right now we have to prepare for what quantum computing can give us in terms of pros, and what the shortcomings or dangers lurking because of quantum computing, the cons, may be. We have to prepare, because China and other countries have already invested 15 billion dollars, or even close to 20 billion dollars, while we have invested only 1 billion dollars till now. So we have to catch up with quantum computing before it’s too late. So this is the cyber security and law enforcement perspective on AI, and I would like to pass on the baton to the next speaker. Thank you.
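The threat to RSA described above can be made concrete with a toy example (textbook-sized numbers, purely illustrative): RSA’s secrecy rests entirely on the difficulty of factoring the public modulus. A quantum computer running Shor’s algorithm would perform that factoring step efficiently even at real key sizes; here, brute-force trial division stands in for it.

```python
# Illustrative toy: RSA's security rests on the difficulty of factoring n.
# Shor's algorithm on a large quantum computer would factor efficiently;
# for this tiny modulus, brute force suffices and recovers the private key.
def trial_factor(n: int) -> int:
    """Return the smallest prime factor of n (stand-in for Shor's algorithm)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

p, q = 61, 53              # secret primes (toy-sized)
n, e = p * q, 17           # public key: n = 3233, e = 17
msg = 65
cipher = pow(msg, e, n)    # encrypt using only the public key

# The attacker factors n, rebuilds the private exponent d, and decrypts.
fp = trial_factor(n)
fq = n // fp
phi = (fp - 1) * (fq - 1)
d = pow(e, -1, phi)        # modular inverse of e (Python 3.8+)
print(pow(cipher, d, n))   # → 65, recovered without ever being told p and q
```

With a 2048-bit modulus, the `trial_factor` step is what becomes infeasible classically; post-quantum cryptography replaces RSA with schemes whose hardness does not reduce to factoring.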

Mr. Devroop Dhar

Thank you, sir, that was quite reassuring as well. And since you spoke about quantum, I want to bring in Dr. Anupam Chattopadhyay. Anupam, you’re working at the intersection of quantum and AI. So if you could share your thoughts: how are things moving in that direction?

Thanks, Anupam, for giving a perspective both from research as well as industry. So I’ll come to Suresh, you. So you’re working extensively in DPIs, and there’s a strong interaction between DPIs and AI. So from your experience, how are things moving in the DPI space, and how is AI influencing it?

Mr. Suresh Sethi

Thanks, Devroop. I think from a DPI perspective, we are all very familiar: we today have population-scale digital rails. And, you know, that puts us in a very sweet spot, because today a lot of times when we start embedding AI into any technology, the question is, are we ready to embed AI or not? And I think there was a reference to data sets: how is data organized, how can you enable AI on top of it? So first of all, the population-scale DPI that we have gives us a significant advantage, whether we talk about identity or we talk about our UPI rails, which is the payment and transactional layer that comes on top of it.

And similarly, when we look at data itself, DigiLocker today has millions of authenticated documents that come into play over there. So while we have the digital infrastructure in place, if we can embed an intelligence layer on top of it, and if the question is around targeting subsidy, getting the right beneficiary, putting the money in the hands of the right person, that becomes a very, very important and significant leverage. I will just take two or three examples where we see AI playing a significant role. One is that as we move from static identity to dynamic eligibility. Now, we’ve seen it all happen. Today, we have digitally verifiable credentials. So the moment you are using static identity, you are only able to prove who you are, what do you do.

And then you are applying for some sort of benefits or subsidy to come through to you. But if you have verifiable credentials, these are credentials which are machine readable. We talk about the concept of blue dot, which technically means all of us have certain attributes associated with us. If these are available in a machine-readable format, then AI can actually determine who is eligible for what subsidy. The second part comes to the question: are we being reactive, or can we do predictive governance? And predictive governance can be strongly enabled by AI, because the moment you have credentials which are digitally verifiable, you are actually able to predict who needs some sort of subsidy.

Now today, if there’s a distress in income and that can be tracked, you can trigger some sort of benefit to that and coming at a government level. If you can put data. consented data being shared with the government, the same can come through. And last but not least is the important part related to inclusion error and exclusion error. So when we talk about inclusion error, we are talking about leakages. When we are talking about exclusion, naturally we are saying the right person is not getting what is due to them. So your ability to be able to predict precision using verification is going to be very critical. Again, an AI layer can be embedded over there.

But all this, very clearly, and we’ve heard it before, needs guardrails around it. AI should be explainable: a decision taken today not to give benefits to somebody should be very clearly explained, and similarly, if there are benefits going out, that should also be explainable. The second part is auditable: whatever we are doing, there has to be an audit layer over there to explain what has happened. And more importantly, there should be a human redressal pathway, because ultimately you can’t leave everything to machines; you have to have that human person coming into play, with accountability settled over there. So I think these are critical aspects which can make governance more predictive, more precise, and more proactive going forward by embedding an intelligence layer into the DPI.
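As a rough sketch of machine-readable, verifiable credentials driving dynamic eligibility, the following hypothetical example (invented attribute names and income threshold; an HMAC stands in for the public-key signatures real verifiable-credential systems such as W3C VCs use) checks a credential’s integrity before an eligibility rule runs, so a tampered attribute cannot claim a benefit.

```python
import hmac, hashlib, json

# Hedged sketch: a machine-readable credential whose attributes are
# integrity-checked before an eligibility rule runs. Real verifiable
# credentials use issuer public-key signatures; HMAC stands in here.
ISSUER_KEY = b"demo-issuer-secret"  # hypothetical issuer key

def sign(attrs: dict) -> str:
    payload = json.dumps(attrs, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify(attrs: dict, sig: str) -> bool:
    return hmac.compare_digest(sign(attrs), sig)

def eligible_for_income_subsidy(attrs: dict, sig: str) -> bool:
    """Dynamic eligibility: decide from verified attributes, not identity alone."""
    if not verify(attrs, sig):          # reject tampered credentials
        return False
    return attrs["annual_income_inr"] < 250_000 and attrs["state"] == "Maharashtra"

cred = {"holder": "resident-001", "annual_income_inr": 180_000, "state": "Maharashtra"}
sig = sign(cred)
print(eligible_for_income_subsidy(cred, sig))        # → True: verified, within limits
tampered = {**cred, "annual_income_inr": 10_000}
print(eligible_for_income_subsidy(tampered, sig))    # → False: signature mismatch
```

The same pattern supports the auditability Sethi asks for: because the decision is a pure function of signed attributes and an explicit rule, every grant or denial can be replayed and explained.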

Mr. Devroop Dhar

Thanks, Suresh. I think very valid and meaningful points. I’ll come to Ranjit. Now, we’re talking about AI; there are a lot of happenings, and we have seen and heard about so many things at the summit. Now, with a major tech company like TCS doing a lot of work in this, how do large tech companies come in and collaborate with state governments? How can you enable that? What can be the steps in that?

Mr. Ranjeet Goswami

Thank you, Devroop. I think we need to first take a holistic view of what we are trying to achieve with AI. The tagline for this summit, if we go by that, welfare for all, happiness for all, captures it very holistically. Coming from the Tata Group, I am reminded that almost 170 years back, our founding father, Jamshedji Tata, gave us the guidance that society, or the community, is not another stakeholder in the business; it is the purpose why the business exists in the first place. Similarly, if we were to apply the same analogy here today, I think AI is not a technical tool which is fundamentally going to make governance more efficient.

It is fundamentally meant to bring benefit, welfare and happiness to the community at large. If we try and approach the question from that perspective, it definitely comes out. And, as even Suresh alluded to, how do we make sure that it is inclusive, that people get the right benefits they are entitled to, and that they do not go to somebody who is not entitled to them. The colleague from the police forces also spoke about how criminal tracking and other things are translating the intent into action at the ground level. Lastly, when it comes to organizations like TCS, we believe that each department in the government, firstly, should not be treated in isolation.

Each department in the government should have the ability to have a common database of people, be able to extract the information, and ensure that the citizen is seen as a citizen of the state or the country, and not as a citizen of a department. So that common databasing is something that we are trying to approach. We have the Aadhaar database; not every department is yet connected to it. Of course, we are trying to find a way in which that can become the major point of it. So small steps like that, and of course, bringing in the platform’s intelligence at its core. Those are fundamentally the steps that we have taken.

Mr. Devroop Dhar

Thanks, Ranjit. With that, I’ll go to Beena. Beena, I want to talk about the aspect of ethical AI and biases, especially as you work with Women for Ethical AI. How do you see biases, or maybe biases around gender diversity, creeping in, and what needs to be done around this?

Ms. Beena Sarkar

Thank you, Devroop. So, yes, I do work, I volunteer, with the Women for Ethical AI South Asia chapter; it’s powered by UNESCO. So one of the key questions that we ask ourselves, every time, is: what are we solving for? And I’ve been looking at the various solutions that are debuting or being showcased as part of the India AI mission. Many a time, when we look at a particular piece of hardware or any new device through which we are delivering what we now call AI services, mostly on large language models, I should say that what we sometimes seem to miss is the wood for the trees. I will just give a very hard example over here.

We do know smart glasses are not a new phenomenon. They were introduced as Google Glass way back, I remember, 2013, 2015, 2016, about that range. One of the reasons it was recalled was safety concerns, because people were taking images and videos without consent. We have seen the return of these glasses, and those concerns have not gone away. And yet you see them in the market; you see them in India being sold in any optic store; in my neighborhood optic store I have my colleagues who flaunt them, saying, I am so cool, I have taken the latest piece of technology. So when we talk about how you build ethics and governance, as Yashasvi sir said, you need the best framework.

So what this means is, we are not giving guns to everybody, right? India has been very, very smart about it. Of course, there are certain countries where owning a gun is fine; it’s as per their rights. That doesn’t mean India has to adopt it, right? We have our very able police force; we have the Indian Army. So what is exciting outside, one needs to contextualize, humanize, see if it is threatening 50 percent of your population, and I’m a part of that population, and then decide whether it even needs to exist in that market. So when you’re building out solutions and when you’re building out devices: we have the India Safety Institute; it was instituted in 2025. I do know that.

I do know that. What I would urge policymakers is that it should not, while we do have it and I do know we are working with Research Institute, industry, industry, ideally, if any new device comes like this, the first line of defense, so to speak, should be this institute. That actually should determine whether it actually creates a problem for the police force, for cyber security, right? Does it threaten 50 % of the population? We are already seeing it playing out in the UK, US. There is no policy that protects us. Even now, we are not protected as women. Leave women, even children, right? So I think that is something one really needs to take into consideration. While we love technology, trust me, as a lady, I find it extremely liberating to be able to create applications with just language.

But if you use that technology against us by bringing out devices and hardware that endanger us, I feel that’s where it breaks down. So you definitely, as part of ethics, you need to evaluate it from that framework. I call it the Kali versus the Rakta Bija effect. I’m sure some of you know that. Why would you create a Rakta Bija? Create a Kali.

Mr. Devroop Dhar

So, Beena, I think a very valid point that you have made. With that, I’ll move to Dr. Amit Kapoor. We’re talking of AI and its impact and benefits. How do you see this benefit percolating to the next level of cities, tier 2, tier 3, and other places?

Dr. Amit Kapoor

So, Devroop, this is a very important question. And as I was hearing all the panelists here, I would like to raise a few points. We definitely agree that, yes, AI can be transformational. But we have to understand a couple of points here. One of them is: what is the quality of education that we are giving, so that people are able to use it in the right earnest? In fact, the issue that Beena was talking about is about ethics, education, and so on and so forth. The larger point here is, when you talk about the skill development levels in Maharashtra, out of the 100%, or 9 crore, workforce that you have, only about 20% of them are at skill levels 3 and 4.

80% of them are at skill levels 1 and 2. So if you have to move beyond that, you need to do something far greater. That means you need to embed your education system and build it very strongly. And then the second point out here is: if you really want to talk about tier 2 and tier 3 cities, we have to also understand what the level of penetration and quality of internet is in these locations. We can tom-tom about internet and everything, but the numbers are not very supportive right now. And when you talk about internet and broadband connectivity, there are severe issues with this within the state of Maharashtra itself, which is supposedly one of the finest states in terms of internet connectivity.

The average speed of internet traffic in Bombay or in Mumbai is about 58 Mbps on a broadband network. When you are talking about usage of AI, and if you want to take it to the masses, then you have to have far better, deeper internet connectivity and broadband. I think it has to be done on a war footing. Not that we are not getting there, but it will have to be done faster and quicker. The second thing is going to be about inadequacy of supporting infrastructure. So this is where I also see that there is a tremendous level of opportunity that exists in Maharashtra to create this. Because if you really look at it, like 16 % of India’s workforce, IT workforce, or what you call a technology workforce sits in one single city in Maharashtra.

That is Pune. So if you’re really talking about it, that means Maharashtra has the potential, the talent, to really take it to the next level. And that is where you have to build infrastructure: data centers, et cetera; the opportunity does exist. And last but not least is cost and affordability. You will have to bring cost and affordability to these services as you go along. But having said that, I think there is a larger potential we have not touched here. When you talk about tier 2 and tier 3 cities, this technology has a huge possible impact there, and that is about nutrition.

Today, Maharashtra actually has a problem with nutrition. Fifty percent of the people in Maharashtra are malnourished even today. How do I use this technology to assess what is happening at my PIN code level, or a smaller level of geography, in various cities and locations? The second thing is about water and sanitation, and access to basic knowledge. Can AI solve my education problem in tier 2 and tier 3 cities? In fact, none of us is talking about the elephant in the room. And that elephant in the room is that we are all super excited about AI, but we are not understanding that AI is also going to be the biggest dumbing-down element for society.

In fact, when you talk about AI itself, it is going to make bonobos out of us. Look at doom scrolling, look at Instagram: what is happening to our children? And that is exactly what AI is going to do. How do we set our education system right? That’s what you have to do in tier 2 and tier 3 cities. Last point, and that is about the higher education space itself. When you talk about your workforce, close to about 50% of it is underemployed. So I have to disagree with Praveen on one small point. He made a very powerful point in terms of saying how, after 2020, there has been a transformation.

In terms of how people are getting jobs, et cetera. I do agree with. But the larger point here is that we are also not preparing our workforce right, and that is happening in Tier 2 and Tier 3 cities. So we need to take it there. Potential exists. As of today, Maharashtra is the engine of growth in India. We cannot debate that. Even today, it is about close to 17 % to 18 % of India’s GDP, and it will define India’s growth story in the future, definitely. But if Maharashtra does it right, then the country can follow suit on this. And that is where things have to be.

Mr. Devroop Dhar

Thank you, Dr. Amit. And thanks to all the panelists. So with that, we’ll come to the end of the panel discussion.

Moderator

Thank you so much. Thank you to all our esteemed panelists and senior officers who are here. May I request all our panelists to step forward for a photo? Very interesting, sir. May I request you to join our esteemed panelists for this? Thank you. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (42)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Artificial intelligence is already reshaping governance, markets and geopolitics.”

The knowledge base states that AI is influencing governance, markets, public services and even geopolitics, confirming the claim.

Additional Context (medium)

“The answer is not control versus innovation but intelligent governance – a human‑centred, transparent, risk‑based and globally coordinated framework.”

S56 discusses the need to move deliberately and maintain things alongside acceleration, adding nuance to the concept of balanced, intelligent governance.

Confirmed (high)

“The AI Impact Summit 2026 is the first AI summit hosted in the global south.”

S116 notes that the AI Impact Summit 2026 will be the first global AI summit of this scale convened in the Global South, confirming the claim.

Additional Context (medium)

“The minister warned of digital‑health, disinformation, deep‑fakes and cyber‑fraud, proposing a combined response of robust cyber‑security, digital‑literacy and critical‑thinking, and a hybrid verification ecosystem.”

S123 highlights concerns around digital health technology and the need to address digital disparity, providing additional context to the minister’s warning about digital‑health risks.

Additional Context (medium)

“The State Data Authority is creating a single source of truth for health and other public datasets, aiming to monetise these assets for national benefit rather than allowing foreign exploitation.”

S124 discusses strategic sovereignty through data control and governance policies, adding nuance to the claim about a single source of truth and monetisation to protect national interests.

External Sources (128)
S1
MahaAI Building Safe Secure & Smart Governance — Mr. Virendra Singh established the intellectual foundation by reframing the central question facing policymakers. Rather…
S2
MahaAI Building Safe Secure & Smart Governance — – Mr. Praveen Pardeshi- Ms. Beena Sarkar – Ms. Beena Sarkar- Most other panelists
S3
MahaAI Building Safe Secure & Smart Governance — -Dr. Amit Kapoor- Role/Title: Chair, Institute for Competitiveness, Area of expertise: Economic policy, competitiveness,…
S4
MahaAI Building Safe Secure & Smart Governance — -Mr. Ashish Shelar- Role/Title: Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, Area of expert…
S5
AI Meets Agriculture Building Food Security and Climate Resilien — -Ashish Shailar- Honorable Minister (specific portfolio not mentioned)
S6
MahaAI Building Safe Secure & Smart Governance — -Mr. Ashish Shelar- Role/Title: Honorable Minister of IT and Cultural Affairs, Government of Maharashtra, Area of expert…
S7
https://dig.watch/event/india-ai-impact-summit-2026/secure-talk-using-ai-to-protect-global-communications-privacy — A guest on the fireside chat is a seasoned global telecom leader who has not only defined the arc of the industry, but a…
S8
Keynote-Rishad Premji — -Mr. Dario Amote: Role/Title: Not specified; Area of expertise: Artificial intelligence (described as pioneer and though…
S9
MahaAI Building Safe Secure & Smart Governance — -Major Ranjit Goswami- Role/Title: Head, Corporate Affairs, Tata Consultancy Services, Area of expertise: Technology sol…
S10
MahaAI Building Safe Secure & Smart Governance — – Mr. Ashish Shelar- Mr. Praveen Pardeshi- Mr. Yashasvi Yadav
S11
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thank you, sir, for sharing all these examples, wonderful examples. I’ll come to Yashishvi Yadav, sir. Cyber is another …
S12
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S13
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S14
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S15
MahaAI Building Safe Secure & Smart Governance — – Mr. Ashish Shelar- Mr. Praveen Pardeshi- Mr. Yashasvi Yadav – Mr. Suresh Sethi- Mr. Praveen Pardeshi
S16
MahaAI Building Safe Secure & Smart Governance — – Mr. Praveen Pardeshi- Mr. Ranjeet Goswami- Mr. Suresh Sethi – Mr. Virendra Singh- Mr. Suresh Sethi
S17
Building the Workforce_ AI for Viksit Bharat 2047 — Dr. Singh’s key assertion was that “artificial intelligence can substitute everything on this planet but it cannot subst…
S18
9821st meeting — For Mozambique, it is essential that the international community establishes norms and standards that promote trust and …
S19
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — The level of disagreement was moderate and constructive. Speakers shared common goals of protecting submarine cable infr…
S20
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — In conclusion, Brazil’s ongoing efforts to establish a comprehensive legal framework for AI regulation are commendable. …
S21
AI reshapes cybercrime investigations in India — Maharashtra police are expanding the use of an AI-powered investigation platform developed with Microsoft to tackle the ra…
S22
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Another concern is the challenges AI poses for law enforcement agencies. AI technology performs tasks at a pace that sur…
S23
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — Sure. Thanks. Thanks for your question. I think this builds on actually the last couple of comments. I mean, what we’re …
S24
Building Population-Scale Digital Public Infrastructure for AI — And I’ll give you a small example as to how diffusion is happening. First of all, Shankar, really honored to have worked…
S25
AI for Democracy_ Reimagining Governance in the Age of Intelligence — I believe this had been the most important event. We are more or less actually reaching to the… culmination of this hi…
S26
Regulating Open Data_ Principles Challenges and Opportunities — In fact, the real question is the criticality of data in identifying relevant areas of policy intervention by the state …
S27
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Amal El Fallah Seghrouchini:Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there …
S28
Cybercrime: Recognising and preventing malicious activities online — Given the potential harm caused by cybercrime and the challenges that law enforcement agencies face bringing offenders t…
S29
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 5 — Peru: Thank you very much, Mr. Chairman. Following the guiding questions for this section on CBMs and good practices, …
S30
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Quantum computing poses a potential threat to current encryption methods
S31
What is it about AI that we need to regulate? — Several countries shared their approaches to balancing innovation with governance. In WS #283, Jayantha Fernando from Sri…
S32
WS #98 Towards a global, risk-adaptive AI governance framework — Mateos emphasizes the need to balance innovation and regulation in AI risk frameworks. She argues that while protecting …
S33
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — In the document and then in our trainings, we have four pillars. They’re all linked. The first pillar is context-based a…
S34
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S35
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Harleen Kaur outlined the policy framework built around four pillars: treating foundational datasets as public goods, in…
S36
Global South at the heart of India AI plan — India has unveiled the New Delhi Frontier AI Impact Commitments, a new initiative aimed at promoting inclusive and respons…
S37
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And if you combine with the AI and you build your AI stack properly, you are looking for round the clock green power. So…
S38
Encryption — There are concerns that quantum computers, when widely available, could undermine current encryption techniques, renderi…
S39
Opening of the session/OEWG 2025 — AI can be used for automated phishing attacks and deepfake-based disinformation campaigns. Quantum computing has the pot…
S40
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — Cybersecurity | Human rights Emerging Technology Threats and Quantum Computing Impact IoT devices currently collect an…
S41
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S42
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S43
Gender rights online — AI systems can learn biases from training data, leading to discriminatory outcomes online, including gender-based dispariti…
S44
High-Level Session 4: From Summit of the Future to WSIS+ 20 — Walton raises concerns about the ethical implications of AI and other emerging technologies. He emphasizes the need for …
S45
Cybersecurity, cybercrime, and online safety — Lucien Castex:Thank you. Thank you very much, Veronika. It’s indeed quite important to bring a gender perspective to cyb…
S46
Disrupt Harm: Accountability for a Safer Internet | IGF 2023 Open Forum #146 — APC and their members are looking to shift the narrative around gender issues online to focus on the positive aspects an…
S47
Framework to Develop Gender-responsive Cybersecurity Policy | IGF 2023 WS #477 — Additionally, there was a discussion on the gender perspective of cybercrime legislation and the strategies employed. Je…
S48
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Zhang and Professor Gong Ke agreed on the fundamental importance of infrastructure development for AI advancement. Their…
S49
AI as critical infrastructure for continuity in public services — And if we use AI, we can also use it for the security of our business. And how can we train the national data? That’s wh…
S50
Multistakeholder Partnerships for Thriving AI Ecosystems — -Infrastructure and capacity building as foundational requirements: Discussion covered the need for sensing infrastructu…
S51
Inclusive AI governance: Universal values in a pluralistic world — embed intercultural philosophical engagement into policy-making processes; prioritise human-centric governance that protects…
S52
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S53
AI for Social Empowerment_ Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S54
Open Forum #30 High Level Review of AI Governance Including the Discussion — ## Introduction and Context Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this int…
S55
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — Increasingly, proposals across jurisdictions are pushing for content scanning or detection mechanisms in end-to-end encr…
S56
AI Meets Cybersecurity Trust Governance & Global Security — Alejandro Mayoral Banos: is not only a technical matter. It is essentially a human rights issue. We will discuss today…
S57
Advancing Scientific AI with Safety Ethics and Responsibility — -Balancing Open Science with Security: Panelists explored the challenge of preserving open science benefits while preven…
S58
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S59
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ### Initiative Background Jungwook Kim: Thank you. So Korea is ranked as one of the leading countries in OECD Digital G…
S60
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — man’s promise. It can enhance public service delivery, it can improve decision-making, it can optimize resource managem…
S61
Reinventing Digital Inclusion / DAVOS 2025 — Paula Ingabire discusses Rwanda’s focus on identifying AI use cases that can transform public sector delivery. This appr…
S62
Agentic AI and the new industrial diplomacy — Several trends are converging: UN and UNESCO frameworks emphasize that AI should augment human capabilities, not replace hum…
S63
Secure Finance Risk-Based AI Policy for the Banking Sector — It should be risk-based intensity. Fairness and non-discrimination. Third is explainability and transparency. And four…
S64
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the 9821st meeting of the AI Securi…
S65
HIGH LEVEL LEADERS SESSION IV — The analysis highlights several key points regarding the importance of a human rights-based approach to new technologies…
S66
Day 0 Event #165 From Policy to Practice: Gender, Diversity and Cybersecurity — Canada requires all policy initiatives and programs to undergo a Gender-Based Analysis Assessment (GBA+). This assessmen…
S67
Digital Policy Perspectives — Their activities are directed towards enhancing gender equality in the digital space, aligning with the aims of Sustaina…
S68
Human Rights-Centered Global Governance of Quantum Technologies: Implications for AI, Digital Rights, and the Digital Divide — **Dual-Use Risks**: Quantum technologies present both opportunities and threats, particularly regarding encryption and s…
S69
Launch / Award Event #169 Report Launch: Quantum encryption: blessing or havoc? — Strong consensus exists on the urgency of quantum threats, need for multi-stakeholder coordination, IoT vulnerabilities,…
S70
What is it about AI that we need to regulate? — Responsible Deployment of AI and Quantum Computing in Critical InfrastructureThe deployment of emerging technologies lik…
S71
Quantum-IoT-Infrastructure: Security for Cyberspace | IGF 2023 WS #421 — It is argued that governments and the technology industry need to continuously and significantly invest in quantum techn…
S72
MahaAI Building Safe Secure & Smart Governance — Artificial intelligence is real and it is influencing governance, markets, public services and even geopolitics. The que…
S73
Panel Discussion Inclusion Innovation & the Future of AI — This comment fundamentally redefines AI governance from a defensive, compliance-focused activity to a proactive, value-c…
S74
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S75
Main Session 2: The governance of artificial intelligence — The innovation vs. risk management debate represents a false dichotomy – both elements must be addressed simultaneously …
S76
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — So two years ago, the French Prime Minister’s Digital Directorate elaborated a strategy based on five pillars. The first…
S77
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S78
AI Meets Agriculture Building Food Security and Climate Resilien — The Chief Minister positioned Maharashtra as offering a compelling agri-innovation ecosystem, actively inviting venture …
S79
AI and Data Driving India’s Energy Transformation for Climate Solutions — -Building Climate and Energy Data Infrastructure: The discussion focused on creating unified, standardized data architec…
S80
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — And if you combine with the AI and you build your AI stack properly, you are looking for round the clock green power. So…
S81
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Ankush points out that data is the raw material for AI and that India’s massive data generation enables a sovereign AI e…
S82
Encryption — There are concerns that quantum computers, when widely available, could undermine current encryption techniques, renderi…
S83
Opening of the session/OEWG 2025 — AI can be used for automated phishing attacks and deepfake-based disinformation campaigns. Quantum computing has the pot…
S84
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Amal El Fallah Seghrouchini:Hello, everybody. I am very happy to talk about AI in cybersecurity. And I think that there …
S85
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — Cybersecurity | Human rights Emerging Technology Threats and Quantum Computing Impact IoT devices currently collect an…
S86
Dynamic Coalition Collaborative Session — Cybersecurity | Infrastructure | Encryption Quantum computing as threat versus solution The speaker presents a more op…
S87
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S88
Gender rights online — AI systems can learn biases from training data, leading to discriminatory outcomes online, including gender-based dispariti…
S89
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the 9821st meeting of the AI Securi…
S90
High-Level Session 4: From Summit of the Future to WSIS+ 20 — Walton raises concerns about the ethical implications of AI and other emerging technologies. He emphasizes the need for …
S91
S92
Keynote by Uday Shankar Vice Chairman_JioStar India — The tone is consistently optimistic and visionary throughout, beginning with congratulatory remarks and maintaining an i…
S93
Welcome Address — The tone is consistently optimistic, visionary, and confident throughout the speech. Modi maintains an inspirational and…
S94
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S95
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S96
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — The tone is consistently visionary, authoritative, and optimistic throughout. The speaker maintains an inspirational and…
S97
AI, Data Governance, and Innovation for Development — The overall tone was optimistic and solution-oriented, with speakers focusing on practical ways to overcome obstacles th…
S98
How Small AI Solutions Are Creating Big Social Change — The discussion maintained a consistently optimistic and collaborative tone throughout. Panelists demonstrated mutual res…
S99
Capacity Building in Digital Health — The discussion maintained an optimistic and solution-oriented tone throughout, with panelists acknowledging significant …
S100
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S101
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — The discussion aimed to examine the current state and future implications of quantum computing, focusing on both the opp…
S102
Shaping an inclusive global action to anticipate quantum technologies — Audience 6:So for anyone who is dealing with quantum computing, will know that the quantum computing is actually picture…
S103
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S104
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S105
Open Forum #46 Developing a Secure Rights Respecting Digital Future — The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet accessi…
S106
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — ### Ethics and Environmental Challenges Hakikur Rahman: This is Dr. Haktikur Rahman from International Standard Univers…
S107
WS #257 Emerging Norms for Digital Public Infrastructure — The tone of the discussion was largely analytical and academic, with panelists offering nuanced views based on their exp…
S108
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — The discussion maintained a professional yet urgent tone throughout, with speakers expressing both optimism about collab…
S109
Any other business /Adoption of the report/ Closure of the session — In conclusion, the session ended with a sense of accomplishment for the work done and a hopeful outlook for the future. …
S110
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S111
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S112
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S113
Information Society in Times of Risk — The discussion maintained a consistently academic and collaborative tone throughout. It was professional and research-fo…
S114
WS #162 Overregulation: Balance Policy and Innovation in Technology — James Nathan Adjartey Amattey, from the private sector in Africa, pointed out that the COVID-19 pandemic demonstrated th…
S115
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Congratulations on the declaration, sir. I just wanted to know, could you give us names of some of the countries that ha…
S116
Prominent United Nations leaders to attend AI Impact Summit 2026 — Senior United Nations leaders, including Antonio Guterres, will take part in the AI Impact Summit 2026, set to be held in…
S117
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — AI is a risk. AI is a risk. AI is a risk. is a dominant source of harm. That requires urgent attention and action. First…
S118
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — Sir, my question is in regards to the Global South. Since this was the first summit to be held in a Global South country…
S119
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — The technical requirements for trustworthy AI emerged through multiple perspectives. Valerian Ghez from photonic quantum…
S120
Microsoft at 50 – A journey through code, cloud, and AI — Microsoft, the American tech giant, was founded 50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his child…
S121
https://dig.watch/event/india-ai-impact-summit-2026/ai-driven-enforcement_-better-governance-through-effective-compliance-services — The vision of a sovereign SLM for the tax domain stands out as a transformative initiative. The session on a Roadmap on …
S122
A Conversation with Satya Nadella and Klaus Schwab — Satya has been at Microsoft for 32 years, having experienced major paradigm shifts in industry. This virtual forum allo…
S123
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — In conclusion, digital health technology holds immense potential for improving health systems globally. However, it is e…
S124
Agents of Change AI for Government Services & Climate Resilience — Governments can implement strategic sovereignty through data control and governance policies while pursuing longer-term …
S125
Policy Network on Artificial Intelligence | IGF 2023 — All the groups in the end navigated towards capacity building and included some recommendations or sentences on that.
S126
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Audience:Holly, please. Hi. I’m Holly Hamblett with Consumers International. We’re a membership organization of consumer…
S127
Panel Discussion: 01 — Capacity development | Artificial intelligence
S128
Seismic Shift — GW target for wind power had been met. While the deployment of utility-scale solar is largely on track, the deployment o…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Mr. Virendra Singh
2 arguments · 110 words per minute · 371 words · 201 seconds
Argument 1
Intelligent, human‑centered AI governance
EXPLANATION
He argues that AI governance must be built around human‑centered design, ensuring transparency and accountability. This approach places people at the core of AI policy rather than technology alone.
EVIDENCE
He states that the principle of AI governance should necessarily include human-centered design, transparency and accountability, risk-based regulations, global cooperation, and adaptive policies [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five core principles of “intelligent governance” (human-centred design, transparency, accountability, risk-based regulation, global cooperation and adaptive policies) are detailed in the MahaAI report [S1].
MAJOR DISCUSSION POINT
Human‑centered AI governance
AGREED WITH
Mr. Ashish Shelar, Mr. Suresh Sethi, Ms. Beena Sarkar
Argument 2
Risk‑based regulation, global cooperation, and adaptive policies
EXPLANATION
He highlights the need for regulation that balances speed and rigor, warning against both under‑regulation and over‑regulation. Risk‑based, globally coordinated, and adaptable policies are presented as the solution.
EVIDENCE
He notes that regulating too slowly risks harm while regulating too heavily risks stagnation, and emphasizes risk-based regulation, global cooperation and adaptive policies as part of AI governance [8-10][13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 emphasizes risk‑based regulation, coordinated global cooperation and adaptive policy frameworks as essential components of AI governance.
MAJOR DISCUSSION POINT
Balanced AI regulation
AGREED WITH
Moderator, Mr. Devroop Dhar
Mr. Ashish Shelar
3 arguments · 99 words per minute · 586 words · 351 seconds
Argument 1
AI‑powered Mahak Crime OS improves crime detection, transparency, and response
EXPLANATION
He describes an AI‑driven crime‑fighting operating system that accelerates investigation and enhances transparency. The system is presented as a concrete example of AI improving public safety.
EVIDENCE
He cites the AI-powered Mahak Crime OS, showcased by Microsoft’s Satya Nadella, which has transformed crime prevention, detection and investigation by enabling faster response, shorter investigation cycles and more transparent processes [47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-powered Mahak Crime OS is being rolled out to over 1,100 police stations in Maharashtra, accelerating investigations and enhancing transparency [S21].
MAJOR DISCUSSION POINT
AI for crime detection
Argument 2
Mahaiti cloud‑native infrastructure enables real‑time, AI‑driven public services
EXPLANATION
He outlines a cloud‑native, modular, API‑driven platform that uses AI to integrate services, predict needs and respond instantly. The infrastructure supports a range of smart‑city applications.
EVIDENCE
He describes Mahaiti, the state digital agency’s cloud-native, modular, API-driven infrastructure that uses AI to integrate services, predict needs and respond in real-time, supporting smart recruitment, property mapping, urban dashboards, flood management and smart mobility pilots [48-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MahaAI document describes Mahaiti as a modular, API-driven, cloud-native backbone that integrates services and delivers real-time AI-driven public-service outcomes [S1].
MAJOR DISCUSSION POINT
AI‑enabled digital infrastructure
AGREED WITH
Mr. Praveen Pardeshi, Mr. Suresh Sethi, Mr. Ranjeet Goswami
Argument 3
“Scale empathy through insight” – AI to make governance more human
EXPLANATION
He emphasizes that AI should bring the state closer to citizens, making governance faster, more responsive and inclusive. The guiding principle is described as scaling empathy through insight.
EVIDENCE
He explains the philosophy of using AI to make governance more human, faster, more responsive and inclusive, summarised as ‘scale empathy through insight’ [51-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The guiding philosophy of “scale empathy through insight” for more human, faster and inclusive governance is articulated in the MahaAI report [S1].
MAJOR DISCUSSION POINT
Human‑centric AI governance
AGREED WITH
Mr. Virendra Singh, Mr. Suresh Sethi, Ms. Beena Sarkar
Mr. Praveen Pardeshi
2 arguments · 171 words per minute · 601 words · 209 seconds
Argument 1
State data authority to monetize data responsibly and protect national interests
EXPLANATION
He argues that the state must treat large‑scale data as a strategic asset, monetising it while ensuring benefits remain with India. The health data of the population is given as a key example.
EVIDENCE
He explains that the state data authority is working to monetise large-scale data such as health data for pharmaceuticals, creating a single source of truth and ensuring commercial benefits stay with India rather than being given away for free [96-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reports that the state data authority is developing mechanisms to monetise large‑scale health data while ensuring commercial benefits remain with India.
MAJOR DISCUSSION POINT
Responsible data monetisation
DISAGREED WITH
Mr. Yashasvi Yadav
Argument 2
Maha GPT for querying government orders and citizen services
EXPLANATION
He presents ‘Maha GPT’, a small language model designed to parse over 150,000 government orders, enabling both officials and citizens to query the latest regulatory positions. This tool aims to increase transparency and accessibility of governance information.
EVIDENCE
He outlines the development of ‘Maha GPT’, a small language model that can disentangle over 150,000 government orders, allowing government officers and citizens to query the latest positions on permits, Supreme Court orders and other rules [108-114].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The MahaAI paper outlines the creation of “Maha GPT”, a small language model that can parse over 150,000 government orders for officials and citizens [S1].
MAJOR DISCUSSION POINT
AI‑driven government information access
Mr. Yashasvi Yadav
3 arguments · 129 words per minute · 756 words · 350 seconds
Argument 1
AI tools in Maharashtra Cyber Security Project detect and prevent cybercrime, saving lives
EXPLANATION
He highlights the deployment of AI across multiple cyber‑security functions, which has led to substantial financial recoveries and the protection of vulnerable individuals. The project is portrayed as a life‑saving initiative.
EVIDENCE
He notes that the Maharashtra Cyber Security Project employs AI tools to fight real crime, leading to over 1,000 crore rupees being frozen and returned to victims and to the rescue of more than 70 young girls from cyber-bullying, saving 70 lives within six months [122-124][130-136].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maharashtra Cyber Security Project leverages AI tools to combat real crime, freezing assets and rescuing victims, as described in the MahaAI report [S1] and reinforced by AI-driven cybercrime investigation case studies [S21].
MAJOR DISCUSSION POINT
AI for cyber‑security and victim protection
Argument 2
AI‑driven threat intelligence thwarted nation‑state cyber attacks
EXPLANATION
He cites a specific incident where AI‑based threat‑intelligence platforms blocked large‑scale attacks from multiple nation‑state actors. The success demonstrates AI’s strategic defensive capability.
EVIDENCE
He references the ‘Echoes of Pahalgam’ report where AI-driven threat-intelligence tools such as Luminar, Cognite and Pathfinder thwarted nation-state cyber attacks launched during a conventional war with Pakistan [139-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-based threat-intelligence platforms are highlighted as key to defending against sophisticated nation-state attacks in the IGF 2023 discussion on AI-driven cyber defence [S22].
MAJOR DISCUSSION POINT
AI‑enabled threat intelligence
Argument 3
Quantum computing threatens encryption; urgent preparedness needed
EXPLANATION
He warns that quantum computers can break current encryption standards, jeopardising financial systems and blockchain technologies. Immediate investment and preparedness are urged to stay ahead of global competitors.
EVIDENCE
He warns that quantum computing, capable of processing hundreds of millions of qubits in seconds, can break RSA, blockchain and banking encryptions, posing a major risk to financial systems and urging urgent preparedness [145-150].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both the MahaAI report and the Davos 2025 briefing note warn that quantum computers can break RSA, blockchain and banking encryptions, calling for immediate preparedness [S1][S30].
MAJOR DISCUSSION POINT
Quantum risk to cybersecurity
DISAGREED WITH
Mr. Praveen Pardeshi
Mr. Suresh Sethi
3 arguments · 162 words per minute · 643 words · 237 seconds
Argument 1
Population‑scale DPI provides data foundation for AI in identity, subsidies, and payments
EXPLANATION
He points out that India’s extensive digital public infrastructure, covering identity, payments and document storage, creates a massive data pool for AI applications. This foundation enables AI‑driven services at scale.
EVIDENCE
He notes that India’s population-scale digital public infrastructure, including identity systems, UPI payment rails and DigiLocker documents, provides a massive data foundation for AI applications [160-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Building population-scale digital public infrastructure as a data backbone for AI applications is discussed in the AI-for-development briefing [S24].
MAJOR DISCUSSION POINT
DPI as AI data backbone
Argument 2
Dynamic eligibility and predictive governance enabled by AI on verifiable credentials
EXPLANATION
He explains that machine‑readable, verifiable credentials allow AI to determine who is eligible for subsidies and to anticipate needs, shifting governance from reactive to predictive. This enhances precision and reduces errors.
EVIDENCE
He describes how verifiable, machine-readable credentials enable AI to determine dynamic eligibility for subsidies and support predictive governance by anticipating income distress and triggering benefits automatically [167-176].
MAJOR DISCUSSION POINT
AI‑driven dynamic eligibility
AGREED WITH
Mr. Ashish Shelar, Dr. Amit Kapoor
DISAGREED WITH
Dr. Amit Kapoor
Argument 3
Necessity of explainability, auditability, and human redress in AI‑driven DPI
EXPLANATION
He stresses that AI decisions must be transparent, auditable and subject to human oversight to maintain accountability. Explainability and redress mechanisms are presented as essential guardrails.
EVIDENCE
He stresses that AI systems must be explainable, auditable and include human redress pathways to ensure accountability when decisions about benefits are made [184-190].
MAJOR DISCUSSION POINT
Governance safeguards for AI
AGREED WITH
Ms. Beena Sarkar
Mr. Ranjeet Goswami
2 arguments · 156 words per minute · 361 words · 138 seconds
Argument 1
Tata’s holistic AI approach focuses on welfare, common databases, and Aadhaar integration
EXPLANATION
He invokes Tata’s legacy to argue that AI should serve welfare and happiness, requiring shared databases across departments and integration with Aadhaar. This holistic view aims to treat citizens as members of the state rather than of isolated agencies.
EVIDENCE
He references Tata’s 170-year legacy, stating that AI should serve welfare and happiness for all, which requires common databases across departments and integration with Aadhaar so that citizens are viewed as members of the state or country rather than as departmental entities [200-215].
MAJOR DISCUSSION POINT
AI for inclusive welfare
Argument 2
Collaboration between government and large tech firms to embed intelligence in core platforms
EXPLANATION
He highlights the need for partnership between public authorities and major technology companies to integrate AI capabilities into foundational platforms. Such collaboration is portrayed as essential for scaling intelligent governance.
EVIDENCE
He notes that embedding intelligence into core platforms requires collaboration between government and large technology companies, bringing platform intelligence to the core of public systems [216-217].
MAJOR DISCUSSION POINT
Public‑private AI collaboration
Ms. Beena Sarkar
2 arguments · 116 words per minute · 592 words · 306 seconds
Argument 1
Evaluation of emerging hardware (e.g., smart glasses) for privacy and gender safety
EXPLANATION
She raises concerns about new wearable devices that may infringe on privacy, especially for women, citing past issues with Google Glass. The argument calls for careful assessment before widespread adoption.
EVIDENCE
She highlights safety concerns of smart glasses, recalling the Google Glass recall due to non-consensual image capture, and stresses the need to assess privacy and gender-related risks of such devices [230-236].
MAJOR DISCUSSION POINT
Hardware privacy and gender impact
DISAGREED WITH
Mr. Ashish Shelar
Argument 2
Recommendation for India Safety Institute to vet technologies for societal impact
EXPLANATION
She proposes that the India Safety Institute, created in 2025, should act as the first line of defense to evaluate new technologies for potential threats to public safety, especially for women and children. This institutional mechanism aims to safeguard society.
EVIDENCE
She recommends that the India Safety Institute, established in 2025, should serve as the first line of defense to evaluate new technologies for potential threats to police, cybersecurity and societal safety, especially for women and children [244-250].
MAJOR DISCUSSION POINT
Institutional tech safety review
Dr. Amit Kapoor
3 arguments · 202 words per minute · 911 words · 269 seconds
Argument 1
AI can monitor nutrition, water, sanitation, and education at granular geographic levels
EXPLANATION
He suggests that AI can be deployed to assess essential services such as nutrition, water and sanitation down to the PIN‑code level, enabling targeted interventions. This granular monitoring is presented as a way to address malnutrition and basic service gaps.
EVIDENCE
He proposes using AI to assess nutrition, water, sanitation and education at the PIN-code level, enabling granular monitoring of malnutrition and basic services across Maharashtra [292-298].
MAJOR DISCUSSION POINT
AI for granular public health monitoring
Argument 2
Urgent need for upskilling, higher education, broadband connectivity, and affordable AI services
EXPLANATION
He points out the low skill levels of the majority of the workforce, inadequate broadband speeds, and the necessity for rapid investment in education, connectivity and affordable AI. These steps are deemed critical for inclusive AI adoption.
EVIDENCE
He cites that only about 20% of Maharashtra’s workforce is highly skilled, points to average broadband speeds of 58 Mbps in Mumbai, and calls for rapid investment in skill development, internet infrastructure and affordable AI services [270-283].
MAJOR DISCUSSION POINT
Capacity building and infrastructure for AI
Argument 3
Concern that AI may exacerbate mental‑health issues and “dumping” of society if not managed
EXPLANATION
He warns that unchecked AI could degrade societal well‑being, turning people into passive consumers and harming mental health, especially among children. The argument calls for safeguards in education and usage.
EVIDENCE
He warns that AI could become a ‘dumping’ element, turning people into ‘bonobos’, and that doom-scrolling on social media may harm children’s mental health, urging education system reforms [299-303].
MAJOR DISCUSSION POINT
AI’s societal and mental‑health risks
DISAGREED WITH
Mr. Suresh Sethi
Moderator
1 argument · 47 words per minute · 272 words · 342 seconds
Argument 1
Call for intelligent, adaptive governance mechanisms
EXPLANATION
The moderator urges the panel to adopt intelligent and adaptable governance frameworks that can keep pace with AI’s rapid evolution. This call underscores the need for flexible policy design.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Adaptive, intelligent policy design is one of the five pillars of the MahaAI governance framework [S1].
MAJOR DISCUSSION POINT
Adaptive AI governance
AGREED WITH
Mr. Virendra Singh, Mr. Devroop Dhar
Mr. Devroop Dhar
1 argument · 46 words per minute · 393 words · 510 seconds
Argument 1
Need for intelligent, adaptive policies to balance innovation and risk
EXPLANATION
He stresses that policies must be both intelligent and adaptable, striking a balance between fostering innovation and mitigating potential harms of AI. This perspective highlights the policy dilemma of speed versus safety.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses the need for policies that are both intelligent and adaptable to balance innovation with risk mitigation.
MAJOR DISCUSSION POINT
Balanced AI policy
AGREED WITH
Mr. Virendra Singh, Moderator
Mr. Anupam Chattopadhyay
1 argument · 0 words per minute · 0 words · 1 second
Argument 1
Quantum‑AI Intersection
EXPLANATION
He acknowledges that the convergence of quantum computing and AI represents a critical emerging frontier that requires focused research and policy attention.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The convergence of quantum computing and AI as an emerging research frontier is highlighted in the Davos 2025 report on quantum threats and opportunities [S30].
MAJOR DISCUSSION POINT
Emerging quantum‑AI research
Agreements
Agreement Points
Human‑centered and inclusive AI governance
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Suresh Sethi, Ms. Beena Sarkar
Intelligent, human‑centered AI governance · “Scale empathy through insight” – AI to make governance more human · Necessity of human redress in AI‑driven DPI · Evaluation of emerging hardware for privacy and gender safety
All four speakers stress that AI systems and policies must be designed around people, ensuring transparency, accountability, empathy and safeguards for vulnerable groups [13][51-53][189-190][241-243].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the inclusive AI governance framework that embeds universal values and human-centric principles in pluralistic settings [S51] and reflects the OECD’s multistakeholder AI governance work discussed at the AI Governance Open Forum [S54]; it also echoes the human-rights-based approach advocated for new technologies [S65].
Need for risk‑based, adaptive and balanced regulation
Speakers: Mr. Virendra Singh, Moderator, Mr. Devroop Dhar
Risk‑based regulation, global cooperation, and adaptive policies · Call for intelligent, adaptive governance mechanisms · Need for intelligent, adaptive policies to balance innovation and risk
Speakers agree that AI regulation must avoid both under-regulation and over-regulation by being risk-based, globally coordinated and adaptable to rapid technological change [8-10][13].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-based, adaptive regulation is echoed in sector-specific AI policy for banking that prioritises risk intensity, explainability and accountability [S63], in the EU AI Act’s focus on high-risk use cases while keeping low-risk AI lightly regulated [S58], and in OECD guidance on responsible deployment of AI and quantum technologies [S70].
Robust data infrastructure as foundation for AI‑enabled public services
Speakers: Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Suresh Sethi, Mr. Ranjeet Goswami
Mahaiti cloud‑native infrastructure enables real‑time, AI‑driven public services · State data authority to monetize data responsibly and Maha GPT for querying government orders · Population‑scale DPI provides data foundation for AI in identity, subsidies and payments · Collaboration on common databases and Aadhaar integration for inclusive welfare
All speakers highlight the creation of a unified, scalable digital public infrastructure (cloud-native platforms, DPI, common databases) that supplies high-quality data for AI applications in governance [48-49][96-114][160-166][210-215].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of data centres, compute resources and renewable energy for AI is highlighted in China’s AI-Plus Economy infrastructure roadmap [S48]; Poland’s public LLM initiative underscores AI as critical infrastructure for continuity of services [S49]; multistakeholder partnership discussions stress sensing infrastructure and data accessibility as prerequisites for thriving AI ecosystems [S50]; and the OECD Digital Government Index tracks data-infrastructure readiness in public sector AI adoption [S59].
AI as a catalyst for smarter, more responsive public service delivery
Speakers: Mr. Ashish Shelar, Mr. Suresh Sethi, Dr. Amit Kapoor
Mahaiti platform supports smart recruitment, property mapping, urban dashboards, flood‑management pilots · Dynamic eligibility and predictive governance enabled by AI on verifiable credentials · AI can monitor nutrition, water, sanitation and education at granular geographic levels
The panel concurs that AI can transform service delivery, from urban management to welfare eligibility and public-health monitoring, by providing real-time insights and predictive capabilities [48-49][167-176][292-298].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions at the Global Vision for AI Impact session note AI’s potential to enhance decision-making, resource management and service delivery in the public sector [S60]; Rwanda’s AI pilots illustrate how governments can use AI to transform public services before formal regulation [S61]; and Poland’s deployment of a national LLM demonstrates AI-driven public service innovation [S49].
Capacity building and skill development are essential for AI adoption
Speakers: Mr. Praveen Pardeshi, Dr. Amit Kapoor
Capacity building through AI university courses and online training for government staff · Urgent need for upskilling, higher‑education, broadband connectivity and affordable AI services
Both speakers stress that a skilled workforce and adequate digital infrastructure are prerequisites for effective AI deployment and inclusive growth [84-86][270-283].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity building is identified as a foundational requirement for AI ecosystems in multistakeholder partnership reports [S50]; the AI Policy Research Roadmap lists capacity building alongside inclusivity and accountability as core principles [S52]; discussions on AI for social empowerment stress skill development to mitigate labour market disruption [S53]; and the OECD Digital Government Index highlights the need for skilled personnel in AI-enabled public services [S59].
Safeguards – explainability, auditability and human oversight for AI systems
Speakers: Mr. Suresh Sethi, Ms. Beena Sarkar
Necessity of explainability, auditability, and human redress in AI‑driven DPI · Recommendation for India Safety Institute to vet emerging technologies for societal impact
Both emphasize that AI must be transparent, auditable and subject to human review, and that institutional mechanisms should evaluate new technologies before deployment [184-190][244-250].
POLICY CONTEXT (KNOWLEDGE BASE)
Banking sector AI policy mandates explainability, transparency and auditability as pillars of risk-based governance [S63]; algorithmic transparency is a recurring theme in AI Security Council deliberations [S64]; human oversight is emphasized in human-rights-centered AI governance frameworks [S65]; and safety-ethics discussions advocate for tiered access and contextual safeguards to balance openness with security [S57].
Similar Viewpoints
Both advocate that AI governance should prioritize human values, empathy and transparency rather than treating AI as a purely technical tool [13][51-53].
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar
Intelligent, human‑centered AI governance · “Scale empathy through insight” – AI to make governance more human
All three stress the strategic importance of a unified, high‑quality data ecosystem (DPI, state data authority, Aadhaar) as the backbone for AI‑driven governance [96-114][160-166][210-215].
Speakers: Mr. Praveen Pardeshi, Mr. Suresh Sethi, Mr. Ranjeet Goswami
State data authority to monetize data responsibly · Population‑scale DPI as AI data backbone · Common databases and Aadhaar integration for welfare
Both highlight quantum computing as a looming security challenge that intersects with AI, calling for proactive research and policy attention [145-150][152-154].
Speakers: Mr. Yashasvi Yadav, Mr. Anupam Chattopadhyay
Quantum computing threatens encryption; urgent preparedness needed · Quantum‑AI intersection as emerging frontier
Unexpected Consensus
Quantum computing as an imminent risk to cybersecurity and encryption
Speakers: Mr. Yashasvi Yadav, Mr. Anupam Chattopadhyay
Quantum computing threatens encryption; urgent preparedness needed · Quantum‑AI Intersection
While Yashasvi frames quantum as a security threat to encryption and financial systems, Anupam brings a research-policy perspective; both converge on the urgency of addressing quantum-AI convergence, an area not originally emphasized by other panelists [145-150][152-154].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of quantum technologies highlight dual-use risks and the threat to current encryption schemes, calling for coordinated policy responses [S68]; a consensus on the urgency of quantum threats and the need for multi-stakeholder action is documented in recent quantum-encryption reports [S69]; security implications for IoT and critical infrastructure are explored in the Quantum-IoT-Infrastructure workshop [S71]; and responsible deployment guidance stresses assessing vulnerabilities before integrating quantum capabilities [S70].
Gender‑focused hardware safety assessment
Speakers: Ms. Beena Sarkar, Mr. Ashish Shelar
Evaluation of emerging hardware (e.g., smart glasses) for privacy and gender safety · AI should make governance more human, faster, more inclusive
Beena’s call for gender-sensitive hardware vetting aligns unexpectedly with Ashish’s broader human-centric AI philosophy, linking device safety directly to inclusive, empathetic governance [230-236][51-53].
POLICY CONTEXT (KNOWLEDGE BASE)
Gender perspectives in cybersecurity policy have been raised at UN and IGF sessions, emphasizing the need to address diversity and safety in hardware design [S45][S46][S47]; Canada’s requirement for Gender-Based Analysis Plus (GBA+) in all policy initiatives provides an institutional model for gender-focused safety assessments [S66]; and broader digital gender-equality initiatives align with Sustainable Development Goal 5 to embed gender considerations in technology policy [S67].
Overall Assessment

The panel shows strong convergence around four core themes: (1) human‑centered, inclusive AI governance; (2) adaptive, risk‑based regulatory frameworks; (3) building a unified, population‑scale data infrastructure as the foundation for AI‑enabled public services; and (4) investing in capacity building, explainability and safeguards (including quantum and hardware risks).

High consensus – the majority of speakers echo each other’s positions across multiple domains, indicating a shared vision for responsible, people‑first AI deployment in Maharashtra. This broad agreement suggests that policy initiatives emerging from the summit are likely to receive cross‑sectoral support and can be advanced with confidence.

Differences
Different Viewpoints
AI’s societal impact – optimistic predictive governance versus concerns of mental‑health degradation and societal “dumping”
Speakers: Mr. Suresh Sethi, Dr. Amit Kapoor
Dynamic eligibility and predictive governance enabled by AI on verifiable credentials · Concern that AI may exacerbate mental‑health issues and “dumping” of society if not managed
Suresh Sethi argues that AI applied to verifiable credentials can create dynamic eligibility for subsidies and enable predictive governance, improving precision and reducing inclusion/exclusion errors [167-176][184-190]. Amit Kapoor counters that while AI is transformational, unchecked deployment could become a “dumping” element, turning people into passive consumers and harming mental health, especially among children, urging safeguards in education and usage [299-303].
Emerging hardware privacy and gender safety versus AI‑driven governance without explicit hardware safeguards
Speakers: Ms. Beena Sarkar, Mr. Ashish Shelar
Evaluation of emerging hardware (e.g., smart glasses) for privacy and gender safety · “Scale empathy through insight” – AI to make governance more human, faster, more responsive and inclusive
Beena Sarkar raises concerns about new wearable devices such as smart glasses, citing past privacy violations and emphasizing the need to assess gender-specific safety risks, recommending the India Safety Institute as a first-line evaluator [230-236][244-250]. Ashish Shelar promotes AI as a tool to make governance more human and inclusive, focusing on cloud-native infrastructure and AI-driven services without addressing hardware privacy implications [51-53][48-49].
Monetising large‑scale state data versus protecting data against quantum‑enabled security threats
Speakers: Mr. Praveen Pardeshi, Mr. Yashasvi Yadav
State data authority to monetize data responsibly and protect national interests · Quantum computing threatens encryption; urgent preparedness needed
Praveen Pardeshi argues that the state data authority should treat health and other large-scale data as strategic assets, monetising them while ensuring commercial benefits stay with India [96-102]. Yashasvi Yadav warns that quantum computing can break current encryption standards (RSA, blockchain, banking), posing severe risks to financial systems and urging immediate investment and preparedness [145-150]. The tension lies between exploiting data for economic gain and safeguarding it from emerging quantum threats.
Unexpected Differences
Hardware privacy and gender safety raised by a gender‑focused ethics advocate against a generally technology‑positive narrative
Speakers: Ms. Beena Sarkar, Other panelists (e.g., Mr. Ashish Shelar, Mr. Virendra Singh)
Evaluation of emerging hardware (e.g., smart glasses) for privacy and gender safety · Intelligent, human‑centered AI governance
While the majority of the panel celebrated AI’s role in improving governance and public safety, Beena Sarkar introduced a distinct concern about emerging hardware (smart glasses) potentially infringing on women’s privacy and safety, recommending institutional vetting. This focus on gender-specific hardware risk was not anticipated in the otherwise AI-centric discussion [230-236][244-250].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between gender-focused cybersecurity advocacy and prevailing technology-positive narratives is documented in IGF gender-cybersecurity workshops, where advocates call for explicit hardware safeguards while many participants emphasize broader tech optimism [S45][S46][S47].
Warning that AI could become a “dumping” element harming mental health, contrasting with the panel’s largely optimistic tone
Speakers: Dr. Amit Kapoor, Other panelists (e.g., Mr. Suresh Sethi, Mr. Ashish Shelar)
Concern that AI may exacerbate mental‑health issues and “dumping” of society if not managed · Dynamic eligibility and predictive governance enabled by AI on verifiable credentials
Amit Kapoor’s stark warning about AI turning people into passive “bonobos” and causing mental-health problems was unexpected given the panel’s focus on AI as a catalyst for efficiency, inclusion, and welfare. This introduces a cautionary perspective not echoed by other speakers who emphasized AI’s positive governance outcomes [299-303][167-176].
Overall Assessment

The panel largely converged on the promise of AI for smarter, more inclusive governance, but key tensions emerged around the balance between AI‑driven efficiency and societal risks. Disagreements centered on (1) the optimistic view of AI enabling predictive, dynamic services versus concerns about mental‑health impacts; (2) the need to monetize state data for economic benefit versus safeguarding it against quantum‑enabled security threats; and (3) the omission of hardware privacy and gender‑specific safety considerations in a technology‑positive narrative.

Moderate to high. While there is broad consensus on AI’s strategic importance, the divergent views on risk management, ethical safeguards, and data security indicate substantial policy friction that could affect the design of governance frameworks, requiring careful integration of protective measures alongside innovation.

Partial Agreements
All speakers share the overarching goal of leveraging AI to enhance governance, public services, and welfare. However, they differ on the primary mechanisms: Virendra emphasizes human‑centered design, transparency and adaptive policies [13][8-10]; Ashish highlights AI‑driven crime‑fighting tools and a cloud‑native infrastructure [47][48-49]; Praveen focuses on a small language model (Maha GPT) to improve information access [108-114]; Suresh stresses AI on verifiable credentials for dynamic eligibility and predictive governance [167-176]; Ranjeet calls for public‑private partnerships and common databases, especially Aadhaar integration, to embed AI across departments [200-215][216-217].
Speakers: Mr. Virendra Singh, Mr. Ashish Shelar, Mr. Praveen Pardeshi, Mr. Suresh Sethi, Mr. Ranjeet Goswami
Intelligent, human‑centered AI governance · AI‑powered Mahak Crime OS improves crime detection, transparency, and response · Maha GPT for querying government orders and citizen services · Dynamic eligibility and predictive governance enabled by AI on verifiable credentials · Collaboration between government and large tech firms to embed intelligence in core platforms
Takeaways
Key takeaways
AI governance must be intelligent, human‑centered, risk‑based, adaptive and globally coordinated rather than static or overly restrictive.
Maharashtra is positioning itself as a living laboratory for AI‑driven smart governance through initiatives such as Mahak Crime OS, the Mahaiti cloud‑native infrastructure, and Maha GPT.
AI is already enhancing law‑enforcement and cybersecurity, with measurable outcomes (e.g., fraud recovery, lives saved), and is essential for countering nation‑state cyber threats.
Quantum computing poses an imminent risk to current encryption standards; preparedness and investment are urgently needed.
Population‑scale Digital Public Infrastructure (DPI) provides a data foundation for AI in identity verification, dynamic eligibility, and predictive governance, but requires explainability, auditability and human‑redress mechanisms.
Collaboration between government and large tech firms (e.g., TCS, Microsoft) is critical for embedding intelligence into core public platforms and creating common databases.
Ethical AI considerations, especially gender‑related privacy and safety concerns, must be evaluated through dedicated bodies such as the India Safety Institute.
The socio‑economic impact of AI hinges on upskilling the workforce, expanding broadband connectivity, and ensuring affordable AI services, particularly in Tier‑2/3 cities.
AI can be leveraged for granular monitoring of nutrition, water, sanitation and education, but unchecked deployment may exacerbate mental‑health and societal “dumping” issues.
Resolutions and action items
Launch and operationalise Maha GPT to provide query access for government officers and citizens on orders, regulations and services.
Strengthen the State Data Authority to create a single source of truth for health and other public data and to monetize data responsibly for national benefit.
Continue scaling the Mahaiti cloud‑native, API‑driven infrastructure to support real‑time AI‑driven public services.
Expand AI‑powered Mahak Crime OS across additional law‑enforcement domains and maintain the 1930 helpline for cyber‑crime assistance.
Establish the India Safety Institute (or empower it) to vet emerging hardware and AI applications for privacy, gender safety and broader societal impact.
Implement capacity‑building programmes (AI university, IGOT online courses) for government staff to increase AI literacy.
Adopt explainability, auditability and human‑redress frameworks for AI decisions within DPI and welfare delivery systems.
Prioritise investment in green energy, broadband expansion and affordable AI services to support Tier‑2/3 city adoption.
Develop a coordinated quantum‑computing preparedness roadmap, including research funding and encryption‑resilience strategies.
Unresolved issues
Specific mechanisms for global interoperability and shared safety standards for AI governance remain undefined.
Details on how the State Data Authority will commercialise data while protecting privacy and sovereignty are not fully articulated.
Concrete standards and processes for AI explainability and auditability across diverse government departments are still pending.
The timeline, funding model and governance structure for the quantum‑computing preparedness initiative were not clarified.
How to systematically address AI‑induced mental‑health risks and societal “dumping” effects in Tier‑2/3 contexts needs further discussion.
Procedures for integrating Aadhaar with all departmental databases and ensuring data security were mentioned but not resolved.
Suggested compromises
Adopt a balanced, risk‑based regulatory approach (“intelligent governance”) that avoids both over‑regulation (stagnation) and under‑regulation (harm).
Combine AI automation with human oversight, ensuring explainable decisions and a clear human redress pathway.
Leverage existing public infrastructure (DPI, Aadhaar) while protecting individual privacy through audit and transparency measures.
Encourage private‑sector AI innovation while requiring adherence to ethical standards and safety vetting by the India Safety Institute.
Thought Provoking Comments
The question before us is not whether AI will shape governance. The question is whether governance is going to shape the artificial intelligence.
Frames the debate as a two‑way relationship, emphasizing that policy choices will determine AI’s trajectory rather than AI dictating policy.
Set the thematic foundation for the entire panel, prompting subsequent speakers to discuss governance frameworks, regulatory approaches, and the need for ‘intelligent governance’ rather than mere regulation.
Speaker: Mr. Virendra Singh
Use AI not to distance the state from the citizens, but to make governance more human, faster, more responsive, and more inclusive. In other words, scale empathy through insight.
Introduces the concept of ‘scale empathy’, linking technology deployment directly to citizen‑centric outcomes, and positions Maharashtra as a ‘living laboratory’.
Shifted the conversation from technical showcases to the purpose of AI in public service, influencing later speakers (e.g., Praveen Pardeshi and Suresh Sethi) to frame their initiatives around citizen impact and inclusivity.
Speaker: Mr. Ashish Shelar
We need to encash the data at a large scale… make health data a single source of proof and ensure commercial value stays with India, not foreign entities.
Highlights data as a strategic economic asset and raises the issue of data sovereignty and monetization, moving beyond operational AI uses.
Prompted a deeper discussion on data governance, leading Suresh Sethi to talk about auditable AI layers and Ranjit Goswami to mention common databases across departments.
Speaker: Mr. Praveen Pardeshi
In less than six months, more than 1,000 crore rupees have been frozen and returned to victims, and 70 lives saved, thanks to AI‑driven cyber tools.
Provides concrete, quantifiable outcomes of AI in law enforcement, demonstrating real‑world impact and building credibility for AI interventions.
Reinforced the narrative of AI as a protective tool, setting up the segue to his warning about quantum computing, which introduced a new risk dimension to the discussion.
Speaker: Mr. Yashasvi Yadav
Quantum computing can break RSA, blockchain, and banking encryptions within minutes; we are investing only $1 billion while rivals spend $15‑20 billion. We must prepare now.
Introduces a forward‑looking security threat that could undermine current AI and cyber safeguards, expanding the scope of the conversation to future‑proofing.
Shifted the tone from current successes to urgent strategic preparedness, prompting the moderator to bring in Dr. Anupam for a quantum‑AI perspective and adding urgency to the panel’s recommendations.
Speaker: Mr. Yashasvi Yadav
Dynamic eligibility and predictive governance: AI can move us from static identity verification to machine‑readable verifiable credentials that automatically determine subsidy eligibility.
Articulates a concrete evolution of public service delivery, linking AI to precision targeting of benefits and introducing concepts of inclusion/exclusion errors.
Deepened the technical discussion, leading other panelists to consider explainability, auditability, and human redress mechanisms, and reinforced the need for robust data standards.
Speaker: Mr. Suresh Sethi
AI should be built on a common citizen database, not siloed departmental records; Aadhaar must be integrated across all departments to see citizens as citizens of the state, not of a department.
Emphasizes systemic integration and the importance of unified data architecture for effective AI, moving the conversation from isolated pilots to holistic state‑wide strategy.
Encouraged consensus on data unification, influencing later remarks on data governance and prompting the audience to consider cross‑departmental collaboration.
Speaker: Mr. Ranjit Goswami
When new devices like smart glasses are released without safety oversight, they can threaten privacy and safety, especially for women; we need a dedicated safety institute to evaluate such technologies before market entry.
Brings gender‑focused ethical concerns to the fore, highlighting the gap between technological enthusiasm and societal safeguards.
Introduced an ethical dimension that shifted the discussion toward regulatory safeguards, influencing Amit Kapoor’s later critique of AI’s societal impact and reinforcing the need for ethical frameworks.
Speaker: Ms. Beena Sarkar
AI risks turning society into ‘bonobos’ through doom‑scrolling and content overload; without strong education, skill development, and affordable connectivity, AI will exacerbate inequality rather than alleviate it.
Offers a critical, socio‑economic perspective, warning of AI’s potential to deepen existing disparities and calling for systemic investment in education and infrastructure.
Served as a turning point that broadened the conversation from technical implementation to societal readiness, prompting acknowledgment of the ‘elephant in the room’ and reinforcing earlier calls for capacity building.
Speaker: Dr. Amit Kapoor
Overall Assessment

The discussion was driven forward by a series of pivotal remarks that moved the panel from high‑level optimism to a nuanced, multi‑layered debate. Early framing by Mr. Singh and Mr. Shelar set a citizen‑centric, governance‑focused agenda. Subsequent insights on data sovereignty (Pardeshi), tangible AI successes (Yadav), and emerging threats (quantum computing) introduced both opportunity and urgency. Technical depth was added by Sethi’s vision of dynamic eligibility and Ranjit’s call for unified citizen databases, while Beena’s gender‑bias concerns and Amit’s critique of societal readiness injected essential ethical and equity considerations. Collectively, these comments reshaped the conversation, prompting participants to address not only how AI can be deployed, but also how it must be governed, secured, and made inclusive, thereby steering the panel toward actionable, holistic recommendations.

Follow-up Questions
How can the state monetize and securely share large‑scale public data, such as health and pharmaceutical data, while ensuring it benefits the government rather than external entities?
Ensuring data sovereignty and capturing economic value from India’s massive health datasets is critical for national interest and to prevent exploitation by foreign actors.
Speaker: Praveen Pardeshi
What are the potential risks and benefits of quantum computing for encryption and national security, and how should India prepare for them?
Quantum computers could break current cryptographic standards, threatening financial systems, banking, and national security; proactive research and preparedness are essential.
Speaker: Yashasvi Yadav
What guardrails, explainability, auditability, and human redress mechanisms are needed for AI‑driven decision‑making in public service delivery?
Transparent and accountable AI systems are necessary to maintain public trust and ensure that automated decisions can be reviewed and corrected by humans.
Speaker: Suresh Sethi
How should ethical evaluation frameworks, such as the India Safety Institute, assess new AI‑enabled hardware (e.g., smart glasses) for gender bias and safety concerns?
Evaluating emerging devices for potential harm to women and vulnerable groups prevents misuse and aligns technology deployment with ethical standards.
Speaker: Beena Sarkar
What strategies can be employed to extend AI benefits to Tier‑2 and Tier‑3 cities, particularly for nutrition monitoring, water and sanitation, and education?
Targeted AI applications can address persistent development gaps in smaller cities, improving health, infrastructure, and learning outcomes.
Speaker: Amit Kapoor
How can AI be used to predict and prevent inclusion and exclusion errors in subsidy distribution?
Reducing leakages and ensuring rightful beneficiaries receive aid enhances the efficiency and fairness of welfare programs.
Speaker: Suresh Sethi
What steps are required to create a common, interoperable citizen database across government departments, linking to Aadhaar and other sources?
A unified data backbone enables seamless service delivery and avoids siloed information that hampers citizen‑centric governance.
Speaker: Ranjit Goswami
What capacity‑building programs and online courses are needed to empower government staff to effectively use AI tools?
Building AI literacy among civil servants is essential for successful implementation and avoids skill bottlenecks.
Speaker: Praveen Pardeshi
How can AI‑driven language models like Maha GPT be safely deployed for both officials and citizens, ensuring accuracy and privacy?
Deploying large‑scale conversational agents in governance requires safeguards to protect data, prevent misinformation, and maintain trust.
Speaker: Praveen Pardeshi
What research is needed on the societal impacts of AI, such as mental‑health effects from doom‑scrolling and content consumption among children?
Understanding negative social consequences helps shape policies that mitigate harm while leveraging AI’s benefits.
Speaker: Amit Kapoor
How can AI threat‑intelligence tools be enhanced to detect and mitigate nation‑state cyber‑attacks in real time?
Strengthening AI‑based cyber‑defence is vital to protect critical infrastructure against sophisticated state‑sponsored threats.
Speaker: Yashasvi Yadav
What policies and investments are required to close the gap in quantum computing research and development compared to other countries?
Closing the investment disparity ensures India remains competitive and can safeguard its digital ecosystem against future quantum threats.
Speaker: Yashasvi Yadav
How can AI governance frameworks be made adaptive and dynamic to keep pace with evolving AI technologies?
Static regulations quickly become obsolete; adaptive governance is needed to balance innovation with risk mitigation.
Speaker: Virendra Singh

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Global Perspectives on Openness and Trust in AI

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel convened by AI Now and AAPTI examined how the concept of “openness” shapes AI governance and its political economy, noting that the term does far more work than a simple technical label [12-17]. Participants argued that “open” functions as a proxy for broader values such as democratization, participation, and sovereignty rather than merely sharing code or model weights [16-17].


Alondra Nelson explained that whereas the Biden administration treated openness as a socio-technical gradient, in keeping with the original open-source ethos, the current administration frames it as a binary outcome: a model is either open or it is not [27-33][40-42]. She warned that this binary framing allows geopolitical concerns to eclipse accountability, transparency, and democratic control over AI systems [45-47], and added that U.S. AI policy now operates more through industrial levers such as tariffs, export controls, and high-cost H-1B visas, which she described as “hyper-regulatory” and less democratic than formal rulemaking [56-63][64-68].


Anne Bouverot highlighted that China’s use of open-source tools has enabled it to catch up technologically, while European countries view open source as a competitive lever for middle-power coalitions [75-84][88-92]. She argued that ad-hoc “coalitions of the willing” among middle powers can harness openness to build digital sovereignty without relying on a single dominant stack [89-92].


Astha Kapoor warned that for Global South nations, openness can become a risky “adoption” narrative that diverts resources from structural challenges and may turn these countries into test-beds for external AI models [111-119][124-126]. Ravneet Kaur described the Competition Commission of India’s study of AI markets, identifying risks such as ecosystem lock-in, price discrimination, and opaque partnerships, and emphasized that ensuring access to data, compute, and skills is essential for fair competition [128-138][148-158]. She argued that competition is a crucial tool for preserving national sovereignty in the AI era, requiring transparent governance and contestable markets [161-170][172-173].


Karen Hao presented two open-source initiatives, the BigScience multilingual LLM project and New Zealand’s Te Hiku Media speech-recognition model, that illustrate participatory, consent-driven openness and return value to data-providing communities [179-202]. She cautioned that scaling such models should not mean monopolistic distribution, but rather a decentralized “small-AI” approach that enables diverse industries and communities to develop their own solutions [207-212].


The discussion concluded that redefining openness as a democratic, community-centered practice, supported by transparent competition policy and inclusive coalitions, is essential for equitable AI development worldwide [40-42][161-170][207-212].


Keypoints


Major discussion points


Re-defining “openness” in AI beyond technical binaries – The panel opened by noting that “open” is a stand-in for broader values such as democratization, participation and sovereignty [12-17]. Alondra emphasized that the current U.S. administration has treated openness as a binary rather than a gradient and argued for a socio-technical view that links openness to power-shifting, accountability and community use [30-34][40-46][47-49]. Anne highlighted how open-source can be a strategic lever for middle-power countries while acknowledging its limits [75-89]. Karen illustrated concrete projects (the large-scale open-source LLM effort and the Te Hiku Media Māori speech-recognition model) that embody a participatory, consent-driven notion of openness [179-202].


Governance mechanisms and the politics of U.S. AI policy – Alondra pointed out that, although the current administration appears “light-touch” on formal regulation, it is exercising heavy influence through trade, export controls and immigration policy, which she described as “hyper-regulatory” and “anti-democratic” compared with traditional rule-making that includes public comment [55-66][67-68]. Amba’s follow-up question framed this shift as a move away from transparent, accountable regulation toward less publicly scrutinised levers [50-52].


Competition, market power and digital sovereignty – Ravneet Kaur explained the Competition Commission of India’s focus on anti-competitive practices (self-preferencing, bundling, exclusive agreements) across digital markets and, more recently, AI [128-138][141-152]. She argued that competition is essential for preventing entry barriers, ensuring transparency, and protecting sovereignty, especially for “global-majority” economies [161-170][166-170]. The discussion linked competition policy to broader concerns about data, compute and talent access [154-159].


Inclusion, representation and gender equity – Amba noted that the panel was the only all-female one at the summit, remarking that this should not have to be worn as a “badge of honor” and should be improved on in future iterations [4-5][69-74]. Audience members raised questions about who is truly included in the “all-inclusive” AI vision, pointing to the under-representation of Chinese participants and the need for gender-balanced engagement [298-306][311-317]. Karen later critiqued “corporate speak” that co-opts inclusion language while preserving closed platforms [254-258].


Community agency, labor and ethical risks – Alondra reflected on the lack of community transparency around data-center siting and the importance of community involvement in AI conferences [219-226][232-236]. Karen and later audience participants highlighted labor exploitation in data-collection pipelines and called for third-party labeling, “open-washing” safeguards, and design-by-consent approaches to protect workers and data subjects [277-283][369-381].


Overall purpose / goal of the discussion


The panel was convened to broaden the conversation about “openness” in AI governance, interrogate how power, politics and market structures shape AI development, and explore concrete pathways-through policy levers, competition law, community-driven projects, and inclusive representation-to align AI with the public interest across diverse geopolitical contexts (U.S., Europe, India, Global South).


Overall tone and its evolution


Opening (0:00-12:00): Formal, optimistic, and collaborative, with Amba framing the session as a “stimulating” exchange and participants outlining shared values around openness [1-3][12-17].


Middle segment (12:00-28:00): Becomes more critical and analytical; Alondra critiques the binary view of openness and the “anti-democratic” nature of U.S. policy [55-68]; Anne and Astha discuss geopolitical power shifts and the risks of a one-size-fits-all model [75-89][111-126]; Ravneet details concrete anti-competitive concerns [128-158].


Later segment (28:00-41:00): Reflective and hopeful, emphasizing community participation, concrete open-source case studies, and the potential of competition to safeguard sovereignty [179-202][219-236][161-170].


Closing (41:00-end): Cautiously optimistic, acknowledging corporate co-optation of inclusion language while urging deeper democratic engagement and concrete actions for labor justice and broader representation [254-258][369-381][389-391].


Overall, the tone moves from introductory enthusiasm to a nuanced critique of existing power structures, then toward constructive optimism about community-driven solutions and the need for inclusive, democratic AI governance.


Speakers

Amba Kak – Moderator and co‑host of the panel; affiliated with the AI Now Institute and the AAPTI Institute.


Alondra Nelson – Former Deputy Director of the White House Office of Science and Technology Policy (Biden administration); Harold F. Linder Professor, Institute for Advanced Study.


Anne Bouverot – France’s Special Envoy for Artificial Intelligence and for the AI Action Summit; former Director General of the GSMA.


Astha Kapoor – Representative of the AAPTI Institute / Civil Society, Asia‑Pacific Group; policy researcher on data stewardship.


Ravneet Kaur – Chairperson, Competition Commission of India.


Karen Hao – Journalist and author of Empire of AI, covering AI policy and ethics.


Audience members


Audience member 1 – Founder of Corral Inc.


Audience member 2 – Participant from a German delegation.


Audience member 3 – Student (asked about open‑source Chinese models).


Audience member 4 – Intellectual property and business lawyer.


Audience member 5 – Audience participant (question on AI’s impact on labor); no specific role identified.


Audience member 6 – Audience participant (question on “open‑washing”); no specific role identified.


Additional speakers:


None (all speakers appearing in the transcript are listed above).


Full session report: comprehensive analysis and detailed insights

The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate. Amba Kak opened by noting the “political economy of AI” as the common thread that links New York and Bangalore and highlighted the panel’s composition of senior figures from government, academia and journalism [1-3][4-5]. She also drew attention to the fact that this was the only all-female panel at the summit, framing it both as a point of pride and a reminder of the work still needed to normalise gender-balanced representation [4-5]. Kak also thanked Amlan Mohanty for co-conceptualising the panel and the summit organising team, Sanjana Mishra and Iksho Virat, for their logistical work [1-5].


A central theme introduced early on was the contested meaning of “openness”. Kak observed that discussions of openness have largely focused on technical affordances such as open-source code, model weights or hardware, yet the term is being used as a proxy for much broader values-including democratisation, participation, agency and even sovereignty [12-17].


Alondra Nelson (former Deputy Director of the White House Office of Science and Technology Policy) argued that whereas the Biden administration treated openness as a gradient, reflecting the original open-source ethos of shifting power and fostering accountability, the current administration has reframed it as a binary outcome: either a model is “open” or it is not [27-33][40-43]. She warned that this binary framing allows geopolitical concerns to eclipse the socio-technical dimensions of openness, such as transparency and democratic control, and that merely releasing model weights without accompanying data, APIs or governance mechanisms is insufficient [44-49].


Nelson explained that U.S. AI policy is increasingly pursued through industrial levers-tariffs, export controls, semiconductor restrictions and costly H-1B visas-rather than through traditional rule-making processes that invite public comment. She called the reliance on industrial levers a “hyper-regulatory” strategy and argued that, because it sidesteps formal rule-making, it is comparatively anti-democratic [55-63][64-68].


Anne Bouverot, France’s special envoy for the AI Action Summit, contextualised the geopolitical shift by recalling the U.S. announcement of the “Stargate” project and Vice-President Vance’s call for global customers [75-81]. She highlighted how China has leveraged open-source tools to catch up technologically, using open-source as a lever to gain a seat at the table [82-84]. For Europe and other middle-power nations, Bouverot argued that open-source can serve as a competitive instrument that enables “coalitions of the willing” to build digital sovereignty without having to develop an entire stack from scratch [88-92].


Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies. She explained that framing openness merely as a driver of data or multilingual datasets risks turning Global South countries into test-beds for external AI models, diverting attention from structural challenges in health, education and broader development [111-119][124-126].


Ravneet Kaur, Chair of the Competition Commission of India, presented the commission’s recent market study on AI, which identified anti-competitive practices such as self-preferencing, bundling, tying, exclusive agreements and ecosystem lock-in across digital markets [128-138][141-152]. She stressed that access to data, compute infrastructure and skilled talent is pivotal for fair competition, and that transparency and accountability throughout the AI lifecycle are essential to safeguard consumer welfare and national sovereignty [153-158][161-170][166-170]. Kaur positioned competition policy as a concrete tool to prevent market foreclosure and to ensure that AI systems remain contestable and transparent [161-166].


Karen Hao illustrated concrete realisations of a broader, participatory notion of openness. She described the BigScience multilingual large-language-model project, which brought together over a thousand researchers from 70 countries to create an open-source model with transparent data curation, shared governance and value-return mechanisms for contributing cultural institutions [179-182]. Hao also recounted the Te Hiku Media Māori speech-recognition initiative in New Zealand, where the community was consulted from the outset, consent was obtained for data use, and the resulting model was co-designed to serve language revitalisation goals [183-202].


Continuing the discussion on scale, Hao argued that the Silicon-Valley conception of “scale”-a single model distributed to everyone by a monopolistic provider-is misleading. She proposed that true scale should be understood as many communities developing their own, application-specific models, thereby avoiding the concentration of power inherent in monolithic large-scale systems [207-212].


Nelson reflected that, unlike many prior conferences, this summit actively included a broad cross-section of participants-students, “aunties” and other community members-making the event “revolutionary” in its inclusivity [232-236]. She also highlighted the lack of transparency around data-centre siting, where local officials are often bound by NDAs, underscoring a gap in community oversight of critical infrastructure [219-226].


The audience-question segment broadened the conversation. On individual agency, labour concerns and the risk of “open-washing”, Hao suggested that consumers can exercise agency by choosing open-source tools aligned with their values and called for third-party labelling schemes-similar to those used in fashion or food supply chains-to make the provenance and resource usage of AI models clear [277-283]. She warned that corporate rhetoric about inclusion often masks a strategy of locking users into closed platforms [254-258]. In response to a question about gender balance and Chinese participation, Astha Kapoor noted that democratisation is largely about market access and that true inclusion must go beyond token representation, urging more gender-balanced participation [300-306]. When asked about Chinese open-source models, Alondra observed that, although she has not worked directly with them, they can be fine-tuned to remove overt ideological bias and are already being leveraged by enterprises [312-318]. Finally, regarding an IP-focused query, Ravneet Kaur clarified that the Competition Commission’s remit is limited to curbing anti-competitive abuse and does not extend to adjudicating intellectual-property rights [322-328].


In closing, Kak thanked the participants and noted the richness of the dialogue, emphasizing that the consensus underscored openness as a socio-technical, democratic practice that must be coupled with transparent competition policy and genuine community participation [389-391]. She also urged future summits to include more regulator and enforcement voices so that AI actors are held accountable to the public [389-391].


Overall, the discussion highlighted that openness must be understood as a socio-technical, democratic practice, that competition policy can serve as a tool for digital sovereignty, and that genuine community participation-across gender, geography and sector-is essential for an AI future that serves the public interest [389-391].


Session transcript: complete transcript of the session
Amba Kak

The AI Now Institute and the AAPTI Institute, we are honored and delighted to be co-hosting this panel at the close of what has been an extremely stimulating, some would say over-stimulating week. What brings AAPTI and AI Now together, despite the many kinds of distance between New York and Bangalore, is our focus on the political economy of AI and our insistence that questions of technology are always questions of power. So we have a formidable panel by every standard, leaders in their field advocating for AI in the public interest, traversing several fields of government service, academia, and journalism, sometimes in the same person, as you will know if you read their bios, which I’m going to skip for reasons of expediency, but I’m going to talk through some of their specific advantages in the conversation.

You know, it always pains me a little bit to even bring it up, but I’m going to do it anyway, which is it is exceptional that this is also the only female-only panel at this symposium. Hopefully that’s not something we have to say a lot or something that we have to wear as a badge of honor, but more something to work on for future iterations. So before we begin, I don’t think he’s in the room, but I want to also thank Amlan Mohanty, who’s been a partner in conceptualizing and helping to bring this panel to light, and to our wonderful summit organizing team, Sanjana Mishra and Iksho Virat, for their tireless efforts. I hope you all get good sleep tonight after a very long week.

Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. There have been many discussions about openness at this summit. You’ve probably been in at least one of them. For the most part, these discussions have focused on the kind of technical affordances of open source, open-weighted models, open hardware. But what’s clear is that the word open is doing a lot of work in these conversations. It’s a stand-in for many much broader values of democratization, of participation, agency, even sovereignty. So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI.

And I’m going to start with Alondra. Alondra has been the deputy director of the White House Office of Science and Technology under President Biden. And at the time, there was a very heated debate about the geopolitical but also safety implications of open source and what U.S. government policy would be on these issues. And it seems like under this current administration, we’ve landed on a pro-open source overall orientation. But at the same time, it feels as if in many senses, AI governance in the United States is more closed than it has ever been. So I guess I wanted to ask, what do you see as the broader challenges to openness in AI governance today?

Alondra Nelson

Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. So a couple of things. I mean, I would say the Biden administration, I think, took the question of open weight models as a gradient, right? So it was a spectrum. So that open was not a binary. It’s either open or not open. And I think the new administration, the current administration, takes it much more as a binary, that open is a thing that you sort of have achieved and it is now open as opposed to being closed. I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic.

So certainly the questions around open models, AI models, are often around technical things like model weights. Are the model weights shared? Only the model weights shared? Is it also the case that the training data is shared? You know, is the API open to a certain extent or closed to a certain extent? So the technical things are certainly there. But I think if we go back to a sort of broader understanding of openness that comes out of sort of open source software, it was about shifting power. It was about forms of accountability. It was about sort of openness as a kind of practice and openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that could be, you could modify the technology, that you could sort of just use the technology for the sort of purposes of your community or the purposes that you had.

And so that meant that that older, I think, broader definition of open was much more about democracy and transparency and accountability in a way that if you take even, you know, a so-called open source model like Llama 2 or Llama 3, which isn’t really open source and that we’re… We’re being asked to be content with… model weights as open. So I think the, you know, why we want to really push back on that is because, you know, that we are often, I think, using geopolitical stakes as a justification for not doing the socio part of the socio-technical, for not doing the accountability and the transparency and the democracy part because, you know, too dangerous because in the UNESCO context, China, you know, these things just sort of sit in as signs for explanations for, you know, why things can’t be different.

And I think being reminded of a kind of broader sense of open reminds us that it’s not this binary, and that there obviously may be places where you don’t want open source. Like, do you want open-source AI for nuclear deployment? Probably not, right? But the debate gets carried forward as if every open-source or open-weight use is that use, as opposed to the gradient of uses that are much safer, and moreover are beneficial to communities, to helping people achieve their goals, and certainly much better for public transparency and accountability about what these systems do in the world.

Amba Kak

Can I ask a quick follow-up, and then I want to move to Anne? The other defining feature of U.S. government policy today is that it’s happening less through the traditional forms of regulation that we’re used to and much more through industrial policy, through trade policy, through immigration. But these are also spheres that have been relatively even more immunized from public accountability, or harder for the broader public to weigh in on. So I just wanted your thoughts on how we…

Alondra Nelson

Yes, I’ve been writing and thinking about this. Thank you for that question. So, you know, we’ve spoken a lot about the new administration, and it gets talked about as being deregulatory in regards to AI, as being very, quote, unquote, light touch. And I think if we actually pose that as a question, as opposed to accepting it as a statement, and actually look at what the current administration in the U.S. is doing around AI, it’s actually taking quite a heavy hand to steer AI. So you mentioned some of the levers that they’re using: tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration. So, you know, I think companies are getting out of it and around it depending on their relationship to Washington, but we’re told that an H-1B visa for a high-tech worker is $100,000 per worker, right?

And so that’s, you know, 10x, 20x or whatever times what a company would pay; that’s quite a lot of money. And also just the way that science is being funded, to the extent that the federal government plays a large role in driving the research ecosystem for technology. So all of those things are being very heavily shaped by the current administration in the U.S. And so it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper-regulatory, I think, in a lot of other ways. And I’ll go back to my keyword of the day, the democracy piece, which is the upside of formal rulemaking: it can be clunky, it can take a long time, sometimes the pace is too slow for the pace of the technology, all of those things can be true, but it has democratic input.

So if you’re doing a rulemaking in the context of the U.S. federal government, there will be a public notice that you’re doing the rulemaking, there will be a public call for input. So even if you don’t agree with the outcome, there are moments of democratic input. When we are doing AI policy by fiat and through executive authority only, those inputs, even those limited inputs, are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.

Amba Kak

Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve been at the heart of a lot of global coordination on AI governance. And there was a time, I would say the last 10 years, characterized by open versus closed as a kind of binary, a way of organizing the world into particular camps when it comes to AI: the democratic open world and the rest of the world. But it’s interesting how much the ground beneath us has shifted in the last few years. And it has been particularly interesting to note at this summit that it is middle powers as a frame that is coming through as a kind of new organizing principle.

So I guess I want to ask: do you see that openness still has value in forging multilateral solidarities, especially in this brave new world we’re in?

Anne Bouverot

Yes, absolutely. I mean, clearly the geopolitical landscape has really shifted. The AI Action Summit in Paris was exactly a year ago, in February. It was just after the inauguration in the U.S. It was the first international trip for Vice President Vance, and what a speech that was, just before Munich, the Munich Security Conference. It was the moment when the U.S. announced the Stargate project at the White House. So it was a very strong and loud message from the U.S. saying, we’re here, we’re investing, we’re the world leaders. And at the summit, J.D. Vance said very clearly, we want all of you to be customers of our technology. And at the same time, this is the moment when DeepSeek emerged on the world map, and everybody realized that China, using open source, which is why I want to come to that, was really saying, we have a seat at the table and we’re actually playing that game.

And China using open source is actually very interesting, because open source has a number of benefits and also risks. I don’t think it’s the answer to everything, but clearly it’s a way for challengers to catch up. This is how Android came to the world of smartphones. There are many examples, and this is what China has taken as a lever to be in that race. But then, what does it mean for countries other than the U.S. and China? It also means that this is a tool that can be used by other countries, which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others, to stand on their shoulders and continue to develop technology.

It doesn’t mean that everything should be open source; there are cases where you do want to be careful, depending on the use case. But as a way to develop and stimulate competition, it is very powerful. It’s not the only tool. You mentioned middle economies, middle powers. There was this fantastic speech by Mark Carney at Davos, and there was a speech by Macron as well that maybe I’ll conclude with. But this idea is that middle economies have some resources, not the resources to build their own stack top to bottom and to fund frontier-level AI, but together, by building coalitions of the willing, these middle economies can do a lot of things. I believe that Canada, France, Germany, Switzerland, India, Japan, Australia, I can name a few of them.

And it doesn’t have to be one big bloc of these middle powers, but ad hoc coalitions of the willing. So I believe this is really something that can be useful in the evolution of governance.

Amba Kak

That was a fascinating account, and I think what it also highlights is that, whether you’re China or the U.S. or the middle powers or France, there’s a level at which everyone, as we discussed, can in some limited way be pro-open source. So do you think then that the differentiation will be at the layer of governance, in our approaches to how we govern these technologies?

Anne Bouverot

I don’t know, is really the answer. Governance is such a broad word. For example, open source is really being taken as a tool by startups and scale-ups in Europe and in other countries. I mean, by Mistral, by Cohere, by Sakana AI in Japan, by a number of them. Is that governance? I don’t know. But clearly, governance and countries and institutions have a role to play in saying: how do we shape those coalitions of the willing? How do we put public funding, or access to publicly funded compute, or access to data sets that countries can help to put together, to use, and in which ways? So what are the governance tools that we use to strengthen digital sovereignty and resilience?

Amba Kak

Precisely, yeah, that’s sort of what I was getting at. Okay, Astha, I’ll quickly move to you. Middle powers, as we just discussed, is a very broad term, and what it conceals is that there are many different economic and political aspirations among the countries bundled in that mix. Especially for countries like India, or other countries in the global south, what are the unique forms of both leverage and dependence in this current environment?

Astha Kapoor

Yeah, thanks so much, Amba. I mean, I think that what we’ve been tussling with over the last few days is that we went from global south to middle powers very quickly, in a matter of days, which changes our frame a little bit, and our aspirations. And I think that is what we have to grapple with: as the global south, our needs are very different. We have structural issues around health, around education that need to be addressed. We also have, you know, things that we need to do in terms of moving the country forward beyond what is just technologically mediated progress. And I think that what we’ve been hearing over the last five days is that things like open data or multilingual data sets are what is going to be that push.

So, you know, our languages will now be online. But at the same time, we also have to realize that without having openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the global south doing the labor to bring people online. So openness as a driver of adoption is actually quite a dangerous frame for global south countries, because it moves attention from where we might need to invest our resources to thinking that the only answer to our historical problems is adoption. And we’ve also seen that in the absence of governance. India is not new to the openness discourse, right? We have had a history over the last 12 or 15 years with digital public infrastructure, but we’ve also seen the limits: once adoption occurs and you have innovation, people with the deepest pockets come to innovate there, because this is an enormous market.

So I think that, you mentioned Carney: if we are a middle power, we’re definitely on the menu as a market. If we are a global south country, I think there’s value in thinking about what that solidarity is, because you’re right, there’s no homogeneity. And I think we’ve missed some of those questions around what we as large markets diversify. We’re not here to do the labor to, you know, test-bed models that are built elsewhere. So I think openness as dialogue, as distribution of value, is what we need to think about.

Amba Kak

So many soundbites that I want to clip out of what you just said; that was incredible, thank you. Chairperson Kaur, firstly, thank you so much for being here. I think what Astha said actually leads in well to the question I wanted to ask you, which is: how does one combat this dependence? As the Chair of the Competition Commission of India, you’re a regulator that has been ahead of the curve in looking at anti-competitive trends in this market. So from your perspective, can you say a little about both the key implications of competition in the AI market and also whether you see competition as a lever in the so-called sovereignty toolkit?

Ravneet Kaur

Thank you, Amba. So for us at the Competition Commission of India, we’ve been looking at a lot of developments happening in the internet economy, and these developments have changed the way businesses work, how consumers interact with the markets, and how value is being created. So things are moving very rapidly on the digital front. And as the commission, we have looked at what can be the practices which can be anti-competitive. Apart from the benefits which are coming from a digital economy, and we have numerous benefits when it comes to economies of scale, the network effects, the efficiencies which are coming from that, there are also these risks. And some of these have already been observed by the commission.

So the key ones which we found in the case of digital markets are the self-preferencing which is happening; tying and bundling, which is occurring in numerous cases; leveraging; exclusive agreements where unfair terms are being sought; and, you know, parity agreements, parity arrangements being put in place. So at the Competition Commission, we have looked at this conduct when it comes to search engines. We’ve looked at it in mobile ecosystems, online intermediation services, whether it is hotel bookings, food ordering, e-commerce, or social media platforms. So across the entire spectrum, the commission has been looking at it. And very interestingly, we then started looking at AI: what could be the impact of AI?

So we did a market study on AI and competition, and the report has been released recently, in October 2025. It’s available on our website. And we found a lot of similarities in the way AI can function as well. AI can bring a lot of benefits. We are seeing a lot of benefits when it comes to healthcare, education, logistics, supply chain management, and agriculture, and I’m seeing a lot of good things happening on that front. But there are also these potential risks: you could see concentration in the entire AI value chain; there could be ecosystem lock-in; there could be targeted price discrimination against people based on location, economic means, et cetera.

And then exclusive partnerships, and the systems being opaque. So those were the things identified in the market study. And as a first step, we thought we need to make everybody aware, because the important issue is one of access. Who has the access? That is who will determine what will happen in future. So it is access to data, access to compute infrastructure, access even to skill sets: whether we are able to build up the required skill sets within the country to be able to compete effectively. So those issues have brought us to work towards a framework where we are asking: across the entire life cycle of the AI system, how can we bring in transparency, how can we bring in accountability?

Amba Kak

I think that’s so important, too, because we focus a lot on big tech control over infrastructure, the inputs people are familiar with, but I think what you’re pointing to is that it’s access to the consumer. The pathways to monetization are happening at the distribution layer, so really paying close attention to making sure that we have free and open competition in that layer, and that firms can’t take dominance from one market into another, seems really important. My second, maybe more provocative, question was: do you see competition as a tool for particularly global majority countries to retain and exercise sovereignty in the AI age?

Ravneet Kaur

When we look at AI, we are looking at how far we can develop, and how much we can do to make sure that we are able to deploy and monitor the AI systems that we are putting in place. And that’s where the issue comes up: we need to have the autonomy to be able to deploy the systems as per our economic, strategic, and societal priorities. And that’s where we see the very critical thing of how we can ensure that AI does that. Competition is a very important aspect of it. We just can’t forget about it, because competition is what is going to ensure that there are no entry barriers, that players who are already there are not using their dominance to foreclose competition, to foreclose the market, and also that consumers are not left locked in to a particular system because they can’t move their data and the various benefits that they are deriving from the AI systems to some other applications.

So really competition is at the heart of it, and I don’t see any way where we can forget about markets. Markets would need to be contestable, fair, competitive. And for that, you know, that is where I would like to point out about our study, that we have clearly brought out that people who are deploying the technology have to have technical transparency. The stakeholders have to be able to understand what’s happening, what this technology or this application is being used for. And then there has to be governance transparency: how you are governing that system also needs to be transparent. So once we are able to ensure that the people who are deploying these systems are looking at all these aspects, and the self-audit is happening, then maybe we would be able to safeguard competition, because at the crux of it all is maintaining competition.

Amba Kak

Thank you so much. Karen, I’m going to move to you. And just from the fact that there was a line of people trying to take a selfie with you before we started, I’m going to assume that many people in the audience are familiar with Karen’s incredible book, Empire of AI. Her work has really delved into the global inequities that are embedded in the global AI supply chain. I want to ask you: your book is full of rich examples, but where do you see that open approaches to developing AI in some ways pose a challenge to this empire model of AI?

Karen Hao

One example is the BigScience project. It brought together over a thousand researchers from 70 countries and 250 institutions to try to create an open source large language model that would not only allow many different researchers to interrogate what is actually happening beneath the surface of a large language model, but also to completely rethink what it would take to develop these technologies in a fundamentally more beneficial way: where, for example, there are better data governance practices, where you’re actually curating and cleaning the data, making it transparent for people, being able to track which data owners are contributing to what aspect of value generation within the model. And this kind of goes back to Alondra’s point as well, where you were saying…

…that we really need to understand openness with a much broader conception of what openness means. It’s not just technical openness. And this project really embodied that: they were working together with lots of different cultural institutions, with libraries, with historical institutions, to try to figure out better ways of capturing the rich data that they had, but with respect for each institution and with a way to deliver value back to that institution, so the value chain wasn’t going just to the model creators themselves. Another project that I really loved is one that I highlighted in the epilogue of my book, which is the Te Hiku Media AI speech recognition model. Te Hiku Media is a nonprofit radio station in New Zealand, and they broadcast in Te Reo Māori, the Māori language, the language of the Indigenous peoples of New Zealand.

A couple of years ago, there was this big movement within New Zealand to try to revitalize the Māori language, because it had almost been lost through the process of colonization. And Te Hiku Media thought they had a very unique opportunity with this rich archival audio of Te Reo Māori: to open this up to the community and help facilitate more language learning. They wanted to make it more accessible than simply allowing people to listen to it, though. They wanted to create an application where you listen to the audio while you see a transcription of the audio. You can click on the transcription to get automatic translation. You can figure out how the language actually works.

But they realized they didn’t have enough capacity to transcribe this, because there simply were not enough proficient Te Reo Māori speakers. So this was the perfect use case where they could build an AI speech recognition tool to do that work for them. But they went about this project in a totally different way. They made it extremely open and participatory to the community, not just in a technical way but in a social way. They engaged immediately with the community to ask them: do you want this AI tool? And once the community said yes, they then had a public education campaign where they taught everyone what AI is in the first place, what we actually need: we need a model, we need data, this is the kind of data that we need, this is the data that we would need from you.

And then once they actually engaged in that process and developed so much trust with the community, they were able to collect enough data from the community, with full consent, in just a few days to train a speech recognition model. And then they continued to go back to the community and said: now that we have this model, what kinds of applications do you actually want us to develop with it? What kinds of new AI models do you want to develop with it? And all of this was built on another open source project, the Mozilla Foundation’s DeepSpeech model, which was similarly developed with that kind of broader definition of openness: a model trained purely on consentful data donations.

And so the entire stack was built in the spirit of collaboration, with participation from everyone in the community, with an equal exchange of value, where the people who give the data have a vote, have a say in how the model can ultimately help support their journey in language learning. So both of those examples I always hold in my head when I’m thinking of what visions of AI we actually want to support, what visions of open source AI we actually want to support.

Amba Kak

So as you were speaking, I was just thinking: apart from being open and participatory in all the ways you said, these examples also provide a contrast to the idea that there is one model to rule them all, this very large-language, single-bet-on-a-single-technology type of approach. But one of the, I guess, common retorts to these experiments is that we can’t do that at scale. And so I’m just curious: what do you see as the tension between these kinds of governance structures and scale, and is there a trade-off?

Karen Hao

So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that’s not scale. That’s a monopoly. What we would really want from scale is different communities all around the world, different industries, different companies, each developing models by and for them, at scale. That’s, to me, a much more appropriate way of thinking about scale. And in fact, what’s so interesting is that because of the data imperative for large language models, and the compute imperative for large language models as they’re currently being trained by the main companies, they’re not going to be able to do that.

There isn’t a good ability to diffuse this technology across many different industries or many different communities. Most industries are data-poor industries. They’re not like the Internet industries; they don’t sit on vast amounts of data. And so if we actually want to diffuse AI to more people around the world and for more use cases around the world, we in fact need to think of scale from a small-AI perspective, a community-driven perspective, an application-specific perspective, and that’s how we’re going to get scale.

Amba Kak

Okay, we’ve heard, I guess, a range of rich perspectives, and I’m going to take it as a good sign that all our panelists seem to be actively taking notes and engaging with what the others were saying. So I was going to propose, as a sort of round two, that I might ask, just based on the conversation we’ve just had: Alondra, what is something that’s sticking with you, or that you’re working through in response?

Alondra Nelson

Yeah, I think community. So Karen teed that up for me, and the note that I was just writing here was about that. I was thinking about how the stack that we are building now is explicitly closed to community. And I was thinking in particular about the data center and cloud layer. So in the U.S. context, there’s a lot of contestation, growing contestation, in communities about data centers. What folks might not know is that part of the contestation is because elected officials are asked to sign NDAs, and contracts are being signed to stand up data centers in the dark of night, and communities don’t even know. So the lack of openness around that infrastructural piece of the AI stack is actually quite profound.

And then I was thinking the opposite. So, my reflection on the time here, which I’m still going to be processing for quite a long time: it’s my first time in New Delhi, my first time in India. It’s been an incredible experience. But I’ve been to a lot of AI conferences, you know, NeurIPS and everything, professional ones, non-professional ones. A lot. This is the first one I’ve ever been to that has included the community in any considerable way. And I think it’s a revolutionary thing. If we’re really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to.

So who knows what will be the outcome of this week together. But it has been extraordinary and distinctive in the inclusion of lots of, you know, uncles, aunties, college students, and lots in between.

Amba Kak

Astha, closing reflections.

Astha Kapoor

Yeah. First of all, thank you for that reframe. As somebody who was here on the 16th, I was feeling so overwhelmed, and my instinct was, there are too many people. But I do appreciate that reframe: that this is the community that is going to build and question and do the work that we all keep talking about. And from that, my word is also community, but also friction: how do we enable some of that, both the coalescing, but also the dialogue, the questions, the where-is-the-value-for-me part of it. And an example that was presented yesterday, on the Amul Co-op: we’ve been doing a lot of work with cooperatives, which to me is a nice space, because there is the governance question of one member, one vote, and you can pool things.

So how do they become not just recipients but co-designers in some of the things that we’ve heard over the last few days. So, yeah.

Amba Kak

Just closing reflections, and maybe even just a takeaway that you’re sitting with after this week.

Ravneet Kaur

Yeah, sure. So for me, the very important thing which came out from this AI Impact Summit is that governments need to be very active about how they are ensuring that the deployment of AI is happening. And for that, I am very happy with the way we are going: you know, we did a great job when it came to digital identity and digital payments. So now we are looking at digital public infrastructure, at how you’re going to be able to provide compute platforms for startups, for people who don’t have the resources, make data available, and then the focus which is there on small language models. Everything doesn’t need to be large, especially when we look at things which are very language-specific, very related to our country and to our solutions.

So that’s one of the key takeaways that I have. And the other, of course, is that all of us at the Competition Commission will be going back with this: that one needs to be very alert as to what kind of systems are being put in place and whether they are flexible. Is there transparency? Is there accountability? Those are the key things, because at the end of the day it is trust. If you can build up trust, if your systems are not opaque, then you will be able to get people on board onto your applications and your systems. And that’s where success lies; that’s where value is.

Amba Kak

I’ll say, ma’am, that one of my key takeaways (and hopefully someone from the Swiss government is listening for next year) is that we also need to see many more voices from the enforcers: those that are going to make sure that the players in this space are accountable to the public and not above the law. So I’m very grateful that you’re here, and I hope that future summits see more enforcers at the table. Okay, Karen, you get the last word, and then I’m going to open up for questions, so start thinking of yours.

Karen Hao

I think my biggest reflection from the summit, which I also shared at an event last night, is that it’s so interesting to observe corporate speak in these spaces. And the thing that struck me the most about this summit is that this corporate speak has gotten very sophisticated: they have adopted the language of inclusion, diversity, empowering marginalized communities to talk about ultimately selling their technology and making sure that you buy into helping them lock in their closed platforms. And I hope that, because we have more community engagement and there’s more openness in a lot of the discussions happening alongside this very sophisticated corporate speak, all of you will take away from the summit this broader idea of what it really means to build a future where AI can empower people.

It does not actually mean the democracy that the companies offer us. It in fact means that we should all be thinking very deeply about: what are the problems that we really need to solve, as individuals, within our families, our communities, our companies, our contexts? Is AI even the right solution for that problem? And then, how do we design and develop, from the ground up, AI solutions that truly are empowering and enabling and help tackle those problems and bring everyone along together?

Amba Kak

That was, yeah, what a great note to end on. And honestly, a note of optimism and a note to build towards the futures we want to see. Okay, so does anyone have any questions? Okay, I saw you first. Go ahead.

Audience member 1

Hi, everyone. And, yeah, I was one of the people in line looking for a signature on the book. So I’ve read it, Karen; it’s a reference book. And my question is addressed to you. So all of this makes sense, but it makes sense in a more macro way. From a micro perspective, where an individual is exposed to AI at their workplace, and we’re expected to use it, and there’s no getting away from it: how do we reconcile the fact that there is probably a whole lot of exploitation behind the models that we’re using, but at the same time, you can’t not use it, because it’s just everyday?

I don’t use it. Yeah. So I’d like to know a little bit more about that. How?

Karen Hao

No, I actually think it’s totally possible to not use these tools. But I would also say that oftentimes our conversations around adopting AI are posed as a binary: either you go completely all in, or you go none at all. And there are actually a million possibilities in between. There are so many different ways that you could refrain from using AI in certain contexts, but maybe there are other ways that it helps you: being more intentional about what kinds of AI tools you adopt, and from which kinds of companies. We’ve been talking a lot about openness, so maybe you choose to use more open AI technologies rather than closed ones. One of the things that I feel is missing right now within the AI ecosystem, and that makes the burden very high on consumers, is that we don’t really have third-party organizations doing analysis to make clear and easy labels for consumers to determine what values and what degree of resources are being used to develop different types of AI models, so that they can actually make informed decisions. But we have lots of precedent of this happening in other industries, like the fashion supply chain and food and coffee. And so I hope that someone out there listening will start working on this: develop some kind of third-party labeling system so that consumers can actually start making more informed choices.

The other thing that I would say is that we aren’t just consumers. That’s not the only way that individuals can push against the inevitability narratives of AI. We’ve seen amazing protests break out all around the world to push against data centers. We’ve seen protests from parents who feel that their children are being harmed and that this rapid escalation of AI advancement is getting out of control. We’ve seen artists and writers using the tools of litigation to counter these companies when they infringe on intellectual property in ways that the creators don’t stand for. There are many different ways: AI is everywhere within your life, and that also means you as an individual, and within your community, have a thousand different touch points for how you can interact with the AI supply chain. At each of those touch points you can choose whether to resist, or adopt, or be neutral. So, yeah, I hope that people actually feel significantly more agency than I think people generally feel today.

Amba Kak

Thank you. Okay, I think we should do a couple of questions. So you, you, and you. Okay, let’s go in that order. So we’ll take those three questions and then…

Audience member 2

Hello, thank you so much. This was, I think, my favorite panel of the whole summit. And also, like, an all-female panel; I think it’s nice. It’s also kind of connected to a reflection. You know, my question is, like, I feel like in this space I’ve realized there are not nearly as many women as men. And, again, as you said, it’s the only female panel. We’re here with a group of 15 people from Germany, and half of us are male and half of us are female. Often just our male counterparts get addressed, with somebody speaking only to them, you know, asking them for money or, like, in terms of pitching their business idea, whatever.

But I’ve also noticed other things. Like, the theme is, right, AI all-inclusive, right? But I’m wondering: who does this include, in this specific context? In which vision, like, do you understand, from this summit, who you think is included in this vision of all-inclusive? And also, I don’t know if anybody else has realized this, but I feel like China is quite an important power in the AI governance space, yet the number of Chinese people I’ve seen here is very low. It’s just something I noticed, so it’s still just some reflection. I wonder how you see this: what does this notion of all-inclusive mean for you, or how did you perceive it here?

Amba Kak

Thank you, those were many important and provocative questions you just asked.

Audience member 3

I was curious, kind of as a follow-up to our colleague here, about your view on the open-source Chinese models, which are clearly the most intelligent in the open-source space but clearly have a deep CCP perspective. So I’m curious, like, how does that come together in this ecosystem, and how can we leverage it appropriately?

Audience member 4

Hello. Thank you, panel, for the wonderful discussion. I’m an intellectual property and business lawyer, so my question is related to intellectual property, specific to Ravneet. I just wanted to know how you see the openness of AI in the context of intellectual property, as openness somewhere runs up against the restrictions that intellectual property imposes.

Amba Kak

Why don’t we start with that question?

Ravneet Kaur

Okay, sure. Sure. So when you look at intellectual property, you know, there is a lot of research, development and innovation which has gone into the development of that technology, and there are copyright and patent acts in place which are protecting that. When it comes to the Competition Commission, we come into the picture only if we find that there is an abuse: where whatever innovation has been done is being used to ensure that no other players can come into the same market, or is being used to enforce conditions which are unfair. That is the only space where we come in.

Otherwise, the purpose of the commission is not to stifle innovation. We are, in fact, there to protect innovation, because that’s the way to grow. That’s the way markets will grow further, competition will increase, and new players will keep coming in, with better technologies and better value for the customer. So consumer welfare is one of the very critical things we look at. That’s how we address these issues.

Amba Kak

I wonder if, Astha, you can speak to the gender question and that broader question on inclusion.

Astha Kapoor

Yeah. Thank you so much for that question. I think it’s what we’ve all been feeling as well. Based on what I have understood, in a very early, overwhelmed sense, inclusion, as Karen was saying, is also being used almost as a word for adoption. And I think that that is the primary framing that I’m taking away from this: democratization is about market access. The working group also says so.

And I think the gender perspective will also, and we’ve seen this again in previous iterations of the tech-will-save-us variety, financial inclusion, digital financial inclusion, which is like: get people online. And then what ends up happening is that when you realize that you’re not able to make money off, like, you know, the bottom 80%, then you start to get drop-offs there. So it is at that moment of the hype cycle of getting everybody online, and then whether we’re able…

Amba Kak

I don’t know, maybe you could take the question on Chinese open source AI and how we feel about it.

Alondra Nelson

I’ll try. I mean, one thing I would say: there’s been some news reporting on the fact that this week took place during the Lunar New Year, and Ramadan as well, and that probably had some impact on participation. So I think that shouldn’t be lost on any of us for this question of inclusion. I haven’t worked with the Chinese models, so I don’t know, but if they’re open-source models, you should be able to tune them so that they don’t have, you know, at least as much of that kind of CCP ideological control. I don’t know if you do that in the training data or at the inference level, or where you do it. And it seems that there are a lot of companies that are building on the Chinese models, even in the enterprise space, so that is clearly not a hurdle to some of the enterprise uses and applications that people want to build on them.

Amba Kak

I think we can take two more questions. Okay, so your hand, and I just want to take someone from the middle. You can go. Okay. The alarm just went off. So if you could also make sure that it’s a crisp question that would allow there to also be answers. Yeah.

Audience member 5

So I am really interested in how AI is going to impact labor. And one of the biggest concerns in this area is the fact that, you know, AI can train on the intellectual labor of so many people without giving credit, without giving compensation. There are obviously regulatory approaches to this, but I’m more interested in the new research that’s happening about protecting publicly available data, be it images, be it websites, be it written content, in a way that if that data is used directly by AI, it’s either useless to it or harmful to it. I think there’s some research happening at the University of Chicago around that, and some other places. So my question here is twofold.

First, is this a good approach to sort of protect intellectual property or data, by creating protection by design? And two, how does it tackle, how does it go with, the idea of openness? Right, because on the one hand, it’s…

Amba Kak

Thank you for the question. I just want to make sure we have time for the others. They’re going to kick us out of this room. That’s the final question and then maybe Karen, you can address the labour question.

Audience member 6

Hi, I wanted to ask about open washing. We’ve been hearing the term in previous discussions about openness and competition. I just wanted to ask, in terms of enforcement: how should competition authorities assess whether this openness is genuinely lowering entry barriers, or whether underlying dependencies still exist? Do we need new analytical tools? Does there need to be a reworking of the frameworks around competition? That’s essentially the question I wanted to ask. Thank you.

Amba Kak

Karen, and then Chairperson Kaur, you will have the last word.

Karen Hao

Sorry, can you remind me the very last part of your question? You were talking about… The labour one. Yes. I agree with everything that you said, basically, that, yes, this is a huge problem. Yeah, like labor exploitation is absolutely happening, both with the exploitation of the labor that is being used to produce the data and also labor exploitation of, like, data workers that are cleaning the data. And I think that just shows, given that the labor exploitation is happening all through the supply chain, that that is kind of inherent in the logic of how these models are being created, and we need to fundamentally rethink that from the ground up.

Ravneet Kaur

So when we do a competition assessment, we look at numerous economic factors that are taken into consideration; it is not based only on, you know, what has been submitted to us. A very detailed analysis is done to understand whether there is any competition harm. The other aspect that is looked into is the effects: is there an appreciable adverse effect? We have to establish both things, and this is done on a case-to-case basis after a very rigorous analysis of both the data which is available in the public domain and the analysis done by our internal teams. Only then are we able to determine whether there’s a harm to competition.

Amba Kak

Okay, thank you all so much for being here. This is such a rich conversation and thank you all for being part of it. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (32)
Factual Notes: Claims verified against the Diplo knowledge base (5)

Additional Context (medium)

“The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate.”

The knowledge base confirms that the AI Now Institute was a convening organization for a panel on openness, but does not mention the AAPTI Institute, so the AI Now involvement is corroborated while the joint role of AAPTI is not documented.

Confirmed (high)

“A central theme introduced early on was the contested meaning of “openness”.”

The panel discussion explicitly examined the concept of openness in AI, as recorded in the knowledge base entry on the Global Perspectives on Openness and Trust in AI panel [S1] and the European AI Governance Strategy discussion that foregrounded openness [S114].

Additional Context (low)

“The panel was the only all‑female panel at the summit, highlighting gender‑balanced representation needs.”

While the report’s claim about an all-female panel is not directly confirmed, the knowledge base contains entries discussing gender parity and the importance of diverse representation in AI forums [S108] and broader gender-equality initiatives [S40], providing contextual background.

Additional Context (medium)

“Open‑source can serve as a competitive instrument for Europe and other middle‑power nations, enabling “coalitions of the willing” to build digital sovereignty without developing an entire stack from scratch.”

The knowledge base links openness to digital sovereignty and strategic positioning, noting that open-source tools are discussed as means for nations to achieve technological autonomy and collaborative coalitions [S114] and in broader digital sovereignty debates [S117].

Confirmed (high)

“Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies.”

Astha (Aastha) Kapoor’s participation in a Global South-focused session on digital governance is recorded in the knowledge base, confirming her role as a speaker representing Global South concerns [S9].

External Sources (117)
S1
Global Perspectives on Openness and Trust in AI — -Ravneet Kaur- Chairperson of the Competition Commission of India
S2
Capacity Building in Digital Health — -Dr. Sarvjeet Kaur: Secretary of the Indian Nursing Council, represents 2.2 million nurses, regulatory role in nursing e…
S4
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S5
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao- Amba Kak – Ravneet Kaur- Amba Kak
S6
Less experienced, low-income users prefer an open, unlimited internet, a recent study reports — A recent Master thesis at the Oxford University contributes to a heated debate about the pros and cons of the zero-rated…
S7
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Astha Kapoor:Yeah. Thank you for this and thank you for the audience too for coming at this very early hour. I guess to …
S8
Global Perspectives on Openness and Trust in AI — These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sop…
S9
Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110 — Astha Kapoor, Aapti Institute, Civil Society, Asia-Pacific Group
S10
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S14
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S15
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Absolutely. Audience member 3: Namaste sir. I am a student. So my question is that what should be the effective strateg…
S16
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 4- Geeta, from GCC (Global Capability Center) background -Audience member 6- Role/title not mentioned
S17
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S18
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Sorry, we have a lot of people who’ve raised their hands. I think we can just probably take a couple of questions. I thi…
S20
Harnessing Collective AI for India’s Social and Economic Development — – Professor Manjunath- Audience Member 5
S21
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S22
AI Safety at the Global Level Insights from Digital Ministers Of — -Alondra Nelson: Professor who holds the Harold F. Linder Chair and leads science, technology, and social values lab at …
S23
A Digital Future for All (afternoon sessions) — – Alondra Nelson – Harold F. Linder Professor, Institute for Advanced Study Alondra Nelson: I do. I do. I mean, I thin…
S24
Global Perspectives on Openness and Trust in AI — -Alondra Nelson- Former deputy director of the White House Office of Science and Technology under President Biden
S25
Digital Technologies and the Environment: a Synergy for the Future — 17. Sengupta, Rajid, 2021. World needs to rethink internet use post-COVID-19 . Retrieved 30 November 2021 from: https://…
S26
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao – Ravneet Kaur- Karen Hao
S27
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S28
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S29
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S30
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S31
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S32
Study highlights inaccuracy of AI chatbots in providing election information — A recentstudyby the AI Democracy Projects, acollaborationbetween Proof News and the Science, Technology and Social Value…
S33
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — – Pedro Ivo Ferraz da Silva Environmental Impact and Climate Justice Valdivia criticizes the lack of democratic partic…
S34
Main Session on Artificial Intelligence | IGF 2023 — Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless c…
S35
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — I think there’s some research happening in University of Chicago around that and some other places. So my question here …
S36
Principles for governing the Internet — – The pillar on ‘human rights’ is clearly related to the objective of freedom of expression (information, communication…
S37
From Technical Safety to Societal Impact Rethinking AI Governanc — “And I do think that the political level, while we need technical inputs, the only force in the world”[93]. “How can you…
S38
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Dr. Aminu Maida from Nigeria described their shift from traditional command-and-control regulation to data-driven approa…
S39
Taming Competition in Low and High Orbit — Similarly, national competition in the space sector is seen as a positive factor, fostering security collaboration and s…
S40
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — In addition to research partnerships, the chapter demonstrates a firm stance on gender equality by planning to offer sch…
S41
Internet standards and human rights | IGF 2023 WS #460 — In conclusion, the lack of diversity in internet standards bodies, such as the IETF, is a significant concern. The under…
S42
Opening remarks — Rodrigo de la Parra:Thank you, Professor Glaser, thank you for the invitation. Your Excellency, Minister Luciana Santos,…
S43
From summer disillusionment to autumn clarity: Ten lessons for AI — We must approach this with a clear understanding. Trading our knowledge for AI services is not inherently bad – in fact,…
S44
Global AI Policy Framework: International Cooperation and Historical Perspectives — This comment provided a conceptual resolution to many tensions discussed throughout the panel. It offered a concrete pol…
S45
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S46
WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital — There is competition in the LEO satellite market between private companies and government-backed initiatives. This compe…
S47
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S48
Digital Policy Perspectives — Sulyna Abdullah:Thank you, Leona. First of all, I’d like to apologize for the misrepresentation of my photograph on scre…
S49
WS #323 New Data Governance Models for African Nlp Ecosystems — Melissa Omino: Thanks, Mark. I think that in order to have real equity, we need, we are required to think about communit…
S50
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — This data collection occurred without clear information or consent from the individuals, leading to ethical concerns, es…
S51
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — **Prevention measures:** Audience responses supported proactive approaches including impact assessments, community invol…
S52
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Challenges in Policy Implementation There is a need for transparency and collaboration in communicating policies to the…
S53
Global Perspectives on Openness and Trust in AI — “Hi, I wanted to ask about open washing”[48]. “Do we need new analytical tools?”[39]. “how should competition authoritie…
S54
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Competition authorities have recognised the need for ecosystem-level assessments. Competition authorities in developing…
S55
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — Thank you for the question. I just want to make sure we have time for the others. They’re going to kick us out of this r…
S56
EU Report calls for new antitrust rules for tech giants — The European Commission has published a report titled ‘Competition Policy for the Digital Era’examining the EU antitrust…
S57
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S58
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S59
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S60
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S61
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S62
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S63
Driving Social Good with AI_ Evaluation and Open Source at Scale — The conversation then shifted to the growing problem of AI-generated code submissions to open source projects. Sanket Ve…
S64
AI that serves communities, not the other way round — At theWSIS+20 High-Level Eventin Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S65
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S66
United Nations Office for Digital and Emerging Technologies — In hisRoadmap for Digital Cooperation,the UN Secretary-General recognised the critical role of open source solutions in …
S67
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier:Thank you for your very precise and important questions. First, as you said, most people of power has quit…
S68
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: All right, thank you very much for the session, and I’m very happy to join this panel. I’m from th…
S69
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S70
Discussion Report: Sovereign AI in Defence and National Security — Policy and Regulatory Considerations Regulatory frameworks can be adapted to different national contexts The moderator…
S71
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S72
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S73
Global Perspectives on Openness and Trust in AI — Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s …
S75
Democratizing AI: Open foundations and shared resources for global impact — This comment elevated the technical sophistication of the discussion and established credibility for Switzerland’s democ…
S76
How to make AI governance fit for purpose? — Anne Bouverot described Europe’s evolution from regulation-focused approaches toward innovation and practical outcomes. …
S77
Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress — During the Biden Administration, E.O. 14110 directed over 50 federal agencies to engage in more than 100 specific action…
S78
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S79
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S80
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Lucia Russo:OK. Well, thank you. So I’ve never done an analysis of all of the principles that exist, so I don’t know to …
S81
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — Nevertheless, collaboration with UN Women has amplified the registration of women-owned vendors, driving the figures fro…
S82
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Atsushi Yamanaka:Well, thank you so much, actually, it’s a very, very interesting questions. And then I have a few, actu…
S83
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — **Prevention measures:** Audience responses supported proactive approaches including impact assessments, community invol…
S84
WS #323 New Data Governance Models for African Nlp Ecosystems — Melissa Omino: Thanks, Mark. I think that in order to have real equity, we need, we are required to think about communit…
S85
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — This data collection occurred without clear information or consent from the individuals, leading to ethical concerns, es…
S86
Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Di — Participants stressed the importance of involving women, girls, persons with disabilities, and other marginalized groups…
S87
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S88
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S89
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S90
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S91
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S92
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S93
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S94
Open Forum #78 Shaping the Future with Multistakeholder Foresight — 2. **Complete systemic collapse** – Featuring internet fragmentation and breakdown of current governance structures Anr…
S95
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — The discussion maintained a consistently collaborative and constructive tone throughout. Speakers demonstrated mutual re…
S96
Workshop 2: The Interplay Between Digital Sovereignty and Development — Sofie Schönborn: the context for our interactive discussion. Thank you. Thank you so very much. It’s a pleasure to be he…
S97
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Tripti Sinha: Oh, thank you, Theresa. Thank you, Theresa. As you just said, I am very familiar with ICANN. So I’m gonna …
S98
BOOK LAUNCH: The law and politics of Global Competition — Competition laws are shaped by the unique history, culture, and values of each jurisdiction, which means that rules and …
S99
IN CONVERSATION WITH MITCHELL BAKER — Mozilla’s emphasis on open source technology and community building is another noteworthy aspect. They believe that open…
S100
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S101
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S102
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S103
Opening & Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — While the summit was seen as a step towards revitalizing multilateralism, some speakers noted the challenges in translat…
S104
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S105
AI Infrastructure and Future Development: A Panel Discussion — -Cost Reduction and Efficiency Breakthroughs: The discussion addressed dramatic cost reductions in AI (from $33 to $0.09…
S106
ICF 2023: Digital Commons for Digital Sovereignty | IGF 2023 Day 0 Event #82 — Audience:know who you are, and then we can proceed. Yes, definitely. I am Alexandre Costa Barboza. I’m a fellow at the W…
S107
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — High level of consensus with complementary rather than conflicting perspectives. The agreement spans technical experts, …
S108
Towards Parity in Power / DAVOS 2025 — The discussion also addressed the need for diversity within gender representation, acknowledging intersecting identities…
S109
The WSIS welcome Part I: Meet the Movers Behind It — Noteworthy observations from the session included an acknowledgment of the gender imbalance on the panel, which was reco…
S110
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — I would also like to thank all the team members. All the stakeholders, right from media, from the organizers, from ITPO,…
S111
Building Scalable AI Through Global South Partnerships — He reflects that dealing with traffic and other logistical issues during the summit taught the team patience.
S112
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S113
Panel Discussion AI & Cybersecurity | India AI Impact Summit — And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of cou…
S114
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — This response elevated the discussion from a binary choice between ‘might vs. values’ to a more nuanced exploration of h…
S115
© 2019, United Nations — In Africa, only some hubs have become ‘buzzing’ places, brimming with entrepreneurial activity (e.g. BongoHive in Zambia…
S116
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Despite these multifaceted benefits, there remains a discernible concern regarding the underappreciation of open source …
S117
Policy Network on Internet Fragmentation (PNIF) — Marilia Maciel: Thank you Bruna. I can take a couple of questions. Let me just say a few words about digital sovereignty…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Alondra Nelson
4 arguments · 175 words per minute · 1527 words · 520 seconds
Argument 1
Openness as socio‑technical, non‑binary (Alondra Nelson)
EXPLANATION
Nelson argues that openness should be understood as a spectrum rather than a simple yes‑or‑no condition, and that it encompasses socio‑technical dimensions such as accountability, democracy and shared infrastructure, not merely the release of model weights or code.
EVIDENCE
She notes that the Biden administration originally treated openness as a gradient, while the current administration tends to view it as a binary state, prompting her to call for a return to the broader, socio-technical conception of openness rooted in the open-source movement, which includes shifting power, accountability, and shared resources for communities [30-33][40-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nelson’s view of openness as a spectrum with socio-technical dimensions is articulated in [S1].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Amba Kak, Karen Hao
Argument 2
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
EXPLANATION
Nelson points out that U.S. AI governance increasingly uses policy tools such as tariffs, export controls, trade policy and costly immigration visas instead of traditional regulatory rulemaking, thereby reducing opportunities for democratic input.
EVIDENCE
She references the use of tariffs, trade policy, export controls on semiconductors, and the high cost of H-1B visas for high-tech workers as examples of the levers the administration employs, and observes that this approach is “hyper-regulatory” but lacks the public participation inherent in formal rulemaking processes [57-60][55-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She notes the shift to industrial, trade and immigration levers over formal rulemaking in [S1].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Amba Kak
Argument 3
Opacity of data‑center siting undermines community participation and democratic oversight
EXPLANATION
Nelson points out that the physical infrastructure of AI, such as data centres, is often built without informing or involving the local communities, which contradicts the broader notion of openness that includes democratic accountability.
EVIDENCE
She explains that elected officials are asked to sign NDAs, and that contracts for data-centre construction are signed covertly at night, leaving communities unaware of the installations and their impacts [224-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of democratic participation in AI infrastructure siting is highlighted in [S33].
MAJOR DISCUSSION POINT
Openness and democratic governance of AI infrastructure
Argument 4
AI conferences should prioritize community inclusion to foster democratic legitimacy
EXPLANATION
Nelson argues that AI events need to move beyond traditional professional gatherings and actively involve a diverse community of participants to ensure that AI development reflects democratic values.
EVIDENCE
She reflects that this summit was the first she attended that included a broad community of students, aunties, and other non-expert participants, describing it as a revolutionary and distinctive experience that could set a new standard for future conferences [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for active stakeholder involvement alongside openness is discussed in [S41].
MAJOR DISCUSSION POINT
Community participation in AI discourse
A
Amba Kak
4 arguments · 131 words per minute · 1825 words · 833 seconds
Argument 1
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
EXPLANATION
Kak describes the term “open” as a shorthand for broader values such as democratization, citizen participation, agency and even national sovereignty in the AI context.
EVIDENCE
She explicitly states that the word open is a stand-in for many broader values of democratization, participation, agency, and sovereignty [16-17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kak’s framing aligns with Nelson’s spectrum view of openness encompassing democratization and sovereignty [S1].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Alondra Nelson, Karen Hao
Argument 2
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
EXPLANATION
Kak observes that U.S. AI policy is moving away from conventional regulatory mechanisms toward industrial, trade and immigration policies, which are less transparent and harder for the public to influence.
EVIDENCE
She notes that AI governance is happening less through traditional regulation and more through industrial policy, trade policy and immigration, which are relatively more immunized from public accountability [50-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She points to the same policy-lever shift described by Nelson in [S1].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Alondra Nelson
Argument 3
Question on using competition to safeguard sovereignty (Amba Kak)
EXPLANATION
Kak asks whether competition policy can serve as a tool for global‑majority countries to retain and exercise sovereignty in the AI age.
EVIDENCE
She poses the question directly, asking if competition can be a lever in the sovereignty toolkit for AI-driven economies [160-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of competition in preserving national sovereignty is examined in [S39].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 4
Female‑only panel highlights gender imbalance; need broader inclusion (Amba Kak)
EXPLANATION
Kak points out that the panel is the only all‑female one at the summit, underscoring the broader gender imbalance in AI fields and the need for more inclusive representation.
EVIDENCE
She remarks that it is “exceptional that this is also the only female-only panel” and hopes this will improve in future iterations rather than being a badge of honor [4-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender representation concerns are echoed in studies on gender equality and diversity in tech governance [S40][S41].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
AGREED WITH
Alondra Nelson, Karen Hao
K
Karen Hao
6 arguments · 171 words per minute · 1765 words · 618 seconds
Argument 1
Openness must embed community participation, not just technical release (Karen Hao)
EXPLANATION
Hao stresses that true openness goes beyond releasing code or model weights; it requires active community involvement, consent, and shared value creation throughout the AI development pipeline.
EVIDENCE
She describes the Te Hiku Media project, where the community was consulted, educated, gave consent for data collection, and co-designed applications, illustrating a socially embedded openness beyond technical sharing [194-202]; she also references the BigScience initiative that involved many institutions and cultural partners to ensure transparent data governance [179-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle that openness requires community engagement is reinforced in [S41].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Alondra Nelson, Amba Kak
Argument 2
Corporate “open” language may mask lock‑in, requiring scrutiny (Karen Hao)
EXPLANATION
Hao observes that corporations often adopt inclusive and “open” rhetoric while actually promoting closed platforms that lock in users and profit from the narrative of openness.
EVIDENCE
She notes that corporate speak has become sophisticated, using language of inclusion and empowerment to sell technology while ultimately locking users into closed platforms [255-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Corporate rhetoric versus actual lock-in is critiqued in [S1] and further discussed in [S43].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 3
Community‑driven projects empower marginalized groups (Karen Hao)
EXPLANATION
Hao highlights how community‑focused AI initiatives can empower historically marginalized populations by providing tools that serve their specific linguistic and cultural needs.
EVIDENCE
The Te Hiku Media example shows a Māori-language radio station using an open, consent-based speech-recognition model to support language revitalization, with community members participating in data collection and application design [184-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-centric AI initiatives and their empowerment effects are noted in [S41].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
Argument 4
BigScience and Tahiku Media illustrate participatory open‑source AI at scale (Karen Hao)
EXPLANATION
Hao presents two large‑scale, open‑source projects that embody participatory principles, demonstrating that open AI can be pursued at both global research consortium level and local community level.
EVIDENCE
She describes the BigScience project, which coordinated over a thousand researchers from 70 countries to create an open-source LLM with transparent data governance [179-182]; she also details the Te Hiku Media speech-recognition effort that engaged the Māori community and used open-source tools to build a culturally relevant model [184-202].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
Argument 5
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
EXPLANATION
Hao argues that true scale should mean many communities each building models for their own contexts, rather than a single dominant provider distributing a monopoly‑like product.
EVIDENCE
She reframes scale as multiple communities developing their own models, criticizing the Silicon Valley notion of scale as a monopoly and noting that most industries are data-poor, limiting diffusion of large-scale models [207-211].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Competition as a safeguard against monopoly and for diverse model development is highlighted in [S39].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
AGREED WITH
Anne Bouverot
Argument 6
Data‑worker exploitation is inherent; requires rethinking of model creation (Karen Hao)
EXPLANATION
Hao points out that the AI supply chain relies on extensive labor for data collection and cleaning, often under exploitative conditions, and calls for a fundamental redesign of how models are built.
EVIDENCE
She states that labor exploitation occurs both in data generation and data-worker cleaning, and that this exploitation is built into the logic of current model creation, necessitating a ground-up rethink [380-381].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
A
Anne Bouverot
3 arguments · 140 words per minute · 645 words · 275 seconds
Argument 1
Open source as competitive, sovereign tool (Anne Bouverot)
EXPLANATION
Bouverot argues that open‑source software can serve as a strategic lever for countries to catch up technologically and assert digital sovereignty, providing shared infrastructure and fostering competition.
EVIDENCE
She notes that China’s use of open source gave it a seat at the AI table, that open source offers benefits and risks, and that Europe sees it as a competitive tool to leverage others’ knowledge while standing on their shoulders [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source as a lever for digital sovereignty is described in [S1] and [S39].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Karen Hao
DISAGREED WITH
Astha Kapoor
Argument 2
Middle powers can build ad‑hoc coalitions using open source to compete (Anne Bouverot)
EXPLANATION
She highlights that middle‑income nations can form flexible coalitions to collectively develop AI capabilities using open‑source resources, compensating for limited individual resources.
EVIDENCE
She cites speeches by Mark Carney and Emmanuel Macron, then lists countries such as Canada, France, Germany, India, Japan and Australia that can cooperate in ad-hoc coalitions to advance AI governance and competition [89-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of middle-power coalitions leveraging open source is discussed in [S39].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
Argument 3
Open source is not a universal solution; must be applied case‑by‑case (Anne Bouverot)
EXPLANATION
Bouverot cautions that open‑source is not a one‑size‑fits‑all answer; its suitability depends on specific use‑cases, risks, and contexts.
EVIDENCE
She remarks that “it doesn’t mean everything should be open source” and that open source is a tool, not a universal remedy, emphasizing the need for case-by-case assessment [84-86].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
A
Astha Kapoor
3 arguments · 185 words per minute · 852 words · 275 seconds
Argument 1
Openness can generate dependence for Global South (Astha Kapoor)
EXPLANATION
Kapoor warns that framing openness merely as a driver of adoption can create dependency for Global South nations, turning them into labor pools for AI development without addressing deeper structural challenges.
EVIDENCE
She explains that openness as a driver of adoption is dangerous because it shifts focus from needed investments to merely using AI, risking labor exploitation and dependence, especially when the entire AI stack lacks local control [118-119][111-118].
MAJOR DISCUSSION POINT
Defining Openness in AI
DISAGREED WITH
Anne Bouverot
Argument 2
Need to distinguish Global South needs from middle‑power aspirations; avoid labor‑only role (Astha Kapoor)
EXPLANATION
Kapoor stresses that Global South countries have distinct structural needs (health, education) and should not be reduced to test‑beds for AI models; their aspirations differ from those of middle powers.
EVIDENCE
She notes that Global South priorities involve structural issues, that they should not be merely labor sources for testing models, and that solidarity must recognize non-homogeneity among large markets [124-126][111-118].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
Argument 3
Cooperatives as one‑vote‑one‑share models for inclusive AI design (Astha Kapoor)
EXPLANATION
Kapoor proposes cooperatives, which operate on a one‑member‑one‑vote principle, as a governance model that can turn users into co‑designers of AI systems, ensuring more equitable participation.
EVIDENCE
She references the Amul cooperative example, highlighting its one-vote-one-share structure and its potential to move participants from mere recipients to co-designers of AI initiatives [243-245].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
R
Ravneet Kaur
4 arguments · 168 words per minute · 1442 words · 512 seconds
Argument 1
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
EXPLANATION
Kaur emphasizes that the Competition Commission of India prioritizes transparency and accountability throughout the AI lifecycle, viewing these as essential for building public trust and democratic oversight.
EVIDENCE
She outlines the commission’s focus on transparency in governance, access to data, compute, and skill-sets, and stresses that trust depends on non-opaque systems, linking transparency to competition oversight [101-107][153-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency and accountability in AI oversight are emphasized in [S38] and [S39].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Alondra Nelson, Karen Hao
Argument 2
Competition as lever to prevent lock‑in, ensure contestable markets, protect sovereignty (Ravneet Kaur)
EXPLANATION
Kaur argues that robust competition is essential to avoid entry barriers, market foreclosure, and data lock‑in, thereby safeguarding national sovereignty in the AI era.
EVIDENCE
She states that competition ensures no entry barriers, prevents dominance from foreclosing markets, and protects consumers from being locked into particular systems, linking this to autonomy and sovereignty [161-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Competition’s role in preventing lock-in and protecting sovereignty is examined in [S39] and [S38].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 3
Competition commission intervenes only on abusive practices, not on IP protection per se (Ravneet Kaur)
EXPLANATION
Kaur clarifies that the commission’s mandate is limited to addressing anti‑competitive abuses; it does not regulate intellectual‑property rights unless they result in market abuse.
EVIDENCE
She explains that the commission steps in only when there is abuse, aiming to protect innovation and consumer welfare, and does not stifle IP-driven innovation [315-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The commission’s limited mandate to address anti-competitive abuse, not IP per se, is reflected in [S35].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
Argument 4
Government must proactively build digital public infrastructure to support equitable AI development
EXPLANATION
Kaur emphasizes that state actors should create and provide shared resources such as compute platforms, data sets, and digital identity systems to enable startups and smaller players to develop AI solutions, especially for local language needs.
EVIDENCE
She cites the commission’s work on digital identity, digital payments, and the plan to offer compute platforms and data for startups, stressing the importance of small, language-specific models rather than only large-scale systems [246-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for shared digital infrastructure align with the shared-resource perspective in [S43] and [S1].
MAJOR DISCUSSION POINT
Public infrastructure as an enabling environment for AI
A
Audience member 2
1 argument · 190 words per minute · 256 words · 80 seconds
Argument 1
Observation of limited Chinese participation raises inclusion concerns (Audience member 2)
EXPLANATION
The audience member points out the low visibility of Chinese participants at the summit and asks how this affects the notion of an all‑inclusive AI vision.
EVIDENCE
She asks why Chinese representation is low and what “all-inclusive” means in this context, highlighting concerns about broader inclusion [298-306].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
A
Audience member 1
1 argument · 138 words per minute · 141 words · 61 seconds
Argument 1
Individuals need agency and labeling to choose ethical AI tools (Audience member 1)
EXPLANATION
The participant stresses that individuals need clear, third‑party labeling of AI products to make informed, ethical choices, and that they should have agency to adopt, resist, or remain neutral toward AI tools.
EVIDENCE
She calls for third-party organizations to create easy-to-understand labels for AI models, similar to labeling in fashion or food industries, and notes that individuals have many touch-points to decide how to interact with AI [277-282][283-286].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
A
Audience member 5
1 argument · 183 words per minute · 167 words · 54 seconds
Argument 1
Protection‑by‑design for data could safeguard IP but challenges openness (Audience member 5)
EXPLANATION
The audience member asks whether designing AI systems to protect intellectual property and data—by rendering data unusable for models—can be an effective approach, and how it interacts with openness principles.
EVIDENCE
She raises the question about protecting publicly available data through protection-by-design, referencing research at the University of Chicago and elsewhere, and asks how this aligns with openness [354-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The question about protection-by-design and its relation to openness is raised in [S35].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
A
Audience member 6
1 argument · 128 words per minute · 78 words · 36 seconds
Argument 1
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
EXPLANATION
The participant queries how competition authorities should evaluate “open‑washing,” i.e., whether claimed openness truly lowers entry barriers or masks underlying dependencies, and whether new analytical tools are required.
EVIDENCE
She asks whether enforcement needs new tools or a reworking of competition frameworks to assess genuine openness versus hidden dependencies [369-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for new analytical tools to assess open-washing is discussed in [S35] and [S39].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
DISAGREED WITH
Ravneet Kaur
A
Audience member 4
1 argument · 140 words per minute · 57 words · 24 seconds
Argument 1
Tension between patents and open models highlighted by IP question (Audience member 4)
EXPLANATION
The audience member seeks clarification on how openness interacts with intellectual‑property regimes, questioning whether openness restricts or conflicts with patent protections.
EVIDENCE
She frames the question by noting her background as an IP and business lawyer and asks how openness of AI relates to intellectual-property restrictions [309-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The interaction between openness and IP regimes is explored in [S43].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
A
Audience member 3
1 argument · 193 words per minute · 59 words · 18 seconds
Argument 1
Chinese open‑source AI models may embed CCP perspectives, raising governance concerns; mechanisms are needed to assess and mitigate political influence while leveraging technical strengths
EXPLANATION
The participant questions how to reconcile the technical excellence of Chinese open‑source models with the risk that they carry state‑driven ideological biases, and asks for ways to responsibly incorporate them into the broader AI ecosystem.
EVIDENCE
In their question they note that Chinese open-source models are “clearly the most intelligent in the open-source space but clearly have a deep CCP perspective,” and seek guidance on how to combine them appropriately within the ecosystem [308].
MAJOR DISCUSSION POINT
Geopolitical implications of open‑source AI
Agreements
Agreement Points
Openness should be understood as a socio‑technical, non‑binary concept that goes beyond merely releasing model weights or code and includes democratization, participation, shared infrastructure and sovereignty.
Speakers: Alondra Nelson, Amba Kak, Karen Hao
Openness as socio‑technical, non‑binary (Alondra Nelson)
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
Openness must embed community participation, not just technical release (Karen Hao)
All three speakers stress that ‘open’ is a stand-in for broader democratic values and that true openness involves community engagement, accountability and shared resources, not just technical openness such as releasing weights. Alondra describes the shift from a gradient to a binary view and calls for a broader socio-technical definition [30-33][40-43]; Amba explicitly frames openness as shorthand for democratization, participation and sovereignty [16-17]; Karen illustrates this with the BigScience and Te Hiku Media projects that embed community consent and value sharing [179-182][194-202].
POLICY CONTEXT (KNOWLEDGE BASE)
This framing matches the UN Secretary-General’s roadmap that positions open-source solutions as a means to advance the Sustainable Development Goals and strengthen digital sovereignty, and it is echoed in analyses of digital public goods that stress community-driven governance and reduced dependency on proprietary platforms [S66][S65][S61].
U.S. AI governance is increasingly being pursued through industrial, trade and immigration policy levers rather than traditional regulatory rulemaking, reducing opportunities for public democratic input.
Speakers: Alondra Nelson, Amba Kak
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
Both speakers note that the current U.S. administration is steering AI policy via tariffs, export controls, H-1B visa costs and other industrial tools, bypassing the formal rulemaking process that would allow public comment. Alondra points to tariffs, export controls and costly H-1B visas as examples of the new “hyper-regulatory” approach that lacks democratic input [57-60][55-60]; Amba observes the same shift and its opacity for the broader public [50-52].
Community participation and inclusion are essential for legitimate AI governance and should be embedded in conferences, projects and policy discussions.
Speakers: Alondra Nelson, Karen Hao, Amba Kak
AI conferences should prioritize community inclusion to foster democratic legitimacy (Alondra Nelson)
Openness must embed community participation, not just technical release (Karen Hao)
Female‑only panel highlights gender imbalance; need broader inclusion (Amba Kak)
All three stress that AI discourse must move beyond elite circles to involve diverse community members. Alondra describes this summit as the first with a broad community of students, aunties, etc., calling it revolutionary [232-236]; Karen warns that corporate “open” language can mask lock-in and stresses genuine community engagement as shown in the Te Hiku Media example [255-256][194-202]; Amba points out the gender imbalance of the panel and the need for broader representation [4-5].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for broad stakeholder involvement is highlighted in interdisciplinary AI governance forums such as the IGF and UNESCO initiatives, and was a central theme of the WSIS+20 High-Level Event on building AI capacity from the ground up in the Global South [S57][S64][S68].
Open‑source software can serve as a strategic lever for countries, especially middle powers, to build digital sovereignty, foster competition and avoid dependence on a single dominant provider.
Speakers: Anne Bouverot, Karen Hao
Open source as competitive, sovereign tool (Anne Bouverot)
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
Both argue that open-source is a tool for nations to catch up technologically and maintain sovereignty. Anne notes China’s use of open-source to gain a seat at the table and Europe’s view of it as a competitive lever, and highlights ad-hoc coalitions of middle powers [82-88][89-92]; Karen reframes scale as many communities building their own models, warning that the Silicon Valley notion of scale creates monopolies [207-211].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses from the UN digital cooperation agenda and the digital public goods discourse argue that open-source enables middle-power states to reduce reliance on dominant vendors and promote competitive ecosystems [S66][S65][S70].
Transparency and accountability throughout the AI lifecycle are essential for building public trust and ensuring fair competition.
Speakers: Ravneet Kaur, Alondra Nelson, Karen Hao
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Openness as a practice includes accountability and shared infrastructure (Alondra Nelson)
BigScience project illustrates transparent data governance (Karen Hao)
All three emphasize that openness must be paired with transparent governance to foster trust. Ravneet outlines the commission’s focus on transparency in data, compute and governance as a condition for competition and trust [101-107][153-158]; Alondra links openness to accountability and shared resources [40-43]; Karen describes the BigScience initiative’s transparent data curation and value-return to contributors [179-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency and accountability are core pillars of the European Commission’s ‘Competition Policy for the Digital Era’ report, which calls for adapting antitrust rules to safeguard fair competition in AI markets [S56][S53].
Similar Viewpoints
Both see the U.S. moving away from conventional regulatory rulemaking toward industrial, trade and immigration tools, which diminishes democratic participation and transparency in AI governance. Alondra cites tariffs, export controls and H‑1B visa costs as examples of this “hyper‑regulatory” approach [57-60][55-60]; Amba notes the same shift and its relative immunization from public oversight [50-52].
Speakers: Alondra Nelson, Amba Kak
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
Both view open‑source software as a strategic instrument for nations (especially middle powers) to achieve digital sovereignty and avoid concentration of power. Anne highlights open‑source as a lever for competition and coalition‑building among middle powers [82-88][89-92]; Karen argues that true scale means many communities building their own models, countering monopoly dynamics [207-211].
Speakers: Anne Bouverot, Karen Hao
Open source as competitive, sovereign tool (Anne Bouverot)
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
All three stress that openness must be coupled with transparent, accountable governance to build trust and ensure fair competition. Ravneet links transparency to competition and consumer trust [101-107][153-158]; Alondra ties openness to accountability and democratic practice [40-43]; Karen points to the BigScience consortium’s transparent data handling as a model of open governance [179-182].
Speakers: Ravneet Kaur, Alondra Nelson, Karen Hao
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Openness as a practice includes accountability and shared infrastructure (Alondra Nelson)
BigScience project illustrates transparent data governance (Karen Hao)
Unexpected Consensus
Recognition that open‑source projects can be scaled through community‑driven, small‑AI approaches rather than monolithic large‑scale models.
Speakers: Karen Hao, Anne Bouverot
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
Open source as competitive, sovereign tool (Anne Bouverot)
While Anne frames open-source primarily as a geopolitical and competitive lever for nations, Karen extends the argument to a technical-scale perspective, asserting that true scale is achieved by many small, community-specific models rather than a single dominant one. This convergence of geopolitical and technical scaling arguments was not explicitly anticipated. Anne’s discussion of middle-power coalitions using open-source [89-92] aligns with Karen’s reframing of scale as distributed community development [207-211].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent initiatives such as African small-AI language datasets and concerns about low-quality AI contributions to open-source repositories illustrate a shift toward lightweight, community-led models instead of monolithic systems [S62][S63][S68].
Agreement that competition policy is a crucial tool for protecting national sovereignty in the AI era.
Speakers: Amba Kak, Ravneet Kaur
Question on using competition to safeguard sovereignty (Amba Kak)
Competition as lever to prevent lock‑in, ensure contestable markets, protect sovereignty (Ravneet Kaur)
Amba explicitly asks whether competition can be part of a sovereignty toolkit [160-162]; Ravneet later affirms that competition prevents market foreclosure and protects autonomy, linking it directly to sovereignty [161-166]. The alignment of a moderator’s probing question with a regulator’s policy stance was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Competition authorities are increasingly invoking competition law to protect national sovereignty in AI, as reflected in ecosystem-level competition assessments and EU antitrust reforms targeting digital platforms [S53][S54][S56].
Overall Assessment

The panel displayed substantial convergence around three core themes: (1) openness must be understood as a socio‑technical, democratic principle rather than a simple technical release; (2) U.S. AI policy is shifting toward industrial and trade levers, limiting formal democratic rulemaking; (3) competition and transparent governance are essential to prevent lock‑in, protect sovereignty and build public trust. These shared viewpoints cut across speakers from academia, government, and civil society, indicating a strong consensus on the need for broader, inclusive, and accountable AI governance frameworks.

High consensus on the definition of openness, the importance of community participation, and the role of competition and transparency. The agreement spans multiple domains (AI, data governance, digital economy, human rights), suggesting that future policy initiatives are likely to incorporate multi‑stakeholder, open‑source and competition‑focused mechanisms to address power asymmetries in AI.

Differences
Different Viewpoints
Openness may create dependence for Global South versus being a strategic lever for sovereign development
Speakers: Astha Kapoor, Anne Bouverot
Openness can generate dependence for Global South (Astha Kapoor)
Open source as competitive, sovereign tool (Anne Bouverot)
Astha warns that framing openness merely as a driver of adoption can turn Global South countries into labor pools and increase dependence, emphasizing the need to address structural challenges first [111-119]. Anne counters that open-source software is a strategic lever that allows countries, including middle-power and Global-South states, to catch up technologically and assert digital sovereignty, viewing it as a competitive tool rather than a source of dependence [75-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on whether openness leads to new dependencies for the Global South appear in UN digital cooperation literature and UNESCO-linked discussions on equitable access, highlighting both empowerment potential and risk of reliance on external codebases [S66][S61][S64][S68].
Whether existing competition assessment tools are sufficient to detect open‑washing or new analytical frameworks are needed
Speakers: Audience member 6, Ravneet Kaur
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
Competition assessment uses rigorous case‑by‑case analysis (Ravneet Kaur)
The audience member asks if competition authorities require new tools or a reworking of frameworks to evaluate whether claimed openness truly lowers entry barriers or masks hidden dependencies [369-374]. Ravneet responds that the commission already conducts detailed, case-by-case economic and competition analyses, relying on existing data and internal expertise without indicating a need for new methodologies [382-388].
POLICY CONTEXT (KNOWLEDGE BASE)
The adequacy of current competition tools to identify ‘open-washing’ is questioned in IGF panels and EU competition reports, prompting calls for novel analytical frameworks to better capture hidden dependencies [S53][S54][S56].
Unexpected Differences
Open‑source as a sovereign tool versus risk of dependency for Global South
Speakers: Astha Kapoor, Anne Bouverot
Openness can generate dependence for Global South (Astha Kapoor)
Open source as competitive, sovereign tool (Anne Bouverot)
While both discuss openness, Astha’s focus on the Global South’s structural needs leads her to view openness as potentially exploitative, whereas Anne treats open‑source as a universally beneficial lever for middle‑power and sovereign development. The tension between viewing openness as a risk versus an opportunity for less‑resourced nations was not anticipated given the generally shared pro‑openness framing elsewhere in the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source as a means of achieving digital sovereignty and the risk of creating new dependencies is discussed in UN and UNESCO policy briefs on digital public goods and community sovereignty [S66][S61][S64].
Need for new analytical tools to assess open‑washing versus reliance on existing competition analysis
Speakers: Audience member 6, Ravneet Kaur
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
Competition assessment uses rigorous case‑by‑case analysis (Ravneet Kaur)
The audience’s call for novel frameworks to detect open‑washing was not mirrored by the competition authority’s confidence in its current methodology, revealing an unexpected split between external stakeholder expectations and regulator self‑assessment.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent IGF discussions and competition-law scholarship argue that existing antitrust metrics may be insufficient to detect open-washing, urging the development of dedicated analytical tools [S53][S54].
Overall Assessment

The panel largely converged on the principle that openness should be understood broadly and linked to democratic participation, community involvement, and sovereign capacity. Divergences emerged around the practical implications of openness for Global South countries and the adequacy of existing competition tools to police open‑washing claims. These disagreements highlight the challenge of translating shared normative goals into concrete policy instruments that satisfy both equity concerns and regulatory capacities.

Moderate – while there is strong consensus on the value of openness, the panel split on how openness should be leveraged for development versus sovereignty, and on whether current competition frameworks are sufficient to address emerging open‑washing practices. The implications are that future AI governance discussions will need to reconcile these perspectives, possibly by designing differentiated openness strategies for Global South contexts and by evaluating the need for new competition‑law tools.

Partial Agreements
All three agree that openness should go beyond mere technical release of model weights and serve democratic, participatory goals. Alondra stresses a spectrum and socio‑technical dimensions [40-43]; Karen illustrates this with community‑driven projects that involve consent and co‑design [194-202]; Amba frames openness as shorthand for democratization and sovereignty [16-17]. However, they differ on emphasis: Alondra focuses on policy and power‑shifting, Karen on concrete community engagement practices, and Amba on the symbolic meaning of the term.
Speakers: Alondra Nelson, Karen Hao, Amba Kak
Openness as socio‑technical, non‑binary (Alondra Nelson)
Openness must embed community participation, not just technical release (Karen Hao)
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
Both see the need for democratic oversight in AI governance. Alondra points out that the U.S. is shifting to industrial levers that bypass public rulemaking, reducing democratic input [57-60]. Ravneet emphasizes that competition oversight must ensure transparency and accountability throughout the AI lifecycle to build public trust [101-107][153-158]. They share the goal of democratic oversight but differ on the institutional mechanism: Alondra critiques the current U.S. approach, while Ravneet proposes competition policy as the corrective mechanism.
Speakers: Alondra Nelson, Ravneet Kaur
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Takeaways
Key takeaways
Openness in AI should be understood as a socio‑technical spectrum rather than a binary technical release, encompassing democracy, accountability, participation, and sovereignty.
US AI governance is shifting from formal rulemaking to industrial, trade, and immigration policy levers, which reduces direct public accountability.
Middle‑power countries can leverage open‑source tools and ad‑hoc coalitions to compete with the US and China, but their needs differ from those of Global South nations.
Competition policy is crucial for AI sovereignty: it must prevent ecosystem lock‑in, ensure contestable markets, and enforce transparency and accountability throughout the AI lifecycle.
Gender imbalance and broader inclusion remain significant challenges; community‑driven projects (e.g., Te Hiku Media, cooperatives) demonstrate how participatory openness can empower marginalized groups.
Large‑scale open‑source projects (e.g., BigScience) show that openness can be combined with consent‑based data practices, but scaling such models requires rethinking ‘scale’ as many community‑specific solutions rather than a single monopoly.
Labor exploitation and IP concerns are embedded in current AI supply chains; addressing them requires new governance mechanisms and possibly protection‑by‑design approaches.
Corporate “open” language can mask lock‑in; third‑party labeling and scrutiny of open‑washing are needed to give users real agency.
Resolutions and action items
Develop public‑funded compute and data‑sharing infrastructure to support startups and smaller players (suggested by Ravneet Kaur).
Create transparent, community‑focused labeling schemes for AI models and services to help consumers make informed choices (suggested by Karen Hao).
Incorporate more enforcement voices (competition authorities, regulators) into future AI governance summits (suggested by Amba Kak).
Encourage middle‑power coalitions (e.g., France, Canada, Germany, India, Japan, Australia) to co‑design open‑source AI initiatives (suggested by Anne Bouverot).
Promote community‑driven AI projects that involve consent, co‑design, and benefit‑sharing with data contributors (highlighted by Karen Hao).
Commission a study on open‑washing and develop analytical tools for competition authorities to assess true entry‑barrier reduction (raised by audience member 6).
Unresolved issues
How to balance openness with safety concerns for high‑risk AI applications (e.g., nuclear‑related AI).
Mechanisms to ensure democratic input in US AI policy when reliance is on executive levers rather than formal rulemaking.
Concrete strategies for integrating Chinese open‑source models while mitigating ideological bias.
Effective protection‑by‑design methods for copyrighted data that reconcile IP rights with openness goals.
Scalable models for community‑driven AI that can operate beyond niche projects without sacrificing impact.
Specific metrics or frameworks to evaluate whether open‑source initiatives truly reduce dependence for Global South countries.
How to systematically address data‑worker and broader labor exploitation throughout the AI value chain.
Suggested compromises
Treat openness as a gradient: allow closed development for clearly high‑risk domains while keeping other layers (data, APIs, governance) as open and participatory.
Adopt a hybrid governance approach that combines formal rulemaking (for democratic legitimacy) with targeted industrial/trade policies (for speed).
Encourage middle powers to cooperate in ad‑hoc coalitions rather than forming a single monolithic bloc, preserving flexibility for diverse national interests.
Accept that not all AI models need to be large‑scale; promote small, domain‑specific open models alongside flagship models.
Thought Provoking Comments
Openness should be understood as a socio‑technical characteristic, not just a binary technical decision about model weights. It is about shifting power, accountability, shared infrastructure, and democratic participation.
She reframes the dominant narrative that treats ‘open’ as merely releasing code or weights, highlighting the deeper political and democratic dimensions of openness.
This set the analytical foundation for the whole panel, prompting others (e.g., Anne Bouverot and Karen Hao) to discuss openness beyond technology and to consider its role in governance, competition, and community empowerment.
Speaker: Alondra Nelson
U.S. AI policy is not deregulative; it is ‘hyper‑regulatory’ through trade policy, export controls, immigration fees, and funding decisions, which bypasses the democratic input that formal rulemaking would provide.
She challenges the common perception that the current administration is hands‑off, exposing a hidden layer of state power that shapes the AI ecosystem without public oversight.
Shifted the conversation from a surface‑level discussion of openness to a critique of the governance mechanisms themselves, leading Amba to ask about the role of competition as a sovereignty tool and prompting Ravneet Kaur to elaborate on competition as a democratic lever.
Speaker: Alondra Nelson
Middle powers can leverage open‑source as a competitive tool by forming ‘coalitions of the willing’, allowing them to punch above their weight without building a full stack from scratch.
She introduces a new geopolitical framing that moves beyond the U.S.–China binary, showing how a broader set of countries can shape AI governance through collaborative openness.
Opened a new line of discussion about multilateralism and the strategic use of openness, which Astha Kapoor later linked to the specific challenges of Global South nations and the need for agency rather than mere adoption.
Speaker: Anne Bouverot
Openness as a driver of adoption can be dangerous for Global South countries because it diverts attention from necessary investments and makes them a test‑bed for external innovators, turning them into labor providers rather than co‑designers.
She critiques the simplistic narrative that open data or models automatically benefit developing regions, highlighting structural inequities and the risk of dependency.
Prompted a deeper examination of sovereignty and competition (Ravneet Kaur) and reinforced the panel’s focus on community‑centric models, influencing Karen Hao’s examples of truly participatory projects.
Speaker: Astha Kapoor
Competition is a crucial lever for sovereignty: it ensures contestable markets, prevents ecosystem lock‑in, and requires transparency and accountability throughout the AI lifecycle.
She connects competition policy directly to democratic control and national sovereignty, positioning it as a concrete tool rather than an abstract principle.
Steered the discussion toward concrete policy mechanisms, leading to follow‑up questions about enforcement, open‑washing, and the need for new analytical tools in competition law.
Speaker: Ravneet Kaur
The Te Hiku Media project shows that openness can be social as well as technical: community consent, co‑design, and value‑return to the data providers create a model where AI truly serves marginalized groups.
Provides a concrete, ground‑level illustration of the kind of openness Alondra described, moving the conversation from theory to practice.
Illustrated the feasibility of community‑driven AI, influencing her own later remarks about scaling and reinforcing the panel’s call for diverse, locally‑tailored AI solutions.
Speaker: Karen Hao
Scale should be re‑thought: true scale is not a single monopoly distributing to everyone, but many communities developing their own models for specific contexts; large‑scale monolithic models are a monopoly, not scale.
Challenges the Silicon Valley assumption that ‘scale = reach’, proposing a decentralized vision that aligns with the panel’s emphasis on sovereignty and community empowerment.
Prompted participants to reconsider the relationship between openness and market concentration, and set up the final reflections on how to build AI ecosystems that are both inclusive and competitive.
Speaker: Karen Hao
Corporate language of inclusion and diversity is often a veneer that masks the goal of locking users into closed platforms; genuine openness requires community engagement beyond marketing rhetoric.
Calls out performative inclusion, urging critical scrutiny of corporate narratives that claim openness while maintaining control.
Served as a concluding critique that tied together earlier points about democratic participation, competition, and the need for transparent, community‑led AI development.
Speaker: Karen Hao
Overall Assessment

The discussion was shaped by a series of pivotal interventions that repeatedly shifted the focus from abstract notions of ‘open’ to concrete political, economic, and community dimensions. Alondra Nelson’s reframing of openness as socio‑technical and her exposé of hidden regulatory levers set the analytical tone. Anne Bouverot’s middle‑power coalition concept broadened the geopolitical frame, while Astha Kapoor’s critique of openness as a potentially exploitative adoption model grounded the debate in Global South realities. Ravneet Kaur linked these ideas to competition law, presenting it as a tangible sovereignty tool. Karen Hao’s vivid case studies and her deconstruction of corporate ‘open’ rhetoric provided practical illustrations and a critical lens that tied the conversation together. Collectively, these comments redirected the panel from a surface‑level discussion of technical openness to a nuanced exploration of power, governance, and community agency, ultimately shaping a richer, more actionable dialogue.

Follow-up Questions
How can a third‑party labeling system be developed to clearly indicate the values, resource usage, and openness of AI models so consumers can make informed choices?
Consumers currently lack easy, standardized information about the provenance and openness of AI tools, hindering responsible adoption and accountability.
Speaker: Karen Hao
What analytical tools or revised competition frameworks are needed to detect and assess “open‑washing” where firms claim openness but maintain hidden barriers to entry?
Ensuring that openness claims genuinely lower entry barriers is crucial for effective competition enforcement and preventing anti‑competitive practices.
Speaker: Audience member 6
Can protection‑by‑design techniques that render publicly available data unusable for AI training be effective, and how do they align with broader openness goals?
Explores a technical‑legal approach to safeguard intellectual labor and data rights while balancing the principle of openness.
Speaker: Audience member 5
Who is truly included in the “all‑inclusive” AI vision, particularly regarding gender representation and the participation of countries like China?
Clarifying inclusion criteria is essential to ensure that AI governance frameworks do not marginalize key stakeholders or regions.
Speaker: Audience member 2
How should the AI community handle open‑source Chinese models that may embed CCP ideological controls, and what methods exist to mitigate such influences?
Open‑source models from geopolitically sensitive contexts raise concerns about hidden political bias and require strategies for safe adaptation.
Speaker: Audience member 3
How can transparency and community oversight be increased for data‑center and cloud‑infrastructure decisions that are currently made behind NDAs and without public input?
The physical layer of the AI stack is critical to democratic control; lack of openness undermines community trust and accountability.
Speaker: Alondra Nelson
What role can cooperatives (e.g., the Amul co‑op) play as co‑designers and governance structures for AI development in the Global South?
Cooperatives could offer a democratic, one‑member‑one‑vote model for pooling resources and shaping AI applications to local needs.
Speaker: Astha Kapoor
How can regulator and enforcement voices be more systematically included in AI governance discussions and summit panels?
Involving enforcers ensures that policy recommendations are grounded in enforceable legal frameworks and protect public interest.
Speaker: Amba Kak
What are the challenges and possible solutions for achieving scale with community‑driven, small‑AI models versus the monopoly‑style scaling of large tech firms?
Understanding how to diffuse AI capabilities without concentrating power is key to a more equitable AI ecosystem.
Speaker: Karen Hao
What research is needed to understand and mitigate labor exploitation throughout the AI supply chain, from data annotation to model deployment?
Labor concerns span the entire AI pipeline; systematic study can inform policies that protect workers and ensure ethical AI development.
Speaker: Karen Hao
How can democratic input be incorporated into AI policy mechanisms that rely on industrial, trade, and immigration levers rather than formal rulemaking?
Current “hyper‑regulatory” approaches bypass traditional public comment periods, raising questions about legitimacy and participation.
Speaker: Alondra Nelson

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

From KW to GW: Scaling the Infrastructure of the Global AI Economy

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how India can achieve AI sovereignty while rapidly scaling AI infrastructure and services, emphasizing that sovereignty and innovation must progress together and that true sovereignty requires control over both hardware and software [1][7-13][14]. Ankush argued that India’s aspirational stance will make it a global AI hub within months [2][3], and Nitin highlighted Google’s new Vizag data centers and an on-premise “indigenous data box” that delivers full Gemini AI capabilities while keeping data local [8][10-13]. He stressed that controlling both hardware and software is essential for true sovereignty [14].


Sudeesh described the IRCTC ticketing platform’s demand-supply mismatch and how advanced AI, built with both global and indigenous components, is used to detect and mitigate automated booking bots during peak tatkal periods [16-19][21-25]. He confirmed collaboration with Indian startups for data analysis and continuous monitoring [22-24]. Ankush explained that Bharat GPT follows an “AI with purpose and trust” approach, focusing on domain-specific, small-to-mid-size models trained on partner data rather than a generic large language model for consumers [28-34][36-38][41-44].


Addressing inclusivity, Nitin cited Google’s provision of free Gemini-powered JEE mock exams to broaden access for students in underserved areas [58-62].


Srirang introduced the concept of AI factories, noting that gigawatt-scale data centers are shifting from an “outside-in” to an “inside-out” design where workloads dictate infrastructure [71-74]. Peter and Jigar explained that speed at scale requires modular GPU pods, with rack power densities rising from 10 kW to over 240 kW and future megawatt-per-rack designs, enabling rapid deployment of AI workloads [108-112][118-121][160-163]. They stressed the use of reference designs and pod-level standardization to maximize utilization and simplify upgrades across GPU generations [252-259][262-267].


Srikanth and Sanjay warned that future-proof data centers must consider row-level density and modular pods to avoid costly retrofits, leveraging digital twins for design validation [665-667][666-670]. To support this growth, NVIDIA and partners are launching skill-development programs with Indian institutes and promoting off-site prefabricated systems to accelerate build-out while addressing energy-efficiency and PUE challenges [710-718][739-744][779-784].


Overall, the discussion concluded that coordinated efforts across sovereign AI models, scalable infrastructure, and talent development are critical for India to become a leading, self-reliant AI ecosystem [1][71-74][108-112].


Keypoints

Major discussion points


AI sovereignty and the need for indigenous solutions – The panel stressed that AI innovation must be coupled with data-sovereignty, highlighting Google’s new Indian data centers and on-premise “indigenous data box” that runs Gemini AI services inside the customer’s premises [7-14]. Ankush’s “Bharat GPT” is presented as a purpose-driven, trust-focused model built for Indian enterprises rather than a generic large-language model [28-38].


Applying AI to critical Indian services and promoting inclusivity – AI is already being used to manage massive demand spikes on IRCTC ticketing and to curb automated abuse [15-18][21-25]. Google is extending inclusive AI tools such as free Gemini-powered JEE mock exams for students across the country [58-62]. The discussion also touched on AI-driven fraud detection in subsidies and UPI transactions as examples of societal impact [298-310].


Scaling AI infrastructure: “AI factories”, GPU pods and gigawatt-scale data centers – Multiple speakers described a shift from traditional data-center design to purpose-built AI factories, emphasizing “speed at scale”, modular GPU pods, and reference designs that can be replicated across generations [99-108][119-130][158-166][210-218][245-258]. The goal is to move from 1.5 GW today to 10 GW+ within a few years [210-218].


Energy efficiency, PUE optimisation and future-proof design – The conversation highlighted the challenges of cooling high-density GPU racks, the limits of PUE as a metric, and strategies such as adaptive chillers for seasonal temperature swings [710-730]. Future-proofing requires thinking beyond rack density to row-level “bounding boxes” and integrating chip-to-data-center telemetry [665-668][739-748].


Building talent and skill pipelines for AI-scale operations – Recognising the talent gap, NVIDIA/Vertiv outlined training programmes with Indian institutes (e.g., IIT-Chennai) to certify engineers in operations, maintenance, and design of AI-optimized data centers [778-788][811-818].


Overall purpose / goal


The panel was convened to map India’s roadmap for becoming a sovereign, inclusive AI hub: showcasing current AI deployments, outlining the technical and infrastructural upgrades needed to support massive AI workloads, and identifying policy, sustainability, and talent-development actions required to accelerate the nation’s AI ecosystem.


Overall tone and its evolution


The discussion began with an optimistic, visionary tone, emphasising India’s aspirational role and the promise of sovereign AI [1-3][28-31]. It then shifted to a pragmatic, solution-focused tone as speakers detailed concrete technical measures (data boxes, GPU pods, reference designs) [7-14][99-108][245-258]. A later segment adopted a more cautionary yet collaborative tone around energy, cost, and design challenges [710-730][665-668]. The conversation concluded on an encouraging, forward-looking note, stressing partnership, rapid deployment, and skill-building [378-386][778-788].


Speakers

Ankush Sabharwal – Role/Title: (not explicitly stated in the transcript) – Area of expertise: AI sovereignty, Bharat GPT, AI strategy for India. [S6][S7]


Akanksha Swarup – Role/Title: Moderator / Host of the panel – Area of expertise: Interviewing, moderating AI-focused discussions. [S13]


Nitin Gupta – Role/Title: Google employee (speaking on behalf of Google) – Area of expertise: AI services, data-center sovereignty, Google Gemini, on-premise AI solutions. [S16][S17]


Sudeesh VC Nambiar – Role/Title: IRCTC representative (AI & ML for railway ticketing) – Area of expertise: AI-driven demand-supply management for railway bookings.


Srirang Deshpande – Role/Title: Strategy lead for India, Vertiv (managing Vertiv strategy & market development) – Area of expertise: Data-center strategy, AI-infrastructure planning. [S8][S9]


Moderator – Role/Title: Conference moderator – Area of expertise: Session facilitation and audience interaction.


Peter Panfil – Role/Title: Vertiv senior executive (panelist) – Area of expertise: AI factories, GPU-centric data-center design, speed-at-scale deployment. [S23]


Jigar Halani – Role/Title: NVIDIA representative / industry veteran – Area of expertise: AI factories, GPU infrastructure, AI model deployment. [S18][S19]


Srikanth Cherukuri – Role/Title: Vertiv executive (panelist) – Area of expertise: AI-factory blueprinting, GPU-first design, future-proof data-center architecture. [S24]


Sanjay Kumar Sainani – Role/Title: Senior Vice President, Technical Business Development, Vertiv – Area of expertise: High-density AI data-centers, power & cooling efficiency, scaling AI infrastructure. [S4][S5]


Audience – Role/Title: Various audience members asking questions – Area of expertise: Varied (AI inclusivity, AI-human interaction, AI ecosystem in India, talent development, etc.).


Additional speakers:


None identified beyond the speakers listed above.


Full session report: comprehensive analysis and detailed insights

Opening remarks – AI sovereignty


Ankush Sabharwal opened the panel by asserting that India’s ambition to become a global AI hub must rest on “complete sovereignty in terms of AI and not just the platform” and that this transformation will happen “in a few months, not years” [1-2]. He framed sovereignty as inseparable from innovation, a view echoed by Nitin Gupta who said “sovereignty and innovation … have to run together” and that the two are not mutually exclusive [7-9].


Google’s sovereign-AI offerings


Google outlined its contribution through the announcement of new data-centre capacity in Vizag, which will keep “innovation and any data-residency things … within the boundaries of India” [8]. More importantly, Google introduced an “indigenous data box” that resides entirely on a customer’s premises yet delivers the full suite of Gemini AI services, giving users the power of a Google data centre “inside your own premise” while also “controlling the hardware” [10-14].


Inclusivity and the digital-divide


Moderator Akanksha Swarup asked how Google would address the digital divide and serve under-privileged and rural populations [49-53]. Nitin Gupta responded by highlighting that Google has made “JEE Main exams, mock exams … available on Gemini free of cost for any student to try”, positioning the offering as a step toward democratising AI-enabled education [58-62].


IRCTC ticket-booking AI use-case


Sudeesh VC Nambiar described the severe “demand-supply mismatch” on the IRCTC ticketing platform during peak tatkal windows (8 am, 10 am, 11 am) [16-19] and explained that a “very advanced AI solution … maybe the best in the world” is being employed to detect and curb automated booking bots [21-25]. The solution incorporates a “layer of indigenous” models and collaborates with an Indian startup that performs continuous data-analysis and social-media monitoring [23-25].


Bharat GPT – purpose-driven AI


Ankush Sabharwal then detailed the philosophy behind “Bharat GPT”. The tagline “AI with purpose and trust” reflects a focus on solving specific enterprise problems rather than creating a consumer-facing large language model [28-34]. The development process follows a “begin with the end in mind” habit: model size is chosen based on the use-case, data are sourced from partners, and domain-specific models (e.g., for railways) are trained using the partner’s existing knowledge [36-44][45-46].


AI factories and inside-out design


Srirang Deshpande introduced the concept of “AI factories”, noting that the industry is moving from an “outside-in” to an “inside-out” data-centre design where workloads dictate the infrastructure [71-74].


Infrastructure at scale – chip-first philosophy


Peter Panfil and Jigar Halani expanded on this by describing “speed at scale” as the need to design from the GPU chip upward, creating modular GPU pods that can be replicated rapidly. They cited the evolution of rack power density from ~10 kW to >240 kW and the prospect of “megawatt-per-rack” designs, which would enable a single hall to serve a substantial portion of India’s AI demand [106-112][118-121][160-163].


Reference-design “magic numbers” from Vertiv and NVIDIA allow a pod to support three GPU generations without redesigning power or cooling infrastructure [252-259][262-267]. Jigar Halani stressed that focusing on “row-level” or “data-hall-level” density, rather than individual rack density, prevents costly retrofits and aligns with the “bounding-box” methodology [665-667]. Sanjay Kumar Sainani added that a pod can be built to a fixed power and liquid-cooling capacity (e.g., 2.4 MW or 6 MW) and later re-configured for newer GPUs, thereby future-proofing the deployment [692-699].
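The fixed-envelope arithmetic can be sketched as follows; the 2.4 MW pod figure comes from the session, while the per-rack densities assigned to each GPU generation are illustrative assumptions, not numbers given by the panel.

```python
# Sketch: how many racks of each GPU generation fit inside a fixed pod
# envelope. The 2.4 MW pod size is from the discussion; the per-rack
# densities for the three generations are illustrative assumptions.

POD_POWER_KW = 2_400  # fixed power + liquid-cooling envelope per pod

# Hypothetical rack densities for three successive GPU generations (kW/rack)
GENERATIONS = {"gen-1": 40, "gen-2": 130, "gen-3": 240}

def racks_per_pod(pod_kw: int, rack_kw: int) -> int:
    """Racks that fit without exceeding the pod's fixed power budget."""
    return pod_kw // rack_kw

for gen, density in GENERATIONS.items():
    print(f"{gen}: {racks_per_pod(POD_POWER_KW, density)} racks "
          f"at {density} kW/rack")
```

Because the envelope is fixed, a newer, denser generation simply means fewer, hotter racks inside the same power and cooling block, rather than a retrofit of the pod itself.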


Energy efficiency and telemetry integration


Sanjay Sainani warned that Power Usage Effectiveness (PUE) can be “misleading” because raising ambient temperature artificially improves the metric while increasing overall power consumption [710-720]. He advocated seasonal cooling strategies, using free cooling in winter and supplemental chillers in summer, to optimise the annual PUE across India’s diverse climate zones (10 °C to 48 °C) [730-738]. Srikanth Cherukuri echoed this concern, noting that current data-centre telemetry does not communicate with chip-level telemetry, and that integrating the two would enable automated, real-time energy optimisation [739-748]. He also highlighted the use of digital twins to simulate entire pods and verify that “the whole pod as one big block” meets power, thermal and redundancy requirements before construction, reducing the risk of over-building [739-748].
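Sainani’s caveat can be made concrete with a small sketch. PUE is total facility power divided by IT power, so moving load from facility cooling into the IT side (for example, server fans working harder at a higher ambient temperature) lowers the metric even when total consumption rises. All figures below are invented for illustration.

```python
# PUE = total facility power / IT power. Shifting work from facility
# cooling into the IT load can lower PUE while raising total draw.
# Numbers are invented for illustration only.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness: total facility power over IT power."""
    total = it_kw + cooling_kw + other_kw
    return total / it_kw

# Baseline: cooler ambient, chillers work harder, server fans idle along.
baseline = {"it_kw": 1000.0, "cooling_kw": 300.0}
# Raised ambient: chillers save 100 kW, but fans add 150 kW to the IT side.
raised = {"it_kw": 1150.0, "cooling_kw": 200.0}

for name, s in [("baseline", baseline), ("raised ambient", raised)]:
    total = s["it_kw"] + s["cooling_kw"]
    print(f"{name}: PUE = {pue(**s):.2f}, total = {total:.0f} kW")
# PUE "improves" (1.30 -> 1.17) even though total power grew (1300 -> 1350 kW).
```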


Future-proofing and modularity


The “bounding-box” (row-level) design philosophy, together with the reference-design pod approach, allows multi-generation GPU reuse and avoids expensive retrofits [665-667][252-259][262-267]. Sanjay Sainani reiterated that a 1 MW rack is imminent and that a megawatt-scale rack could replace the footprint of an entire 1 MW data-centre [692-699].


Talent and skill development


The moderator described an 8-to-12-week training programme in partnership with IIT-Chennai that equips engineers with the skills to operate and maintain AI-optimised data centres [778-784]. Srikanth Cherukuri and Sanjay Sainani reinforced the importance of prefabricated, modular systems (e.g., Vertiv “smart-run”) that can be assembled off-site, allowing parallel development and testing, and thereby shortening build-out times [796-804][811-818].


Audience Q&A highlights


Questions from the audience covered AI consciousness, India’s five-layer AI stack (energy, infrastructure, compute, models, applications), and statistics showing that India generates ~20 % of the world’s data but hosts only ~3 % of global data-centre capacity [910-918]. Nvidia-ready / DGX-ready certification and the push for purpose-built AI factories were also discussed [842-850]. Additional examples of AI for agriculture and fraud detection were mentioned [896-904].


Closing remarks


Peter Panfil concluded by reiterating the “chip-first” approach: design should start from the GPU chip and then define the supporting power, cooling and rack infrastructure [106-112]. The panel collectively emphasized three pillars for India’s AI future: (1) speed at scale through modular AI factories, (2) sustainability via energy-efficient, row-level designs and token-per-watt metrics, and (3) a skilled talent pipeline supported by industry-academic programmes. Together, these elements position India to build sovereign, high-density AI infrastructure rapidly, while addressing energy, talent and ecosystem challenges.


Session transcript: Complete transcript of the session
Ankush Sabharwal

having the complete sovereignty in terms of AI and not just the platform. I think India being so aspirational and ready to adopt new technology for the welfare of themselves and the welfare of the businesses, I think we would be the hub of AI development for the world. You will start seeing that happening in a few months, not years.

Akanksha Swarup

It’s actually heartwarming to hear that from someone who’s actually fronting India’s AI story at the moment. Nitin, as someone who is at Google, how do you see this for India? Do you think India has the right infrastructure, the right resources to build its own sovereign AI at the moment?

Nitin Gupta

First of all, thank you, CoRover team, Ankush, for inviting me here. And, you know, I’ll be very happy to share my views, from the Google perspective and from my personal perspective. I feel, yes, sovereignty is very important, but at the same time it is not a question of sovereignty or innovation. It is sovereignty and innovation; they have to run together, they can’t be one choice versus the other. And with that, while we have our entire data centers in India, you have heard that three months back we announced we are going to be building big data centers in Vizag. So we are ensuring that if any innovation and any data-residency things are there, they are kept within the boundaries of India. But then those data centers are definitely empowering a lot of AI, and they are for everyone, for all types of personas, whether they are government, enterprises, startups, students, colleges, universities.

We understand that, you know, sometimes there is going to be critical data which needs to stay even more secure. And for that, Google has created a completely indigenous data box which stays entirely inside the customer premise and is fully powered by AI. So imagine that you have the full potential to run what you run in a Google data center, but inside your own premise. And that has full Google Gemini AI services. And that’s the definition we have for sovereignty: where you are also controlling the hardware, not only what’s running on that hardware.

Akanksha Swarup

All right. So this IRCTC is one of the most heavily used websites in India. My research says close to 50 million users visit every month on average; correct me if I am wrong. But how are you incorporating or leveraging AI, especially in peak periods? When you look at, say, tatkal booking time, the traffic actually dramatically peaks up.

Sudeesh VC Nambiar

Yeah, so we have a tremendous mismatch of demand and supply as far as railway ticketing is concerned. We have the peak at 8 o’clock in the morning, when the ticket is opened for travel 60 days hence, then 10 o’clock for the AC tatkal and 11 o’clock for the sleeper tatkal. So there is huge demand, and there is a demand-supply mismatch as of today, so people try to misuse it and use automated tools for accessing it. So this is a constant, I would say, cat-and-mouse game we are playing. And we are using AI also, a very advanced AI solution; it is said to be maybe the best-in-the-world solution we are using.

Akanksha Swarup

Any indigenous models are used?

Sudeesh VC Nambiar

Indigenous, of course, we have a layer of indigenous. There is a startup also who are doing the analysis, data analysis, and they constantly monitor the social media and see what is happening, what is the strategy. So it is basically a collaboration between the Indian startups and the global technology strength of a global company. So we are using AI and ML -based model. The model constantly learns and tries to… mitigate those automated…

Akanksha Swarup

Okay. Ankush, what differentiates Bharat GPT in terms of its vision when you compare it to say global models like ChatGPT or even Gemini and especially how is it curated for Indian citizens and enterprises? What is that differentiating factor?

Ankush Sabharwal

Yeah, see, our tagline is AI with purpose and trust, right? So whatever we are doing, so I had read that book, Seven Habits of Highly Effective People, very early on in my career. So, begin with the end in mind. We always think: what’s the use case? What’s the problem you’re going to solve? And then see what kind of model you need, tiny, small, medium, large. And then you see, okay, from where the data would come. See, the Bharat GPT family of models, right? It’s not the large language model. It’s not ready for consumers yet, right? So we work with our partners, get their data and train the model for their users, because we believe we…

it is easy for us to solve the problem of enterprises, because the enterprises, like IRCTC, already know their domain. We cannot learn that, right? And if you say, hey, I can create travel AI solutions, that’s very, very difficult, right? So they know travel, they know railways. So it would be, I think, much better to work with them and learn from them. They are already solving a lot of problems, and they also know the problem, the real problem. They don’t have an existential crisis, right? So they are not just in the game of valuation. So they are solving the real-world problem.

Akanksha Swarup

That’s why we have him on stage with you today. He’ll share those precious tips. My last question, since we are running short of time. Nitin, I think this is also not just to highlight the achievements; it’s also to perhaps highlight the concerns. And right now, one concern which the Indian Prime Minister has also highlighted is that of inclusivity. How is Google trying to bridge that divide, as far as you can see? As far as the digital divide is concerned, how do you make Google more accessible for the underprivileged, for those in rural areas? Nitin, before you answer, please keep it short. I have my colleagues from the other team, Vertiv; I would like to apologize to them for this delay, but allow us just to wind this up.

Nitin Gupta

Yeah, I’ll take a minute. Okay. So, great question. And, you know, Google has always been, you know, at the forefront of inclusivity, whether you call it Gmail, whether you call it Search. You know, it is empowering billions of users every day. And just to summarize and give a recent example: very recently Sundar Pichai announced that the JEE Main exams, mock exams, are available on Gemini free of cost for any student to try. That’s the inclusivity we want. We want to make sure that a student at home can keep on trying the mock tests for free.

Akanksha Swarup

All right. Amazing. Amazing. Which is inclusive. Inclusive and democratic. Many thanks to you three gentlemen. It was a pleasure having you all over here. Thank you so much.

Srirang Deshpande

Good morning to all of you. As Rakesh has already introduced, our two companies are planning a lot of things together. As I said, I am part of strategy for India, managing Vertiv strategy and market development. The important thing we are bringing to you today is this: as we see a lot of gigawatt infrastructures getting announced, that poses a lot of challenges for us. Till this time, data centers were built with an outside-in approach, and now the time has come where data centers are being built with an inside-out approach. So first the GPU, or the workloads, get decided, and then the whole infrastructure gamut comes into the picture.

To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA, for the fireside chat. So we have Jigar; I think by this time Jigar is already known to the industry, because of the immense contribution Jigar has made to the AI ecosystem in India, working with all the ecosystems, all the layers: infrastructure, applications, use cases, and so on and so forth. He managed solutions and engineering for India at NVIDIA. And I have another friend, Peter Panfil. Peter is an encyclopedia at Vertiv. He is based in the US. He is our senior vice president for technical business development, and he’s the one who has been involved in many designs of large-scale data centers and gigawatt designs.

I would request Jigar and Peter, please come on the stage. Let’s have a round of applause for Jigar and Peter. So, Jigar and Peter, it’s all yours now. Go to

Peter Panfil

Thank you. Thank you. Thank you. So, my friend, we got our introductions. Let’s see. Are we on? Are we on? You guys can all hear us? Yeah? Can you hear us? Good? We’re good? Okay. All right. So, my friend, great to see you. Great to see you. So, I got to start with how we would normally end. I believe that any discussion like this should start. with us telling you what we think you’re going to get out of this. So what key message or messages do you think this audience needs to hear before we get started? And then we can spin off of that and go into the kinds of details we really need to.

So where do you think, what do you think this audience is the most interested in?

Jigar Halani

Okay. Am I audible? Okay, great. So I think, as the topic suggests as well, my view is that what you will get to hear from us for the next 30-35 minutes is about why AI is becoming so much of a notion for every country. What are the building blocks of these AI factories, and the sovereignty aspect of it? What is it the two of us are trying to contribute in this journey, for everyone for that matter? And how do we scale and make it work for everyone, to make AI for all, as India wants to call it? That is what I feel we should be discussing here, because that will be most relevant for the conference, for the audience, and for what we can contribute back to humanity as well.

What are your thoughts?

Peter Panfil

I agree with you completely. So the three things I feel are most relevant are speed at scale. Now, it’s not just the speed of the compute. It’s the speed of deployment. The faster we can get the GPU structures in place, the faster we can benefit from it. And scale, you and I talked about the scale. And you’re going to quote some numbers, I think, that the tops of their heads are going to blow off. But speed at scale. The second thing is we’ve got to stop. We’re not thinking the way we thought in the cloud world. In the cloud world, we were thinking a high -density rack was 10 kilowatts. And that we would start at the source, at the grid, and work our way to the chip.

What I’m here to advocate for you to do is start at the GPU. Start at the chip. Let’s start at the chip, define the most economical, most efficient, fastest configuration from a compute perspective, figure out how to deploy that as a pod, then replicate that pod and achieve the speed. And the third is: don’t be scared. We’ve got it covered. We’ve got you covered. We know how to do this. I’ve got to just tell you, I told you this in the hallway: Vertiv made a big bet with NVIDIA. We made a big bet. I actually reassigned myself. I was leading what we call a GSA, a Global Strategic Account pursuit team, and I said, if we’re going to do this right, we’ve got to immerse ourselves in GPUs, understand how to deploy them, understand what drives our customers, and how we’re going to make them successful.

And I think that that has worked to both of our benefits.

Jigar Halani

Absolutely. And for humanity as well, right? We are fundamentally changing everything that has been pursued so far, and you bring out the cloud part of it. I was just thinking, while putting my hand on my beard, that only a few hairs were white back then. It’s not that long ago that I saw the retrieval clouds: we store the information and we’re just retrieving it, processing it in the application to get the information out, right? Compare that to the world of now, generating new data every single time and processing it right there to give you, every time, a new input and a new output, right? Because the prompts are new, the outputs are new, and thereby the world sees, every time, something different which is getting processed and being delivered to the customers, right?

So such an amazing and a fastest -paced change of how these clouds have emerged and what are your thoughts in terms of what this space is all about, how our customers are keeping up with this, and what are we contributing in that journey, if you can throw some light towards that.

Peter Panfil

Sure, that’s great. So first of all, it comes with understanding and having a transparent provider that says, here is what I’m producing today, here’s what I think I’m going to be producing a year from now, here’s what I think I’m going to be producing two years from now. Now, our goal is to make every deployment that you take on an AI factory. We all know what an AI factory is, right? An AI factory, think of it as a car factory, washing machine factory. Just, it’s a data factory, okay? And so our goal, I will just tell you, our goal along with your team is start as an AI factory. Yes, you might want to have mixed mode CPU and GPU workloads in your facility, but you’ve got to pilot the GPU configurations, at least pilot them.

When I say I reassign myself, I was working primarily with cloud providers, mostly hyperscalers, and they had a prescriptive formula. You know, they had their hacks, their number of racks. They would deploy them. We all knew which ones they were. Now, we can take a GPU pod, design it once, build it many, and apply it to the GPU that we need from that generation. It’s a complete change in the way we think about how to deploy the IT.

Jigar Halani

That’s so true. By the way, did you notice, every time we are talking about GPU, the screen is blinking. There you go. I think that’s a good message.

Peter Panfil

I think it’s because I owe somebody a nickel every time I use the letters GPU. It must be trademarked somewhere, all right? So I owe them a nickel. Okay, all right.

Jigar Halani

No, so I think with the transition that we see, because it’s generating something new every single time, the compute demand is just exploding, right? And thereby the possibility of what we could do, more and new, is every time becoming bigger and better, essentially, right? And with that, I think the journey of the data center is also evolving much faster than what we had thought, right? You mentioned it: 10 kilowatts to 15 kilowatts, not that far back; we were talking about this about four or five years ago. Then we transitioned to 40 kilowatts, and now to 120, 130 kilowatts. And as we announced in January, we are now talking about 240, 230, 210 kilowatts per rack, which means a hall this size could probably run a great portion of India, with so many services that were probably never imagined before.

Peter Panfil

So I think it’s interesting that you comment about that, because one of the things that we’ve heard back from our customers who first do a lot of research, how do they take their critical infrastructure from CPU -based to GPU -based? And I think that’s something that we’re seeing a lot of growth in. First, there’s that transition to liquid. Don’t worry about it. We’ve been doing liquid cooling for 40 years. We know exactly how to manage it. Then there’s the density of the compute itself. I’m amazed at how quickly and easily our customers understood the move from a 10 -kilowatt rack to a 130 -kilowatt rack. I credit you all. So if you’ve already made that transition, I credit you.

You’re doing a spectacular job. Our job is to prepare you to have that go up by an order of magnitude. Not right away, but in future generations of compute. And so what we try to do is prepare you for future-ready thinking. I know you don’t want to think three years down the road, but you can do it. Let’s at least think three years down the road, based on the rate of what you’re seeing, what we’re seeing, both here in India and around the world.

Jigar Halani

My perspective is, I think all the reports are talking about a 5, 6 gigawatt kind of number over the next three years. My personal understanding, from the lens I look at it through, both from NVIDIA as well as what industry and government are trying to do, is that we will cross 10 to 12 gigawatts in the next three years, and that’s not far. And I’m not going by any of the announcements that have been made in the last three years; I know where the reality stands in terms of inferencing and training workloads. I repeat, I started with inferencing. I did not start with training.

Peter Panfil

Yep. I noticed that.

Jigar Halani

The reason is we are a consumer country. Make a note of that. Yes. Right? He started with inferencing, not learning. Yes. All right? Because we are a consumer country, we have always been in the mode of first to consume, then to build. And thereby, we are the largest ChatGPT consumer base for the globe. We are the largest for Perplexity. We are the largest even for Gemini as well, right? I think we were number two about a month or so back, but my view is, with this Jio announcement, we should have crossed to the number-one position by now; the delta was pretty small, right? What does that mean? That means, if that entire compute capacity

that is currently not getting processed in the country comes back to India because of the DPDP law that got enforced a month or so ago, then this number will be even higher. And we are very democratic that way. You know, we are not closing the doors for any businesses. We have never done that, and I’m sure, knowing the country, we will never do that with the leadership that we have from Prime Minister Modi. That means we will still allow this processing to happen outside of India, but at the same time, there will be regulatory reasons for some of the verticals, say, fintech, healthcare, defence and so on and so forth, or some of the government bodies.

Even if they just start to do inferencing locally, this number will easily touch 10-plus. And I’ve not included the industry at scale yet, which is what Anthropic and J&J, Gemini and others are trying to capture from that market perspective. So my understanding is it should cross 10, while all the reports are talking 5. But India will…

Peter Panfil

So it’s amazing. We didn’t compare these notes before we got on this stage. What was the number you gave me just 20 minutes ago? 10, right? So let’s think about that just for a second. We’re at 1.5 now. We’re going to get to 10. So to get to 10 in that three- to five-year horizon, we’re going to have to scale pretty far, pretty fast. We’re going to have to draw on our shared expertise. And by drawing on our shared expertise, we’re going to help be a trusted advisor to you. Who’s your trusted advisor? I’ve got my trusted advisors. Somebody. Somebody I can always go to, and they’re always going to give me the right answer. It might not be the answer I like, but they give me the right answer.

So what we want to do is make sure that you know, we understand how to scale. You understand how to scale. We understand how to scale. I think if we’re talking about doubling, getting to 10 in three years says we double every year starting this year. My one and a half goes to three, my three goes to six, my six goes to 12. We’re doubling every year. Now, if I was to take you outside of India now, North America market, when the North America market first started becoming aware of GPUs, there was a wide variety of acceptance. There were the folks that said, yep, I want to be there. And I want to be there, and I want to do a pilot with you.

I want to design a pilot that I can replicate into all of my either hyperscale or multi -tenant data center environments. The other thing they wanted to do was no data center left behind. They didn’t want to leave behind any capacity because they knew capacity was going to be the currency. They knew power and land and GPUs, that’s where they needed to be. The third thing was their project scales moved. We used to live in the cloud world at project scales of 18 months. We live now in the GPU world of project scales of between four and six months. So a dramatic compression of schedules, a dramatic increase in capacity, what does that mean? We’ve got to build capacity at a faster rate.

Than we ever have before. And I know we’re up to it. We’ve added the capacity we need to be able to support that kind of demand.
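Peter’s back-of-envelope scaling, roughly doubling each year from about 1.5 GW, can be sketched as:

```python
# Sketch of the doubling trajectory Peter describes: reaching ~10 GW from
# ~1.5 GW in three years means roughly doubling every year (1.5 -> 3 -> 6 -> 12).

def trajectory(start_gw: float, years: int, growth: float = 2.0) -> list[float]:
    """Installed capacity at the end of each year under constant growth."""
    return [start_gw * growth**y for y in range(years + 1)]

print(trajectory(1.5, 3))  # [1.5, 3.0, 6.0, 12.0]
```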

Jigar Halani

Peter, that actually brings up a very good question. When we talk about this at scale, you said that in the U.S. you guys have already started to build at scale, because you see this as a great opportunity, essentially, and India is yet to build, right? In all fairness, I think some of the largest clusters here are in the tens of thousands of GPUs, essentially, right? While in the U.S. we’re talking about millions of GPUs in a single data center, essentially. Would you like to throw some light on some of the learnings and, as a quick bite for the audience, what are those quick things that India could do in terms of having these things done in, let’s say, a three-to-eight-month time frame? Not just the project planning, not just the understanding of BOQs, not just the understanding of…

who is going to deploy my project and what the project looks like, and the 3D version of that. How do I get the entire project done in, let’s say, a six-to-eight-month time frame? Starting from the land that I have, and from there onwards, GPUs running and humming and making the production environment happen.

Peter Panfil

shifted to 250. Now, along the way, we said, okay, let’s take these 10s and put them together and make a 50, and let’s take the 50s and put them together and make 100, and let’s put the 100s together and make a 220. Shoot me now. What we found is, let’s pick an optimum building block that supports the number of GPUs that is, I’ll call it, reasonable at scale, don’t take a design that has never been created before. Let’s take a design that we have a good basis on. For example, the pod. You just published some standards on pods. Reference designs.

Jigar Halani

Reference designs, okay.

Peter Panfil

We worked closely with your team on reference designs. We came up with the magic numbers, reference designs that minimize underutilization, so they maximize utilization and make the pods the most efficient. I’ve been an advocate for efficiency within the data center space my entire career. If you save a watt, that’s a watt you don’t have to generate at the source, you don’t have to distribute, and you don’t have to reject. So the fewer watts you lose and the more watts you can put into the compute, the more tokens I can generate. And so our goal in working with the GPU, I’ll call it the AI-factory mentality, is this: how much power can we deliver from the source to the GPU, as much power as we possibly can, and how can we deploy that physically as quickly as we possibly can? And it boils down to: take the reference designs. We’re not saying all the designs are going to be the same; we know that’s not going to be the case. But I could show you a pod design, it’s part of the reference design, that supports three generations of GPUs, so this year, next year, the year after that, just by changing the way those pods are populated on the compute side. In fact, we’ve got one customer who wants to be able to seamlessly mix GPU platforms within a pod. He says, I’m going to have compute lineup number one as one generation of GPUs, pod two as another generation of GPUs, pod three as a third generation of GPUs. So they want to be able to seamlessly move between GPU generations, because at some point they’re going to optimize particular functions and particular outputs and services against a GPU platform.

Jigar Halani

You just brought up a perfect point, right? So, a few things on why it’s important to follow the reference design. Just to bring everybody onto the same page: the CPU world was very different. Having a node down meant a few hundred dollars of downtime. A GPU node down translates to a few thousand dollars of downtime, right? And the fortunate or unfortunate part is, if your training workload is running and a node fails, you start from the last checkpoint that you have done. Assume that your checkpoint was done eight hours before. Eight hours of, say, 4,000 GPUs of time, multiplied out, is the compute time you have lost in the cloud. Unfortunately, that translates to…

Peter Panfil

Real money.

Jigar Halani

Hundreds of thousands of dollars. Real money. Right? Real money. So while you as a cloud provider might be thinking, and I’m talking about both sides, that, hey, let me cut a few corners, do something here, something there, and I’m still getting the cluster up and running. But you know what? That’s going to cost a lot. And the customer may not have SLAs with you in that direction, because these are not the standard SLAs we’re talking about, right? Not what the world has seen in the typical cloud world. These are different types of SLAs that the customer signs with you. And, you know, if it’s an inferencing workload and it’s critical for the enterprises, we’re talking about downtimes which, again, by all laws of cloud, are not acceptable.
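The checkpoint-loss arithmetic in this exchange can be sketched as follows; the dollars-per-GPU-hour rate is an assumed illustrative figure, not one quoted in the session.

```python
# Sketch of the checkpoint-loss cost Jigar describes: when a node fails,
# a training run restarts from its last checkpoint, forfeiting every
# GPU-hour since then. The $/GPU-hour rate is an illustrative assumption.

def checkpoint_loss_usd(gpus: int, hours_since_checkpoint: float,
                        usd_per_gpu_hour: float) -> float:
    """Dollar value of the GPU time forfeited since the last checkpoint."""
    return gpus * hours_since_checkpoint * usd_per_gpu_hour

# 4,000 GPUs, checkpoint taken 8 hours ago, at an assumed $3/GPU-hour rate:
print(f"${checkpoint_loss_usd(4000, 8, 3.0):,.0f}")  # $96,000
```

Even at a modest assumed rate, a single failure lands in the hundreds-of-thousands-of-dollars range once checkpoints are infrequent or clusters are larger, which is the panel’s argument against cutting corners on infrastructure reliability.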

But the key question could come in: hey, why do I need these large-scale clusters only for training? Is that the only thing I do? The answer is no. I’m sure most of you might be following what Jensen talks about: the three scaling laws that we have. We’ll not go into the detail of it; I think Jensen has mentioned it at least 100 times in his keynotes. But in simple terms, if I have to tell you, let me take one or two good examples from the country itself, right, from what we announced in the last three days. So, taking a very simple example, as everybody knows, we are 1.4 billion people, right?

Half of the audience, or the citizen base, is associated with farming in the country, and thereby one-third of the families of the country are completely aligned to the farming aspect of the story, right? They contribute just 15% of our GDP, but half of the population is associated with farming, right? Now, the government of India has two simple applications that have been launched. One is to check the subsidized food, you know, that the government gives to this half of the citizens in the country today, subsidized to the level of a cent or two; in Indian rupees, it is one rupee to five rupees.

And a feedback call goes to all these citizens, asking: how was the quality, did you get the right quantity, was there any kind of fraud, and so on and so forth. In the last month the government has been able to scale to about 50,000 calls a day to citizens, through a bot speaking the local language, and it has been able to catch fraud worth, per day, and I’m talking about per day, in the range of a couple of million dollars. Okay, that’s one kind of fraud. Financial fraud would be another one, right? Because we are the world’s largest online payment transaction country. We contribute 50% of the world’s digital transactions, and that’s…

by NPCI data, globally accepted, and it runs free of cost in this country. We call it UPI; most Indian people would know it, right? And imagine the innovation taking place in fraud prevention while these UPI transactions happen: I make a transaction from my mobile to your mobile in a fraction of a second, and that data runs to hundreds of millions of transactions. Preventing that fraud is where AI is getting used. Now, if I’m putting in a couple of hundred million dollars over five years as an initial investment, think of the economic benefit and the money I’m giving back to citizens by preventing these frauds.

And then there are applications like this one: we have 22 official languages spoken in 500 dialects, plus unofficial languages, so in all we have over 100 languages in the country. The Government of India has an application called Bhashini, which does translation, ASR, and TTS across the different languages of India. The central and state governments run about 10,000 websites; we have only touched 1,000 of them, and we are already hitting 100 million requests per hour. In simple terms that translates to roughly 2 megawatts of data center consumption. If with 2 megawatts I’m able to cater to 100 million requests, that’s massive.
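The energy figure quoted here can be sanity-checked with quick arithmetic. The 100-million-requests and 2-megawatt numbers are from the talk; the assumption that the request rate is per hour and the draw is sustained, and the conversion itself, are ours:

```python
# Energy per Bhashini request implied by the quoted figures.
# Assumption: 100 million requests per hour at a sustained 2 MW draw.
POWER_W = 2_000_000            # 2 megawatts
REQUESTS_PER_HOUR = 100_000_000
SECONDS_PER_HOUR = 3600

energy_per_request_j = POWER_W * SECONDS_PER_HOUR / REQUESTS_PER_HOUR
energy_per_request_mwh = energy_per_request_j / 3600 * 1000  # joules -> milliwatt-hours

print(f"{energy_per_request_j:.0f} J (~{energy_per_request_mwh:.0f} mWh) per request")
```

Roughly 72 joules per translated request, which is what makes serving 100 million of them on 2 megawatts plausible.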

Peter Panfil

Yeah, look at the productivity improvement that brings to the nation, and that’s just a thousand of those Government of India websites. So we started with scale at speed, okay? That’s where we began. It’s not just the scale of the data center environment. It’s the scale of the applications and the benefit they’re going to bring when they get fully populated.

Jigar Halani

Absolutely.

Peter Panfil

So I’m going to put you on the spot. On this journey right now, where are we? Are we at 3%, 5%, 10%? I will tell you, I cannot wait for AI to take every mundane task I have to do every day of my life and just do it. Okay? And then once those mundane tasks are out of the way, I can use every gray cell up here for productive work.

Jigar Halani

Absolutely. Absolutely.

Peter Panfil

So where do you think we are in terms of that scale?

Jigar Halani

But you touched on a good point: how Meta calls it personalized AI for everyone. So we are getting there, right? But in terms of data, I think even the Minister made that announcement yesterday at the inaugural. He gave a nice statistic: we as India generate 20% of the world’s data, while the data center capacity the country has today is 3% of the world’s. So even if I don’t assume data generation speeds up over the next 3 to 5 years, even if I keep the data share at 20% only, and we are a young population, so we are bound to generate more data, and ours is the cheapest 5G data rate in the world, but assume that we don’t.

Assume we don’t generate much more data and we restrict it. We still have a long, long way to go in building the large-scale data centers just to make sure we process our own data ourselves. Right? And that’s the whole theme of sovereignty the government is talking about: at least let’s protect our data. That’s more critical than general data, and that’s where gigascale is more important.
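The gap between the two shares the Minister quoted can be put as a single number. The 20% and 3% figures are from the talk; the ratio is ours:

```python
# Gap between India's share of data generated and its share of data
# center capacity, using the figures quoted from the Minister's keynote.
data_share = 0.20        # ~20% of the world's data generated in India
capacity_share = 0.03    # ~3% of the world's data center capacity

gap = data_share / capacity_share
print(f"Capacity would need to grow ~{gap:.1f}x just to match today's data share")
```

Even with flat data growth, that is roughly a 6.7x build-out just to process domestic data domestically.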

Peter Panfil

But I don’t look at sovereign data center approaches so much as protection. I look at it as: where’s the most efficient place to process the data? It’s where the data is generated. The most efficient and effective place to process data is at its source. Absolutely. And we are limited by energy, so we want to protect that layer as much as possible. So I see a world where the data gets generated and gets processed as closely, and as quickly after generation, as possible. It’s used to further improve the performance and generation of subsequent data. So the data gets cleaned up as it goes; it gets more refined and more accurate.

We all know we make good decisions with good data, and we make bad decisions with bad data. So the real issue is we’ve got to take the data, and I won’t say our data is clean now, because it’s not. Okay? I mean, give the audience an idea: when a model is being put together, how much of the time actually goes into cleaning and pre-processing the data, and how much into the language model itself?

Jigar Halani

So, just to build on that, because India just announced 10 of its foundation models: cleaning of data is typically a three-to-six-month journey on thousands of GPUs for a language model we’re trying to build. If it’s a specific model for a particular task or vertical, and if the data is messier, with more videos and images and such, it could be even longer.

Peter Panfil

Got it.

Jigar Halani

Right? And then comes the foundation model building itself and, you know, convergence of the model. That’s another 6 to 12 months of the journey, depending on the model size and type you are trying to build.

Peter Panfil

So a third of the time of realizing my large language model could be cleaning the data.

Jigar Halani

That’s correct.
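The "about a third" estimate checks out against the ranges Jigar quoted. The 3-6 and 6-12 month figures are from the exchange; the arithmetic is ours:

```python
# Share of a model-building timeline spent on data cleaning, using the
# quoted ranges: 3-6 months of cleaning, then 6-12 months of training.
clean_lo, clean_hi = 3, 6      # months of cleaning
train_lo, train_hi = 6, 12     # months of training/convergence

share_lo = clean_lo / (clean_lo + train_hi)   # fast clean, slow train
share_hi = clean_hi / (clean_hi + train_lo)   # slow clean, fast train
share_mid = 4.5 / (4.5 + 9.0)                 # midpoints of both ranges

print(f"Cleaning is {share_lo:.0%}-{share_hi:.0%} of the project "
      f"(~{share_mid:.0%} at the mid-range)")
```

At the midpoints of both ranges, cleaning is exactly one third of the timeline, which is the figure Peter lands on.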

Peter Panfil

Processing that data. Now, once it’s there, I’ve got a solid foundation of data to use for future models.

Jigar Halani

That’s correct.

Peter Panfil

Okay. So, again, I think if we’re talking about it in terms of percentage, are we 5% there? Are we 10% there?

Jigar Halani

So I would not put a percentage on it. The reason is it depends on what type of model we’re trying to build. If it’s a language model, I’ll speak specifically to India; I won’t comment on other countries, because it depends where they are in their data-building journey. But India, in my view, has already nailed the data creation for what we call a small to mid-sized model. And it’s going to be made open source as well, as has been announced. So I won’t claim we have a very large data set for a very large model, but for a small to mid-sized one, I think in the last year, year and a half, thanks to the IndiaAI Mission, we have been able to generate a pretty good amount of data, and pretty amazingly clean data.

Peter Panfil

Perfect. Alright. So, I’m getting the hook from the guys in the front row. Okay. Yeah. I run long. I always run long.

Jigar Halani

Sorry, I’m pausing you there. I want to diverge a little bit, asking as an Indian first: what is Vertiv trying to contribute in this journey, for India to begin with, and for the globe as well, in the building blocks we’re putting together for these gigawatt-scale data centers? If you can throw some light.

Peter Panfil

Sure.

Jigar Halani

I know it’s a little bit of a silly question.

Peter Panfil

No, it’s not a silly question.

Jigar Halani

No, no, it’s not. We want to push manufacturing. We want to push the India ecosystem to be as indigenized as possible, to be more self-reliant. I want to know what Vertiv is trying to do.

Peter Panfil

So Vertiv is investing in people, in process, in production capacity.

Jigar Halani

Amazing.

Peter Panfil

Our goal, our goal is to build as much of the critical infrastructure here in India as we possibly can. And it starts with working with our partners and our customers, first on pilots and then on production. And in that production, you’re going to benefit. I will just tell you, India: you’re going to benefit from the mistakes that have been made in other regions over the last 12 to 18 months. You’re going to be able to jump right past them, all right? All right, so here’s the sum-up. I asked you what you thought people should get out of this discussion. What should they have heard from us that you want them to keep in their minds for the rest of the day?

Jigar Halani

For the rest of the day?

Peter Panfil

Rest of the day.

Jigar Halani

My view is, and I know it’s going to be a mix of audience here, you should be listening for this: how can India learn from the globe about these building blocks of AI factories and adopt them fast? Followed by what’s happening in the model world, because that’s the fastest-moving and most fascinating thing happening; it is changing the world so fast. Followed by how these models are getting deployed, and which applications are changing our world on a day-to-day basis. And fundamentally, businesses are being challenged on how they have operated for decades or centuries, right?

Versus how they could do that business today. If I were you in the audience, and that’s what I’m trying to do as well, constantly learning from this conference: what have the people who have done this at scale learned, that I can take back to my country, my profession, my day-to-day life? That’s what I’m trying to do, and that’s what I would recommend everyone else do as well.

Peter Panfil

Perfect. So let me add on top of that. It’s scale at speed, and it’s not just speed of build; it’s speed of compute, it’s speed of adoption. Yes. Second, stop thinking grid-to-chip and start thinking chip-to-grid, and let the chip help us define what that critical infrastructure needs to look like. And the third is: we’re going to make it as sustainable as we possibly can, because a watt I don’t waste is one I don’t have to generate, transmit, or reject. All right, I think you’re up next. Any questions? Do we have time to take questions? Okay, we have one hand up. She’s going to run a mic over to you.

Yes.

Audience

Hi, my name is Ani. I have a question. As I can see…

Peter Panfil

Use your outside voice. That’s what my family always says.

Audience

As I can see, AI is everywhere, and today’s era is totally about AI. As you also said, every industry, company, and education sector is using AI. So the day is not far when humans are totally dependent on AI, and once AI has a subconsciousness and is thinking like humans, is there any chance humans and AI will both be in the same niche?

Peter Panfil

So I think that early on, AI got a bad rap: the computers were going to take over and blow up the Earth. That’s not what we’re finding. What we’re finding is that AI makes our lives better every single day. I know that traffic systems in the city I’m in now use AI to look at traffic congestion and traffic patterns, and they actually time the lights to improve throughput on particular roads at particular times of day. That’s where AI is going to really benefit society: in transportation, in medicine, in research. I’m not so worried about the data being used for evil.

I’m really excited about the data being used for good because that’s where I think we’re going to get the most benefit.

Audience

True. But what if AI gets its own subconsciousness and doesn’t need humans to act?

Jigar Halani

I wish we see that day. Somebody told me the same when I started my journey with the phone: that’s what’s going to happen, you’ll lose touch with your family, you’ll always be busy with the phone. I don’t think we’ve even scratched the surface of that level, even after having this phone with me for 20 years.

Peter Panfil

Here’s the example I like to give. Do you think about breathing and blinking? No. You do it automatically. So let’s let AI take those autonomous functions and do them for you automatically, so that you don’t have to think about them. And if I don’t have to think about breathing and blinking, then all of a sudden I can use my brain matter to do other things. So many things. So I look at it as: it’s going to free us from the mundane tasks, the breathing and blinking. Come on, you’re laughing at me. But do you think about breathing? No. You only think about breathing when you’re trying to hold your breath.

Okay? So I think what’s going to happen is AI is going to become to us like breathing and blinking. It’s going to become an autonomous function that just runs in the background of our lives constantly and makes it better. It’s going to learn what we do and how we do it and how to improve that performance and give us more freedom to do what we really should be doing, and that is making the world better.

Audience

Thank you.

Peter Panfil

Thank you. That’s a good question. I’m glad you asked that question. We have one more. We have one more? Yeah. Hey, hi. We’re going to that side. Hello. Big one.

Audience

This is Shlom. I was watching the interview of Mr. Jensen Huang from NVIDIA, and he explained AI as a five-layer stack: energy, chips, infrastructure, models, and applications. He also explained how the US and China are working on different layers and how they are years ahead of us in some of them. In which layer do you think India can excel or match them in the coming years?

Jigar Halani

So, I think we are already doing that, right? It’s a great question. When we talk about sovereignty, these are the layers in which we should be sovereign, essentially. We cannot be importing energy from anybody; we need to generate it ourselves. Otherwise, how will we run these lights and so many functions, and how will we power these data centers, right? The good news, and I think the Minister gave this answer so nicely yesterday in his keynote, sorry, the Minister, not the Prime Minister, is that he explained this five-layer cake once again, and I am proud to say he made the statement we all know: half of the energy we generate today is green energy, right?

So that layer is sorted. And you and I have a lesson to learn: companies have to contribute more, through solar, hydro, wind, and other methods, right? Where NVIDIA is trying to contribute to the nation today is on the top three layers. We are helping the nation build AI factories, with all the learnings Peter also mentioned. You don’t have to repeat all the mistakes of the last 18 months that we went through in other regions, because they were ahead; India was delayed by at least 12 months or so. But we have put in all those learnings, and the factories have come up far faster than anywhere else in the world, right?

By all means. The second layer is the serving layer, where you build these applications: how do you do inferencing? You’ll be surprised to hear Indian cloud providers never had a control plane, right? We were dependent on other nations to give us a control plane to run a cloud inferencing stack. NVIDIA has open-sourced that work and shared it with the Government of India, and that was the announcement Sarvam made, with the product named, how we call it, Prava, if I’m not mistaken. I hope I’m pronouncing it correctly. That layer is now completely owned by the Government of India and an Indian company, to do the entire inferencing locally, right?

And the last piece is the application layer, right? I’m sure you’ve visited the booths downstairs in Hall 5. I don’t think we’ve left out any booth; every booth is powered by the NVIDIA open-source stack we have given away to build agentic AI platforms and foundation models. That’s the contribution we have made for the nation, and India is right there. I think what’s missing, and I will fully agree with that, is our own chips, right? And that’s the autonomy every country is trying to drive toward. I’m, again, proud to say that NVIDIA is fabless: we don’t produce chips ourselves; we outsource that to Taiwan and a few other countries, essentially.

We have opened up partnerships in many countries, and we are very open to partnering with India as well, to give away our technology so that they can do the modifications and the manufacturing themselves. That’s the last piece that’s left, and I’m very confident that with this Semicon mission, this is going to happen very soon, whether with NVIDIA or with somebody else.

Audience

Thank you so much. We’ll have to get to our next session shortly. In past years it was 10 megawatts, 12 megawatts, and today we have…

Peter Panfil

Gigawatts. Gigawatts, baby. Gigawatts. Gigawatts.

Audience

I just wanted to leave one important piece of information. It took about 8 years to build the first 5 gigawatts, and another 10 gigawatts is going to happen next year. So look at the speed and scale. We all have to work together, and as Jigar rightly mentioned, all 5 layers will have tremendous opportunity: energy, infrastructure, compute, models, applications, and so on and so forth. A huge amount of resources required, a huge amount of support required, and a very exciting time ahead. Thank you so much.

Peter Panfil

And it’s going to be a system approach. System. Systems. Think systems. We as an industry have thought in boxes for too long: I’ve got this compute box or that compute box. It’s now a system. It’s a platform. And that platform generates tokens. The new measure should be tokens per watt per dollar.
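The proposed measure can be sketched as a single figure of merit. The function below and every input number in it are illustrative assumptions, not figures from the session:

```python
# "Tokens per watt per dollar": throughput normalized by both power
# draw and capital cost. All numbers below are hypothetical.
def tokens_per_watt_dollar(tokens_per_s: float, power_w: float, cost_usd: float) -> float:
    return tokens_per_s / (power_w * cost_usd)

# Hypothetical comparison: platform B has 3x the raw throughput of A,
# but draws twice the power and costs twice as much.
a = tokens_per_watt_dollar(tokens_per_s=1e6, power_w=1e6, cost_usd=1e8)
b = tokens_per_watt_dollar(tokens_per_s=3e6, power_w=2e6, cost_usd=2e8)

print(f"A={a:.2e}, B={b:.2e}, B/A={b / a:.2f}")
```

In this made-up case B is 3x faster yet scores only 0.75x on the combined metric, which is the point of measuring tokens against both watts and dollars rather than throughput alone.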

Jigar Halani

Absolutely. Absolutely. Very well said. Thank you so much.

Moderator

He’s one of the guiding minds implementing large-scale data centers for Vertiv and the entire ecosystem. Let me welcome Srikanth on the stage. A good round of applause for Srikanth. And another gentleman we have from Vertiv, with about 35 years of experience in leadership roles in Europe, the Middle East, Africa, India, Southeast Asia, you name the region, he’s been there for many years. His name is Sanjay Sainani. He joined us as Senior Vice President, Technical Business Development; he’s the one strategizing all the technical and business development strategies for Vertiv. Let me welcome Sanjay on the stage. A good round of applause for Sanjay. And I’ll be asking some questions on your behalf.

I would also open the floor maybe sometime later. Welcome. Am I audible? Okay, so let me start with you, Srikanth. Last question first: what is the one learning you want to give the audience from your experience implementing large-scale AI factories? That was going to be my last question, but I want to ask it first. One piece of advice, out of your experience, because you already have good hands-on implementation experience. From a sustainability and implementation standpoint, what is one learning you want to give us in India as we’re building factories at that scale?

Srikanth Cherukuri

Yeah, it’s an interesting question, right? About a year or a year and a half back, I came to India to review some data centers. When I was asked to do that, one of the first things that crossed my mind was: wow, India is building data centers at scale? Because when we were growing up, power used to be a big issue. The reliability of power used to be a big issue. The availability of power used to be an issue. I have been away from the ecosystem for a little while, and when I came here I was amazed at how far things have come in terms of the availability and reliability of power.

And the second thing I was amazed at is the knowledge here in the ecosystem: everything from safety to speed-of-light construction, and the product ecosystem has come such a long way. I think the next step for India in this AI factory build-out is this: if you look at the U.S., it’s a little further ahead in terms of gigawatt scale and deploying high-density liquid-cooled racks. There’s a lot more experience over there, and I think our combined companies have created that experience. I’ve been working with Vertiv for the last four to five years on the R&D work, the engineering work, and then eventually the deployment work.

So we have actually matured a lot in what we consider AI factories versus data centers, and there is a lot of advantage for India in drawing from that experience, from our combined knowledge pool. It’s the same companies: whether you go to Europe or the US or India, it’s still Vertiv and NVIDIA. There has to be strong cross-pollination between the ecosystem in the US and here, strong knowledge sharing. We are in year two or year three of this AI factory build-out worldwide, and as India picks up pace in this journey, there’s a huge opportunity not to relearn all those lessons the hard way, but instead to share that knowledge across our combined teams and build much faster here.

Moderator

That’s first: as thought leaders, both sides need to do that; we need to equip the market for those kinds of things. And let me also tell you, on Vertiv’s side, whatever innovations we are doing in the US we are bringing to India in real time, so that there’s no latency here; whatever happens in the US, we want to bring to India. That takes me to my next question, to Sanjay. Sanjay, so far we have heard about speed, and Peter and Jigar spoke about scale at speed some time back. What is your thought process about speed at scale, about ramping up infrastructure at that speed?

Sanjay Kumar Sainani

I mean, most of us who are in the space of mission-critical applications, and within IT, if you’re dealing with semiconductors, know Moore’s law. That was pretty much 10x in performance roughly every couple of years, and while performance was 10x, the energy required to reach that performance was probably 2-2.5x every generation. So you were getting amazing efficiency: 10x performance for about 2-2.5x additional energy. That’s what you saw for many, many decades, and we all thought Moore’s law had reached a plateau, that there’s not much happening… and this is where companies like NVIDIA, working with the rest of the semiconductor ecosystem, came up with multi-tiered chip structures.

When you look at some of today’s chipsets, they are three-story, four-story, six-story buildings: if you look under a microscope, there are layers and layers of transistors, billions of transistors layered together. And the innovation that has kick-started now is again retracing Moore’s law. If you look at what NVIDIA is announcing with each new generation of chipset, there’s a humongous performance improvement every generation. While the performance jump is 10x, 20x, 50x, the energy consumption is also jumping up; it’s not 10x, but it’s 2x, 2.5x. So, as Jigar and Peter mentioned a little while ago, you have the current generation.
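The efficiency claim embedded in those ratios is worth making explicit. Using the 10x and 2-2.5x figures quoted in this answer, the arithmetic (ours) gives:

```python
# Performance-per-watt implied by the generational ratios in the talk:
# roughly 10x performance at roughly 2.5x the energy.
perf_gain = 10.0       # per-generation performance multiple
energy_gain = 2.5      # per-generation energy multiple (upper end quoted)

perf_per_watt_gain = perf_gain / energy_gain
print(f"~{perf_per_watt_gain:.0f}x better performance per watt each generation")
```

So even though absolute power is climbing, each generation still does roughly 4x more work per watt, which is the efficiency argument behind chasing the newest silicon.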

The current generation is at 130, 140 kilowatts per cabinet, the next one is 250, 260, and the one down the road is 400, 500 kilowatts per rack. And while I don’t want to give away too much, a one-megawatt rack is not too far away; people are already testing it. So now think about it: one megawatt in a rack. A few years ago, the whole data center was one megawatt. The white space would have 200 racks of five kilowatts each, and you had generators, chillers, transformers, facilities supporting that one megawatt. The white space was 80% of your footprint; the rest of the stuff was 20-30% of your footprint. Now this has flipped. You have only one cabinet, but you still need all of that.
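The flip described here is stark when reduced to numbers. The 5 kW and 1 MW figures are from the talk; the division is ours:

```python
# The density "flip": a 1 MW white space used to hold 200 racks at
# 5 kW each; at 1 MW per rack it holds exactly one.
SITE_POWER_KW = 1000     # the whole data center of a few years ago

old_rack_kw = 5          # classic enterprise rack
new_rack_kw = 1000       # the 1 MW rack now being tested

old_racks = SITE_POWER_KW // old_rack_kw
new_racks = SITE_POWER_KW // new_rack_kw

print(f"{old_racks} racks then vs {new_racks} rack now, with the same "
      f"1 MW of generators, chillers and transformers behind it")
```

The supporting infrastructure for a megawatt does not shrink just because the IT load now fits in one cabinet, which is why the footprint ratio inverts.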

You still need one megawatt worth of power: generators, chillers, transformers, everything. So in that context, you see, we are innovating at tremendous speed; whatever you invest in today is outdated two years down the road. That’s number one. That’s a challenge. The second challenge is that it costs a lot of money. Jigar mentioned the cost of a data center might be a billion dollars, or let’s make the number more reasonable, $100 million. But the GPUs sitting inside are probably worth $2 billion. So if I place an order today for $2 billion of GPUs, I want to monetize this project very, very quickly. In the old days, when we built a home, in India and in most other parts of the developing world, we would have people carrying bricks on their heads to build the house.

It takes two years to build a home that way. As a homeowner, you don’t see that as a problem; you’re trying to save $5 here and $2 there, so you’d rather have a person carrying a brick on their head than bring in a cement mixer, because you thought you were saving money. In this world, you’re losing money, because the money you spend is still going to be about the same, probably 10% cheaper, but your return only starts after two or three years, because only when you turn on the switch, only when the tokens start flowing, do you make money on your investment. So it’s speed to token.

Whether you spend $100 million or $1 billion, you need to spend it fast and get the factory up and running very fast, so that the tokens come out very fast and you get your return on the capital employed. If anyone here is from the finance industry, right, return on capital employed is a seriously important KPI. So that’s speed. And the third is scale. The demand is so heavy. Jigar and Peter talked about a few areas with high-impact applications; think of agentic AI and what it can do for you, in how many areas of your daily life it can affect you. The scales are crazy.
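The speed-to-token argument can be made concrete with a toy cash-flow sketch. The $2 billion GPU figure, the $100 million shell, and the "10% cheaper" claim are from the talk; the monthly revenue and the year of delay are hypothetical numbers of ours:

```python
# Why "speed to token" beats construction savings: a slower, cheaper
# build delays the day a large GPU order starts earning.
gpu_capex = 2_000_000_000         # GPUs ordered up front (figure from the talk)
monthly_revenue = 50_000_000      # hypothetical token revenue once live

slow_build_savings = 100_000_000 // 10   # "probably 10% cheaper" on a $100M shell
extra_build_months = 12                  # hypothetical: slower build takes a year longer

forfeited = extra_build_months * monthly_revenue
payback_months = gpu_capex // monthly_revenue   # months just to recover the GPUs

print(f"Saved ${slow_build_savings:,} on construction, "
      f"forfeited ${forfeited:,} of token revenue "
      f"(GPU payback alone is {payback_months} months)")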

And so not only do we need to work on the degree of difficulty in terms of density, we need to deploy it tomorrow morning, and we need to deploy it at massive scale. That’s the kind of problem statement, or opportunity, that we have.

Moderator

So, Sanjay, when you say speed at scale, the idea is that every week or month saved in deployment means faster go-to-market, right? And generally, when you have to move at speed and scale, you also have to design for scale, and that’s where the blueprint discussion starts. Now, Srikanth, why does that blueprint start from a GPU cluster architecture? When you design for scale, you first have to decide which GPU you’re going with today and then scale for that. What’s your thought process on why the blueprint of any data center has to start with the GPU cluster?

Srikanth Cherukuri

Could you repeat the last part again?

Moderator

Okay, when I say we have to move at speed and scale, we have to design for that scale, and that’s where the blueprint starts. The GPU is the first thing we need to start with. Why is that?

Srikanth Cherukuri

Yeah, I think there are a couple of things, right? When we first started designing, in the early phase of AI factories, we were relying on general-purpose-built data centers and changing them rapidly. They weren’t even really AI factories; we were trying to figure out how to make it work, right? It wasn’t designed at scale; these were not purpose-built designs. But the moment came on us so quickly, and NVIDIA and Vertiv together foresaw that moment. We didn’t foresee the scale; we foresaw the moment. And we went very quickly from 10 megawatts to, now we’re talking about, gigawatts.

And infrastructure doesn’t move at that speed. The design can move at that speed, but someone has to actually build out the AI factory, build out the data centers, make so many CDUs. So we were in a phase where we made it work, but in a very "make it work" way, right? If we had to do it all over again, that’s not how we would do it. So now we have a moment where we say: okay, if we were to do it the right way, now we know what the future looks like. That’s why we’ve redefined the data center as an AI factory: a fully integrated design, from the chip design to the system design to the liquid cooling design and the power design.

In fact, even the shell and the campus are purpose-built as an AI factory. So we have to start thinking in terms of design as well as manufacturing, as well as delivery, as well as operation, and we have to start thinking about it at that scale. I think we’ve already started doing that on the design side. NVIDIA now has a DSX reference design, which is actually based on Vertiv SmartRun products and large-scale CDUs. So now we have to start deploying at that scale. That is one of the things our focus is on: how do we deploy it at the speed of light?

Everything from logistics to operations is being redefined. That’s why you have to think of it as an end-to-end integrated product.

Moderator

So you say we have to design for the future; that means every design we do has to be future-proof. What are two important ingredients you want to suggest to our audience, or all of us, when you talk about future-proofing from a design standpoint?

Srikanth Cherukuri

Yeah, I think the biggest one, which I still have to repeat sometimes because it hasn’t caught on: Jigar and others have spoken so much about rack density, but we have to stop thinking about rack density and start thinking about row-level and data-hall-level density. Right now we are slowly retrofitting the entire footprint to match an AI factory design, and we will not be doing that generation to generation; that’s just very expensive. If we keep changing the technology, you’re not only spending a lot on building it, you’re spending a lot on retrofitting it, and we don’t want that, because that’s going to eat into the ROI. We have to stop the mindset of: I’m at 30 today, I’ll do this for 40 tomorrow, this for 100, something else for 200 or for 1 megawatt. We have to start thinking in bounding boxes, data-hall-level or row-level bounding boxes, and that’s what our latest reference designs do: look at the entire pod as one big block. Don’t change the technology; optimize it with a future-proof mindset. Will this work for that one-megawatt rack? And today, with digital twins, you don’t need to actually build it to find out; you can simulate it.

So that’s number one, I would say: take that bounding-box mentality. Then map it technology-wise, right up from the chip to the utility: this redundancy for compute, this redundancy for network, and have the cluster mindset where you map the cluster to the power and thermal perfectly, so that every watt goes into maximizing tokens instead of into redundancy and the old-school way of thinking. If you combine both of those elements, you get a future-proof data center. Again, it’s what the hyperscalers have mastered over the last 10-15 years.

Again, we pretend like AI is the first time we’re doing infrastructure build-out, but it’s not. The hyperscalers have been doing this since the late 2000s, right? So they have mastered the concept of a global reference design: you lock in that design once, and generation to generation you stay consistent. You build a template and you just roll it out.
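The bounding-box idea can be made concrete with a little arithmetic: fix the pod (data-hall) power envelope once, and treat each GPU generation as just a different way of dividing the same envelope. A minimal Python sketch; the envelope and per-rack densities are hypothetical round numbers, not figures from any actual reference design:

```python
# Illustrative sketch: plan a fixed pod-level power envelope once,
# then fit successive rack generations inside it without re-rating
# the pod. All numbers are hypothetical.

POD_ENVELOPE_KW = 1200  # one data-hall "bounding box", fixed at design time

# Hypothetical rack densities for successive GPU generations (kW per rack)
generations = {"gen_A": 30, "gen_B": 100, "gen_C": 200}

def racks_per_pod(rack_kw: int, envelope_kw: int = POD_ENVELOPE_KW) -> int:
    """Racks that fit in the pod; the envelope never changes, only the layout."""
    return envelope_kw // rack_kw

layout = {gen: racks_per_pod(kw) for gen, kw in generations.items()}
print(layout)  # {'gen_A': 40, 'gen_B': 12, 'gen_C': 6}
```

The upstream plant (transformers, chillers, utility connection) is sized to the envelope once; only the in-hall layout changes per generation.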

Moderator

I would like to ask the same question to you, Sanjay. From your perspective, what are two things you would offer, from a design and infrastructure standpoint, to deliver a future-proof design for at least two or three generations, which Peter spoke about?

Sanjay Kumar Sainani

I think, whether we like it or not, the speed of change in the semiconductor, IT, and AI world is very different from the speed of change in the physical world of power and cooling. Even the life cycles and depreciation cycles are very different. So, for example, compute and storage in the IT world are depreciated every three to five years, because that’s the pace of evolution. Generators, chillers, transformers, UPS batteries are depreciated on a 10-to-15-year cycle. So you have to figure out how to run two to three cycles of IT within one cycle of infrastructure. This is a requirement. If you don’t do that, then, to the point Srikanth just made, you will keep on investing, and that’s not good business at all.
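The cycle mismatch is simple integer arithmetic: how many full IT refreshes fit inside one infrastructure depreciation cycle. A minimal sketch using the ranges quoted above:

```python
# Lifecycle mismatch: IT depreciates on a 3-5 year cycle, power and
# cooling plant on a 10-15 year cycle, so the physical plant must
# host 2-3 IT refreshes within its own lifetime.

def it_refreshes_per_infra_cycle(infra_years: int, it_years: int) -> int:
    """Whole IT refresh cycles that fit inside one infrastructure cycle."""
    return infra_years // it_years

# End-of-range examples from the discussion:
print(it_refreshes_per_infra_cycle(10, 5))  # 2
print(it_refreshes_per_infra_cycle(15, 5))  # 3
```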

So now, how do you do that? In the cloud world, again, we mastered this. In very simple English, how are we doing it today in the cloud space? We have a 30-megawatt data center, with 2, 3, 4, or 5 megawatts per data hall. Then we don’t worry about what’s inside the data hall. How does it matter? I have 5 megawatts of power, 5 megawatts of cooling capacity; bring whatever you want, and as long as it’s 5 megawatts, you’re good to run. The only thing you are probably retrofitting, if there is a generational change at all, is the final mile of cable or connectors. Now, that becomes slightly more complicated in the AI world, because your densities are much higher, and while providing power is relatively easy.

Pumping a lot of air, or now pumping a lot of liquid, is not as simple. There’s much more piping happening. In fact, I joke with people that the future belongs to electricians and plumbers, believe me. There’s so much plumbing in a data center now that you will need plumbers in the data center. So the only way to do it is, again, what was mentioned in the previous discussion: look at certain capacity pods, a 2.4-megawatt pod, a 6-megawatt pod. So now you have a pod. It fits a certain number of GPUs of today’s generation, it has a certain power capability and liquid capability, and it’s done. Everything upstream of that, in terms of transformers, generators, utility connections, is designed for 6.2, 6.4, whatever the case.

Now let’s say that over the next three years, generations change. Well, all you have to do is reconfigure the cabinets; nothing else, everything else stays the same. Precisely what we are doing in the cloud world. It took us a couple of years to figure this out, because it was all being done for the first time, but now this will definitely be the way to go going forward.

Moderator

So, Sanjay, let me bring you to a very different topic now: energy efficiency. When we are talking about gigawatt scale, energy conservation is the most important piece. Now, we as a country are tropical, right? We have temperatures ranging from 10 degrees to 48 degrees. Across such a span, what do you think is the right approach to improving the PUE? And maybe water usage: what are the important best practices you would suggest to the market when it comes to saving energy or improving PUE? Of course, because of liquid adoption, PUE has anyway come down to an extent from what it was for normal workloads. But what would be the next stage of best practices you would suggest from your experience?

Sanjay Kumar Sainani

I think the word PUE is, I don’t know if this is the right word, but probably a very abused word in the industry. It’s used so commonly, thrown out there so easily, that everyone believes: well, I have a lower PUE. Well, first of all, I can give you a better PUE without doing anything: I can increase the air temperature. Suddenly your PUE is much better. You think your PUE is better, but now your server fans speed up. The temperature is higher, so they need to move more air, and the IT load increases. But because you increased the temperature, your cooling load reduces, and suddenly you have a better calculation.

But in reality, your total power increases, which you don’t realize; only the PUE looks better. So PUE is a bit of a thrown-around word. But here is how I look at it. I think the PUE in the data hall, in the white space, is the same irrespective of where you build. Because I need liquid at a certain temperature, I need air at a certain temperature, it needs to enter the rack, and the rack is doing what it is doing. It doesn’t matter whether you build in Mumbai or Singapore, I live in Dubai, or you build in Timbuktu; it’s exactly the same. The question is: how do you throw the heat out? Because that depends on the environment outside.
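The point about gaming PUE by raising air temperature is easy to see numerically: PUE is total facility power divided by IT power, so if server-fan power (part of the IT denominator) rises more than cooling power falls, the PUE improves while total power actually gets worse. A sketch with assumed, purely illustrative loads:

```python
# PUE = total facility power / IT power. Raising supply temperature
# cuts the cooling term but speeds up server fans, inflating the IT
# denominator. All loads below are hypothetical.

def pue(it_kw: float, cooling_kw: float, other_kw: float = 0.0) -> float:
    """Power Usage Effectiveness for the given load breakdown."""
    return (it_kw + cooling_kw + other_kw) / it_kw

# Before: normal supply temperature
before = pue(it_kw=1000, cooling_kw=300)   # 1300/1000 = 1.30

# After: warmer supply air. Cooling drops 60 kW, but server fans add
# 80 kW of IT load, so the site draws more power overall.
after = pue(it_kw=1080, cooling_kw=240)    # 1320/1080 ≈ 1.22

print(round(before, 2), round(after, 2))   # 1.3 1.22
print(1000 + 300, 1080 + 240)              # 1300 1320 (total went UP)
```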

So, are you in Singapore? It rains all the time. Are you in Iceland? It’s never more than 20 degrees at any time of the year. Or are you in Dubai, where it reaches 52 degrees in summer? At least that’s what we design for, 52 degrees. And that’s where the different technologies need to be adopted. Whether it is air-cooled chillers, or in some markets water-cooled chillers. One of the unique solution sets we have started to see, especially in India, is that given where our cities sit between the latitudes, our temperature variation during the year is distinctive: very hot in the summer and reasonably good weather in the winter.

So there are some entitlements you can get in the winter. For example, we can use chiller technologies where, during the winter months, we are able to use a bit more free cooling, and in the summer months, or during demand months, we add a bit more chiller capacity: DX technologies, compressor elements that come in and help us add that extra cooling when required. So what we can do is optimize the way we cool across the thermal cycles of the year and bring down the annual PUE, because at the highest point of temperature you will need that cooling whether you like it or not. So it’s this management of PUE through thermal cycles, and some optimization through load cycles as well, because load, especially in the AI world, may not be uniform like a cloud business throughout the year, throughout the day, through every month. And so again, certain optimizations in how you use your CDUs or fan-wall units to bring that energy down will help us improve the PUE.
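The seasonal optimization described above amounts to an energy-weighted average of PUE across thermal cycles: free cooling pulls the cool-month PUE down, and the annual figure lands in between. A minimal sketch, with the month counts and monthly PUEs as assumptions for a constant IT load:

```python
# Annual PUE across thermal cycles: free cooling in cool months,
# compressor (DX) trim in hot months. IT load assumed constant, so
# month counts serve as energy weights. All values illustrative.

def annual_pue(months_and_pues):
    """Energy-weighted annual PUE for a constant IT load."""
    total_months = sum(m for m, _ in months_and_pues)
    return sum(m * p for m, p in months_and_pues) / total_months

cool_months, hot_months = 5, 7
pue_cool, pue_hot = 1.15, 1.35   # hypothetical seasonal PUEs

year = annual_pue([(cool_months, pue_cool), (hot_months, pue_hot)])
print(round(year, 3))  # 1.267
```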

Srikanth Cherukuri

One thing I would say about that is: the design is there, right? Whether it’s the water temperatures or otherwise, we’re all designing to the same targets. The design is there. Where it becomes extremely manual is that we’re still in the traditional mode of operating data centers, where we have a large control room and we are optimizing for uptime and safety, safety in the sense that there’s no risk of downtime. We’re very risk-averse. But even if we want to do what Sanjay just suggested, which is optimize that, there is no automated way of doing it, because the chip-level telemetry doesn’t talk to the data-center-level telemetry. And that’s what NVIDIA’s reference design is looking to change today. Again, if you were to retrofit a brownfield facility, this will be harder.

But if you were to build purpose-built, and of course this is an opportunity for India, if you’re building an AI factory today, there is no reason why you can’t integrate telemetry from the chip all the way through the facility. There’s no reason why you cannot simulate how to optimize that: run a representative sample workload and see how much energy you save. I’m sure that simulation will tell you that you’ll save a ton of energy without any human intervention.
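The chip-to-facility telemetry integration described here can be sketched as a simple control loop: GPU inlet temperatures drive the CDU supply setpoint instead of a fixed worst-case value. Everything in this sketch, the function name, thresholds, and step size, is an illustrative assumption, not NVIDIA’s or anyone’s actual reference design:

```python
# Illustrative chip-to-facility control loop: read GPU inlet
# temperatures and trim the CDU supply setpoint so cooling tracks
# the actual workload rather than a fixed worst case. All thresholds
# and steps are hypothetical.

def next_cdu_setpoint(gpu_inlet_temps_c, setpoint_c,
                      target_c=35.0, step_c=0.5,
                      min_c=25.0, max_c=40.0):
    """Nudge the CDU supply setpoint based on the hottest GPU's headroom."""
    hottest = max(gpu_inlet_temps_c)
    if hottest > target_c:          # running hot: supply colder liquid
        setpoint_c -= step_c
    elif hottest < target_c - 2:    # plenty of headroom: save chiller energy
        setpoint_c += step_c
    return min(max(setpoint_c, min_c), max_c)

# Light load: setpoint drifts up, saving cooling energy
print(next_cdu_setpoint([30.0, 31.5, 29.8], setpoint_c=32.0))  # 32.5
# Heavy load: setpoint pulled down to protect the hottest GPU
print(next_cdu_setpoint([36.2, 35.1], setpoint_c=32.0))        # 31.5
```

In practice this loop would run against real telemetry feeds; the point of the simulation Srikanth mentions is to validate exactly this kind of policy against a sample workload before deploying it.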

Moderator

You spoke about retrofits. Normal cloud workloads have been running at, say, 5, 10, or 15 kilowatts of load. What do you see when it comes to AI or GPU augmentation on the same platform, in the same aisle? Will the retrofit be easy or difficult, and what would be your one or two tips for doing it? For example, if you are talking about AI optimization through telemetry specifically: there is an existing workload running at small-to-medium densities, but in that row you want to put a GPU, liquid-cooled or air-cooled, which means you are retrofitting some amount of passive infrastructure. How difficult or easy would that actually be?

Srikanth Cherukuri

I think, again, if you go back to that journey, even the design and the retrofit were extremely cumbersome, and even today, at the enterprise level, it is extremely difficult. If I were an enterprise CTO looking to deploy AI compute, and I look at our experience over the last one year, I might actually be a little wary. You’re looking at a very cumbersome process, everything from design, to following local regulations for the high power and liquid cooling, to having the secondary loop built out. That could be pretty scary at the end of the day. But what Vertiv is doing, for example, with SmartRun, a fully integrated mechanical-electrical system that can be purpose-built for any pod size and can track our most scalable reference designs, I think that would be the way to go. That’s why even Jigar mentioned following our reference design as closely as possible. All these innovative designs and offerings will improve the adaptability for future change, is what I can say.

Moderator

My last question to you: there seem to be some NVIDIA-ready design offerings, or certification offerings. Would you like to share some insight about that? Certification programs for NVIDIA-ready data centers, or NVIDIA-ready designs?

Srikanth Cherukuri

Yeah, I think whether it’s a colo, or whether it’s at cloud scale, at NCP scale, what we’ve been doing from the beginning, just as we’ve been enabling other partners, is enabling a lot of colo partners to build NVIDIA-ready data centers. Okay. And that optimizes for the water temperatures we’re recommending, the pod sizes we’re recommending, the redundancy we’re recommending, the integration between telemetry that we’re recommending. So for the partners that have followed that design, we have the program, whether it’s DGX-ready or NVIDIA-ready. Now, the only thing I would encourage these partners, and also those who are looking at this vertical, to ask is: who is actually doing that at speed of light, in a sense?

A lot of the data center industry is still thinking more like real estate developers: waiting, for example, with these tranches of data centers that you’re purpose-building for everyone. That is the traditional way of thinking, saying, I’m giving this space, this cage, to you, and I’m going to build it out the way you want it. But you can’t wait. The way the industry is operating, no one can wait for that, right? So the partners who are building purpose-built AI factories are part of, or want to be part of, that future: building at large scale, and whether they hand over those tranches or not, they’re built on NVIDIA’s design, so when the customer comes, you’ve already built basically according to the specs.

Moderator

That’s really insightful. Many of our colo customers will take good insights from that. With this, I would like to open it up to the audience for any other questions for them.

Audience

Hi, I’m Dal Bhanushali. Thanks for the talks, this one and the previous one. We have been talking about how we will scale India in the future, but we also need to scale the talent. I wanted to get some viewpoints from you, from your experiences: as we double capacities, you also need the people to run the data centers. We need DC ops specialists. We can run the NVIDIA-optimized containers on our laptops, but those water-cooled chillers, those skills are not common and cannot be easily taught in schools today. So what’s the plan? How do you think we should approach this in the future? Especially doubling every year is a huge challenge, right?

Moderator

So I’ll just take this question for a while. At Vertiv, we realized this challenge much ahead of time, and we started a lot of skill development programs. First things first is the operation and management of the infrastructure, okay? That’s something we have started in collaboration with the Indian Institute of Technology, Chennai, where we take diploma and B.Tech graduate engineers and train them in how to manage the operation and maintenance of data centers. That’s about an eight-to-twelve-week program, an extensive program, off-site as well as on-site. So this is one part. And there are many other programs on the cards to develop design, engineering, and many other things, actually.

That’s what I can tell you. And these programs are already available on the web; anybody can have a look and enroll, okay? Is there anything anybody would like to add about skill development, or any other development activity NVIDIA would like to do alongside us when we are scaling so high?

Srikanth Cherukuri

I think that’s a question for us as well. Could you repeat the last part of the question, if you don’t mind?

Moderator

So he’s asking about how the scale is going up. There’s a lot of resources required, and skill development is also a big challenge. So while we are taking care of the operation and management piece, developing a lot of people through colleges and engineering institutions, what are the initiatives NVIDIA is also taking to develop the skills within the ecosystem?


Srikanth Cherukuri

Yeah, I think there are a couple of things I would say. One is, as you keep going up in scale, the prefab systems that Vertiv is developing are going to be absolutely critical, because the enterprise-level difficulties I was talking about can all be solved with them. A lot of times you’re waiting for the data center, waiting for the data hall to get ready, before you can deploy the compute systems. And each of these has dependencies on the others, all centered around that space, right? When you’re doing off-site prefab integration, prefab manufacturing, you can do it all in parallel.

You can do it all at scale, in parallel, and then bring it into one place. In the meantime, you can do the testing off-site at the factory. A lot of the testing today is done in the data hall. You could avoid all of that, shift it all to the left by moving it outside the data hall, then bring it in once the data hall is ready, once the shell is built up, and you could really condense that build-out.

Sanjay Kumar Sainani

Srikanth, as you rightly say, we are taking a lot of activity that is supposed to happen on-site and moving it off-site: pre-engineering it, developing and building it at scale, and then deploying it at the site. So that’s the way forward. Any more questions? Otherwise, we can hold it here.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while providing indigenous solutions for critical data.”

The knowledge base states that “Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while providing indigenous solutions for critical data” [S1].

Confirmed (high)

“Google announced new data‑centre capacity in Vizag, which will keep innovation and any data‑residency things within the boundaries of India.”

Google’s planned ₹80,000-crore hyperscale campus in Visakhapatnam (Vizag) is documented as a key data-centre investment aimed at AI and data-residency in India [S85].

Confirmed (high)

“Google has made JEE Main exams, mock exams … available on Gemini free of cost for any student to try.”

Google’s Gemini platform now includes full-length JEE practice tests and mock exams that are freely accessible to students, confirming the rollout of AI-powered JEE preparation tools [S75].

Additional Context (medium)

“India’s ambition to become a global AI hub must rest on complete AI sovereignty and will happen in a few months, not years.”

The knowledge base notes that sovereignty is a contested concept in India, with concerns about isolation and the need for balanced approaches, suggesting that the timeline “few months” is not universally accepted [S59].

Additional Context (medium)

“Google would address the digital divide and serve under‑privileged and rural populations.”

Google’s “Internet Saathi” initiative, which provides internet access to rural women through community networks, illustrates the company’s efforts to bridge the digital divide in underserved Indian regions [S93].

Additional Context (low)

“Google has a history of inclusivity across its products, empowering billions of users.”

A knowledge-base excerpt highlights Google’s longstanding focus on inclusivity through services like Gmail and Search, reinforcing its claim of empowering billions [S5].

External Sources (97)
S1
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S2
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 5- Sudhakar Gandhey, Former Senior Director at American Express Bank, built Access Cadets Technologies …
S3
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — -David Freed- Role/Title: Corporate Vice President and leader of LAM Research’s advanced analytical and simulation softw…
S4
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Sanjay Kumar Sainani- Senior Vice President, Technical Business Development at Vertiv, 35+ years experience in leadersh…
S5
https://dig.watch/event/india-ai-impact-summit-2026/from-kw-to-gw-scaling-the-infrastructure-of-the-global-ai-economy — He’s one of the guiding principles to implement a lot of large -scale data centers for Vertiv or all the entire ecosyste…
S6
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Sudeesh VC Nambiar
S8
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Srirang Deshpande- Part of strategy for India, managing Vertiv strategy and market development
S9
Connecting the Unconnected in the field of Education Excellence, Cyber Security &amp; Rural Solutions and Women Empowerment in ICT — – **Ninad S. Deshpande** – Ambassador and Deputy Permanent Representative of India to the WTO in Geneva
S10
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S11
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S12
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — -Akanksha Swarup- Moderator/Host conducting interviews and panel discussions
S14
Akanksha Singh — Singh, A. (2020). Indian Perspectives on the ‘Responsibility to Protect’.International  Studies, 57(3): 296-316. (ISSN: …
S15
The reality of science fiction: Behind the scenes of race and technology — ‘Every desireis an endand every endis a desirethenthe end of the worldis a desire of the worldwhat type of end do you de…
S16
IGF Retrospective – Past, Present, and Future — – **Nitin Desai** – Role/Title: Former MAG chair (approximately 5 years), chaired the working group on Internet governan…
S17
From KW to GW Scaling the Infrastructure of the Global AI Economy — Google’s Nitin Gupta reinforced this collaborative approach to sovereignty, emphasising that “sovereignty and innovation…
S19
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Ankush Sabharwal- Jigar Halani- Nitin Gupta – Peter Panfil- Jigar Halani- Sanjay Kumar Sainani
S20
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S21
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S22
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S23
From KW to GW Scaling the Infrastructure of the Global AI Economy — To discuss this, I have two friends, two industry veterans from Vertiv and NVIDIA to discuss the Fireside Chat. So we ha…
S24
From KW to GW Scaling the Infrastructure of the Global AI Economy — – Audience- Moderator- Srikanth Cherukuri – Peter Panfil- Sanjay Kumar Sainani- Srikanth Cherukuri – Srirang Deshpande…
S25
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — “Do you think AI Summit has been successful?”[68]. “But, in the next 3 -5 years, what are the main targets for India to …
S26
The Innovation Beneath AI: The US-India Partnership powering the AI Era — I agree with him. I think that the IBM analogies are very good. Very good one. I think we are all focused on the core an…
S27
From Innovation to Impact_ Bringing AI to the Public — If we don’t make for it, our all compounded historical knowledge will be lacking in the next generation. So instead of a…
S28
WS #270 Understanding digital exclusion in AI era — Moderator: Thank you so much, Rashad. I really liked the point when you talked about the human-centred approach when w…
S29
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Furthermore, the synthesis highlights the positive role of multi-sectoral collaboration in driving disability inclusion….
S30
What policy levers can bridge the AI divide? — ## Key Challenges and Opportunities Lacina Kone: Before talking about the bridging of AI, bridging the gap of the AI, t…
S31
US media executives call for legislation on AI content compensation — Media executives and academic experts testified before the Senate Judiciary Subcommittee on Privacy, Technology and the…
S32
Climate change and Technology implementation | IGF 2023 WS #570 — Speaker:Thank you, Millennium. I’m Sakura Takahashi from Japan. I’m speaking here today on behalf of Climate Youth Japan…
S33
Big Tech boosts India’s AI ambitions amid concerns over talent flight and limited infrastructure — Majorannouncementsfrom Microsoft ($17.5bn) and Amazon (over $35bn by 2030) have placed India at the centre of global AI …
S34
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Number one, they said, you all come and panel with us at a right price point, right quality, and you declare how much GP…
S35
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — . . . . . . . . . . . . . . one of our keynote speakers, they said autonomous weapons are going to AI -based autonomous …
S36
Oversight of AI: Hearing of the US Senate Judiciary Subcommitee — So I could go into that more, but I want to flag that. Second is on jobs past performance history is not a guarantee of …
S37
Enhancing rather than replacing humanity with AI — A grandmother in Poland and her grandson, growing up in Dubai, sit together on a video call. She speaks only Polish, and…
S38
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi explained that the traditional concept of individual servers as computing units is becoming obsolete in AI ap…
S39
Part 7: ‘Converging realities: Embedding governance through digital twins’ — As digital and physical systems increasingly interact, they give rise to what we can call embedded realities, i.e.enviro…
S40
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S41
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S42
Designing Indias Digital Future AI at the Core 6G at the Edge — This comment connects technical sovereignty to cultural and ethical sovereignty, highlighting that AI systems trained on…
S43
How AI Is Transforming Indias Workforce for Global Competitivene — Artificial intelligence | Human rights and the ethical dimensions of the information society Policy, Governance, and In…
S44
AI-Driven Enforcement_ Better Governance through Effective Compliance &amp; Services — “So I am happy to report that these seven sutras which initially we started as a recommendation or guiding principles fo…
S45
Lightning Talk #245 Advancing Equality and Inclusion in AI — The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in …
S46
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad This comment quantifies the massive scale of energy transformation required for AI infrastructure…
S47
Fireside Chat Intel Tata Electronics CDAC &amp; Asia Group _ India AI Impact Summit — Vivek highlights design choices that improve energy efficiency, such as liquid cooling and power‑aware circuits, achievi…
S48
From KW to GW Scaling the Infrastructure of the Global AI Economy — “We have a 30 megawatt data center”[61]. “I think the word PUE is, I don’t know if this is the right word, but probably …
S49
How to Project Europe’s Power / Davos 2025 — Mentions the inefficiency of planning energy investments at national levels rather than European level. Pouyanné calls …
S50
1.1 CHALLENGES IN ENVIRONMENTAL INNOVATION — Second, market actors can lack sufficient information about future prices and costs . For many companies and individuals…
S51
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Development Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand…
S52
The Foundation of AI Democratizing Compute Data Infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S53
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Giordano Albertazzi — Construction approaches overview The video contrasts conventional data‑center construction, which follows a sequential,…
S54
Prosperity Through Data Infrastructure — Data integration proves to be a complex task as there are often overlapping pieces of infrastructure that need to work t…
S55
[Tentative Translation] — Looking back at the Science, Technology, and Innovation Policy during the Fifth Basic Plan, digitalization, which is the…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Industry Perspectives: Systems Integration Challenges Anne Flanagan: Hello, apologies that I’m not there in person t…
S57
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Issues particularly evident in joint or cross-force environments where systems must function across organizational, nati…
S58
Indias Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad, Chairperson of Central Electricity Authority, outlined India’s energy readiness for AI infrastruc…
S59
Panel Discussion Data Sovereignty India AI Impact Summit — Both speakers emphasize that achieving data sovereignty requires collaborative efforts between government and private se…
S60
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — And that’s the problem we tried to solve. You know way back at that time Jensen was in India. I happened to get to meet …
S61
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister. It’s an honor to be here, and under your leadership, you have elevated technology from a sect…
S62
AI Meets Agriculture Building Food Security and Climate Resilien — This insight distinguishes AI deployment from traditional technology rollouts, emphasizing iterative improvement over pe…
S63
Open Forum #76 Digital Literacy As a Precondition for Achieving Universal a — Focus on inclusive access for underserved populations Policies should encourage inclusiveness focusing on rural access,…
S64
Digital divides &amp; Inclusion — By improving connectivity and expanding access to the internet, more individuals will be able to bridge the digital divi…
S65
2015 — – The first group, consisting of Targets 2.1, 2.2 and 2.3 is concerned with the inclusion of particular development cate…
S66
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S67
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S68
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — – Hakikur Rahman- Ranojit Kumar Dutta Barriers to ICT employment include lack of advanced skills (46%) and poor interne…
S69
Multistakeholder Partnerships for Thriving AI Ecosystems — Dr. Bärbel Koffler emphasized that governments must create frameworks and governance structures to ensure AI benefits ar…
S70
From KW to GW Scaling the Infrastructure of the Global AI Economy — Sovereignty and innovation must run together, not as competing choices, with Google building data centers in India while…
S71
WS #204 Closing Digital Divides by Universal Access Acceptance — ### Indigenous Rights and Data Sovereignty Steinhauer-Mozejko Phil: Thank you. I’m gonna try not to get distracted by s…
S72
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — Both speakers acknowledge the challenge of making government data available for AI innovation while protecting sovereign…
S73
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — Ankush highlights that only a small fraction of Indians speak English, making regional language models essential for mas…
S74
Pre 3: Exploring Frontier technologies for harnessing digital public good and advancing Digital Inclusion — Charlotte Gilmartin: Thank you very much. I’m just going to share my screen and show the slides. Because I only have fiv…
S75
AI learning tools grow in India with Gemini’s JEE preparation rollout — Google is expanding AI learning tools in India by adding full-length Joint Entrance Exam practice tests to Gemini, targeti…
S76
Lightning Talk #245 Advancing Equality and Inclusion in AI — The session will present measures that can be taken to operationalise safeguards and remedies against discrimination in …
S77
AI Automation in Telecom_ Ensuring Accountability and Public Trust India AI Impact Summit 2026 — “So one of the key application, key product what we have developed is Fraud Pro”[41]. “We are able to today identify fra…
S78
India’s Roadmap to an AGI-Enabled Future — Shri Ghanshyam Prasad: This comment quantifies the massive scale of energy transformation required for AI infrastructure…
S79
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — This comment is particularly thought-provoking because it challenges conventional thinking about computing architecture….
S80
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Vivek highlights design choices that improve energy efficiency, such as liquid cooling and power‑aware circuits, achievi…
S81
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — “You can do air-cooled carts and then just use air-cooled servers and running up to 100 to 300 billion parameter model…
S82
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S83
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8-year-old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S84
Keynotes — O’Flaherty, paraphrasing Professor Bradford, calls for a fundamental shift in how the technology regulation debate is fr…
S85
‘AI City Vizag’ moves ahead with ₹80,000-crore Google hyperscale campus in India — Andhra Pradesh will sign an agreement with Google on Tuesday for a 1-gigawatt hyperscale data centre in Visakhapatnam. Off…
S86
AI Innovation in the UK Advances with new Google initiatives — Google is intensifying its investment in the UK’s AI sector, with plans to expand its data residency offerings and launch …
S87
Google invests $1.1 billion in Finnish data centre expansion for AI growth — Google has revealed plans to inject an additional $1.1 billion into its data centre campus expansion in Finland, emphasisi…
S88
€5.5bn Google plan expands German data centres, carbon-free power and skills programmes — Google will invest €5.5bn in Germany from 2026 to 2029, adding a Dietzenbach data centre and expanding its Hanau facility….
S89
Private AI Compute by Google blends cloud power with on-device privacy — Google introduced Private AI Compute, a cloud platform that combines the power of Gemini with on-device privacy. It delive…
S90
Introducing Gemini, Google’s response to ChatGPT — Google’s Alphabet introduces Gemini, its state-of-the-art AI model adept at handling various data formats such as video, …
S91
Google launches Gemini Live and Pro/Ultra AI tiers at I/O 2025 — At Google I/O 2025, the company unveiled significant updates to its Gemini AI assistant, expanding its features, integrat…
S92
Google expands Gemini with real-time AI features — Google has begun rolling out real-time AI features for its Gemini system, allowing it to analyse smartphone screens and ca…
S93
Criss-cross of digital margins for effective inclusion | IGF 2023 Town Hall #150 — In order to expand high-speed internet connectivity in remote or inaccessible areas, the study suggests exploring innova…
S94
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — The digital divide refers to the gap between those who have access to digital technology and those who do not. This divi…
S95
Sangeet Paul Choudary — Drivers also organize themselves to outwit the platform’s algorithms. Qualitative research as well as anecdotal evidence…
S96
The Geopolitics of Materials: Critical Mineral Supply Chains and Global Competition — Jonathan Price outlined the stark mathematics: copper demand will double by 2035, but supply will fall 30% short. This c…
S97
WS #214 Youth-Led Digital Futures: Integrating Perspectives and Governance — Denise Leal: Perfect. Thank you. So when it comes to Indigenous data, there are a lot of questions that we have to …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ankush Sabharwal
1 argument · 172 words per minute · 287 words · 99 seconds
Argument 1
India as AI Hub & Purpose‑Driven Models
EXPLANATION
Ankush asserts that India will become a global hub for AI development, driven by its aspirational stance and willingness to adopt new technologies for societal and business benefit. He emphasizes that Bharat GPT follows a purpose‑and‑trust mantra, focusing on specific use‑cases and tailoring model size to enterprise needs.
EVIDENCE
He notes India’s aspiration to adopt AI for welfare and predicts rapid emergence of a hub within months [2]. He explains Bharat GPT’s tagline “AI with purpose and trust”, the habit of beginning with the end in mind, selecting model size based on the problem, and partnering with domain experts to train models on relevant data rather than building generic large language models [28-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s ambition to become a global AI hub, with Bangalore highlighted as a focal point, is discussed in [S25]; the emphasis on purpose-driven, retrained models for trust and bias mitigation aligns with observations in [S27]; large AI investments by major tech firms further reinforce the hub narrative [S33].
MAJOR DISCUSSION POINT
Positioning India as a purpose‑driven AI leader
Nitin Gupta
1 argument · 140 words per minute · 366 words · 156 seconds
Argument 1
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
EXPLANATION
Nitin argues that AI sovereignty and innovation are not mutually exclusive; both must progress together. Google is addressing sovereignty by building large data centres in India and offering an on‑premise “Data Box” that provides full Google Gemini AI capabilities while keeping data within the customer’s premises.
EVIDENCE
He describes Google’s new Vizag data centres that keep data residency within India while serving all personas [8]. He then details the indigenous Data Box that runs Google AI services on-premise, giving customers the power of a Google data centre inside their own facilities, including hardware control [10-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Google’s approach of building sovereign data centres in India and offering on-premise AI boxes matches the collaborative sovereignty model described in [S1]; the focus on an indigenous GPU layer and startup participation is echoed in [S34]; broader industry investment supporting the balance of sovereignty and innovation is noted in [S33].
MAJOR DISCUSSION POINT
Balancing AI sovereignty with innovation through on‑premise solutions
AGREED WITH
Akanksha Swarup, Peter Panfil
DISAGREED WITH
Akanksha Swarup
Akanksha Swarup
1 argument · 166 words per minute · 321 words · 115 seconds
Argument 1
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption
EXPLANATION
Akanksha raises concerns about ensuring AI benefits reach under‑privileged and rural populations in India. She asks how Google plans to make AI tools accessible and inclusive, reflecting the Prime Minister’s emphasis on digital inclusion.
EVIDENCE
She frames the question by praising India’s AI story and then directly asks about infrastructure, resources, and inclusivity, citing the Prime Minister’s concern and requesting concrete steps to bridge the divide [4-6][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a human-centred, inclusive AI design to avoid digital exclusion is highlighted in [S28]; AI’s role in improving accessibility for persons with disabilities and fostering multi-sector collaboration is covered in [S29]; policy levers to bridge the AI divide, especially broadband access, are discussed in [S30].
MAJOR DISCUSSION POINT
Ensuring AI reaches marginalized communities
AGREED WITH
Nitin Gupta, Peter Panfil
DISAGREED WITH
Nitin Gupta
Sudeesh VC Nambiar
2 arguments · 136 words per minute · 205 words · 90 seconds
Argument 1
AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing
EXPLANATION
Sudeesh explains that the railway ticketing platform faces huge demand‑supply mismatches, especially during Tatkal booking windows, leading to bot abuse. Advanced AI solutions are deployed to detect and curb automated misuse, helping balance demand and supply.
EVIDENCE
He describes the peak booking times (8 am, 10 am, 11 am) and the resulting demand-supply mismatch, noting that bots are used to exploit the system and that a “cat-and-mouse” game ensues, which is being addressed with AI [16-19].
MAJOR DISCUSSION POINT
Using AI to protect high‑traffic public services
AGREED WITH
Nitin Gupta, Jigar Halani
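The cat-and-mouse dynamic Sudeesh describes is usually fought with behavioural signals rather than any single rule. A minimal sketch of one such signal — request cadence — is shown below; the feature choice and thresholds are illustrative assumptions, not IRCTC’s actual system:

```python
from statistics import mean, pstdev

def looks_automated(timestamps: list[float],
                    min_interval_s: float = 0.5,
                    max_jitter_s: float = 0.05) -> bool:
    """Flag a booking session whose request cadence is superhumanly fast
    or superhumanly regular. Real bot-mitigation systems combine many
    such signals; both thresholds here are hypothetical."""
    if len(timestamps) < 3:
        return False  # too few events to judge cadence
    gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
    too_fast = mean(gaps) < min_interval_s      # humans cannot sustain this rate
    too_regular = pstdev(gaps) < max_jitter_s   # scripted loops lack natural jitter
    return too_fast or too_regular

# A scripted session firing every 100 ms with no jitter:
bot_session = [0.0, 0.1, 0.2, 0.3, 0.4]
# A human filling a form with variable pauses:
human_session = [0.0, 2.3, 5.1, 9.8]
```

During a Tatkal window such a detector would run alongside continuously retrained models, which is consistent with the “continuously learns” framing in the evidence above.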
Argument 2
Indigenous Layer and Startup Collaboration Strengthen the Solution
EXPLANATION
Sudeesh highlights that the AI solution includes an indigenous component and leverages collaborations with Indian startups for data analysis and real‑time monitoring. This hybrid approach combines global technology strength with local expertise.
EVIDENCE
He confirms the presence of an indigenous layer, mentions a startup that continuously analyses social-media signals and collaborates with the global tech provider, and notes that the model continuously learns to mitigate automated attacks [21-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration between Indian startups and global technology providers, together with an indigenous AI layer, is described in [S1] and reinforced by the explicit mention of an indigenous component in [S5]; the broader sovereign AI initiative that includes startup GPU contributions aligns with [S34].
MAJOR DISCUSSION POINT
Localizing AI through indigenous layers and startup partnerships
AGREED WITH
Nitin Gupta, Jigar Halani
Peter Panfil
2 arguments · 139 words per minute · 2977 words · 1275 seconds
Argument 1
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
EXPLANATION
Peter stresses that building AI capacity demands rapid deployment of compute at scale. He advocates beginning design from the GPU chip level, creating modular pods that can be replicated, rather than starting from power‑grid considerations.
EVIDENCE
He outlines three pillars (speed at scale, moving from chip to grid, and sustainability), explaining that faster GPU pod deployment accelerates token generation and that starting at the chip defines the most efficient infrastructure [106-121].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The chip-first, GPU-centric design philosophy for rapid AI deployment is advocated in [S1]; the concept of AI pods as the fundamental compute unit supports this view in [S38]; a system-level perspective emphasizing modular compute boxes is also noted in [S5].
MAJOR DISCUSSION POINT
Prioritising chip‑first design for rapid AI factory rollout
AGREED WITH
Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
DISAGREED WITH
Srikanth Cherukuri
Argument 2
AI Will Become an Autonomous Background Function (like breathing) that Frees Human Capacity
EXPLANATION
In response to audience curiosity, Peter likens future AI to automatic bodily functions such as breathing and blinking, suggesting AI will handle mundane tasks autonomously, freeing humans to focus on higher‑order work.
EVIDENCE
He uses the breathing/blinking analogy, stating AI will run in the background, learn user habits, and improve performance, thereby liberating human mental capacity for productive activities [445-458].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion of AI creating new job categories and freeing human capacity aligns with insights on emerging professions in [S36]; the perspective of AI enhancing rather than replacing humanity is explored in [S37].
MAJOR DISCUSSION POINT
AI as an invisible, productivity‑enhancing layer
AGREED WITH
Akanksha Swarup, Nitin Gupta
Jigar Halani
1 argument · 170 words per minute · 3536 words · 1246 seconds
Argument 1
GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment
EXPLANATION
Jigar argues that AI infrastructure should be built around GPU‑centric pods, treating the pod as the fundamental building block. This approach enables future‑proofing, allowing easy upgrades across GPU generations while maintaining rapid deployment.
EVIDENCE
He describes the AI-factory concept, the need to think beyond individual racks to whole rows or pods, and cites reference designs that support multiple GPU generations within a single pod, enabling seamless upgrades [99-104][120-124][254-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift to GPU-centric pods as the core building block for AI infrastructure is detailed in [S38]; pod-first design recommendations are also present in [S1]; a system-approach that treats compute boxes as platforms is discussed in [S5].
MAJOR DISCUSSION POINT
Pod‑centric, GPU‑first design for scalable AI infrastructure
AGREED WITH
Peter Panfil, Srikanth Cherukuri, Sanjay Kumar Sainani
Srirang Deshpande
1 argument · 124 words per minute · 274 words · 131 seconds
Argument 1
Transition from Outside‑In to Inside‑Out Data‑Center Design for Gigawatt‑Scale AI
EXPLANATION
Srirang outlines a shift in data‑center planning: moving from a traditional “outside‑in” approach (building infrastructure first) to an “inside‑out” model where workloads and GPU requirements drive the overall design, essential for gigawatt‑scale AI deployments.
EVIDENCE
He notes that earlier data centres were built from the outside in, but now the process starts with GPU or workload decisions, after which the full infrastructure is designed around them [71-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inside-out methodology, where GPU workloads drive overall data-center design, is outlined in [S1]; the evolution toward AI pods as the primary unit supports this shift in [S38]; a broader system-centric view is mentioned in [S5].
MAJOR DISCUSSION POINT
Reversing design methodology to centre AI workloads
Srikanth Cherukuri
3 arguments · 171 words per minute · 2202 words · 770 seconds
Argument 1
Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
EXPLANATION
Srikanth proposes that AI‑center designs should move from rack‑level density to row‑ or data‑hall‑level planning, using digital twins and bounding‑box concepts to simulate and future‑proof deployments, and integrating chip‑level telemetry with data‑center controls for optimal operation.
EVIDENCE
He explains the need to think in terms of row-level density, using digital twins to simulate designs, and aligning chip-to-utility mapping for efficiency, while highlighting the lack of telemetry integration between chips and data-center systems [665-669][739-748].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The use of digital twins for AI-center design and simulation is described in [S39]; row-level density planning and pod-centric concepts are discussed in [S38]; the overall design methodology aligns with the inside-out approach in [S1].
MAJOR DISCUSSION POINT
Designing AI data centres with holistic, simulation‑driven, telemetry‑enabled approaches
AGREED WITH
Moderator, Sanjay Kumar Sainani
DISAGREED WITH
Peter Panfil
Argument 2
Integrating Chip‑Level Telemetry with Data‑Center Controls Enables Real‑Time Energy Optimisation
EXPLANATION
He emphasizes that connecting chip‑level performance data to data‑center management systems can automatically optimise energy use, reducing reliance on manual interventions and improving PUE.
EVIDENCE
He points out the current gap where chip telemetry does not talk to data-center telemetry, and describes NVIDIA’s reference design that aims to bridge this gap for real-time energy optimisation [739-748].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The current gap between chip telemetry and data-center management, and the need for integrated monitoring, is highlighted in [S1]; system-level integration concepts are also referenced in [S5].
MAJOR DISCUSSION POINT
Telemetry integration for energy efficiency
AGREED WITH
Sanjay Kumar Sainani
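The missing chip-to-facility loop Srikanth describes can be sketched as a toy control law: read live GPU temperatures, then run the cooling plant as warm as safety allows. Everything below — the constants, the safety margin, and the linear rule — is an illustrative assumption, not NVIDIA’s or Vertiv’s reference design:

```python
def cooling_setpoint_c(gpu_temps_c: list[float],
                       target_margin_c: float = 10.0,
                       throttle_temp_c: float = 85.0,
                       lo: float = 18.0, hi: float = 32.0) -> float:
    """Choose a supply-air/coolant setpoint from chip telemetry: run as
    warm as possible (saving cooling energy) while keeping the hottest
    GPU a safe margin below its throttle temperature. All constants are
    hypothetical placeholders."""
    hottest = max(gpu_temps_c)
    headroom = throttle_temp_c - target_margin_c - hottest
    # Warm the setpoint when there is thermal headroom, cool it when not,
    # clamped to the plant's operating envelope.
    return min(hi, max(lo, 25.0 + headroom * 0.5))

# Lightly loaded pod -> warmer (cheaper) setpoint than a hot pod:
idle_setpoint = cooling_setpoint_c([60.0, 62.0])
busy_setpoint = cooling_setpoint_c([80.0, 82.0])
```

The point of the sketch is the interface, not the control law: once chip telemetry reaches the building-management layer, this kind of adjustment can be automatic rather than a manual intervention, which is the gap Srikanth highlights.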
Argument 3
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
EXPLANATION
Srikanth highlights Vertiv’s prefabricated, modular systems and associated training programmes as ways to accelerate AI‑factory deployment and address the shortage of skilled personnel.
EVIDENCE
He mentions Vertiv’s smart-run integrated mechanical-electrical pods, reference designs, and the importance of following these designs to improve adaptability, as well as the need for off-site prefab integration to speed up construction [751-758].
MAJOR DISCUSSION POINT
Prefabrication and training as levers for rapid, skilled deployment
AGREED WITH
Moderator, Sanjay Kumar Sainani
Sanjay Kumar Sainani
3 arguments · 170 words per minute · 2078 words · 733 seconds
Argument 1
Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
EXPLANATION
Sanjay explains that using standardized GPU pods enables data centres to support several generations of GPUs without major redesign, simply by reconfiguring cabinets, thereby ensuring efficient scaling and future‑proofing.
EVIDENCE
He describes pods that support three GPU generations, the ability for customers to mix GPU platforms within a pod, and the process of reconfiguring cabinets while keeping power and cooling infrastructure unchanged [692-699].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standardised GPU pods that support multiple generations of GPUs are presented in [S38]; the pod-centric, future-proof design framework is also covered in [S1].
MAJOR DISCUSSION POINT
Standardised pods for seamless multi‑generation upgrades
AGREED WITH
Peter Panfil, Jigar Halani, Srikanth Cherukuri
Argument 2
PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling
EXPLANATION
Sanjay cautions that PUE figures can be artificially improved by raising ambient temperatures, which may increase server fan power. He advocates optimizing cooling across seasonal thermal cycles and using adaptive cooling technologies to achieve genuine efficiency gains.
EVIDENCE
He explains how raising temperature lowers apparent PUE but raises total power consumption, then discusses seasonal strategies such as free cooling in winter and supplemental chillers in summer to optimise annual PUE [710-720].
MAJOR DISCUSSION POINT
Realistic energy‑efficiency metrics and seasonal cooling strategies
AGREED WITH
Srikanth Cherukuri
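Sanjay’s warning follows from how PUE is defined: server fans are metered as IT load, so a warmer setpoint can shrink the cooling term and flatter the ratio even while total draw grows. A back-of-the-envelope sketch (all kW figures are hypothetical, chosen only to make the effect visible):

```python
def pue(it_power_kw: float, cooling_power_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT power (always >= 1.0)."""
    return (it_power_kw + cooling_power_kw) / it_power_kw

# Baseline setpoint: 1,000 kW metered as IT load, 300 kW of cooling.
base_pue = pue(1000, 300)    # 1300 / 1000 = 1.30

# Warmer setpoint: cooling falls to 210 kW, but server fans spin up and
# add 120 kW to the *IT* meter. The ratio improves to roughly 1.19...
warm_pue = pue(1120, 210)    # 1330 / 1120

# ...while total facility power actually rises: 1300 kW -> 1330 kW.
```

This is why the seasonal strategies he mentions (free cooling in winter, supplemental chillers in summer) are judged on annual total energy, not on a single flattering PUE snapshot.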
Argument 3
Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
EXPLANATION
Sanjay adds that Vertiv’s off‑site prefabrication, combined with training programmes, shortens construction timelines and equips personnel with the necessary skills for AI‑factory operations.
EVIDENCE
He notes the shift from on-site to off-site prefab integration, parallel manufacturing and testing, and the ability to bring fully tested modules into the data hall, thereby condensing build-out time [810-818].
MAJOR DISCUSSION POINT
Off‑site prefab and training to accelerate AI‑factory roll‑out
AGREED WITH
Moderator, Srikanth Cherukuri
Moderator
1 argument · 176 words per minute · 1315 words · 447 seconds
Argument 1
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel
EXPLANATION
The moderator describes a partnership with Indian Institutes of Technology to deliver intensive 8‑12 week programmes that train engineers in data‑center operations and maintenance, addressing the talent shortage for scaling AI infrastructure.
EVIDENCE
He outlines the collaboration with IIT-Chennai, the curriculum covering operation and maintenance, the duration of the programmes, and the availability of these courses online for anyone to enrol [778-784].
MAJOR DISCUSSION POINT
Building a skilled workforce for AI data‑center operations
AGREED WITH
Srikanth Cherukuri, Sanjay Kumar Sainani
Audience
1 argument · 128 words per minute · 418 words · 195 seconds
Argument 1
Concern Over AI Developing Independent Consciousness and Potential Societal Impact
EXPLANATION
An audience member questions whether AI could develop its own subconsciousness, acting independently of human direction, and what implications that might have for society.
EVIDENCE
The participant asks, “What if AI get their own subconsciousness? They don’t need humans to just act” [436-437].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks associated with autonomous AI systems, including weaponisation and loss of human control, are discussed in [S35]; broader societal implications of AI-driven job changes are examined in [S36]; the notion of AI enhancing rather than supplanting humanity provides a counter-perspective in [S37].
MAJOR DISCUSSION POINT
Ethical and societal risks of autonomous AI
Agreements
Agreement Points
AI sovereignty must be paired with innovation, requiring on‑premise solutions and indigenous components
Speakers: Nitin Gupta, Sudeesh VC Nambiar, Jigar Halani
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing Indigenous Layer and Startup Collaboration Strengthen the Solution
All three speakers stress that AI sovereignty does not have to limit innovation; instead, solutions such as Google’s on-premise Data Box [10-14] and the inclusion of an indigenous AI layer together with startup partners [21-25] demonstrate a hybrid approach that keeps data within India while leveraging cutting-edge technology. Jigar further links this to sovereign data-center layers and the need for locally owned inference stacks [470-504].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on on-premise, indigenous AI aligns with India’s sovereign AI policy that calls for domestic capability building and data-center sovereignty, as highlighted in the Data Sovereignty panel at the India AI Impact Summit and NVIDIA’s partnership to foster indigenous layers [S59][S60][S52].
Rapid deployment of AI capacity requires a chip‑first, pod‑centric, modular design (“AI factories”) to achieve speed at scale
Speakers: Peter Panfil, Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
Peter outlines a three-pillar approach that begins with the GPU chip and builds modular pods for fast token generation [106-121]. Jigar echoes this by describing AI-factory pods as the fundamental building block and promoting reference designs that support multiple GPU generations [99-104][254-259]. Srikanth reinforces the need to start design from the GPU, using row-level density and digital twins to future-proof deployments [628-658]. Sanjay adds that standardized pods enable seamless multi-generation upgrades without redesigning power or cooling infrastructure [692-699].
POLICY CONTEXT (KNOWLEDGE BASE)
Modular, prefabricated pod designs are promoted as faster, lower-risk ways to build AI infrastructure, contrasting with traditional sequential construction methods (see modular construction overview) [S53] and addressing the limitations of legacy hardware not suited for AI workloads [S57].
Inclusivity and bridging the digital divide are essential for AI adoption in India
Speakers: Akanksha Swarup, Nitin Gupta, Peter Panfil
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box AI Will Become an Autonomous Background Function (like breathing) that Frees Human Capacity
Akanksha raises concerns about reaching under-privileged and rural users [4-6][49-53]. Nitin responds by highlighting Google’s free JEE mock-exam service on Gemini, illustrating a concrete step toward inclusive AI education [58-62]. Peter later emphasizes AI’s societal benefits, such as traffic-management systems that improve daily life for all citizens [430-434].
POLICY CONTEXT (KNOWLEDGE BASE)
Inclusive ICT policies and digital-literacy initiatives stress rural broadband expansion, affordable connectivity, and targeted programs for underserved groups, providing the policy backdrop for bridging the digital divide in AI deployment [S63][S64][S66][S68].
Scaling AI infrastructure demands accelerated skill‑development and training programmes
Speakers: Moderator, Srikanth Cherukuri, Sanjay Kumar Sainani
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
The moderator describes an 8-12 week partnership with IIT-Chennai to train data-center operations staff [778-784]. Srikanth highlights Vertiv’s prefabricated, modular pods and the importance of off-site integration to shorten build times while also noting training as part of the solution [751-758][811-818]. Sanjay echoes this by detailing multiple training programmes (including the same 8-12 week format) aimed at developing design, engineering and operational expertise [796-804].
POLICY CONTEXT (KNOWLEDGE BASE)
AI policy pathways highlight the need to complement human labour with upskilling and new-job creation, calling for accelerated training programmes to support AI infrastructure scaling [S51][S66].
Energy efficiency must be addressed realistically; PUE alone can be misleading and telemetry integration is needed
Speakers: Sanjay Kumar Sainani, Srikanth Cherukuri
PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling Integrating Chip‑Level Telemetry with Data‑Center Controls Enables Real‑Time Energy Optimisation
Sanjay warns that raising ambient temperature can artificially improve PUE while increasing overall power use, and proposes seasonal cooling strategies for genuine efficiency gains [710-720]. Srikanth points out the current gap where chip telemetry does not communicate with data-center management, and cites NVIDIA’s reference design that aims to bridge this gap for automatic energy optimisation [739-748].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry commentary notes that PUE is often misused as a sole metric, urging more comprehensive telemetry and realistic energy-efficiency assessments; this mirrors broader concerns about information gaps in energy-cost forecasting [S48][S50].
Similar Viewpoints
All four speakers advocate a modular, GPU‑centric pod architecture that begins with the chip and is designed for rapid, scalable deployment and future upgrades, emphasizing reference designs, digital twins and row‑level planning [106-121][99-104][254-259][628-658][692-699].
Speakers: Peter Panfil, Jigar Halani, Srikanth Cherukuri, Sanjay Kumar Sainani
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling
Both stress that AI solutions must be tailored to Indian data‑sovereignty requirements while still delivering innovative capabilities, whether through on‑premise Google Data Boxes or indigenous AI layers that protect critical data [10-14][21-25].
Speakers: Nitin Gupta, Sudeesh VC Nambiar
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing
Both highlight the need for inclusive AI services that reach underserved populations, with Nitin citing free educational tools as an example of bridging the divide [4-6][49-53][58-62].
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
Unexpected Consensus
Both large multinational tech firms (Google and NVIDIA/Vertiv) and Indian public‑sector operators emphasize the development of indigenous AI layers and local data‑center sovereignty
Speakers: Nitin Gupta, Sudeesh VC Nambiar, Jigar Halani, Peter Panfil
Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box AI Mitigates Demand‑Supply Mismatch and Bot Abuse in IRCTC Ticketing Indigenous Layer and Startup Collaboration Strengthen the Solution AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid
It is surprising that representatives from competing global vendors converge on the importance of building indigenous, sovereign AI capabilities within India, rather than solely promoting their own proprietary stacks. This shared stance underscores a broader national priority for data sovereignty and local innovation [10-14][21-25][470-504][106-121].
POLICY CONTEXT (KNOWLEDGE BASE)
Collaborative efforts between government and private sector to secure data-center sovereignty and build indigenous AI stacks are documented in panel discussions and partnership announcements, underscoring a shared policy goal [S59][S60].
Overall Assessment

The panel shows strong convergence on three core themes: (1) AI sovereignty must be paired with innovation through on‑premise and indigenous solutions; (2) rapid, chip‑first, pod‑centric designs are essential to achieve speed at scale; (3) inclusive access, skill development and realistic energy‑efficiency measures are critical for sustainable AI deployment in India.

High consensus – most speakers, across different organisations (Google, NVIDIA, Vertiv, Indian railways, government‑linked bodies), repeatedly echo the same priorities, indicating a unified strategic direction that can inform policy and investment decisions.

Differences
Different Viewpoints
Preferred design methodology for AI‑factory infrastructure – chip‑first modular pods versus row‑level, digital‑twin driven planning
Speakers: Peter Panfil, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
Peter argues that AI infrastructure should be built by starting at the GPU chip level, creating repeatable GPU-pods that are then replicated (“chip-first” approach) [106-121]. Srikanth counters that designers should move beyond rack-level density to row- or data-hall-level planning, using digital twins and bounding-box concepts to simulate and future-proof deployments, and integrate chip-level telemetry with data-center controls [665-666][739-748]. The two positions differ on what the primary design abstraction should be – individual chips/pods versus holistic row-scale simulation.
Difficulty of retrofitting existing data‑centers for AI workloads
Speakers: Srikanth Cherukuri, Sanjay Kumar Sainani
Retrofit is extremely difficult and scary for enterprises vs. Modular pod approach allows simple re‑configuration across GPU generations
Srikanth describes retrofitting AI workloads into legacy facilities as a “cumbersome” and “scary” process, noting the many regulatory and engineering hurdles involved [751-758]. Sanjay, by contrast, says that with a standardized GPU-pod you can simply re-configure cabinets to accommodate new GPU generations without major infrastructure changes, making the upgrade path straightforward [692-699]. The disagreement centres on how hard it is to adapt existing sites to AI-centric designs.
POLICY CONTEXT (KNOWLEDGE BASE)
Retrofitting legacy facilities is portrayed as challenging compared with modular, prefabricated approaches that simplify AI-focused upgrades, reflecting industry observations on legacy hardware constraints [S53][S57].
How to achieve inclusive AI access for under‑privileged and rural populations
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption vs. Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box
Akanksha explicitly asks how Google will bridge the digital divide and make AI tools accessible to under-served communities, citing the Prime Minister’s concern [49-53]. Nitin replies by highlighting a single initiative – free JEE mock exams on Gemini – as the example of inclusivity, without addressing broader rural connectivity or affordability issues [58-62]. The two speakers differ on the scope and adequacy of measures needed to ensure inclusive AI deployment.
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports call for a multi-pronged strategy, combining infrastructure investment, digital-literacy programs, and policy safeguards, to ensure AI benefits reach underserved and rural communities [S63][S64][S66][S68].
Unexpected Differences
Retrofitting legacy data‑centers – perceived difficulty versus modular simplicity
Speakers: Srikanth Cherukuri, Sanjay Kumar Sainani
Retrofit is extremely difficult and scary for enterprises vs. Modular pod approach allows simple re‑configuration across GPU generations
Both speakers are from the same ecosystem (Vertiv/NVIDIA) yet present opposite views on how hard it is to adapt existing facilities for AI workloads. Srikanth paints retrofitting as a major obstacle, while Sanjay suggests the pod model makes upgrades trivial. The contrast was not anticipated given their shared organisational background.
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast between perceived retrofitting difficulty and the simplicity of modular pod deployment is echoed in discussions of prefabricated data-center construction versus traditional on-site builds [S53][S57].
Scope of inclusivity measures – narrow educational pilot versus broader rural access
Speakers: Akanksha Swarup, Nitin Gupta
Inclusivity & Bridging the Digital Divide are Essential for AI Adoption vs. Sovereignty and Innovation Must Co‑exist; Google’s Indian Data Centers & On‑Prem Data Box (example of free JEE mock exams)
Akanksha’s question seeks systemic solutions for under‑served populations, yet Nitin’s response focuses on a single educational offering, which does not address connectivity, affordability, or multilingual access. The mismatch between the breadth of the concern and the narrowness of the answer was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates highlight tension between limited pilot-scale digital-literacy initiatives and broader systemic measures such as rural broadband expansion and inclusive ICT frameworks [S63][S68].
Overall Assessment

The discussion revealed three main fault lines: (1) the optimal design methodology for AI‑factory roll‑out (chip‑first pods vs row‑level, digital‑twin planning); (2) the perceived difficulty of retrofitting existing data‑centers versus the promise of modular pod upgrades; (3) the adequacy of inclusivity initiatives, with a narrow corporate example contrasted against a broader policy‑level demand. While participants share common goals of speed, scale, sustainability and skill development, they diverge on the practical pathways to achieve them.

Moderate – disagreements are technical and strategic rather than ideological, but they signal potential coordination challenges for policy makers and industry partners. If unresolved, differing design philosophies could lead to fragmented investments, and an insufficient focus on inclusive access may limit the societal impact of AI deployments.

Partial Agreements
All four speakers concur that AI infrastructure must be deployed rapidly and at large scale, and that a pod‑centric, modular architecture is central to achieving this. They differ on the sequencing (chip‑first vs row‑level planning) but share the overall goal of fast, scalable rollout [106-121][254-259][692-699][665-666][739-748].
Speakers: Peter Panfil, Jigar Halani, Sanjay Kumar Sainani, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid | GPU‑First, Pod‑Based, Future‑Proof Architecture; Emphasis on Rapid Deployment | Modular Pod Approach Allows Multi‑Generation GPU Upgrades and Efficient Scaling | Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry
All agree that energy efficiency is a critical concern. Peter stresses sustainable design from the chip onward, Sanjay warns against superficial PUE improvements and advocates seasonal cooling strategies, while Srikanth acknowledges PUE is often misused and calls for telemetry‑driven optimisation. They share the objective of genuine energy savings but propose different levers [106-121][166-170][710-718][739-748].
Speakers: Peter Panfil, Sanjay Kumar Sainani, Srikanth Cherukuri
AI Factories Require Speed‑at‑Scale; Start from the Chip, Not the Grid (includes sustainability) | PUE Can Be Misleading; Focus on Thermal‑Cycle Optimisation and Adaptive Cooling | Future‑Proof Design via Row‑Level Density, Digital Twins, and Integrated Telemetry (mentions PUE abuse)
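The speakers' caution about PUE follows directly from the metric's definition, PUE = total facility energy / IT equipment energy: the ratio can "improve" even as absolute consumption grows. A minimal illustration (all figures invented for the example):

```python
# PUE (Power Usage Effectiveness) = total facility energy / IT equipment energy.
# A falling PUE looks like progress, yet total energy can still rise sharply,
# which is one way the metric gets misused. Figures are invented for illustration.

def pue(it_kwh: float, overhead_kwh: float) -> float:
    """PUE as the ratio of total facility energy to IT equipment energy."""
    return (it_kwh + overhead_kwh) / it_kwh

before = pue(it_kwh=1000, overhead_kwh=500)   # PUE 1.5, total 1500 kWh
after = pue(it_kwh=2000, overhead_kwh=800)    # PUE 1.4, total 2800 kWh
print(before, after)  # the ratio "improves" while total consumption nearly doubles
```

This is why the panelists pair PUE with telemetry-driven optimisation: absolute energy figures, not the ratio alone, reveal genuine savings.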
Both highlight the need for rapid, structured training to address the talent shortage for AI‑factory operations. The moderator describes an 8‑12 week IIT‑Chennai programme, while Srikanth points to Vertiv’s broader training and prefab‑system initiatives, indicating consensus on up‑skilling as essential [778-784][811-818].
Speakers: Moderator, Srikanth Cherukuri
Accelerated Skill‑Development Programs (8‑12 week courses) with IITs to Train DC Ops Personnel | Vertiv’s Prefabricated Systems and Training Initiatives Reduce Build‑Out Time and Skill Gaps
Takeaways
Key takeaways
– AI sovereignty and innovation must be pursued together; India aims to become a global AI hub with purpose‑driven, trustworthy models (Bharat GPT).
– Google is expanding Indian data‑center capacity and offering an on‑premise “Data Box” that runs Gemini AI services, providing hardware control and data residency.
– Inclusivity is a priority – examples include free Gemini‑powered JEE mock exams and efforts to bridge the digital divide in rural areas.
– Domain‑specific AI (e.g., IRCTC ticketing) is being used to mitigate demand‑supply mismatches and bot abuse, leveraging both global and indigenous models in partnership with Indian startups.
– Building AI infrastructure requires “speed at scale”: start design from the GPU chip, use modular pod‑based reference designs, and adopt an inside‑out (chip‑to‑grid) approach.
– Future‑proof data‑center design should focus on row‑level density, digital‑twin simulations, and integrated chip‑to‑facility telemetry to optimise energy use.
– Energy‑efficiency discussions highlighted the limits of PUE as a metric and emphasized thermal‑cycle‑aware cooling, liquid cooling, and adaptive PUE management.
– Talent development is critical; Vertiv and partners are launching 8‑12‑week training programmes with IITs and other institutions to create a skilled AI‑factory workforce.
– AI is envisioned as an autonomous background function that frees human capacity, while concerns about AI developing independent consciousness were raised.
Resolutions and action items
– Google will deploy new data centres in Vizag and make the on‑premise Data Box with full Gemini AI services available to Indian customers.
– IRCTC will continue to use a hybrid AI solution – a global core model plus an indigenous layer built with Indian startups – to combat ticket‑booking bots.
– Vertiv, NVIDIA and partners will publish and promote GPU‑pod reference designs (including DSX pods) and encourage adoption of NVIDIA‑ready certifications.
– Stakeholders agreed to accelerate AI‑factory deployments using modular pod construction, digital‑twin validation and chip‑first design methodology.
– Skill‑development programmes (8‑12 week courses) will be rolled out in collaboration with IIT Chennai and other institutions to train data‑center ops and design talent.
– Google will keep Gemini‑based educational tools (e.g., JEE mock exams) free for students to improve inclusive access.
– Commitment to share learnings from US/European AI‑factory deployments with Indian teams to avoid repeat mistakes.
Unresolved issues
– Concrete roadmap and timeline for achieving full AI sovereignty (large‑scale indigenous LLMs) beyond the current small‑to‑mid‑size models.
– Specific strategies for extending inclusive AI services to remote, under‑connected rural populations beyond pilot education tools.
– Detailed plan for retrofitting existing low‑density data centres to high‑density GPU pods, including cost, regulatory and operational challenges.
– Metrics and governance framework for measuring real energy savings versus PUE manipulation, and how to standardise chip‑to‑DC telemetry integration.
– Audience concerns about AI developing independent consciousness, and its societal implications, were not fully addressed.
– Exact financial models and ROI calculations for gigawatt‑scale AI factories, especially for enterprises with limited capital.
Suggested compromises
– Balancing sovereignty with innovation – using global technology (Google, NVIDIA) together with indigenous layers and data to meet local regulatory needs.
– Hybrid deployment model for IRCTC: combine global AI capabilities with locally developed models to satisfy both performance and data‑residency requirements.
– Adopt a chip‑first design while still considering grid constraints – start from GPU specifications but keep flexibility for future power‑infrastructure upgrades.
– Use modular pod designs that can be upgraded across GPU generations, reducing the need for costly full‑scale rebuilds.
– Combine rapid deployment (speed) with future‑proofing (row‑level density, digital twins) to meet immediate demand while preserving long‑term ROI.
Thought Provoking Comments
Follow-up Questions
What are the capabilities, deployment models, and adoption status of Google’s indigenous data box solution for on‑premise AI workloads in India?
Understanding this solution is key to assessing how Indian enterprises can achieve data sovereignty while leveraging Google’s AI services.
Speaker: Nitin Gupta
How can Indian startups effectively collaborate with global technology providers to develop and deploy indigenous AI models for high‑demand services like IRCTC?
Exploring partnership frameworks and capacity‑building measures will help create locally‑tailored AI solutions and reduce reliance on external models.
Speaker: Sudeesh VC Nambiar
What specific cooling and energy‑efficiency strategies can be employed to improve PUE across India’s diverse climate zones (e.g., 10‑52 °C range)?
Tailored approaches are needed to optimize data‑center energy use in varying thermal environments, which is critical for sustainable AI scale‑up.
Speaker: Sanjay Kumar Sainani
How can chip‑level telemetry be integrated with data‑center‑level telemetry to enable automated, real‑time energy‑optimization in AI factories?
Automation of telemetry across hardware and facility layers is essential for future‑proof, energy‑efficient AI infrastructure.
Speaker: Srikanth Cherukuri
What are the most effective methods for retrofitting existing data centers to support high‑density GPU workloads without prohibitive cost or downtime?
Retrofitting is a practical challenge for many operators; research into modular, plug‑and‑play solutions can accelerate AI adoption.
Speaker: Srikanth Cherukuri, Sanjay Kumar Sainani
What comprehensive talent‑development roadmap is required to scale AI‑data‑center operations (design, engineering, ops) to meet the projected doubling of capacity each year?
A skilled workforce is a bottleneck; systematic training programs and industry‑academic collaborations are needed to sustain rapid growth.
Speaker: Dal Bhanushali (audience)
How will India’s DPDP (Data Protection) law influence the location of AI compute workloads and the domestic demand for AI infrastructure?
Policy impacts on data residency could drive significant shifts in where AI processing occurs, affecting infrastructure planning.
Speaker: Jigar Halani
Is the target of 10‑12 GW AI compute capacity in India within the next three years realistic, and what detailed roadmap (technology, financing, supply chain) is required to achieve it?
Assessing feasibility and outlining concrete steps is crucial for investors and policymakers to support the AI ecosystem.
Speaker: Jigar Halani, Peter Panfil
What roadmap and investment are needed to develop indigenous Indian AI chips to complete the sovereign AI stack?
Domestic chip development is identified as the missing layer for full AI sovereignty and requires focused research and funding.
Speaker: Jigar Halani
What measurable impact has Google’s free JEE mock‑exam offering had on educational inclusivity for under‑privileged students in rural India?
Evaluating outcomes will inform future initiatives aimed at bridging the digital divide in education.
Speaker: Nitin Gupta
How can the data‑cleaning and model‑training timelines for foundation models in India be shortened without compromising model quality?
Accelerating these phases would speed up AI development cycles and improve competitiveness.
Speaker: Jigar Halani
How can reference designs for AI‑focused data centers be standardized across multiple GPU generations to ensure future‑proofing and avoid costly redesigns?
Standardized, modular designs are essential for rapid scaling and long‑term cost efficiency.
Speaker: Srikanth Cherukuri, Sanjay Kumar Sainani
What financial and operational strategies (e.g., ‘speed to token’) can ensure a viable return on investment for large‑scale AI infrastructure deployments?
Understanding ROI dynamics is vital for sustaining capital‑intensive AI projects.
Speaker: Sanjay Kumar Sainani
Beyond educational tools, what additional AI‑driven services can Google provide to reduce the digital divide for rural and under‑served populations in India?
Identifying broader inclusive applications will help shape policies and product roadmaps for equitable AI access.
Speaker: Akanksha Swarup (question to Nitin Gupta)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

How Small AI Solutions Are Creating Big Social Change


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel, moderated by Alpan Rawal, examined how “small AI” (data-efficient, low-cost models that run at the edge and are tailored to local contexts) can generate large social impact, especially for underserved communities in the Global South [15-19]. Zameer Brey stressed that AI’s value lies in its relevance to specific settings such as district hospitals in Telangana, smallholder farmers in Zambia, or classrooms in rural Senegal, warning against designs that ignore users’ environments [27-34]. Aisha Walcott-Bryant described Google Research Africa’s “Africa-for-Africa” approach, highlighting problem-first projects like continent-wide weather forecasting that compensate for the scarcity of radar stations [50-57] and the creation of open voice-data sets for 27 African languages to enable edge-ready models on laptops and tablets [61-65]. Wassim Hamidouche outlined Microsoft’s AI for Good Lab, citing two open-source small-AI systems: SPARO, a solar-powered acoustic sensor network for biodiversity monitoring in remote areas [83-88], and Alert California, a 1,300-camera network that detects wildfires early using on-device AI [91-97]. Antoine Tesnière explained that health care already relies on validated small-AI tools for radiology, dermatology and ophthalmology, many of which can operate offline on modest hardware and complement rather than replace clinician judgment [102-109]. Illango Patchamuthu of the World Bank framed AI as a means to reduce poverty, arguing that simple, low-resource models are easier to scale across villages, and that replicating successful pilots requires clear KPIs, trust-building and partnership with multilateral institutions [111-124][243-267].
The discussion also addressed technical challenges: low-resource languages suffer from data scarcity, limited benchmarks, performance gaps and safety-alignment issues, prompting Microsoft to target pilot languages (Inuktitut, Chichewa, Māori) and launch the “Lingua Africa” initiative with $5.5 million funding for data collection [181-210][220-228]. To improve reliability, speakers advocated for verifiable “glass-box” models, domain-specific data collection and continuous pre-training, noting that small models can achieve near-human accuracy in targeted tasks such as community health-worker decision support [153-166][233-238]. Consensus emerged that small AI is not a second-class technology; when designed responsibly and deployed with local ecosystems, it can accelerate development outcomes, create jobs and complement larger foundation models rather than compete with them [120-126][244-250]. Participants highlighted the importance of digital literacy, upskilling and STEM education to build a cadre capable of developing and maintaining small-AI solutions, especially where three billion people remain offline [350-353]. The moderators concluded that competition among platforms should be viewed as healthy and complementary, with the ultimate goal of delivering trustworthy, context-appropriate AI to end users rather than determining a single “winner” [386-393].


Keypoints


Major discussion points


What “small AI” means and why it matters – The moderator frames “small AI” as data-efficient, low-cost models that run at the edge and are tailored to local contexts rather than generic, large-scale foundation models [15-19]. Zameer reinforces this with a traffic analogy, arguing that solutions should be “smaller, faster, sharper, cost-effective” for the environments they serve [32-34].


Concrete small-AI projects across sectors


Google Research Africa builds continent-specific weather-forecasting tools and releases a multilingual voice dataset, emphasizing open-weight models that can run on laptops or tablets [50-55][60-65][144-145].


Microsoft AI for Good showcases SPARO (solar-powered acoustic monitoring for biodiversity) and Alert California (camera network for early wildfire detection), both open-source and deployable worldwide [82-98].


Low-resource language work at Microsoft targets languages such as Inuktitut, Chichewa and Māori, launches the “Lingua Africa” data-collection fund, and partners with the Gates Foundation to support African language models [181-190][210-218][219-228].


Healthcare small-AI in France uses validated, offline models for radiology, dermatology and ophthalmology, stressing that AI only augments clinician decisions [102-108][300-306].


The central role of partnerships, community involvement and open resources – Aisha notes that the African voice dataset was co-created with local partners and that open models enable “partnership-led” solutions [60-65]. Illango (World Bank) stresses replicating proven pilots, building an AI use-case repository, and collaborating with NGOs, governments and tech firms to scale impact [111-120][258-262]. Microsoft’s language initiatives rely on community data collection through the “Masakani African Languages Hub” [210-218][219-226].


Key technical and deployment challenges, and proposed strategies – Wassim outlines four hurdles for low-resource languages: data scarcity, lack of benchmarks, performance gaps, and safety/alignment issues [181-200]. Zameer highlights the need for “verifiable AI” that reduces black-box errors, citing a maternal-health case where a small on-device model could have saved lives [153-166][167-174]. Antoine points out limited, siloed health data and the necessity of efficient, offline algorithms that run on modest hardware [280-298]. Across the board, speakers recommend domain-specific data collection, open-weight models, and edge-native deployment to overcome these barriers [233-238][280-298].


Future outlook and policy implications for development – Illango envisions AI as a catalyst for job creation, stressing digital-literacy, up-skilling and a robust private-sector ecosystem; he also announces a publicly accessible AI use-case repository [243-262][268-272]. The moderator and panelists caution against a zero-sum “AI wars” narrative, emphasizing healthy competition and context-driven relevance [384-393][394-398].


Overall purpose / goal of the discussion


The panel was convened to explore how “small AI” (data-efficient, locally-adapted models) can generate tangible social impact for underserved and rural communities, especially in the Global South. Each speaker shared organizational experiences, highlighted non-foundation-model approaches, and discussed how to scale such solutions responsibly.


Overall tone


The conversation remained professional, collaborative and optimistic, with speakers celebrating successes (e.g., open-source biodiversity tools, multilingual datasets). When addressing reliability, safety and scalability, the tone shifted to a more cautionary, problem-solving stance, underscoring the need for rigorous validation and community trust. Throughout, the dialogue stayed constructive, focusing on partnership-driven pathways rather than competition.


Speakers

Illango Patchamuthu – World Bank Group Director of Strategy and Operations, Digital and AI Vice Presidency; Acting Director for Data and AI [ S1 ]


Announcer – Event announcer/moderator who introduced the panelists [ S2 ][ S3 ][ S4 ]


Alpan Rawal – Chief AI / ML Scientist at Wadhwani AI; moderator of the panel [ S5 ]


Aisha Walcott-Bryant – Senior Staff Research Scientist and Head of Google Research Africa, Google [ S7 ][ S8 ]


Antoine Tesnière – French Professor of Medicine, entrepreneur, anesthesiologist at Georges Pompidou European Hospital; co-founder of ILEMENTS; Director of PariSanté Campus [ S10 ]


Zameer Brey – Panelist (organization not specified in the transcript) [ S12 ]


Wassim Hamidouche – Principal Research Scientist, AI for Good Lab, Microsoft (specializing in computer vision, NLP, multimodal AI, low-resource languages) [ S14 ]


Audience – Various participants from the public; no specific titles or roles mentioned


Additional speakers:


Neha Butts – Associate Director, Human Resources (mentioned at the close of the session)


Selena – CEO and Co-founder of Zindi, runs competitions to develop AI models in Africa


Irish Kumar – Representative from the CSC Winnie Ocean Center on solar energy (asked a question during the Q&A)


Dr. Ravi Singh – Participant from Miami who posed a question about platform competition


Full session reportComprehensive analysis and detailed insights

The panel, moderated by Dr Alpan Rawal, opened by defining small AI as data-efficient, low-cost, edge-native models that are built for specific local contexts rather than generic, large-scale foundation models. Rawal emphasized that relevance to the end-user’s environment is the key criterion for impact [15-19]. This definition set the tone for the discussion, prompting each panelist to illustrate how their work embodies these principles.


Zameer Brey reinforced the need for context-appropriate solutions with a vivid traffic analogy, arguing that, just as Delhi’s congestion would never justify an aeroplane for short trips, AI should be “smaller, faster, sharper, cost-effective” and suited to the specific setting [32-34]. He warned that designers often focus on benchmark performance without considering how a model fits into a district hospital in Telangana, a smallholder farm in Zambia, or a rural classroom in Senegal [29-31]. After noting the importance of “verifiable” or “glass-box” AI, Zameer cited a World-Bank study showing 50 % diagnostic accuracy across five common conditions in eight countries, underscoring the gap that reliable on-device models must close [409-410].


Domain-specific small-AI projects

Google Research Africa highlighted two flagship initiatives. First, the team built a continent-wide weather-forecasting system that compensates for Africa’s severe radar shortage – only 37 stations compared with roughly 300 in North America and Europe [55-57]. By innovating around this constraint, they delivered more accurate forecasts for rain-fed agriculture, a critical need for millions of smallholder farmers [50-54]. Second, they released an open multilingual voice dataset covering 27 African languages (out of an estimated 2 000), enabling “partnership-led” development of edge-ready models that run on laptops or tablets [61-65][144-145].


Microsoft’s AI for Good Lab presented two open-source, globally deployable tools. SPARO (Solar-Powered Acoustic and Remote Observation) combines solar-powered cameras with an HAA model to detect animal species in remote habitats, transmitting data via satellite where infrastructure is lacking [82-88]. Alert California operates a network of 1 300 cameras with on-device AI that detects early wildfire signatures, allowing rapid emergency response [91-97]. Both solutions exemplify small AI that is cheap to run, edge-deployable, and openly shared for reuse worldwide [84-88][96-97].


Antoine Tesnière’s health-innovation ecosystem illustrated healthcare applications, noting that validated small-AI tools already support radiology, dermatology and ophthalmology analyses on modest hardware, providing information to clinicians while preserving human decision-making [102-108][300-306]. He stressed that data in health is often scarce and siloed, requiring data-efficient algorithms that can operate offline on smartphones or simple computers, especially in low- and middle-income settings [280-298][332-340]. Antoine clarified that these models are better than current practice, and that the combination of algorithm + human decision-making is the most effective tool, rather than claiming they outperform clinicians outright [311-313][332-340].


Wassim Hamidouche described work on low-resource languages, identifying four systemic challenges: dominance of English in internet data (>60 %), a dearth of benchmarks (only ~300 languages have any, most limited to translation tasks), a performance gap between high- and low-resource languages, and safety-alignment work that is largely English-centric [184-200]. To address these, Microsoft is piloting models for Inuktitut, Chichewa and Māori, achieving a 12 % performance gain through continual pre-training and instruction fine-tuning [210-218]. The “Lingua Africa” initiative, funded with US$5.5 million in partnership with the Gates Foundation and the Masakani African Languages Hub, will support data collection for ten African languages, extending the earlier “Lingua Europe” programme [219-228]. He also emphasized that speech-to-text and text-to-speech are essential for many low-resource languages, where written corpora are limited [416-418].


Technical and reliability challenges

Zameer argued for “verifiable” AI, insisting that models used in critical health contexts must approach zero error and provide auditable logic chains to prevent catastrophic failures [160-165]. Antoine offered a counterpoint, acknowledging that while current small-AI models are not 99.999 % accurate, they already improve over existing practice and are acceptable when combined with human oversight [300-311][332-340]. This tension highlighted a broader disagreement on the acceptable error tolerance for health-focused small AI.


Wassim advocated shifting from generic data collection to domain-specific, use-case-driven pipelines, arguing that such focus improves reliability for applications in agriculture, education and health [233-238]. He also noted the importance of speech technologies for low-resource languages [416-418].


Partnerships, co-creation and open resources

All speakers underscored the necessity of multi-stakeholder collaboration. Illango Patchamuthu of the World Bank described AI as a means to reduce poverty, insisting that simple, low-resource models are easier to scale and must be replicated through clear KPIs and trust-building with NGOs, governments and local communities [111-124][115-124]. He announced an open-access AI use-case repository containing about 100 curated examples in health, education, agriculture and job creation, hosted on the World Bank platform [258-262][268-272]. Illango explicitly stated that “small AI is not inferior, it is not second class.” [411-413] He added that once legal issues are resolved, anyone will be able to submit use-cases to the repository, subject to filtering [414-415]. Aisha echoed this collaborative ethos, noting that the African voice dataset was collected with partners across the continent and that open-weight models such as Gemma enable edge deployment [60-65][144-145]. Microsoft’s language initiatives similarly rely on community-driven data collection through the Masakani African Languages Hub [210-218][219-226].


Scalability and development impact

Illango highlighted the importance of turning pilots into plug-and-play solutions that can expand from a single village to larger regions, citing ongoing projects in Uttar Pradesh and Maharashtra that aim to improve agricultural productivity, market access and credit [117-119]. He positioned AI as a catalyst for job creation, arguing that small AI should augment-not replace-employment, and that building digital literacy, up-skilling and STEM capacity is essential for emerging economies [350-353][244-250].


Audience interactions

During the Q&A, Irish Kumar asked how small AI can support youth, agriculture and renewable energy. Illango responded that digital-literacy programmes, up-skilling initiatives, and sector-specific pilots in India are already being rolled out to empower young people and promote sustainable agriculture and clean-energy solutions [400-402]. Selena questioned the technical feasibility of using open-source, open-weight LLMs for low-resource languages. Wassim explained that selecting a strong multilingual base model, augmenting it with monolingual or bilingual data, and leveraging speech-to-text/text-to-speech pipelines are key to achieving good performance [403-405]. Dr Ravi Singh raised the “AI wars” concern. Alpan affirmed that healthy competition drives innovation, Illango reminded that three billion people remain offline-leaving ample space for diverse AI approaches-and Wassim emphasized that collective effort across sectors is essential to avoid fragmented development [406-408].


Future outlook and policy considerations

Rawal concluded that competition among platforms is healthy and not a zero-sum game; the “winner” will be the solution that best fits the user’s context [384-393][386-392]. Illango reinforced this view, noting the large offline population as an opportunity for inclusive AI [394-398]. The panel collectively agreed that small AI is not a second-class technology; when responsibly designed, it can fast-track development outcomes and complement larger foundation models [120-124][15-19].


Action items and unresolved issues

– The World Bank will maintain and expand the open-access AI repository, and will open submissions to external contributors once legal clearance is obtained [258-262][414-415].


– Microsoft will roll out the Lingua Africa initiative to fund domain-specific data collection for African languages [219-228].


– Google Research Africa will keep releasing open-weight models and multilingual voice datasets, supporting edge deployment [144-145].


– SPARROW and Alert California will remain open-source for global adoption [84-88][96-97].


– All participants committed to prioritising domain-specific data, co-creation with local partners and scaling pilots into reusable modules [233-238][115-124][259-262].


Remaining challenges include achieving near-zero error rates and verifiable audit trails for critical health applications, establishing robust benchmarks and safety evaluations for low-resource languages, ensuring affordable edge hardware for the poorest populations, and defining concrete digital-literacy and up-skilling programmes at scale. These issues were identified as priorities for future collaboration and research.


The discussion demonstrated strong consensus that lightweight, context-aware AI, built through open-source practices and local partnerships, can deliver meaningful social impact while complementing, rather than competing with, large foundation models. The panel’s insights chart a clear pathway toward inclusive, trustworthy AI deployment in underserved regions. Alpan invited Neha Butts to hand out mementos and requested a group photo to close the event [419].


Session transcript
Complete transcript of the session
Announcer

Please, I would request you to take your seat on the panel. Wassim Hamidouche, who’s a principal research scientist at Microsoft’s AI for Good Lab, specializing in computer vision, NLP, and multimodal AI with a focus on low-resource languages. Requesting you to please take your seat. Illango, who’s a World Bank Group Director of Strategy and Operations in the Digital and AI Vice Presidency, and also serving as Acting Director for Data and AI. Requesting you to please join the panel. Thank you. Aisha Walcott, who is a senior staff research scientist and head of Google Research Africa, focused on AI development addressing the continent’s most pressing challenges. She holds a PhD in electrical engineering and computer science and holds leadership roles in the IEEE Robotics and Automation Society.

Requesting you to please join the panel. Antoine Tesniere, who’s a French professor of medicine and entrepreneur, specializing in health innovation and crisis management, and an anesthesiologist at the Georges Pompidou European Hospital. He co-founded ILEMENTS, coordinated France’s national COVID response, and since 2021 has served as director of PariSanté Campus. Thank you so much for being here. Requesting you to join the panel. And Dr. Alpan Rawal, who’s chief AI/ML scientist at Wadhwani AI, will be moderating today’s session. Alpan, requesting you. Thank you, handing it over to you.

Alpan Rawal

Yes, thank you everyone for coming. Requesting those at the back, if you could close the door so that we can reduce the noise a little bit. It’s full? Okay, great. Well, if you could just calm down a bit and settle down. Thank you. Welcome to all our esteemed panelists. The topic of our panel, as you know, is small AI for big social impact. I’d like to deeply thank our panelists for making it all the way for the summit and making it to this panel. So what do we mean by small AI? I think different people have different definitions, and we are open to how each panelist chooses to interpret small AI. When we at Wadhwani AI brainstormed about this panel, we thought it would reflect in some ways the ethos of our own work: making models that are data efficient, that are cheap to run, that sit on the edge, and most importantly are meaningful to the communities that we serve, which are underserved communities, mostly in rural India.

But it’s increasingly clear that small AI means a lot more, and I see a lot of people talking about small AI at the summit. More generally, I think it encapsulates any AI that meaningfully impacts individuals while taking into account and respecting their very local context, rather than providing generic outputs. So anything like that could rightly be called small AI, and we’re going to hear from our panelists about their experiences with AI models like that. So with that small introduction, without further ado, let’s speak to our panelists. Can you hear me at the back? Yeah, okay, so we can start. I have a common question for every panelist. Each of you represents a different and important aspect of AI work that’s happening outside of the mainstream excitement that focuses on large foundation models for a primarily global north audience.

Can you tell us briefly about your organization’s work and perhaps your thoughts on non-foundation AI models in general? Maybe we can start with Zameer.

Zameer Brey

Thanks, Alpan. We really see the opportunity for AI to reduce inequality, and our starting point with AI tools is really: does this work, for whom, where, and at what scale? Those are some of the departure points for us. So we look beyond the model against a benchmark: how is this going to work in a district hospital in Telangana, or for a smallholder farmer in Zambia, or a classroom in rural Senegal? Part of what we’ve in some ways got caught up with is the performance of the model on its own, and we’ve forgotten how this fits into the lives and the context that it operates in.

And in doing so, part of what we need to think about is who’s designing the model and what’s it designed for? I was thinking about the traffic that we’ve been experiencing the last few days in Delhi, and I thought to myself, would anyone, given the traffic here, design something so big as an aeroplane to try and get across the city? No. I think we would design something that’s a lot smaller, faster, sharper, cost-effective, and gets us from point A to point B.

Alpan Rawal

I think that’s a great analogy. Aisha, can you tell us a bit about your work at Google Africa?

Aisha Walcott-Bryant

Yes, thank you. So I lead our Google Research Africa team. We have two sites, one in Ghana and one in Kenya, so representing East and West. But the work that we do is essentially from Africa, for Africa and the world. Much of our work is scaling from the uniqueness of the continent; it turns out that a lot of the challenges are similar, definitely across the global south and generally worldwide. So, leaning into the next part of your question and how we approach this type of work: it’s very much problem-first. I always say, if there’s a red button that you can press, and it’s a one or a zero, just build the red button.

We don’t need to bring AI or technology. So it’s really important to be very thoughtful about the type of problem. Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is societal impact at scale, to think about the types of problems that we work on. I’ll give two good examples of those problems. One is around weather nowcasting, which we launched last year across the continent of Africa. Having much more accurate weather forecasts is absolutely essential, given that much of the continent, as well as India, relies on agriculture for labor, and farming is primarily rain-fed, 95% in Africa.

And at the same time, on the technical challenges side, we know in North America and in Europe there are about 300 or so weather radar stations, and in Africa there are only 37, I believe, and you can fit both North America and Europe in Africa. So when you think about that, you have to innovate. Those constraints of the environment that you were alluding to in the intro are part of the motivation for having a research team on the continent, and that was one way that we innovated and made solutions that were available to the continent.

And then the other one is a complementary side, which is working with the ecosystem, working with partners in Africa, including Makerere University in Uganda and Digital Umuganda, around African languages. And we just released a data set of 21, now 27, voice languages, given that Africa has 2,000 or so languages. This is the start. Most importantly, it’s partnership-led and driven, and because it’s voice, it is about accessibility and about reaching those rural villages as well, and about enabling the ecosystem to build the solutions from there, whether they’re smaller models or larger models. So making that type of data open and available is another way that we are leveraging this notion of smaller AI.

Alpan Rawal

Thank you. Thank you. Great, great insights. Zamir, can you tell us about your work at Microsoft?

Wassim Hamidouche

Yeah, thank you. Sorry, it’s Wassim. Thank you for the invitation; it’s a great pleasure to be here today. So first, what is AI for Good? The AI for Good Lab is the philanthropic research lab of Microsoft. We are employing advanced AI technology to solve real-world problems with real societal impact. This is very important. And how do our team and the researchers work? We closely collaborate with NGOs, governments, nonprofit organizations, and local communities around the world, and together we are building AI solutions in multiple domains: agriculture, food security, healthcare, education, culture, and so on. So this is about the AI for Good Lab at Microsoft. Now, I am a scientist, so I would like to give you two concrete examples where we use small AI; they are also two global solutions to tackle global challenges.

They are valid for both the global north and the global south. The first project, in biodiversity, is called SPARROW: Solar-Powered Acoustic and Remote Recording Observation Watch. It is an AI-powered, open-source solution designed to track and monitor biodiversity in the most remote and hard-to-reach regions in the world. SPARROW is a camera trap with AI models that detect animal species, and these observations are then transmitted using wireless connectivity and satellite where we don’t have infrastructure to transmit this information. The SPARROW solution is already deployed in many countries around the world; I can cite Colombia, Peru, the United States and Tanzania. It really enables practitioners and researchers to understand the species present and the ecosystem

at scale, supporting more timely and informed decisions to protect biodiversity. The second project focuses on wildfires. As you know, wildfires have become a real global threat, with devastating impacts on lives, communities, ecosystems and even economies. Around the world, wildfires are increasing in both frequency and intensity, making early detection and rapid response more critical than ever. Through Alert California, we are addressing this challenge using AI. So what is Alert California? Alert California is a network of 1,300 cameras operating 24/7, and we are developing AI tools that run on top of this infrastructure, enabling early fire detection so that emergency responders can act quickly and stop fires before they spread.

So SPARROW and Alert California, as I said, are two global solutions for global problems that can be deployed anywhere around the world, and we are providing them open source so that anyone can embrace and deploy them. Thank you.

Alpan Rawal

Thank you. Antoine, I think you’re the only member of this group that doesn’t work in the global south. So could you tell us a bit about the work in Paris and how you’re using maybe non-foundation models?

Antoine Tesniere

Thank you, Alpan, for this invitation, and I’m happy to be the outsider of the panel. I’m working in healthcare, and I’m leading a new kind of innovation ecosystem for healthcare where we gather researchers, doctors, patients, startups and industrials together, as well as institutions. The idea is to really create a whole community of innovation and engage in the use of data and artificial intelligence. Healthcare is probably one of the fields where AI has a long-standing history. The world discovered AI with the rise of Gen AI, but there were a number of small AI models designed long before, and this is why we already have a number of validated tools that we can use in healthcare, answering the question not only “does it work” but “is it reliable”, which is very important for our patients. So before we have proof of the efficiency of LLMs in the medical field, which is not fully clear yet, we use machine learning tools, which are actually small AI models. A very specific area where they actually work really nicely today is image analysis, or pattern analysis. You can think of radiology, for example: chest X-rays or fractures in the emergency room are fully analyzed by small AI models and small AI tools that are easily deployable on small computers. You can also think of picture analysis in dermatology, in ophthalmology, etc. So these are very concrete examples of already validated small AI models in healthcare that are used on a daily basis, at least in France and Europe. We’ll get back in the discussions to how we need data efficiency on this topic, but it’s really important to understand that these models are already deployable, and some of them can actually work offline, which is really important in some environments. Thank you.

Alpan Rawal

Illango, from a World Bank perspective, what is your view on these types of models?


Illango Patchamuthu

Thank you very much for the opportunity to be here. Coming right at the end, I don’t know what new things I can say, but I will basically reinforce the messages that have been said. For us at the World Bank, we see AI as a means to an end, and very much of our AI agenda is shaped by the mission of the World Bank, which is to reduce poverty and grow prosperity in the world. And when you take that lens and you apply it, we have to keep it simple. Not all countries have the ability to have the compute power, the electricity, the talent, and the data. So therefore, taking tested small AI applications to scale and replicating them around the world is something that we see as a mission priority.

So in that respect, what Wadhwani AI is doing here is pioneering, and what I’ve heard this morning from Dr. Sunil Wadhwani himself about what you’re doing in TB and in out-of-school children, this is all tremendous, and it has great potential for application. And often what happens is we focus a lot on pilots, and then once the sheen wears off, people forget the pilot. What we need to do, and what we are doing at the World Bank, is to take those pilots, whether in health, education or agriculture, in the small AI setting, that work in rural communities, offline, where data is not that rich and talent is not readily available, that don’t require a lot of electricity and are plug and play, and then get the right KPIs, which allows us to go from a village, to a community of 50 villages, to a larger population center, and to see how best we can help them, say in agriculture, to improve productivity through better inputs. We are now working in UP in partnership with Google, and we are doing the same thing in Maharashtra.

Household income improves as the inputs get better, and we see how farmers can access markets and agriculture credit. Similarly in health and education, there are great practices that we are seeing in Africa, in Ghana, in Kenya. So how do we take these models and replicate them? I’d like to assure everybody, because this is something people think: small AI is not inferior. It’s not second class. Small AI can solve problems. It’s a means to an end. And it can actually fast-track development outcomes. We’ve known the problems with the Millennium Development Goals, we’ve known the problems with the Sustainable Development Goals, and many countries are lagging behind. And this is an opportunity where this development technology, if it can be put to use in the right context in the right way, I think we can achieve faster

Alpan Rawal

Thank you. That’s really interesting. Aisha, I’m going to come back to you and ask you more specifically: how does the work that Google Research does in Africa impact rural communities? How does one bring the benefits of technologies like these big foundation models to devices that may only have patchy Internet and very little data?

Aisha Walcott-Bryant

Thanks, that was a loaded question; two parts there. So first and foremost, in general, just approaching these challenges with humility and relating. I always start with: I’m a scientist, but I’m also a mother, right? And that’s a thread that I’ve been following for a long time, and that’s a thread that binds so many of us. When you think of that, you also realize that a lot of the solutions that we’re building are not for “them”; they’re for us. I’m using the same health systems that you all are developing interesting tools and models for, and we have many of the challenges around weather as well. So I think the first thing is to have that base human layer as we think about our work, and to connect with those communities, whether they’re rural or urban, right?

A lot of the work that we do, we’re looking at these large populations. If you think about agriculture, for example, where it has a large part of the labor force, there are many different ways that people are part of that value chain, whether they’re actually doing the growing or providing the inputs or making those decisions and taking the risks along the way. So that relationship of getting out in the community is a very important part of the work that we do, to connect with those communities, and then really think about, coming home to Google Research, where is our unique value proposition? We’re not necessarily going to solve this whole problem alone; usually it requires behavior change, policy, and many pieces of the puzzle. How do we best fit our role? And we do this in co-creation with partnerships. So that’s kind of the second layer of fabric on how we reach these rural communities. And then on the other side

Alpan Rawal

Do you have an example of that?

Aisha Walcott-Bryant

Oh yeah, absolutely. I’ll do it two ways. So, for example, the languages work that I was talking about: Waxal. Waxal is a Wolof word, a Senegalese word, that means “to speak”. And the way we wanted to create this data set was together with the community.

So if you have partners who are across the continent, let them be a part of the process of collecting the data, of understanding their language and their local context to get these high quality data sets. And so I think being partnership driven and knowing our role and our place was what was very successful for that. And then the last point I’ll just say on the second question that you threw in there is really our open models, our open weight models, Gemma, are made for a lot of these solutions that are more closer to the edge. So we have nano models that can run on your laptop and tablets and so forth,

Alpan Rawal

Do you actually use them in Africa?

Aisha Walcott-Bryant

Oh, yes. Yes, yes, yes, yes.

Alpan Rawal

Great. Thank you. The next question is for you. Much of your work at the foundation is about reducing inequities through promoting safe and responsible use of AI. So what role, in your view, do small and custom AI models have to play in this? And if you can provide examples, that would be great.

Zameer Brey

Sure, Alpan. I do want to just touch a little on the issue of reliability, because my colleague over here spoke about it, and I think it’s a critical issue. I’m sorry if I’m going to repeat this example from one of my previous panels, but I asked the audience, and because I’m going to be on a plane later it’s a bad idea, but I asked the audience anyway: if I said to you, the plane has a high probability of leaving Delhi and landing safely wherever it’s going, and that probability was 90%, would you get on that flight? 95%? 99%? No? I did have one guy who kind of thought about it, and he said no.

And the point there is that I do think we’ve got to work towards models that have zero error, right? So much so that what we are trying to wrap our heads around is: is there a concept of verifiable AI, where it shifts the narrative from a black box to a glass box? It actually exposes the logic, so for a particular set of inputs you can follow the logic chain, and it gives you a set of outputs that you can really track. You can audit. You can see that it’s repeatable. And you can prevent some of the fundamental errors that we start to see. And I want to go back to a very real example, Alpan, because when I think about small models, I’m coming back to the user, the community health worker that tried to help a mother.

One of our grantees shared this very personal story of a first-time mother who presented when she was six months pregnant, and she said her hands and her feet had started to get swollen. And the community health worker looked and said, you’re pregnant, this is normal. Four weeks later, she started having a headache and blurred vision. I think colleagues will know where the story goes. Unfortunately, that mother had severe gestational proteinuric hypertension. It was missed, and the mother and the baby didn’t make it. But in that moment, what inspired our grantee was: if the community health worker had had a small model that worked on her device, which was a low-cost smartphone with patchy internet, but was built small enough to help her to make good decisions at that point of care.

Today, we actually would be sitting with a very different outcome. And so I think small models present us those

Alpan Rawal

Very interesting, very good points. Wassim, you spoke in a general sense about the research done at the AI for Good Lab at Microsoft. Are there specific examples from your work where you see the benefits of building domain-specific models to realize impact? And are there research lessons that we can take away from this? I think it would be good for the audience and us to understand what research directions of the future can come out of this work.

Wassim Hamidouche

models from 4 billion to 15 billion parameters. And once we select the best LLM for one target language, we apply all these recipes to boost the performance for these low-resource languages. But I wanted to get back to all the challenges we are facing for these low-resource languages. When we train these foundation models, we train them on internet data, and more than 60% of internet data is English, followed by some high-resource languages like French, Mandarin, Portuguese, etc. So these low-resource languages, even though there are more than 7,000 of them, represent only a tiny portion of internet data. This is the first challenge. The second one is benchmarks. When we build LLMs, we evaluate their performance on benchmarks.

And we have seen that there is at least one benchmark for only about 300 languages, so more than 6,000 languages don’t have even one benchmark. Even among these 300, most of the benchmarks are just translations from English into the low-resource language; they have nothing to do with the culture and the context of these languages. The third challenge is the performance gap: of course, there is a performance gap for these LLMs, even the frontier models, between high-resource and low-resource languages. The fourth one is safety. When we build LLMs, usually we do some safety alignment with reinforcement learning, but this safety work is mainly done in English and some high-resource languages.

Now, when we build LLMs for low-resource languages and they become very strong in those languages, it raises other issues with safety: we have to evaluate these LLMs for safety in those languages and do all the alignment, the reinforcement learning, in the target language as well. In this project, we addressed some of these issues.

We have been targeting three pilot languages: Inuktitut, an indigenous language spoken in the north of Canada; Chichewa, in Malawi in Africa; and Maori, in New Zealand. Why did we select these three languages? Because we have access to local communities to help us get data. So we gathered data from these communities, then we used continual pre-training and instruction fine-tuning to boost the performance of open-weight LLMs, and we were able to gain 12% in performance, closing the gap with English. So what’s next? We are trying to expand this to more languages. We have a collaboration, for example, with Paraguay to develop an LLM for Guarani, and we want to extend this to other languages.

But most importantly, we have launched an initiative to help communities get the best out of their languages. We had a project called Lingua Europe to fund data collection for 10 languages in Europe. It was released last September and it was very successful: we received many applications, and 10 were selected, and now we will start working with them. It was so successful that we are now extending this initiative to Africa through Lingua Africa, which was announced just today at the AI Summit. We will be allocating 5.5 million to support data collection for African languages, in partnership with the Gates Foundation, Microsoft AI for Good and FCDO.

And this initiative will be led by the Masakhane African Languages Hub.

Alpan Rawal

Sorry, just to follow up. For people who are working on these small language models, or domain-specific language models, say for the healthcare domain or some other domain: are there strategies that they should pursue that you can recommend?

Wassim Hamidouche

Yeah, this is very important, and this is also related to the call for Lingua Africa, because many efforts have been made in the past to collect general-purpose data. Now we have enough general-purpose data, I think, but when we evaluate the performance of these AI tools on specific applications, for example healthcare, education, agriculture, they don’t work as we want, as expected. So what we want today, instead of focusing on general data collection, is to focus on domain-specific, application-specific, use-case-specific data collection and on building AI tools for specific domains. Then, at least with these reliability issues addressed, we will have a model that performs well in the target low-resource language, in that application, that we can deploy and that can be used by local communities.

This is really a priority for the next…

Alpan Rawal

Thank you. Illango, let me come to you. You have vast experience in international development. Can you give us a view of the future as it relates to using AI for developmental goals? Do you think AI will have a meaningful role to play in the transition of emerging economies to advanced economies?

Illango Patchamuthu

So I do think the prospects are good, and our North Star is job creation. We need to support countries so that AI doesn’t automate jobs away, but actually supports the creation and enhancement of jobs. And this is where small AI becomes imperative, unlike the foundation models, which will have implications. The second question is how we are going to go about it. In some sense, whether it’s large language models or small AI solutions, you need an ecosystem, and that ecosystem needs to be powered by the local private sector. And often what we see, even now, whether the AI revolution is before us or not, is that small enterprises, whether in the SME space or in the larger space, struggle for a variety of reasons.

And if countries don’t reform business processes and make permitting easier, which AI can do, AI is not going to play an effective role. So there are some fundamental reforms needed, and this is where some of the foundational investment in DPI, the digital public infrastructure, needs to happen: to create that ecosystem, and the ability for the ecosystem to then work with the private sector and the local communities to create those jobs. And this is what we are seeing everywhere. If that happens, and here too, you see this whole vibrancy around the startup ecosystem. Why? Because the young people see opportunities, and this momentum can be driven everywhere in the world.

Whether it be in India, the rest of South Asia, Africa, Latin America or even the Pacific region. So how do you go about it? What we did was join hands with a number of multilateral development banks, and in the last couple of days we launched this small AI use-case repository. It’s a good 100 cases. It explains, in health, education, agriculture and job creation, how AI can be leveraged to the maximum advantage of communities, in terms of service delivery, productivity gains and household income gains. All this eventually leads to better jobs, better employment and better income prospects. So we are very much upbeat about small AI, but I do take the point about community trust.

Once it fails, the community is not going to believe in it. So it’s very important that, whatever we put in place, working with partners including the MDBs, Microsoft, Google, Gates and everyone, we ensure that whatever we leave behind in small communities is trustworthy and reliable, and doesn’t, at the end of the day, hallucinate and give the farmer something that leaves him struggling with other challenges. Thank you.

Alpan Rawal

So this report you mentioned, is it open access?

Illango Patchamuthu

We are hosting it on the World Bank site. It’s called the AI Repository; just type it in and you’ll be able to access it. It’s got 100 use cases, and we’ll continue to update it. Once we’re able to sort out some legal issues, we’ll also allow anyone to submit their use case into the repository; obviously we’ll go through a filtering process to ensure that the right ones are there.

Alpan Rawal

Great. Thank you. Antoine, coming to you. You have an organization that uses AI to advance health outcomes through research and commercialization. Are data-efficient and hardware-integrated AI models important for the work that’s happening at Parasante? And do you see these models as potentially being deployed in low- and middle-income countries like India?

Antoine Tesniere

Yes, so clearly they are very important for us, for different reasons. Of course, we’ll get back to scalability and use in low- and middle-income countries. But first, the reality in healthcare is that data is scarce and siloed, so you need to work with what you have. Sometimes it’s a large data set; sometimes it’s a very small one. But you need tools that allow you to build relevant algorithms and relevant analyses on small data sets. In the meantime, of course, we’re building larger data sets. Sometimes it’s at the level of one department in one hospital, sometimes one hospital, sometimes a group of hospitals.

In the end, what we are building in Europe is a large European health data space: 450 million citizens joining their health data in digital public infrastructure organized across 27 countries, which will be a world first. But in the meantime, we need to work with the reality of scarce data. The second thing is that not only is data limited, but when you want to enter the new revolution in medicine, what we call precision medicine or personalized medicine, you need very efficient algorithms, because they need to adapt to one person and not only to a whole population. So you also need to take that into account when building the algorithm. The last thing is that you also have to work with what exists in healthcare systems, which is often not supercomputers or high computing power sitting in remote servers.

When you’re in a patient’s room or working in a hospital, it’s a very simple computer, and you need efficient algorithms and tools that can run on that kind of machine. And of course, you go all the way down to a smartphone at some point if you go into remote areas. So this is why we take this kind of approach: making sure that we have research on LLMs and large computing power, of course, but also this work on small data and very efficient algorithms.

Alpan Rawal

Can you give examples?

Antoine Tesniere

Well, yes. I mean, I already gave some examples about radiology. We have a radiology algorithm running on a small machine. And getting back to your example, which I think is really important, it gives me the opportunity to make two very important points. One is that the AI we use is providing information; it’s not making decisions in healthcare. Of course we target a high level of reliability, but in the end it’s a human decision, and this is very important, I think. The second is that we’ve been comparing the performance of the algorithms we’ve been designing with existing performance. Of course you’re reaching for 99.999%, et cetera. But what very few people actually know is that the actual performance of what we do at the moment is not 99.999%. So most of the time, and I won’t say the numbers, it’s actually better than what we have.

And this is really important in your example: is it good enough compared to what we can actually do at the moment? I think it’s particularly important in low- and middle-income countries, because a very simple solution, offline LLMs, et cetera, can solve many, many issues.

Zameer Brey

Alpan, can I pick up quickly? I think it’s really important, and actually I’m going to name the number, if it’s okay. A really important World Bank study from a few years back showed that on a set of five very simple conditions, diagnostic accuracy was 50% across eight countries. 50%. What illnesses are we talking about? Acute diarrhea, upper respiratory tract infection, maternal hypertension. And the point is, I don’t think any of us would be happy with 50%, the equivalent of tossing a coin and saying that’s okay. So I completely understand that today there’s a big gap between what the models can offer. And I think the question about whether the models perform better than the average clinician, that’s settled.

Alpan Rawal

Sorry, I can’t resist the follow-up question. You often find that the average accuracy of models is far better, but models seem to fail more unpredictably than humans; at least, that’s the understanding in healthcare. Do you agree with that, or do you think that’s not true? Anyone who wants to answer this.

Antoine Tesniere

Well, I think we’d need another hour to discuss this. What you say is absolutely true, but then you need to look at every pathology or every symptom you’re examining, because diagnostic performance can be a little higher in certain places and situations, a little lower in others, et cetera. But we get right back to the same point, which is that what we are building is actually better than what we are able to do at the moment. And what the scientific literature shows is that the combination of the algorithm and natural intelligence, I would say the doctor, is actually the best tool so far. So, getting back to your question of how we deploy this in low- and middle-income countries, I think it’s really important.

We need models that are able to run on small devices, that are able to run offline. Sometimes it’s a very limited set of data, a very limited set of algorithms. We were actually discussing in Paris examples of an LLM in remote areas providing answers to the 10 most important questions for healthcare in low- and middle-income countries. That doesn’t need online LLMs with super calculation power. So that’s the first point: edge-native AI. We also need data-efficient learning systems, because most of the time in low- and middle-income countries we have a limited amount of data available. This is what I discussed earlier.
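As an illustration of the kind of efficiency technique behind running models on simple machines, the sketch below shows symmetric 8-bit quantization of a weight matrix: storing one byte per weight instead of four, at a small, bounded cost in precision. This is a generic toy example, not anything Parasante uses; all values are invented.

```python
import numpy as np

# Toy sketch: symmetric int8 quantization of a weight matrix, the kind of
# trick that helps models fit on modest hospital computers or smartphones.
def quantize_int8(w):
    scale = np.abs(w).max() / 127.0          # map the largest weight to 127
    q = np.round(w / scale).astype(np.int8)  # 1 byte per weight instead of 4
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.standard_normal((256, 256)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes // q.nbytes)                    # 4x smaller storage
print(float(np.abs(w - w_hat).max()) < scale)  # True: error under one step
```

The reconstruction error is bounded by half a quantization step, which is why such models often remain accurate enough for on-device inference.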

Alpan Rawal

We have a lot of data in India, but it tends to be noisy.

Antoine Tesniere

Yes, but we need to take the time to actually bring them together, clean them, and prepare them for robust analysis. I know you are leapfrogging and going very fast, but by the time you scale, this will create real analytical power. And then we also need to understand how we can couple hardware with software and algorithms to reduce costs, so that they can scale very easily. Thank you.

Alpan Rawal

Great. That was fantastic insight. I’d actually like to give some time to the audience to ask questions to our panelists. So, yes, please.

Audience

Thank you very much. I’m Irish Kumar from the CSC Winnie Ocean Center, working on solar energy, and I belong to Rajasthan. A question to the World Bank, with many thanks to the Bank: in Rajasthan, 60% of the population lives in rural areas and depends entirely on agriculture, and 40% of the population is youth. How is the Bank increasing capacity for AI applications among the youth and in the agricultural domain, so that there is more productivity, more economic change, and more youth inclusion in the climate change and renewable energy domains?

Illango Patchamuthu

Thank you for that question, which I think is a very foundational question to ask any policymaker about what kind of AI strategy or implementation you want in any geography in the world. Obviously, the first thing is digital literacy. Second, you need skilling, so that everybody is upskilled and reskilled on AI-related capabilities. Third is improving STEM capability in schools and universities, so you create a future cadre of people who can work on these topics. And then the sectors you mentioned, which are our priorities: agriculture, health, and education. This is where we see the greatest potential for small AI. On Rajasthan specifically, I don’t have any information right now, but I’m happy to share that with you.

But certainly we are working across different states in India, as we do elsewhere in the world. And we do prioritize literacy, skilling, STEM, and applications in priority sectors like agriculture, health, and education.

Alpan Rawal

But having said that, I also want to make one point in response, about devices that can do computing: devices are expensive for the bottom 40%.

Audience

Yeah. Hi, my name is Selena. I’m the CEO and co-founder of Zindi. We run competitions to develop models, especially in Africa. And I actually had a question for Wassim about the technical implications, the size implications, the practicality of using open-source, open-weight models, you know, large language models, to train very specific, domain-specific, under-resourced language models. How have you seen that play out?

Wassim Hamidouche

Yeah, I think what we have seen is that the selection of the base model is very important. Because the reality is that we cannot train an LLM from scratch, whether a small or a large language model, for low-resource languages: we don’t have the 15 trillion tokens to train on. So it is very important to select the best multilingual model, one with the right tokenizer, that can be adapted to many low-resource languages. This is very important. And then get the data that we need. What we have seen is that monolingual data helps, but bilingual data can also help, and translating English into the low-resource language can also help to boost performance.

So in our paper, we provide these three steps to follow to get the best boost in performance. What I would like to add is that, with all these low-resource languages, text alone cannot solve them all. Many of these languages will be served by speech; it’s very important. ASR models, speech-to-text and text-to-speech, will play a very large role in unlocking all these low-resource languages, in addition to LLMs that can operate in a low-resource language or in English.
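One concrete way to judge whether a multilingual base model has "the right tokenizer" for a target language is tokenizer fertility: the average number of tokens produced per word, where lower is better. The sketch below is a generic illustration, not from Wassim's paper; the two toy "tokenizers" are invented stand-ins for real subword tokenizers.

```python
# Toy sketch of a tokenizer-fertility check for base-model selection.
# A tokenizer that fragments a low-resource language into many pieces
# (high fertility) is a poor starting point for adaptation.
def fertility(tokenize, sentences):
    tokens = sum(len(tokenize(s)) for s in sentences)
    words = sum(len(s.split()) for s in sentences)
    return tokens / words

word_level = lambda s: s.split()                  # best case: 1 token per word
char_level = lambda s: list(s.replace(" ", ""))   # worst case: 1 per character

sample = ["habari ya asubuhi", "karibu sana"]     # Swahili greetings
print(fertility(word_level, sample))      # 1.0, ideal coverage
print(fertility(char_level, sample) > 3)  # True: heavy fragmentation
```

In practice one would run the same measurement over a held-out corpus with each candidate model's actual tokenizer and prefer the one with the lowest fertility.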

Alpan Rawal

I think we have time for one really short question.

Audience

Hi, this is Dr. Ravi Singh. I’m from Miami, and it was a great panel, so a lot of great insights. It’s for Google, Microsoft, and the World Bank. Here’s the scenario. If there’s compliance across all of these platforms, which platform will win the AI wars?

Alpan Rawal

That’s a loaded question. Anyone want to answer? I’m not. So first of all, I think healthy competition is how we’ve been able to develop incredible technologies over time. So the competition is healthy, and this is great. I don’t see it as a zero-sum game. There are too many people on the planet, and there are too many challenging, unique problems that need to be solved. So if we’re making it useful and bringing joy and happiness for all, which, I just love it, is in the theme here, then it’s not necessarily about who wins on whatever platform. It’s about what is relevant to the context of the end user. So taking it back to a more human, personal perspective.

That’s my thinking.

Illango Patchamuthu

First, three billion people are offline, so there is space for everybody to compete. Second, in the health sector alone, three and a half billion people don’t have access to healthcare, so there is enough scope for all kinds of applications.

Wassim Hamidouche

I just want to add: many people have been asking me whether all these efforts we are making for languages are enough to make these models as good as they are for English. I would say maybe not, but without all these efforts we would never reach this objective. All these collective efforts will get us there.

Alpan Rawal

Thank you so much, everyone. I would now like to invite Neha Butts, Associate Director, Human Resources, to hand out the mementos to all our speakers, and we will take one group photo. Requesting the speakers to please gather for one group photo. Thank you so much, everyone, and thank you for joining.

Related Resources: Knowledge base sources related to the discussion topics (26)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“The moderator defines small AI as technology that must be meaningful for the end‑user’s local context rather than generic solutions.”

The knowledge base states the moderator defines small AI based on relevance to the end-user’s context [S1].

Confirmed (high)

“Google Research Africa released an open multilingual voice dataset covering 27 African languages out of an estimated 2,000.”

The knowledge base notes a released dataset of 27 voice languages for Africa, referencing the continent’s roughly 2,000 languages [S6].

Additional Context (medium)

“Google Research Africa built a continent‑wide weather‑forecasting system that compensates for Africa’s severe radar shortage – only 37 stations compared with roughly 300 in North America and Europe.”

The knowledge base discusses hyper-local, multi-modal forecasting that combines satellite, ground sensors and cameras to achieve fine-grained predictions, providing context on the technical approach to address data gaps [S20].

External Sources (114)
S1
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouch…
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — -Announcer: Role/Title: Event announcer/moderator; Area of expertise: Not mentioned
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Takahito Tokita Fujitsu — -Announcer: Role as event announcer/host, expertise/title not mentioned
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — -Announcer: Role/Title: Event announcer/moderator; Areas of expertise: Not mentioned
S5
How Small AI Solutions Are Creating Big Social Change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S6
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S7
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Antoine Tesniere- Illango Patchamuthu – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesnie…
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Do you actually use them in Africa? Aisha Walcott-Bryant: Oh, yes. Yes, yes, yes, yes. I think that’s a great analogy….
S9
THE IMPACT OF RAPID TECHNOLOGICAL CHANGE ON SUSTAINABLE DEVELOPMENT — – Abdus Salam International Centre for Theoretical Physics (2018). New Internet of things Doctoral Programme: ICTP suppo…
S10
How Small AI Solutions Are Creating Big Social Change — -Antoine Tesniere- French professor of medicine and entrepreneur, specializing in health innovation and crisis managemen…
S11
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Requesting you to please join the panel. Antoine, Antoine Tesniere, who’s a French professor of medicine and entrepreneu…
S12
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S13
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S14
How Small AI Solutions Are Creating Big Social Change — – Aisha Walcott-Bryant- Wassim Hamidouche- Antoine Tesniere – Illango Patchamuthu- Antoine Tesniere- Wassim Hamidouche
S15
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S16
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S17
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S18
Why smaller AI models may be the smarter choice — Most everyday jobs do not actually need the most powerful, cutting-edge AI models, argues Jovan Kurbalija in his blog po…
S19
Digital democracy and future realities | IGF 2023 WS #476 — Current regulations may not fully consider the practices and needs of these platforms, which can impede their ability to…
S20
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providi…
S21
Toward Collective Action_ Roundtable on Safe & Trusted AI — Cool. So I think we just have to be very, very careful here of the sort of, you know, the Silicon Valley approach of mov…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI-enabled system bu…
S23
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — But you can figure it out. But basically what we need… to do is essentially teach the kid learning to learn using AI, …
S24
AI that serves communities, not the other way round — At theWSIS+20 High-Level Eventin Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S25
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S26
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S27
Open Forum #54 Advancing Lesothos Digital Transformation Policies — Funding constraints limit many initiatives to pilot phases, with the digital skills training programme initially reachin…
S28
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S29
Shaping the Future AI Strategies for Jobs and Economic Development — So I think they should start small and have a few small scales. quick impact projects so that they can build on proven s…
S30
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — Jobs within the agri-food value chain, such as advisory services, should be maintained to promote decent work and econom…
S31
Redrawing the Geography of Jobs / Davos 2025 — Using technology to supplement rather than replace existing jobs and skills, especially in informal economies
S32
A Digital Future for All (afternoon sessions) — There is a need to build AI capacity in developing countries to ensure they can participate in and benefit from AI advan…
S33
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — In summary, digital health has achieved technical maturity but lacks organizational maturity. Comprehensive understandin…
S34
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we ac…
S35
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Development | Sociocultural Emphasis on building use cases in key sectors and creating shareable repositories across ge…
S36
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Large language models can be run on personal laptops
S37
Open Forum #14 Data Without Borders? Navigating Policy Impacts in Africa — Moderator: Thank you very much for that. Let me go, because we are running out of time now. Africa is a specific cont…
S38
Stronger digital voices from Africa — 330 German Agency for International Cooperation [GIZ]. (2019). Background paper on Open Forum to present Ethical Polic…
S39
AI for Good – food and agriculture — – Use of remote sensing and geospatial platforms for analyzing drought, water stress, and crop management Dongyu Qu: Ex…
S40
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S41
Democratizing AI Building Trustworthy Systems for Everyone — “So we’re pleased to announce a Lingua Africa initiative where we are working with local communities in partnership with…
S42
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S43
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Portugal offers something increasingly rare, agility with stability. A country large enough to scale, yet compact enough…
S44
Small states, big ambitions: How startups and nations are shaping the future of AI — At theInternet Governance Forum 2025in Lillestrøm, Norway, a dynamic discussion unfolded on how small states and startup…
S45
[Parliamentary Session 4] Fostering Inclusive Digital Innovation and Transformation — Gong Ke: Thank you. Thank you so much. I think this year’s IGF is one of the important international events after the Un…
S46
AI that serves communities, not the other way round — At theWSIS+20 High-Level Eventin Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S47
Elections and the Internet: free, fair and open? | IGF 2023 Town Hall #39 — Data needed for policy making needs to reflect their specific local contexts
S48
WSIS Action Line C8: Multilingualism in the Digital Age: Inclusive Strategies for a People-Centered Information Society — Sofiya Zahova: Thank you, Davide. I’m honored and delighted to join you today on this important panel, but even more ple…
S49
Main Session 1: Global Access, Global Progress: Managing the Challenges of Global Digital Adoption — Shivnath Thukra: Thanks to you and thanks for inviting me, Meta from India on this panel. I will, in the spirit of bein…
S50
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur provided extensive analysis of how language inequities create syst…
S51
AI as a tech ally in saving endangered languages — Funding community-led data collection and annotation projects Supporting open evaluation benchmarks for low-resource la…
S52
Earth’s Wisdom Keepers — Creating trust between communities and policymakers and valuing indigenous knowledge is crucial for successful collabora…
S53
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — International collaboration is essential for developing countries, requiring customization, learning, and evidence-based…
S54
Informal Stakeholder Consultation Session — Digital transformation affects every sector, so coordinated policymaking helps ensure coherence and better outcomes for …
S55
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Jigar Halani articulated the complexity of trust requirements across different user groups: while IT professionals might…
S56
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Introduction and Context Setting ## Sectoral Applications: Healthcare Insights However, Flanagan highlighted a fund…
S57
NRIs MAIN SESSION: DATA GOVERNANCE — Furthermore, it is noted that support for data systems should not be limited to the private sector. The analysis suggest…
S58
Data Policy in the Fourth Industrial Revolution: Insights on personal data — Assessing risk requires those setting policies to consider the context in which data is collected and processed.
S59
AI and Data Driving India’s Energy Transformation for Climate Solutions — Data ecosystem challenges and need for granular, interoperable data Data governance | Capacity development | Monitoring…
S60
Conversational AI in low income &amp; resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S61
Empowering communities through bottom-up AI: The example of ThutoHealth — Community trust: Ensuring AI tools are culturally relevant, i.e. available in local languages and aligned with tradition…
S62
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Development | Economic | Future of work Sectoral Applications and Global Development World Bank president’s presentati…
S63
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S64
WSIS Action Line C7: E-Agriculture — Both speakers recognize that while pilot projects are valuable for testing solutions, they often fail to scale without p…
S65
Building Climate-Resilient Systems with AI — The time for action is immediate – moving from research and pilots to deployment and impact is essential
S66
The Future of Digital Agriculture: Process for Progress — Technologies must be easily accessible, economically viable for the lowest-income groups, relevant to the context, and s…
S67
How Small AI Solutions Are Creating Big Social Change — “It’s what is relevant to the context of the end user”[6]. “So what role, in your view, do small and custom AI models ha…
S68
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Ioanna Ntinou: I think that my question will be, as a researcher, if we focus so much on having smaller models, if we ac…
S69
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S70
Survival Tech Harnessing AI to Manage Global Climate Extremes — -Hyperlocal and Multi-Modal Forecasting: There was significant discussion about developing AI systems capable of providi…
S71
https://dig.watch/event/india-ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — try to invest in the township planning and the implementation. Also, we can have a water supply road project that can be…
S72
Strategy outline — – 3.1 Encourage public-private sectors competition, promote entrepreneurship and innovation in the fields of…
S73
Strategy — ‘Foster the use of AI in vital developmental sectors using partnerships with local beneficiaries and local or foreign te…
S74
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Audience: Good evening, everyone. Is it? Okay. My name is Lydia Lamisa Akamvareba from Ghana. I’m looking at the team up…
S75
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Large language models can be run on personal laptops
S76
Webinar :Using current and emerging cyber tools for disaster management in Africa — Alphonso Wilson:Yeah, shortly, I think I’ll be very brief. Yeah, in response to whatever, in the issues of the climate c…
S77
AI for Good – food and agriculture — – Use of remote sensing and geospatial platforms for analyzing drought, water stress, and crop management Dongyu Qu: Ex…
S78
AI for Good Impact Awards — Development | Sustainable development In the pilot program, rangers intercepted two logging crews before the first tree…
S79
UK researchers test robotic dogs and AI for early wildfire detection — Researchers at the University of Bradford arepreparingto pilot an AI-enabled wildfire detection system that uses robotic…
S80
Democratizing AI Building Trustworthy Systems for Everyone — Lingua Africa initiative launched to collect local data with communities for spoken languages in partnership with Gates …
S81
WS #219 Generative AI Llms in Content Moderation Rights Risks — ### The Low-Resource Language Crisis Dhanaraj Thakur: Yeah, great. Thank you, Marlena. And thanks for the invitation to…
S82
AI Innovation in India — “The solution is a system or a framework that reasons across modalities and refers to previous conclusions, contradicts …
S83
Ateliers : rapports restitution et séance de clôture — Gouvernance de l’IA dans le domaine de la santé L’intelligence artificielle est déjà présente dans le domaine de la san…
S84
WS #323 New Data Governance Models for African Nlp Ecosystems — Deshni Govender: Sure. I think it’s important also to point out that when we mention the concept of extractive practices…
S85
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Canada’s AI for Development projects in Africa and Latin America have been highly appreciated for their positive impact….
S86
Leveraging AI4All_ Pathways to Inclusion — Language and Low‑Resource Context Challenges
S87
AI as a tech ally in saving endangered languages — Supporting open evaluation benchmarks for low-resource languages
S88
Transforming Health Systems with AI From Lab to Last Mile — Vikalp Sahni identified key technical challenges including building systems that work across multiple languages and gene…
S89
Panel Discussion Inclusion Innovation & the Future of AI — The tension between Ball’s emphasis on frontier AI capabilities and Ramos’s focus on addressing market concentration rep…
S90
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Panellists offered different outlooks on employment implications. Rees-Jones maintained optimism about AI tutoring enhan…
S91
Scaling AI for Billions_ Building Digital Public Infrastructure — Talent development and future outlook
S92
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Moderate disagreement with significant implications. While speakers agree on the fundamental opportunity that open sourc…
S93
Developing capacities for bottom-up AI in the Global South: What role for the international community? — The discussion explored alternatives to mainstream Western AI approaches. Gurumurthy highlighted the BRICS AI declaratio…
S94
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Hamleh creates unique AI models designed specifically for their regional context, built on localized words, terms, defin…
S95
Building an Enabling Environment for Indigenous, Rural and Remote Connectivity — A key point of agreement among speakers was the necessity of making connectivity affordable and accessible. The cost of …
S96
Panel Discussion: 01 — “You know, when you think about the journey that we’ve had till now, the global community has had till now with AI, a lo…
S97
Main Session on Sustainability & Environment | IGF 2023 — Citizens need access to information that enables them to make environmentally responsible choices. It is important for i…
S98
GermanAsian AI Partnerships Driving Talent Innovation the Future — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers demonstrated mutual resp…
S99
Main Session | Policy Network on Meaningful Access — The session began with Vint Cerf emphasising that the definition of meaningful access changes over time and depends on n…
S100
Setting the Rules: Global AI Standards for Growth and Governance — The discussion maintained a consistently collaborative and constructive tone throughout. Panelists demonstrated remarkab…
S101
Responsible AI in India Leadership Ethics & Global Impact part1_2 — This set the foundational tone for the entire panel discussion, moving away from abstract principles to practical implem…
S102
Heathrow explores AI to ease air traffic congestion — Heathrow Airport, one of the world’s busiest, is trialling an advanced AI system named ‘Amy’ to assist air traffic contr…
S103
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — However, Prince expressed optimism about this transition, arguing that it presents an opportunity to correct longstandin…
S104
Responsible AI for Shared Prosperity — This comment provided empirical validation for the urgency of the initiatives being discussed and introduced the concept…
S105
State of play of major global AI Governance processes — These regulations are context-sensitive, harmonised to varying degrees as needed; traffic regulations in the UK, for exa…
S106
From Technical Safety to Societal Impact Rethinking AI Governance — impact. Across global AI discussion, safety is too often being framed in technical terms. Model alignment, red teaming, …
S107
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-bharats-health_-addressing-a-billion-clinical-realities — which can be developed by IKAK and other health startups. Where ABDM created the federated architecture, where the model…
S108
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — The session emphasised the benefits of these collaborative approaches, which enable regulators to stay updated on the la…
S109
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S110
Resilient and Responsible AI | IGF 2023 Town Hall #105 — Audience:I have three interventions and I’m going to do it in two minutes. One is at national level, one is at continent…
S111
Google and Cassava expand Gemini access in Africa — Google announced a partnership with Cassava Technologies to widen access to Gemini across Africa. The deal includes data-f…
S112
Google boosts AI and connectivity in Africa — Google has announced new investments to expand connectivity, AI access and skills training across Africa, aiming to accel…
S113
DIGITAL DIVIDENDS — and US$36 billion a year. Data on river flows are essential for disaster risk planning and for planning and…
S114
High Level Leaders Session 3 | IGF 2023 — Audience:Honorable Ministers, Excellencies, distinguished panelists, ladies and gentlemen, it is a great honor to join y…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Alpan Rawal
2 arguments · 127 words per minute · 1158 words · 546 seconds
Argument 1
Small AI is data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal)
EXPLANATION
Alpan described small AI as models that require minimal data, are inexpensive to operate, can run on edge devices, and are tailored to the specific local contexts of the communities they serve. He emphasized that such AI should be meaningful for underserved populations, especially in rural India.
EVIDENCE
Alpan explained that small AI should be data-efficient, inexpensive to operate, capable of running on edge devices, and most importantly produce outcomes that are meaningful for the specific communities they serve, particularly underserved rural populations in India [15-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The definition and benefits of small AI, including data efficiency, low cost, edge deployment and local context awareness, are discussed in [S1] and the advantages of smaller models for everyday tasks are highlighted in [S18].
MAJOR DISCUSSION POINT
Defining small AI
AGREED WITH
Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Argument 2
Healthy competition among platforms drives innovation; the winner is the solution that fits the user’s context (Alpan Rawal)
EXPLANATION
Alpan argued that competition between AI platforms is beneficial and not a zero‑sum game; the most successful platform will be the one that best addresses the specific needs and context of end users. He highlighted that many problems remain unsolved, providing space for multiple solutions.
EVIDENCE
Alpan stated that healthy competition has driven incredible technological advances, that there are many challenges to solve, and that the platform that best fits the user’s context will be the most relevant, rather than a single winner [386-392].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of competition and multiple platform choices for fostering innovation is emphasized in [S19], while the need for solutions that fit local contexts is underscored in [S24].
MAJOR DISCUSSION POINT
Future outlook and competition among platforms
Z
Zameer Brey
2 arguments · 123 words per minute · 795 words · 385 seconds
Argument 1
Design small AI for specific local problems rather than generic large models (Zameer Brey)
EXPLANATION
Zameer used a traffic analogy to argue that AI solutions should be small, fast, cost‑effective, and suited to local constraints rather than large, generic models that are ill‑suited to specific environments. He emphasized designing AI that fits the lived context of users.
EVIDENCE
He compared Delhi traffic to airplane design, concluding that a smaller, faster, cheaper solution would be appropriate for local transport needs, illustrating the need for locally-tailored AI rather than large generic models [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Focusing AI on local problems and community relevance is supported by [S1] and [S24], and the call for appropriate technology in developing settings is echoed in [S25].
MAJOR DISCUSSION POINT
Design small AI for local problems
AGREED WITH
Alpan Rawal, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Argument 2
Move from black‑box to “glass‑box” verifiable AI with audit trails to ensure repeatability (Zameer Brey)
EXPLANATION
Zameer called for AI systems that are transparent and auditable, allowing users to trace the logic behind outputs. Such “glass‑box” AI would reduce errors and increase trust by making the decision process repeatable and verifiable.
EVIDENCE
He described the need for verifiable AI that shifts from a black-box to a glass-box, exposing the logic chain for each input, enabling audits, repeatability, and prevention of fundamental errors [160-165].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, auditable AI systems are made in [S22] (“black box must become a glass box”) and reinforced by the cautionary stance on rapid deployment in [S21].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
AGREED WITH
Illango Patchamuthu, Antoine Tesniere
I
Illango Patchamuthu
7 arguments · 159 words per minute · 1217 words · 459 seconds
Argument 1
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
EXPLANATION
Illango asserted that small AI is not inferior to larger models; it can solve real problems efficiently and accelerate development outcomes in low‑resource settings. He emphasized that small AI should be seen as a means to an end, not a second‑class solution.
EVIDENCE
He explicitly stated that small AI is not second class, can solve problems, and can fast-track development outcomes, countering the perception that it is inferior [120-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The impact of small, context-aware AI for underserved populations is described in [S1]; its non-inferior status and community focus are highlighted in [S24] and [S18].
MAJOR DISCUSSION POINT
Defining small AI and its relevance for local impact
AGREED WITH
Zameer Brey, Antoine Tesniere
Argument 2
Transform pilots into plug‑and‑play solutions that can scale from a single village to larger regions (Illango Patchamuthu)
EXPLANATION
Illango highlighted the importance of moving beyond pilot projects to scalable, replicable solutions that can be deployed across many villages and larger populations. He described the need for clear KPIs and plug‑and‑play models to enable this scaling.
EVIDENCE
He discussed the challenge of pilots losing momentum, and the World Bank’s approach of scaling tested small-AI applications from a single community to larger regions using plug-and-play solutions and appropriate KPIs [115-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Scaling from pilots to plug-and-play deployments is covered in [S1]; the transition from pilots to platforms is detailed in [S26]; challenges of scaling pilots are noted in [S27].
MAJOR DISCUSSION POINT
Deployment, scalability, and ecosystem considerations
AGREED WITH
Aisha Walcott‑Bryant, Wassim Hamidouche
Argument 3
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu)
EXPLANATION
Illango emphasized that collaborating with a broad ecosystem of NGOs, governments, academic institutions, and local stakeholders is essential for designing AI that fits local needs and gains community trust. Such partnerships help in data collection, contextual understanding, and implementation.
EVIDENCE
He noted the need to work with partners such as NGOs, governments, and local communities to ensure solutions are trustworthy, reliable, and do not hallucinate, reinforcing the importance of co-creation [115-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-driven co-creation and partnership models are highlighted in [S24]; issues of access and openness are discussed in [S28]; capacity-building collaborations are noted in [S32].
MAJOR DISCUSSION POINT
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption
Argument 4
World Bank’s AI Repository of 100 curated use cases provides an open‑access knowledge base for replication (Illango Patchamuthu)
EXPLANATION
Illango described the World Bank’s AI Repository, an openly accessible collection of around 100 use cases across health, education, and agriculture, intended to help other actors replicate successful small‑AI implementations.
EVIDENCE
He explained that the AI Repository is hosted by the World Bank, contains about 100 use cases, and will be openly accessible for others to view and submit vetted use cases [269-272].
MAJOR DISCUSSION POINT
Deployment, scalability, and ecosystem considerations
AGREED WITH
Wassim Hamidouche, Antoine Tesniere
Argument 5
Small AI should augment, not replace, jobs; it must create new employment opportunities in emerging economies (Illango Patchamuthu)
EXPLANATION
Illango argued that AI should support job creation rather than automation that eliminates jobs. Small AI can be leveraged to generate new employment opportunities in emerging economies.
EVIDENCE
He stated that the North Star is job creation and that AI should support the creation and enhancement of jobs rather than automate them away [243-246].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The job-augmenting role of AI is advocated in [S29] (start small, create impact), [S30] (AI as assistive tool), and [S31] (technology to supplement rather than replace jobs).
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
Argument 6
Building digital literacy, STEM education and up‑skilling are prerequisites for effective AI deployment (Illango Patchamuthu)
EXPLANATION
Illango highlighted that digital literacy, STEM education, and continuous up‑skilling are essential foundations for any AI strategy, ensuring that populations can develop, maintain, and benefit from AI solutions.
EVIDENCE
He listed three prerequisites: digital literacy, up-skilling on AI-related capabilities, and improving STEM capacity in schools and universities [350-354].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for digital literacy, STEM capacity and up-skilling is identified in [S32]; similar emphasis on digital literacy appears in [S6]; and capacity building for digital health is noted in [S33].
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
Argument 7
AI‑driven improvements in agriculture, health and education can raise household incomes and accelerate inclusive growth (Illango Patchamuthu)
EXPLANATION
Illango explained that deploying small AI in key sectors such as agriculture, health, and education can increase productivity, improve service delivery, and consequently raise household incomes, contributing to inclusive economic growth.
EVIDENCE
He referenced the AI Repository’s 100 use cases that demonstrate how AI can improve service delivery, productivity, and household income, leading to better jobs and inclusive growth [260-263].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s contribution to agriculture productivity and inclusive growth is documented in [S26]; impacts on health and education are discussed in [S33]; broader economic benefits of AI are highlighted in [S25] and household-income gains in [S6].
MAJOR DISCUSSION POINT
Role of AI in development goals and job creation
A
Aisha Walcott‑Bryant
3 arguments · 0 words per minute · 0 words · 1 second
Argument 1
Accurate weather‑forecasting for rain‑fed agriculture using limited radar infrastructure in Africa (Aisha Walcott‑Bryant)
EXPLANATION
Aisha described Google Research Africa’s effort to improve weather forecasts for rain‑fed agriculture, addressing the scarcity of radar stations on the continent. Better forecasts help farmers plan planting and mitigate climate risks.
EVIDENCE
She noted that Africa has only about 37 weather radar stations compared with roughly 300 in North America and Europe, and that Google launched a continent-wide weather-forecasting service to provide more accurate predictions for rain-fed agriculture [50-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hyperlocal, multi-modal weather forecasting for climate-extreme management is described in [S20]; the African continent-wide weather service initiative is mentioned in [S1].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 2
Creation of multilingual voice datasets for African languages to improve accessibility (Aisha Walcott‑Bryant)
EXPLANATION
Aisha highlighted the development of a voice dataset covering 27 African languages, aiming to improve accessibility and enable voice‑based AI services in rural villages where literacy may be low.
EVIDENCE
She mentioned releasing a dataset of 27 voice languages out of roughly 2,000 African languages, emphasizing that the partnership-led effort focuses on accessibility and reaching rural villages [60-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The release of a 27-language African voice dataset is reported in [S1].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 3
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Aisha Walcott‑Bryant)
EXPLANATION
Aisha stressed that Google collaborates with local partners, NGOs, and academic institutions to co‑create AI solutions that fit local contexts, ensuring that technology is appropriate and adopted by communities.
EVIDENCE
She described partnership-led work with entities such as Macquarie University and Digital Umaga, emphasizing co-creation and local involvement in data collection and solution design [61-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-driven co-creation and partnership models are highlighted in [S24]; issues of access and openness are discussed in [S28]; capacity-building collaborations are noted in [S32].
MAJOR DISCUSSION POINT
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption
W
Wassim Hamidouche
6 arguments · 152 words per minute · 1552 words · 609 seconds
Argument 1
Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche)
EXPLANATION
Wassim presented two open‑source, edge‑deployable AI projects: SPARO for acoustic biodiversity monitoring in remote areas, and Alert California, a network of cameras with AI for early wildfire detection. Both are designed to run on low‑resource infrastructure.
EVIDENCE
He described SPARO as a solar-powered acoustic and remote observation system that uses AI to detect animal species and transmits data via satellite in remote regions, already deployed in countries such as Colombia, Peru, the United States, Tanzania, etc. [82-88]; and he explained Alert California as a 1,300-camera network operating 24/7 with AI tools that detect early fires to enable rapid response [89-97].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
AGREED WITH
Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Antoine Tesniere
Argument 2
Internet data is >60% English, leaving low‑resource languages under‑represented (Wassim Hamidouche)
EXPLANATION
Wassim highlighted that the majority of internet text is in English, causing low‑resource languages to be severely under‑represented in training data for large language models.
EVIDENCE
He noted that more than 60% of internet data is English, with high-resource languages like French and Portuguese following, while low-resource languages, which account for most of the world’s more than 7,000 languages, constitute only a tiny fraction [185-186].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
Argument 3
Few or no evaluation benchmarks and limited safety alignment for many languages (Wassim Hamidouche)
EXPLANATION
Wassim explained that most low‑resource languages lack benchmark datasets for model evaluation, and safety alignment work is primarily done for English and other high‑resource languages, leaving gaps in reliability and ethical safeguards.
EVIDENCE
He reported that only about 300 languages have at least one benchmark, many have none, and existing benchmarks focus mainly on English-to-language translation without cultural context; safety alignment is also largely limited to English and high-resource languages [187-196].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
Argument 4
Need to shift from generic data collection to domain‑specific, use‑case driven data gathering (Wassim Hamidouche)
EXPLANATION
Wassim argued that instead of collecting broad, general‑purpose data, efforts should focus on gathering data specific to particular domains, applications, and use‑cases to improve model performance where it matters most.
EVIDENCE
He stated that after sufficient general data collection, the next priority is domain-specific, application-specific data collection to build reliable AI tools for sectors such as healthcare, education, and agriculture [233-237].
MAJOR DISCUSSION POINT
Challenges of low‑resource languages and data scarcity
AGREED WITH
Illango Patchamuthu, Antoine Tesniere
Argument 5
Release open‑weight models and foster community‑driven data pipelines to increase transparency and trust (Wassim Hamidouche)
EXPLANATION
Wassim highlighted that making model weights open and encouraging community participation in data collection enhances transparency, trust, and enables broader deployment of small AI solutions.
EVIDENCE
He mentioned that Google’s open-weight models (e.g., Gemma) can run on laptops and tablets, and that open models and community-driven pipelines are key to successful deployment [144-145].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-weight models and community data pipelines are advocated in [S28]; the need for transparent “glass-box” AI is discussed in [S22]; trust and safety concerns are raised in [S21].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
Argument 6
Large language models and small, open‑source models will coexist; collective open‑source efforts are essential to reach parity for low‑resource languages (Wassim Hamidouche)
EXPLANATION
Wassim asserted that both large foundation models and smaller open‑source models will have roles, and that collaborative open‑source initiatives are crucial to bring low‑resource language performance closer to that of English.
EVIDENCE
He noted that while efforts may not yet make low-resource models as good as English, collective open-source work is necessary to eventually achieve that objective [396-398].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The coexistence of large and small models and the necessity of collective open-source work for low-resource languages are mentioned in [S24]; the suitability of smaller models for many tasks is argued in [S18]; and appropriate technology for developing contexts is highlighted in [S25].
MAJOR DISCUSSION POINT
Future outlook and competition among platforms
A
Antoine Tesniere
2 arguments · 151 words per minute · 1246 words · 492 seconds
Argument 1
Radiology, dermatology and ophthalmology AI tools that run on low‑cost hardware for point‑of‑care diagnostics (Antoine Tesniere)
EXPLANATION
Antoine described existing small AI models used in healthcare for image analysis in radiology, dermatology, and ophthalmology that can operate on inexpensive hardware at the point of care, providing reliable diagnostics.
EVIDENCE
He explained that small AI models are already validated for tasks such as chest X-ray analysis, fracture detection, dermatology and ophthalmology image analysis, and can be deployed on low-cost computers for point-of-care use [102-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Point-of-care AI for medical imaging on inexpensive hardware is discussed in [S33]; the assistive role of AI in health settings is reinforced in [S30].
MAJOR DISCUSSION POINT
Domain‑specific applications of small AI
Argument 2
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
EXPLANATION
Antoine emphasized the need for AI algorithms that can run offline on modest hardware, reducing dependence on high‑performance computing and limiting unpredictable failures, especially in low‑resource settings.
EVIDENCE
He highlighted that offline-capable, edge-native AI and data-efficient learning systems are essential for low- and middle-income countries, noting that models must run on simple computers or smartphones and be robust without constant connectivity [332-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for robust, offline-capable AI algorithms is emphasized in [S21] and [S22]; digital health implementations requiring offline reliability are noted in [S33].
MAJOR DISCUSSION POINT
Strategies for building reliable, trustworthy small AI
AGREED WITH
Illango Patchamuthu, Zameer Brey
A
Aisha Walcott-Bryant
3 arguments · 167 words per minute · 1139 words · 408 seconds
Argument 1
Adopt a problem‑first approach: build simple, non‑AI solutions when they suffice and only apply AI when there is a clear, unmet need.
EXPLANATION
Aisha emphasizes that the team should start by identifying the concrete problem and consider the simplest solution, such as a literal red button, before introducing AI or complex technology.
EVIDENCE
She states, “It’s very much problem first” and illustrates the mindset by saying, “if there’s a red button that… you can press and it’s a one-error-zero, just build the red button. We don’t need to bring AI or technology.” [44-46]
MAJOR DISCUSSION POINT
Problem‑first design of AI solutions
Argument 2
Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities.
EXPLANATION
Aisha highlights that Google’s open‑weight models are intentionally lightweight so they can operate on low‑cost devices, bringing AI capabilities directly to users in remote or low‑resource settings.
EVIDENCE
She notes, “our open models, our open weight models, Gemma, are made for a lot of these solutions that are more closer to the edge… we have nano models that can run on your laptop and tablets and so forth.” [144-145]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for open-weight, edge-compatible models and community openness is covered in [S28]; transparency and verifiability of such models are emphasized in [S22].
MAJOR DISCUSSION POINT
Edge‑deployable open‑weight AI models
Argument 3
Leverage Google’s massive compute, AI expertise, and societal‑impact mandate to address African challenges at scale.
EXPLANATION
Aisha explains that Google Research Africa uses its global computing resources and AI know‑how, aligned with a mandate for large‑scale societal impact, to tackle problems that are unique to the continent.
EVIDENCE
She says, “Coming from Google Research, we want to leverage our compute, our AI expertise and capabilities, and then our mandate, which is the societal impact at scale, to think about the types of problems that we work on.” [48-49]
MAJOR DISCUSSION POINT
Using corporate resources for societal impact
Agreements
Agreement Points
Small AI should be lightweight, data‑efficient, cheap to run, edge‑deployable and tailored to local contexts and underserved communities
Speakers: Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Small AI is data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal) Design small AI for specific local problems rather than generic large models (Zameer Brey) Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu) Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities (Aisha Walcott‑Bryant) Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche) Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
All panelists described small AI as models that require minimal data, are inexpensive, can run on edge devices, and are designed for the specific local contexts of underserved populations, whether in rural India, African villages, or low-resource health settings [15-19][32-34][120-124][144-145][84-88][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on lightweight, data-efficient, edge-deployable AI for underserved communities mirrors the partnership-led African languages initiative that stresses accessibility and local relevance [S42] and the World Bank’s promotion of edge AI models for low-income farmers in Africa [S62].
Co‑creation with local partners (NGOs, governments, academia, communities) is essential for relevance, adoption and trust
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu) Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Aisha Walcott‑Bryant) Collaboration with NGOs, governments, nonprofit organizations, and local communities to build AI solutions (Wassim Hamidouche)
Each speaker highlighted that working together with NGOs, governments, academic institutions and community members is crucial to design AI that fits local needs and gains trust [115-124][61-65][76-78].
POLICY CONTEXT (KNOWLEDGE BASE)
Co-creation with NGOs, governments and communities is highlighted as a core principle in multiple inclusive AI forums, including the partnership-driven approach in African language projects [S42], the community-centric AI capacity building discussed at the WSIS+20 event [S46], and OECD-style guidance on earning trust through local co-creation [S53].
Shift from generic data collection to domain‑specific, use‑case driven data gathering to improve model performance where it matters
Speakers: Wassim Hamidouche, Illango Patchamuthu, Antoine Tesniere
Need to shift from generic data collection to domain‑specific, use‑case driven data gathering (Wassim Hamidouche) World Bank’s AI Repository of 100 curated use cases provides an open‑access knowledge base for replication (Illango Patchamuthu) Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
The panelists agreed that after gathering general data, the priority should be collecting domain-specific data (e.g., health, agriculture, education) to build reliable small AI tools, as reflected in the AI Repository and the emphasis on data-efficient learning [233-237][258-262][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Moving from generic to domain-specific data collection is advocated in analyses of low-resource language inequities, which call for targeted community-led data pipelines [S50] and funded annotation projects for endangered languages [S51]; similar needs for granular, policy-relevant data are noted in energy transformation discussions [S59].
Small AI is not inferior; it can reliably solve problems and accelerate development outcomes
Speakers: Illango Patchamuthu, Zameer Brey, Antoine Tesniere
Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu) Move from black‑box to “glass‑box” verifiable AI with audit trails to ensure repeatability (Zameer Brey) Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
All three emphasized that small AI should be seen as a first-class solution, with transparency and reliability, rather than a lesser alternative [120-124][160-165][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence that Small AI can match or exceed larger models appears in the African small-AI case study showing social impact [S42] and the World Bank’s report on edge AI delivering tangible outcomes for smallholder agriculture [S62]; scaling platforms further demonstrate its effectiveness [S63].
Scaling pilots into plug‑and‑play, replicable solutions is essential for broader impact
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Transform pilots into plug‑and‑play solutions that can scale from a single village to larger regions (Illango Patchamuthu) Our work is scaling from the uniqueness of the continent (Aisha Walcott‑Bryant) SPARO and Alert California are global solutions that can be deployed anywhere (Wassim Hamidouche)
Panelists highlighted the need to move beyond proof-of-concept pilots toward scalable, reusable small AI deployments that can be replicated across regions and countries [115-124][50-58][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of moving from pilots to plug-and-play, replicable solutions is echoed in several sectoral roadmaps, such as the agriculture scaling framework that stresses platform-level deployment [S63], the pilot-to-scale guidance from WSIS Action Line C7 [S64], and calls for rapid transition from research to impact in climate-resilient AI [S65] and digital agriculture [S66].
Similar Viewpoints
Both see competition and diversity of solutions as beneficial, provided they are context‑appropriate and serve underserved users rather than seeking a single dominant platform [386-392][120-124].
Speakers: Alpan Rawal, Illango Patchamuthu
Healthy competition among platforms drives innovation; the winner is the solution that fits the user’s context (Alpan Rawal) Small AI can deliver meaningful outcomes for underserved communities without being “second class” (Illango Patchamuthu)
Both stress the importance of open‑weight/open‑source models that can run on low‑cost edge devices to broaden access in low‑resource settings [84-88][144-145].
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant
Open‑source SPARO biodiversity monitoring and Alert California wildfire detection as small‑AI solutions (Wassim Hamidouche) Deploy open‑weight, nano‑scale models (e.g., Gemma) that can run on laptops and tablets, enabling edge AI for African communities (Aisha Walcott‑Bryant)
Both advocate for transparent, reliable AI that can be audited and run offline to avoid unpredictable failures in critical settings [160-165][332-340].
Speakers: Zameer Brey, Antoine Tesniere
Move from black‑box to "glass‑box" verifiable AI with audit trails to ensure repeatability (Zameer Brey)
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
Unexpected Consensus
Use of offline, edge‑native AI models in low‑resource health and agriculture contexts
Speakers: Antoine Tesniere, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Design offline‑capable, hardware‑efficient algorithms to minimise unpredictable failures (Antoine Tesniere)
Small AI can deliver meaningful outcomes for underserved communities without being "second class" (Illango Patchamuthu)
Deploy open‑weight, nano‑scale models that can run on laptops and tablets (Aisha Walcott‑Bryant)
Open‑source SPARO and Alert California are edge‑deployable small‑AI solutions (Wassim Hamidouche)
While each speaker focused on different domains (health, agriculture, biodiversity, wildfire detection), they all converged on the necessity of offline, low-cost, edge-compatible AI for impact in low-resource settings – a point not explicitly raised in the opening definitions but emerging across sectors [332-340][120-124][144-145][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Offline, edge-native AI for health and agriculture aligns with the World Bank’s showcase of edge AI models for low-resource farming [S62] and the broader push for accessible, low-cost AI tools in low-income settings highlighted at IGF 2023 [S60] and in digital agriculture reports [S66].
Open‑source, community‑driven data pipelines as a strategy to improve low‑resource language models
Speakers: Wassim Hamidouche, Aisha Walcott‑Bryant
Release open‑weight models and foster community‑driven data pipelines to increase transparency and trust (Wassim Hamidouche)
Partnership‑led data collection for African voice languages, making datasets openly available (Aisha Walcott‑Bryant)
Both highlighted that open, community‑generated data and model weights are key to advancing AI for low‑resource languages, a consensus that bridges corporate (Google) and corporate‑research (Microsoft) perspectives.
POLICY CONTEXT (KNOWLEDGE BASE)
Open-source, community-driven data pipelines are promoted as a remedy for the low-resource language crisis, with calls for community-led collection and open benchmarks in recent analyses [S50][S51] and exemplified by the African voice dataset initiative [S42].
Overall Assessment

The panel displayed strong consensus that small AI should be lightweight, data‑efficient, edge‑deployable, and co‑created with local stakeholders to address specific community needs. Participants agreed on the importance of domain‑specific data, open‑source models, transparency, and scaling pilots into reusable solutions. There was also a shared belief that small AI is not inferior but can reliably accelerate development outcomes.

High consensus across technical, ethical, and development dimensions, indicating a unified vision that small, context‑aware AI, built through partnerships and open practices, can play a pivotal role in achieving inclusive social and economic development.

Differences
Different Viewpoints
Required reliability and error tolerance for small AI in healthcare diagnostics
Speakers: Zameer Brey, Antoine Tesniere
Move from black‑box to "glass‑box" verifiable AI with zero error to prevent fatal mistakes (Zameer Brey)
Current small AI models are not 99.999% accurate but are still better than existing practice and acceptable for deployment in low‑resource settings (Antoine Tesniere)
Zameer stresses that small AI must achieve near-zero error and be fully auditable to avoid catastrophic outcomes, citing a maternal hypertension case where a lack of reliable AI contributed to a death [153-166][168-174]. Antoine counters that while models are not perfect, they already outperform current clinical practice and can be used effectively, especially when run offline on modest hardware, even if accuracy is below 99.999 % [300-311][332-340].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate over required reliability in healthcare diagnostics reflects findings that patients demand near-zero error rates, contrasting with higher tolerance among IT professionals, as documented in the AI for Bharat’s Health discussion [S55]; the need for sandbox testing to establish evidence bases is also noted [S56].
Unexpected Differences
Overall Assessment

The panel largely converged on the importance of small, context‑aware AI for underserved communities, agreeing on goals such as accessibility, partnership, and scalability. The principal point of contention concerned the acceptable level of reliability for health‑focused small AI, with Zameer demanding near‑zero error and full auditability, while Antoine argued that current, imperfect models already provide net benefits and are suitable for deployment in low‑resource settings.

Overall disagreement was low; the debate centered on a single technical nuance (reliability standards) rather than fundamental strategic differences, suggesting that consensus on the broader vision of small AI is strong, with only modest implications for implementation pathways.

Partial Agreements
All speakers share the overarching goal of deploying AI that benefits underserved or low‑resource populations, but they diverge on the primary strategy: Alpan emphasizes data‑efficiency and edge deployment; Zameer stresses local problem‑fit and verifiability; Illango focuses on scaling plug‑and‑play solutions; Aisha advocates a problem‑first mindset and open‑weight models; Wassim calls for domain‑specific data pipelines; Antoine highlights hardware‑efficient, offline algorithms for scarce data environments. These differing pathways are reflected throughout the discussion [15-19][32-34][120-124][44-46][144-145][233-237][300-311].
Speakers: Alpan Rawal, Zameer Brey, Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche, Antoine Tesniere
Small AI should be data‑efficient, cheap to run, edge‑deployable and context‑aware (Alpan Rawal)
Design small AI for specific local problems rather than generic large models (Zameer Brey)
Small AI can deliver meaningful outcomes for underserved communities without being "second class" (Illango Patchamuthu)
Problem‑first approach; use simple non‑AI solutions when possible and leverage open‑weight nano models for edge use (Aisha Walcott‑Bryant)
Open‑source, domain‑specific data collection and community‑driven pipelines are needed for reliable small AI (Wassim Hamidouche)
Data‑efficient, hardware‑integrated models are essential for healthcare, especially offline in low‑resource settings (Antoine Tesniere)
All three emphasize that multi‑stakeholder partnership and co‑creation are key to designing and deploying small AI that fits local contexts. While the wording differs, the consensus is that collaboration with NGOs, governments, academia and community actors underpins successful implementation [115-124][61-65][115-124].
Speakers: Illango Patchamuthu, Aisha Walcott‑Bryant, Wassim Hamidouche
Co‑creation with NGOs, governments, academia and local partners ensures relevance and adoption (Illango Patchamuthu)
Partnership‑led data collection and ecosystem engagement are essential for building appropriate solutions (Aisha Walcott‑Bryant)
Collaboration with NGOs, governments and local communities is crucial for trustworthy, reliable AI deployments (Wassim Hamidouche)
Takeaways
Key takeaways
Small AI is defined as data‑efficient, low‑cost, edge‑deployable models that are tailored to local contexts rather than generic large foundation models.
Designing AI for impact requires focusing on specific community problems (e.g., district hospitals, small farmers) and involving local stakeholders in model design and deployment.
Domain‑specific small‑AI solutions demonstrated real impact: accurate weather forecasting for rain‑fed agriculture in Africa, open‑source biodiversity monitoring (SPARO) and wildfire detection (Alert California), point‑of‑care radiology/dermatology tools, and multilingual voice datasets for African languages.
Low‑resource languages face major challenges: dominance of English in internet data, scarcity of benchmarks, and limited safety/alignment work. Targeted data collection and continual pre‑training can narrow performance gaps.
Trustworthiness is essential; moving from black‑box to "glass‑box" verifiable AI, releasing open‑weight models, and ensuring offline, hardware‑efficient operation reduce unpredictable failures.
Scalability hinges on turning pilots into plug‑and‑play solutions, co‑creating with NGOs, governments, academia, and local partners, and providing open‑access repositories of use cases (World Bank AI Repository).
AI should augment, not replace, jobs; building digital literacy, STEM education, and up‑skilling are prerequisites for inclusive economic growth in emerging economies.
Healthy competition among platforms (Google, Microsoft, World Bank, etc.) is beneficial; the "winner" is the solution that best fits the user's context, and open‑source collaboration is key to reaching parity for low‑resource languages.
Resolutions and action items
World Bank to host and maintain an open‑access AI Repository of ~100 curated small‑AI use cases for health, education, agriculture, and job creation.
Microsoft announced the Lingua Africa initiative (US$5.5 M) to fund domain‑specific data collection for African languages, building on the earlier Lingua Europe program.
Google Research Africa committed to releasing open‑weight models (e.g., Gemma) and multilingual voice datasets, and to continue partnership‑driven data collection across the continent.
Microsoft's SPARO and Alert California solutions will remain open‑source for global deployment.
Panelists emphasized the need to shift future data‑collection efforts toward domain‑specific, use‑case‑driven datasets rather than generic large‑scale corpora.
All participants agreed to pursue co‑creation models with local NGOs, governments, and academic partners for future pilots.
Unresolved issues
How to achieve near‑zero error rates and verifiable audit trails for critical health applications, especially in low‑resource settings.
Standardized benchmarks and safety alignment procedures for the majority of low‑resource languages remain lacking.
Ensuring reliable model performance that consistently exceeds average clinician accuracy without introducing unpredictable failures.
Addressing hardware affordability for the bottom‑40% of populations who cannot currently afford edge devices.
Concrete strategies for large‑scale digital literacy and up‑skilling programs across diverse emerging economies were discussed but not detailed.
Suggested compromises
Treat small AI as complementary to large foundation models: use open‑weight large models as a base, then fine‑tune with domain‑specific, low‑resource data.
Adopt a "glass‑box" approach: provide transparency and auditability while still leveraging powerful pretrained models.
Combine open‑source community contributions with targeted funding (e.g., Lingua Africa) to balance broad participation and focused resource allocation.
Deploy pilots as plug‑and‑play modules that can be replicated and scaled, acknowledging that not every pilot will immediately become a full‑scale solution.
Thought Provoking Comments
Small AI is defined as models that are data‑efficient, cheap to run, edge‑deployable and, most importantly, meaningful to the specific local communities they serve.
Sets the conceptual framework for the entire panel, moving the conversation away from the hype around large foundation models toward concrete criteria of relevance, efficiency, and impact.
Guided all subsequent speakers to frame their work in terms of data efficiency and local relevance, establishing a common language that shaped the direction of the discussion.
Speaker: Alpan Rawal (moderator)
Would anyone, given Delhi traffic, design something as big as an aeroplane to get across the city? No – we would design something smaller, faster, cheaper, that gets us from point A to point B.
Uses a vivid, everyday analogy to illustrate why AI solutions must be appropriately scaled to the problem context, challenging the assumption that bigger models are always better.
Prompted other panelists (e.g., Aisha and Illango) to discuss concrete constraints like limited infrastructure and to emphasize designing for low‑resource settings.
Speaker: Zameer Brey
In Africa we have only 37 weather radar stations compared to 300 in North America/Europe. To provide accurate forecasts we had to innovate with far fewer resources.
Highlights a stark data‑infrastructure disparity and demonstrates how small AI can be engineered to overcome such gaps, reinforcing the panel’s theme of resource‑constrained innovation.
Shifted the conversation toward concrete technical challenges (data scarcity) and led to deeper discussion about language data collection and open‑weight models.
Speaker: Aisha Walcott‑Bryant
A community health worker missed a case of severe gestational hypertension because she lacked a small AI model on her low‑cost smartphone; with such a model the outcome could have been very different.
Provides a powerful, human‑centered story that illustrates the life‑saving potential of small, offline AI, moving the debate from abstract benefits to tangible health outcomes.
Triggered a focus on reliability and safety, prompting Zameer later to discuss ‘verifiable AI’ and influencing others (e.g., Antoine) to stress human‑in‑the‑loop decision making.
Speaker: Zameer Brey
We need to move from black‑box to glass‑box AI – models whose logic can be audited and verified, especially when zero‑error performance is required for critical decisions.
Introduces the concept of verifiable AI, challenging the prevailing acceptance of opaque models and raising the bar for accountability in low‑resource deployments.
Deepened the technical discussion, leading Wassim to talk about safety alignment in low‑resource languages and prompting audience concerns about model hallucinations.
Speaker: Zameer Brey
Low‑resource languages suffer from three major gaps: lack of training data, lack of benchmarks, and safety alignment mostly done in English. We are launching initiatives like Lingua Africa to fund data collection and domain‑specific fine‑tuning.
Systematically outlines the structural challenges of multilingual AI and presents concrete, funded initiatives, moving the conversation from problem identification to actionable solutions.
Spurred follow‑up questions about domain‑specific data collection, influenced Illango’s remarks on scaling pilots, and set the stage for audience queries about open‑weight models.
Speaker: Wassim Hamidouche
Small AI is not inferior or second‑class; it can fast‑track development outcomes, and the key is to replicate proven pilots at scale with the right KPIs.
Directly counters a common perception that smaller models are less capable, emphasizing scalability and impact measurement, which reframes the discussion toward implementation strategy.
Guided the panel toward talking about replication across regions (e.g., projects in UP, Maharashtra) and reinforced the importance of trust and reliability raised earlier.
Speaker: Illango Patchamuthu
In healthcare, small AI models already power radiology, dermatology, and ophthalmology analyses on edge devices; they provide information, not decisions, preserving the human‑in‑the‑loop model.
Shows that small AI is already mainstream in a high‑stakes domain, illustrating practical deployment and the necessity of human oversight, which adds nuance to the “small vs. large” debate.
Prompted further discussion on offline capability, data efficiency, and the balance between algorithmic assistance and clinician judgment.
Speaker: Antoine Tesniere
Job creation is the North Star for AI in development; we must build ecosystems, digital public infrastructure, and local private‑sector capacity so AI augments rather than replaces jobs.
Broadens the conversation from technical solutions to socioeconomic outcomes, linking AI deployment to sustainable development goals and policy considerations.
Shifted the tone toward macro‑level strategy, leading to mentions of the AI Repository, skilling initiatives, and the need for trustworthy, reliable models to maintain community confidence.
Speaker: Illango Patchamuthu
Healthy competition among platforms is not a zero‑sum game; relevance to the end‑user context matters more than who ‘wins’ the AI wars.
Addresses a provocative audience question with a perspective that reframes competition as collaborative innovation, reinforcing the panel’s inclusive ethos.
Closed the discussion on a unifying note, encouraging continued collaboration across Google, Microsoft, World Bank, and other stakeholders.
Speaker: Alpan Rawal
Overall Assessment

The discussion was anchored by Alpan’s definition of small AI, which established a shared lens for all participants. Key turning points—Zameer’s traffic analogy, Aisha’s radar‑station disparity, the gestational‑hypertension story, and Wassim’s breakdown of low‑resource language challenges—each introduced new dimensions (contextual relevance, data scarcity, real‑world impact, and multilingual barriers) that redirected the conversation toward concrete technical and policy solutions. Illango’s emphasis on scalability, replication, and job creation broadened the scope from technology to development outcomes, while Antoine’s examples of existing edge‑AI in healthcare grounded the debate in current practice. Collectively, these insightful comments moved the panel from abstract notions of ‘small AI’ to actionable strategies, highlighting the necessity of data efficiency, verifiability, local partnership, and ecosystem building to achieve meaningful social impact.

Follow-up Questions
How can we develop verifiable (glass‑box) AI models with near‑zero error for critical health applications?
Ensuring reliability and auditability is crucial for small AI tools used by community health workers, where mistakes can be fatal.
Speaker: Zameer Brey
What strategies should practitioners of small‑language or domain‑specific models adopt to improve performance and safety?
Guidance is needed on selecting base multilingual models, data collection (monolingual, bilingual, translation), and safety alignment for low‑resource languages.
Speaker: Wassim Hamidouche
What are effective strategies for scaling pilot small‑AI projects to larger populations while maintaining impact?
Pilots often lose momentum; defining KPIs and replication pathways is essential for broader development outcomes.
Speaker: Illango Patchamuthu
Do small AI models fail more unpredictably than human clinicians in healthcare diagnostics?
Understanding comparative failure modes informs safe integration of AI into clinical workflows.
Speaker: Alpan Rawal (directed to panel)
How can AI capacity be increased for youth in the agricultural sector of rural India to drive economic inclusion and climate‑resilient practices?
Targeted AI education and tools for young farmers can boost productivity and climate adaptation.
Speaker: Audience (Irish Kumar) – addressed by Illango Patchamuthu
What are the technical and size implications of using open‑source, open‑weight LLMs for domain‑specific, low‑resource language models?
Understanding model selection, tokenization, and data augmentation is key for practical deployment.
Speaker: Audience (Selena) – addressed by Wassim Hamidouche

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Capacity Building in Digital Health

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel discussed how digital health and artificial intelligence can reshape healthcare workforce capacity across India and globally, emphasizing the need for mindset change alongside technology adoption [9][4-8]. Dr. Rajiv highlighted that community pharmacy has lagged due to social structures, but pharmacists could play a pivotal role throughout the retail and supply chain if professional attitudes shift [4-8]. Dr. Sarvajit Kaur explained that the Indian Nursing Council has embedded AI and digital health into the BSc nursing curriculum since 2021, making five simulation labs mandatory and providing VR and mannequin equipment to build competencies [11-14]. To address limited clinical exposure, the council introduced computer labs with a one-computer-per-five-students rule, trained about 2,000 faculty on simulator use, and established two national reference simulation centers [17-20][26][24-25]. The regulator also links digital competency to continuing education, tying 150 CNE hours to licence renewal and offering a six-month professional digital nursing course, while developing an online registration system for nurses [29-33][31-34].


Dr. Suresh Yadav warned that the global shortage of healthcare workers amounts to roughly 10-12 million jobs and costs about 15 % of world GDP, a gap exacerbated by climate-health impacts [53-55][57-58]. He argued that AI-driven solutions, such as health-ERP systems, could enable a single clinician to serve many more patients and reduce fragmentation in India’s siloed health ecosystem [78-86]. Yadav also described tele-health platforms that allow doctors to consult across borders, suggesting India could connect its 1.5 billion residents and the diaspora with global expertise [88-92].


Speaker 1 stressed that technology companies must design AI tools that scale in complexity to match varying digital maturity of hospitals, citing their EISU platform that adapts from basic monitoring to advanced decision support [108-118][119-121]. He called for health-tech firms to co-create curricula with institutions like the Academy of Digital Health Sciences to embed hands-on digital skills in future health workers [122-124].


Anish introduced the concept of “innovation pipeline management” for governments, proposing that policymakers be trained to re-imagine solutions, illustrated by an AI-based TB detection tool that increased case finding by 25 % [153-172][173-176]. He suggested a stage-gate process similar to DARPA’s, where ideas are tested, validated, and then scaled by policymakers [179-184]. Dr. Rajiv noted that curriculum regulations set minimum standards but allow institutions to add innovative subjects such as programming, and cited remote-surgery training as an example of rapid upskilling for older practitioners [128-138][208-212]. Concluding the session, Dr. Gupta announced the launch of a Global AI Academy to train health professionals, underscoring that changing mindsets, not just platforms, is essential for widespread digital health adoption [226-234].


Keypoints


Major discussion points


Mind-set change is the primary barrier to adopting AI and digital health across the health-care workforce.


Dr. Rajiv stresses that “the biggest possibility… is for pharmacists… the change is happening but it will take more time because it’s a professional and mindset change” [6-8]; Dr. Gupta echoes this, calling it “more about mindset change than just technology” [9]; Dr. Sarvajit adds that “this has to be a change of mindset” when introducing expensive simulators [22-24].


Regulators are embedding digital health and AI into nursing education and continuing professional development.


The Indian Nursing Council revised the BSc curriculum in 2021 to build digital competencies, made five simulation labs mandatory, and equipped labs with VR and mannequins [11-13]; it also set faculty-training programmes (≈2,000 faculty) and linked digital-health courses to CNE hours and registration renewal [24-27][31-33].


Digital and AI solutions are presented as the fastest way to close the global health-workforce shortage and to overcome fragmented health-system silos.


Dr. Suresh Yadav quantifies the shortage cost (≈15 % of global GDP) and links it to climate-health challenges [50-58]; he then proposes “low-hanging fruit… digital solutions” and AI-driven “one doctor serve 10 people” models to multiply workforce capacity [77-81]; remote-surgery up-skilling is also cited as a concrete example of rapid capability shift [208-212].


Policy makers and pricing models need a new “innovation-pipeline” approach to fund and adopt digital health at scale.


Anish argues that politicians must be educated on how new tools reshape outcomes, proposing a DARPA-style stage-gate system for testing and scaling innovations [153-162][176-184]; Dr. Gupta raises the practical issue of pricing digital-health products for the Indian market [191].


Technology companies and entrepreneurs must co-design scalable, complexity-adaptive solutions and help train the next-generation workforce.


Speaker 1 outlines a design principle: products should “scale in complexity” to match an institution’s digital maturity, citing their EISU platform that ranges from basic vitals to advanced decision support [112-118][119-124]; Dr. Gupta later announces the launch of a Global AI Academy to institutionalise such capacity-building [226-233].


Overall purpose / goal of the discussion


The panel aimed to diagnose why India’s health-care workforce (pharmacists, nurses, regulators, and senior clinicians) is lagging in AI adoption, and to chart a coordinated roadmap that combines curriculum reform, regulatory incentives, industry-driven technology design, and policy-level innovation pipelines to build a scalable, digitally-enabled health-care ecosystem both nationally and globally.


Overall tone and its evolution


Opening (0:00-2:00): Cautiously analytical – participants identify structural barriers (mind-set, remuneration, social structure) and acknowledge the need for change.


Mid-session (2:00-13:00): Shifts to an optimistic, solution-focused tone as regulators describe concrete curriculum changes and Dr. Yadav paints a visionary picture of AI-driven capacity expansion.


Later segment (13:00-25:00): Becomes more pragmatic and slightly urgent, discussing concrete implementation challenges (faculty gaps, pricing, political education) and proposing concrete frameworks (innovation-pipeline, simulation centres).


Closing (25:00-34:38): Returns to an enthusiastic, forward-looking tone, highlighted by the launch of the Global AI Academy and repeated affirmations that “it’s never about the platform, it’s about the mindset,” ending on a celebratory note.


Overall, the conversation moves from problem-identification through strategic proposals to a rallying call for collective action.


Speakers

Dr. Rajiv


– Title: Dr.


– Role: Discusses pharmacy education, community pharmacy, and regulatory aspects of the pharmaceutical sector.


– Area of expertise: Pharmacy, pharmaceutical education, health workforce development.


Dr. Gupta


– Title: Dr. (Rajendra Gupta)


– Role: Chair of the Dynamic Coalition on Digital Health; Chair of the Commonwealth AI Consortium for Capacity Building across the Commonwealth.


– Area of expertise: Digital health, AI policy, health technology leadership. [S21]


Dr. Sarvajit Kaur


– Title: Dr.


– Role: Secretary of the Indian Nursing Council, representing 2.2 million nurses.


– Area of expertise: Nursing regulation, digital health integration in nursing education. [S4]


Dr. Suresh Yadav


– Title: Dr.


– Role: Executive Director, Commonwealth Secretariat; former advisor to the President of India; works on AI and health policy.


– Area of expertise: AI, digital health, global health policy, Commonwealth initiatives. [S11][S12]


Dr. Freddy


– Title: Dr.


– Role: Faculty member concerned with AI training for senior educators.


– Area of expertise: Medical education, AI adoption in academia.


Anish


– Title: –


– Role: Expert in digital health, involved in the Digital Health Parliament and global leadership initiatives.


– Area of expertise: Digital health innovation, policy, technology entrepreneurship. [S23]


Speaker 1


– Title: –


– Role: Technology entrepreneur discussing DTX and capacity building for health-tech startups.


– Area of expertise: Health-technology entrepreneurship, AI-driven health solutions.


Speaker 2


– Title: –


– Role: Audience participant/entrepreneur asking about mental-health platforms and pricing strategies for India.


– Area of expertise: Digital-health product scaling, pricing strategy.


Speaker 3


– Title: –


– Role: Participant mentioning a consortium of innovative healthcare universities.


– Area of expertise: Healthcare-education collaboration, university consortia.


Additional speakers:


(none)


Full session reportComprehensive analysis and detailed insights


The panel opened with Dr Rajiv highlighting that the chief obstacle to embedding artificial intelligence (AI) and digital health in India’s health-care workforce is a pervasive mind-set barrier, not a lack of technology [6-8]. He explained that community pharmacy has lagged because social structures limit pharmacists’ ability to serve the “last-mile” of the value chain [4-8]; overcoming this gap, he argued, requires strong change-management and a shift in professional attitudes rather than merely new tools. Dr Rajiv also noted that the Pharmacy Council of India (PCI) sets only minimum curriculum standards, allowing institutions to add innovative subjects such as AI, innovation or management [133-141].


Dr Gupta reinforced this view, stating that the challenge is “more about mindset change than just technology” [9] and later responding to a question from Dr Freddy by observing that “age is not a thing, it’s a mindset thing” [219-220].


Dr Sarvajit Kaur described how the Indian Nursing Council (INC) has embedded AI and digital health into the BSc nursing curriculum since 2021, making five simulation labs mandatory and equipping them with VR, high-fidelity mannequins and other tools [11-14]. To address limited clinical exposure, the INC instituted a one-computer-per-five-students rule and set up computer labs across nursing schools [17-20]. Two national reference simulation centres (Gurgaon and Bhagalkot) were created, and around 2,000 faculty members were trained on simulator use [24-27][26]. Continuing professional development is linked to digital competence: 150 CNE hours are now required for licence renewal, a six-month professional digital nursing course has been launched, and an online registration system integrates these opportunities [29-34]. The Digital Health Academy is being leveraged to develop a longer-duration (one- to two-year) specialised programme for health-tech up-skilling [29-34].


Dr Suresh Yadav quantified the global health-workforce shortage (≈10-12 million jobs) and its economic impact (≈15 % of global GDP, about $120 trillion) [50-55]. He linked the shortage to climate-health challenges and highlighted the fragmentation of health systems in the U.K. and India [57-58]. Yadav presented AI-enabled health-ERP systems as a “low-hanging fruit” that could allow a single clinician to serve ten patients, thereby reducing fragmentation and expanding capacity [77-81][78-86]. He expressed confidence that the Government of India can drive this transformation [78-86].


Speaker 1 (technology entrepreneur) introduced the design principle of “scalable complexity,” illustrating it with the EISU platform that can evolve from basic remote-vital monitoring to advanced clinical decision support as an institution’s digital maturity grows [112-124]. He called on health-tech firms to co-design curricula with bodies such as the Academy of Digital Health Sciences, embedding hands-on digital skills in future health workers [122-124].


When pricing of digital-health products for the Indian market was raised, Speaker 2 noted that successful U.S. models have struggled to translate to India and asked for guidance on affordable scaling [187-190]. Dr Gupta deferred to a previous GDHS session on pricing, indicating that a detailed answer was not provided in the current forum [191].


Returning to curriculum reform, Dr Sarvajit warned that formal curriculum changes occur only once a decade, making CME/CNE mechanisms essential for up-skilling the existing four-million-strong nursing workforce [126-130]; the linkage between continuing education and digital competence is reinforced by the earlier cited nursing reforms [29-34].


Dr Rajiv then explained that drug inspectors and regulators are being up-skilled on modern medical devices and AI-enabled tools, citing remote-surgery training as an example of legacy clinicians acquiring new competencies [208-212]; broader regulatory up-skilling is ongoing [215-218].


Anish proposed an “innovation-pipeline management” model for governments, modelled on DARPA’s stage-gate process: define the problem (e.g., TB under-diagnosis), fund ambitious AI solutions, test them through successive gates, validate successful pilots, and scale via policy [153-162][178-184].


During the audience Q&A, Speaker 1 observed a surplus of health-tech ideators but a shortage of executors, prompting Dr Rajiv to reiterate that institutions can add innovative subjects (programming, AI, management) beyond PCI minimums [128-141]. After a question from Dr Freddy, Dr Gupta emphasized that “it’s never about the platform, it’s about the mindset,” reinforcing the panel’s central theme.


In the closing minutes, Dr Gupta announced the launch of the Global AI Academy, positioning it as a cross-disciplinary AI training platform and urging immediate action to embed AI literacy across the health ecosystem [226-234].


In sum, the panel agreed that unlocking AI’s potential in Indian health-care hinges on coordinated mindset shifts, continuous up-skilling, regulatory flexibility, and scalable, ecosystem-oriented technology design.


Session transcript: Complete transcript of the session
Dr. Rajiv

Just by choice, a very small fraction would probably take it up by choice. Still, people want to do jobs in manufacturing or R&D in the pharma companies. So that’s a big factor which we have to solve, and it ultimately comes down to the remuneration people get, the future potential of the profession, and all that. Community pharmacy, in reality, has not picked up in this country because of the social structure we have. Otherwise, for capacity building in anything to do with healthcare, these pharmacists, community pharmacists, have to play a very strong role. If you see doctors, nurses, and other health technicians, you will find them concentrated: they are concentrated in hospitals. But in society, if you see the spread, the most…

Basically, the biggest possibility for any profession in healthcare is for pharmacists: through the whole retail chain, distribution, and supply-chain management, they are the people who can actually contribute up to the last mile of the value chain. So this needs strong change management. The change is happening, but I think it will take some more time, because it is a professional, mindset, and thinking change for pharmacists.

Dr. Gupta

Thank you so much. I think that is a very important point: it’s more about mindset change than just technology. Dr Sarvajit Kaur, we are very fortunate to have you with us. As the Secretary of the Indian Nursing Council, you represent 2.2 million nurses, probably three million if we account for every registration, which is like 10 percent of the world’s nurses. How are nurses coping with the changes in technology with regard to healthcare, and what are you doing at INC?

Dr. Sarvajit Kaur

Thank you, Dr. Gupta, for this question and for this opportunity to be here on this esteemed panel. So, to answer your question from the regulatory point of view, we have tried to integrate AI and digital health into the basic nursing curriculum. We had a change of the BSc nursing curriculum in 2021, and we started by putting the emphasis on building competencies through digital health and AI. So five simulation labs have now become mandatory. We have given the list of lab equipment, the mannequins, VR, etc., that can be used to build those competencies, because we are also seeing that the clinical facilities out there for nursing students to build up those competencies are becoming limited.

We have almost 2.5 lakh nursing students passing out for GNM and BSc, both getting registered as registered nurses and registered midwives. So we have started from scratch, if I can say so. We started with computer education: we have given guidelines that for every five students there should be one computer, and we have put computer labs right out there. And we have also worked towards faculty preparedness, so that there is complete adoption. Like the panelists brought out, this has to be a change of mindset: even if you have this expensive equipment out there, how do you use it and not just keep it in the cupboards, safe as inventory articles?

So we have started with two national reference simulation centers, one in Gurgaon and the other one opened just two months back in the south, in Bhagalkot. And we started with faculty preparedness: for the Gurgaon NRSC, we have trained around 2,000 faculty on how to use these simulators for each and every nursing student. So what we as a regulatory body are looking for is for each and every nursing student to embrace digital technology as she works to become a nurse, to build up her competencies. And even for in-service nurses, we are linking it up. As you’re aware, with a lot of push from your side, we’ve had this six-month professional digital nursing course, which has a lot of takers in nursing who are wanting to do it.

But I think we need many more courses like that. We are linking it to CNE hours. We have also brought out our online registration system for the nurses, which again we are trying to link with all these kinds of opportunities for them, so more nurses benefit from it. And abroad, if you see, they now have these chief technical nurses who are trying to resolve issues like staffing, fall prevention, and policies to improve nursing. So I think we here in India also need to do a lot in terms of policies to empower every ANM who is working in the rural areas, every community health officer who is working in the Arogya Mandirs, and every nurse who wants to do better for her patients in the super-specialized hospitals. There is a lot more to be done.

Thank you.

Dr. Gupta

Thank you so much. It’s very exciting to see how you have moved to bring digital courses to nurses, and the uptake for that; I also keep hearing very positive feedback on this opportunity for nurses. Thank you so much. Now I move to Dr. Suresh Yadav, whom I’ve known as someone who not just ideates the future but creates the future: working with the President of India, whether at the World Bank or in the Commonwealth. Even in the Commonwealth, years back, you put the agenda of AI as a high priority. What is your work and role today in the Commonwealth’s vision for the 56 member nations, and more so for the small island states?

Dr. Suresh Yadav

Thank you. Thank you, Professor Gupta, and thank you for your leadership on this very important stage. You have been working on digital health, the Digital Health Parliament, and global leadership when the world was not thinking about it. So it’s a great, great contribution by you to the system, because digital took off in a frenzy only during COVID and post-COVID; before that, it was just digital e-government systems around the world. Now, before I say anything, I’ll make a very general comment at the global level and then touch a little on the ground level. What did it cost the global ecosystem? Anish described the financial crisis, which the global south at that point in time called a crisis triggered by the global north.

I mean, without naming that particular country. And he described how beautifully President Obama steered the United States out of that very complicated and complex situation. Now, look at the shortages of healthcare professionals and what they cost the global economy. The shortage itself is one part, a number: maybe somewhere it is 100,000 people short, somewhere a larger number. But what are the global implications? The economic cost of these shortages of healthcare workers, which across all categories total around 10 to 12 million, is almost 15 % of global GDP. And you can imagine what 15 % of a $120 trillion global economy is. So it’s a huge, huge cost, just because we don’t have people. It has a multiplier effect, and it’s leading to cascading effects on various other segments of society.

The other thing which is happening is that the healthcare workers are not getting paid. Then there is the global temperature rise: if you look at climate and health, there is a recent Lancet report which brings out very beautifully how climate is driving health and leading to a different kind of challenging situation. But on the other side, I also wanted to say how the health system is itself contributing to climate change, because it is one of the largest emitters on the planet. Now, given this situation, we know the shortages of healthcare professionals are so large, and the nursing shortage is so acute (Anish will know this better than I do) that the US has a special visa for nurses.

You may have a computer science degree but may not get a visa; but if you have a nursing experience certificate, you get a visa. So that is the level of the challenge the world is facing. Now, we know this is a challenge. What do we do? How do we do it? How do we move forward? Before I go to that, the other challenge is the aging population. If you look at Japan, if you look at the Nordic countries, the aging population is rising, and there are not many people to take care of them. Even if I have to get a healthcare worker in my village in eastern Uttar Pradesh, it’s so difficult.

Even if you want to pay the money, there are no people to serve you. So what do you do? One is, of course, the obvious solution: you train more people, because there are a lot of people looking for jobs. It’s not that people are not there. So how do you ramp up that capacity? I know that in India, for creating a nursing school, you need to have hospitals, and there are so many challenges in spite of setting up a lot of hospitals in the country. So one low-hanging fruit is digital solutions, and on top of that digital solution now is the AI solution. Can I make one doctor serve 10 people?

Can I make one healthcare worker serve 5 times, 10 times more people using technology: management of the system using a health ERP, like multinational enterprises are doing? The whole healthcare system is fragmented; it should be an ecosystem. The one good thing about the U.S. is that the doctor, the pharmacy, everybody is connected, so at least that fragmentation is not there in the U.S. system. But that fragmentation still exists in the U.K. system, and in India that silo is very much there. So if, using this health ERP on the lines of a corporate ERP, we are able to fix it, I think that will be a transformative approach, creating a true ecosystem approach where the health workers, the doctors, the nurses, and those who want to volunteer and contribute will all be connected.

So that is one quick-fix solution I see. The other, which was my pet project, particularly came out of post-COVID: there are doctors who want to do more, but they have challenges. So how do you connect them globally, like doctors without borders? How can Indian doctors serve a patient in Kenya, rather than the Kenyan or Tanzanian patient travelling to India? Or, if they have to travel, they should travel only for a small portion rather than a big stretch of two or three months. These technologies offer that: you can have your scans done remotely, upload and send them to a doctor, and have all the diagnostics, except the procedure for which you are required to be there. So it’s not only a country health ecosystem but also a global health ecosystem which can be made available using these technologies. And I see that, using that approach, any of the best hospitals or doctors in the United States can be accessible to a patient in India, or vice versa, because a lot of Indians want to consult a doctor in India. My wife was in the US for 10 years and still believes in the Indian doctor and wants to have her medicine from India. So there are some 20 million persons of Indian origin around the world.

So India can connect 1.5 billion people within the country and 20 million people abroad who still believe, “I should have Indian medicine, I should have Indian doctors.” So this is a huge, huge opportunity for India to take the leadership, because you have the manpower, you have a lot of young people entering the job market looking for jobs, and you have the digital technology power. The only question is putting these together and making the nursing institutes, the hospital administrations, and the startups all part of a thriving ecosystem. I think if we can do it, we will really be recreating, or reimagining, a healthcare system not only for India but for the entire world.

And this 15 % GDP cost, this global temperature rise, this climate-health nexus which I can talk about: addressing these will be a great enablement for the entire world. And I think that the Government of India will be able to do this, so that there will be universal health access cutting across boundaries: not only within your own boundaries, but access to the rest of the world’s medicines, supplies, doctors, and procedures. So I’ll stop here on this positive note, and over to you. Thank you.

Dr. Gupta

Thank you. So this is very interesting. And you know, I always like optimism about technology; even if you’re not optimistic, technology will move fast. Coming to you: you are an entrepreneur in technology. While Dr. Rajiv approves DTx, you make DTx. You have made amazing

Speaker 1

AI-driven technologies. What’s your take on capacity building? Do we have enough capacity to have more entrepreneurs like you? We will have ideators like you, but not entrepreneurs, because we don’t have executors. How do you define this?

Thank you, Rajiv ji. While, of course, I will be speaking on that part of technology as well, how we can create entrepreneurs, I think, more to the point that my fellow panelists talked about, technology companies have a significant role when it comes to capacity building, because they influence how the current workforce is practicing, and they also influence how the next generation of the workforce will get trained. So in that way we have a dual responsibility.

And in that sense, I think there’s a design principle that every technology company, or any budding entrepreneur, should keep in mind: the way they design their AI or tech solutions should be scalable, not in terms of volume, but scalable in terms of complexity. Because if you’re building something and providing it to the healthcare industry, then, particularly in a country like India, you have a diverse spectrum of digital maturity across various institutes: some hospitals might be digitally native, some might be completely analog. So you have to have a product that hand-holds the healthcare workers through the digital transformation journey.

So the product is able to scale in complexity as the institutes scale in readiness. That’s how we have been building products. As an example, our EISU solution’s functionality ranges from basic remote vital monitoring to more complex smart alerts and advanced clinical decision support, based on the readiness of the clinicians. And that’s something every institution needs. So I think that’s something techpreneurs should keep in mind: don’t impose AI or technology; rather, the technology should adapt to the capacity, or rather, it should be able to hand-hold the capacity and pull it up. One more point I wanted to add: just as technologists have been creating, or co-creating, the next generation of the workforce when it comes to programmers and innovators, I feel health-tech companies similarly have a responsibility in co-creating the next generation of healthcare workers.

So, with academies like the Academy of Digital Health Sciences, I think technology companies, specifically health-tech companies, should come forward and co-design some hands-on courses as well, like the professional nursing course ma’am mentioned, so that we’re able to expose the students early on to

Dr. Gupta

So I’ll have a couple of questions for the experts before we move to audience questions. The first is for you, Dr Sarvajit, because you’re a regulator. You made an important point that you want change; I mean, you have already done that by incorporating digital health as part of the education. You know, when I was writing the education policy, my biggest worry was that technology moves at a pace at which you can’t change your curriculum every now and then, because by the time you go to the academic council and governing board, new technology has come. So is there a way you’re looking at this? I think you talked about CME: is that the way we should look at training all professions, adding CMEs rather than changing the curriculum every now and then? Because that’s going to be really tough.

Dr. Sarvajit Kaur

Curriculum changes normally occur, say, once in a decade, and even that is a long process. When we brought out the BSc nursing change, we took almost three years to bring it about, with the whole process that goes into it, including the public amendments. So yes, at that point of time, whatever was best for the nursing students, we tried to do. But at the same time, we also need to understand that there are some 40 lakh, four million, nurses already out there in the country, in different states, whose competencies also need to be built, because they are the ones who are working, be it in rural areas or in the specialized hospitals. And for this, as a regulator, we push for having simulation centers; what we are saying is that there should be one in every district. Some states have already started taking this up: we had Nira Maya in Uttar Pradesh and Union in Bihar, where they are building up these competency centers, integrating digital technology with them, certifying them, and linking them to the CNE, so the nurse carries it forward with her. There are incentives there for the nurses to come forward for these programs and to better integrate this into the health systems. A lot needs to be done in this. And, as you’re also aware, with the Digital Health Academy we are now working towards having a one-year or maybe a two-year program; we are still working that out. So when this also comes as a specialization, more takers will be there, and I think it will again disseminate down. It’s a mammoth task, no doubt.

Dr. Gupta

Dr. Rajiv, I wanted to ask you on that point only: you have drug inspectors across the country who were trained in the conventional world. What are you doing for them to understand this? And of course, I want your point on pharmacists too.

Dr. Rajiv

So, yeah, before moving to that, I just had one comment on this curriculum-change point. This point actually comes up again and again in pharma education also. Colleges and teachers say that we are not allowed to change, that it is governed by PCI. But I always make one point: see, PCI, or any body which actually sets the courses, gives you the minimum which should happen. They don’t say don’t go beyond this. So you have it all open at the top: you keep this minimum, plus you go on adding whatever you want. So if pharma is not having a course on innovation, or management, or any modern technology such as computer programming, PCI doesn’t say that you can’t do it. PCI says that you keep the pharma papers over and above this. If I want to keep an innovation paper, I’m free to do that.

Dr. Gupta

Rajiv, I’m sure this message will go viral, but the problem is how many people read it in that manner. You know, when we started courses, we put in a line: “The contents of this course will change based on developments in the field.” And we had a really tough time arguing that this can be in the prospectus; I said we have to do that, the field is changing. And that brings me to Anish, because the problem always comes back to: what do you do about governments? When you’re talking of technology, we can have regulators change it, we can have councils change it, but how do politicians get changed? Do we have a crash course for them?

Anish

Well, that’s a spicy question, but let me handle it. In the U.S., it was funny when you saw the senators asking Mark Zuckerberg questions that were not very smart, so there was obviously a push to get education about what the technology means. But let me shift that question in a different way. A lot of this assumes that the job to be done is the same, but you’ve introduced new tools, so you train people on how to do the same job with the new tools. The politician or the policymaker is often focused on the outcome or the objective, the problem to be solved.

And it may be that we spent 10 years doing it this way, we’ve funded it and organized it, and you should be educated on how technology will influence it. But at some point, there’ll be a flip: hey, I’ve got an entirely new way of achieving that outcome, so why don’t we reorganize the whole thing to take advantage of new capacity that wasn’t possible but for the technology? Earlier in this conference, we heard from Sunil Wadwani from the Wadwani Foundation. He talked about tuberculosis deaths, half a million deaths. And he said a portion of those deaths come from individuals who are detected too late, and others who dropped off their medications too early.

So you’ve got these sorts of error rates on both sides. And you have a nurse or someone in the community, ASHA workers, someone helping and engaging. So you could think about politicians saying: okay, do I have to fund a new program to do this with technology? Or it turns out they’ve come up with an entirely new AI-based detection system and found 25 % more tuberculosis cases, not because they’ve educated people, but because they’ve introduced a whole new concept: you can change the diagnosis model through voice. You cough into a phone and it tells you; I’m paraphrasing what I heard earlier today. So this is the moment where we need more flexibility in the political dialogue, and some say this is the zero-based budgeting that has changed the way we fund our government.

There are lots of policy debates. But if you start with the principle that there’s a problem to be solved (we have too many people dying from tuberculosis, too early), then you can say: look, we’ve got programming and funding and staff and people that do things this way, but now a new technology shows up that allows me to think of this in an entirely new way, and makes it possible to implement strategies that exist only because it exists. That is a whole level of training that’s not training in “here’s how the buttons work”; it is connecting the dots on what the capacity is to fundamentally reimagine the way to go about this. And so, not to go back to capacity building, but I have coined this term “innovation pipeline management” in government.

DARPA, very famously (it’s our research arm in the U.S. government), sets ambitious but achievable targets and then lets professors, entrepreneurs, and innovators come up with ideas. So you want to have a stage gate to test ideas; you want to test more ideas; then some of them graduate to the next stage; then you want to validate those successes; and then you want policymakers to scale the ideas that work. And so your question was meant to be sort of funny, that politicians need to be trained, but there’s also some seriousness to it, which is that they can also be the vehicle by which we fundamentally reimagine the way to go about it, and then that brings a whole new cycle.

So that’s the positive side of

Dr. Gupta

Thank you so much, Anish. And now let’s get to the audience questions. Any audience questions? Yeah, you first.

Speaker 2

Hi Anish, thank you for your inputs. I’m someone who has been an entrepreneur, also coming from a Catholic background, researching brain and AI, and I have spent a lot of time in the US, the last four years across the US and India. I’ll be specific with the question because we have less time here. In the context of what you were saying about the need for digital platforms: if somebody has come up with a solution for mental health for the professionals themselves, like the nurses and the doctors, what would be a good platform? Because right now it’s like you educate them on the need for it, and then the skills and the outcomes get measured. What would be a better way to scale this? Because the need is there, we see it; we work with kids doing this, and we see the same need for professionals as well, and it’s contextualized to the Indian context. What would be a good platform to take this to scale, when such needs exist across all professions as well? I also have a separate question on pricing in India. Two ventures that I’ve been part of have scaled pretty well in the US, one of them reaching 100 million in revenue, the other taking the public route in the US, but they failed miserably on pricing here. Even after spending two years up front here, we couldn’t get the same product to work at the pricing here. So what are your suggestions on how to make pricing work for India, when you have the intent to solve for India as well? Those are my two questions.

Dr. Gupta

Because if you go back and look at the GDHS session on pricing of digital health, you will get a detailed answer from those who have built it globally; that will help you solve that problem. And on the other one, does someone want to take it?

Dr. Sarvajit Kaur

Answering your question from the regulatory point of view: for the nurses, we have linked 150 CNE hours to the renewal of their registration every five years. So now nurses have to mandatorily do these courses; only then will their licence get renewed. So there is a lot of need for these kinds of courses. There are some platforms where these courses are put up free of cost, INC being one of them, then iGOT and SWAYAM. So I’m sure there are a lot of opportunities for you to take up anything that works for the nurses; the technical experts have to take a look at it to see if it’s okay, and then we can take it.

Speaker 2

Right now it’s developed by doctors, for doctors, but it can certainly be… I’d love to take inputs from you on where to take it forward. Thank you.

Dr. Gupta

After this, Dr. Freddy. Yeah.

Dr. Freddy

Thank you very much. My very simple question is this: I was born before technology, and have suddenly been bombarded with it over the last four, five years. And the times are like this, the fate is this: I come from the best colleges, having been a faculty member, and have now joined new-era medical colleges which need faculty in medicine, and suddenly the institution is wholly into AI. Now, people like me worked with MCI, and the curriculum has already been changed, but believe me, nothing has changed, because I actually had an audit as well. My question is: how are you ensuring that, in future, the people who are supposed to implement AI, and the people who are supposed to train these Gen Z people, who themselves have nothing in between? There’s a dilemma between them. Do you have any solution for that, so that at least the people being trained now are not being trained by people who are, in inverted commas, not trained? That’s my worry.

Dr. Gupta

So I will ask around, one minute each.

Speaker 1

Sure, yes. So I think there are still people, few and far between, who can be those ambassadors for change. It’s just a matter of giving them the tools, being able to get them onto the platform of a university or the Digital Health Sciences Academy, so that they’re able to train or build capacity at scale. That’s the only way; otherwise, we don’t have enough people to do it one on one or in a physical capacity. We have to use virtual tools even for that. And at the same time, I think there shouldn’t be a bar of, you know, a certain experience or number of years of teaching for these kinds of courses.

So this has to be age agnostic, I feel.

Dr. Gupta

Rajiv, 30 seconds for you and then we have to close.

Dr. Rajiv

No, we have to close because we are running out of time; we have to launch one thing also. So I think the answer lies in the system itself. If I give just one example: remote surgeries. The doctors who were trained 30 years ago were not trained on remote surgeries; today, they are doing remote surgeries. How did they shift? So, I mean, this is something. I don’t think these trainings and capacity building should be restricted to within the profession; whosoever is suitable for those trainings should be engaged. It’s a continuous process in the regulatory system. Our inspectors and drug controllers are actually trained in the modern systems, basically in approving and reviewing these medical devices also.

That was not there when they were appointed. So everybody is getting upgraded, and there are systems for it.

Dr. Gupta

Yeah, and in our courses, we have seen that 80 % of the people are more than 20 years past MBBS; the highest is 50 years after MBBS. So I think age is not the thing; it’s a mindset thing.

Speaker 3

Rajiv, I’d also like to add something about the consortium of innovative healthcare universities.

Dr. Gupta

We have to launch this and close this session. We have a very strict timing here. And I see that clock ticking up.

Speaker 3

I understand.

Dr. Gupta

So we’re launching the Global AI Academy, which, as you will see, is about training people. You have platforms; it’s not about that. Together, yes. Here we go, something’s happening. There it is, a screen. Oh, that is coming. It’s coming and going. Yeah. So it’s never about the platform, it’s about the mindset. And start now if you have not. Thank you very much. A big round of applause.

Related Resources — Knowledge base sources related to the discussion topics (34)
Factual Notes — Claims verified against the Diplo knowledge base (3)
Confirmed — high

“The chief obstacle to embedding AI and digital health in India’s health‑care workforce is a pervasive mind‑set barrier, not a lack of technology (as stated by Dr Rajiv and reinforced by Dr Gupta).”

The knowledge base explicitly notes that “mindset change is harder to achieve than implementing technology and changing processes,” confirming that mindset, rather than technology, is seen as the primary barrier [S30].

Additional Context — medium

“Around 2,000 faculty members were trained on simulator use for nursing education (as reported by Dr Sarvajit Kaur).”

A related source emphasizes the need for a large number of trained faculty members to support digital health capacity building, underscoring the relevance of such training initiatives [S102].

Additional Context — low

“Continuing professional development is linked to digital competence, with 150 CNE hours now required for licence renewal for nurses.”

The knowledge base mentions the establishment of a model for continuing professional development based on national competence standards, providing broader context for CPD requirements in health professions [S103].

External Sources (103)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Capacity Building in Digital Health — -Dr. Sarvjeet Kaur: Secretary of the Indian Nursing Council, represents 2.2 million nurses, regulatory role in nursing e…
S5
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S6
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S7
S8
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S9
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S10
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S11
AI 2.0 Reimagining Indian education system — -Suresh Yadav- Executive Director at Commonwealth Secretariat, former advisor to President Mukherjee, expertise in finan…
S12
AI 2.0 The Future of Learning in India — Suresh Yadav, Executive Director of the Commonwealth Secretariat, argued that this moment requires complete reimagining …
S13
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-reimagining-indian-education-system — Thank you Pranavji for the presentation. Today as a panelist now we have Professor KK Agarwal sir, President South Asian…
S14
https://dig.watch/event/india-ai-impact-summit-2026/scaling-trusted-ai_-how-france-and-india-are-building-industrial-innovation-bridges — thank you once again to our moderator and to all our distinguished panelists I would now invite all the speakers to plea…
S16
https://dig.watch/event/india-ai-impact-summit-2026/science-ai-innovation_-india-japan-collaboration-showcase — Yeah. This is great. I’ll, we’ll go over to Rajiv. Rajiv Babuji and then we’ll break for questions. You know, we’ve hear…
S18
The reality of science fiction: Behind the scenes of race and technology — How do you know I’m real? I’m not real. I’m just like you. You don’t exist in this society. If you did, your people woul…
S19
Global Health Diplomacy — Andrew F. Cooper is professor, Department of Political Science, University of Waterloo and distinguished fello…
S20
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Deborah Rogers:I guess my closing remark would be that technology is a great enabler. It can actually be used to decreas…
S21
Robotics and the Medical Internet of Things /MIoT — Dr. Gupta:Thank you, Amali. I am Rajendra Gupta. I chair the Dynamic Coalition on Digital Health, and I also chair the C…
S22
Conversational AI in low income &amp; resource settings | IGF 2023 — Ashish Atreja:Dr. Gupta, it’s a pleasure to be here and thanks for having me. Greetings from California. It’s 1 a.m. her…
S23
https://dig.watch/event/india-ai-impact-summit-2026/capacity-building-in-digital-health — Thank you. Thank you, Professor Gupta, and thank you for your leadership in this very important stage. He has been worki…
S24
Keeping up with Smart Factories / DAVOS 2025 — – Padraig McDonnell- Anish Shah
S25
Announcement of New Delhi Frontier AI Commitments — -Abhishek: Role/Title: Not specified (invited as distinguished leader of organization), Area of expertise: Not specified…
S26
Responsible AI in India Leadership Ethics &amp; Global Impact — So to help me with the discussion, may I have the pleasure of inviting Dr. Satya Ramaswamy, Chief Digital and Technology…
S27
Fixing Healthcare, Digitally — Anumula recognises the need to improve healthcare access for the underprivileged, highlighting the Rajiv Arogyashree sch…
S28
Artificial Intelligence &amp; Emerging Tech — A balanced approach is required for the regulation of emerging technologies to prevent the creation of problems while so…
S29
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — I’ll go first. I think voice. It’s a common factor. I think it is horizontal, not vertical, but it’s very, very importan…
S30
Main Topic 2 –  GovTech Dynamics: Navigating Innovation and Challenges in Public Services — Central to this agenda is the belief that technological adoption should be led by a fundamental change in mindset, focus…
S31
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S32
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S33
Dynamic Coalition Collaborative Session — Dr. Gupta challenged current priority-setting in internet governance, arguing that artificial intelligence is being prio…
S34
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — I think healthcare is slightly different from a lot of other industries. I think it is highly regulated, number one. So …
S35
https://dig.watch/event/india-ai-impact-summit-2026/ai-2-0-the-future-of-learning-in-india — And mentor -mentee is always a guru -shishya context, which is very meaningful and useful. I will close this remark by s…
S36
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — A mindset issue persists which hinders the shift towards digitalization.
S37
MedTech and AI Innovations in Public Health Systems — This comment provocatively shifts blame from technology limitations to organizational culture, suggesting that the real …
S38
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Artificial intelligence (AI) has emerged as a powerful tool in healthcare, enhancing diagnosis, optimizing resource allo…
S39
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Jason Tucker: Thank you. So I wear two hats. I’m an academic, but I also work in public policy. And this is why I’m sort…
S40
Cracking the Code of Digital Health / DAVOS 2025 — The panel discussion highlighted the complex landscape of digital health and AI adoption in healthcare. While there was …
S42
DPI+H – health for all through digital public infrastructure — Critique of the ‘one model fits all’ approach and its associated costs. DPI was portrayed not just as infrastructure bu…
S43
Multistakeholder Dialogue on National Digital Health Transformation — Alain Labrique: Fantastic. Thank you, Leah. I really appreciate everyone’s partnership. and engagement this morning,…
S44
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — Development | Infrastructure | Sociocultural Instead of building complex technological solutions and expecting society …
S45
Creating Eco-friendly Policy System for Emerging Technology — Additionally, the analysis embraces a more globalised, holistic approach to learning. It backs strategies that encourage…
S46
WS #133 Better products and policies through stakeholder engagement — Richard Wingfield: you you you you you you you and rights and lead our work with technology companies on how t…
S47
Capacity Building in Digital Health — The discussion demonstrated that while challenges are substantial, tools and approaches for addressing them are increasi…
S48
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Rajendra Gupta:I think I would say that in this age where patients are more informed, if not, you know, than anyone abou…
S49
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — In conclusion, digital health technology holds immense potential for improving health systems globally. However, it is e…
S50
Digital health: Technology applications, and policy implications — Global expenditure on health continues to grow, as technological breakthroughs bring patients and doctors closer, regard…
S51
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Another area that requires attention is government policies on digital health, which currently lack focus on capacity bu…
S52
Building Capacity in Cyber Security — A high-level global cybersecurity capacity building agenda is also being called for to enhance efforts worldwide. Howeve…
S53
Multistakeholder Dialogue on National Digital Health Transformation — Kylie Shae: Thank you very much, Leah, and yes, I’m now bringing you to the cold face, you know, why are we doing all …
S54
Building a Digital Society, from Vision to Implementation — Stacey Hines, joining from Vancouver at 4 AM Kingston time, cited research from Web Summit where AI expert Gary Marcus p…
S55
Fixing Healthcare, Digitally — Additionally, satellite data is utilized to identify areas with higher population densities, environmental data, and mob…
S56
Shaping the Future AI Strategies for Jobs and Economic Development — Telemedicine and remote healthcare delivery can serve dispersed populations effectively
S57
WS #462 Bridging the Compute Divide a Global Alliance for AI — The main areas of disagreement center on: 1) Whether to prioritize infrastructure development vs. tool accessibility, 2)…
S58
Global Data Partnership Against Forced Labour: A Comprehensive Discussion Summary — The discussion shows remarkably high consensus on the core problem and general solution approach, with disagreements pri…
S59
WS #484 Innovative Regulatory Strategies to Digital Inclusion — High level of consensus with significant implications for policy direction. The agreement suggests a paradigm shift is n…
S60
Multigenerational Collaboration: Rethinking Work, Learning and Inclusion in the Digital Age — Moderate disagreement level with significant implications. While speakers agree on the importance of intergenerational c…
S61
WS #53 Leveraging the Internet in Environment and Health Resilience — Call for thinking globally and integrated in policy decisions; mention of ecosystem including public safety, emergency, …
S62
Digital technology for the sustainable development goals — In addition, there should begreater awarenessand capacity among policy-makers. This is required not only forchanging min…
S63
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 3. Contextualising Policies and Technologies: A recurring theme was the importance of tailoring policies and technologi…
S64
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Samaila Atsen Bako: Thank you so much. I hope you can hear me clearly. Yes, we do. We can hear you. Oh, awesome. That’s …
S65
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — There’s a recognized gap between technological development and policy understanding, with calls for bringing policymaker…
S66
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The discussion revealed that technical capabilities often exceed institutional readiness for AI adoption. Behavioral cha…
S67
Capacity Building in Digital Health — High level of consensus with significant implications for healthcare digital transformation. The agreement across divers…
S68
MedTech and AI Innovations in Public Health Systems — Despite promising potential, significant challenges emerged. Data quality and infrastructure represent fundamental prere…
S69
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — Digital health literacy is crucial for healthcare professionals and workers in the sector. Failing to adapt and learn di…
S70
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Digital health technology has the potential to significantly improve the efficiency and effectiveness of health systems …
S71
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — Legal and regulatory | Development | Infrastructure Legal and regulatory | Human rights | Development Legal and regula…
S72
Cracking the Code of Digital Health / DAVOS 2025 — Roy Jakobs: to focus on globally. What can you tell us about these enablers? I think, and I said it before, if you w…
S73
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — ## Challenges and Unresolved Issues ## The Global Call for Solutions: A New Initiative ## The Urgency of Action: Skill…
S74
DC-DH: Health Digital Health &amp; Selfcare – Can we replace Doctors in PHCs — Zaw Ali Khan: Thank you, Dr. Rajan, for inviting me to this session. I feel that there are certainly many use cases w…
S75
AI and robots to fix Japan’s shrinking labor force — Japan is facing a shrinking labour pool due to a declining population and an ageing workforce, which is pushing many ind…
S76
DPI+H – health for all through digital public infrastructure — Criticisms target the ‘one size fits all’ approach, flagging up the risks of increased costs and inefficiencies. Advocac…
S77
Multistakeholder Dialogue on National Digital Health Transformation — Alain Labrique: Fantastic. Thank you, Leah. I really appreciate everyone’s partnership. and engagement this morning,…
S78
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Additionally, reskilling the workforce is crucial to fully embrace new technologies. AI, for instance, has the potential…
S79
Bridging the Digital Divide for Transition to a Greener Economy — Reskilling the workforce is another important consideration highlighted in the analysis. A study by Microsoft estimates …
S80
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — Development | Infrastructure | Sociocultural Instead of building complex technological solutions and expecting society …
S81
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S82
Multilateral Intergenerational High-Level Dialogue: Youth Special Track — Despite the inspiring examples of youth-led innovation, participants identified significant structural barriers that pre…
S83
WSIS+20 Overall Review multistakeholder consultation with co-facilitators — ### Geographical and Structural Barriers This consultation demonstrated both the opportunities and challenges of inclus…
S84
Knowledge Café: Youth building the digital future – WSIS+20 Review and Beyond 2025 — These key comments fundamentally shaped the discussion by introducing three critical shifts: from technology-centered to…
S85
Knowledge Café: WSIS+20 Consultation: Towards a Vision Beyond 2025 — **Inclusion Barriers**: Structural barriers prevent marginalized communities from participating in WSIS processes. Speci…
S86
AI, Data Governance, and Innovation for Development — Sade Dada: Thank you so much, Martha. Thank you to the organized important dialogue. The sessions have been really great…
S87
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S88
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — The discussion maintained a predominantly optimistic and forward-looking tone throughout, despite acknowledging signific…
S89
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S90
Day 0 Event #174 Giganet Annual Academic Symposium – Morning session — The gap between policy principles and practical implementation is a critical challenge
S91
Launch / Award Event #52 Intelligent Society Development &amp; Governance Research — **Practical Implementation**: The discussion focused on real-world applications and concrete examples rather than theore…
S92
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Galia, one of the speakers, emphasizes the mapping exercises conducted with the OECD regarding risk assessment. This sug…
S93
Lightning Talk #215 Governance in Citizen Science Technologies — However, Soacha was notably candid about the implementation challenges, acknowledging that “it’s extremely challenging” …
S94
Closing remarks – Charting the path forward — A central theme was the need to move beyond abstract principles toward concrete implementation tools, technical standard…
S95
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S96
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S97
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S98
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S99
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S100
Panel Discussion AI in Healthcare India AI Impact Summit — “One of the big barriers is multilingual.”[1]. “Maybe use cases, and I briefly hit on this before, but I think certainly…
S101
Exploring the need for speed in deploying information and communications technology for international development and bridging the digital divide — Some semblance of thresholds has been defined in previous campaigns such as the ‘one laptop per child’ campaign (OLPC Fo…
S102
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — But I’ll tell you that we need to really work out an infrastructure. We need to work out on academic strength. We need t…
S103
New Colours of Knowledge — – MEASURE 4.3.2. Establish a model for continuing professional development based on the National Competence Standard for…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Rajiv
4 arguments · 141 words per minute · 494 words · 208 seconds
Argument 1
Emphasizes that adopting digital health requires a fundamental mindset shift among pharmacists and other health professionals, not just technology deployment.
EXPLANATION
Dr. Rajiv argues that the main obstacle to expanding community pharmacy roles is a professional and mindset change rather than the availability of technology. He stresses that pharmacists need to adapt their thinking to engage in the full retail supply chain.
EVIDENCE
He points out that pharmacists have the greatest potential to contribute across the retail chain, but this requires strong change management and a shift in professional mindset, noting that the change is happening but will take more time because it is a mindset change [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Capacity-building literature stresses that digital health adoption hinges on a cultural and mindset shift rather than mere technical tools [S4] and further underscores the primacy of mindset change in digital transformation initiatives [S30].
MAJOR DISCUSSION POINT
Mindset shift needed for pharmacists
AGREED WITH
Dr. Gupta, Dr. Freddy
Argument 2
Points out that the Pharmacy Council of India (PCI) sets minimum standards but permits adding innovative subjects such as AI and management.
EXPLANATION
Dr. Rajiv explains that while the PCI defines the minimum curriculum requirements, it does not forbid institutions from adding subjects like innovation, management, or AI. This flexibility allows pharmacy programs to go beyond the baseline.
EVIDENCE
He states that PCI provides only the minimum standards and does not prevent adding innovation or management papers, allowing colleges to include additional topics such as AI [133-141].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The PCI is described as defining only minimum curriculum requirements while allowing institutions to augment programs with innovation, management and AI topics [S4].
MAJOR DISCUSSION POINT
PCI allows curriculum expansion
AGREED WITH
Dr. Sarvajit Kaur
DISAGREED WITH
Dr. Sarvajit Kaur
Argument 3
Uses remote surgery as an example of how existing clinicians can upskill rapidly to adopt new digital procedures.
EXPLANATION
Dr. Rajiv illustrates that doctors trained decades ago can now perform remote surgeries, showing that continuous upskilling enables adoption of new technologies. He suggests that capacity building should be an ongoing regulatory process.
EVIDENCE
He cites remote surgeries as a case where doctors trained 30 years ago have shifted to new capabilities, demonstrating continuous training and upgradation within the regulatory system [208-212].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Remote surgeries are cited as a concrete illustration of clinicians trained decades ago acquiring new digital capabilities through continuous upskilling [S4].
MAJOR DISCUSSION POINT
Remote surgery exemplifies rapid upskilling
AGREED WITH
Dr. Sarvajit Kaur, Dr. Suresh Yadav, Speaker 1
Argument 4
Emphasizes the role of drug inspectors and regulators in continuous training to keep pace with emerging medical devices and AI tools.
EXPLANATION
Dr. Rajiv notes that drug inspectors and controllers are now being trained to approve and review modern medical devices and AI applications, a function that did not exist when they were first appointed. This reflects ongoing professional development within regulatory bodies.
EVIDENCE
He mentions that inspectors and drug controllers are being trained on modern medical devices and AI tools, indicating continuous upgradation of regulatory staff [216-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory staff, including drug inspectors and controllers, are reported to be undergoing ongoing training on modern medical devices and AI applications, reflecting a continuous professional development process [S4].
MAJOR DISCUSSION POINT
Regulators need ongoing AI training
Dr. Gupta
2 arguments · 122 words per minute · 848 words · 414 seconds
Argument 1
Highlights that the core barrier is mindset change rather than technology itself.
EXPLANATION
Dr. Gupta emphasizes that shifting mindsets among health workers is more critical than merely introducing new technologies. He frames the discussion as a mindset issue that underpins successful digital health adoption.
EVIDENCE
He explicitly states that the important point is it’s more about mindset change than just technology [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multiple capacity-building sources identify mindset change as the principal obstacle to digital health adoption, outweighing the mere presence of technology [S4] and [S30].
MAJOR DISCUSSION POINT
Mindset over technology
AGREED WITH
Dr. Rajiv, Dr. Freddy
Argument 2
Announces the launch of a Global AI Academy to provide cross‑disciplinary training, emphasizing mindset over platform.
EXPLANATION
Dr. Gupta declares the creation of a Global AI Academy aimed at training individuals across disciplines, stressing that success depends on mindset rather than the specific platform used. The launch is presented as a step toward building AI capacity.
EVIDENCE
He states that they are launching the Global AI Academy, noting that it’s not about the platform but about mindset, and urges immediate action [226-233].
MAJOR DISCUSSION POINT
Launch of Global AI Academy
AGREED WITH
Speaker 1, Dr. Suresh Yadav
Dr. Sarvajit Kaur
4 arguments · 171 words per minute · 973 words · 341 seconds
Argument 1
Describes regulatory actions embedding AI and digital health into the BSc nursing curriculum, mandatory simulation labs, and VR tools.
EXPLANATION
Dr. Sarvajit Kaur explains that the nursing regulator revised the BSc curriculum in 2021 to incorporate AI and digital health, making five simulation labs mandatory and providing equipment such as mannequins and VR. This aims to build digital competencies among nursing students.
EVIDENCE
She outlines the 2021 curriculum change, the emphasis on digital health and AI, and the requirement for five simulation labs equipped with mannequins and VR to develop competencies [11-14].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory revisions in 2021 introduced AI and digital health into the BSc nursing curriculum, mandating five simulation labs equipped with mannequins and VR to develop digital competencies [S4].
MAJOR DISCUSSION POINT
Curriculum integration of AI and simulation
AGREED WITH
Dr. Rajiv
Argument 2
Notes the slow pace of curriculum revision and proposes continuous CME, district‑level simulation centers, and linked CNE credits to maintain competencies.
EXPLANATION
Dr. Kaur points out that curriculum changes take years, so she advocates for ongoing CME, establishing simulation centers in every district, and tying CNE credits to license renewal to keep nurses’ skills up to date. These measures aim to address the large existing nursing workforce.
EVIDENCE
She mentions that curriculum revisions take a decade, cites the need for district simulation centers, and describes linking 150 CNE hours to registration renewal as incentives for continuous upskilling [126-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion highlights a systematic approach that includes district-level simulation centers, ongoing CME, and the linkage of 150 CNE hours to license renewal to sustain competencies despite long curriculum cycles [S4].
MAJOR DISCUSSION POINT
Continuous upskilling via simulation centers and CNE
Argument 3
Highlights linking 150 CNE hours to nursing license renewal to incentivize continuous upskilling.
EXPLANATION
Dr. Kaur explains that 150 Continuing Nursing Education (CNE) hours have been tied to the five‑year renewal of nursing registration, making these courses mandatory for license continuation. This creates a regulatory incentive for nurses to engage with digital health training.
EVIDENCE
She states that 150 CNE hours are linked to license renewal, requiring nurses to complete these courses to maintain registration [191-192].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A mandatory requirement of 150 Continuing Nursing Education hours tied to the five-year renewal of nursing registration is documented as a regulatory incentive for continuous upskilling [S4].
MAJOR DISCUSSION POINT
CNE hours tied to license renewal
Argument 4
Suggests leveraging free or low‑cost platforms linked to mandatory CNE requirements to disseminate mental‑health tools for professionals.
EXPLANATION
Dr. Kaur notes that some platforms offer free courses, and by integrating them with the mandatory CNE framework, mental‑health solutions can be scaled to nurses and other health workers. This approach aligns regulatory requirements with accessible digital tools.
EVIDENCE
She mentions that there are platforms providing free courses and that these can be used to meet CNE requirements, facilitating broader dissemination [191-192].
MAJOR DISCUSSION POINT
Free platforms tied to CNE for scaling
Dr. Suresh Yadav
4 arguments · 189 words per minute · 1358 words · 429 seconds
Argument 1
Argues that AI can enable a single clinician to serve many patients, demanding new mental models and collaborative ecosystems.
EXPLANATION
Dr. Yadav proposes that AI technologies could allow one doctor or health worker to serve ten or more patients, requiring a shift in mental models and the creation of integrated ecosystems that connect all health actors.
EVIDENCE
He asks whether one doctor can serve ten people using AI and suggests that health workers could serve five to ten times more patients through AI-driven systems [79-81].
MAJOR DISCUSSION POINT
AI amplifies clinician capacity
Argument 2
Proposes AI‑driven health ERP systems to break fragmentation, creating an ecosystem where doctors, nurses, pharmacists, and volunteers are interconnected.
EXPLANATION
Dr. Yadav suggests implementing health‑focused ERP solutions, similar to corporate ERP, to integrate fragmented health services and connect all stakeholders, thereby improving efficiency and patient care.
EVIDENCE
He describes using health ERP to connect doctors, pharmacists, nurses, and volunteers, noting that the U.S. system avoids fragmentation while India suffers from siloed structures, and proposes ERP as a quick-fix solution [78-86].
MAJOR DISCUSSION POINT
Health ERP to unify ecosystem
AGREED WITH
Speaker 1, Dr. Gupta
Argument 3
Quantifies the global cost of healthcare worker shortages as 10‑12 million jobs, equating to ~15 % of global GDP, and links it to climate‑health challenges.
EXPLANATION
Dr. Yadav presents data that shortages of healthcare workers represent a loss of 10‑12 million jobs, costing roughly 15 % of global GDP, and connects this economic burden to broader climate‑health issues highlighted in recent Lancet reports.
EVIDENCE
He cites that the shortage costs around 10-12 million jobs, about 15 % of the $120 trillion global GDP, and mentions the Lancet report linking climate change to health challenges [53-58].
MAJOR DISCUSSION POINT
Economic impact of workforce shortages
Argument 4
Suggests digital solutions as a low‑hanging fruit to mitigate shortages, especially in aging populations and underserved regions.
EXPLANATION
Dr. Yadav identifies digital technologies, including AI, as an immediate remedy to address healthcare worker shortages, particularly for aging societies and remote areas where staffing is scarce.
EVIDENCE
He refers to digital solutions as a low-hanging fruit to address shortages, especially for aging populations and regions lacking healthcare workers [77-80].
MAJOR DISCUSSION POINT
Digital tools to address staffing gaps
AGREED WITH
Dr. Rajiv, Dr. Gupta
Speaker 1
3 arguments · 132 words per minute · 583 words · 263 seconds
Argument 1
Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation.
EXPLANATION
Speaker 1 argues that health‑tech companies should design AI solutions that can scale in complexity, providing support for institutions with different levels of digital readiness, effectively hand‑holding users through the transformation journey.
EVIDENCE
He outlines a design principle that technology should be scalable in complexity, allowing products to hand-hold healthcare workers as institutions progress from basic remote vitals to advanced decision support, citing the EISU solution as an example [112-118][119-120].
MAJOR DISCUSSION POINT
Scalable, hand‑holding tech design
AGREED WITH
Dr. Suresh Yadav, Dr. Gupta
DISAGREED WITH
Dr. Suresh Yadav
Argument 2
Advocates designing products that scale in complexity—from basic remote vitals to advanced clinical decision support—matching institutional readiness.
EXPLANATION
Speaker 1 reiterates that health‑tech products need to adapt to the digital maturity of each institution, offering basic functionalities initially and adding sophisticated features as readiness improves, exemplified by the EISU platform.
EVIDENCE
He describes the EISU solution whose functionality ranges from simple remote vital monitoring to complex smart alerts and clinical decision support, aligned with clinician readiness [119-120].
MAJOR DISCUSSION POINT
Product complexity scaling
Argument 3
Recommends that technology design be scalable in complexity, allowing institutions with differing digital maturity to adopt gradually.
EXPLANATION
Speaker 1 emphasizes that technology should not be a one‑size‑fits‑all but should allow gradual adoption, enabling both digitally native hospitals and analog ones to benefit from the same solution as they mature.
EVIDENCE
He repeats the need for products that can scale in complexity to suit varying institutional digital maturity, ensuring broader applicability across the health sector [112-118].
MAJOR DISCUSSION POINT
Gradual adoption through scalable design
Speaker 2
1 argument, 231 words per minute, 310 words, 80 seconds
Argument 1
Raises the difficulty of applying US pricing models in India and seeks strategies for affordable, scalable pricing of digital health products.
EXPLANATION
Speaker 2 points out that solutions successful in the U.S. with high revenues fail to achieve comparable pricing in India, highlighting the challenge of adapting business models to the Indian market and requesting guidance on affordable scaling.
EVIDENCE
He describes his experience with two ventures that succeeded in the U.S. but struggled with pricing in India, noting the need for strategies to make pricing work for the Indian context [187-190].
MAJOR DISCUSSION POINT
Pricing challenges for India
DISAGREED WITH
Dr. Gupta
Speaker 3
1 argument, 144 words per minute, 15 words, 6 seconds
Argument 1
Calls for collaborative consortia of innovative healthcare universities to pool resources and accelerate scaling of solutions.
EXPLANATION
Speaker 3 suggests forming a consortium of innovative healthcare universities to share expertise, resources, and infrastructure, thereby speeding up the development and scaling of digital health solutions.
EVIDENCE
He briefly mentions the idea of a consortium of innovative healthcare universities as a way to collaborate and scale solutions [221].
MAJOR DISCUSSION POINT
University consortium for scaling
Dr. Freddy
1 argument, 149 words per minute, 166 words, 66 seconds
Argument 1
Asserts that age is not the obstacle for faculty; mindset is, and older educators can still adopt AI with the right approach.
EXPLANATION
Dr. Freddy contends that the barrier to adopting AI among faculty is not age but mindset, emphasizing that even senior educators can learn and implement AI technologies if they adopt the right attitude.
EVIDENCE
He notes that 80 % of participants are over 20 years old, with the oldest being 50, and concludes that age is irrelevant compared to mindset [219-220].
MAJOR DISCUSSION POINT
Mindset over age for AI adoption
AGREED WITH
Dr. Rajiv, Dr. Gupta
Anish
2 arguments, 172 words per minute, 664 words, 231 seconds
Argument 1
Calls for educating policymakers on technology’s impact and establishing an “innovation pipeline” to re‑imagine government solutions.
EXPLANATION
Anish argues that politicians need training on how new technologies change problem‑solving, and proposes creating an “innovation pipeline” where government sets ambitious targets and evaluates ideas through staged testing before scaling.
EVIDENCE
He explains that policymakers focus on outcomes and need education on technology’s influence, then introduces the concept of an innovation pipeline with stage-gate testing and scaling of successful ideas [158-164][178-184].
MAJOR DISCUSSION POINT
Innovation pipeline for policy
DISAGREED WITH
Dr. Gupta
Argument 2
Introduces “innovation pipeline management” as a framework for governments to test, validate, and scale promising AI solutions.
EXPLANATION
Anish details a structured process—similar to DARPA’s model—where governments set targets, invite ideas, test them through stages, and then scale validated solutions, providing a systematic way to integrate AI into public programs.
EVIDENCE
He describes the pipeline stages: setting ambitious targets, inviting entrepreneurs, stage-gate testing, validation, and policy scaling, referencing DARPA’s approach as a model [178-184].
MAJOR DISCUSSION POINT
Structured AI innovation pipeline
Agreements
Agreement Points
Mindset shift is the primary barrier to digital health adoption, outweighing technology or age factors.
Speakers: Dr. Rajiv, Dr. Gupta, Dr. Freddy
Emphasizes that adopting digital health requires a fundamental mindset shift among pharmacists and other health professionals, not just technology deployment. Highlights that the core barrier is mindset change rather than technology itself. Asserts that age is not the obstacle for faculty; mindset is, and older educators can still adopt AI with the right approach.
All three speakers stress that changing professional mindsets, not merely providing technology or worrying about age, is the key to successful digital health implementation. Rajiv notes the need for professional and mindset change among pharmacists [8]; Gupta explicitly calls mindset change the core issue [9]; Freddy argues that mindset, not age, determines AI adoption among faculty [219-220].
POLICY CONTEXT (KNOWLEDGE BASE)
The barrier of mindset shift was identified as the primary obstacle in the “Building a Digital Society” discussion, highlighting cultural resistance over technical issues [S54], and aligns with calls for changing public-sector mindsets to enable digital health adoption [S62].
Ongoing capacity building and upskilling of the existing health workforce is essential, using simulation labs, CME, digital tools, and adaptable technology solutions.
Speakers: Dr. Rajiv, Dr. Sarvajit Kaur, Dr. Suresh Yadav, Speaker 1
Uses remote surgery as an example of how existing clinicians can upskill rapidly to adopt new digital procedures. Describes regulatory actions embedding AI and digital health into the BSc nursing curriculum, mandatory simulation labs, and VR tools. Suggests digital solutions as a low‑hanging fruit to mitigate shortages, especially in aging populations and underserved regions. Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation.
The panel repeatedly highlights the need for continuous training through remote‑surgery upskilling, mandatory simulation labs, CME, and scalable tech that hand‑holds users. Rajiv cites remote-surgery upskilling and regulator training on new devices [208-212][216-218]; Kaur details simulation labs and CNE-linked licensing [11-14][191-192]; Yadav points to digital tools as low-hanging fruit for staffing gaps [77-80]; Speaker 1 proposes scalable, complexity-adjustable solutions to support varied digital maturity [112-118][119-120].
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building is emphasized as crucial for scaling digital health, with recommendations to coordinate tools and strategies across contexts [S47] and a specific call for governments to prioritize workforce upskilling in digital health policies [S51].
Regulatory frameworks can and should be flexible to incorporate AI, digital health, and innovative subjects beyond minimum standards.
Speakers: Dr. Rajiv, Dr. Sarvajit Kaur
Points out that the Pharmacy Council of India (PCI) sets minimum standards but permits adding innovative subjects such as AI and management. Describes regulatory actions embedding AI and digital health into the BSc nursing curriculum, mandatory simulation labs, and VR tools.
Both regulators note that they are not limited to baseline curricula and can add AI and innovation. Rajiv explains that PCI provides only minimum requirements and allows additional subjects like AI [133-141]; Kaur outlines the 2021 BSc nursing curriculum revision that embeds AI and digital health, with mandatory simulation labs [11-14].
POLICY CONTEXT (KNOWLEDGE BASE)
Innovative regulatory strategies advocate for flexible, demand-driven frameworks that go beyond minimum standards to promote digital inclusion [S59], and stress the need to tailor policies to local realities rather than rigidly copying external models [S63].
Digital health technologies must be designed to scale in complexity and integrate fragmented health services into a unified ecosystem.
Speakers: Speaker 1, Dr. Suresh Yadav, Dr. Gupta
Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation. Proposes AI‑driven health ERP systems to break fragmentation, creating an ecosystem where doctors, nurses, pharmacists, and volunteers are interconnected. Announces the launch of a Global AI Academy to provide cross‑disciplinary training, emphasizing mindset over platform.
The speakers converge on the need for adaptable, ecosystem-oriented tech. Speaker 1 calls for products that scale in complexity to match institutional readiness [112-118][119-120]; Yadav recommends health-ERP solutions to eliminate siloed care and build a connected ecosystem [78-86]; Gupta launches a Global AI Academy, stressing that success hinges on mindset rather than a specific platform [226-233].
POLICY CONTEXT (KNOWLEDGE BASE)
Effective scaling requires coordinated, context-aware strategies that integrate services, as outlined in capacity-building guidance [S47] and calls for integrated health-environment-technology governance [S61].
Digital health can extend care to underserved and remote populations, addressing workforce shortages and last‑mile delivery.
Speakers: Dr. Rajiv, Dr. Suresh Yadav, Dr. Gupta
If you see doctors, nurses, other health technicians, you will find them concentrated in hospitals… the biggest possibility is for pharmacists through the whole retail chain… they can actually contribute up to the last mile of the value chain. Suggests digital solutions as a low‑hanging fruit to mitigate shortages, especially in aging populations and underserved regions. Highlights that the important point is it’s more about mindset change than just technology.
All three underline that digital health can reach the “last mile” and alleviate staffing gaps. Rajiv points to pharmacists’ potential to serve the entire supply chain and reach remote areas [8]; Yadav describes digital tools as a quick fix for remote or underserved regions [77-80]; Gupta reiterates that mindset change is needed to enable such outreach [9].
POLICY CONTEXT (KNOWLEDGE BASE)
Remote-care extensions are supported by evidence that satellite and mobility data can guide placement of health posts for underserved areas [S55], and telemedicine is recognized as a key solution for dispersed populations [S56]; equity considerations further underline the need for inclusive access [S49].
Similar Viewpoints
Both emphasize the need for integrated, scalable technology platforms that can bridge fragmented health services and support users at different stages of digital readiness. Speaker 1 calls for products that scale in complexity [112-118][119-120]; Yadav advocates health‑ERP to unify the ecosystem and overcome siloed structures [78-86].
Speakers: Speaker 1, Dr. Suresh Yadav
Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation. Proposes AI‑driven health ERP systems to break fragmentation, creating an ecosystem where doctors, nurses, pharmacists, and volunteers are interconnected.
Both agree that shifting professional mindsets is more critical than merely introducing new technologies. Gupta explicitly labels mindset as the core barrier [9]; Rajiv stresses professional and mindset change for pharmacists [8].
Speakers: Dr. Gupta, Dr. Rajiv
Highlights that the core barrier is mindset change rather than technology itself. Emphasizes that adopting digital health requires a fundamental mindset shift among pharmacists and other health professionals, not just technology deployment.
Both highlight the necessity of training policymakers to understand and leverage emerging technologies. Anish proposes an “innovation pipeline” and education for politicians [158-164][178-184]; Gupta raises the question of a crash course for politicians [148-152].
Speakers: Anish, Dr. Gupta
Calls for educating policymakers on technology’s impact and establishing an “innovation pipeline” to re‑imagine government solutions. Asks how to train politicians (e.g., a crash course for them).
Unexpected Consensus
Agreement on the need to educate and re‑train politicians/policymakers about digital technologies.
Speakers: Anish, Dr. Gupta
Calls for educating policymakers on technology’s impact and establishing an “innovation pipeline” to re‑imagine government solutions. Asks how to train politicians (e.g., a crash course for them).
While most discussion focused on health professionals and technology design, both Anish and Dr. Gupta converged on the idea that policymakers themselves require systematic training to keep pace with digital innovation, a point not anticipated given the health-centric agenda. Anish outlines an innovation-pipeline model for government learning [158-164][178-184]; Gupta explicitly asks about a crash course for politicians [148-152].
POLICY CONTEXT (KNOWLEDGE BASE)
Bridging the tech-policy gap is highlighted as essential, with recommendations for capacity building among policymakers in cybersecurity and digital health [S52], and calls for science-diplomacy initiatives to bring policymakers closer to technology [S65]; broader awareness among decision-makers is also stressed [S62].
Overall Assessment

The panel exhibits strong consensus that mindset change, continuous capacity building, regulatory flexibility, and scalable ecosystem‑oriented technology are essential for digital health transformation. Participants align on integrating AI into curricula, linking training to professional incentives, and using digital tools to reach underserved populations.

High consensus across multiple speakers and sectors, indicating a unified direction toward policy reforms, education upgrades, and technology design that together can accelerate digital health adoption and address workforce shortages.

Differences
Different Viewpoints
How to keep health professional curricula current with digital health and AI
Speakers: Dr. Rajiv, Dr. Sarvajit Kaur
Points out that the Pharmacy Council of India (PCI) sets minimum standards but permits adding innovative subjects such as AI and management. Notes the slow pace of curriculum revision and proposes continuous CME, district‑level simulation centres and linking 150 CNE hours to licence renewal to maintain competencies.
Dr. Rajiv argues that existing regulatory frameworks (PCI) already allow institutions to go beyond the minimum curriculum by adding AI, innovation and management topics, so change can be achieved within the current structure [133-141]. Dr. Sarvajit counters that formal curriculum changes take a decade, so she emphasizes ongoing CME, simulation centres in every district and mandatory CNE credits to keep the large existing nursing workforce up-to-date [126-130][191-192]. The two speakers agree on the need for upskilling but disagree on whether the primary lever should be curriculum flexibility or supplemental continuous education mechanisms.
Preferred technical approach for scaling digital health solutions
Speakers: Speaker 1, Dr. Suresh Yadav
Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation. Proposes AI‑driven health ERP systems to break fragmentation, creating an ecosystem where doctors, nurses, pharmacists and volunteers are interconnected.
Speaker 1 advocates designing AI products that can scale in complexity, providing basic to advanced functionalities as institutions mature (e.g., the EISU platform) [112-118][119-120]. Dr. Yadav, by contrast, promotes a health-ERP “quick-fix” that integrates fragmented services into a single ecosystem, drawing on corporate-ERP models [78-86]. Both aim to improve scalability, but they differ on whether the solution should be a flexible, modular product or a comprehensive ERP integration.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over whether to prioritize infrastructure development versus tool accessibility reflect divergent technical approaches to scaling, as documented in the WS #462 discussion on compute divide [S57].
How to address pricing of digital health products for the Indian market
Speakers: Speaker 2, Dr. Gupta
Raises the difficulty of applying US pricing models in India and seeks strategies for affordable, scalable pricing of digital health products. Defers the pricing question to a prior GDHS session on pricing of digital health, without providing a direct answer.
Speaker 2 highlights that successful US ventures have failed to achieve comparable pricing in India and asks for concrete guidance on affordable scaling [187-190]. Dr. Gupta responds by pointing the audience to an earlier GDHS session rather than offering a tailored solution [191]. This reflects a disagreement on the immediacy and responsibility of providing actionable pricing guidance.
Who should be the primary target for capacity‑building initiatives – health professionals or policymakers
Speakers: Dr. Gupta, Anish
Highlights the core barrier is mindset change rather than technology itself. Calls for educating policymakers on technology’s impact and establishing an “innovation pipeline” to re‑imagine government solutions.
Dr. Gupta frames the main obstacle as a mindset shift among health workers and suggests that training should focus on them [9]. Anish argues that politicians and policymakers also need systematic education and an innovation-pipeline process to translate new technologies into policy decisions [158-164][178-184]. The disagreement lies in the primary audience for capacity-building: frontline health workers versus government decision-makers.
POLICY CONTEXT (KNOWLEDGE BASE)
Capacity-building literature shows tension between focusing on healthcare professionals (emphasized in government policy gaps [S51]) and strengthening policymakers’ support and buy-in (highlighted in cybersecurity capacity building [S52]).
Unexpected Differences
Use of regulatory flexibility versus perceived inability to change curricula
Speaker: Dr. Rajiv (self‑contradiction)
Points out that the Pharmacy Council of India (PCI) sets minimum standards but permits adding innovative subjects such as AI and management. Notes that colleges and teachers claim they are not allowed to change curricula because it is governed by PCI.
Within Dr. Rajiv’s own remarks, he asserts that PCI allows institutions to add subjects beyond the minimum [133-141], while also acknowledging that colleges often claim they cannot change curricula because of PCI regulations [130-132]. This internal inconsistency was not anticipated and highlights a hidden tension between perceived regulatory constraints and actual flexibility.
Assumption that a single technology solution can solve workforce shortages versus broader systemic change
Speakers: Dr. Suresh Yadav, Speaker 1
Argues that AI can enable a single clinician to serve many patients, demanding new mental models and collaborative ecosystems. Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation.
Dr. Yadav presents AI as a near-miraculous lever to dramatically increase clinician capacity (e.g., one doctor serving ten patients) [79-81], whereas Speaker 1 emphasizes incremental, maturity-based scaling of technology rather than expecting a single AI breakthrough to resolve systemic shortages. The optimism of a “quick-fix” AI solution was unexpected compared to the more cautious, capacity-building perspective of other participants.
POLICY CONTEXT (KNOWLEDGE BASE)
The discussion on systemic versus technology-centric solutions mirrors disagreements about supply shortages versus access inequality in digital health debates [S57], and reflects concerns about profit-driven, narrow tech approaches versus holistic, human-centered strategies [S60].
Overall Assessment

The discussion revealed several points of contention: the best mechanism for keeping curricula current (regulatory flexibility vs. continuous CME), the optimal technical architecture for scaling digital health (modular, maturity‑based products vs. comprehensive health‑ERP), the responsibility for addressing pricing challenges in India, and the primary audience for capacity‑building (health workers versus policymakers). While participants uniformly agreed on the necessity of a mindset shift, they diverged on the pathways to achieve it.

Moderate to high. The disagreements are substantive, touching on policy design, educational strategy, and technology implementation, and they could affect the speed and effectiveness of digital health integration in India. Resolving these tensions will require coordinated action across regulators, educators, technology firms, and government bodies to align curriculum reforms, funding models, and ecosystem design.

Partial Agreements
All speakers concur that a mindset shift is essential for digital health adoption and that training (whether for regulators, faculty, or health workers) must address attitudes rather than merely providing technology. However, they differ on the mechanisms—curriculum reform, regulatory up‑skilling, age‑agnostic faculty training, or product design—to achieve that mindset change.
Speakers: Dr. Rajiv, Dr. Gupta, Dr. Freddy, Speaker 1
Emphasizes that the core barrier is mindset change rather than technology itself. Emphasizes the role of drug inspectors and regulators in continuous training to keep pace with emerging medical devices and AI tools. Asserts that age is not the obstacle for faculty; mindset is, and older educators can still adopt AI with the right approach. Stresses that technology firms must create solutions that adapt to varying digital maturity, hand‑holding users through transformation.
Takeaways
Key takeaways
Adopting digital health and AI in healthcare requires a fundamental mindset shift among pharmacists, nurses, doctors, and educators, not merely technology deployment.
Regulatory bodies are embedding AI and digital health into curricula (e.g., BSc Nursing 2021) and mandating simulation labs, VR, and continuous competency centers to build capacity.
The Pharmacy Council of India sets minimum curriculum standards but permits institutions to add innovative subjects such as AI, management, and programming.
AI and health‑ERP platforms can dramatically increase clinician productivity, address workforce shortages, and create a connected ecosystem across the entire health value chain.
Workforce shortages have massive economic (≈15 % of global GDP) and climate‑health implications; digital solutions are viewed as a low‑hanging fruit to mitigate these challenges.
Scalable‑in‑complexity product design is essential for institutions with varying digital maturity; solutions must hand‑hold users and grow with their readiness.
Continuous upskilling through CME/CNE, district‑level simulation centers, and linking credits to license renewal is critical for existing professionals.
Pricing and scaling of digital health products in India remain a major barrier; affordable, context‑specific models are needed.
Collaboration among technology firms, academia, regulators, and policymakers is vital; initiatives such as the Global AI Academy and consortia of innovative health universities were announced.
Resolutions and action items
Launch of the Global AI Academy to provide cross‑disciplinary AI training for health professionals.
Establishment of two national reference simulation centers (Gurgaon and Bhagalkot) and training of ~2,000 faculty on simulation tools.
Recommendation for every district to have a simulation/competency center for nurses (e.g., initiatives in Uttar Pradesh and Bihar).
Linking 150 CNE hours to nursing license renewal to incentivize mandatory digital‑health upskilling.
Encouragement for pharmacy colleges to add AI, innovation, and management modules beyond PCI minimum requirements.
Call for technology companies to design AI solutions that scale in complexity, matching institutional digital readiness.
Proposal to adopt an “innovation pipeline management” framework for government to test, validate, and scale AI solutions.
Suggestion to form consortia of innovative healthcare universities to pool resources and accelerate scaling of digital health solutions.
Unresolved issues
Specific pricing strategies for digital health products in the Indian market (question from Speaker 2 remained unanswered).
Concrete mechanisms for training politicians and policymakers on emerging technologies beyond conceptual ideas.
How to increase the number of health‑tech entrepreneurs/executors versus ideators; no definitive plan was provided.
Detailed roadmap for bridging the skill gap of senior faculty and clinicians who lack AI training.
Implementation plan for integrating AI‑driven health‑ERP systems across India’s fragmented health ecosystem.
Expansion of digital‑health courses for nurses beyond the current offerings and ensuring nationwide access.
Suggested compromises
Use PCI’s minimum curriculum standards as a baseline while allowing institutions to voluntarily add AI, innovation, and management subjects (Dr. Rajiv).
Combine mandatory CNE credits with free or low‑cost online platforms to make upskilling affordable and widely accessible (Dr. Sarvajit).
Adopt an age‑agnostic training approach, focusing on mindset change rather than years of experience, to involve senior educators (Freddy & Speaker 1).
Blend continuous curriculum revisions with regular CME/CNE programs to keep education current without waiting for decade‑long curriculum cycles.
Thought Provoking Comments
The biggest possibility for any profession in health care is for pharmacists through the whole retail chain, distribution, and supply chain management, and they are the people who can actually contribute up to the last mile of the value chain… this needs a strong change management… it is a professional and mindset change and thinking change for pharmacists.
Highlights a systemic blind‑spot – the under‑utilised role of community pharmacists – and frames the barrier as cultural/mindset rather than purely technical, opening a new angle on workforce optimisation.
Shifted the conversation from generic technology adoption to a specific sector (pharmacy) that requires structural and attitudinal change. It prompted later speakers to discuss capacity‑building and curriculum flexibility, and set the stage for Dr. Rajiv’s later point about regulatory freedom in pharmacy education.
Speaker: Dr. Rajiv
We have tried to integrate AI and digital health into the basic nursing curriculum… five simulation labs are now mandatory, we have national reference simulation centres, faculty preparedness programmes, and we link 150 CNE hours to licence renewal… we are also launching a one‑to‑two year digital health academy for nurses.
Provides a concrete, multi‑layered regulatory strategy for embedding digital competencies, showing how policy can drive both infrastructure (labs) and continuous professional development.
Introduced a tangible model that other participants referenced when discussing curriculum rigidity versus CME, and inspired the later discussion on scaling training through simulation centres and online platforms.
Speaker: Dr. Sarvajit Kaur
The economic cost of healthcare‑worker shortages is around 10‑12 million jobs, about 15 % of global GDP… climate change is both a driver and a consequence of health‑system emissions… AI can let one doctor serve 10 people, and a health‑ERP can break the siloed Indian system into an ecosystem.
Quantifies the macro‑economic stakes of workforce gaps, links them to climate, and proposes AI‑driven productivity and ecosystem integration as a systemic remedy.
Moved the dialogue from national‑level training issues to global economic and environmental implications, prompting participants to consider large‑scale digital ecosystems and the urgency of rapid, technology‑enabled solutions.
Speaker: Dr. Suresh Yadav
Technology companies should design AI solutions that are scalable in complexity, not just volume, so they can hand‑hold healthcare workers through the digital transformation journey… the product must adapt to the institution’s digital maturity.
Introduces the design principle of “scalable complexity,” shifting focus from technology as a static tool to a dynamic, capacity‑building partner that grows with users.
Redirected the conversation toward product design considerations, influencing later remarks about co‑creating curricula, faculty upskilling, and the need for platforms that evolve with user readiness.
Speaker: Speaker 1 (Tech entrepreneur)
We need an ‘innovation pipeline management’ in government – set ambitious targets, let entrepreneurs propose ideas, stage‑gate them, validate successes and then have policymakers scale what works. This is a way to re‑imagine how we fund and implement health solutions.
Proposes a structured, DARPA‑style framework for governmental adoption of emerging technologies, moving beyond ad‑hoc training to systematic innovation adoption.
Provided a concrete governance model that answered Dr. Gupta’s question about political engagement, and sparked discussion on how to institutionalise rapid tech adoption at the policy level.
Speaker: Anish
PCI sets the minimum curriculum requirements but does not forbid adding innovation, management, or modern technology papers. Institutions can go beyond the baseline if they wish.
Clarifies a regulatory misconception, empowering educational institutions to innovate within existing frameworks rather than waiting for formal curriculum revisions.
Reinforced earlier points about curriculum flexibility, encouraging participants to view regulatory bodies as enablers rather than bottlenecks, and supporting the argument for continuous upskilling initiatives.
Speaker: Dr. Rajiv
Overall Assessment

The discussion was propelled forward by a handful of incisive remarks that reframed the problem from isolated training gaps to systemic, cultural, and economic dimensions. Dr. Rajiv’s focus on pharmacist mindset, Dr. Kaur’s regulatory blueprint, and Dr. Yadav’s macro‑economic framing opened new thematic lanes—workforce distribution, policy‑driven digital integration, and global stakes. The tech‑entrepreneur’s design principle and Anish’s innovation‑pipeline model supplied practical pathways for translating those ideas into scalable solutions. Together, these comments shifted the tone from descriptive challenges to solution‑oriented strategies, prompting participants to explore curriculum flexibility, ecosystem building, and governance reforms, ultimately shaping a forward‑looking, multi‑level vision for digital health capacity building.

Follow-up Questions
Do we have enough capacity to have more entrepreneurs like you? We will have ideators but not entrepreneurs because we don’t have executors. How do you define this?
Seeks clarification on the gap between idea generation and execution in health‑tech entrepreneurship and how to build capacity for entrepreneurs.
Speaker: Speaker 1 (addressing Dr. Rajiv)
Should we rely on CME/continuous education rather than frequent curriculum changes to keep pace with technology?
Raises the challenge of updating academic curricula quickly and asks whether ongoing CME is a more feasible solution for keeping health professionals current with digital health advances.
Speaker: Dr. Gupta (to Dr. Sarvajit Kaur)
What are you doing to train drug inspectors and pharmacists to understand digital health technologies?
Highlights the need for regulatory personnel to be digitally literate and asks for specific capacity‑building measures for inspectors and pharmacists.
Speaker: Dr. Gupta (to Dr. Rajiv)
Do we have a crash course for politicians/policymakers to understand technology?
Points out that effective policy requires tech‑savvy legislators and asks whether a rapid training program exists for them.
Speaker: Dr. Gupta (to Anish)
What platform would be good to scale mental‑health solutions for healthcare professionals in India?
Seeks guidance on a scalable, context‑appropriate digital platform to deliver mental‑health support to doctors, nurses, and other health workers.
Speaker: Speaker 2 (to Anish)
How can pricing models for digital‑health solutions be adapted to the Indian market compared with the US?
Requests strategies to make pricing affordable and sustainable in India, noting failures of US‑based pricing approaches when applied locally.
Speaker: Speaker 2 (to Anish)
How can we train current faculty (often senior) to teach AI and digital health to Gen‑Z learners when they themselves lack that training?
Identifies a generational skills gap among educators and asks for solutions to enable older faculty to effectively train the next generation in AI.
Speaker: Dr. Freddy
Implementation of district‑level simulation centers for nursing competency building
Calls for research on the feasibility, resource requirements, and impact of establishing simulation/competency centers in every district to upskill the existing nursing workforce.
Speaker: Dr. Sarvajit Kaur
Developing rapid curriculum adaptation mechanisms for digital‑health education
Suggests the need to study agile models that allow curricula to evolve continuously with technology rather than waiting a decade for formal revisions.
Speaker: Dr. Gupta; Dr. Sarvajit Kaur
Quantifying the economic impact of healthcare‑workforce shortages on global GDP and the climate‑health nexus
Proposes further investigation into the macro‑economic costs of staffing gaps and how health‑system emissions intersect with climate change.
Speaker: Dr. Suresh Yadav
Integrating health‑ERP systems to reduce fragmentation in the Indian healthcare ecosystem
Calls for research on designing and deploying enterprise‑resource‑planning solutions that connect doctors, pharmacists, nurses, and volunteers across silos.
Speaker: Dr. Suresh Yadav
Evaluating cross‑border telemedicine ecosystems that connect Indian doctors with the diaspora and global patients
Suggests studying models for seamless international tele‑consultations, diagnostics sharing, and medication delivery.
Speaker: Dr. Suresh Yadav
Pricing strategies for digital‑health products in emerging markets like India
Requests systematic research into cost structures, willingness‑to‑pay, and subsidy models suitable for the Indian context.
Speaker: Speaker 2
Assessing the impact of linking mandatory CNE hours to registration renewal on nurse upskilling
Proposes evaluating whether tying continuing‑education credits to license renewal effectively improves nursing competencies.
Speaker: Dr. Sarvajit Kaur
Innovation pipeline management in government for health‑tech adoption (DARPA‑style model)
Suggests exploring a structured, stage‑gate approach to fund, test, and scale health‑technology innovations within public policy frameworks.
Speaker: Anish
Role of health‑tech companies in co‑designing health‑workforce curricula and training programs
Calls for research on partnerships between tech firms and academic institutions to create hands‑on courses that prepare future health workers.
Speaker: Speaker 1
Effectiveness of remote‑surgery training for older physicians adopting new technologies
Highlights the need to study how legacy clinicians transition to tele‑surgery and other AI‑enabled practices.
Speaker: Dr. Rajiv
Influence of age versus mindset on AI adoption among healthcare professionals
Suggests investigating whether attitudes or chronological age are the primary barriers to embracing AI in clinical settings.
Speaker: Dr. Gupta; Dr. Rajiv

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Inclusive AI Starts with People Not Just Algorithms


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened with Lakshmi Pratury asking how individuals have scaled their own potential and why AI Kiran is emerging now [1-3]. Kirthiga Reddy answered that embracing risk, guided by the question “what would you do if you weren’t afraid?”, has shaped her career and the formation of AI Kiran [12-16]. She noted that the AI Kiran community has exploded to about 10,000 women, from an initial list of 250 names the founders compiled after ChatGPT could surface only ten [61-64]. Reddy also highlighted the role of male allies and described her own position of privilege as a responsibility to pay the support forward [25-28].


Lakshmi Pratury traced her own path from Intel, venture capital and philanthropy to creating platforms that surface untold innovators, noting that she brought TED to India and has spent the last 15 years curating stories [37-44]. She sees the AI revolution as a chance to build inclusive technology from the ground up, arguing that every new wave, from the industrial revolution to the Internet, creates both problems and opportunities that must be addressed early [45-49]. According to her, AI Kiran’s mission is to make AI inclusive from the start, leveraging the momentum of the current generative-AI boom [49-50].


Radha Basu recounted her early work establishing HP’s software operations in Bangalore in the late 1980s, celebrating the first million dollars of software export from India [104-110]. She explained that AI Kiran now runs AI centres in tier-2 cities such as Calcutta, Vizag, Coimbatore, Shillong and others, each becoming a centre of excellence in domains like autonomous mobility, healthcare AI, automotive AI and generative AI [131-138][143-149]. Radha emphasized that the organization’s workforce is roughly 10,000 strong, with women making up 53 % of staff, reflecting a deliberate gender-parity goal [175-178].


When asked about what is needed to scale AI in India, Radha described a three-point investment triangle: technology/models, infrastructure, and human intelligence [259-264]. She added that bridging the AI divide requires making young people from diverse backgrounds AI-ready, citing the average age of her team as 24.5 and the need to upskill non-IIT talent [270-277]. Mihir Shukla reinforced the skilling argument, noting that training programmes have placed 700 women in Africa and 500 U.S. participants into AI jobs within weeks, demonstrating the rapid economic mobility AI can provide [428-436].


Both speakers agreed that resilience, curiosity and the ability to ask the right questions are essential personal traits for navigating the automated future [316-324][349-352]. The discussion concluded that scaling human potential in AI depends on collaborative ecosystems, gender parity, regional centres and continuous learning to ensure inclusive, responsible growth [180-182][259-264].


Keypoints


Major discussion points


Scaling human potential through inclusive AI ecosystems – The panel repeatedly emphasized the need to broaden AI participation, especially for women and youth, citing the rapid growth of the AI Kiran community (10,000 members) and the goal of gender parity in AI companies [11][55-63][175-178]. Lakshmi highlighted the “Fellows Program” that brings together 250 young innovators from diverse disciplines [71-78].


Embracing risk and “starting over” in the AI era – Kirthiga urged listeners to ask “what would you do if you weren’t afraid?” and advised entrepreneurs to be willing to restart their trajectories to stay ahead of the AI curve [12-18][19-24].


Building decentralized AI infrastructure and talent hubs across India – Radha described the creation of AI centers in non-metro cities (Calcutta, Vizag, Coimbatore, Hubli, Shillong) and their focus areas (autonomous mobility, healthcare, automotive, and generative AI), aimed at preventing an “AI divide” [132-141][144-149][155-162].


Upskilling under-represented groups to create AI-ready citizens – Multiple speakers stressed large-scale training initiatives: Automation Anywhere’s half-billion digital workers [184-190]; AI Kiran’s plan to train a million women and youth [284-296]; and grassroots programs that have placed hundreds of women and low-income youth into AI jobs within weeks [428-437][442-456].


Future skills and values for the next generation – The audience asked what children should learn for an AI-automated world; panelists answered with resilience, curiosity, “heart intelligence,” and the ability to ask the right questions rather than just consume AI outputs [308-324][326-334][338-342].


Overall purpose / goal


The discussion was designed to showcase how leaders are scaling human potential by building an inclusive, decentralized AI ecosystem in India, highlighting personal journeys, community growth, infrastructure development, and large-scale education initiatives, to inspire collective action toward a more equitable AI future.


Overall tone


Opening (0-10 min): Energetic, celebratory, and motivational, with applause for community size and personal anecdotes.


Middle (10-30 min): Becomes more reflective and strategic, focusing on concrete actions: risk-taking, building centers, and scaling talent.


Later (30-57 min): Shifts to a problem-solving, hopeful tone, addressing audience concerns about education, ethics, and future skills, and ending with an optimistic call-to-action.


The tone moves from inspirational excitement to practical deliberation and finishes on an optimistic, collaborative note.


Speakers

Speakers (from the provided list)


Lakshmi Pratury – Panelist; former Intel executive, venture capitalist, philanthropist; brought TED to India and co-founder of AI Kiran.


Kirthiga Reddy – Panelist; former SoftBank partner, founder of Optimize Geo (AI-focused startup).


Mihir Shukla – Panelist; CEO & Chairman of Automation Anywhere; author of an upcoming book on AI transformation. [S4]


Radha Basu – Panelist; Founder & CEO of iMerit, AI-focused technology company. [S7]


Anurag Hoon – Panelist; music educator, founder of “Manzil Mystics” mobile music school, AI Kiran Fellow.


Audience – Various audience members who asked questions (e.g., Anupama – data-science lead, Hemendra – AI & sustainability faculty at IIM Udaipur, Anjali – Tech Mahindra, Bina – ServiceNow, etc.).


Speaker 1 – Unnamed moderator/host who provided closing remarks and occasional commentary.


Additional speakers (not in the provided list)


Ashna – Representative of AMD, discussed hardware and compute considerations.


Komal – Founder of Dark.ai, working on AI solutions for tailors and fashion designers.


Prerna – AI Kiran member, involved in community outreach and program coordination.


Neha Vaibhav – AI Kiran member, involved in community outreach and program coordination.


Bina – ServiceNow employee, active in the ethical AI movement.


Areas of expertise, roles, and titles are taken from the transcript and enriched where external source citations were available.


Full session report: Comprehensive analysis and detailed insights

The panel opened with Lakshmi Pratury asking the speakers how they had “scaled their own potential” and why the AI Kiran initiative was emerging now [1-3].


Kirthiga Reddy welcomed the audience and described the AI Kiran community as spanning from Mumbai to Himachal Pradesh, now numbering roughly ten thousand women and “multiplying by two or three-fold” [11-12]. She framed her career around the question “What would you do if you weren’t afraid?” – a mantra that encourages risk-taking, restarting one’s trajectory and aiming for stretch goals rather than incremental ones [12-15]. Reddy also acknowledged the role of male allies and said her position is a “privilege and responsibility to give it forward” [25-28].


Reddy then highlighted Optimize Geo, the startup she co-founded, explaining how its generative-engine-optimization platform helps brands stay relevant in an AI-driven consumer landscape [200-210]. She also cited Dark.ai, led by Komal, which enables tailors and fashion designers to adopt AI tools, illustrating AI Kiran’s “create-and-scale” ethos [70-78].


When the group first queried ChatGPT for “100 women in AI in India”, it returned only ten names; the founders responded by compiling a list of 250 women, effectively adding a zero to the original output, and the community has since grown to ~10,000 members [61-64].


Lakshmi Pratury traced her three-decade reinvention, from Intel and venture capital to philanthropy and curating stories of untold innovators [37-44]. She recalled bringing TED to India and, for the past fifteen years, building a platform that surfaces talent beyond the “10 famous people” [38-40]. Pratury positioned the current generative-AI boom as an opportunity to design inclusive technology from the outset, drawing a parallel with past industrial revolutions that created both problems and chances for remediation [45-49]. She introduced AI Kiran’s Fellows Programme, which now supports over 250 multidisciplinary young innovators [71-78].


Radha Basu recounted her pioneering role in establishing Hewlett-Packard’s software operations in Bangalore in the late 1980s and celebrating India’s first million-dollar software export in 1989 [104-110]. She described AI Kiran’s present-day strategy of decentralising AI capacity by setting up centres of excellence in tier-2 cities (Calcutta, Vizag, Coimbatore, Hubli and Shillong), each specialising in autonomous mobility, healthcare AI, automotive AI and generative AI [132-141][143-149]. Basu noted that the organisation now employs roughly ten thousand AI professionals (about 3,500 in India), with women constituting 53 % of the workforce, a deliberate gender-parity target, and an average employee age of 24.5 years [140-150][175-176].


She also detailed the small-model pipeline used in AI Kiran projects: building compact vision and language models, fine-tuning them, applying reinforcement learning with human feedback, “tormenting” the models to improve robustness, and collaborating with domain scholars such as cardiologists and agronomists [150-170]. Through a partnership with the Anudip Foundation, AI Kiran has already up-skilled 630,000 young people (approximately 50 % women) in AI literacy [190-200].


Mihir Shukla reinforced the emphasis on applied AI, arguing that India’s comparative advantage lies in deploying AI across its eighteen industrial hubs rather than chasing the global race to build ever-larger models [284-288]. He announced a partnership with AI Kiran to train one million women and youth in AI and automation over the next five years [284-296]. Shukla referenced his forthcoming book A Five-Year Century and cited Automation Anywhere statistics: roughly half a billion digital workers, a 1:20 human-to-digital-worker ratio, and a presence in 90 countries [220-240]. He illustrated rapid economic mobility by noting programmes that placed 700 women in Africa and 500 participants in the U.S. into AI jobs within weeks of completing a six-week training [428-436].


After Shukla, Ashna (Speaker 1) offered a distinct perspective on infrastructure and ambition, advocating a coexistence model between human and artificial intelligence and emphasizing the role of ambition in scaling AI solutions [250-270].


The audience then asked what skills children should acquire for an AI-automated future. Panelists converged on the need for resilience, curiosity and emotional intelligence: one speaker stressed learning to “fail” and bounce back [304-311]; another highlighted “five senses and nine emotions” as a foundation for “heart intelligence” [300-320]. The panel reiterated that nurturing these soft skills is as crucial as technical training for the next generation.


Parallel concerns about safety and trust were voiced. An audience member warned of the necessity for guard-rails to protect children from AI-driven harms while still empowering them [382-389]. No concrete solution was offered during the session.


Agreements

1. Community-driven gender parity – Both Kirthiga and Radha highlighted the importance of community building (≈10,000 women) and internal hiring policies (53 % women) to achieve parity [61-64][175-176].


2. Risk-taking and career reinvention – Reddy and Pratury championed taking bold risks and reinventing one’s career as catalysts for personal and technological progress [12-15][34-36][42-44].


3. Decentralisation of AI talent – Reddy’s warning against an “AI divide” and Basu’s tier-2 centres of excellence reflect a shared stance on decentralising AI capacity [136-137][132-141].


4. Balanced “triangle” investment – Shukla and Basu agreed that investment should simultaneously develop technology/models, infrastructure/compute, and human talent [259-264][284-288].


5. Resilience, curiosity and question-asking – Multiple participants emphasized that resilience, curiosity and the ability to ask good questions are essential for thriving in an AI-driven world [304-311][349-352].


Disagreements

1. Investment priorities – Shukla argued for prioritising applied AI solutions over competing in the global model-size race [284-288]; Basu advocated a balanced “triangle” approach that also funds cutting-edge models and compute [259-264].


2. Core competencies for children – The audience stressed resilience and curiosity [304-311]; Anurag Hoon proposed a holistic curriculum centred on the five senses and nine emotions [300-320]; Speaker 1 (Ashna) highlighted resilience and learning from failure but did not mention the sensory-emotional framework [316-324].


3. Rapid empowerment vs. child safety – The audience’s call for robust guard-rails [382-389] contrasted with Reddy’s earlier encouragement to act fearlessly without directly addressing safety [12-15].


4. External community growth vs. internal hiring metrics for gender parity – Reddy emphasised community expansion as the primary lever [61-64]; Basu pointed to internal workforce composition (53 % women) as evidence of successful parity policies [175-176].


Thought-Provoking Remarks

– “What would you do if you weren’t afraid?” [12-15]


– The “add two zeros” anecdote about ChatGPT’s initial list [61-64]


– Basu’s declaration of AI centres outside metros [132-141]


– Pratury’s comparison of AI to past revolutions and the call to make it inclusive [45-49]


– Shukla’s analogy that India’s strength lies in applying technology rather than inventing it [284-288]


– Hoon’s emphasis on “heart intelligence” through the five senses and nine emotions [300-320]


– The panel’s repeated emphasis that asking the right questions drives progress [349-352].


Concrete Actions Proposed

1. Launch the AI Kiran-partnered programme to train one million women and youth in AI and automation within five years [284-296].


2. Continue expanding the AI Kiran community, aiming to add further zeros to its membership count [61-64].


3. Scale the Fellows Programme beyond its current 250 alumni [71-78].


4. Operationalise the tier-2 AI centres of excellence as outlined by Basu [132-141].


5. Prioritise applied AI projects in precision agriculture, breast-cancer screening and autonomous mobility [145-152][280-283].


6. Develop educational resources on resilience, curiosity, EQ and the five-senses framework for parents and teachers [300-320][304-311].


7. Encourage organisations to use existing compute creatively while seeking advanced chips as strategic enablers [12-15].


8. Embed ethical guard-rails and provenance mechanisms early in AI product development, responding to audience concerns [382-389].


Unresolved Issues

– Specific frameworks for safeguarding children from AI-related harms while still empowering them remain undefined [382-389].


– Metrics to monitor the effectiveness of the one-million-women training partnership are still needed.


– The precise balance between investing in large-scale model development versus applied AI solutions requires further policy deliberation.


– Detailed strategies for bridging the AI divide between metropolitan and non-metropolitan regions, and concrete curricula for integrating resilience, curiosity and emotional intelligence, were left open for future work.


Key Take-aways

Community-driven gender parity combines large-scale outreach with internal hiring targets.


Risk-taking and career reinvention are viewed as essential levers for personal and technological progress.


Decentralised AI hubs in tier-2 cities are critical to avoid an urban-AI divide.


A balanced “triangle” investment model (technology, infrastructure, and human talent) is advocated.


Resilience, curiosity and the ability to ask good questions are highlighted as core competencies for the next generation.


A concrete training partnership aims to up-skill one million women and youth in AI and automation over the next five years.


Overall, the discussion revealed strong consensus on inclusive gender participation, risk-taking, decentralised AI capacity and balanced investment, while moderate disagreements persisted around investment priorities, child-safety versus rapid empowerment, and the optimal pathways to achieve gender parity. These insights provide a roadmap for policymakers, industry leaders and educators seeking to scale human potential responsibly within India’s burgeoning AI ecosystem.


Session transcript: Complete transcript of the session
Lakshmi Pratury

to being the first woman partner at SoftBank, you know, investing over a billion dollars, to now an entrepreneurial journey at Optimize.io. Today we are talking about scaling human potential, etc. So how have you scaled your own potential through all this that landed you in AI Kiran? Why did you land in AI Kiran now?

Kirthiga Reddy

All right, yeah, we’ll all get seated since we have all our fellow panelists seated here. So I’m just so excited to be here. And a first call-out to all of the AI Kiran community members here, because that is really the story. Amazing. And you come from where? Where are you coming from? Mumbai? Gurgaon? Jamshedpur? Himachal Pradesh. All right. Just a microcosm of the community that’s now officially 10,000 but growing very quickly, like, multiplying by two or three-fold, and, you know, stay tuned for the announcement with partnerships like the one that we have here. So, yeah, if I had to think about a theme for my own journey, it’s my favorite Meta poster, which is in offices all across the globe, which says, what would you do if you weren’t afraid?

And it’s a phrase that I want us to think about. You know, what would you do if you weren’t afraid? And think about what comes to your mind. Right. So it’s about, you know, taking risks and not being afraid to start over again. I had someone come up to me yesterday and say, hey, if their business is on a certain trajectory, but now if they have to move over to AI, if it meant starting over all over again, what would I recommend to them? And I said, start over all over again. Right. Because there’s a certain trajectory that if you are in, it is all about projecting where you will be five years from now, 10 years from now.

And if the new trajectory gets you further ahead, you know, by the way, even if you fail at that, it’s better to shoot for the stars and miss versus doing a path that feels achievable but doesn’t have the stretch in it. And of course, with that comes a lot of assumptions about both, you know, the financial ability and the support that you have from your family to do it. But if you have all of that, certainly go out and stretch to that. So that, I would say, is what has been my journey and the inspiration for AI Kiran as well, in that in all of the different roles that you mentioned, I was often the only woman in the room, certainly in the single-digit percentage at the max.

And I think all of the women in this room relate. And it has been about incredible male allies who also helped us get to the roles that we are in. And so that becomes a position of privilege and responsibility to give it forward. And then that’s when I met Lakshmi, the fearless Lakshmi, who has been a pioneer in her own reinvention. She was the OG technologist, a connector, a real builder, and someone who’s focused on scaling human potential, just like everyone else here. So Lakshmi, tell us your story, and I can’t wait to hear the stories

Lakshmi Pratury

of everyone else here on this panel. Yeah, so, you know, for me, when I sit here today in 2026, in 1994, 93, 94, we were talking about internet is going to be big, and people would say, okay, what is this? You know, how will anybody make any money in this, etc. So what we are looking at now is nothing new. This is just a reinvention of things we’ve been seeing for the last 50 years, you know. So I’ve been at Intel, I’ve been a venture capitalist, I’ve been, you know, in philanthropy, all kinds of things. But what brought us together to AI Kiran is that for the last 15 years, my work has been about finding amazing people, doing amazing work and get them to tell their stories, because we only hear about the 10 famous people, but innovation is happening everywhere.

So how do you find them, connect them, and get them to tell their stories, and teach people how to tell their stories? So I brought TED to India. I mean, I worked at Intel, venture capital, all kinds of stuff. And then I decided that for the last 15 years, my journey is going to be how to create a platform to showcase the amazing talent that’s there in India and across the globe that doesn’t get told. So that’s what I’ve been doing. So when I met Kirthiga, and I look at the AI revolution, there is an amazing opportunity for us to do it right from the beginning. In every revolution, the industrial revolution, we messed up the environment, the rivers and everything.

200 years later, we are like, okay, let’s clean it up. Even in the Internet revolution, you know, we have the problems with social media, you know, mental illnesses, all kinds of things, good and bad. But technology is growing. It’s great, actually. You can’t fight it. So how can we be part of this, to make it inclusive from the word get-go? That’s what excites us in this journey. And as she was saying, we have no idea how to do this. You kind of say, we are going to do this. And it’s amazing, like in six months, the kind of progress we had. As she said, you know, one of the things

Kirthiga Reddy

you must say, Kirthiga, about the ChatGPT thing. Yeah, you know, I shared this when we started AI Kiran. And by the way, this is the most you’re going to hear from both of us, because for the rest of the session, we are going to hear from our incredible panelists. And when we started, if you went to ChatGPT and said, can you tell me about 100 women in AI in India, it would tell you 10 women, right? And it would tell you that I cannot answer this question, and these are some sources that you can look at. And so over that period of time, we launched with 250 named women. So right there, we added a zero. Now it’s an incredible community of, you know, 10,000 women who are all taking this on their own, rallying it, self-organizing.

And so we’re going to talk a little bit about that: creating new ventures. We just heard about Dark.ai, which Komal is doing, helping tailors and fashion designers use AI. Awesome, right? So if we can create a platform to even add a little bit of that oomph and make it bigger, faster, bolder, I mean, we have done our jobs. And so, yeah, we have already added two zeros to the first number that ChatGPT had. It’s just about, you know, as they say in a startup, add a zero, add a zero, and we’ll be at that million. So with that, let’s jump in.

Lakshmi Pratury

So talking about scaling human potential, I have to start the conversation with Radha. I’ve known Radha… I mean, I first knew Radha from reading about her in Silicon Valley. She was one of the first people who brought HP to India, in 1987, before anybody thought of the technology corridor. So we used to read about her. And then she started iSupport, which is a software company, one of the first unicorns in the Bay Area. I was still reading about her. And then through a common friend, Chitra, I met her, and she has been a great friend for the last 25 years. And we’re talking about somebody who reinvents herself all the time to benefit the community she has been in, whether it is HP or whether it’s iSupport.

And now she does something called iMerit. And Radha, before, instead of me saying what you do, the kind of work that you do in scaling humanity, the humans in the loop, has been amazing. So tell us about how many people you have, what you’re doing, and what does scaling really mean for

Radha Basu

you? Actually, indeed. I said, indeed. You know, we’ve had quite a journey together. And sorry, by the way, I have to say, you know, when you say Hewlett and Packard, she actually worked with them. Yes, so Lakshmi really dates me as well, but that’s okay. I worked with Andy Grohwal, so let’s really date ourselves. I grew up in HP. I’m originally from Madras, and that’s where I did my engineering. And you talk about being a woman in the room: we were 17 girls and 2,800 boys in engineering. And I really had the opportunity, and I went to get my master’s in the US, and just kind of fell into working at HP Labs, which was one of the most prestigious places then. And the beauty of HP was David Packard particularly; he really, literally, did the management by wandering around.

You would run into him everywhere, open offices, and ended up being a mentor. I was so fortunate. The two of them created Silicon Valley, the Silicon Valley we talk about today. And then when I had the opportunity by sheer, I ended up in Europe, ran medical products group for HP in Europe. And then at that point, I was like, okay, I’m going back to the U .S. What am I going to be doing? And there’s this whole thing about what is happening in that country of yours. It’s so behind in electronics, the country of mine, India. And I was so kind of enraged by that comment. I said, that’s rubbish. We have the best mathematicians, all of which is true.

So David Packard said, okay. I’ll give you three months. Go and figure out what you can do in India. And I tell you, it was the greatest opportunity because I ended up in this beautiful garden, sleepy town called Bangalore. And growing up in Chennai was, of course, I’d gone to Bangalore for my holidays. And the talent of, you know, when you’re working with computers, I used to actually at that time, I was working on multi-threaded Unix. And the kind of talent and what you could develop anyway, it wasn’t three months. That extended to about five and a half years. And I set up Hewlett Packard in Bangalore. And the first two multinationals of anybody doing software in India was Texas Instruments and HP.

The other most amazing thing at that time, I think even more amazing, was that those were the years that Infosys started, HP started, Wipro started, TCS started. And I was just a little bit more experienced. So we celebrated together the first million dollars of software export from India, jointly, and I remember doing that million in 1989. I bring this up because within a lifetime you can see an industry, technologies, completely transform a large country with a growing middle class. There is no question that India is the global leader in IT. No question about that. So now you fast forward and you come to AI.

AI in turn is changing IT, changing it in ways we never believed were even possible. So we started in IT, and we're still doing it. It will be 10 years in AI for us in April. And that's what's fun about being in the IT industry early on. You say, well, what's happening to this multi-billion-dollar industry? And we started in AI, so we've kind of had a ringside seat. When you start something early enough, you also go through its issues. We have done a lot of work in computer vision, not just on the language model side but on the computer vision side.

And I'll talk a little bit about it. At this point we have a little over 10,000 people working in AI, 3,500 or so in India, and most of our AI talent in India is in-house. Many of them are AI Kiran folks. We also decided to set up AI centers not in the metros, because remember, what transformed India was the work in Bangalore, Noida, Gurgaon, Chennai, et cetera. How do you take AI, the new technologies, and not have a divide? The last thing I want is that five or ten years from now we're all discussing how to bridge the AI divide, the way we've been discussing how to bridge the digital divide.

If we could bridge it now, which is what I love about AI Kiran, then you are in charge and you grow and you scale. So we started setting up centers in Calcutta, Vizag, Coimbatore, Hubli, Shillong. Each center has now become a center of excellence in a particular area. We work in four areas, actually, and it was wonderful to hear this yesterday. We work in autonomous mobility and robotics, which is our largest business, and there we have people in Kolkata and Metiabruz. Vizag is the center of excellence for healthcare and medical AI. And Coimbatore, not because I'm Tamil, though this is the first thing I've done in Tamil Nadu, is our first automotive AI center of excellence in Asia.

And the way that center has grown, you know. Our generative AI work is primarily in Kolkata and Shillong; it's in multiple places. And we work with the large foundation model companies. So then what do you do? You can focus and focus on the large models, but how do you take that into applications for precision agriculture, breast cancer screening, healthcare AI, the different areas that are so critical for societal applications? So we work with the foundation model companies to create what are called small models, small vision models, small language models, and then we fine-tune those models and work on reinforcement learning with human feedback and red teaming, which means challenging the models, what we call tormenting the models, because how else do you find out whether a model works or not?

You torment the darn thing. You torment it till you break it. That's an actual technical term, let me tell you. You torment it till you break it. Okay? And then you do the data set creation to make it right. For that we bring in experts, scholars, we call them globally, and you can't just do this in English: PhDs in mathematics, cardiologists, radiologists, interventional something-or-others, I found out more about all of this, agronomists in Germany, agronomists in the U.S. And this is where the beauty and the potential of AI comes in. It's not potential for me anymore; it's real. It's a 10-year-old company. We run a fairly large business.

It's cash positive, earnings positive, it's got all of that stuff. But it's the business that drives the inclusion. Bringing in these experts means you are not only inventing technology in Silicon Valley, Bangalore, et cetera; you have a global set of people and experts contributing to AI. So let me end with one thing, which I know is most important. It's 53% women. 53% women. And I'll say one thing, I believe in this, really. If anybody asks you, how do you run a company with 50-50 women, look them straight in the eye and say, have you seen the world lately? It's about

Kirthiga Reddy

50-50. And there should be no reason why AI technology cannot be 50-50. Thank you. Beautiful. Well, that sets the stage beautifully for maybe the next question to Mihir here, where you're a new author, and this is a topic that you spend a lot of

Mihir Shukla

I applaud the vision, because as Radha rightly said, the idea of getting in at the beginning is the right idea, and I'm a big fan. The book that you referred to is called The Five-Year Century. It isn't available yet; it will be soon, and it's available for pre-order. Because we are going to see the change that normally happens in a hundred years within the next five years. Now, fortunately, we have seen this kind of change at least once before in the recent past: around 1900, when electricity, radio, the automobile, and planes came in about the same 10-15 year time frame. Imagine you went to sleep and woke up 10 or 15 years later; it would look like a different planet in every way possible.

It worked out for the most part. So we are about to see all of that now in five years. In my role as CEO and chairman of Automation Anywhere, we see the world through a very unique lens. We have nearly half a billion digital workers powered by AI running on our platform today, and it will reach a billion soon. The human worker to digital worker ratio is 1 to 20; there are 20 digital workers for every human worker. And it is happening across 90 countries. We have customers in 90 countries, across every industry. So when we saw all of this, we decided to write a book and tell the world what is happening, what is coming, and what the leader's playbook looks like.

Kirthiga Reddy

Incredible. And maybe, Ashna, we’ll go to you. You represent another iconic company, AMD, that has been at the heart of this revolution. As we think about scaling human potential and as we think about the opportunity globally and in India, what do you feel are the limiting factors or the enablers? Is it talent? Is it capital? Is it compute? How do they interact?

Speaker 1

You know, just to build off what both Radha and Mihir said, there is no debate about the transformational nature of what we're all experiencing. But change inherently is always personal. When you see people who are super positive or super negative about something, it's because they're internalizing what they think it means to them or to the world around them. So when you talk about infrastructure or assets or how the world is shifting, these are all going to happen. As a generation, we have relied, maybe not exclusively but at least extensively, on human intelligence. Human intelligence has done the most brilliant things, but human intelligence has also made way for artificial intelligence.

And it's about a model of coexistence that has to evolve as this develops. From an infrastructure perspective, our goal is to make sure that the innovation we as humans have ambition for is fully supported in what we build and deliver for the world. That's simply put. How that ambition gets realized ultimately is in our hands, right? So I think that's where we have the ability to shape it early, the ability to drive success, and the ability to learn from history. Now, I will say that having been a history buff growing up, you always want people to learn from history, but people never do.

And so you should just expect that this is going to be the most interesting time we will live through and experience. I would encourage and challenge everyone to make the most of it, of what it means to them personally and how they drive it. That's my personal view. We will continue as a company to build the best technology out there, to support and drive and be the best partner to the businesses we work with. But ultimately, it comes down to the ambition each of us sets, each corporation sets, each organization sets, on the kind of change you want to drive.

Lakshmi Pratury

No, I think it's beautifully put. It's very different people coming together: people in hardware, software, services, training, all kinds of things. That's why, for us at AI Kiran, bringing together very diverse forces is important, and one of the biggest things for us is youth. When we look at AI Kiran, we ask what we can do to get women into the fold, what we do with youth, and how we make sure it's safe for youth and furthers knowledge. It's not just consumption but creation. We have a program called the Fellows Program: every year we pick 20 amazing people from different disciplines and put them together. I always say that's a great way to adopt children without carrying them.

So we have over 250 of them, and Anurag is one of our INK fellows. He runs something called Manzil Mystics, and the reason we wanted to bring in this perspective is that working with youth and creativity is extremely important. So Anurag, you've been in the music field for a long time, and right now you're working with over 60,000 children across 900 schools, teaching music. They have taken on one thing: they said, we're going to teach music. And you actually have a van that you take to different schools to teach them. Tell me a little bit about how you teach things like intellectual property and human rights through music.

So tell us about that journey a little bit.

Anurag Hoon

Yeah, and thanks, Lakshmi. Lakshmi, I remember seeing this dream with you eight years ago. She used to see my eyes glittering when I talked about this mobile music school, and it's a reality now. And what I would say is, for me, it's less about AI and more about HI, heart intelligence, because we're talking about humans, and what makes us human is a heart pumping and keeping us alive. The heart is learning all the time. I personally saw this in my own story. I grew up in a low-income family in Delhi, studied in a government school, got 52 percent, hence no college. I started learning music, and within a year I had started my band. I was in the U.S., in Seattle, learning marketing and sales. How did that happen? It happened because music helped us learn those things. Because I create my original songs based on the ideas of Kabirji and Gandhiji, the first worry was: what if someone steals my work? It was very easy for anyone to just take your idea, or take your song and sing it in a movie or on a stage. So one of the first things we teach, alongside how to sing, write, compose, and perform a song, is that children don't lose the rights to what they create.

Also, that they don't steal anyone else's property. I had been doing that myself, translating some English song, putting Hindi lyrics on it, and making myself look cool. But I was like, no. AI is there to help us learn things. So we made sure that every time we go into a classroom and a child learns to write or compose a song, they know that intellectual property is a thing, and that it is a big career opportunity right now. All the streaming platforms have made sure that people like me who create a song get the royalty. But if I create a song through AI, they don't get money. So we always say, if it's for fun, then sure, create a song through AI,

but on all the streaming platforms you cannot earn money. So we always say, if you want

Kirthiga Reddy

to earn money, create a song on your own. Absolutely. Well, one, it's so amazing to see the diversity represented on this panel. And start thinking of your questions; we're going to do one more prepared question, but then we want to hear what's on your mind as well. So let's bring us back to this historic India AI Summit. Many announcements are being made by the panelists, and by our attendees here as well. On my end, you'll certainly see a number of AI announcements; that has already started, and we'll come to that in a few minutes as well. So there are AI announcements, and there's also my startup, Optimize Geo. Having helped brands with the move to mobile and social and being relevant there, with Optimize Geo, and I have to say, in India, it's generative engine optimization, GEO, not Jio, we are helping brands stay relevant, because business decisions and consumer decisions are being made off of questions asked of ChatGPT, Perplexity, and the like. So that's the platform, and we have a bunch of announcements there. But Ashna, maybe coming to you: we are sitting here at this historic India AI Summit, and if access to advanced chips is going to determine who can build powerful models, are we at

Speaker 1

service intelligence and complement it with artificial intelligence. And that complementing of artificial intelligence is about how you then have a thoughtful strategy, as a country, as a company, as a startup, to build the compute layer and the compute investment structure that gives you the outcomes you need, the ones that complement the human ambition and the human scale you want to achieve. So I think it's both. I don't believe it will be a limiting factor for those that want to move fast; you just have to be creative with the resources you have. And believe me, we're building as fast and as quickly as we can to meet all the demand we have. So Radha, continuing on that, about being faster here than anywhere else: you've been doing this for 10 years, before AI was a fashionable word.

What kind of investments do you think are needed to move people up the value chain of AI in India? And maybe, Mihir, you can comment on the same, because I think it's a really important question.

Radha Basu

Right. So if you look at the three parts of investment in AI, think of it as a triangle, right? And we've heard a lot of this. There's the AI itself, let's call it the technologies, the models, all that stuff: the OpenAIs, the Anthropics, the Google DeepMinds, et cetera. Then there's all the infrastructure, and you've heard all the announcements, multi-billion dollars of infrastructure. And then there is AI intelligence, and that is the human intelligence. It's the nexus of the technology, the infrastructure, and the human intelligence that really scales AI. So yes, should we be worried about AI taking away jobs? I think we should. But to me, it's not taking away jobs.

It's how the jobs are evolving. So you ask me, what are the investments needed? And this is where, really, I mean, yesterday I felt very hopeful, and I feel hopeful with young people anyway. The average age of our company is 24.5. You would never know that looking at me, and the sassiest people in my company would say, if it was not for you, it would be 23. And these are sassy young people from all over India, not necessarily from the cities, not the IITians. So what is changing with AI? You can take young people from a variety of different backgrounds. We had some young people come into the iMerit booth and say, we're in commerce, or we're in something else: how do we become AI people?

How do we become AI people? that transformation or that it’s like an equation how do you take a large number of young people and you i don’t want to use the word skilling but they become ai ready so you want to make data ai ready you want to make young people ai ready and you want to make the infra ai ready when you do all those three things the daunting scales the second thing i would say is and this came out quite a bit in the discussions yesterday whether it was daria speaking from anthropic or definitely you know from google sundar talked about this what are the applications of ai that is where we are today we’ve got the large models and of course they are scaling how do you apply it to the big picture and how do you apply it to the big picture and how do you apply it to the precision agriculture because if you do and you can you can actually catch crop failure We work with people like John Deere, and you catch the crop failure in an area like this.

You've saved the entire field, and they have seen an immense amount of production increase because of that. If you look at breast cancer screening for women, and you can screen people everywhere, breast cancer in Indian women versus Asian versus Caucasian versus Black women is very different, and the smaller models are very different because the parameters are different. If you can use that, then you're starting to get AI into societal applications and then into enterprise AI, because that's where the big business is; any technology scales and gets adopted when the enterprises start to use it: accounting, legal, et cetera. So that is the investment that's needed. Handing it over to

Mihir Shukla

you, Mihir. I think I'll cover it in two different dimensions. The first is: focus on applied AI. Especially for India, it's good to develop the models, but not to blindly chase this model race that's happening in the Western world. Because if you look at history, the printing press was invented in Germany; the Dutch used it and became a superpower for a few hundred years. The Industrial Revolution: France had all the parameters to succeed, but a small island, England, used the Industrial Revolution in every aspect of its economy and became Great Britain. The point being, you don't have to invent a technology; the success lies in applying that technology in every aspect of the economy, and that is India's superpower if it focuses on it.

You have 18 or so different industrial hubs, and in each of them you can very specifically apply applied AI, like the automotive AI I heard about, right? Can you create global competitiveness with those models? That's where the primary investment has to go, and it can completely change the economic outlook. The second place investment needs to go is inclusion. I think this is a remarkable technology that can include a vast number of people who were previously not included in the digital economy. Think about the first computers, with an English keyboard: you can't include 90% of the world's population with that interface. Then came the mobile phone, which got a little easier. Now you have a technology where anybody can talk and easily participate. So this is the time to include everybody.

And one of the things AI Kiran and we are doing together: we announced a partnership where, over the next five years, we will together train a million women and youth on AI and automation. And I think both sides have to happen. We need to drive economic growth, and we have to make sure we

Kirthiga Reddy

And tell us a little bit about the wonderful work you do as you also ask your question.

Audience

Hi, my name is Anupama. I am one of the AI Kiran members. Professionally, I'm a data scientist, now moved to a technical lead role where I'm helping a lot of banking and financial institutions come up with AI and automation solutions. So I work a lot on building POCs and use cases, and at the end of them, building strategies for them to come up with enterprise solutions. My question is basically to you, Anurag, and it's a little bit of a personal question. We're talking about AI, we're talking about scaling, we're talking about a new world. My question is: what is it that we as parents should be teaching our kids for the next 15 years, when the world is fully automated?

They are already in an AI age, because they're growing up in an AI age. They see AI, they hear it, they do it; in fact, they're doing everything, right? So what are those skills that we as parents should be teaching our kids to be ready for the next 15 years in the AI-automated age? What is it that we should be thinking through at this point? Of course, skills will come at a later stage, but what are the blind spots that we don't see, as parents, as people, or as professionals who are busy building solutions, busy automating things? There's a human factor out there, right? After 15 years, what is it that is going to make them stay

Anurag Hoon

That's a brilliant question. Our OG mama is here, so she can answer better. I guess, when I became an INK fellow, I became a father too. My son is eight years old; I've been an INK fellow for eight years. Before that, for two years, I learned to be a father, then I became a father. So I feel the onus is more on us. This is very personal to me: the five senses, and then the nine emotions, the Navarasas, and India has a lot of literature on this. As a parent, I just made sure that my son understands the five senses and nine emotions, and then I plan everything according to that. It's not that I don't want to give a phone to my son, or don't want him to do this or that; my son does all those things. One of the things he started doing was celebrating his birthday out on the street, or going somewhere, and sensing all these things. So I guess, the five senses and nine emotions: if we know them, we can

Speaker 1

So basically you are saying that we need to be consistent with the five senses and nine emotions, and ensure that this is there at the same time, in parallel, even as we are automating and bringing in AI. And you should send us that list of five senses and nine emotions, and we'll share it with the community as well. I feel fairly strongly about this: I think we need to teach all our kids resilience. They need to learn to fail. They need to know it's okay to fail. They need to know life is not easy, that life is unfair, and they need to learn to survive and thrive and be happy in it.

So if they have resilience, they will survive all these changes. And I think that's where you see kids struggle: when kids don't know how to be resilient, that's when they struggle. When you teach them early, and you're there as a support mechanism for them to go through those experiences, they will learn. And especially with the change we're about to experience, even we don't know. I mean, we can sit here and speculate as a panel about what the world is going to look like in 15 years; we don't know.

Radha Basu

That's really great. Can I add one thing to this? She asked the question as a parent; I'm going to talk as a grandparent. Age comes with this, right? And it wasn't so much them asking me, it was me asking them. This is my nephew, and I said, Nikki, what is it you think you've learned at school? He's a junior, in 11th grade, just one more year to go. And he said, partying. Then I said, what do you think we should be teaching kids at school? And he said, look, I don't think there is anything you can really teach us at this point, because AI is beyond all the parents, which I actually agree with. We started talking about it, and the thing that came out is being a curious learner. He said, and I'm repeating to you not what I said to him but what he said to me: whatever I learned last year, I wanted to go into computer science at Stanford, it was my biggest dream, hey, that's not going to get me a job. I have to do different things, I have to know what's going on. Resilience, I totally agree; I loved your answer, because it came completely from the heart and from the brain and from nature. But this thing of knowing what's around you, learning about it, being curious about it, that is really important in AI.

You keep pinging it to make it better, right? And this is part of the intelligence. So: thinking critically, curious learners. None of this is going to happen by keeping phones away from them. They're going to learn to be curious because they want to be, and go out into nature and learn it, or wherever. And go out into the field: if you want to work in precision ag and a kid from the city doesn't know anything about it, or in medical, whatever the thing is. So to me, those are the answers.

Mihir Shukla

I was going to quickly say there are three elements, in my opinion. The first is: in my generation, we only studied one subject, like computer engineering. Today, people are going to do multiple things, right? Combining. And when you combine, the possibilities are limitless. Someone who studies rock climbing and video game design, when that person creates a rock climbing video game, it will be the most authentic experience you could ever get, right? Things like that: neuroscience and medicine, there are just unimaginable possibilities. So that is on the career side, how you progress. The second thing: I asked my daughter, just like you said, you ask them, and what she said stayed with me. She said that in the future there are no worker bees.

There are only queen bees. I loved it. I loved the spirit of it, the empowerment it embodies, the ambition it embodies. And I think if everybody had that mindset, amazing things are possible.

Kirthiga Reddy

You know, I think what we are going to do is combine the questions. We're going to hear a bunch of questions, and then the panelists can pick whichever they want to answer. Okay, question.

Mihir Shukla

Sorry, I remembered the third one. I think the third thing is the power of the question. This amazing thing, AI, that we have: it knows many answers, but it doesn't know what the right questions are, and it is not likely to know anytime soon. So having the right questions matters. I'll share an example from our family dining table. I do this experiment to learn from the younger generation, because they instinctively know what the future is, right? They know better than us. So we made a rule once and said, we're going to talk about a subject, and you can't tell me anything that Alexa, Siri, or ChatGPT would tell me. So they said, okay, bring it on, Dad.

I said, okay. So we talked about some bird with the longest wingspan or something. And they said, Dad, that's not fair; the question itself is biased towards ChatGPT. Ask me something that is, they said it nicer than this, but: ask me something that is worthy of a human. So I said, okay. Patagonia has an ecological imbalance after all the industrial development. Restoring it is a complex thing: how do you introduce new species, and how do you bring it back to its original state? That dinner conversation went on for three days. Various tools were used, and together, as a family, we came up with a plan for what it would look like if you were to do it.

Now imagine those conversations happening on every dining table, if we have the right questions.

Kirthiga Reddy

Amazing. All right, just a quick smattering of questions, and then we'll have the panel take them. Yes, yeah, question.

Audience

Hi, I'm the founder of an AI company; we work with global higher education institutions. I actually led my life very differently: when I went to IIT, I never studied, and I look back and say I did the best thing. So my question is this: with the disruption that we are seeing, and with IQ pretty much taken over by AI, the only thing we are left with is EQ. And if you look at the education system here, we hold on to things: some exam, some college, then some company, and eventually we grow. So I want to hear from you how you would disrupt at the fundamental levels, K-12 going into higher education, to support what is happening.

Disrupting education at fundamental levels. Hi, I'm Hemendra. I teach AI and sustainability at IIM Udaipur. A quick question: a lot of countries are now banning social media because the harm is obvious, because we didn't have the guardrails and things came along. AI is only going to amplify the harms, just like it's going to amplify the power. We heard about the power. So the guardrails that we need for the younger generation, as a follow-up to his question: what do we need to protect our kids from the harm while we give them the power? Thank you so much. This is Anjali. I represent Tech Mahindra.

Throughout my career I have been connecting the left brain with the right brain, the creative with technology. Hence AI is very close to my heart as well. So, ma'am, the question I have for you is this: today, people are getting overwhelmed and confused. There are so many platforms coming up, so many methods coming up, right? If somebody has to ideate and strategize what to learn first and what next, how do they go about it? Because so much is happening; there's a lot of panic around it. Last question: Bina, AI Kiran member, part of ServiceNow, also part of a women-for-ethical-AI movement. How do we build trust in the internet?

Earlier you had provenance via people curating the internet. Now people are turning to ChatGPT and trusting those answers over human answers. How do you build provenance? That's the main question. Last question, sorry. I do grassroots AI training with people, but their digital gap is still very high; I have to go back and teach them tech before I can teach them AI. How do you solve this? And because we are having a lot of conversations around the youth and the power of the question, et cetera: all of you are full of wisdom about the application of AI and what you bring to the lens. How do you crunch that wisdom cycle for the youth? Because at the end of the day, it is the information age.

There's so much information coming. Facts, figures, information, you're brilliant at. But how do you build that wisdom to infer, to ask the right questions, and to interpret? Can you crunch that wisdom?

Speaker 1

So we're going to give about 30 seconds to each of the panelists as they close. I mean, on learning, you just start. There are incredible tools; it's amazing how quickly the tools and the capabilities to learn in this space have developed, and how fast you can learn. If any of you were paying attention to the series of events, TCS did a hackathon where they got women from all over the country, 1,200 women, right? Non-native English speakers, from different walks of life. And in four hours, they were able to do brilliant things with the training they received. These are the kinds of things.

So the potential is massive. So I think anybody, and that’s the beauty of what we have in AI and the positive side, is you can just take it and run with it and learn. I’m an optimist. So I believe you can always find what doesn’t work. But I think if more of us focus on what works, we can do a lot more good a lot faster than what can go wrong. This is the only panel where nobody has asked us about impact on jobs. So I’ll leave it at that.

Mihir Shukla

I'll take the aspect of how you teach them digital skills and then AI skills. I think there is an opportunity here to skip some of the old digital skills, because they weren't very friendly. In the new world, we have seen, and I'll give you a few examples: we trained 700 women in Africa for six weeks, and 500 found a job within a week. We trained people in the Mississippi Delta, which is a very poor part of the U.S. There was somebody flipping burgers at $12 an hour; six weeks later, they had a $120,000 job in AI. This is a technology that doesn't require two- or four-year training courses, which the people who need it most normally can't afford.

So the fact that you can provide this kind of mobility on this technology is arguably the best thing about this technology.

Anurag Hoon

I guess for EQ, education disruption, the way to work on EQ is the arts, and that's a disruption. I'm glad AI came. Invest India just shared that 3 trillion Indian rupees is going to be the market size for the media and entertainment sector, because the world has recognized that if you want EQ to be developed, invest in the arts. Live music classes are increasing drastically, and that's where I come in. We can talk more about this.

Radha Basu

So for me, this is not a question for the future. 72% of the people who work at iMerit come from very low-income backgrounds and come from the hinterland of India. So it is absolutely possible. You start as though they haven't been skilled before at all. AI is new; there's no baggage. It's wonderful. As Mihir said, there's a whole new way of doing it. They use AI to learn, and what they learn, they teach AI. It's really interesting to watch. You have young women in a place called Metiabruz who came from tailoring and those kinds of backgrounds. Why are they a center of excellence for computer vision?

Because the focus on single-pixel-level accuracy is something they have, and they learn the other skills. I would finish by saying we also have a foundation called Anudip, and it is really working across the country. 630,000 young men and women, 50% of them women, have been skilled in AI literacy: just knowing what AI is and how to use it. I know it doesn't address the IRM part, but I'm just talking about the grassroots level. And when they learn how to work with AI, it's not an enigma. The last thing I would say, and I love my industry colleagues: it's harder to take somebody from industry and train them in AI, because they have to unlearn first, than to take a young person out of some place in Odisha, from a second-tier town or even a village, and skill them in how to work with AI. So it's the nexus of AI technology and that human, and I'm such a believer that we can do this here.

Kirthiga Reddy

Awesome. So with that, let's give a huge round of applause for our panelists. If you want to get involved in the AI movement around women, youth, and the differently abled, we have incredible leaders here, Prerna, Neha Vaibhav; they are truly the heart and soul of making this happen, so let's hear it for them. Please ask them your questions, and feel free to come grab any one of us if your questions didn't get answered. I just want to end by saying that if you can work with people smarter than you, if you can work with people half your age and twice as smart, I think we'll find all the answers. Thank you so much for giving your time. Thank you.

Related Resources — Knowledge base sources related to the discussion topics (33)
Factual Notes — Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Kirthiga Reddy described the AI Kiran community as spanning from Mumbai to Himachal Pradesh.”

The transcript snippet [S2] includes a direct reference to the AI Kiran community and asks participants where they are from, mentioning Mumbai, confirming the community’s presence in Mumbai (though it does not mention Himachal Pradesh).

Additional Context (medium)

“Reddy acknowledged the role of male allies and said her position is a “privilege and responsibility to give it forward.””

The knowledge base entry [S110] highlights the importance of male allies in supporting women’s advancement, providing additional context that aligns with Reddy’s statement about male allies.

Additional Context (low)

“The panel emphasized building inclusive AI solutions that reach women in rural and marginalized communities.”

The source [S106] discusses the need for inclusive AI that addresses the informal workforce, specifically women in rural, tribal, and extreme-poverty settings, adding nuance to the panel’s inclusive-AI narrative.

External Sources (120)
S1
Inclusive AI Starts with People Not Just Algorithms — – Kirthiga Reddy- Lakshmi Pratury
S2
https://dig.watch/event/india-ai-impact-summit-2026/inclusive-ai-starts-with-people-not-just-algorithms — All right, yeah, we’ll all get seated since we have all our fellow panelists seated here. So I’m just so excited to be h…
S3
Inclusive AI Starts with People Not Just Algorithms — – Mihir Shukla- Anurag Hoon
S4
Comprehensive Report: “Factories That Think” Panel Discussion — – Mihir Shukla- Thani Ahmed Al Zeyoudi
S5
Inclusive AI Starts with People Not Just Algorithms — – Mihir Shukla- Anurag Hoon – Radha Basu- Mihir Shukla
S6
Inclusive AI Starts with People Not Just Algorithms — – Kirthiga Reddy- Lakshmi Pratury
S7
S8
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S9
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S10
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S11
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S12
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S14
https://dig.watch/event/india-ai-impact-summit-2026/keynote-rajesh-subramanian — Ask not why, but why not. Question all ways of thinking. Take risks and embrace change as an opportunity for exploration…
S15
Agents of Change AI for Government Services &amp; Climate Resilience — “So agile regulation.”[22]. “If the regulatory framework is able to change, if we can change that, then we are not afrai…
S16
Scaling Innovation Building a Robust AI Startup Ecosystem — -Arita Dalan: Role – Representative of SecurTech IT Solutions Private Limited; Area of expertise – Cybersecurity solutio…
S17
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S18
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Major discussion point 5: Startup success stories illustrating the impact of ecosystem support
S19
Agenda item 7 : adoption of annual progress reports / Agenda item 6 : other matters/ Closure of the session — The Chair commended Iran’s flexibility, casting a positive light on the effort to reach consensus, indicative of a colla…
S20
Presentation of outcomes to the plenary — This exceptional attendance highlights the critical urgency and importance placed on global supply chain issues in today…
S21
(Day 2) General Debate – General Assembly, 79th session: morning session — Mbumba highlights Namibia’s progress in achieving gender equality and emphasizes the importance of women’s empowerment. …
S22
What policy levers can bridge the AI divide? — **The Philippines** developed their strategy with strong presidential leadership and multi-agency collaboration. They’ve…
S23
GermanAsian AI Partnerships Driving Talent Innovation the Future — The industry response has been to move beyond traditional guest lectures towards comprehensive engagement models. Dr. Az…
S24
AI for Safer Workplaces & Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S25
AI for Social Good Using Technology to Create Real-World Impact — Absolutely. That’s one of the exciting things. It’s very exciting. Yeah. I’m being told that we’re going to have to wra…
S26
AI and ethics in modern society — Humanity’s rapid advancements in robotics and AI have shifted many ethical and philosophical dilemmas from the realm of …
S27
Séance d’ouverture : « La gouvernance internationale du numérique et de l’IA : à la croisée des chemins ? » — Tomas Lamanauskas Merci beaucoup. J’ai quelques commentaires d’une certaine manière. Tout d’abord, je pense que, comme q…
S28
Prosperity Through Data Infrastructure — Lastly, the analysis emphasises the need for investment in both technology and people. The importance of investing in tr…
S29
Artificial General Intelligence and the Future of Responsible Governance — This comment challenges the dominant narrative that AGI development is primarily about compute power and technical infra…
S30
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — De Vusser emphasizes the need for strategic investments in AI talent development and building trust in AI systems. This …
S31
UNSC meeting: Conflict prevention: women and youth — China:Mr. President, I welcome your presence presiding of the meeting today. I thank you as DiCarlo and Ambassador Danes…
S32
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S33
Focus shifts to improving AI models in 2024: size, data, and applications. — Interest in artificial intelligence (AI) surged in 2023 after the launch of Open AI’s Chat GPT, the internet’s most reno…
S34
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — And that’s what we’re doing. And that’s what we’re doing. And that’s what we’re doing. And that’s what we’re doing. prio…
S35
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S36
Book launch: What changes and remains the same in 20 years in the life of Kurbalija’s book on internet governance? — Development | Economic | Sociocultural Jovan proposes a structured approach to AI risk assessment that prioritizes imme…
S37
How AI Drives Innovation and Economic Growth — Rodrigues emphasizes that while early AI discussions were dominated by fear about job displacement and technological thr…
S38
The Innovation Beneath AI: The US-India Partnership powering the AI Era — And that makes it more difficult, not less difficult, I think, to be an investor because you have more mature products. …
S39
How to make AI governance fit for purpose? — Anne Bouverot: Thank you so much, Gabriela. Thank you for this. I’m lucky to go first because by the time everyone has s…
S40
Shaping the Future AI Strategies for Jobs and Economic Development — But the good thing is humans want touch. So that’s good. But, you know, there will be a lot of revolution in terms of te…
S41
Empowering Workers in the Age of AI — Juan Ivan Martin Lataix: There are people online too. It’s open. We are conducting this session among the many others th…
S42
AI/Gen AI for the Global Goals — Chido Cleopatra Mpemba: Thank you, everyone. First of all, my apologies for being late. This is my fifth event for the …
S43
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — ## Areas of Consensus and Implementation Approaches ## Gender Gaps and Inclusive Approaches Cosmas Luckyson Zavazava: …
S44
Policy Network on Artificial Intelligence | IGF 2023 — A program specifically designed for children aged 6 to 17 is implemented to develop their cognitive skills with technolo…
S45
Generative AI: Steam Engine of the Fourth Industrial Revolution? — It is evident that there is an urgent need for partnerships with governments to modify basic education in order to meet …
S46
WS #376 Elevating Childrens Voices in AI Design — Dr. Mhairi Aitken: Maybe I could just pick up on, I guess, how this relates to the growth of AI companions and gender di…
S47
Rethinking Africa’s digital trade: Entrepreneurship, innovation, &amp; value creation in the age of Generative AI (depHub) — In summary, the analysis raises critical concerns regarding data protection, privacy, and ethical considerations. It und…
S48
Artificial Intelligence & Emerging Tech — It is crucial to examine how data is gathered and the ethical considerations involved. The models and frameworks used in…
S49
WS #119 AI for Multilingual Inclusion — Claire van Zwieten: Absolutely. I don’t think Aida is able to speak, so I’ll speak on her behalf. But the Internet So…
S50
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — Importance is given to policy formation, advocating for women to be integral from the start of policy development, not r…
S51
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S52
Inclusive AI Starts with People Not Just Algorithms — The AI Kiran initiative exemplifies this proactive approach to inclusion through a powerful demonstration of how human a…
S53
Policy Network on Artificial Intelligence | IGF 2023 — A notable observation from the analysis is the emphasis on AI education for children. A program specifically designed fo…
S54
Safeguarding Children with Responsible AI — “Because curiosity is there in every child.”[71]. “What huge loss would that be for humanity if we suddenly have childre…
S55
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Collaborative efforts are necessary to ensure the correct implementation of technology in mental health support for chil…
S56
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — India focuses on smaller models for specific use cases rather than chasing trillion-parameter models
S57
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — 95% of work can be accomplished with 20-50 billion parameter models. ROI comes from deploying lowest cost solutions for …
S58
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Economic | Development Rather than following historical patterns of automation that replace workers, AI development sho…
S59
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — Overall, the analysis provides a comprehensive overview of the different aspects of technology and education, highlighti…
S60
DCAD & DC-OER: Building Barrier-Free Emerging Tech through Open Solutions — Despite coming from different perspectives (technology developer and audience member), both emphasize the need for a hol…
S61
Diplomatic policy analysis — Policy analysis serves as the backbone of diplomacy’s decision-making. It equips leaders and negotiators with the eviden…
S62
Placing learners at the center — A comprehensive examination of the interplay between technology and education reveals a sophisticated and multi-faceted …
S63
OpenAI economist shares four key skills for kids in AI era — As AIreshapesjobs and daily life, OpenAI’s chief economist, Ronnie Chatterji, teaches his children four core skills to h…
S64
Responsible AI for Children Safe Playful and Empowering Learning — “The child needs to have a basic level of literacy to be able to engage with language models.”[15]. “So for our young pe…
S65
Youth-Driven Tech: Empowering Next-Gen Innovators | IGF 2023 WS #417 — Has sustained professional reinvention and personal adversity Innovation is essential for progress and is often synonym…
S66
Contents — Historical experience suggests that economies where innovation thrives can overcome the preceding challenges and re-inve…
S67
ACKNOWLEDGEMENTS — Governments, regulators, industry players, NGOs, academics and decisionmaking bodies have a critical role to play in sha…
S68
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort, Vice President Consulting at CGI in the Netherlands, supported this view: “Regulation does not hamper innov…
S69
Policymaker’s Guide to International AI Safety Coordination — Okay. Given this remarkable panel and the very short time we have, let me very briefly frame our discussion and get righ…
S70
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — It has been observed that most online users in the Global South are male, which suggests a lack of diverse representatio…
S71
Inclusive AI Starts with People Not Just Algorithms — The AI Kiran initiative exemplifies this proactive approach to inclusion through a powerful demonstration of how human a…
S72
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S73
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — This discussion focused on creating gender-inclusive data policies and a more equitable data future in Africa. Panelists…
S74
The Future of Innovation and Entrepreneurship in the AI Era: A World Economic Forum Panel Discussion — Salman bin Khalifa Al Khalifa This advice was particularly powerful because it directly addressed the paralysis that ca…
S75
The Innovation Beneath AI: The US-India Partnership powering the AI Era — And that makes it more difficult, not less difficult, I think, to be an investor because you have more mature products. …
S76
Artificial intelligence (AI) and cyber diplomacy — The speaker argued for balanced attention across short-term, mid-term, and long-term AI risks, cautioning against fixati…
S77
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Sarim Aziz:Thank you, Babu, for the opportunity. I think this is a very timely topic. There’s been a lot of debate aroun…
S78
What policy levers can bridge the AI divide? — **The Philippines** developed their strategy with strong presidential leadership and multi-agency collaboration. They’ve…
S79
Shaping the Future AI Strategies for Jobs and Economic Development — But the good thing is humans want touch. So that’s good. But, you know, there will be a lot of revolution in terms of te…
S80
GermanAsian AI Partnerships Driving Talent Innovation the Future — The industry response has been to move beyond traditional guest lectures towards comprehensive engagement models. Dr. Az…
S81
Bridging the Digital Skills Gap: Strategies for Reskilling and Upskilling in a Changing World — ## Areas of Consensus and Implementation Approaches ## Gender Gaps and Inclusive Approaches **Additional speakers:** …
S82
Upskilling for the AI era: Education’s next revolution — Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stage I spoke about skills. I…
S83
We are the AI Generation — Doreen Bogdan Martin: Thank you. Good morning and welcome to Geneva for the AI for Good Global Summit 2025. I want to th…
S84
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Dr. Abdurrahman Habib from Saudi Arabia shared remarkable results from their Women Elevate programme, which exemplifies …
S85
How AI Is Transforming Indias Workforce for Global Competitivene — Education, Upskilling, and Training Initiatives
S86
Generative AI: Steam Engine of the Fourth Industrial Revolution? — The skills necessary for the future include adaptability, technology embracement, agility, robust skill sets, and future…
S87
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future workforce needs different skills including critical thinking, judgment capabilities, and empathy when working wit…
S88
WS #376 Elevating Childrens Voices in AI Design — Dr. Mhairi Aitken: Maybe I could just pick up on, I guess, how this relates to the growth of AI companions and gender di…
S89
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S90
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S91
Scaling Innovation Building a Robust AI Startup Ecosystem — The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with the awards cer…
S92
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S93
Launch / Award Event #159 Book Launch Netmundial+10 Statement in the 6 UN Languages — The tone was consistently celebratory, appreciative, and forward-looking throughout the session. Participants expressed …
S94
Building Future Leaders – Competency Driven Succession Planning — Maria Edera Spandoni: Thank you very much. Last reflection, suggestion. I’ll try to bring this up to something beyond…
S95
WS #343 Revamping decision-making in digital governance — Audience: Thank you very much. My name is Anne McCormick. I lead global digital policy for EY. We’re active in the globa…
S96
DYNAMIC COALITIONS MAIN SESSION — Audience:Thank you. I’m Woro from the National Library of Indonesia. I just want to add that from the library perspectiv…
S97
WS #64 Designing Digital Future for Cyber Peace &amp; Global Prosperity — The speaker emphasizes the need for a governance framework that caters to the lowest common denominator. They stress the…
S98
Closing Session  — Minister Tijani’s comment solidified the proactive framework as the summit’s core achievement and elevated the discussio…
S99
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S100
Using AI to tackle our planet’s most urgent problems — The tone is passionate and advocacy-driven throughout, with the speaker maintaining an urgent, morally-charged perspecti…
S101
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S102
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S103
AI Without the Cost Rethinking Intelligence for a Constrained World — -Participant: Multiple audience members who asked questions during the panel discussion
S104
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — The first half of the panel discussion was allocated to gather panelist’s perspectives.
S105
AI Infrastructure and Future Development: A Panel Discussion — -Audience- Audience member asking a question
S106
Building Inclusive Societies with AI — This comment powerfully challenges the panel’s assumptions about who constitutes the ‘informal workforce.’ It forces a r…
S107
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — Believes in saving herself and taking what she wants Finally, one speaker encourages individuals to view perceived obst…
S108
IN CONVERSATION WITH BIRAME SOCK — 4. Start small, iterate, and don’t fear failure – view it as a learning opportunity. As a woman and an African in the…
S109
07 — As the EQUALS Research Group has pointed out, ‘it is important to act in an inclusive manner so as not to alienate the m…
S110
WS #166 Breaking Barriers: Empowering Women in Internet Network — The importance of male allies in supporting women’s advancement was noted. Speakers also highlighted the transformative …
S111
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Don Gotterbarn:There is a basic problem in the way we interact with AI. We had a Secretary of State in the US who made w…
S112
Adobe buys SEO firm Semrush to boost AI-powered marketing — Adobe and Semrush haveagreedon a definitive all-cash transaction in which Adobe will acquire Semrush for US$12 per share…
S113
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — The presentation demonstrates strong internal consistency around key themes: leveraging India’s structural advantages fo…
S114
Keynote Address_Revanth Reddy_Chief Minister Telangana — This distills complex geopolitical strategy into a simple but profound framework. It moves beyond the typical discussion…
S115
Generative AI and Synthetic Realities: Design and Governance | IGF 2023 Networking Session #153 — Diogo Cortiz:I totally agree with Heloisa about her intervention. So I would like to switch a little bit my comments reg…
S116
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — To illustrate this concept, she provided compelling examples of biological intelligence in action. The immune system dem…
S117
Design Beyond Deception: A Manual for Design Practitioners | IGF 2023 Launch / Award Event #169 — Dark patterns are found in online experiences ranging from e-commerce apps to social media and fintech services Dark pa…
S118
AI in education: Leveraging technology for human potential — Kevin Mills: Hello. It’s an incredible honor to be here with you today. The last UN gathering I attended was almost exac…
S119
The AI gold rush where the miners are broke — The rapid rise of AI has drawn a wave of ambitious investors eager to tap into what many consider the next major economi…
S120
https://dig.watch/event/india-ai-impact-summit-2026/the-innovation-beneath-ai-the-us-india-partnership-powering-the-ai-era — Yes, thank you. So super excited. This week we announced in partnership with the Office of Principal Scientific Advisory…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
K
Kirthiga Reddy
3 arguments · 176 words per minute · 1339 words · 455 seconds
Argument 1
AI Kiran’s rapid growth to 10,000 women demonstrates the power of community building (AI Kiran growth – Kirthiga Reddy)
EXPLANATION
Kirthiga highlights that AI Kiran expanded from a modest list of 250 women to a vibrant community of 10,000 members, showing how a focused network can quickly scale. The growth illustrates the effectiveness of collective action in amplifying women’s presence in AI.
EVIDENCE
She explains that when they first queried ChatGPT for women in AI it returned only ten names, so they launched with 250 named women and have since grown to an “incredible community of, you know, 10,000 women” [61-64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Kiran initiative started with 250 women and expanded to a community of 10,000, as documented in an inclusive AI case study [S1].
MAJOR DISCUSSION POINT
AI Kiran’s rapid growth to 10,000 women demonstrates the power of community building (AI Kiran growth – Kirthiga Reddy)
AGREED WITH
Radha Basu, Mihir Shukla
Argument 2
“What would you do if you weren’t afraid?” frames risk‑taking as a catalyst for change (Fearless question – Kirthiga Reddy)
EXPLANATION
Kirthiga uses the rhetorical question to encourage participants to imagine bold actions without fear, positioning risk‑taking as essential for personal and technological breakthroughs. She ties this mindset to the need for starting over in AI.
EVIDENCE
She repeats the phrase “what would you do if you weren’t afraid?” several times, presenting it as a guiding question for the discussion [12-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on embracing risk and asking “why not” to drive AI innovation aligns with the discussion on risk-taking in AI contexts [S14].
MAJOR DISCUSSION POINT
“What would you do if you weren’t afraid?” frames risk‑taking as a catalyst for change (Fearless question – Kirthiga Reddy)
AGREED WITH
Lakshmi Pratury
Argument 3
Supporting niche ventures like Dark.ai illustrates ecosystem‑wide impact (Startup support – Kirthiga Reddy)
EXPLANATION
Kirthiga points to the example of Dark.ai, a startup helping tailors and fashion designers adopt AI, to show how AI Kiran’s ecosystem nurtures diverse, sector‑specific innovations. This underscores the broader impact beyond large tech firms.
EVIDENCE
She mentions hearing about Dark.ai, which helps tailors and fashion designers use AI, as an example of the kinds of ventures the community wants to amplify [65-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reports on AI startup ecosystems and global partnership events highlight how targeted support for niche ventures amplifies ecosystem impact [S16][S18].
MAJOR DISCUSSION POINT
Supporting niche ventures like Dark.ai illustrates ecosystem‑wide impact (Startup support – Kirthiga Reddy)
R
Radha Basu
5 arguments · 139 words per minute · 2565 words · 1099 seconds
Argument 1
Achieving a 53% women workforce showcases intentional gender parity (Gender parity 53% – Radha Basu)
EXPLANATION
Radha notes that her company has deliberately built a workforce where women constitute a slight majority, demonstrating a concrete commitment to gender balance in tech. This metric serves as a benchmark for other organisations.
EVIDENCE
She states, “It’s 53% women. 53% women,” emphasizing its significance [175-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender-balanced participation metrics, such as 51% of interventions led by women at major forums, provide context for achieving a 53% women workforce [S19][S21].
MAJOR DISCUSSION POINT
Achieving a 53% women workforce showcases intentional gender parity (Gender parity 53% – Radha Basu)
AGREED WITH
Kirthiga Reddy, Mihir Shukla
Argument 2
Pioneering HP’s entry into India highlights early‑stage tech leadership (Early‑tech leadership – Radha Basu)
EXPLANATION
Radha recounts leading the establishment of Hewlett‑Packard’s first software operations in Bangalore, marking a seminal moment in India’s IT evolution. Her story illustrates how early‑stage leadership can catalyse an entire industry.
EVIDENCE
She describes HP’s arrival in India in 1987 and her role in setting up HP Bangalore, noting that HP and Texas Instruments were the first multinationals doing software in India and that she celebrated the first million-dollar software export in 1989 [74-75][103-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Historical accounts note David Packard’s directive to establish HP’s software operations in India in the late 1980s, underscoring early-stage leadership [S1].
MAJOR DISCUSSION POINT
Pioneering HP’s entry into India highlights early‑stage tech leadership (Early‑tech leadership – Radha Basu)
Argument 3
Establishing AI centers in tier‑2 cities prevents an urban‑AI divide (Decentralized AI centers – Radha Basu)
EXPLANATION
Radha explains that AI Kiran deliberately set up AI centers outside major metros—such as in Calcutta, Vizag, Coimbatore, Hubli, and Shillong—to ensure that AI capabilities are distributed across the country. This strategy aims to avoid a concentration of AI talent only in urban hubs.
EVIDENCE
She lists the locations of the new centers (Calcutta, Vizag, Coimbatore, Hubli, Shillong) and notes that each has become a centre of excellence in a specific domain [133-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy examples such as the Philippines’ creation of AI centers for marginalized regions illustrate the rationale for tier-2 AI hubs [S22].
MAJOR DISCUSSION POINT
Establishing AI centers in tier‑2 cities prevents an urban‑AI divide (Decentralized AI centers – Radha Basu)
AGREED WITH
Kirthiga Reddy
Argument 4
Deploying AI in healthcare, precision agriculture, and vision showcases real‑world benefits (AI societal applications – Radha Basu)
EXPLANATION
Radha details how AI Kiran applies AI to critical sectors such as medical imaging, precision farming, and autonomous robotics, demonstrating tangible societal impact. These applications illustrate AI’s potential to improve health outcomes and agricultural productivity.
EVIDENCE
She cites work in autonomous mobility, healthcare medical AI in Vizag, automotive AI in Coimbatore, and generative AI in Kolkata, and describes projects like precision agriculture for crop-failure detection and breast-cancer screening using small, fine-tuned models [145-152][280-283].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-for-social-good initiatives in medical imaging, precision farming, and computer vision are highlighted as tangible societal impacts [S25].
MAJOR DISCUSSION POINT
Deploying AI in healthcare, precision agriculture, and vision showcases real‑world benefits (AI societal applications – Radha Basu)
Argument 5
AI investment is a triangle of technology, infrastructure, and human talent (Investment triangle – Radha Basu)
EXPLANATION
Radha frames AI investment as requiring three inter‑dependent pillars: cutting‑edge models, robust compute infrastructure, and skilled human expertise. Balancing these three elements is essential for scaling AI responsibly.
EVIDENCE
She outlines the “triangle” of AI technologies, infrastructure, and human intelligence, stating that the nexus of these three scales AI [259-264].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses stress the need to invest simultaneously in AI models, compute infrastructure, and skilled talent as a balanced growth strategy [S28][S30].
MAJOR DISCUSSION POINT
AI investment is a triangle of technology, infrastructure, and human talent (Investment triangle – Radha Basu)
AGREED WITH
Mihir Shukla
Mihir Shukla
3 arguments · 155 words per minute · 1199 words · 463 seconds
Argument 1
Partnership to train one million women and youth scales empowerment (Million‑women training – Mihir Shukla)
EXPLANATION
Mihir announces a joint initiative with AI Kiran to educate a million women and young people in AI and automation over the next five years, highlighting the scale of the empowerment effort. This partnership aims to bridge skill gaps and foster inclusive growth.
EVIDENCE
He states that AI Kiran and his organisation have announced a partnership to train a million women and youth on AI and automation within five years [284-296].
MAJOR DISCUSSION POINT
Partnership to train one million women and youth scales empowerment (Million‑women training – Mihir Shukla)
Argument 2
Prioritizing applied AI over chasing model size maximizes economic impact (Applied AI focus – Mihir Shukla)
EXPLANATION
Mihir argues that India should focus on applying AI to solve real problems rather than competing in the global race to build ever‑larger models. He likens this to historical examples where adopting a technology was more important than inventing it.
EVIDENCE
He explains that India should develop applied AI, citing the historical impact of the printing press, radio, automobile, and planes, and warns against blindly chasing the model race [284-288].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy discussions advocate focusing on applied, task-specific AI models rather than pursuing ever-larger models [S32][S33].
MAJOR DISCUSSION POINT
Prioritizing applied AI over chasing model size maximizes economic impact (Applied AI focus – Mihir Shukla)
Argument 3
Training initiatives for women and youth demonstrate rapid, low‑barrier skill acquisition (Rapid skill‑up – Mihir Shukla)
EXPLANATION
Mihir shares examples of short, intensive training programmes that quickly placed participants into well‑paid AI jobs, showing that AI skills can be acquired without long‑term formal education. These cases illustrate the speed and accessibility of AI‑driven upskilling.
EVIDENCE
He describes six-week training programmes for 700 women in Africa and 500 women in the Mississippi Delta in the U.S., after which many secured high-paying AI positions, demonstrating rapid, low-barrier skill acquisition [433-436].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A six-week program that trained 700 women, with most securing high-paying AI roles, exemplifies rapid, low-barrier skill acquisition [S4].
MAJOR DISCUSSION POINT
Training initiatives for women and youth demonstrate rapid, low‑barrier skill acquisition (Rapid skill‑up – Mihir Shukla)
Lakshmi Pratury
3 arguments · 172 words per minute · 903 words · 314 seconds
Argument 1
Platform to surface untold talent amplifies hidden voices (Talent‑showcase platform – Lakshmi Pratury)
EXPLANATION
Lakshmi describes her work over the past 15 years of discovering and promoting talented individuals whose stories are rarely heard, thereby creating a platform that brings hidden innovators to the fore. This effort expands the narrative beyond the well‑known few.
EVIDENCE
She says her work has been about “finding amazing people, doing amazing work and get them to tell their stories” because only a few famous people are heard, and she built a platform to showcase this talent [38-40].
MAJOR DISCUSSION POINT
Platform to surface untold talent amplifies hidden voices (Talent‑showcase platform – Lakshmi Pratury)
Argument 2
Continuous career reinvention illustrates the value of starting over (Career reinvention – Lakshmi Pratury)
EXPLANATION
Lakshmi reflects on her own professional journey—from early internet optimism to roles at Intel, venture capital, philanthropy, and now AI Kiran—showing how repeatedly reinventing oneself can drive impact. She frames reinvention as a response to evolving technological landscapes.
EVIDENCE
She recounts being part of the early internet era in 1994, then moving through Intel, venture capital, and philanthropy, and finally focusing on a platform to showcase talent over the last 15 years [34-36][42-44].
MAJOR DISCUSSION POINT
Continuous career reinvention illustrates the value of starting over (Career reinvention – Lakshmi Pratury)
AGREED WITH
Kirthiga Reddy
Argument 3
Fellows program nurtures 250+ multidisciplinary youth innovators (Fellows program – Lakshmi Pratury)
EXPLANATION
Lakshmi outlines AI Kiran’s Fellows Programme, which selects around 20 promising individuals each year from diverse disciplines, providing them with mentorship and resources. Over time the programme has supported more than 250 fellows, fostering youth innovation.
EVIDENCE
She mentions the Fellows Programme, noting that “Every year we pick 20 amazing people… we have over 250 of them” and cites Anurag as an example of a fellow [222-224].
MAJOR DISCUSSION POINT
Fellows program nurtures 250+ multidisciplinary youth innovators (Fellows program – Lakshmi Pratury)
AGREED WITH
Audience, Speaker 1, Anurag Hoon
Anurag Hoon
1 argument · 142 words per minute · 686 words · 288 seconds
Argument 1
Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon)
EXPLANATION
Anurag argues that teaching children about the five senses and nine emotions provides a balanced, human‑centric foundation that complements technological education. This holistic approach nurtures emotional intelligence alongside technical skills.
EVIDENCE
He explains that as a parent he ensures his son understands “the five senses and nine emotions” and structures activities around them, emphasizing their importance for development [312-319].
MAJOR DISCUSSION POINT
Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon)
AGREED WITH
Audience, Speaker 1, Lakshmi Pratury
Audience
3 arguments · 164 words per minute · 825 words · 300 seconds
Argument 1
Parents should teach resilience, curiosity, and emotional awareness to thrive in an AI age (Resilience and EQ – Audience)
EXPLANATION
An audience member asks what skills parents should instill—resilience, curiosity, and emotional awareness—to prepare children for a future dominated by AI and automation. The question highlights the perceived need for soft‑skill development alongside technical literacy.
EVIDENCE
The audience member asks, “what are the skills that we as parents should be teaching our kids to be ready… what are those blind spots…?” emphasizing resilience, curiosity, and emotional awareness [304-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasizing curiosity and resilience through risk-taking questions is recommended for AI empowerment and future-ready education [S14].
MAJOR DISCUSSION POINT
Parents should teach resilience, curiosity, and emotional awareness to thrive in an AI age (Resilience and EQ – Audience)
AGREED WITH
Speaker 1, Anurag Hoon, Lakshmi Pratury
Argument 2
Protecting children from AI‑driven harms while granting them power requires robust safeguards (Child safety guardrails – Audience)
EXPLANATION
Another audience participant raises concerns about the potential harms AI could pose to children and calls for strong guardrails to ensure safe usage while still empowering youth. This underscores the need for policy and technical safeguards.
EVIDENCE
The audience states, “A lot of countries are now banning multimedia because the harm is obvious… AI is only going to amplify the harms… what do we need to protect our kids from the harm while we give them the power?” [382-389].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ethical AI frameworks and agile regulatory approaches are proposed to safeguard children while enabling AI-enabled empowerment [S26][S15].
MAJOR DISCUSSION POINT
Protecting children from AI‑driven harms while granting them power requires robust safeguards (Child safety guardrails – Audience)
AGREED WITH
Speaker 1
Argument 3
Building provenance and trust in information sources counters over‑reliance on AI answers (Provenance trust – Audience)
EXPLANATION
The audience asks how to re‑establish trust and provenance in information when people increasingly rely on AI outputs like ChatGPT, highlighting the need for mechanisms that verify source credibility.
EVIDENCE
They ask, “How do we build provenance? Earlier you had people curating the internet… now people trust ChatGPT over human answers. How do you build provenance?” [395-401].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Governance recommendations stress the need for provenance mechanisms and trust-building in AI-generated content [S30][S26].
MAJOR DISCUSSION POINT
Building provenance and trust in information sources counters over‑reliance on AI answers (Provenance trust – Audience)
AGREED WITH
Speaker 1
Speaker 1
3 arguments · 170 words per minute · 987 words · 346 seconds
Argument 1
Building resilience and the ability to learn from failure prepares children for rapid change (Resilience learning – Speaker 1)
EXPLANATION
Speaker 1 stresses that teaching children to be resilient, to accept failure, and to persist is essential for navigating the fast‑paced transformations driven by AI. Resilience is presented as a core life skill for future success.
EVIDENCE
He says, “They need to learn resilience… they need to learn to fail… they need to survive and thrive… if they have resilience they will survive all these changes” [316-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Resilience and learning from failure are highlighted as essential competencies for navigating AI-driven transformation [S14].
MAJOR DISCUSSION POINT
Building resilience and the ability to learn from failure prepares children for rapid change (Resilience learning – Speaker 1)
AGREED WITH
Audience, Anurag Hoon, Lakshmi Pratury
Argument 2
Access to advanced chips and compute resources is a strategic enabler, not a blocker (Compute access – Speaker 1)
EXPLANATION
Speaker 1 argues that while compute infrastructure is crucial, it should not be viewed as a limiting factor; instead, innovators can be creative with existing resources to meet demand. This perspective frames compute as an enabler rather than a barrier.
EVIDENCE
He notes that “you just have to be creative with the resources you have… I don’t believe it will be a limiting factor for those that want to move fast” when discussing the compute layer and investment structure [250-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses describe compute as an enabler, emphasizing creative use of existing resources alongside infrastructure investment [S28][S30].
MAJOR DISCUSSION POINT
Access to advanced chips and compute resources is a strategic enabler, not a blocker (Compute access – Speaker 1)
Argument 3
Embedding ethical considerations and human values ensures responsible AI deployment (Ethical AI principles – Speaker 1)
EXPLANATION
Speaker 1 emphasizes that beyond building technology, organisations must align AI development with ethical standards and human values to guarantee responsible use. This call for ethical AI underpins trustworthy deployment.
EVIDENCE
He concludes, “to build the best technology… but ultimately, it comes down to the ambition each of us sets,” implying a responsibility to embed ethical ambition in AI work [306-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI ethics literature and regulatory guidance call for integrating ethical principles and human values into AI development [S26][S15].
MAJOR DISCUSSION POINT
Embedding ethical considerations and human values ensures responsible AI deployment (Ethical AI principles – Speaker 1)
AGREED WITH
Audience
Agreements
Agreement Points
Scaling women’s participation in AI through community building, workforce parity, and large‑scale training initiatives.
Speakers: Kirthiga Reddy, Radha Basu, Mihir Shukla
AI Kiran’s rapid growth to 10,000 women demonstrates the power of community building (AI Kiran growth – Kirthiga Reddy) Achieving 53 % women workforce showcases intentional gender parity (Gender parity 53% – Radha Basu) Partnership to train one million women and youth scales empowerment (Million‑women training – Mihir Shukla)
All three speakers highlight that increasing female representation in AI can be achieved by building strong communities, setting internal gender-balance targets, and launching large-scale training programmes, moving from a modest list of 250 women to a 10,000-member community and aiming to train a million women and youth [61-64][175-176][284-296].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with global gender-inclusion policies that call for women’s early involvement and mentorship in tech, as highlighted by the Internet Society’s commitment to bridge the digital divide and promote women in the Internet space [S49] and by empowerment frameworks that extend beyond basic training to include mentorship and policy participation for women [S50]; the AI Kiran initiative further demonstrates community-driven scaling of women’s representation in AI [S52].
Risk‑taking and career reinvention are essential catalysts for personal and technological progress.
Speakers: Kirthiga Reddy, Lakshmi Pratury
“What would you do if you weren’t afraid?” frames risk‑taking as a catalyst for change (Fearless question – Kirthiga Reddy) Continuous career reinvention illustrates the value of starting over (Career reinvention – Lakshmi Pratury)
Both emphasize that questioning fear and repeatedly starting anew enable breakthrough innovations, with Kirthiga urging “what would you do if you weren’t afraid?” and Lakshmi describing her own multiple reinventions across the early internet, Intel, VC and AI Kiran [12-15][34-36][42-44].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of continuous reinvention is echoed in youth-driven tech empowerment narratives that link personal adversity to innovation, and historical analyses stress that economies thrive when individuals pursue risk-taking and skill renewal [S65][S66].
Decentralising AI capacity to avoid an urban‑AI divide.
Speakers: Radha Basu, Kirthiga Reddy
Establishing AI centers in tier‑2 cities prevents an urban‑AI divide (Decentralized AI centers – Radha Basu) The last thing I want is… bridge the AI divide (AI divide – Kirthiga Reddy)
Both stress the need to spread AI research and training beyond metros, with Radha describing new centres in Calcutta, Vizag, Coimbatore, Hubli and Shillong and Kirthiga warning against a future AI divide, urging proactive decentralisation [133-140][136-137].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions at the AI Governance Dialogue emphasized unprecedented inclusivity and the need to avoid concentration of AI resources in urban centers, mirroring calls to bridge the digital divide and include women and marginalized groups in AI development [S51][S49].
Investing in applied AI and sector‑specific use cases rather than chasing ever‑larger models maximises economic impact.
Speakers: Mihir Shukla, Radha Basu
Prioritizing applied AI over chasing model size maximises economic impact (Applied AI focus – Mihir Shukla) AI investment is a triangle of technology, infrastructure, and human talent (Investment triangle – Radha Basu)
Mihir argues India should focus on applying AI to real problems, while Radha frames investment as a balanced triangle that includes building applications, illustrating consensus on channeling resources toward practical AI deployments rather than model-size races [284-288][259-264][145-152][280-283].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs from India advocate focusing on smaller, task-specific models for higher ROI and sustainable AI diffusion, arguing that large-parameter models are not necessary for most economic applications [S56][S57][S58].
Developing resilience, curiosity and emotional intelligence in children is essential for thriving in an AI‑driven future.
Speakers: Audience, Speaker 1, Anurag Hoon, Lakshmi Pratury
Parents should teach resilience, curiosity, and emotional awareness to thrive in an AI age (Resilience and EQ – Audience) Building resilience and the ability to learn from failure prepares children for rapid change (Resilience learning – Speaker 1) Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon) Fellows program nurtures 250+ multidisciplinary youth innovators (Fellows program – Lakshmi Pratury)
Multiple participants converge on the need to cultivate soft skills (resilience, curiosity, emotional awareness and holistic development) through programmes, mentorship and parental guidance to equip youth for AI transformation [304-311][316-324][312-319][222-224].
POLICY CONTEXT (KNOWLEDGE BASE)
Expert recommendations identify curiosity, resilience, and emotional intelligence as core competencies for children, with UNICEF and OpenAI highlighting these traits as vital for safe and empowered AI interaction [S54][S63][S59].
Ensuring ethical safeguards, child protection and provenance of information is critical as AI becomes a primary source of knowledge.
Speakers: Audience, Speaker 1
Protecting children from AI‑driven harms while granting them power requires robust safeguards (Child safety guardrails – Audience) Building provenance and trust in information sources counters over‑reliance on AI answers (Provenance trust – Audience) Embedding ethical considerations and human values ensures responsible AI deployment (Ethical AI principles – Speaker 1)
Both highlight the necessity of guardrails, provenance mechanisms and ethical principles to protect children and maintain trust in AI-generated content [382-401][306-311].
POLICY CONTEXT (KNOWLEDGE BASE)
International guidelines stress data protection, privacy, and provenance, and UNICEF policy guidance calls for robust child-rights safeguards when deploying AI for education and health [S47][S48][S55][S64].
Similar Viewpoints
Both argue that AI progress depends on balanced investment in practical applications, compute infrastructure and skilled people rather than on building ever‑larger models [284-288][259-264].
Speakers: Mihir Shukla, Radha Basu
Prioritizing applied AI over chasing model size maximises economic impact (Applied AI focus – Mihir Shukla) AI investment is a triangle of technology, infrastructure, and human talent (Investment triangle – Radha Basu)
Both see confronting fear and repeatedly starting anew as essential drivers of personal growth and technological innovation [12-15][34-36][42-44].
Speakers: Kirthiga Reddy, Lakshmi Pratury
“What would you do if you weren’t afraid?” frames risk‑taking as a catalyst for change (Fearless question – Kirthiga Reddy) Continuous career reinvention illustrates the value of starting over (Career reinvention – Lakshmi Pratury)
Unexpected Consensus
Emphasis on emotional and holistic development from both technology leaders and an arts‑focused educator.
Speakers: Anurag Hoon, Audience
Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon) Parents should teach resilience, curiosity, and emotional awareness to thrive in an AI age (Resilience and EQ – Audience)
While the panel largely focused on AI technology and scaling, both an AI-focused panelist and a music-education practitioner converged on the importance of emotional intelligence and holistic child development, an unexpected cross-disciplinary agreement [312-319][304-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Holistic education frameworks argue that learning should centre on the whole learner, integrating arts and emotional development alongside technology, as advocated in inclusive education initiatives [S60][S62].
Agreement on the need for child‑focused guardrails despite a strong emphasis on rapid AI deployment.
Speakers: Audience, Speaker 1
Protecting children from AI‑driven harms while granting them power requires robust safeguards (Child safety guardrails – Audience) Embedding ethical considerations and human values ensures responsible AI deployment (Ethical AI principles – Speaker 1)
Even as speakers promoted fast-moving AI initiatives, they jointly stressed the necessity of safeguards for children and provenance of information, revealing an unexpected alignment between growth-oriented and protection-oriented perspectives [382-401][306-311].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent IGF workshops and responsible AI guidelines underline the necessity of child-centric safeguards (trust, transparency, privacy) even as governments pursue fast AI rollout, noting that clear regulatory guardrails can accelerate safe innovation [S54][S55][S64][S68].
Overall Assessment

The panel shows strong consensus on four pillars: (1) gender‑inclusive scaling of AI participation, (2) the role of risk‑taking and reinvention, (3) decentralising AI capacity to avoid urban divides, (4) focusing investment on applied AI and ethical safeguards while nurturing resilience and emotional intelligence in youth.

High – The repeated convergence across speakers from different sectors (industry, academia, arts, policy) suggests a shared vision that can drive coordinated policy and programme actions to promote inclusive, responsible AI development.

Differences
Different Viewpoints
Allocation of AI investment – applied AI solutions versus building large models and infrastructure
Speakers: Mihir Shukla, Radha Basu
Prioritizing applied AI over chasing model size maximizes economic impact (Applied AI focus – Mihir Shukla) AI investment is a triangle of technology, infrastructure, and human talent (Investment triangle – Radha Basu)
Mihir argues that India should concentrate on applying AI to solve concrete problems and avoid competing in the global race to build ever-larger models, citing historical examples of technology adoption [284-288]. Radha counters that a balanced investment across cutting-edge models, compute infrastructure, and skilled people is essential to scale AI responsibly, describing a three-pillar “triangle” approach [259-264]. The two positions differ on where limited resources should be directed – toward immediate applications or toward foundational model and infrastructure development.
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence from policy analyses shows that allocating resources to low-cost, domain-specific AI yields higher economic returns than investing in massive model infrastructure, supporting a shift toward applied AI investment [S56][S57][S58][S68].
What core competencies children need to thrive in an AI‑driven future
Speakers: Audience member (Resilience & EQ), Anurag Hoon, Speaker 1
Parents should teach resilience, curiosity, and emotional awareness to thrive in an AI age (Resilience and EQ – Audience) Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon) Building resilience and the ability to learn from failure prepares children for rapid change (Resilience learning – Speaker 1)
The audience stresses soft-skill development (resilience, curiosity and emotional intelligence) as the priority for children growing up with AI [304-311]. Anurag proposes a more holistic, human-centric curriculum focused on sensory awareness and emotional vocabulary (five senses, nine emotions) [312-319]. Speaker 1 also highlights resilience and learning from failure as essential, but does not mention the sensory-emotional framework [316-324]. The three speakers agree that non-technical skills matter, yet they diverge on which specific competencies should be foregrounded.
POLICY CONTEXT (KNOWLEDGE BASE)
Authoritative reports list critical thinking, adaptability, emotional intelligence, and financial numeracy as essential skills for children navigating AI, reinforcing curriculum recommendations for AI-focused education programs [S63][S53][S54][S64].
Need for child‑focused AI guardrails versus emphasis on rapid empowerment
Speakers: Audience member (Child‑safety guardrails), Kirthiga Reddy
Protecting children from AI‑driven harms while granting them power requires robust safeguards (Child safety guardrails – Audience) Focus on scaling AI participation and community building without explicit mention of safeguards (Fearless question – Kirthiga Reddy)
An audience participant calls for strong protective measures to prevent AI-related harms to children, highlighting the urgency of guardrails [382-389]. Kirthiga, throughout her remarks, stresses bold risk-taking and rapid community scaling (e.g., “what would you do if you weren’t afraid?”) without addressing safety mechanisms for minors [12-15][52-53]. The tension lies between a precautionary approach for minors and an acceleration-focused mindset.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions balance rapid AI empowerment with child protection, emphasizing that safeguards such as privacy, transparency, and age-appropriate content are integral to responsible deployment [S54][S55][S64][S68].
Strategies for achieving gender parity in AI – community‑driven scaling versus internal workforce composition
Speakers: Kirthiga Reddy, Radha Basu
AI Kiran’s rapid growth to 10,000 women demonstrates the power of community building (AI Kiran growth – Kirthiga Reddy) Achieving 53 % women workforce showcases intentional gender parity (Gender parity 53% – Radha Basu)
Kirthiga highlights the expansion of the AI Kiran network from 250 to 10,000 women as evidence that a focused community can quickly increase female participation in AI [61-64]. Radha points to her own organization’s internal composition, where women already constitute a slight majority (53%), and presents this as a concrete gender-balance outcome [175-176]. Both aim for gender inclusion but propose different pathways: external community mobilisation versus internal hiring and retention policies.
POLICY CONTEXT (KNOWLEDGE BASE)
Gender-parity strategies highlighted in multiple sources advocate community-based training, mentorship, and policy participation as complementary to internal hiring practices, reflecting a broader policy push for inclusive AI ecosystems [S49][S50][S52].
Unexpected Differences
Human‑centric versus technology‑centric focus for child development
Speakers: Anurag Hoon, Mihir Shukla
Emphasizing five senses and nine emotions grounds children in holistic development (Five senses & nine emotions – Anurag Hoon) Prioritizing applied AI over chasing model size maximizes economic impact (Applied AI focus – Mihir Shukla)
Anurag argues that nurturing children’s human intelligence (HI), grounded in sensory and emotional awareness, is the primary foundation for future success, effectively downplaying the role of AI in education [235-238]. Mihir, by contrast, stresses that the strategic priority for the nation is to apply AI to drive economic growth, implying that technical AI skills should be central to future curricula [284-288]. The clash between a heart-centric developmental philosophy and a technology-centric economic strategy was not anticipated given the overall AI-focused agenda of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
Educational policy literature stresses placing learners at the centre and adopting holistic, inclusive approaches rather than technology-first models, aligning with calls for human-centric AI integration in schooling [S60][S62][S53].
Overall Assessment

The panel displayed modest but meaningful disagreement across four thematic axes: (1) the optimal allocation of AI investment (applied solutions vs. model‑centric infrastructure), (2) the specific non‑technical competencies children should acquire (resilience vs. sensory‑emotional grounding), (3) the balance between rapid empowerment and protective guardrails for minors, and (4) the preferred mechanism for achieving gender parity (community scaling vs. internal hiring). While participants largely shared common aspirations—greater inclusion, skill development, and responsible AI—their divergent pathways reflect differing priorities between immediate economic impact, holistic human development, and safety considerations.

The disagreements are moderate; they do not fracture the panel but highlight distinct strategic lenses. Their implications are significant: policy makers must reconcile investment choices (applied AI vs. foundational model development), design education frameworks that integrate both resilience and emotional intelligence, embed child‑safety safeguards alongside empowerment initiatives, and combine community‑building with internal gender‑balance policies to achieve inclusive AI ecosystems.

Partial Agreements
Both speakers share the goal of increasing women’s representation in AI, but Kirthiga advocates scaling through a large external community network, whereas Radha emphasizes achieving parity within her own company’s workforce through hiring and retention practices [61-64][175-176].
Speakers: Kirthiga Reddy, Radha Basu
AI Kiran’s rapid growth to 10,000 women demonstrates the power of community building (AI Kiran growth – Kirthiga Reddy) Achieving 53 % women workforce showcases intentional gender parity (Gender parity 53% – Radha Basu)
Both aim to equip young people with AI‑related capabilities. Mihir focuses on short, intensive bootcamps that quickly place participants into high‑paying jobs [433-436], while Lakshmi describes a longer‑term Fellows Programme that selects 20 individuals annually and has supported over 250 fellows [222-224]. The shared objective is youth empowerment, but the delivery models differ (rapid bootcamps vs. structured fellowship).
Speakers: Mihir Shukla, Lakshmi Pratury
Training initiatives for women and youth demonstrate rapid, low‑barrier skill acquisition (Rapid skill up – Mihir Shukla)
Fellows program nurtures 250+ multidisciplinary youth innovators (Fellows program – Lakshmi Pratury)
Takeaways
Key takeaways
AI Kiran has rapidly grown to a community of 10,000 women, demonstrating the impact of focused community building and mentorship.
Achieving gender parity (53% women) in the workforce is possible with intentional hiring and leadership support.
A partnership has been announced to train one million women and youth in AI and automation over the next five years.
Risk‑taking and the mindset of “What would you do if you weren’t afraid?” drive personal reinvention and scaling of potential.
Early‑stage tech leadership (e.g., pioneering HP’s entry into India) shows the value of being at the front of emerging technologies.
Decentralized AI centers in tier‑2 cities (Kolkata, Vizag, Coimbatore, Shillong, etc.) are essential to prevent an urban‑AI divide.
AI is being applied to high‑impact societal problems: autonomous mobility, healthcare imaging, precision agriculture, and niche startups like Dark.ai.
Education initiatives – Fellows program, rapid‑skill‑up trainings, and grassroots AI literacy – are crucial for preparing the next generation.
Parents should focus on resilience, curiosity, emotional awareness (five senses & nine emotions) and EQ to help children thrive in an AI‑driven world.
Investment in AI should be viewed as a triangle: technology/models, infrastructure/compute, and human talent.
Access to advanced chips and compute is an enabler, not a blocker, if organizations are creative with resources.
Applied AI that solves real‑world problems should be prioritized over chasing larger model sizes.
Ethical guardrails, provenance of information, and child‑safety mechanisms are needed to balance AI’s power with responsibility.
Resolutions and action items
Launch a joint partnership (AI Kiran + partners) to train 1,000,000 women and youth in AI and automation within five years (announced by Mihir Shukla).
Continue expanding AI Kiran’s community, aiming to add zeros to the member count and maintain rapid growth.
Scale the Fellows program to support >250 multidisciplinary youth innovators and integrate them into AI projects.
Establish and operationalize AI centers of excellence in tier‑2 locations (Calcutta, Vizag, Coimbatore, Shillong, etc.) as outlined by Radha Basu.
Prioritize applied AI projects in sectors such as precision agriculture, breast‑cancer screening, and autonomous mobility.
Develop and disseminate educational content on resilience, curiosity, EQ, and the five senses/nine emotions for parents and teachers.
Encourage organizations to adopt a creative approach to compute resource allocation rather than waiting for exclusive chip access.
Promote ethical AI practices and provenance mechanisms to build trust in AI‑generated information.
Unresolved issues
Specific frameworks or guidelines for safeguarding children from AI‑driven harms while still empowering them remain undefined.
Concrete steps for integrating AI ethics and provenance into mainstream education curricula were not detailed.
How to systematically measure and ensure the effectiveness of the one‑million‑women training partnership is still open.
The exact strategy for balancing compute resource constraints with rapid AI development was discussed but not finalized.
Questions from the audience about disrupting K‑12 and higher education, and how to replace traditional exam‑centric models, were raised without a clear answer.
Suggested compromises
Adopt a pragmatic stance on compute: treat advanced chips as a strategic advantage but not a blocker, encouraging creative use of existing resources (Kirthiga Reddy).
Balance rapid AI scaling with ethical guardrails by embedding safety considerations early in product development rather than as an afterthought (Speaker 1).
Combine AI skill‑building with emotional and sensory development for children, merging technical education with EQ/arts to address both empowerment and safety (Anurag Hoon & Speaker 1).
Thought Provoking Comments
What would you do if you weren’t afraid? – It’s about taking risks, starting over, and stretching for the stars rather than settling for the safe, achievable part.
Frames the entire discussion around a growth‑mindset and the willingness to reinvent oneself, which resonates with the panel’s theme of scaling human potential.
Set the tone for personal stories of reinvention; prompted Lakshmi and Radha to share their own ‘starting over’ experiences and opened the floor for advice on bold career moves.
Speaker: Kirthiga Reddy
When we asked ChatGPT for 100 women in AI in India it gave us only 10. We launched with 250 named women and now we have a community of 10,000 – we literally added two zeros to the answer.
Highlights a concrete data gap in representation, turning a limitation of AI into a catalyst for community building and advocacy.
Shifted the conversation from abstract AI topics to tangible gender‑bias issues; led to deeper discussion about building inclusive networks and the role of AI Kiran in correcting systemic blind spots.
Speaker: Kirthiga Reddy
We set up AI centers not in the metros but in places like Calcutta, Vizag, Coimbatore, Hubli, Shillong – each becoming a centre of excellence for specific domains.
Introduces a strategic decentralisation model that tackles the ‘AI divide’ by bringing advanced research to tier‑2 and tier‑3 cities.
Redirected the dialogue toward regional equity, prompting other panelists to discuss how infrastructure and talent can be distributed beyond traditional hubs.
Speaker: Radha Basu
In every revolution we messed up the environment; now we have a chance to make AI inclusive from the get‑go.
Draws a historical parallel that frames AI as a societal responsibility rather than just a technological race.
Reinforced the need for proactive inclusion, influencing subsequent comments about gender balance, youth programs, and ethical guardrails.
Speaker: Lakshmi Pratury
India’s super‑power isn’t inventing AI, it’s applying AI across its industrial hubs to create global competitiveness.
Shifts focus from chasing cutting‑edge models to leveraging AI for practical, sector‑specific impact, echoing historical lessons from past tech revolutions.
Steered the conversation toward applied AI, prompting panelists to cite examples like precision agriculture, healthcare, and upskilling initiatives.
Speaker: Mihir Shukla
I teach kids the five senses and nine emotions – heart intelligence – so they can stay human even as AI automates everything.
Introduces a holistic, human‑centric educational framework that balances technical skill with emotional and sensory awareness.
Opened a new thread on parenting and education, leading to multiple responses about resilience, curiosity, and the importance of soft skills in an AI‑driven world.
Speaker: Anurag Hoon
Investments in AI form a triangle: technology/models, infrastructure, and human intelligence. All three must grow together.
Provides a clear, actionable framework for scaling AI responsibly, integrating technical, physical, and human resources.
Guided the discussion toward concrete policy and investment recommendations, influencing later remarks on skilling, data readiness, and building AI‑ready ecosystems.
Speaker: Radha Basu
The power of the question – AI can give many answers, but it doesn’t know the right questions. Teaching people to ask better questions is the real leverage.
Elevates the conversation from tool‑centric to mindset‑centric, emphasizing critical thinking as the ultimate differentiator.
Prompted audience members and panelists to reflect on education curricula, leading to suggestions about curiosity, interdisciplinary learning, and the role of families in nurturing inquiry.
Speaker: Mihir Shukla
In a TCS hackathon, 1,200 women from non‑English backgrounds built brilliant solutions in four hours after a short training – the speed of learning is massive.
Demonstrates the rapid scalability of AI education when barriers are removed, reinforcing optimism about mass upskilling.
Supported the narrative that AI can democratise opportunity, bolstering arguments for large‑scale training programs and reinforcing the panel’s hopeful tone.
Speaker: Speaker 1 (unnamed senior executive)
We trained 700 women in Africa for six weeks and 500 got jobs within a week; AI can give high‑pay jobs without years of formal education.
Provides a powerful real‑world example of AI as a vehicle for socioeconomic mobility, challenging assumptions about required training length.
Strengthened the case for inclusive AI policies and rapid skill‑building initiatives, influencing later remarks about grassroots training and the need to bridge digital gaps.
Speaker: Mihir Shukla
Overall Assessment

The discussion was propelled forward by a series of pivotal remarks that moved it from abstract hype to concrete, human‑centred action. Kirthiga’s opening mindset question and the ChatGPT gender‑bias anecdote framed the need for bold reinvention and community‑driven correction of AI’s blind spots. Radha’s decentralised AI‑center model and Lakshmi’s call for inclusive design shifted focus to systemic equity, while Mihir’s emphasis on applied AI and the ‘power of the question’ reframed success as practical impact and critical thinking. Anurag’s heart‑intelligence perspective and the audience’s parenting query introduced a softer, educational dimension, prompting multiple speakers to stress resilience, curiosity, and interdisciplinary learning. Collectively, these comments redirected the conversation toward actionable frameworks—triangular investment, rapid upskilling, and regional empowerment—thereby shaping a narrative that balances technological ambition with social responsibility and human potential.

Follow-up Questions
What are the limiting factors or enablers (talent, capital, compute) for scaling AI in India and how do they interact?
Identifying constraints is essential for shaping policy, investment strategies, and ensuring sustainable AI growth in the country.
Speaker: Kirthiga Reddy
What kind of investments are needed to move people up the AI value chain in India?
Clarifies where funding should be directed—technology, infrastructure, or human talent—to accelerate AI adoption and create high‑value jobs.
Speaker: Kirthiga Reddy (to Radha Basu)
How can India ensure access to advanced chips for building powerful AI models?
Chip access determines a nation’s ability to develop and run large models, affecting competitiveness and innovation capacity.
Speaker: Kirthiga Reddy
What should parents teach their children over the next 15 years to thrive in an AI‑automated world, and what blind spots are they missing?
Guides parenting and educational curricula to equip the next generation with skills and mindsets needed for an AI‑driven future.
Speaker: Anupama (audience)
How can we protect children from AI‑related harms while still empowering them with AI capabilities?
Balancing safety with empowerment is critical for policy makers, educators, and platform designers to prevent misuse while fostering innovation.
Speaker: Hemendra (audience)
How should individuals prioritize what to learn first and next in the rapidly evolving AI landscape?
Helps learners navigate an overwhelming amount of information and choose pathways that maximize career relevance and personal growth.
Speaker: Anjali (audience)
How can we build trust and provenance on the internet when AI answers are increasingly relied upon?
Addressing credibility of AI‑generated content is vital for combating misinformation and maintaining public confidence in digital services.
Speaker: Bina (audience)
How can we bridge the digital gap for grassroots AI training, especially for low‑income or non‑digital populations?
Ensures inclusive AI adoption and prevents widening socioeconomic disparities by bringing AI literacy to underserved communities.
Speaker: Bina (audience)
How can we develop a wisdom cycle for youth to interpret information, ask the right questions, and apply AI responsibly?
Cultivates critical thinking and ethical AI use, which are essential for responsible innovation and societal resilience.
Speaker: Bina (audience)
Area for further research: Accurate data on representation of women in AI in India beyond current ChatGPT estimates.
Reliable gender‑representation metrics are needed to track progress, identify gaps, and design effective interventions for gender equity.
Speaker: Kirthiga Reddy
Area for further research: Strategies to bridge the AI divide between metropolitan and non‑metropolitan regions in India.
Understanding effective models for decentralised AI hubs can guide policy to ensure equitable access to AI opportunities across the country.
Speaker: Radha Basu
Area for further research: Application of AI in precision agriculture and its impact on crop‑failure detection.
Exploring AI‑driven agritech can improve food security and farmer incomes, making it a high‑impact research priority.
Speaker: Radha Basu
Area for further research: Development of AI models tailored for diverse demographic groups in healthcare (e.g., breast‑cancer screening across ethnicities).
Tailored models can reduce diagnostic bias and improve health outcomes for varied populations.
Speaker: Radha Basu
Area for further research: Effectiveness of AI‑driven upskilling programs for women and youth in low‑income settings (e.g., iMerit, Anudip).
Evaluating impact helps scale successful models and informs funding decisions for inclusive AI education.
Speaker: Radha Basu, Mihir Shukla
Area for further research: Role of arts and EQ development in AI education and its market potential.
Integrating creativity with technology may foster more holistic AI talent and open new economic opportunities.
Speaker: Anurag Hoon, Anjali
Area for further research: Impact of AI on future job structures – shift from ‘worker bees’ to ‘queen bees’ concept.
Understanding how AI reshapes labor markets is crucial for workforce planning, social policy, and education system redesign.
Speaker: Mihir Shukla

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit


Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel, moderated by Amanraj Khanna, examined how India can turn its ambitious AI policy announcements into real-world adoption and scale [1][7-9][11-14][20-22]. Khanna highlighted recent commitments such as Microsoft’s $20 billion pledge for India, Google’s $15 billion, and partnerships like Anthropic-Infosys, noting that deployment, not just announcement, is the challenge [11-14][15-18][20-22]. He framed the discussion around two perspectives: the national R&D infrastructure led by CDAC and the needs of large Indian enterprises represented by Intel [21-24][27-35][36-38].


CDAC, operating under the Ministry of Electronics and IT, has built the PARAM family of supercomputers, delivering roughly 48 petaflops today and targeting 100 petaflops by year-end across 60 sites [45-52]. About 15,000 researchers and many MSMEs use these clusters for workloads such as drug discovery, protein folding, weather forecasting, oil exploration and computational fluid dynamics [53-60][56-61]. CDAC also provides hands-on support to government agencies and startups through initiatives like PARAM Utkarsh in Bangalore [55-57][62].


Nitin Bajaj explained that Indian enterprises struggle to move from pilots to production because they must decide between on-prem, cloud, or edge deployments while balancing ROI, model selection and data quality [70-78][79-84][86-88]. He noted that many firms have already purchased GPUs but remain in pilot mode due to unclear cost-benefit calculations and rapidly evolving AI models [93-94][95-102]. Bajaj promoted “frugal AI” – leveraging Intel CPUs with integrated GPUs/NPU to run 7-20 billion-parameter models efficiently, reducing the need for dedicated GPUs in many use cases [156-162][158-161].


Vivek Kanneja argued that full technological sovereignty is unrealistic in the short term; India can import silicon (e.g., NVIDIA, Intel, AMD) while keeping the software stack, models and applications under domestic control [115-124][125-138]. He added that CDAC is developing a RISC-V-based GPGPU expected by 2029-30, but until then reliance on external chips will continue [136-138]. Both speakers identified talent gaps: CDAC sees graduates strong in theory but lacking practical MLOps experience, suggesting curriculum reforms and capstone projects [173-182]. Energy efficiency was raised as a critical issue; CDAC employs liquid cooling and power-aware design to achieve PUE around 1.2, while Intel reports data-center PUE of 1.06 and 15% power-efficiency gains from new packaging [188-199][207-212].


Kanneja envisions success as AI being embedded in many workflows to simplify lives, whereas Bajaj measures success by widespread, affordable AI use that even a street vendor can leverage, supported by robust Indic models [223][225-231]. The discussion concluded that coordinated advances in infrastructure, enterprise readiness, talent development and sustainable practices are essential for India’s AI ecosystem to mature over the next few years [20-22][173-182][188-205][223][225-231].


Keypoints


Major discussion points


India’s AI infrastructure and policy momentum – The panel opened by highlighting recent policy announcements and massive private-sector investments (e.g., Microsoft’s $20 bn, Google’s $15 bn) and the role of CDAC’s PARAM supercomputing series, which now provides about 48 petaflops and is slated to reach ~100 petaflops by year-end, serving researchers, MSMEs and national missions such as drug discovery and weather prediction[8-14][45-53].


Enterprise hurdles in scaling AI from pilots to production – Nitin explained that Indian firms wrestle with choosing the right deployment model (on-prem, cloud, edge), quantifying ROI, and handling data-quality issues that cause proof-of-concepts to stall. Both speakers stressed the need for robust MLOps, “frugal AI” solutions, and clearer cost-performance trade-offs before large-scale roll-out[70-84][95-102].


Sovereignty versus global technology dependence – Vivek addressed the practical limits of full domestic control, noting India lacks advanced-node fabs and GPU IP, so a pragmatic approach is to import silicon (NVIDIA, Intel, AMD) while keeping the software, model-orchestration and applications under sovereign control. He also mentioned CDAC’s own RISC-V-based GPGPU prototype expected around 2029-30[115-124][125-138].


Talent and capability gaps in AI deployment – Both panelists agreed that while India produces many bright engineers, curricula focus on theory rather than real-world MLOps, data cleaning, and large-model deployment, creating a bottleneck that must be addressed through hands-on capstone projects and industry-academia collaboration[173-182][184-185].


Energy and sustainability of AI compute – The discussion turned to the power demands of supercomputing and data-center AI workloads. Vivek highlighted power-aware chip design, liquid-cooling and low PUE (~1.2) for CDAC systems, while Nitin cited Intel’s ultra-efficient data-center PUE of 1.06 and newer ribbon-fed power-delivery technologies that improve efficiency by ~15%[188-199][207-212].


Overall purpose / goal of the discussion


The session was convened to “translate that vision into adoption and scale” – i.e., to bridge the gap between India’s ambitious AI policy and infrastructure (government R&D, supercomputing) and the practical needs and constraints of large Indian enterprises, identifying where the two tracks intersect or diverge and outlining what success should look like in the next few years[20-22][26-27].


Tone of the discussion


– The conversation began enthusiastic and forward-looking, celebrating recent policy wins and investment announcements[5-10].


– It quickly shifted to a pragmatic, candid tone, with speakers openly describing technical constraints, ROI dilemmas, data-quality challenges, and talent shortages[70-84][95-102][173-182].


– Towards the end, the tone became solution-focused and hopeful, emphasizing concrete steps (sovereign stack control, frugal AI hardware, energy-efficient designs) and a vision of widespread AI deployment across Indian society[115-138][188-212][223-231].


Overall, the dialogue moved from high-level optimism to a realistic appraisal of obstacles, and finally to constructive pathways for achieving scalable, sovereign, and sustainable AI in India.


Speakers

Amanraj Khanna


Area of Expertise: Technology policy, AI ecosystem bridging government and enterprise.


Role / Title: Partner and Managing Director for India at the Asia Group; Moderator of the panel. [S2]


Vivek Kanneja


Area of Expertise: High-performance computing, supercomputing infrastructure, AI research, cybersecurity, national R&D.


Role / Title: Executive Director, Center for Development of Advanced Computing (CDAC). [S3][S4]


Nitin Bajaj


Area of Expertise: Enterprise AI adoption, sales and technology leadership, cloud/edge/CPU/GPU solutions for large Indian enterprises.


Role / Title: Director of Sales for Conglomerate Accounts, Intel India. [S5][S6]


Additional speakers:


Sangeeta Reddy


Area of Expertise: Healthcare leadership, AI applications in health services.


Role / Title: Joint Managing Director, Apollo Hospitals.


Full session reportComprehensive analysis and detailed insights

India’s AI agenda was framed by moderator Amanraj Khanna as a shift from high-profile policy announcements to tangible, large-scale adoption. He opened by noting the “palpable” energy at the summit and highlighted recent commitments – Microsoft’s $20 bn pledge for India, Google’s $15 bn investment and the Anthropic-Infosys partnership – as evidence of a “truly fascinating moment” in the country’s AI ambitions [5-10][11-14][15-18]. He also noted the launch of Pax Silica earlier that morning, underscoring the pace of new AI initiatives [4-5].


The panel brought together Vivek Kanneja, representing CDAC, the national R & D hub under the Ministry of Electronics and IT, and Nitin Bajaj of Intel, speaking for large Indian enterprises [20-24][27-35][36-38].


CDAC’s compute infrastructure


CDAC’s mandate is to deliver super-computing capacity through the National Supercomputing Mission. It has built the PARAM family of machines, now delivering roughly 48 petaflops across the National Knowledge Network, with a target of about 100 petaflops by year-end through 60 installations [45-62]. Approximately 15,000 researchers run jobs on these clusters, and the infrastructure also supports MSMEs via the PARAM Utkarsh centre in Bangalore [45-62]. Typical workloads include drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, finite-element modelling and computational fluid dynamics, with CDAC providing hands-on assistance to government agencies and startups [57-60][61].


Enterprise adoption challenges


Bajaj explained that Indian firms often stall at the proof-of-concept (POC) stage because they first need to identify concrete use cases (such as smart manufacturing, retail analytics or document search) before confronting the “biggest gap” of choosing an appropriate deployment model (on-prem, cloud or edge) and quantifying return on investment [70-78][79-84]. He noted that even when organisations have purchased GPUs, they remain in pilot mode because the cost of full-scale deployment and the rapid evolution of models create uncertainty [93-102][95-100]. Kanneja highlighted that many projects stall after the POC stage due to data-cleaning and MLOps gaps, a view echoed by Bajaj’s comments on ROI and deployment-model uncertainty [210-215][70-78].


Both panelists stressed cost-effective deployment. Bajaj explicitly branded his approach “frugal AI,” advocating the use of Intel CPUs with integrated GPUs/NPUs to run 7-20 billion-parameter models efficiently, thereby reducing the need for dedicated GPUs [156-162]. Kanneja added that choosing between GPUs and simpler VM setups can also achieve cost-effective outcomes [210-215].
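
The arithmetic behind “frugal AI” can be sketched as a quick sizing check: whether a 7-20 billion-parameter model’s weights fit in ordinary server RAM. The bytes-per-parameter figures below are generic quantization rules of thumb, not Intel specifications, and the sketch deliberately ignores KV-cache and activation memory.

```python
# Rough sizing sketch: can a 7-20B-parameter model fit in commodity server RAM?
# Bytes-per-parameter values are standard rules of thumb, not vendor figures.

def model_memory_gb(params_billion: float, bytes_per_param: float) -> float:
    """Approximate weight memory: parameters x bytes per parameter.
    fp16 = 2 bytes, int8 = 1, int4 = 0.5. Ignores KV cache and activations."""
    return params_billion * 1e9 * bytes_per_param / 1e9

for params in (7, 13, 20):
    for name, bpp in (("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)):
        print(f"{params}B @ {name}: ~{model_memory_gb(params, bpp):.1f} GB")
# A 7B model at int8 needs roughly 7 GB of weights, comfortably inside a
# commodity server's RAM, which is the sizing logic behind running such
# models on CPUs with integrated accelerators instead of dedicated GPUs.
```

Even the largest case in the range (20B at fp16, about 40 GB) stays within a well-provisioned server, which is why the panel treated dedicated GPUs as optional for many inference workloads.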


Sovereignty and chip strategy


When asked about AI sovereignty, Kanneja said that end-to-end independence is not feasible today. India lacks advanced-node fabs and GPU IP, so a pragmatic, short-term approach is to import silicon (e.g., NVIDIA, Intel, AMD) while retaining control over software, model orchestration and applications [115-124]. He noted a longer-term ambition to design a RISC-V-based GPGPU, expected around 2029-30, emphasizing that the interim focus must remain on sovereign control of the stack above the chip [136-138].


Data-sovereignty versus performance


Bajaj pointed out that data-sovereignty requirements differ by sector: banking and healthcare demand localisation, whereas manufacturing and retail often prioritise speed and accuracy by using cloud APIs, sometimes deploying at the edge for latency-sensitive tasks [149-156][157-166]. He illustrated “frugal AI” with a prompt-based engine that can handle 15-20 prompts per second on a CPU, avoiding the expense of a GPU-only solution [164-166].


Talent and skills considerations


Kanneja described a current talent gap: Indian graduates possess strong theoretical foundations but lack practical MLOps experience, exposure to messy real-world data and skills in large-model deployment; he called for curriculum reforms and capstone projects that simulate beta-scale data handling [173-182]. Bajaj, in contrast, highlighted India’s youthful demographic (a large cohort aged 13-25) as a catalyst that will rapidly narrow the gap, noting his own learning from younger engineers [184-186]. Thus, both panelists discussed talent considerations, differing on the immediacy of the shortfall.


Energy consumption and sustainability


Kanneja explained that CDAC’s supercomputers employ power-aware VLSI techniques, clock-gating, and a mix of liquid and water cooling, achieving a Power Usage Effectiveness (PUE) of roughly 1.2, significantly better than conventional water-cooled systems [188-199]. He called for a benchmark of energy consumption per token for both training and inference [200-204]. Complementing this, Bajaj reported Intel’s data-centre PUE of 1.06, achieved through ribbon-fed power delivery and advanced packaging, and stressed that judicious model selection (e.g., using CPUs for 7-8 billion-parameter models) can curtail power demand [207-216].
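
As a back-of-envelope illustration of the two metrics discussed here, the sketch below computes PUE (total facility power over IT power) and a facility-level energy-per-token figure of the kind Kanneja proposed benchmarking. The 700 W node and 50 tokens/s throughput are hypothetical numbers chosen only for illustration; only the PUE values 1.2 and 1.06 come from the session.

```python
# Illustrative sketch of PUE and energy-per-token. Only the PUE figures
# (1.2 for CDAC, 1.06 for Intel) are from the session; node power and
# throughput below are assumed for the sake of the arithmetic.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal (no cooling or distribution overhead)."""
    return total_facility_kw / it_equipment_kw

def energy_per_token_j(it_power_w: float, tokens_per_s: float, pue_value: float) -> float:
    """Facility-level energy per generated token, in joules:
    IT power scaled up by PUE, divided by token throughput."""
    return (it_power_w * pue_value) / tokens_per_s

# A PUE of 1.2 means ~20% extra power on top of the IT load; 1.06 means ~6%.
print(pue(1200, 1000))   # 1.2
print(pue(1060, 1000))   # 1.06

# Hypothetical inference node: 700 W of IT power serving 50 tokens/s.
print(energy_per_token_j(700, 50, 1.2))   # 16.8 J/token at PUE 1.2
print(energy_per_token_j(700, 50, 1.06))  # 14.84 J/token at PUE 1.06
```

The same node therefore costs about 12% less energy per token in a PUE-1.06 facility than in a PUE-1.2 one, which is why both speakers treated facility efficiency and model choice as complementary levers.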


Vision for the next three to five years


Kanneja envisioned AI “deployed in a lot of workflows and making life much simpler and enjoyable for us” [223]. Bajaj expanded the view to a societal scale, stating that India should move from being a top data-consumer to a leader where even a “Sabziwala” (street vegetable vendor) can leverage AI-driven insights, supported by Indic models and mass-scale deployments [225-231].


In summary, the panel identified four inter-linked pillars for India’s AI future: (1) expanding sovereign-controlled compute infrastructure (PARAM supercomputers and eventual domestic GPUs); (2) enabling enterprises to move beyond pilots through clear ROI frameworks, frugal hardware choices and robust MLOps; (3) addressing the talent pipeline with practical curriculum reforms while leveraging the country’s demographic dividend; and (4) ensuring energy-efficient, sustainable operations via low-PUE designs and energy-per-token benchmarks. Consensus emerged on the POC-to-production bottleneck, the primacy of ROI, and the need for energy-efficient designs, while disagreements persisted around the depth of the talent gap, the optimal path to chip sovereignty and the precise efficiency targets. The session closed with Amanraj thanking the panelists and inviting Sangeeta Reddy of Apollo Hospitals to speak, underscoring the broader health-sector interest in AI [236-239].


Session transcriptComplete transcript of the session
Amanraj Khanna

India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managing director for India at the Asia Group. Also my privilege to be a moderator here today. I have to say the energy here is still so palpable, even after five days of this. So it’s absolutely brilliant to be here with you all. We’ve had a truly fascinating moment in India’s AI ambitions. Some massive policy announcements. We just had Pax Silica announced this morning. Very exciting indeed. Significant infrastructure investments. Brad Smith, if you heard him yesterday, Microsoft announced $50 billion in the global south alone. Of that, $20 billion has been announced for India. Google, as you know, $15 billion. And growing enterprise adoption.

Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all watching this. But announcement is one thing. Deployment and then achieving scale quite another. So that’s why this panel matters. Today’s conversation brings two critical perspectives to this fundamental question. What does it take to translate that vision into adoption and scale? One of my two distinguished speakers brings unparalleled insight into infrastructure being built through national R&D institutions. The other sees the reality of what India’s largest enterprises actually need when they deploy AI. So we are here to have an honest conversation about these two tracks. Where do they connect and perhaps where they don’t. So with that let me introduce my distinguished panelists. First I have to my immediate left Mr.

Vivek Kanneja. Vivek is the executive director of the Center for Development of Advanced Computing, CDAC as it’s known. And CDAC has built the PARAM supercomputing series providing AI compute infrastructure for government departments and national missions. It conducts cutting-edge research into high-performance computing and cybersecurity, and also trains thousands of engineers annually in advanced computing and AI. Vivek, of course, has held multiple senior leadership positions within CDAC and has guided national initiatives in these critical areas. Welcome, Vivek. I also have to my far left, Nitin Bajaj. Nitin is Director of Sales for Conglomerate Accounts at Intel India. He leads Intel’s engagement with India’s largest enterprises on their digital transformation and AI adoption journeys. He has over 28 years of global experience in sales and technology leadership and orchestrates a broad partner ecosystem.

These include system integrators, ISVs, cloud providers to deliver Intel-based solutions spanning cloud, AI, HPC, 5G, edge, and end-user computing. So in summary, he sees firsthand what drives enterprise infrastructure decisions and what actually prevents companies from moving AI from pilot to true scale. So with those introductions, let’s get right into it. Vivek, why don’t I start with you? So here’s my first question Vivek. CDAC has built the PARAM supercomputing series and AI compute infrastructure. What compute capabilities does CDAC actually provide today? Who uses them? For what workloads? And what are the key constraints that you see operating within?

Vivek Kanneja

Okay, thanks. So as you know, CDAC is a scientific society under the Ministry of Electronics and IT. And one of the mandates that we have is to build supercomputing capacity in the country. This is the mandate which is given to us under the National Supercomputing Mission, where we have developed a series of supercomputers under the brand name PARAM. We started in the late 80s, starting with the PARAM 8000. And now we have the PARAM series of supercomputers which are installed with our own software. We have been able to build about 48 petaflops of overall performance in the country, connected over the National Knowledge Network or the NKN as it is called.

This capacity is going to be augmented to about 100 petaflops by the end of this year, with 60 installations. Most of these installations are today being used by researchers. About 15,000 researchers fire jobs across these machines on the NKN. A lot of it is also being used by MSMEs. For example, we have opened PARAM Utkarsh, which is housed at our Bangalore centre, for use by start-ups and MSMEs. The kind of applications that run here include drug discovery, bioinformatics, protein folding, molecular modeling, and weather prediction. Almost all of these are number-crunching problems: oil exploration, finite element modeling, computational fluid dynamics problems. So all such problems are being run across these clusters by researchers.

We also have a lot of expertise in various domains, which we have developed in-house over the years, and we are hand-holding a lot of these agencies; a lot of government agencies are working with us on this.

Amanraj Khanna

Thanks so much, Vivek. Lots of threads to pull on there, but for a moment, let me go to Nitin. Nitin, you work with some of India’s largest enterprises, some of our national champions. So when these enterprises commit to AI, which you must have seen increasingly, what are some of the actual barriers that prevent them from moving from pilot projects to the production-scale deployments that everyone envisions, especially at events like this?

Nitin Bajaj

Thank you. First of all, thank you for inviting me. It’s a privilege to be here talking in front of such an esteemed crowd. So basically, I’ll break this into two or three pieces. One, be it Indian enterprises or global enterprises, everybody is grappling with the same sort of problem statement, and everybody is trying to find those use cases which are very pertinent for their own enterprises. Some enterprises are in the manufacturing domain, some are on the retail side. So the typical use cases that they have could be around smart manufacturing, smart retail, or generic use cases where they want to do a lot of document search.

They have to have fine-tuned search on the policies, the T&Cs, and things of that sort. So essentially, I see it as twofold. One, they are looking at speed, and everybody is trying to figure out the best ROI. So I think the biggest gap today is what to use: whether to use it on-prem or whether to go on cloud and use the open APIs available to them. Then, once those use cases are ready, as you said, from pilot to production, what is the final cost of that deployment? And the third angle that comes in is whether to centralize all of this or to take it to the edge.

So there is no single answer. And when you think of all the ecosystem providers that are there today, be it silicon, ISVs, or system integrators, everybody has a pocket of expertise in their own sense. But today there is no single formula. The entire AI journey is changing so rapidly; models are being dropped at the speed of light, and the whole ecosystem, from silicon to OS to everything else, is changing so fast that even the enterprises are trying to figure out what is the best deployment model for them and what is the best ROI they can get out of it. But in the midst of all of this, in pockets, a lot of these enterprises are trying to see what specific use cases they can bring to the fore that can deliver some incremental benefit to whatever operations they are running in their organization.

Now, I can give multiple examples of this. For example, in the manufacturing domain, there could be surveillance use cases, multi-modal use cases, use cases around how to look at inventory, and then complete digital-twin, dark-factory kinds of scenarios. In the case of retail, it could be around preventing theft and pilferage, or doing customer analytics. And then, as I said, a lot of document-search examples are going on. But in most of those cases, the whole decision-making is between edge versus on-prem, cloud versus sovereign data centers, what kind of models they should use, and how to find the right ROI, which is where we feel frugal AI is what we propose to the industry.

And I’ll talk more about that maybe later. But what is the best deployment model that can really help an enterprise scale, at a cost point that really makes sense to them?

Amanraj Khanna

Let me ask you a very quick follow-on question on that. Do you see that Indian enterprises are increasingly sophisticated in making these choices? And, of course, Intel works globally, right? So how does India compare in terms of maturity and sophistication with the other markets where Intel also operates?

Nitin Bajaj

Clearly, in terms of use cases, I think we have the edge. But again, the veracity of data is a big problem. Second, I would say that when you think of these enterprises a year back, almost everybody wanted to buy GPUs, and a lot of these enterprises have bought systems which are powerful enough to run all kinds of use cases, but they are still in that pilot phase, again because of that ROI factor. So things are maturing. Of course, enterprises are becoming smarter; from LLMs we are now looking at SLMs. So they are trying to figure out the right silicon to land their workloads on. As things are emerging, I think things are getting better, but yeah, it’s still some time before you see those live deployments coming out.

Vivek Kanneja

Sorry for interjecting; for the benefit of the larger audience, just to add to what he said: we are seeing a lot of places where people are not able to come out of the POCs. I think one of the major reasons, at least personally, that I have seen, is that people are very happy with the POCs. They can train them on curated data sets. But once it actually goes and hits real-life situations, where the data needs to be cleaned up, it’s not clean, and you have no proper experience in actual deployments of MLOps, having done it in a canned manner, then suddenly the reality hits that, no, it’s not that simple.

And as you talked about the ROI, then you have to make those choices: whether I should have an on-prem setup, whether I really need a GPU for my problem, or whether I can work across multiple VMs on simple IT infrastructure. So those are the choices. Hopefully, in the coming years, intelligent choices will be made. We will start to see more MLOps engineers coming out and deploying these things at scale, because POCs are fine, but the real revenue will come only once you actually deploy at scale.

Amanraj Khanna

Understood. Thank you so much for those perspectives. Unlike yourselves, you know, you’re both technologists; I’m a little bit in the policy space, on the fringes of this. I work on tech policy, and I haven’t had a single conversation here with a foreign investor that hasn’t touched on sovereignty or dependency. So I’m going to bring my next question over to my area, because I’d love to pick both your brains on this. So Vivek, let me ask you about this first. India talks about sovereignty, but CDAC still relies on global technology, right, whether that’s chips, systems, or software stacks. So realistically, what can India build domestically versus what will we always need to source globally,

and where should we focus our capability? So, a question to you as someone who’s really on the cutting edge.

Vivek Kanneja

Okay, so let me answer that in both a technically and politically correct way. See, when you talk of sovereignty, let’s see what it really means. Do you want to be completely independent across the entire vertical, right from silicon up to the application? Is that really possible? Let’s say I need to design a GPU. It’s a good aspirational goal. But do I have the wherewithal today to do the entire thing in-house? Probably not. I don’t have the IPs. We can start, definitely. We don’t even have a fab today that can give me 3-nanometer-or-below production and packaging capabilities. So I think a more pragmatic approach is to have the silicon coming from outside.

Everything above that should be under my control. You should be able to control all your critical choke points. That’s the model that India AI has taken. For example, they have created a farm of GPUs which are available to developers freely or at a reasonable cost. The models being built on top of that are under your control. How those models are going to be orchestrated is under your control. The applications that will use those models are under your control. So for me, that is the sovereignty where you are getting the maximum ROI. Should I really be competing against an H100 or a B200 from NVIDIA? Probably not in the short term.

But yes, as an aspirational goal, just to let you know, CDAC is designing its own GPGPU based on RISC-V. We will probably have something by the end of 2029-30. But until that time, we really need to have this entire stack under sovereign control, maybe using chips from outside, whether it is NVIDIA, whether it is Sapphire Rapids or Granite Rapids from Intel, or from AMD. But everything above that, all my critical choke points, should be under my control.

Amanraj Khanna

So I have to compliment you; that was a very candid and pragmatic answer. Nitin, can I reframe that question a little bit for you? In the same vein, there’s a significant policy focus on data sovereignty and data localization as well, right? So when your enterprise customers are making AI infrastructure decisions, how much do these factor into those choices? And how do they compare with cost and performance considerations? Has this calculus shifted with some of the policy developments that we’ve seen over the past couple of years in AI?

Nitin Bajaj

Well, I would say not really. It all depends on the industry and the market that the enterprise is in. For the banking industry or the healthcare industry, data sovereignty is very, very important. For the manufacturing industry, or for, say, retail or any other industry, they are trying to build use cases on the cloud, because that’s where they feel they can build those use cases very quickly; they can simply call the APIs available there, and they see that the performance is much better and they get better accuracy. So it’s a mix of both. Then again, even in a manufacturing environment, as I was calling out, in an OT environment, a lot of these manufacturing firms would want their data to reside within their perimeter, because it is so close to them.

Then again, when it comes to the deployment side, they are looking at an edge deployment, which stays within the perimeter. So it’s a mix of both. Now, finally, to the point I was making in the beginning: for anything that has to scale, price becomes a key driver. So it is about, okay, I have a model that I have fine-tuned on the cloud; now, when I have to take it to deployment, can I use something locally that is available with me today without making a lot of investment, and can I scale it? Which is where, when Intel talks about it, we basically focus on frugal AI. Today the Intel Core and Core Ultra CPUs have a GPU, an NPU, and a CPU all combined in a single processor, which gives you enough capability to run maybe a 7- or 8-billion-parameter model. So the typical requirements that you will have on the edge are very well suited to this CPU itself. And when it comes to the data center side, the Xeon 6 processors today are able to run a 20-billion-parameter model very easily.

They can go up to 80 billion parameters, depending on the specific use case. So, do I need a GPU in every instance? That is the first question we are trying to ask. Today everybody has a CPU in their environment. So can they reutilize that? Can they test it out? Can they look at performance levels? One example that I discuss with my customers: if you’re looking at, say, a prompt-based engine where you need to do a document search, typically a human eye can read about 10 tokens in a second, if somebody is a fast reader. Now, if a processor can give you 15 to 20 tokens a second, is that good enough performance for you, or do you want 200 tokens in that particular second?

Maybe 15 to 20 tokens a second is good enough. So that’s where cost versus performance comes in. That’s where one has to be very, very calculated about what end use they are looking at and what would best suffice that particular usage. So again, it’s a mix of localized data versus what can go onto the cloud. And once you look at the scale of it, you have to look at the cost, because when you’re scaling it on the cloud, the cost may be very different from what you can get on the on-prem side. I’m not a proponent of on-prem versus cloud; for me, both of them bring their advantages. But what I’m trying to say is that the customer has to really look at what their end use is and at what cost they want to deploy it.
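The cost-versus-performance reasoning above can be sketched as a rough calculation: a processor serving 15 to 20 tokens a second already outpaces a fast human reader at roughly 10, so a far faster accelerator may buy nothing for a chat-style workload. All figures and function names below are illustrative assumptions, not Intel specifications.

```python
# Illustrative sketch of the "is the CPU good enough?" question.
# All numbers are hypothetical placeholders, not vendor benchmarks.

def is_throughput_sufficient(tokens_per_sec: float,
                             human_read_rate: float = 10.0) -> bool:
    """A chat-style workload only needs to outpace the human reader."""
    return tokens_per_sec >= human_read_rate

def energy_cost_per_million_tokens(power_watts: float,
                                   tokens_per_sec: float,
                                   price_per_kwh: float) -> float:
    """Electricity cost of generating one million tokens on given hardware."""
    seconds = 1_000_000 / tokens_per_sec
    kwh = power_watts * seconds / 3600 / 1000
    return kwh * price_per_kwh

# A CPU at ~15 tokens/s already beats a fast reader (~10 tokens/s),
# so a 200-token/s accelerator can be wasted capacity for this use case.
cpu_sufficient = is_throughput_sufficient(15.0)   # True
too_slow = is_throughput_sufficient(5.0)          # False
```

The same shape of calculation, with real power draw, token rates, and cloud versus on-prem tariffs plugged in, is what the "frugal AI" framing asks an enterprise to do before defaulting to a GPU.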

Amanraj Khanna

Understood. So we need to diversify our approaches and have a product-to-mission fit, which is absolutely critical. Think of it: in the past, everybody used to think only mainframes could solve the problem; today the applications are no longer running on mainframes, they moved to CPU-level things, and today we are looking at microservices. So everything has evolved over time, and the same thing is happening on the AI side today. Understood. So one quick question that’s on everyone’s mind, to both of you. From each of your respective perspectives, whether that’s government R&D infrastructure or enterprise deployment, is talent and the capability gap still a critical choke point?

So Vivek, why don’t I start with you on this?

Vivek Kanneja

It is. It’s unfortunate, but yes, it is. We do find that we have a set of very bright engineers coming out, but most of them are trained in a good theoretical understanding of what machine learning is. They are good at mathematics and the basic understanding, but when it comes to actual deployments in the field, I think that’s where we are lacking. Maybe we need to have a serious look at our curriculum in the colleges: how do I train large models, how do I deploy large models using MLOps? Because today, as I said, most of these kids are working on curated data sets. They are working on standard test cases and validation cases.

But when it comes to real life, life is not that rosy. You have data which is missing, data which is skewed, data which needs to be cleaned; I have real-time constraints, I have other security considerations. Those are not part of the curriculum. So that’s where I think some capstone projects, where you are able to handle petabyte-scale data, need to be put in place. Theory is fine, but there’s still a lot to learn on the practical side.

Nitin Bajaj

I’ll give a different perspective. The way I look at India versus other countries: in other countries it’s an aging population; for us it is a booming population at this point in time, with an average age of 13 to 25 or so, which is exposed to AI. So maybe today we see some sort of gap in terms of AI capabilities, but two or four years down the line, I think that will be bridged very quickly. So we have that benefit of demography here. In the short run, of course, as an individual, I am also learning from the kids today how AI can be deployed. So gaps will be there, but I think this is a learning curve for everybody.

Amanraj Khanna

Thanks, Nitin. I want to get to just a couple more questions, and then we’ll try to be very quick. One I’ve been wanting to ask you, and one that I get asked often, is the energy and sustainability question. You know, supercomputing uses huge amounts of energy and can have societal impacts, right? Whether it’s CDAC’s supercomputing or the large data centers that Intel perhaps works with. So how do we think about energy and sustainability implications, especially in the Indian context? Vivek, perhaps, starting with you.

Vivek Kanneja

So from my perspective, I look at it as something to be addressed at two levels. One is something that we have been doing. Coming from a VLSI background, I can tell you that there are standard techniques today which are used in all ASIC designs, which are power-aware designs. So you have multiple power islands, you have clock-tree gating, you switch off those cores which are not being used. That’s from a design perspective. But when it comes to platform design, there are smart choices being made. For example, if I talk about CDAC solutions, today we are using liquid cooling as well as water cooling in a ratio of almost 70 to 30.

And we are slowly moving the entire thing to a purely liquid-cooled setup. There are other advanced techniques which use only air cooling. Ultimately, your PUE is what will actually determine this. Typically we see a PUE of around 1.2 or so, as compared to a conventional air-cooled setup, which is about 1.4 or 1.5. There are definitely green norms being proposed. I think the question to be asked here, and we need to do this kind of benchmarking across all the models, is: what is the energy that I spend per token, for training or for inferencing? That should be a critical benchmark. We need to seriously look at how I can optimize my models to be more power-aware.

Can I have compressed models which take less energy? So yes, energy is one of the critical factors, and hyperscalers especially would need huge amounts of power. I mean, there is this joke that we keep telling that *** But along with the hyperscaler, you also need to have a small power plant designed together with it.
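The two efficiency metrics raised here, PUE and energy per token, can be sketched as a quick back-of-the-envelope calculation. All numbers below are illustrative placeholders, not CDAC or Intel benchmarks; PUE is simply total facility energy divided by IT-equipment energy.

```python
# Back-of-the-envelope sketch of the two metrics discussed:
# PUE (facility cooling overhead) and energy per generated token.
# All figures are illustrative, not measured benchmarks.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT energy."""
    return total_facility_kwh / it_equipment_kwh

def energy_per_token_wh(it_power_watts: float, tokens_per_sec: float,
                        facility_pue: float) -> float:
    """Wall-plug energy (watt-hours) consumed per generated token."""
    wh_per_sec = it_power_watts * facility_pue / 3600
    return wh_per_sec / tokens_per_sec

liquid_cooled = pue(1200.0, 1000.0)   # 1.2, as in the liquid-cooled case
air_cooled = pue(1450.0, 1000.0)      # 1.45, a conventional facility

# The same model in the lower-PUE facility spends ~17% less energy per
# token purely from the cooling-overhead difference.
saving = 1 - liquid_cooled / air_cooled
```

Benchmarking models on the `energy_per_token_wh` axis, rather than raw throughput alone, is the kind of comparison proposed above: a compressed model that doubles tokens per second at the same power halves its energy per token.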

Amanraj Khanna

Thank you. Nitin, to you.

Nitin Bajaj

I’ll make two or three points here. One, from a manufacturing point of view, Intel is utilizing the latest technologies like RibbonFET and PowerVia, which improve power efficiency by about 15%. So these are the latest technologies available. Second, I would say that we are running our own data centers at a PUE of 1.06, which is among the most efficient data center PUEs that you could see. There’s a white paper on intel.com; I would appreciate it if those interested could look at it. Third, I would say, again, power is a problem, or rather the first and foremost ingredient in running those data centers. So one has to be very cautious about what kind of models you are running and where they are landing.

So if we’re more judicious in terms of our selection, then of course we can save power in some ways.

Amanraj Khanna

So, one final question. I realize that time’s up. One question; let’s put this in a quick sentence if you can. When we assess India’s progress over the next three to five years, what does success look like to each of you? So maybe a sentence.

Vivek Kanneja

Success for me would be AI actually being deployed in a lot of workflows and making life much simpler and more enjoyable for us.

Amanraj Khanna

Thank you.

Nitin Bajaj

For me, it is like this: we were ranked about 150 in terms of data usage; today we are number one in terms of data usage. When it comes to increasing the general intelligence of people, today we are consuming all that data for media and entertainment. If we can consume the data for improving the general intelligence of the public, that will make a large-scale impact on society and India at large. So in two or three years, if a sabziwala can figure out how to up-level their state, that would be the best outcome. And then all the Indic models and all the other use cases coming out should be able to support those.

And then there will be a mass-scale deployment of AI across the board.

Amanraj Khanna

Thanks so much, Nitin. And I see that time’s up. I wish we could pick your brain further. This has been a truly fascinating conversation. And thank you for being so candid with all your responses. Thank you. So please join me in thanking both Vivek and Nitin. Thank you. Thank you. I would now like to invite Sangeeta Reddy, Joint Managing Director, Apollo Hospitals, to give her remarks. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (26)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Anthropic‑Infosys partnership announced “just yesterday” to serve Indian enterprises”

The Fireside Conversation notes that Anthropic and Infosys announced a partnership “just yesterday” to serve Indian enterprises, confirming the report’s statement [S73].

Additional Context (medium)

“Microsoft pledged $20 bn for India’s AI agenda”

The knowledge base records Microsoft’s commitment to train 20 million Indians by 2030, which is a skills-focused initiative rather than a $20 bn financial pledge, providing additional nuance to the claim [S71].

Confirmed (high)

“Panel featured Vivek Kanneja representing CDAC and Nitin Bajaj of Intel”

Vivek Kanneja is identified as the Executive Director of CDAC and Nitin Bajaj as Director, Sales and Marketing at Intel in the source material, confirming their roles on the panel [S1] and [S4].

Confirmed (high)

“CDAC has built the PARAM family of super‑computers providing AI compute infrastructure”

The source states that CDAC has built the Parham (PARAM) supercomputing series that provides AI compute infrastructure for government departments and national missions, confirming the report’s claim [S1].

Additional Context (medium)

“CDAC’s super‑computing infrastructure supports government departments and national missions”

The knowledge base adds that the PARAM series is specifically used by government departments and national missions, giving extra detail to the report’s description of CDAC’s mandate [S1].

External Sources (79)
S1
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — And growing enterprise adoption. Anthropic announced its partnership with Infosys, Tata, OpenAI. I’m sure you’re all wat…
S2
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — India’s AI stack, bridging government vision with enterprise needs. My name is Amanraj Khanna. I’m a partner and managin…
S3
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — First I have to my immediate left Mr. Vivek Kanneja. Vivek is the executive director of the Center for Development of Ad…
S4
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — “For the next session, we have a fireside chat between Mr. Vivek Kaneja, Executive Director, CDAT, Mr. Nitin Bajaj, Dire…
S5
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — -Nitin Bajaj: Director, Sales and Marketing, Intel (mentioned for upcoming fireside chat session) Thank you. Thank you …
S6
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — First I have to my immediate left Mr. Vivek Kanneja. Vivek is the executive director of the Center for Development of Ad…
S7
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — – Dr. Vivek Khaneja- Nitin Bajaj Dr. Khaneja advocates for a uniform approach to sovereignty focusing on software contr…
S8
Contents — – 2 Incentivise start-ups and support the research base to spin out more quantum businesses, in line with specific growt…
S9
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And then we are focusing on applications. Applications is AI for everyday tasks, for making things better for people. An…
S11
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S12
Conversation: 01 — “locally in the UAE the main focus is AI for quality of life improvement … we believe that this will translate into ev…
S13
From KW to GW Scaling the Infrastructure of the Global AI Economy — So I think that early on AI got a bad rap. It was going to be the computers were going to take over and blow up the eart…
S14
Workers report major gains from AI use — ChatGPT now reaches more than 800 million users each week, and this rapid uptake is fuelling a surge in enterprise AI adop…
S15
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S16
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S17
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And then the biggest, one of the biggest barriers to scale has been the lack of discipline or willingness to say, I’m go…
S18
Panel Discussion Data Sovereignty India AI Impact Summit — This example demonstrates what Gupta termed “partnership not dependence” – utilizing “the best of foreign technologies” …
S19
Building Indias Digital and Industrial Future with AI — This comment shifted the discussion from abstract policy concepts to concrete technical and operational realities. It pr…
S20
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S21
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S22
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such as power capping (limiting processor power to 60-80% of…
S23
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S24
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The government R&D and enter…
S25
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S26
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S27
Driving Indias AI Future Growth Innovation and Impact — And lastly, goes back to the same thing. And maybe I’ll use the same example. You know, we had the UPI of money. We need…
S28
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — And I have a deep belief that the entrepreneurial ecosystem in India is going to deliver some incredible global leaders …
S29
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S30
Agents of Change AI for Government Services &amp; Climate Resilience — Srinivas Tallapragada introduced an important distinction between strategic sovereignty and technical sovereignty that p…
S31
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S32
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S33
Practical Toolkits for AI Risk Mitigation for Businesses — Soujanya Sridharan:Thank you very much, Nusrat. We were indeed very excited to have participated in doing this piece of …
S34
Building the Next Wave of AI_ Responsible Frameworks & Standards — This question addresses the economic viability and strategic considerations for businesses choosing between different mo…
S35
Enterprise AI adoption stalls despite heavy investment — AI has moved from experimentation to expectation, yet many enterprise AI rolloutscontinue to stall. Boards demand return…
S36
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S37
Powering AI Global Leaders Session AI Impact Summit India — “And what that really means is the technology continues to accelerate.”[14]. “going to become even faster and faster.”[1…
S38
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — I know we’re going to have a little bit of time for questions, I hope, at the end. What is IAS? So I’ve talked about wha…
S39
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Energy efficiency improvements offer significant opportunities for reducing environmental impact while controlling opera…
S40
The Foundation of AI Democratizing Compute Data Infrastructure — A lot of engineers working on AI in industry these days, even in academia, are actually focusing on how can I make this …
S41
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “AI capability and resilience increasingly depend on where trusted compute is physically located and how it is governed”…
S42
Panel Discussion Data Sovereignty India AI Impact Summit — High level of consensus with complementary perspectives rather than conflicting viewpoints. The implications suggest a m…
S43
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty dimension focuses on control over data, models, and security measures
S44
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Dr. Khaneja outlined CDAC’s substantial progress in building India’s supercomputing backbone through the PARAM series. T…
S45
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Jensen at Davos called this the largest infrastructure build -out in human history. Two weeks ago, 54 countries launched…
S46
India allocates $1.24 billion for AI infrastructure boost — India’s government has greenlit a ₹10,300 Crore ($1.24 billion) fundingprojectto enhance the country’s AI infrastructure…
S47
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Data governance, security concerns, and potential token pricing shocks are major barriers preventing pilot projects from…
S48
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — And then the biggest, one of the biggest barriers to scale has been the lack of discipline or willingness to say, I’m go…
S49
AI as critical infrastructure for continuity in public services — “Data is siloed, data is not ready for AI scale.”[71]. “So almost 80 % of those pilots don’t make it to production.”[98]…
S50
https://dig.watch/event/india-ai-impact-summit-2026/fireside-chat-intel-tata-electronics-cdac-asia-group-_-india-ai-impact-summit — But yes, as an aspirational goal, just to let you know, CEDAC is designing its own GPGPU based on RISC -V. We will proba…
S51
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — This comment reframed the entire sovereignty discussion by identifying compute infrastructure as the critical bottleneck…
S52
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The discussion reveals strong consensus on key strategic directions: comprehensive ecosystem development beyond chip man…
S53
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — To address this, companies are exploring innovative solutions such as power capping (limiting processor power to 60-80% of…
S54
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S55
Green AI and the battle between progress and sustainability — AI is increasingly recognised for its transformative potential and growing environmental footprint across industries. Th…
S56
Power demands reshape future of data centres — As AI and cloud computing demand surges, Siemens is tackling critical energy and sustainability challenges facing the dat…
S57
Partner2Connect High-Level Dialogue — The tone was consistently optimistic and collaborative throughout the discussion. It began with celebratory announcement…
S58
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S59
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S60
Keynote-Rishi Sunak — The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forward-looking per…
S61
Comprehensive Report: “Converging with Technology to Win” Panel Discussion — The discussion began with an optimistic, exploratory tone as panelists shared different models and success stories. The …
S62
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S63
Day 0 Event #257 Enhancing Data Governance in the Public Sector — The discussion maintained a pragmatic and collaborative tone throughout, with speakers acknowledging both opportunities …
S64
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S65
Host Country Open Stage — Low to moderate disagreement level. The speakers were largely aligned on identifying problems (aging populations, health…
S66
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S67
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S68
Indias AI Leap Policy to Practice with AIP2 — The discussion maintained a constructive and collaborative tone throughout, with speakers building on each other’s point…
S69
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S70
Keynote-Ankur Vora — This comment provides crucial context that legitimizes India’s leadership role in AI governance and demonstrates how pas…
S71
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western econ…
S72
Keynote Adresses at India AI Impact Summit 2026 — And critically, India brings strength. Peace doesn’t come from hoping adversaries will play fair. We all know they won’t…
S73
Fireside Conversation: 01 — The conversation revealed concrete collaborative initiatives, including a partnership between Anthropic and Infosys anno…
S74
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — And as far as the question about data center, I think the enablement of the data centers or AI is hardware driven. Becau…
S75
OpenAI turns to Google Cloud in shift from solo AI race — OpenAI has entered into an unexpected partnership with Google, using Google Cloud to support its growing AI infrastructure…
S76
UNSC meeting: Artificial intelligence, peace and security — Switzerland:Thank you, Madam President. We are grateful to the Secretary General, Antonio Guterres, for participating in…
S77
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Economic development and social growth. and the Three Sutras of People, Planet and Progress. This summit is focusing ver…
S78
From India to the Global South_ Advancing Social Impact with AI — Minister Chaudhary announced the PM Setu scheme, allocating 60,000 crores to transform India’s Industrial Training Insti…
S79
Quantum for Good: Shaping the future of quantum – What happens next? — Leandro Aolita: Good morning, everybody. My name is Leandro Aolita. I am the chief researcher of the Quantum Research Ce…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Vivek Kanneja
8 arguments · 157 words per minute · 1448 words · 551 seconds
Argument 1
Overview of PARAM supercomputers and current capacity (Vivek Kanneja)
EXPLANATION
Vivek explained that CDAC, under the Ministry of Electronics and IT, has built a series of PARAM supercomputers since the late 1980s. The current installed capacity across India totals about 48 petaflops, providing AI compute resources for various national missions.
EVIDENCE
He described CDAC’s mandate to develop supercomputing capacity under the National Supercomputing Mission, noting the evolution from the PARAM 8000 to the current PARAM series and stating that roughly 48 petaflops of supercomputers are installed in the country [45-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat notes that CDAC’s PARAM series provides about 48 petaflops of computing capacity across the National Knowledge Network, serving roughly 15,000 researchers [S2].
MAJOR DISCUSSION POINT
AI compute infrastructure overview
Argument 2
Expansion plans to 100 PFLOPS and support for researchers, MSMEs, and startups (Vivek Kanneja)
EXPLANATION
Vivek outlined plans to double the nation’s supercomputing power to about 100 petaflops by the end of the year, with 60 installations. He highlighted that the infrastructure serves researchers, MSMEs, and startups through initiatives like PARAM Utkarsh, enabling applications ranging from drug discovery to weather prediction.
EVIDENCE
He said the capacity will be augmented to about 100 petaflops by year-end with 60 installations, that about 15,000 researchers run jobs on the National Knowledge Network, and that the PARAM Utkarsh facility in Bangalore is open to startups and MSMEs for workloads such as drug discovery, bioinformatics, protein folding, molecular modeling, weather prediction, oil exploration, and CFD [52-60].
MAJOR DISCUSSION POINT
Supercomputing expansion and user base
Argument 3
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja)
EXPLANATION
Vivek argued that many AI projects stall at the proof‑of‑concept stage because they rely on curated datasets and lack practical MLOps experience. When confronted with messy real‑world data and ROI pressures, organizations struggle to move to production.
EVIDENCE
He noted that people are happy with POCs trained on curated data, but real-life deployments encounter unclean data, insufficient MLOps expertise, and ROI considerations that force difficult hardware and deployment choices [95-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
During the same session, participants highlighted that while POCs on curated data are easy, real-world deployments stumble over missing, skewed, or noisy data and the need for MLOps skills [S2].
MAJOR DISCUSSION POINT
Challenges transitioning from POC to production
AGREED WITH
Nitin Bajaj
Argument 4
Full silicon‑to‑application sovereignty is unrealistic now; focus on controlling models and applications while sourcing chips externally, with a long‑term RISC‑V GPU goal (Vivek Kanneja)
EXPLANATION
Vivek stated that achieving complete end‑to‑end sovereignty is not feasible because India lacks the IP and advanced fabs for cutting‑edge silicon. He suggested a pragmatic approach: source chips externally while retaining control over models, orchestration, and applications, and mentioned a plan to develop a RISC‑V based GPU by 2029‑30.
EVIDENCE
He explained that India does not currently have the capability to design and fabricate advanced GPUs, so the strategy is to use external silicon (e.g., NVIDIA, Intel, AMD) while keeping critical choke points under Indian control, and that CDAC aims to design its own RISC-V GPGPU by 2029-30 [115-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek emphasized a pragmatic sovereignty approach, using external silicon but retaining control over models, mirroring comments on sovereign model development in the summit keynote [S2][S10].
MAJOR DISCUSSION POINT
Sovereignty versus external technology dependence
AGREED WITH
Amanraj Khanna, Nitin Bajaj
Argument 5
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja)
EXPLANATION
Vivek highlighted that Indian engineering graduates possess solid theoretical knowledge but lack hands‑on experience with large‑scale model deployment, MLOps, and real‑world data challenges. He called for curriculum changes and capstone projects that expose students to beta‑scale data and operational constraints.
EVIDENCE
He observed that while many engineers are bright and mathematically proficient, they are trained mainly on curated datasets and standard test cases, lacking exposure to missing, skewed, or noisy data, real-time constraints, and security considerations, suggesting the need for practical capstone projects [173-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion called for curriculum changes to give engineers hands-on experience with noisy data, real-time constraints, and MLOps pipelines [S2].
MAJOR DISCUSSION POINT
Talent and capability gap in AI education
AGREED WITH
Nitin Bajaj
DISAGREED WITH
Nitin Bajaj
Argument 6
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja)
EXPLANATION
Vivek described how CDAC incorporates power‑aware VLSI techniques, multiple power islands, and clock‑gating, along with liquid and water cooling to achieve a PUE around 1.2. He emphasized the need for benchmarks on energy per token and model compression to further improve efficiency.
EVIDENCE
He explained that modern designs use power islands and clock-tree gating, that CDAC solutions employ a 70/30 liquid-to-water cooling mix moving toward pure liquid cooling, resulting in a PUE of roughly 1.2, and called for benchmarking energy per token and developing compressed, power-aware models [188-205].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vivek described CDAC’s power-aware VLSI, liquid-cooling strategy achieving a PUE around 1.2, and the need for energy-per-token benchmarks; broader HPC sustainability concerns are discussed in the energy-efficiency review [S2][S11].
MAJOR DISCUSSION POINT
Energy efficiency and sustainability of AI compute
AGREED WITH
Nitin Bajaj
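The efficiency figures the two speakers trade (PUE around 1.2 versus 1.06, and the call for energy-per-token benchmarks) can be made concrete with a short sketch. This is illustrative arithmetic only, not CDAC or Intel tooling, and the load and throughput figures below are hypothetical.

```python
# Power Usage Effectiveness: total facility power divided by IT power.
# A PUE of 1.0 would mean zero cooling/power-delivery overhead.

def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """PUE = total facility power / IT equipment power."""
    return total_facility_kw / it_equipment_kw

def overhead_kw(it_equipment_kw: float, pue_value: float) -> float:
    """Non-IT load (cooling, power delivery) implied by a given PUE."""
    return it_equipment_kw * (pue_value - 1.0)

def energy_per_token_joules(avg_power_w: float, tokens_per_second: float) -> float:
    """The per-token energy benchmark the panel calls for: watts / throughput."""
    return avg_power_w / tokens_per_second

it_load_kw = 1000.0  # hypothetical 1 MW of IT load
print(overhead_kw(it_load_kw, 1.2))   # ~200 kW of overhead at PUE 1.2
print(overhead_kw(it_load_kw, 1.06))  # ~60 kW of overhead at PUE 1.06
print(energy_per_token_joules(400.0, 50.0))  # 8 J/token for a hypothetical accelerator
```

At the same IT load, the gap between PUE 1.2 and PUE 1.06 works out to roughly 140 kW of avoided overhead per megawatt of compute, which is why both speakers treat the metric as a headline number.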
Argument 7
Widespread deployment of AI across workflows, improving everyday life (Vivek Kanneja)
EXPLANATION
Vivek summarized his vision of success as AI being integrated into many workflows, making daily life simpler and more enjoyable for citizens.
EVIDENCE
He succinctly stated, “Success for me would be AI being actually deployed in a lot of workflows and making the life much simpler and enjoyable for us” [223].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External commentary notes AI’s role in simplifying daily tasks and enhancing quality of life, echoing the vision of broad workflow integration [S13][S14].
MAJOR DISCUSSION POINT
Vision of AI success
AGREED WITH
Nitin Bajaj
Argument 8
CDAC provides not only compute capacity but also domain expertise and hands‑on support to government agencies and startups, facilitating AI project implementation.
EXPLANATION
Beyond building supercomputers, Vivek emphasizes that CDAC offers specialized knowledge across multiple domains and actively assists agencies and startups in applying AI to real‑world problems.
EVIDENCE
He notes, “We also have a lot of expertise in various domains which we have developed in-house… we are hand-holding a lot of these agencies, a lot of government agencies are working with us on this” [61-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The panel highlighted CDAC’s hand-holding of government agencies and startups, offering domain expertise beyond raw compute resources [S2].
MAJOR DISCUSSION POINT
Domain support and consultancy role of CDAC
Nitin Bajaj
7 arguments · 163 words per minute · 1810 words · 665 seconds
Argument 1
ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
EXPLANATION
Nitin explained that enterprises face uncertainty about return on investment and must decide among on‑prem, cloud, or edge deployment models. This indecision, combined with cost considerations, prevents many pilots from reaching production scale.
EVIDENCE
He noted that the biggest gap is deciding what to use (on-prem, cloud, or open APIs), and that once use cases are defined, the final deployment cost and ROI become critical factors that hinder moving from pilot to production [76-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Consulting insights identify ROI concerns, data governance, and token pricing as key barriers that keep pilots from reaching production scale [S15]; the Fireside Chat also mentions cost-vs-performance decisions affecting deployment choices [S2].
MAJOR DISCUSSION POINT
Enterprise AI adoption barriers
AGREED WITH
Vivek Kanneja
Argument 2
Data‑sovereignty importance varies by industry; decisions balance cost, performance, and edge vs cloud considerations (Nitin Bajaj)
EXPLANATION
Nitin said data sovereignty is crucial for sectors like banking and healthcare, while manufacturing and retail often prioritize speed and performance by using cloud services. Enterprises therefore weigh cost, performance, and deployment location (edge vs cloud) when making decisions.
EVIDENCE
He described how banking and healthcare require strict data sovereignty, whereas manufacturing and retail favor cloud APIs for speed and accuracy, and highlighted edge deployments for OT environments, noting that cost and performance drive the final choice [149-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion cited banking and healthcare as sectors where data sovereignty is critical, with cost-performance trade-offs driving cloud versus edge decisions [S2].
MAJOR DISCUSSION POINT
Industry‑specific data sovereignty considerations
AGREED WITH
Vivek Kanneja, Amanraj Khanna
Argument 3
India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
EXPLANATION
Nitin pointed out that India’s large, young population (much of it aged roughly 13‑25) provides a demographic advantage that can quickly bridge the AI talent gap. He also mentioned his own efforts to learn from younger colleagues.
EVIDENCE
He stated that India’s booming young population will help close the AI capability gap within a few years and that he personally is learning from the younger generation about AI deployment [184-186].
MAJOR DISCUSSION POINT
Demographic advantage for talent development
AGREED WITH
Vivek Kanneja
DISAGREED WITH
Vivek Kanneja
Argument 4
Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
EXPLANATION
Nitin highlighted Intel’s use of advanced packaging technologies that improve power efficiency by 15% and reported that Intel data centers operate at a PUE of 1.06, the industry’s best. He added that selecting appropriate models for edge or data‑center deployment can further lower power consumption.
EVIDENCE
He mentioned Intel’s RibbonFET and PowerVia technologies, a PUE of 1.06 documented in an Intel white paper, and emphasized that judicious model selection helps conserve energy [207-216].
MAJOR DISCUSSION POINT
Energy efficiency in Intel’s AI infrastructure
AGREED WITH
Vivek Kanneja
Argument 5
Mass‑scale AI adoption enabling even small vendors to leverage Indic models and increase public intelligence (Nitin Bajaj)
EXPLANATION
Nitin envisioned a future where India moves from low data usage to leading the world, allowing even small vendors like a street vegetable seller to benefit from AI. He emphasized the role of Indic models and widespread AI deployment in raising public intelligence.
EVIDENCE
He noted that India has risen from rank 150 to number one in data usage, and that when small vendors can up-level using AI, along with Indic models supporting diverse use cases, mass-scale AI deployment will have a large societal impact [225-231].
MAJOR DISCUSSION POINT
Vision of AI success for the broader economy
AGREED WITH
Vivek Kanneja
Argument 6
Intel’s “frugal AI” strategy leverages CPU‑centric architectures with integrated GPU/NPU to run large language models efficiently, reducing reliance on dedicated GPUs.
EXPLANATION
Nitin describes how Intel’s CPUs can handle 7‑8 billion‑parameter models on the edge and up to 20‑80 billion‑parameter models in data centres, offering a cost‑effective alternative to GPU‑only solutions.
EVIDENCE
He explains that “the Intel core and ultra core CPUs they have a GPU, NPU and a CPU all combined in a single processor which allows you enough capability to run maybe a 7 or 8 billion parameter model… Xeon processors today are able to run a 20 billion parameter very easily, up to 80 billion depending on the use case” [156-160].
MAJOR DISCUSSION POINT
Frugal AI hardware approach
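Nitin’s parameter-count claims map directly onto memory arithmetic. A minimal sketch (my own back-of-envelope assumption, not Intel sizing guidance) of the weights-only footprint at common precisions shows why a 7‑8 billion-parameter model is edge-feasible while 20‑80 billion-parameter models stay in the data centre:

```python
# Weights-only memory estimate for an N-billion-parameter model.
# Ignores KV cache and activations, which add further overhead in practice.

BYTES_PER_PARAM = {"fp32": 4.0, "fp16": 2.0, "int8": 1.0, "int4": 0.5}

def weights_gb(params_billions: float, precision: str) -> float:
    """Approximate weight memory in GB (1 GB = 1e9 bytes)."""
    return params_billions * BYTES_PER_PARAM[precision]

print(weights_gb(8, "int4"))   # 4.0 GB: fits alongside an OS on a laptop-class CPU/NPU
print(weights_gb(20, "int8"))  # 20.0 GB: within reach of a well-provisioned server CPU
print(weights_gb(80, "fp16"))  # 160.0 GB: server-class memory territory
```

The same arithmetic explains the quantization trend in frugal deployments: halving bytes per parameter doubles the model size a fixed memory budget can hold.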
Argument 7
The rapid turnover of AI models and APIs creates uncertainty for enterprises, making it hard to select stable deployment models and contributing to pilot stagnation.
EXPLANATION
Nitin points out that the speed at which models evolve forces enterprises to constantly reassess deployment choices (on‑prem, cloud, edge), which hampers progress from pilot to production.
EVIDENCE
He remarks, “the entire AI journey is changing so rapidly models are being dropped at a speed of light… enterprises are trying to figure out what is the best deployment model…” [83-84] and later adds that “people are happy with the POCs… but once it actually hits real-life situations… the reality hits that, no, it’s not that simple” [95-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists observed that AI models are being deprecated at “speed of light,” causing enterprises to hesitate on stable deployment pathways [S2].
MAJOR DISCUSSION POINT
Model volatility and deployment uncertainty
Amanraj Khanna
4 arguments · 154 words per minute · 1231 words · 478 seconds
Argument 1
India’s AI strategy must translate massive policy announcements and infrastructure investments into real enterprise adoption and scale.
EXPLANATION
Amanraj points out that while the government has announced significant initiatives such as Pax Silica, multi‑billion‑dollar commitments from Microsoft and Google, and partnerships like Anthropic‑Infosys, the critical challenge remains moving from announcement to deployment at scale.
EVIDENCE
He lists the policy announcements and investment figures (e.g., Pax Silica, $50 billion from Microsoft, $15 billion from Google) and then stresses that “announcement is one thing,” while deployment and achieving scale are quite another [8-19].
MAJOR DISCUSSION POINT
Policy‑deployment gap
Argument 2
Data sovereignty and technology dependence are central concerns for foreign investors, highlighting the need for a balanced domestic‑global sourcing approach.
EXPLANATION
Amanraj observes that every conversation with foreign investors inevitably raises issues of sovereignty and dependence on external technology, suggesting that India’s AI roadmap must address these concerns explicitly.
EVIDENCE
He states, “I haven’t had a single conversation here with a foreign investor which hasn’t talked about sovereignty or dependency” [107-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Every conversation with foreign investors reportedly raises sovereignty and dependency issues, underscoring the need for a balanced strategy [S2].
MAJOR DISCUSSION POINT
Sovereignty and dependency concerns
Argument 3
Energy consumption and sustainability of AI compute infrastructure must be addressed as part of India’s AI rollout.
EXPLANATION
Amanraj raises the question of how supercomputing and large data‑center operations impact energy use and sustainability, prompting a discussion on cooling techniques and power efficiency.
EVIDENCE
He asks, “How do we think about energy and sustainability implications, especially in the Indian context?” directed to Vivek [186-187].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The sustainability of high-performance AI compute, including power-aware designs and cooling techniques, is discussed in the HPC energy-efficiency review [S11] and echoed by the summit’s PUE discussion [S2].
MAJOR DISCUSSION POINT
Energy and sustainability of AI infrastructure
Argument 4
Success for India’s AI over the next three to five years should be measured by widespread AI deployment across workflows that tangibly improve everyday life and societal intelligence.
EXPLANATION
In his closing question, Amanraj asks panelists to define success succinctly, implying that the benchmark for progress is broad, practical AI integration rather than isolated pilots.
EVIDENCE
He frames the final query: “When we assess India’s progress over the next three, five years, what does success look like to each of you?” [220-222].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
External sources describe AI’s growing impact on daily workflows and societal intelligence, aligning with the panel’s success metric of broad, tangible AI integration [S13][S14].
MAJOR DISCUSSION POINT
Defining AI success metrics
Agreements
Agreement Points
AI projects frequently stall at the proof‑of‑concept stage because of data‑quality issues, lack of MLOps expertise and ROI pressures, hindering movement to production scale.
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja) ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
Both panelists note that while pilots are easy on curated data, real-world deployments encounter messy data and insufficient operational skills, and the uncertainty around ROI and deployment choices prevents scaling [95-100][76-79][93-94].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple industry analyses note that up to 80 % of AI pilots fail to reach production due to data quality, governance gaps and lack of MLOps expertise, and boards demand clear ROI, confirming the stall at PoC stage [S32][S35][S24].
Cost and return‑on‑investment considerations are the primary drivers of enterprise AI deployment decisions.
Speakers: Nitin Bajaj, Vivek Kanneja
ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj) ROI considerations when choosing on‑prem vs GPU vs VM affect deployment choices (Vivek Kanneja)
Both agree that enterprises must evaluate the final cost and ROI before moving beyond pilots, influencing choices between on-prem, cloud, edge or GPU resources [76-79][99-100].
POLICY CONTEXT (KNOWLEDGE BASE)
Enterprise AI investment decisions are driven by cost-benefit analysis and board-level ROI expectations, as highlighted in studies on AI adoption economics and the need for model-size trade-offs [S34][S35][S32].
Energy efficiency and sustainability of AI compute infrastructure are critical, with both parties highlighting low PUE designs and the need for benchmarks.
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja) Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
Vivek describes CDAC’s power-aware VLSI, liquid-cooling and PUE≈1.2, while Nitin cites Intel data-center PUE≈1.06 and efficient packaging, both calling for energy-per-token benchmarks and judicious model choices [188-205][207-216].
POLICY CONTEXT (KNOWLEDGE BASE)
The sustainability of AI compute is emphasized in the Green AI discourse and in national strategies calling for low PUE data-center designs and benchmark development for carbon-aware AI workloads [S25][S39][S41][S24].
There is a notable talent and capability gap in AI, especially in practical deployment and MLOps, though demographic factors may help close it soon.
Speakers: Vivek Kanneja, Nitin Bajaj
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja) India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
Vivek points to insufficient hands-on training for engineers, while Nitin highlights India’s youthful population as a catalyst for rapidly bridging the skill gap [173-182][184-186].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s large, young workforce is identified as a potential remedy for the AI talent shortage, yet current capacity gaps in MLOps and deployment skills are documented in multiple governance and skills-building panels [S28][S36][S37][S38].
Sovereignty concerns must be balanced with pragmatic reliance on global technology components; control over models and applications is emphasized.
Speakers: Vivek Kanneja, Amanraj Khanna, Nitin Bajaj
Full silicon‑to‑application sovereignty is unrealistic now; focus on controlling models and applications while sourcing chips externally, with a long‑term RISC‑V GPU goal (Vivek Kanneja) Data sovereignty and technology dependence are central concerns for foreign investors, highlighting the need for a balanced domestic‑global sourcing approach (Amanraj Khanna) Data‑sovereignty importance varies by industry; decisions balance cost, performance, and edge vs cloud considerations (Nitin Bajaj)
All three acknowledge that complete end-to-end sovereignty is not feasible; instead, India should retain control over critical layers (models, orchestration, applications) while using external silicon, with industry-specific data-sovereignty requirements shaping choices [115-138][111-114][149-156].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks distinguish strategic sovereignty (control over data, models, governance) from technical reliance on global components, reflecting a balanced approach endorsed in recent AI sovereignty discussions [S30][S42][S43][S41].
Success for India’s AI ecosystem is envisioned as mass‑scale deployment that improves everyday life and empowers even small enterprises.
Speakers: Vivek Kanneja, Nitin Bajaj
Widespread deployment of AI across workflows, improving everyday life (Vivek Kanneja) Mass‑scale AI adoption enabling even small vendors to leverage Indic models and increase public intelligence (Nitin Bajaj)
Vivek defines success as AI being embedded in many workflows, while Nitin envisions a future where even a street vendor can benefit from AI, both stressing broad societal impact [223][225-231].
Similar Viewpoints
Both see the transition from pilot to production as blocked by operational skill gaps and unclear ROI, requiring better MLOps capabilities and clearer cost models [95-100][76-79].
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja) ROI uncertainty, deployment model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj)
Both stress that energy efficiency is essential for AI infrastructure and that low PUE designs and careful model choices are key strategies [188-205][207-216].
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, low PUE (~1.2) and need for energy benchmarks (Vivek Kanneja) Intel’s data centers achieve PUE 1.06 and efficient packaging; model selection reduces power use (Nitin Bajaj)
Both acknowledge sovereignty concerns but agree that a pragmatic mix of domestic control and external technology is necessary [115-138][111-114].
Speakers: Vivek Kanneja, Amanraj Khanna
Full silicon‑to‑application sovereignty is unrealistic; pragmatic approach using external chips while retaining control (Vivek Kanneja) Data sovereignty and technology dependence are central concerns for foreign investors (Amanraj Khanna)
Unexpected Consensus
Both a government research institute (CDAC) and a private‑sector hardware vendor (Intel) prioritize ultra‑low PUE designs and cooling innovations despite operating in different domains.
Speakers: Vivek Kanneja, Nitin Bajaj
Power‑aware design, liquid cooling, and low PUE (~1.2) are being used; need benchmarks for energy per token and model compression (Vivek Kanneja) Intel’s data centers achieve very low PUE (1.06) and employ efficient packaging; careful model selection further reduces power use (Nitin Bajaj)
It is surprising that both a public supercomputing centre and a commercial data-center operator independently emphasize comparable PUE targets (≈1.2 vs 1.06) and similar cooling strategies, indicating a convergent view on sustainability across sectors [188-205][207-216].
POLICY CONTEXT (KNOWLEDGE BASE)
Both CDAC and Intel have publicly committed to ultra-low PUE cooling solutions, demonstrating cross-sector alignment on energy-efficient AI hardware as noted in the AI Impact Summit consensus [S24][S39][S41].
Overall Assessment

The panel shows a strong consensus across policy, research and industry on the core challenges of AI adoption in India: moving from pilots to production, managing cost/ROI, addressing talent gaps, ensuring energy‑efficient infrastructure, and balancing sovereignty with global technology. All agree that success will be measured by widespread, societally beneficial AI deployment.

High consensus – the alignment of viewpoints suggests that coordinated policy, capacity‑building and industry initiatives can be pursued with shared understanding of priorities and constraints.

Differences
Different Viewpoints
Extent and solution to the AI talent and capability gap in India
Speakers: Vivek Kanneja, Nitin Bajaj
Academic training is strong theoretically but weak in practical deployment, MLOps, and handling messy data; curriculum reform needed (Vivek Kanneja) India’s young demographic will rapidly close the AI skill gap; personal upskilling is ongoing (Nitin Bajaj)
Vivek stresses a serious, structural shortage of hands-on AI skills that requires changes to engineering curricula and capstone projects [173-182]. Nitin counters that the country’s large, youthful population will quickly bridge the gap, and he is personally learning from younger colleagues [184-186].
POLICY CONTEXT (KNOWLEDGE BASE)
The extent of India’s AI talent gap and proposed remediation pathways are debated, with reports highlighting current skill shortages versus demographic advantages for rapid upskilling [S28][S36][S37][S38].
Strategic path to AI sovereignty and chip design
Speakers: Vivek Kanneja, Nitin Bajaj
Full silicon‑to‑application sovereignty is unrealistic now; India should source chips externally while retaining control over models and applications, with a long‑term goal of a RISC‑V GPU by 2029‑30 (Vivek Kanneja). Intel’s “frugal AI” relies on existing CPU‑centric architectures with integrated GPU/NPU, suggesting a focus on leveraging current foreign silicon rather than developing a domestic GPU (Nitin Bajaj).
Vivek proposes a pragmatic approach that uses imported chips but keeps critical layers under Indian control, planning a home-grown RISC-V GPU in the future [115-138]. Nitin emphasizes using Intel’s current CPU-based solutions with integrated accelerators, implying reliance on existing foreign silicon without a domestic GPU roadmap [156-160].
POLICY CONTEXT (KNOWLEDGE BASE)
Strategic AI sovereignty and indigenous chip design are framed by policy papers distinguishing strategic control from technical implementation, and by industry calls for domestic AI-optimized silicon [S30][S38][S43].
Assessment of energy efficiency in AI compute infrastructure
Speakers: Vivek Kanneja, Nitin Bajaj
CDAC’s supercomputing solutions achieve a Power Usage Effectiveness (PUE) of around 1.2 using liquid and water cooling [198-199]. Intel’s data centers operate at a PUE of 1.06, the most efficient reported, thanks to advanced packaging and design [211-212].
Vivek reports a PUE of roughly 1.2 for CDAC installations, indicating good but not best-in-class efficiency [198-199]. Nitin claims Intel’s own facilities reach a superior PUE of 1.06, suggesting a higher benchmark for sustainability [211-212].
POLICY CONTEXT (KNOWLEDGE BASE)
Assessments of AI compute energy efficiency reference the Green AI literature and national benchmarks for PUE and carbon-aware AI, underscoring the need for systematic measurement [S25][S39][S41].
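PUE, the metric both speakers cite, is simply total facility power divided by IT‑equipment power (1.0 is the ideal). A minimal sketch of the calculation; the load figures below are hypothetical, chosen only to reproduce the quoted ratios:

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power over IT power (1.0 is ideal)."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical loads chosen to match the reported ratios:
cdac_pue = pue(total_facility_kw=1200.0, it_equipment_kw=1000.0)   # 1.2
intel_pue = pue(total_facility_kw=1060.0, it_equipment_kw=1000.0)  # 1.06

# Share of total draw spent on overhead (cooling, power distribution):
cdac_overhead = 1 - 1 / cdac_pue   # ~0.167, i.e. roughly 17% of energy is non-IT
```

At a PUE of 1.06 the overhead share falls below 6% of total draw, which is why the two figures, though numerically close, imply quite different cooling budgets.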
Unexpected Differences
Role of data sovereignty in enterprise AI decisions
Speakers: Amanraj Khanna, Nitin Bajaj
Data sovereignty and localization are central policy concerns that shape AI infrastructure choices (Amanraj Khanna). Data‑sovereignty importance varies by industry: banking and healthcare need it, but manufacturing and retail often prioritize speed and cost, using cloud services (Nitin Bajaj).
The moderator frames data sovereignty as a dominant, overarching policy driver for all AI deployments [107-108]. Nitin, however, treats it as a sector-specific factor, arguing that many enterprises will choose cloud or edge solutions despite sovereignty concerns [149-156]. This divergence between a universal policy emphasis and a nuanced industry-specific view was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
Data sovereignty’s impact on enterprise AI choices is addressed in summit panels that outline a balanced model of national data control coupled with global collaboration, informing policy guidance for Indian enterprises [S42][S30][S43].
Overall Assessment

The panel showed convergence on the existence of barriers to scaling AI—technical (MLOps, data quality) and economic (ROI, deployment choices). However, clear disagreements emerged around talent development, the path to technological sovereignty, and assessments of energy efficiency. An unexpected split appeared on how universally data sovereignty should influence enterprise decisions.

Moderate to high. While participants share a common goal of broader AI adoption, they differ on the root causes and optimal policy/technology pathways, indicating that coordinated action will need to reconcile divergent views on skill development, domestic chip strategy, and the weight of sovereignty versus practical performance considerations.

Partial Agreements
Both acknowledge that moving AI projects beyond pilot stages is difficult: Vivek points to technical hurdles such as unclean data and missing MLOps skills [95-100], while Nitin highlights business‑level concerns around ROI and choosing the right deployment model [76-79]. Together they agree that a combination of technical and economic factors stalls large‑scale adoption.
Speakers: Vivek Kanneja, Nitin Bajaj
Lack of real‑world MLOps expertise and data‑quality challenges cause POC‑to‑production gaps (Vivek Kanneja). ROI uncertainty and deployment‑model choices (on‑prem, cloud, edge) hinder scaling from pilot to production (Nitin Bajaj).
Takeaways
Key takeaways
India’s AI compute infrastructure is anchored by CDAC’s PARAM supercomputers, currently ~48 PFLOPS and projected to reach ~100 PFLOPS by year‑end, serving researchers, MSMEs, and startups.
Enterprise AI scaling is hampered by unclear ROI, the choice of deployment model (on‑prem, cloud, edge), and a gap between successful POCs and production‑grade MLOps capabilities.
Sovereignty goals are realistic when focusing on control of models, data, and applications while still sourcing silicon externally; a long‑term RISC‑V GPU is planned for 2029‑30.
Data‑sovereignty requirements vary by industry; cost, performance, and edge vs cloud trade‑offs drive infrastructure decisions.
A talent gap exists: academic curricula are strong in theory but lack practical MLOps, data‑cleaning, and large‑model deployment training; the demographic advantage may close the gap quickly.
Energy efficiency is being addressed through power‑aware chip design, liquid cooling, and low‑PUE data centers (CDAC ~1.2, Intel ~1.06); benchmarking energy per token and model compression are identified as next steps.
Success in the next 3‑5 years is envisioned as widespread AI deployment across workflows, enabling even small vendors to leverage Indic models and improve public intelligence.
Resolutions and action items
CDAC to expand PARAM capacity to ~100 PFLOPS with 60 installations by end of the year.
CDAC to continue development of a home‑grown RISC‑V based GPGPU, targeted for release around 2029‑30.
Call for curriculum reform in Indian engineering colleges to include practical MLOps, data‑quality handling, and large‑model deployment.
Proposal to establish benchmarks for energy consumption per token and to promote model compression techniques.
Intel to promote “frugal AI” solutions that leverage CPUs with integrated GPU/NPU for edge and data‑center workloads.
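The proposed energy‑per‑token benchmark reduces to a ratio of measured energy to tokens processed over a window. A sketch of how such a measurement could be expressed; the power and throughput numbers below are hypothetical illustrations, not figures from the panel:

```python
def energy_per_token_j(avg_power_w: float, duration_s: float, tokens: int) -> float:
    """Joules per token over a measurement window: energy (watts * seconds) / tokens."""
    if tokens <= 0:
        raise ValueError("token count must be positive")
    return avg_power_w * duration_s / tokens

# Hypothetical inference run: a 700 W accelerator generating 3000 tokens in 60 s.
baseline = energy_per_token_j(avg_power_w=700.0, duration_s=60.0, tokens=3000)    # 14.0 J/token

# A compressed model drawing 350 W at the same throughput halves the footprint.
compressed = energy_per_token_j(avg_power_w=350.0, duration_s=60.0, tokens=3000)  # 7.0 J/token
```

A benchmark of this shape would let model compression be evaluated directly against the sustainability goal rather than against raw FLOPS alone.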
Unresolved issues
How enterprises will definitively choose between on‑prem, cloud, and edge deployments given rapidly evolving models and tooling.
A quantitative framework for ROI calculation that balances cost, performance, and data sovereignty across different industry verticals.
Standardized, industry‑wide MLOps practices and tooling to bridge the POC‑to‑production gap.
The extent to which India can reduce dependence on foreign silicon in the near term while maintaining competitiveness.
Concrete policies or incentives to accelerate talent up‑skilling and retention in AI deployment roles.
Suggested compromises
Adopt a pragmatic sovereignty model: source silicon globally but retain control over AI models, orchestration, and applications.
Utilize existing CPU infrastructure for many inference workloads, reserving GPUs for only the most demanding tasks (the frugal‑AI approach).
Combine cloud services for rapid prototyping with on‑prem or edge deployments for data‑sensitive or latency‑critical workloads.
Balance data‑localization mandates with cost‑performance considerations by allowing hybrid cloud‑edge architectures where appropriate.
Thought Provoking Comments
Announcement is one thing. Deployment and then achieving scale quite another.
Sets the central tension of the panel, distinguishing hype from practical implementation and framing the need to move beyond policy announcements.
Guided the conversation toward concrete challenges of adoption, prompting both panelists to discuss real‑world barriers and infrastructure, and establishing the lens through which subsequent remarks were evaluated.
Speaker: Amanraj Khanna
The biggest gap today is what to use, whether to use it on‑prem or go on cloud, use open APIs… there is no single formula. The AI journey is changing so rapidly that enterprises are trying to figure out the best deployment model for ROI.
Highlights the core strategic dilemma for enterprises—balancing speed, cost, and technology choices amid fast‑evolving models—introducing the concept of “frugal AI.”
Shifted the discussion from infrastructure capacity to decision‑making complexity, leading Vivek to elaborate on POC challenges and prompting deeper analysis of cost‑performance trade‑offs.
Speaker: Nitin Bajaj
People are very happy with the POCs… but once it hits real‑life situations where the data needs to be cleaned, you have no proper experience in actual deployments of the MLOps… then the reality hits that, no, it’s not that simple.
Identifies the critical bottleneck where many projects stall—transition from proof‑of‑concept to production—emphasizing practical MLOps and data quality issues.
Created a turning point by pinpointing why pilots fail, which reinforced Nitin’s earlier ROI concerns and set the stage for discussing talent gaps and curriculum reforms.
Speaker: Vivek Kanneja
When you talk of sovereignty, do you want to be completely independent from silicon up to the application? Pragmatically, we can source silicon abroad but keep the stack above it under our control. We are designing our own RISC‑V GPGPU for 2029‑30.
Provides a nuanced, realistic perspective on AI sovereignty, balancing aspirational goals with current capabilities and outlining a concrete roadmap.
Redirected the policy‑technology debate from an abstract ideal to actionable steps, influencing Nitin’s later remarks on industry‑specific data sovereignty and reinforcing the need for a hybrid approach.
Speaker: Vivek Kanneja
For banking or healthcare data sovereignty is very important, but for manufacturing or retail many use cases run on the cloud for speed and accuracy. The key is frugal AI—using CPUs where possible and matching performance to cost, e.g., 15‑20 prompts per second may be sufficient.
Adds depth by showing how data‑localization concerns vary by sector and introduces a cost‑focused deployment strategy, linking back to the earlier “no single formula” point.
Expanded the conversation to sector‑specific strategies, illustrating how policy intersects with business decisions and reinforcing the earlier discussion on ROI and hardware choices.
Speaker: Nitin Bajaj
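The “frugal AI” triage described here can be reduced to a throughput test: serve on CPU whenever the required rate fits within its capacity, and reserve accelerators for heavier workloads. A minimal sketch; the default capacity echoes the 15‑20 prompts‑per‑second figure from the quote and is illustrative, not a measured benchmark:

```python
def choose_backend(required_prompts_per_s: float, cpu_capacity_pps: float = 20.0) -> str:
    """Frugal-AI triage: prefer the cheaper CPU path whenever it meets the required rate."""
    return "cpu" if required_prompts_per_s <= cpu_capacity_pps else "gpu"

# A small vendor's chatbot at ~15 prompts/s fits comfortably on CPU:
assert choose_backend(15.0) == "cpu"
# A high-traffic service at 120 prompts/s exceeds CPU capacity and needs a GPU:
assert choose_backend(120.0) == "gpu"
```

A real decision framework would fold in cost per query and data‑sovereignty constraints alongside throughput, as the panel's unresolved ROI question notes.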
We have bright engineers with theoretical knowledge, but they lack hands‑on experience with real‑world data, MLOps, and deployment. Curriculum needs capstone projects that handle messy, large‑scale data.
Identifies a systemic talent gap that underpins many of the deployment challenges discussed, calling for educational reform.
Deepened the analysis by linking technical bottlenecks to workforce development, prompting Nitin to note demographic advantages and suggesting a longer‑term solution to the earlier POC‑to‑production issue.
Speaker: Vivek Kanneja
Energy efficiency must be benchmarked per token for training or inference. We’re moving to liquid cooling, achieving PUE around 1.2, and need green norms and power‑aware model design.
Introduces a concrete metric (energy per token) for sustainability, moving the sustainability conversation from vague concerns to measurable targets.
Shifted the dialogue toward quantifiable sustainability goals, leading Nitin to complement with Intel’s own PUE of 1.06 and reinforcing the theme of frugal, efficient AI.
Speaker: Vivek Kanneja
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from high‑level policy announcements to the gritty realities of AI adoption in India. Amanraj’s framing question set the stage, while Nitin’s articulation of the ROI and deployment‑model dilemma highlighted the strategic uncertainty enterprises face. Vivek’s stark description of the POC‑to‑production gap and his candid take on talent and sustainability turned the conversation toward practical bottlenecks and systemic solutions. Their complementary perspectives on sovereignty, sector‑specific data concerns, and frugal AI created a nuanced roadmap that linked policy, infrastructure, talent, and energy considerations. Collectively, these comments redirected the panel from abstract optimism to a grounded, multi‑dimensional view of what success will require in the next three to five years.

Follow-up Questions
What is the energy consumption per token for training and inference across different AI models?
Identifying a standard benchmark for energy per token would help assess and improve the sustainability of AI workloads.
Speaker: Vivek Kanneja
How can Indian engineering curricula be updated to include practical MLOps, large‑model deployment, and handling real‑world data issues?
Current curricula focus on theory; adding hands‑on projects would close the talent gap for AI deployments.
Speaker: Vivek Kanneja
What effective strategies can help enterprises move from POCs to production‑scale AI deployments, including building MLOps capabilities?
Enterprises struggle to scale beyond pilots; research into best practices would accelerate real‑world impact.
Speaker: Vivek Kanneja
How should enterprises decide the optimal deployment model (edge, on‑prem, cloud) while balancing cost, performance, and data‑sovereignty requirements?
Choosing the right deployment architecture is a key barrier; a decision framework would guide cost‑effective adoption.
Speaker: Nitin Bajaj
What are the best practices for ‘frugal AI’—running large models on CPUs versus GPUs—to achieve cost‑effective performance?
Understanding when CPUs suffice can reduce hardware costs and broaden access to AI capabilities.
Speaker: Nitin Bajaj
How can model compression and power‑aware model design reduce energy consumption while maintaining accuracy?
Power efficiency is critical for sustainability; research on compression techniques can lower the energy footprint of AI services.
Speaker: Nitin Bajaj
What metrics and benchmarks should be established to assess AI ROI for Indian enterprises?
A clear ROI framework would help enterprises justify investments and choose appropriate AI solutions.
Speaker: Nitin Bajaj
How will India’s domestic GPU development (RISC‑V based) timeline (2029‑30) impact the AI ecosystem, and what interim solutions are needed?
Understanding the transition path from reliance on foreign GPUs to indigenous ones will inform strategic planning.
Speaker: Vivek Kanneja
How can government and industry collaborate to scale MLOps engineer training and capstone projects for real‑world AI deployment?
Coordinated training initiatives can address the shortage of skilled practitioners needed for large‑scale AI.
Speaker: Vivek Kanneja
What further improvements can be made to supercomputing facility energy efficiency beyond the current PUE of ~1.2?
Even modest gains in PUE can significantly reduce operational costs and environmental impact of national AI infrastructure.
Speaker: Vivek Kanneja

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.