Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Keynote by Amb. Thomas Schneider

Session at a glance
Summary, keypoints and speakers overview

Summary

Thomas Schneider opened the session by thanking India and the global audience for hosting the AI Impact Summit in Delhi, emphasizing the event’s significance for worldwide AI governance [1-2]. He noted that Switzerland backs the summit’s focus on the three sutras (people, progress and planet) and stressed that AI should be developed for the benefit of all [3-4]. Schneider reiterated that AI must promote economic and social progress while respecting human dignity, autonomy and the planet [5-6]. He announced that Switzerland will host the next AI Summit in Geneva in 2027 and expressed enthusiasm about the strong interest from Swiss and international stakeholders [8-10]. According to him, the Swiss motivation is not to stage a show but to meaningfully help humanity harness AI’s transformative potential for good, not harm [12-14]. He invited participants to share ideas and emphasized that the agenda for the Geneva summit will be co-created with all parties, though it will retain a distinct Swiss perspective [15-18]. Schneider pledged to build on existing governance mechanisms such as the UN Internet Governance Forum, the AI for Good Summit, ITU-UNESCO forums, the OECD and other regional bodies, avoiding duplication of effort [20-22]. He also highlighted collaboration with the Diplo Foundation and the Geneva Internet Platform to help less-resourced communities navigate the complex AI governance ecosystem [23]. Recognizing AI’s breadth, he argued that no single institution can address all its challenges, and that governance will have to accommodate complexity [24-27]. He drew a parallel with the two-century evolution of engine regulation, noting that societies have created thousands of technical, legal and societal norms to govern physical machines [28-38].
Switzerland has already begun work on new technical standards, binding and non-binding legal instruments, and highlighted the Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law as a principle-based framework for all countries [42-49]. Nevertheless, he warned that additional sector-specific norms will be needed to ensure coherence and complement the Convention [52-53]. Finally, Schneider positioned Switzerland as a facilitator that will foster open, respectful dialogue and pragmatic cooperation to ensure AI contributes to peace, prosperity and dignity worldwide, looking forward to the 2027 Geneva summit [55-58].


Keypoints


Major discussion points


Inclusive, human-centric vision for AI – The speaker stresses that AI must be developed “so that everyone in the world can benefit” while respecting human dignity, autonomy and the planet [4-6][14].


Switzerland’s role as facilitator for the 2027 Geneva AI Summit – Switzerland will host the next summit, build on existing platforms (UN-IGF, AI for Good, OECD, etc.), and act as a bridge-builder for all stakeholders, especially less-resourced communities [8-10][20-23][55-56].


Building on and expanding existing governance instruments – Ongoing work on technical, legal and societal norms is highlighted, with special reference to the Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law as a “principle-based framework” that can be adapted globally [41-49][50-52].


Historical analogy to engine/combustion governance – The speaker draws a parallel between past regulation of physical-engine technologies and the need for a layered, context-specific AI governance ecosystem, noting that no single institution can cover all aspects [27-34][38-41].


Call for pragmatic, collaborative action and gap-filling – The remarks urge participants to identify shared priorities, avoid reinventing existing mechanisms, and work together on binding and non-binding norms that are coherent and interoperable [20-24][53-55].


Overall purpose / goal of the discussion


Thomas Schneider’s address is a diplomatic invitation and roadmap: Switzerland is positioning itself as a neutral convenor for the next global AI governance summit (Geneva 2027), outlining the principles that should guide AI development, summarising the work already underway (technical standards, legal instruments, the Vilnius Convention), and soliciting ideas from the international community to shape a collaborative, multi-stakeholder agenda that fills current governance gaps.


Overall tone


The tone is consistently courteous, optimistic and constructive. It begins with gratitude and a celebratory note [1-3], moves into a principled, inclusive framing of AI’s purpose [4-6], shifts to a pragmatic description of Switzerland’s facilitative role and existing ecosystem [20-23], becomes more concrete when presenting specific governance tools (the Vilnius Convention) [46-52], and concludes with an earnest, forward-looking call for partnership [55-58]. The tone remains steady throughout, with only a slight increase in specificity and urgency when discussing concrete instruments and next steps.


Speakers

Thomas Schneider


Areas of expertise: AI governance, international technology policy, human rights, democracy, rule of law, digital infrastructure.


Roles and titles: Former Chair of ICANN’s Governmental Advisory Committee (2014-2017) [S1]; Lead negotiator of the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law; Representative of Switzerland at the AI Impact Summit and member of the Swiss Summit team.


Additional speakers:


(none)


Full session report
Comprehensive analysis and detailed insights

Thomas Schneider opened the session by thanking India and the worldwide audience for hosting the AI Impact Summit in Delhi, describing the gathering as a “pivotal moment for global AI governance” and expressing appreciation for the diverse group of leaders, innovators, researchers and civil-society representatives that had been assembled [1-2]. He noted that Switzerland fully supports the summit’s focus on the three sutras (people, progress and planet) and affirmed the shared ambition that artificial intelligence be developed and deployed so that “everyone in the world can benefit” [3-4].


Schneider then articulated an inclusive, human-centred vision for AI, insisting that the technology must promote both economic and social progress while safeguarding human dignity, autonomy and the planet, which he described as “the basis for all life” and added that, at least so far, “we haven’t found other life elsewhere” [5-6]. He warned that AI’s transformative power is comparable to historic breakthroughs such as the printing press, radio, television, the internet and the combustion engine, and stressed that this power must be used to raise, not lower, the quality of life for all peoples [13-14].


Switzerland’s role as host of the next AI Impact Summit in Geneva in 2027 was announced, with Schneider stressing that the purpose is not to stage a “show” but to make a substantive contribution to ensuring AI is used for good, not for harm [12-14]. He also noted that previous editions of the summit were held in the United Kingdom, Korea and Paris, with Delhi as the current host and a future summit planned for Japan [30-33].


Inviting participants to submit their ideas, Schneider emphasized that the agenda for the Geneva summit will be co-created with all stakeholders while retaining a distinct “Swiss flavour” rooted in constructive, creative, innovative and pragmatic problem-solving [15-18]. He pledged that the summit will build on existing multistakeholder platforms, including the UN Internet Governance Forum, the AI for Good Summit, the Global Forum on Ethics of AI, the OECD, the Global Partnership on AI (GPAI) and other international and regional bodies, rather than duplicating processes that already work [20-25][34].


To ensure that less-resourced communities are not left behind, Schneider announced collaboration with the Diplo Foundation and the Geneva Internet Platform, which will help these groups navigate the complex AI-governance ecosystem, raise their voices and gain access to relevant information [23].


Acknowledging the breadth and context-specificity of AI, he argued that no single institution or instrument can capture the whole transformation, and that the community must learn to “live with a certain complexity” of governance [24-27]. He drew a historical parallel with the regulation of combustion engines over the past two centuries, describing how societies have developed thousands of technical, legal and societal norms, from highly harmonised safety standards for aircraft to more varied regulations for automobiles, to govern physical machines [28-41].


Switzerland has already begun to address AI governance gaps by analysing existing frameworks, drafting technical standards, and developing both binding and non-binding legal instruments [42-46]. Schneider highlighted the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which he helped negotiate with 55 countries. The Convention provides a principle-based, flexible framework that can be embedded in diverse national legal traditions, allowing for interoperable but not identical implementations [46-49]. He added that, while the Convention is expected to enter into force soon, additional sector-specific norms-both binding and non-binding-will be needed to ensure coherence across the governance landscape [50-52].


The period leading up to the Geneva summit will be used to identify remaining gaps in global and regional AI governance, to engage all countries and stakeholders in shaping a shared vision, and to develop pragmatic, workable steps that balance innovation with the mitigation of legitimate risks [53-55]. Switzerland will act as a facilitator, building bridges and fostering an open, respectful dialogue that offers “pragmatic structures for trustworthy cooperation” so that AI can contribute to peace, prosperity, security, and dignity worldwide [56-59].


In closing, Schneider reiterated the centrality of dignity, thanked the audience for their support and attention, and expressed anticipation of collaborative work in the coming months and of meeting participants in Geneva in 2027 [59].


Session transcript
Complete transcript of the session
Thomas Schneider

So, dear friends and colleagues from India and from all around the world, it is an honor and pleasure to be here with you in Delhi at this pivotal moment for global AI governance. And first, of course, I want to express my gratitude to the government of India for bringing together a diverse and distinguished group of leaders, innovators, researchers, civil society representatives from all around the world. Switzerland very much welcomes and supports the focus of the AI Impact Summit, which is well presented in the three sutras, people, progress, planet, as we all have learned in the past weeks and months. And we fully agree that we need to develop and use AI in a way that everyone in the world can benefit from the potential that AI offers.

This includes economic and societal progress for everyone. At the same time, of course, we need to make sure that we are able to develop and use AI in a way that we respect human dignity and autonomy, as well as our planet, which is the basis for all life that we know, at least so far. We haven’t found other life elsewhere. So we are honored and very proud to be hosting the next AI Summit in Geneva in 2027. It is overwhelming to see and feel, already now, the momentum and the enthusiasm that we sense on the national level among all Swiss stakeholders, as well as the very positive reactions from our partners from all around the world, who are all eager and willing to cooperate with us and contribute to the summit in Geneva.

Already now, we are approached by many governmental and other stakeholders who share their ideas with us about what the Geneva Summit, and the road leading up to it, should focus on and what it should achieve. And let me assure you that this is very welcome and helpful to us. The Swiss motivation for organizing the next summit is not to make a show; it is to substantially and meaningfully contribute to ensuring that mankind uses the unprecedented potential of AI for good and not for bad. This potential of AI, which may be at least as transformative as the invention of the printing press, radio, television and the internet, as well as the invention of the combustion and other engines together, must be used to raise and not lower the quality of life of all people in the world, and not just a few.

AI must strengthen and not weaken the dignity and autonomy of all people in the global north, south, east and west, or whatever we call the region where we live, and help us all to live together in peace and prosperity. So we are very keen to hear your ideas about what we could and should do together to achieve this goal. Of course, we do have some ideas of our own, but we have not yet decided on the focus of the Geneva Summit. We will discuss it with you and shape it together. Of course, there will be a Swiss flavor to the Geneva Summit, based on the way we work and on how we understand our role in the international community.

We will try to be constructive, creative and innovative, and try to find pragmatic and fair solutions by bringing together all stakeholders in their respective roles and with their respective experience. At the same time, we will try not to reinvent the wheel and duplicate processes and instruments that already exist and that work, but rather build on them, because we already have a number of dialogue platforms for AI governance and for sharing good practices, such as the UN Internet Governance Forum and its national and regional initiatives, the AI for Good Summit and the Global Forum on Ethics of AI organized by ITU, UNESCO and many other UN-related processes and forums.

We have other forums like the OECD, GPAI and other international and regional organizations, and of course we will build on the outcomes of the previous summits in the UK, Korea and Paris (Japan will follow at some point in time), and of course here in Delhi. And we should not forget that there are many academic and other networks that provide expertise and solutions. So we will do our best to bring them all together. And with the help of our longstanding partners from the Diplo Foundation and the Geneva Internet Platform, we will also try to facilitate orientation in this complex governance ecosystem, in particular for less resourced communities, so that they too know better what is going on where, and where we need to raise our voices so that they are actually heard.

At the same time, we consider the transformative power of AI to be too big, too broad and too context-specific for any single institution or single instrument to allow us to seize all opportunities and solve all problems. So we will have to learn to live with a certain complexity in the governance of this transformation. But this is not a completely new situation. If we look at how we have governed the transformative power of combustion and other engines in the past 200 years, there are some lessons that we can also apply to AI. While today we are developing AI to automate cognitive labor, we developed engines to automate physical labor. We have put engines in vehicles or machines to move goods or people from one place to another.

And we have put engines in machines to produce food or other goods automatically. And we do not expect one single institution or instrument to govern all of this. But we have developed a set of thousands of technical, legal, and also non-written societal norms that guide us in the use of these machines. We have also regulated the infrastructure that these machines use. We are setting requirements and liabilities for the people who develop, handle and steer these machines. And we have developed instruments to protect people who are affected by the impact of these machines. And we are seeing different levels of harmonization when it comes to regulating machines and engines. As an example, we know that the airline industry is much more harmonized, because it is global, than the way we regulate cars.

Cars drive in our streets on one side or the other, where more diversity is possible. So after 200 years, we are still continuing to adapt the governance framework for engine-driven machines, depending on the context of use. And we need to do exactly the same with AI. We need to develop appropriate technical, legal and societal frameworks and norms that allow us to develop and use AI for good in many different ways. And this work has already begun. We have analyzed our existing governance frameworks and have started to identify and fill the gaps. We have started to work on technical norms for AI systems. We have started to work on binding and non-binding legal instruments. And of course, in this regard, I’d like to particularly highlight the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, for which I had the honor to lead the negotiations among 55 countries from all over the world at the Council of Europe in Strasbourg.

This provides for a principle-based framework, not just for Europe, but for all countries on our planet that value human rights, democracy and the rule of law, so that our societies and economies can use AI to innovate while we uphold our respect for human dignity and autonomy, also in the context of AI. The principles set out by the Vilnius Convention are simple and clear, but the Convention leaves enough leeway for participating states to embed these principles in their existing legal and regulatory institutions and traditions. This will allow many countries to become parties to this global convention and to make sure that their governance frameworks, although not identical, are at least interoperable.

This Convention, which we hope will be ratified and enter into force very soon, will become one important instrument to make sure that AI is used for good and not for bad. But of course, there will have to be many more binding and non-binding norms, and more sector-specific norms and instruments to complement it, which hopefully will be at least coherent in their logic and spirit. So we will use the time until the Geneva Summit next year to continue to identify gaps in the global and regional governance of AI and achieve our shared objectives, so that AI is used for innovation while legitimate concerns and risks are appropriately addressed. Switzerland will be the host of the next summit, but we know that we will not be able to achieve anything on our own.

So we look forward to collaborating with all of you, with all countries and all stakeholders from the global north, south, east and west. We will first try to identify areas where there is a willingness and a shared vision to make progress together, and then work with all of you on pragmatic and workable steps towards this vision. We will only be the facilitators, trying to build bridges and a climate of open, respectful and constructive dialogue, and trying to offer pragmatic structures for trustworthy cooperation so that we can all use the potential of AI, to say it again, to live together in peace, prosperity, security and dignity. The Swiss Summit team and I personally are looking forward to collaborating with all of you in the coming months, and we look forward to seeing you all in Geneva in 2027.

Thank you for your support and attention.

Related Resources
Knowledge base sources related to the discussion topics (18)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Thomas Schneider is a Swiss government official or diplomat representing Switzerland in AI governance discussions.”

The knowledge base identifies Thomas Schneider as a Swiss government official/diplomat who leads negotiations among many countries, confirming his representative role for Switzerland.

Confirmed (high confidence)

“The summit’s focus on three sutras—people, progress, and planet—is a guiding principle.”

The three guiding principles called sutras—people, progress, planet—are explicitly listed in the knowledge base as the summit’s guiding principles.

Confirmed (high confidence)

“The summit emphasizes a human‑centred approach that respects human dignity and autonomy.”

Keynote materials stress that technology must serve humanity, respect human dignity, and keep humans at the centre, aligning with the reported human‑centred vision.

Additional Context (medium confidence)

“AI’s transformative power is comparable to historic breakthroughs such as the printing press, radio, television, the internet and the combustion engine.”

Analyses of prior sessions note similar comparisons of AI to historic technologies like the printing press, radio, television and the internet, providing contextual support for this analogy.

Confirmed (high confidence)

“The summit will build on existing multistakeholder platforms such as the UN Internet Governance Forum.”

The IGF is cited in the knowledge base as a successful multistakeholder platform for internet and AI governance, confirming its relevance to the summit’s collaborative approach.

External Sources (73)
S1
Thomas Schneider — From 2014 to 2017, Schneider was the chair of ICANN’s Governmental Advisory Committee (GAC) and in this role negotiated …
S2
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah:Sedgwick, Shomael? Hi. Hi. Yeah? You guys hear me? Yes, we can hear you. Awesome. Is now a good time to st…
S3
State of play of major global AI Governance processes — Introduction:So, I would like to invite up the next panel now, which is the state of play of major global AI governance …
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — “And we fully agree that we need to develop and use AI in a way that everyone in the world can benefit from the potentia…
S5
Day 0 Event #59 The 1st international treaty on AI and Human Rights — – Thomas SCHNEIDER: Chair/moderator of the discussion Mr Thomas SCHNEIDER: Okay. And I think it’s an important stateme…
S6
Lightning Talk #22 Eurodig Inviting Global Stakeholders — – **Thomas Schneider** – Swiss ambassador, President of the EuroDIG Support Association – **Thomas** (different from Th…
S7
Open Forum #33 Open Consultation Process Meeting for WSIS Forum 2025 — – Thomas Schneider – Ambassador of Switzerland 1. Thomas Schneider’s call to concentrate resources and build on the WSI…
S8
Unpacking the High-Level Panel’s Report on Digital Cooperation: Geneva policy experts propose action plan — Referring to the contributions, Amb. Thomas Schneider, Head of International Relations, Swiss Federal Office of Communic…
S9
Artificial Intelligence & Emerging Tech — Efforts to coordinate in the development of AI regulatory frameworks were deemed essential. It was suggested that instea…
S10
WS #98 Towards a global, risk-adaptive AI governance framework — Regional perspectives were shared, with Sulafa Jabarty from ICC Saudi Arabia noting heavy investment in AI and digital t…
S11
Main Session 2: The governance of artificial intelligence — Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a n…
S12
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-amb-thomas-schneider — And we have put engines in machines to produce food or other goods automatically. And we do not expect one single instit…
S13
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S14
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S15
AI Governance Dialogue: Presidential address — Ettore Balestrero: On behalf of His Holiness Pope Leo XIV, I would like to extend his cordial greetings to all participa…
S16
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Reiterate and check agreement on how to respect human dignity, and being innovative while respecting rights This repres…
S17
Democratizing AI: Open foundations and shared resources for global impact — ## Introduction and Switzerland’s Strategic Position Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists….
S18
WSIS+20 Forum High-Level Event: Open Consultation Process Meeting | IGF 2023 Open Forum #4 — In conclusion, ICTs and digital technologies have become vital tools for the future of humanity. Their potential to acce…
S19
Keynotes — O’Flaherty emphasizes that we are not operating in a legal vacuum when it comes to digital governance. He argues that th…
S20
Comprehensive Report: European Approaches to AI Regulation and Governance — International Cooperation and Standards The Council of Europe Convention establishes general principles similar to Huma…
S21
From principles to practice: Governing advanced AI in action — The conversation highlighted the urgent need for governance frameworks that can keep pace with technological development…
S22
Closure of the session — Venezuela:Mr. Chairman, thank you. The Bolivarian Republic of Venezuela adheres to a paper that was already presented by…
S23
Closing remarks – Charting the path forward — Need for coherent and interoperable policy frameworks to prevent fragmentation while providing clear policy direction th…
S24
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 2 — Advocates for Existing Norms:Japan, Republic of Korea, Canada, and the United Kingdom emphasized implementing the existi…
S25
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:I think my fellow panellists have shared many good ideas here. I think one thing that works well in the…
S26
Pre 9: Discussion on the outcomes of the Global Multistakeholder High Level Conference on Governance of Web 4.0 and Virtual Worlds — Legal and regulatory | Infrastructure | Development Multi-stakeholder approach is essential for Web 4.0 governance Sta…
S27
Webinar session — Need to build upon existing foundation rather than starting from scratch
S28
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Finally, consensus points towards enhancing existing frameworks rather than introducing new ones on policy making and mu…
S29
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Additional binding and non-binding norms will be needed to complement the Convention
S30
Closure of the session — Belgium advocates a victim-centric approach for committees to assist victims of cyber incidents, aiming to deepen unders…
S31
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 part 2 — 7. Debate over Legally Binding vs. Non-Binding Norms Cuba: in favour of the development, within the context of the Uni…
S32
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Importance of hearing various perspectives during policy formulation Larissa Zutter stands out as a senior AI policy ad…
S33
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The digital future must be built with equity, with ethics and, above all, with solidarity between all nations. That is w…
S34
Ethics and AI | Part 5 — Recognizing that activities within the lifecycle of artificial intelligence systems may offer unprecedented opportunitie…
S35
Dedicated stakeholder session — Diplo Foundation:Mr. Chair, distinguished delegates and colleagues, my name is Vladimir Radunovic. I represent Diplo Fou…
S36
WS #266 Empowering Civil Society: Bridging Gaps in Policy Influence — Kenneth Harry Msiska: All right, I welcome you all to this session, Session Number 266 on Empowering Civil Society, Brid…
S37
State of play of major global AI Governance processes — The speaker advocates for a nuanced perspective on AI governance, drawing a parallel with the multifaceted regulation of…
S38
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:We are approaching the end of this session, and I’d just like to maybe close with one remark that maybe showed…
S39
Main Topic 3 – Innovation and ethical implication  — Thousands of technical norms exist for engines.
S40
Main Session 2: The governance of artificial intelligence — Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a n…
S41
WS #162 Overregulation: Balance Policy and Innovation in Technology — Key issues addressed included the role of AI in combating child sexual abuse material (CSAM), the importance of human ri…
S42
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Need for different levels of harmonization depending on context
S43
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — Reiterate and check agreement on how to respect human dignity, and being innovative while respecting rights The recomme…
S44
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S45
Open Forum #33 Building an International AI Cooperation Ecosystem — – Qi Xiaoxia- Sajid Rahman Ethical Considerations and Inclusivity Human rights principles | Children rights | Privacy …
S46
Multistakeholder Partnerships for Thriving AI Ecosystems — Both speakers emphasize that technology must be made accessible and available to all, not concentrated in the hands of a…
S47
Welcome Address — The speech emphasizes that with proper direction, ethical frameworks, and global cooperation, artificial intelligence ca…
S48
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Already now, we are approached by many governmental and other stakeholders that share their ideas with us about what the…
S49
Democratizing AI: Open foundations and shared resources for global impact — ## Introduction and Switzerland’s Strategic Position Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists….
S50
WSIS+20 Forum High-Level Event: Open Consultation Process Meeting | IGF 2023 Open Forum #4 — In conclusion, ICTs and digital technologies have become vital tools for the future of humanity. Their potential to acce…
S51
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Ian Barber:Hope everyone’s doing well. Thank you so much for joining this session. One of the many this week on AI and A…
S52
Pre 2: The Council of Europe Framework Convention on AI and Guidance for the Risk and Impact Assessment of AI Systems on Human Rights, Democracy and Rule of Law (HUDERIA) — Martin Boteman raised an important question about achieving optimal balance between investing in AI innovation and makin…
S53
Comprehensive Report: European Approaches to AI Regulation and Governance — International Cooperation and Standards The Council of Europe Convention establishes general principles similar to Huma…
S54
State of play of major global AI Governance processes — The speaker advocates for a nuanced perspective on AI governance, drawing a parallel with the multifaceted regulation of…
S55
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S56
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: Yeah, that’s a very good question and yeah, it touches on the last points I was making at the end the…
S57
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/1/OEWG 2025 — Canada: Thank you, Chair. As the UK noted, we mark the 10th anniversary of the UN norms, first established within the…
S58
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/ part 2 — High level of consensus on core operational issues, with main disagreements centered on the scope of new norms developme…
S59
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 part 2 — 7. Debate over Legally Binding vs. Non-Binding Norms Ambassador Gafoor noted a disconnect between the rapidly evolving …
S60
Closure of the session — Belgium advocates a victim-centric approach for committees to assist victims of cyber incidents, aiming to deepen unders…
S61
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — The tone was consistently collaborative, optimistic, and forward-looking throughout the session. Delegates maintained a …
S62
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S63
Opening Ceremony — Technology must respect human dignity and be guided by shared values, with humans at the center
S64
Impact the Future – Compassion AI | IGF 2023 Town Hall #63 — The analysis highlights the role of technology in historical transformations. Throughout history, technology has played …
S65
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — The speaker describes AI as a technology that expands human cognitive capacity, likening its impact to the physical ampl…
S66
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — Mitsch expresses concern that unlike the industrial revolution where machine distribution was limited by physical constr…
S67
Impact of the Rise of Generative AI on Developing Countries | IGF 2023 Town Hall #29 — Released in the fall of 2022, ChatGPT continues to exert a significant influence on the global landscape, not only due t…
S68
Press Briefing by HMIT Ashwani Vaishnav on AI Impact Summit 2026 l Day 5 — Hi, sir. Shubhan from the Economic Times. I understand that the declaration will be coming tomorrow, and as you mentione…
S69
WSIS prepares for Geneva as momentum builds for impactful digital governance — As preparations intensify for the World Summit on the Information Society (WSIS+20) high-level event, scheduled for 7–11…
S70
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Dafna Feinholz: Okay, good morning, good morning to all. Recording in progress. Thank you very much, be very welcome and…
S71
Newcomers Session | IGF 2023 — Anja Gengo:Ladies and gentlemen, good morning. I hope you can hear me well. So, to everyone who has joined us online fro…
S72
High-Level Session 4: From Summit of the Future to WSIS+ 20 — Mohammed Saud Al-Tamimi The IGF was cited as an example of a successful multi-stakeholder platform for internet governa…
S73
What is it about AI that we need to regulate? — What next for the Global Dialogue on AI Governance? The Global Dialogue on AI Governance is currently under development w…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Thomas Schneider
10 arguments · 184 words per minute · 1721 words · 558 seconds
Argument 1
AI must benefit all humanity, respecting dignity, autonomy, and the planet (Thomas Schneider)
EXPLANATION
Schneider emphasizes that AI development and deployment should serve the entire global population, safeguarding human dignity and autonomy while also protecting the planet. He frames this as a universal ethical imperative for AI governance.
EVIDENCE
He states that AI should be developed and used so that everyone in the world can benefit, including economic and societal progress for all, and that this must be done while respecting human dignity, autonomy, and the planet as the basis for life [4-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider’s call for AI that benefits everyone while upholding human dignity and autonomy is echoed in his keynote where he stresses developing AI so that “everyone in the world can benefit” and that respect for dignity and autonomy must be maintained [S4].
MAJOR DISCUSSION POINT
Inclusive, human‑centred AI vision
Argument 2
AI’s transformative potential should raise quality of life globally, not just for a few (Thomas Schneider)
EXPLANATION
Schneider argues that the unprecedented power of AI should be harnessed to improve living standards worldwide, rather than concentrating benefits among a limited elite. He likens AI’s impact to historic breakthroughs such as the printing press and the internet.
EVIDENCE
He describes AI’s potential as comparable to the printing press, radio, television, internet, and combustion engines, and insists that this potential must be used to raise, not lower, the quality of life for all people and not just a few [13-14].
MAJOR DISCUSSION POINT
Transformative potential for global well‑being
Argument 3
Switzerland will host the next summit to meaningfully contribute to global AI governance, not as a showcase (Thomas Schneider)
EXPLANATION
Schneider states that Switzerland’s motivation for organizing the 2027 Geneva AI Summit is to make a substantive contribution to AI governance rather than to stage a publicity event. The hosting is presented as an act of responsibility and leadership.
EVIDENCE
He announces that Switzerland is proud to host the next AI Summit in Geneva in 2027 and clarifies that the Swiss motivation is “not to make a show, it is to substantially and meaningfully contribute to achieving the goal that mankind uses the unprecedented potential of AI for good” [8][12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The announcement that Switzerland will host the 2027 Geneva AI Summit to make a substantial contribution to global AI governance, explicitly stating it is not a publicity show, is documented in the keynote remarks [S4].
MAJOR DISCUSSION POINT
Switzerland’s role and purpose for the Geneva summit
Argument 4
The summit will be shaped collaboratively, incorporating ideas from diverse stakeholders (Thomas Schneider)
EXPLANATION
Schneider invites participants to contribute ideas and stresses that the agenda of the Geneva summit will be co‑created with global stakeholders. He underscores a collaborative, inclusive process rather than a top‑down design.
EVIDENCE
He expresses eagerness to hear ideas from the audience, notes that Switzerland has its own ideas but will discuss and shape the summit together with participants, and promises a Swiss flavor while being constructive [15-18].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider notes that many governmental and other stakeholders are already sharing ideas for the summit and that the agenda will be co-created with worldwide participants, highlighting a collaborative approach [S4].
MAJOR DISCUSSION POINT
Collaborative agenda setting for the summit
Argument 5
Build on existing platforms (UN IGF, AI for Good, UNESCO, OECD, etc.) rather than reinventing the wheel (Thomas Schneider)
EXPLANATION
Schneider proposes leveraging established multistakeholder forums and initiatives to avoid duplication and to build on proven mechanisms. He lists several global platforms that can be integrated into the summit’s work.
EVIDENCE
He enumerates existing dialogue platforms such as the UN Internet Governance Forum, AI for Good Summit, UNESCO, OECD, GPI, and previous AI summit outcomes, emphasizing that the new summit will build on these rather than reinvent processes [20-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He stresses that the new summit will build on established forums such as the UN Internet Governance Forum, AI for Good, UNESCO and OECD, avoiding duplication of existing mechanisms [S4][S9].
MAJOR DISCUSSION POINT
Utilising existing governance mechanisms
Argument 6
Facilitate participation of less‑resourced communities through partnerships like the Diplo Foundation (Thomas Schneider)
EXPLANATION
Schneider highlights the need to ensure that under‑resourced groups can navigate the complex AI governance ecosystem. He mentions collaboration with the Diplo Foundation and the Geneva Internet Platform to provide orientation and a voice for these communities.
EVIDENCE
He states that, with the help of longstanding partners from the Diplo Foundation and the Geneva Internet Platform, the summit will facilitate orientation for less-resourced communities so they know where to raise their voice and be heard [23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider highlights collaboration with the Diplo Foundation and the Geneva Internet Platform to help under-resourced communities find their voice within the AI governance ecosystem [S4].
MAJOR DISCUSSION POINT
Inclusivity for under‑resourced stakeholders
Argument 7
Governance of AI should mirror the layered technical, legal, and societal norms developed for engines over 200 years (Thomas Schneider)
EXPLANATION
Schneider draws an analogy between the historical governance of engine technologies and the emerging governance needs for AI. He suggests that AI will require a similarly complex, multi‑layered framework of technical, legal, and societal standards.
EVIDENCE
He references the 200-year history of governing combustion engines, noting the development of thousands of technical, legal, and societal norms, infrastructure regulations, liability requirements, and protective instruments, and argues AI needs comparable layered governance [27-36].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He draws an analogy to the centuries-long development of technical, legal and societal norms governing combustion engines, arguing AI will need a similarly layered framework [S4].
MAJOR DISCUSSION POINT
Historical analogy for AI governance
Argument 8
Different sectors (e.g., aviation vs. automotive) illustrate the need for context‑specific yet harmonised regulation (Thomas Schneider)
EXPLANATION
Schneider uses the aviation and automotive sectors as examples to show that some industries achieve global harmonisation while others remain fragmented, indicating that AI regulation must balance global standards with sector‑specific flexibility.
EVIDENCE
He points out that the airline industry is highly harmonised globally, whereas car regulations vary by country, illustrating the need for context-specific yet interoperable governance approaches [37-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for sector-specific but interoperable regulatory approaches is reinforced by discussions on varied sectoral regulation in AI governance panels [S11].
MAJOR DISCUSSION POINT
Sector‑specific versus global harmonisation
Argument 9
The Vilnius Convention provides a principle‑based, flexible framework for AI, human rights, democracy, and rule of law (Thomas Schneider)
EXPLANATION
Schneider presents the Vilnius Convention as a foundational, principle‑based instrument that can guide AI governance worldwide, aligning AI development with human rights, democratic values, and the rule of law.
EVIDENCE
He highlights his role in leading negotiations of the Vilnius Convention, describing it as a principle-based framework applicable beyond Europe to any country that values human rights, democracy, and the rule of law, enabling societies to innovate responsibly [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider describes the Vilnius Convention as a principle-based framework applicable to any country valuing human rights, democracy and the rule of law, providing a flexible global reference for AI governance [S4].
MAJOR DISCUSSION POINT
Vilnius Convention as a global AI governance framework
Argument 10
The Convention aims for interoperable national implementations and will be complemented by additional binding and non‑binding norms (Thomas Schneider)
EXPLANATION
Schneider explains that the Vilnius Convention allows flexibility for states to embed its principles within existing legal systems, promoting interoperability, while acknowledging the need for further sector‑specific norms to ensure coherence.
EVIDENCE
He notes that the Convention leaves leeway for states to embed principles, enabling many countries to become parties with interoperable frameworks, and that additional binding and non-binding norms will complement it to maintain logical and spiritual coherence [49-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He notes that the Convention allows states leeway to embed its principles while additional binding and non-binding norms will be needed to ensure coherence across jurisdictions [S4].
MAJOR DISCUSSION POINT
Interoperability and complementarity of norms
Agreements
Agreement Points
AI must benefit all humanity, respecting dignity, autonomy, and the planet
Speakers: Thomas Schneider
AI must benefit all humanity, respecting dignity, autonomy, and the planet (Thomas Schneider)
Schneider emphasizes that AI development and deployment should serve the entire global population, safeguarding human dignity and autonomy while protecting the planet as the basis for life [4-6].
POLICY CONTEXT (KNOWLEDGE BASE)
This principle aligns with the human-rights-centered AI frameworks emphasized in recent multilateral discussions, notably the focus on protecting human rights, democracy and the rule of law in AI governance [S34] and the call for equity and solidarity among nations [S33]. It also reflects the broader human-rights framing adopted at IGF sessions on AI and democracy [S38].
AI’s transformative potential should raise quality of life globally, not just for a few
Speakers: Thomas Schneider
AI’s transformative potential should raise quality of life globally, not just for a few (Thomas Schneider)
He likens AI to historic breakthroughs and insists its power must be used to improve living standards for everyone, not a limited elite [13-14].
POLICY CONTEXT (KNOWLEDGE BASE)
The statement echoes the equity-focused narrative in the Leaders’ Plenary urging AI to serve all nations and improve global quality of life [S33] and mirrors concerns raised in over-regulation debates about ensuring inclusive benefits from AI innovation [S41].
The summit will be shaped collaboratively with ideas from diverse stakeholders
Speakers: Thomas Schneider
The summit will be shaped collaboratively, incorporating ideas from diverse stakeholders (Thomas Schneider)
Schneider invites participants to contribute ideas and states that the agenda will be co-created with global stakeholders, while retaining a Swiss flavour [15-18].
POLICY CONTEXT (KNOWLEDGE BASE)
This collaborative approach is consistent with the multi-stakeholder model championed in recent forums, where specific challenges are addressed through joint input and resource allocation [S25] and where strengthening existing multistakeholder institutions is deemed essential [S26][S27][S28][S32].
Build on existing multistakeholder platforms rather than reinventing the wheel
Speakers: Thomas Schneider
Build on existing platforms (UN IGF, AI for Good, UNESCO, OECD, etc.) rather than reinventing the wheel (Thomas Schneider)
He lists established forums (UN IGF, AI for Good, UNESCO, OECD, etc.) as foundations for the new summit, avoiding duplication of processes [20-21].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation mirrors calls to reinforce existing internet-governance bodies instead of creating new fragmented systems, as articulated in the outcomes of the Global Multistakeholder High Level Conference [S26] and the WSIS High-Level Dialogue consensus to enhance, not replace, current frameworks [S28].
Facilitate participation of less‑resourced communities through partnerships like the Diplo Foundation
Speakers: Thomas Schneider
Facilitate participation of less‑resourced communities through partnerships like the Diplo Foundation (Thomas Schneider)
With help from the Diplo Foundation and the Geneva Internet Platform, the summit will orient and give a voice to under-resourced groups [23].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the Diplo Foundation’s mandate to provide capacity-building for small and developing countries and diverse civil-society actors [S35], and with broader initiatives aimed at bridging policy-influence gaps for civil society [S36].
Governance of AI should mirror the layered technical, legal, and societal norms developed for engines over 200 years
Speakers: Thomas Schneider
Governance of AI should mirror the layered technical, legal, and societal norms developed for engines over 200 years (Thomas Schneider)
He draws an analogy to the centuries-long evolution of engine regulation, arguing AI needs a comparable multi-layered framework of technical, legal and societal standards [27-36].
POLICY CONTEXT (KNOWLEDGE BASE)
The analogy is supported by expert commentary that AI governance can learn from the multifaceted, layered regulation of engines rather than seeking a single overarching regime [S37], and by the observation that thousands of technical norms already exist for engines, offering a reference model [S39].
Different sectors (e.g., aviation vs. automotive) illustrate the need for context‑specific yet harmonised regulation
Speakers: Thomas Schneider
Different sectors (e.g., aviation vs. automotive) illustrate the need for context‑specific yet harmonised regulation (Thomas Schneider)
He notes that airline regulation is globally harmonised while car regulation varies, showing AI governance must balance global standards with sector-specific flexibility [37-39].
POLICY CONTEXT (KNOWLEDGE BASE)
This need for sector-specific but coherent regulation is reflected in discussions on AI governance across finance, agriculture, healthcare and other domains, emphasizing tailored approaches while maintaining overall consistency [S40], and in calls for context-specific rules that still achieve harmonisation [S41][S42].
The Vilnius Convention provides a principle‑based, flexible framework for AI, human rights, democracy and rule of law
Speakers: Thomas Schneider
The Vilnius Convention provides a principle‑based, flexible framework for AI, human rights, democracy and rule of law (Thomas Schneider)
Schneider highlights his role in negotiating the Vilnius Convention, describing it as a principle-based instrument applicable beyond Europe to any country valuing human rights, democracy and the rule of law [46-48].
The Convention aims for interoperable national implementations and will be complemented by additional binding and non‑binding norms
Speakers: Thomas Schneider
The Convention aims for interoperable national implementations and will be complemented by additional binding and non‑binding norms (Thomas Schneider)
He explains the Convention leaves leeway for states to embed its principles, promoting interoperability, while acknowledging the need for further sector-specific norms to ensure coherence [49-52].
POLICY CONTEXT (KNOWLEDGE BASE)
The statement corresponds with remarks that the Vilnius Convention will require both binding and non-binding instruments to be effective [S29] and with broader debates on complementing international principles with legally binding norms to fill regulatory gaps [S31].
Switzerland will host the next summit to meaningfully contribute to global AI governance, not as a showcase
Speakers: Thomas Schneider
Switzerland will host the next summit to meaningfully contribute to global AI governance, not as a showcase (Thomas Schneider)
He announces the 2027 Geneva AI Summit and stresses that Switzerland’s motivation is substantive contribution rather than a publicity show [8][12].
Similar Viewpoints
All listed arguments stem from Schneider’s consistent emphasis on inclusive, human‑centred AI governance, collaborative processes, leveraging existing mechanisms, and a principled yet flexible legal framework [4-6][13-14][8,12][15-18][20-21][23][27-36][37-39][46-48][49-52].
Speakers: Thomas Schneider
AI must benefit all humanity, respecting dignity, autonomy, and the planet (Thomas Schneider)
AI’s transformative potential should raise quality of life globally, not just for a few (Thomas Schneider)
Switzerland will host the next summit to meaningfully contribute to global AI governance, not as a showcase (Thomas Schneider)
The summit will be shaped collaboratively, incorporating ideas from diverse stakeholders (Thomas Schneider)
Build on existing platforms (UN IGF, AI for Good, UNESCO, OECD, etc.) rather than reinventing the wheel (Thomas Schneider)
Facilitate participation of less‑resourced communities through partnerships like the Diplo Foundation (Thomas Schneider)
Governance of AI should mirror the layered technical, legal, and societal norms developed for engines over 200 years (Thomas Schneider)
Different sectors (e.g., aviation vs. automotive) illustrate the need for context‑specific yet harmonised regulation (Thomas Schneider)
The Vilnius Convention provides a principle‑based, flexible framework for AI, human rights, democracy and rule of law (Thomas Schneider)
The Convention aims for interoperable national implementations and will be complemented by additional binding and non‑binding norms (Thomas Schneider)
Unexpected Consensus
Using the historical governance of combustion engines as a direct analogy for AI governance
Speakers: Thomas Schneider
Governance of AI should mirror the layered technical, legal, and societal norms developed for engines over 200 years (Thomas Schneider)
While many AI discussions focus on novel digital frameworks, Schneider explicitly aligns AI governance with two centuries of engine regulation, a comparison not commonly highlighted in other AI policy debates [27-36].
POLICY CONTEXT (KNOWLEDGE BASE)
This direct analogy is reinforced by analyses that propose AI governance draw lessons from the long-standing regulation of engines rather than pursuing a singular institution [S37], and by references to the extensive technical norms that have historically governed engine safety and performance [S39].
Explicit call for both binding and non‑binding norms to complement the Vilnius Convention
Speakers: Thomas Schneider
The Convention aims for interoperable national implementations and will be complemented by additional binding and non‑binding norms (Thomas Schneider)
The dual emphasis on binding and non-binding instruments, while maintaining coherence, reflects a nuanced consensus that goes beyond a single-track regulatory approach [49-52].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sessions have highlighted the necessity of a hybrid normative architecture, with binding norms filling legal voids and non-binding norms offering flexibility, as noted in discussions about the Convention’s implementation [S29] and in debates on legally binding versus non-binding norms [S31][S30].
Overall Assessment

The transcript shows strong internal consensus around an inclusive, human‑rights‑based AI vision, collaborative summit design, leveraging existing multistakeholder platforms, and a layered, principle‑based regulatory approach anchored by the Vilnius Convention.

High consensus among the speaker’s statements, indicating a clear, unified direction for the upcoming Geneva AI Summit and broader global AI governance efforts.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains remarks only from Thomas Schneider; no other speakers are recorded, and therefore no contrasting positions or debates are observable. All statements reflect a single, coherent set of arguments about inclusive AI governance, collaborative summit design, leveraging existing platforms, and the Vilnius Convention. Consequently, there is no demonstrable disagreement among participants in this excerpt.

None – the absence of multiple speakers means no conflict of viewpoints, implying a unified stance on the discussed AI governance agenda.

Takeaways
Key takeaways
AI should be developed and used in an inclusive, human‑centred way that benefits all of humanity while respecting dignity, autonomy and the planet.
Switzerland will host the next AI Impact Summit in Geneva in 2027, aiming to make a substantive contribution to global AI governance rather than a mere showcase.
The summit’s agenda and outcomes will be co‑created with a broad range of stakeholders from the global north, south, east and west.
Governance should build on existing multilateral platforms (UN IGF, AI for Good, UNESCO, OECD, etc.) and avoid duplicating efforts.
Special effort will be made to enable participation of less‑resourced communities through partners such as the Diplo Foundation and the Geneva Internet Platform.
Historical experience with the regulation of engine technologies provides a model: layered technical, legal and societal norms that are both harmonised where possible and context‑specific where needed.
The Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law offers a principle‑based, flexible framework that can be made interoperable across jurisdictions and will be complemented by additional sector‑specific norms.
Resolutions and action items
Continue a joint effort to identify gaps in current global and regional AI governance frameworks.
Engage all interested parties to shape the thematic focus and concrete objectives of the 2027 Geneva AI Summit.
Facilitate the inclusion of less‑resourced communities by leveraging the Diplo Foundation and the Geneva Internet Platform for outreach and capacity‑building.
Advance the ratification and entry into force of the Vilnius Convention and promote its adoption by a wide set of countries.
Develop and propose additional binding and non‑binding norms, including sector‑specific instruments, that are coherent with the Vilnius Convention.
Use the period leading up to the summit to propose pragmatic, workable steps toward shared AI governance goals.
Unresolved issues
The precise thematic focus and concrete work‑program for the Geneva Summit have not been decided.
How to ensure effective interoperability of national implementations of the Vilnius Convention while preserving legal diversity.
Funding mechanisms and concrete support structures for the participation of less‑resourced communities remain undefined.
Specific sector‑specific norms and the balance between binding and non‑binding instruments have not been detailed.
Timeline and process for the final ratification of the Vilnius Convention are still open questions.
Suggested compromises
Leverage existing international forums and standards rather than creating new parallel structures.
Adopt the flexible, principle‑based approach of the Vilnius Convention, allowing national leeway while aiming for interoperable outcomes.
Combine harmonised regulation for globally uniform sectors (e.g., aviation) with context‑specific rules for sectors with greater local variation (e.g., automotive).
Blend binding legal instruments with non‑binding technical standards to accommodate differing capacities of countries.
Thought Provoking Comments
AI may be at least as transformative as the invention of the printing press, radio, television, the internet, and the combustion engine, and must be used to raise—not lower—the quality of life for all people worldwide.
Frames AI as a historic, paradigm‑shifting technology, setting a high‑stakes narrative that elevates the moral imperative of governance beyond incremental regulation.
This comparison reframes the discussion from routine policy tweaks to a grand, civilizational challenge, prompting participants to consider long‑term, systemic safeguards and to view AI governance as a foundational societal contract rather than a technical add‑on.
Speaker: Thomas Schneider
We will not be able to achieve anything on our own; we must collaborate with all countries and stakeholders from the global north, south, east and west, first identifying areas of shared willingness before moving to pragmatic, workable steps.
Emphasizes inclusive, multilateral collaboration and the need to start with common ground, challenging any top‑down or siloed approaches.
Shifts the tone from a unilateral Swiss initiative to a call for shared ownership, encouraging participants—especially from less‑resourced regions—to voice expectations and propose joint projects, thereby broadening the agenda to include equity and partnership.
Speaker: Thomas Schneider
We will try not to reinvent the wheel or duplicate existing processes, but instead build on platforms such as the UN Internet Governance Forum, AI for Good Summit, ITU, UNESCO, OECD, GPI, and other regional initiatives.
Advocates leveraging established governance ecosystems, highlighting efficiency and continuity rather than creating parallel structures.
Redirects the conversation toward integration and coordination, prompting participants to map current initiatives, identify overlaps, and propose mechanisms for interoperability, which deepens the analysis of existing institutional landscapes.
Speaker: Thomas Schneider
The Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law provides a principle‑based framework that, while rooted in European values, offers leeway for states to embed these principles within their own legal traditions, aiming for interoperable rather than identical regulations.
Introduces a concrete, near‑term legal instrument that balances universal human‑rights standards with national flexibility, challenging the notion that global AI norms must be uniform.
Creates a pivot toward concrete policy discussion, inviting scrutiny of the Convention’s provisions, potential ratification pathways, and how it could serve as a template for other regions, thereby moving the dialogue from abstract ideals to actionable legal frameworks.
Speaker: Thomas Schneider
No single institution or instrument can capture the full breadth of AI’s transformative power; we must learn to live with a certain complexity in governance, much like we have done with the regulation of engines over the past 200 years.
Uses a historical analogy to normalize complexity and multi‑layered regulation, challenging any expectation of a single, all‑encompassing AI authority.
Introduces a nuanced perspective that legitimizes a mosaic of technical, legal, and societal norms, steering the conversation toward discussions of layered governance models, sector‑specific standards, and the role of both binding and non‑binding instruments.
Speaker: Thomas Schneider
We will work with the Diplo Foundation and the Geneva Internet Platform to help less‑resourced communities navigate the complex governance ecosystem so that their voices are heard.
Highlights the importance of capacity‑building and equitable participation, bringing attention to power asymmetries that often marginalize the Global South.
Triggers a shift toward inclusion and capacity‑building topics, prompting participants to suggest concrete support mechanisms, outreach strategies, and funding models to ensure meaningful involvement of under‑represented stakeholders.
Speaker: Thomas Schneider
Overall Assessment

Thomas Schneider’s remarks collectively transformed a ceremonial opening into a strategic roadmap. By framing AI as a historic, transformative force, he elevated the stakes of the dialogue. His insistence on multilateral collaboration, building on existing platforms, and acknowledging governance complexity redirected the conversation from abstract aspirations to concrete, inclusive, and layered approaches. The introduction of the Vilnius Convention served as a tangible anchor, moving participants toward policy‑focused deliberations. Together, these pivotal comments shaped the discussion’s trajectory, fostering a shift from unilateral ambition to a shared, pragmatic, and globally representative governance agenda.

Follow-up Questions
What should be the focus and agenda of the Geneva AI Summit in 2027?
Determining the summit’s thematic priorities is essential to align stakeholders, allocate resources, and ensure the event addresses the most pressing AI governance challenges.
Speaker: Thomas Schneider
How can we effectively involve and support less‑resourced communities in the AI governance ecosystem?
Ensuring equitable participation prevents marginalisation, enriches the dialogue with diverse perspectives, and helps these communities have their voices heard in policy formation.
Speaker: Thomas Schneider
Which specific gaps exist in current global and regional AI governance frameworks that need to be addressed before the Geneva Summit?
Identifying gaps allows targeted work on missing standards, legal instruments, or technical norms, making the summit’s outcomes more concrete and actionable.
Speaker: Thomas Schneider
How can the Vilnius Convention be made interoperable with diverse national legal and regulatory traditions?
Interoperability is crucial for widespread adoption; understanding how the convention can fit within varied legal systems will facilitate ratification and implementation.
Speaker: Thomas Schneider
What sector‑specific binding and non‑binding norms are required to complement the Vilnius Convention?
Different sectors (health, finance, transportation, etc.) face unique AI risks; tailored norms ensure comprehensive coverage and practical relevance.
Speaker: Thomas Schneider
How can we avoid duplication of existing AI governance processes and build on current platforms such as IGF, AI for Good, UNESCO, OECD, etc.?
Leveraging existing initiatives saves resources, promotes coherence, and strengthens the global governance ecosystem rather than fragmenting it.
Speaker: Thomas Schneider
What lessons from the historical governance of combustion engines can be applied to AI governance?
Drawing parallels with past technological governance can provide proven governance models, risk‑mitigation strategies, and institutional designs applicable to AI.
Speaker: Thomas Schneider
What technical, legal, and societal frameworks and norms are needed to ensure AI is used for good across different contexts?
Comprehensive frameworks are required to balance innovation with protection of human dignity, autonomy, and planetary wellbeing in varied cultural and economic settings.
Speaker: Thomas Schneider
What pragmatic structures can be established to foster trustworthy international cooperation on AI?
Creating concrete mechanisms (e.g., joint working groups, verification tools, dispute‑resolution processes) will translate dialogue into reliable, collaborative action.
Speaker: Thomas Schneider

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session featured Hemant Taneja, CEO of General Catalyst, who framed the discussion around “responsible innovation” and the need for capital to align with societal conscience [1-6]. He opened by thanking Prime Minister Modi for convening AI leaders and emphasized that AI should be designed for human centricity and empowerment [7-10]. Taneja argued that the greatest opportunity for capitalism today is “global resilience,” asserting that artificial intelligence is the primary engine for national resilience across sectors such as healthcare, data, defense, and energy infrastructure [12-21]. He highlighted India’s position as a leading growth market, noting that AI’s deflationary nature can address massive challenges in healthcare and education for over a billion people, thereby offering solutions for the planet [22-27]. According to Taneja, India can achieve this by “leapfrogging” existing digital paradigms, building on past successes like UPI and Aadhaar and leveraging recent infrastructure investments [28-34]. He also pointed out that India’s young demographic provides a vast pool of potential AI-augmented workers, amplifying the impact of productivity gains [39-40]. He stressed the importance of a fluid US-India-Europe innovation corridor, open-source collaboration, and a supportive regulatory environment to scale AI in democratic societies [35-38]. Countering fears that AI will displace young workers, Taneja urged India to reject that narrative and instead empower every new entrant to the workforce with AI tools to boost productivity across industries [41-46]. He identified entrepreneurship as the vehicle for AI leadership, citing thriving startups such as Septo, Rafi, and PolicyBazaar Health as examples of Indian firms rebuilding core societal pillars [47-51]. To accelerate this momentum, General Catalyst announced a $5 billion, five-year commitment to the Indian entrepreneurial ecosystem, described as the largest of its kind [52-55]. 
The investment aims to catalyze the creation of next-generation companies that can drive both domestic abundance and global market leadership. Taneja concluded by inviting global partners to build alongside Indian innovators, underscoring the collaborative spirit of the initiative [36-38]. Overall, the discussion positioned AI as a catalyst for India’s economic resilience, demographic advantage, and entrepreneurial growth, contingent on responsible innovation and international collaboration.


Keypoints

AI as the engine of national and global resilience – Taneja frames artificial intelligence as the primary solution for “national resilience” across sectors such as healthcare, defense, energy and data, arguing that it will drive the next wave of growth for the world’s strongest growth market, India [12-22].


India’s structural advantages for AI leadership – He highlights the country’s ability to “leapfrog” by building on past digital breakthroughs (UPI, Aadhaar), massive infrastructure investment, a young demographic, open-source initiatives and a strong US-India partnership that can keep AI innovation flowing in the democratic world [28-34][35-41][36-38].


Entrepreneurship and the startup ecosystem as the catalyst – Taneja stresses that startups are “the most important institutions of the future,” citing examples of Indian AI-driven companies and announcing a $5 billion, five-year fund, the largest of its kind, to accelerate Indian entrepreneurs [47-55].


Rejecting the “AI-takes-jobs” narrative and empowering the workforce – He argues that fears of AI displacing young workers are misplaced; instead, every new entrant to the labour market should be “fully empowered with AI” to boost productivity across the economy [41-46].


Call for international collaboration to scale democratic AI – The speaker urges fluid innovation exchange among the US, India, Europe and the broader Western world so that AI can thrive in a democratic context [36-39][37-38].


Overall purpose/goal


The discussion is a high-level advocacy piece aimed at positioning India as the next global AI hub, encouraging responsible, human-centric innovation, and rallying both domestic entrepreneurs and international investors to commit capital and collaborative support for building resilient, AI-driven industries.


Overall tone


The tone is consistently upbeat, confident and rallying, characterized by optimism about AI’s potential, pride in India’s capabilities, and a persuasive call to action. There is no noticeable shift; the speaker maintains an enthusiastic and forward-looking stance throughout the remarks.


Speakers

Hemant Taneja


– Role/Title: CEO of General Catalyst (venture capital firm)[S1][S2]


– Areas of Expertise: Venture capital, responsible innovation, artificial intelligence, entrepreneurship, national resilience


Speaker 1


– Role/Title: Event moderator/host (introducing the main speaker)[S3][S5]


– Areas of Expertise:


Additional speakers:


(none identified beyond the listed participants)


Full session report: Comprehensive analysis and detailed insights

The session opened with the moderator introducing Hemant Taneja, CEO of General Catalyst, as a leading voice from one of Silicon Valley’s most influential venture-capital firms and a long-standing advocate of “responsible innovation,” framing his perspective as a bridge between capital and conscience before welcoming him to the stage [1-6].


Taneja thanked Prime Minister Narendra Modi for convening the world’s AI thought-leaders and emphasized a design principle of human-centric, empowerment-focused AI [7-10].


He identified “global resilience” as the biggest opportunity in capitalism today, arguing that artificial intelligence is the key tool for driving what he calls national resilience across critical sectors such as healthcare, data, deterrence, defence, and energy infrastructure [12-13][14-18].


Describing India as the world’s strongest growth market, he noted AI’s deflationary nature and argued that applying AI to the massive challenges of healthcare, education, and other public needs for a population of over a billion could yield solutions for the entire planet [12-13][22-23][22-27][S1].


He explained that India can achieve this by “leap-frogging” existing digital paradigms, building on earlier successes such as UPI and Aadhaar and leveraging recent substantial investment in physical infrastructure and vibrant open-source initiatives. He described the U.S.-India corridor as “incredibly interesting,” citing the work being done around open source, and called the “packed silica” announcement critical to fluid innovation flows between the United States, India, Europe, and the broader Western world [28-30][31][32-34][35-38][S15].


Highlighting India’s young demographic, Taneja dismissed the narrative that AI will displace jobs, urging that every new entrant to the workforce be fully empowered with AI, thereby unleashing unprecedented productivity across companies and industries and amplifying economic opportunity worldwide [39-46][S40][S41].


He positioned entrepreneurship as the vehicle for India’s AI leadership, calling start-ups “the most important institutions of the future” and citing Indian AI-driven firms such as Septo, Rafi, and Policy Bazaar Health as exemplars of this transformation [47-50][S31][S32].


To catalyse this momentum, Taneja announced that General Catalyst will commit $5 billion over the next five years to the Indian entrepreneurial ecosystem, the largest single-purpose fund of its kind, reflecting deep belief in Indian entrepreneurs’ capacity to build next-generation companies that will compete globally and generate widespread prosperity. He concluded with an invitation to international partners to “come build with us” [52-55][S39].


Session transcript: Complete transcript of the session
Speaker 1

Ladies and gentlemen, moving on. Our next speaker is from one of Silicon Valley’s most influential venture capital firms, General Catalyst. And he has been among the most vocal advocates for what he calls responsible innovation. The idea that companies building the future also bear the greatest responsibility for its consequences. And well, I must say that his perspective bridges the worlds of capital and conscience. Please welcome the CEO of General Catalyst, Mr. Hemant Taneja.

Hemant Taneja

Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders together in this world. And delivering the message around making sure we shape AI for human centricity. For human centricity. For human empowerment. I think that’s a really important design principle. And stepping up and embracing that and enforcing that as a world leader is exactly what we need today as we work on embedding AI into our society. So the biggest opportunity in capitalism today is what I call global resilience. If you think about the last five, seven years, we have gone through so much on the planet. We’ve had a pandemic. We’ve had wars. We have been learning how to embrace artificial intelligence as an enormous technological shift.

Many, many interesting shocks that have happened to us over the last several years. And the answer to embracing sort of resiliency and delivering transformation is actually artificial intelligence. That is the answer for actually driving what I call national resilience in all the key industries. Whether it’s healthcare. Whether it’s data. Whether it’s deterrence and defense. Whether it’s… scaling of the energy infrastructure so we can deploy AI, all those capabilities to present enormous opportunity and artificial intelligence is the answer for all of them. It’s India’s time to lead when it comes to delivering national resilience. It’s the strongest growth market in the world. And as we have learned over the last few years, when you think about diffusion of AI, growth is an enormous lever for it because it creates opportunity to embrace new technologies and new solutions.

The other thing that’s really interesting is because AI is deflationary by nature, it matches well to what’s required to uplift the opportunities here in India. Solving for needs in healthcare and education and other parts of what we deliver to society, at large, with the complexity of over a billion people, that is, if you can go solve that, you’re going to go solve the problems for the entire planet. So I do think India’s got all the dynamics going for it to lead in using AI to transform different industries. The other thing I would say is the way I expect India to deliver these transformations by leapfrogging. If you go back to the digital infrastructure revolution in India and what we saw with UPI and Aadhaar, the opportunity to completely rethink what the paradigms are going to be in these other industries is what lies ahead.

India has a lot of things going for it when it comes to resiliency and being able to deploy AI. First of all, you’ve got increasing investment in infrastructure. We saw that over the last couple of years. There’s a lot of infrastructure investment. There’s work being done around open source. I think the U.S.-India corridor is incredibly interesting. The packed silica announcement today was an important one. We need to make sure the innovation flows fluidly between US, India, Europe, across all parts of the Western world so that AI can thrive in the democratic world. That is where we want to see this technology come to scale. And it’s got a young demographic. It’s got a lot of potential in terms of being able to deploy a lot of these capabilities.

One topic that is very much top of mind for me is there’s this narrative that artificial intelligence can take the jobs of young people and we need to slow down progress. And my biggest advice on India’s leadership in AI is to reject that narrative and lean into it. I think everybody entering the workforce, and there’s a million Indians that enter the workforce every month. Everybody that enters the workforce is a young person. Everybody that enters the workforce should be fully empowered with AI. Because if you have that kind of productivity behind every single human being, entering the workforce, imagine the productivity we create in every company, in every industry, and how it’s going to unleash the opportunity in the world.

The way India is going to lead in artificial intelligence, from my perspective, is through entrepreneurship. Ultimately, startups are the most important institutions of the future. We’re rebuilding every core pillar of society with new businesses, and India has got an enormous talent pool. So many of you came in for the AI Summit, and we are actively building companies here with many of the entrepreneurs. I think just watching businesses like SEPTO and Rafi and Policy Bazaar Health and others that are transforming these industries, we have great confidence that the Indian entrepreneurs are going to build the next generation companies that not only drive abundance and resilience here in India. but are going to be positioned to be the global leaders in different markets.

So to that end, one of the announcements that I made in our roundtable with Prime Minister Modi yesterday was that we’re increasing our investment. We’re going to be investing $5 billion over the next five years in the Indian entrepreneurial ecosystem. It’s the largest of its kind, and thank you. And it comes from a deep belief that Indian entrepreneurs are going to create some of the most interesting companies of the next generation. So come build with us. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (18)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“Hemant Taneja is the CEO of General Catalyst, a leading Silicon Valley venture‑capital firm and a long‑standing advocate of “responsible innovation” who frames his perspective as a bridge between capital and conscience.”

The knowledge base describes Taneja as CEO of General Catalyst, an advocate for responsible innovation, and emphasizes his focus on bridging capital and conscience in technology development.

Confirmed (high confidence)

“Taneja thanked Prime Minister Narendra Modi for convening the world’s AI thought‑leaders at the summit.”

In the summit transcript Taneja explicitly thanks Prime Minister Modi for arranging the event, confirming this statement.

Confirmed (high confidence)

“AI is the key tool to achieve “global/national resilience” across critical sectors such as healthcare, data, deterrence, defence, and energy infrastructure.”

Taneja is quoted saying that artificial intelligence is the answer for driving what he calls national resilience in all key industries, matching the report’s description.

Additional Context (medium confidence)

“India, with a population of over a billion, can leverage AI’s deflationary nature to solve massive public‑sector challenges (healthcare, education) and generate solutions with worldwide impact.”

The knowledge base notes India’s 1.4 billion‑plus population and describes AI as a “delta multiplier” that can boost the country’s development, providing supporting context, though it does not specifically label AI as “deflationary.”

Confirmed (high confidence)

“India can “leap‑frog” by building on successes such as UPI and Aadhaar, using digital public infrastructure to enhance sovereignty and drive AI innovation.”

The source cites Aadhaar and UPI as examples of digital public infrastructure that strengthen India’s digital sovereignty, confirming the claim about leveraging these platforms.

External Sources (56)
S1
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — -Hemant Taneja: CEO of General Catalyst (venture capital firm), advocate for responsible innovation, focuses on bridging…
S2
Sticking with Start-ups / DAVOS 2025 — – Hemant Taneja: Chief Executive Officer and Managing Director at General Catalyst USA Hemant Taneja and Mohit Bhatnaga…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with…
S7
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — While AI has streamlined and facilitated certain programming tasks, human developers are still required for further deve…
S8
AI Innovation in India — So thank you for making us proud. Very well done. And your presentation? remarkable. Thank you. Thank you very much. Th…
S9
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S10
WS #270 Understanding digital exclusion in AI era — The speaker advocates for a human-centered approach in AI design to ensure inclusivity and accessibility. This approach …
S11
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-hemant-taneja-general-catalyst — Ladies and gentlemen, moving on. Our next speaker is from one of Silicon Valley’s most influential venture capital firms…
S12
Opening Ceremony — **Lucio Adrian Ruiz**, Secretary for the Dicastery for Communication from the Holy See, provided a philosophical perspec…
S13
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S14
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Infrastructure Five layers identified: application, model, chip, infrastructure, and energy. I…
S15
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Gaud explained Google’s rationale for heavy investment in India beyond the obvious market size. India’s young, tech-eage…
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S17
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S18
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — General Catalyst pledged $5 billion investment in Indian entrepreneurial ecosystem over next five years
S19
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S20
Democratizing AI: Open foundations and shared resources for global impact — ## International Collaboration Examples Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank …
S21
Building fair markets in the algorithmic age (The Dialogue) — In conclusion, the analysis underscores the importance of regulating AI and algorithms to address concerns regarding eco…
S22
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S23
AI Governance Dialogue: Steering the future of AI — Infrastructure | Development | Legal and regulatory Martin identifies two critical areas requiring immediate collaborat…
S24
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Hiya, how are you doing? Check, check. Is that better? Cool. Again, hello. Welcome. My name is Chri…
S25
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S26
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2 While both speakers support context-appropriate approaches, there’s an implicit tension between …
S27
AI Governance Dialogue: Presidential address — ## Key Principles and Approaches ## Summit Context and Speakers ### Human-Centered Development ### Summit Background …
S28
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S29
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S30
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, the speakers underscored the importance of addressing ethical challenges in technology development, speci…
S31
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Central to Taneja’s argument is India’s unique positioning for AI leadership. He identified key advantages: India as the…
S32
The Global Power Shift India’s Rise in AI & Semiconductors — -Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semicondu…
S33
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S34
How AI Is Transforming Indias Workforce for Global Competitivene — Social and economic development | Artificial intelligence
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S36
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S37
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Gaud explained Google’s rationale for heavy investment in India beyond the obvious market size. India’s young, tech-eage…
S38
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — India’s technical advantages are substantial. The country’s solar and wind patterns are naturally complementary, providi…
S39
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — General Catalyst pledged $5 billion investment in Indian entrepreneurial ecosystem over next five years
S40
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Almost everyone here was talking about. this distinction between automating and replacing workers versus augmenting them…
S41
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — Contrary to the widespread fear of job displacement, it is highlighted that artificial intelligence and emerging technol…
S42
From India to the Global South_ Advancing Social Impact with AI — This comment directly addresses one of the most anxiety-provoking aspects of AI adoption – job displacement. By framing …
S43
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — This comment honestly addresses the double-edged nature of AI adoption – acknowledging both educational challenges and j…
S44
Democratizing AI: Open foundations and shared resources for global impact — Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists. Nina, thank you for giving me the floor. In the globa…
S45
Building fair markets in the algorithmic age (The Dialogue) — In conclusion, the analysis underscores the importance of regulating AI and algorithms to address concerns regarding eco…
S46
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This comment established the foundational tension for the entire discussion – moving from voluntary to mandatory governa…
S47
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S48
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S49
High Level Dialogue with the Secretary-General — He mentions the potential of artificial intelligence as a tool for development if used equitably.
S50
9821st meeting — Mr. President, artificial intelligence offers a unique opportunity to transform the approach to international peace and …
S51
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — Dr. Reddy challenged conventional thinking by reframing India’s healthcare challenges as competitive advantages. “Health…
S52
WS #43 States and Digital Sovereignty: Infrastructural Challenges — The speaker mentions India’s national ID system (Aadhaar) and payment system (UPI) as examples of DPI enhancing sovereig…
S53
Empowering People with Digital Public Infrastructure — 1. Improved access to services: Hoda Al Khzaimi argued that DPI can reduce inequalities in access to services globally, …
S54
From Innovation to Impact_ Bringing AI to the Public — The discussion concludes with predictions about the pace of transformation. Sharma suggests that the changes will be dra…
S55
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Anil Kumar Lahoti:Thank you, Dana. First of all, I thank ITU for inviting me to this plus 20, and I consider this as my …
S56
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Hemant Taneja
5 arguments · 147 words per minute · 887 words · 361 seconds
Argument 1
AI as a catalyst for national and global resilience
EXPLANATION
Taneja argues that artificial intelligence is the key technology to build resilience at both national and global levels. By applying AI across critical sectors, economies can better withstand shocks and lower the cost of essential services for large populations.
EVIDENCE
He states that AI is the answer for driving national resilience across key industries such as healthcare, data, defence, and energy, highlighting its role in scaling energy infrastructure and other capabilities [18-21]. He further notes that AI is deflationary by nature, which can help uplift a population of over a billion by lowering the cost of essential services like healthcare and education, thereby solving problems for the entire planet [25-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 discusses AI’s deflationary impact and its role in solving large‑scale problems for India’s billion‑plus population, aligning with the view of AI as a resilience driver.
MAJOR DISCUSSION POINT
AI as a resilience driver
Argument 2
India’s strategic advantage to lead in AI adoption
EXPLANATION
Taneja contends that India possesses unique strengths that enable it to leapfrog traditional development pathways and become a global AI leader. Existing digital infrastructure, a young demographic, growing investment, open‑source initiatives, and strong US‑India ties create a fertile environment for scaling AI.
EVIDENCE
He points to India’s previous digital infrastructure breakthroughs, such as UPI and Aadhaar, as examples of how the country can leapfrog traditional development pathways when applying AI [28-30]. He also cites ongoing infrastructure investment, open-source work, the US-India corridor, and the country’s young demographic as factors creating a fertile environment for AI scaling [31-40], and emphasizes the need for fluid innovation flows between the US, India, and Europe [35-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines India’s digital breakthroughs (UPI, Aadhaar) and its status as a strong growth market; S6 notes India’s leadership in the application‑layer of AI and sovereign large language models; S8 describes a supportive ecosystem of government, industry, and academia that enables rapid AI scaling.
MAJOR DISCUSSION POINT
India’s AI advantage
Argument 3
AI’s impact on employment and the need for workforce empowerment
EXPLANATION
Taneja rejects the narrative that AI will eliminate jobs and instead proposes that every new entrant to the workforce should be equipped with AI tools. Empowering the large, young Indian labour force with AI will dramatically increase productivity across companies and industries.
EVIDENCE
He dismisses the claim that AI will take jobs and argues that millions of young Indians entering the workforce each month should be fully empowered with AI to boost productivity in every company and industry [41-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S7 argues that AI augments developers and supports workers rather than displacing them, echoing the claim that AI can boost productivity for new entrants.
MAJOR DISCUSSION POINT
AI and jobs
Argument 4
Entrepreneurship and investment as drivers of AI innovation in India
EXPLANATION
Taneja highlights startups as the primary engines for rebuilding core societal pillars and asserts that Indian talent will create next‑generation, globally competitive AI companies. He backs this claim with a major investment commitment from General Catalyst.
EVIDENCE
He emphasizes that startups are the most important institutions for rebuilding core societal pillars, citing examples such as SEPTO, Rafi, and Policy Bazaar Health as companies transforming industries and expressing confidence that Indian entrepreneurs will build globally competitive firms [47-52]. He then announces General Catalyst’s plan to invest $5 billion over the next five years in the Indian entrepreneurial ecosystem, describing it as the largest of its kind and driven by belief in Indian talent [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reports General Catalyst’s $5 billion investment plan for Indian startups; S11 emphasizes the firm’s focus on responsible innovation, underscoring entrepreneurship as a growth engine.
MAJOR DISCUSSION POINT
Startup investment
Argument 5
AI should be designed and deployed with a human‑centric focus
EXPLANATION
Taneja argues that artificial intelligence must prioritize human empowerment and centricity, positioning this design principle as essential for shaping AI’s role in society. He credits the Prime Minister’s leadership for foregrounding this approach.
EVIDENCE
He thanks Prime Minister Modi for gathering AI thought leaders and for delivering the message to ‘shape AI for human centricity’ and ‘human empowerment’, calling it an important design principle [7-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S10 promotes a human‑centered, rights‑respecting AI design; S1 records Taneja’s acknowledgment of the Prime Minister’s emphasis on shaping AI for human centricity and empowerment.
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Speaker 1
Speaker 1
1 argument, 143 words per minute, 76 words, 31 seconds
Argument 1
Responsible innovation bridges capital and conscience
EXPLANATION
Speaker 1 emphasizes that the upcoming speaker represents a venture capital firm that advocates for responsible innovation, suggesting that investment decisions should be guided by ethical responsibility. This frames the discussion around aligning profit motives with societal impact.
EVIDENCE
The moderator introduces the speaker as coming from General Catalyst, describing him as a vocal advocate for ‘responsible innovation’ and noting that his perspective bridges the worlds of capital and conscience [2-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 describes Taneja as a vocal advocate for ‘responsible innovation’, linking capital with conscience; S11 repeats this framing for General Catalyst.
MAJOR DISCUSSION POINT
Responsible innovation
AGREED WITH
Hemant Taneja
Agreements
Agreement Points
AI should be developed and deployed with a human‑centric, responsible innovation approach
Speakers: Speaker 1, Hemant Taneja
Responsible innovation bridges capital and conscience; AI should be designed and deployed with a human‑centric focus
Both the moderator and the CEO stress that artificial intelligence must prioritize human empowerment and ethical responsibility, linking investment decisions to societal impact and praising the Prime Minister’s emphasis on human-centric AI [2-5][7-10].
POLICY CONTEXT (KNOWLEDGE BASE)
This principle mirrors the policy guidance presented in the “Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance” discussion, which emphasizes human-centric, responsible AI development [S24], and is echoed in the AI Governance Dialogue’s focus on Human-Centered Development as a summit key principle [S27]. The broader Global AI Policy Framework also cites consensus on human-centered approaches across jurisdictions [S29], reinforcing its alignment with emerging international policy norms.
Similar Viewpoints
Both speakers view responsible, human‑centred AI as essential for aligning technological progress with societal values, highlighting a convergence of capital‑driven and policy‑driven perspectives [2-5][7-10].
Speakers: Speaker 1, Hemant Taneja
Responsible innovation bridges capital and conscience; AI should be designed and deployed with a human‑centric focus
Unexpected Consensus
Alignment between a venture‑capital leader and a government‑appointed speaker on human‑centric, responsible AI
Speakers: Speaker 1, Hemant Taneja
Responsible innovation bridges capital and conscience; AI should be designed and deployed with a human‑centric focus
It is notable that a representative of a profit-focused VC firm explicitly echoes the government’s call for human-centric AI, suggesting an unexpected convergence of commercial and public policy priorities on ethical AI design [2-5][7-10].
POLICY CONTEXT (KNOWLEDGE BASE)
The noted alignment reflects the multi-stakeholder consensus highlighted in recent policy dialogues, where public-sector leaders and private-sector innovators are urged to cooperate on human-centric, responsible AI governance, as outlined in the Human-Centric Development pillar of the AI Governance Dialogue [S27] and the inclusive governance emphasis of the Global AI Policy Framework [S29].
Overall Assessment

The discussion shows a clear point of agreement that AI development must be guided by human‑centric, responsible innovation principles, with both speakers emphasizing ethical responsibility alongside economic opportunity.

Moderate consensus limited to the ethical framing of AI; other themes such as investment scale, entrepreneurship, and AI as a resilience driver are presented only by Hemant Taneja, indicating limited broader alignment.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The exchange shows strong alignment on the promise of AI for societal resilience and the need for a human‑centric approach. The only nuanced tension lies in the emphasis: Speaker 1 highlights responsible innovation as a guiding principle, while Hemant stresses aggressive investment, entrepreneurship and leap‑frogging as the engine for that responsibility.

Minimal explicit disagreement; the conversation is largely complementary. The limited divergence suggests that policy discussions can move forward with a shared vision of AI’s benefits, but will need to reconcile the pace of deployment with the depth of ethical oversight.

Partial Agreements
Both speakers emphasize that AI should be harnessed for societal benefit. Speaker 1 frames the upcoming talk as representing a venture‑capital firm that advocates "responsible innovation" linking profit with ethical impact [2-5]. Hemant Taneja stresses that AI is the key technology for building national and global resilience across health, defence, energy and other sectors, and that it must be shaped for human centricity and empowerment [7-11][18-21]. While they share the goal of using AI for the public good, Speaker 1 foregrounds ethical stewardship, whereas Hemant focuses on rapid scaling, investment and entrepreneurship as the primary means to achieve that benefit.
Speakers: Speaker 1, Hemant Taneja
AI as a catalyst for national and global resilience; Responsible innovation bridges capital and conscience
Takeaways
Key takeaways
AI is positioned as a catalyst for national and global resilience, driving transformation in healthcare, defense, energy, and other critical sectors.

AI’s deflationary nature can lower costs of essential services, enabling uplift for a population of over a billion and offering solutions with global impact.

India possesses strategic advantages for AI leadership, including mature digital infrastructure (UPI, Aadhaar), a young demographic, growing infrastructure investment, open‑source initiatives, and a strong US‑India partnership corridor.

The narrative that AI will eliminate jobs is rejected; instead, every new entrant to the workforce should be equipped with AI tools to boost productivity and create abundance.

Entrepreneurship is seen as the primary engine for AI‑driven societal rebuilding; Indian startups are expected to become globally competitive leaders.

General Catalyst commits to investing $5 billion over the next five years in the Indian entrepreneurial ecosystem to accelerate AI‑driven growth.
Resolutions and action items
General Catalyst will allocate $5 billion in funding to Indian startups over the next five years.

Encourage continued flow of innovation between the US, India, Europe, and the broader democratic world to scale AI responsibly.

Promote the empowerment of new workforce entrants with AI tools to enhance productivity across industries.
Unresolved issues
Specific policy frameworks or regulatory measures needed to ensure responsible AI deployment were not detailed.

Concrete plans for integrating AI into defense, healthcare, and energy sectors remain unspecified.

Mechanisms for scaling open‑source AI initiatives and ensuring equitable access were not addressed.

Potential social impacts of rapid AI adoption, beyond the job‑displacement narrative, were not fully explored.
Suggested compromises
None identified
Thought Provoking Comments
The biggest opportunity in capitalism today is what I call global resilience. AI is the answer for actually driving what I call national resilience in all the key industries – healthcare, data, defence, energy, etc.
He reframes AI from a mere technological trend to a strategic pillar of economic and societal stability, linking it directly to the concept of ‘global resilience’. This broadens the conversation beyond product‑level innovation to macro‑level policy and investment considerations.
This statement set the overarching theme of the talk, steering the discussion toward AI as a solution to systemic shocks (pandemic, wars) and prompting listeners to think about AI’s role in national security and infrastructure rather than just commercial applications.
Speaker: Hemant Taneja
AI is deflationary by nature, it matches well to what’s required to uplift the opportunities here in India – solving for needs in healthcare, education, and other sectors for a billion‑plus population.
He introduces an economic lens—AI’s deflationary effect—as a catalyst for affordable large‑scale solutions, a perspective not often highlighted in AI policy debates.
This insight opened a new line of thought about how AI can lower costs and expand access, influencing the audience to consider investment models that leverage AI’s price‑reducing potential, especially in a developing‑country context.
Speaker: Hemant Taneja
There’s a narrative that artificial intelligence can take the jobs of young people and we need to slow down progress. My biggest advice on India’s leadership in AI is to reject that narrative and lean into it. Every new entrant to the workforce should be fully empowered with AI.
He directly challenges the prevalent fear of AI‑driven job loss, flipping it into an argument for empowerment and productivity gains. This contrarian stance provokes re‑evaluation of workforce policy.
This comment marked a turning point, shifting the tone from caution to optimism. It prompted listeners to consider up‑skilling and AI integration in education and corporate training, and set the stage for later discussion of entrepreneurship as the vehicle for this empowerment.
Speaker: Hemant Taneja
India will lead by leap‑frogging – just as we did with UPI and Aadhaar, we can completely rethink paradigms in other industries using AI.
He draws a parallel between past digital successes and future AI deployment, suggesting a roadmap for rapid, disruptive adoption rather than incremental change.
This analogy reinforced the narrative of India as a testbed for bold innovation, encouraging stakeholders to think about policy and infrastructure that enable similar ‘leap‑frog’ opportunities across sectors like energy and defence.
Speaker: Hemant Taneja
We are increasing our investment – $5 billion over the next five years in the Indian entrepreneurial ecosystem – the largest of its kind. This comes from a deep belief that Indian entrepreneurs will create the most interesting next‑generation companies.
Beyond rhetoric, he delivers a concrete, sizable financial commitment, signaling serious confidence and providing a tangible catalyst for the ecosystem.
The announcement acted as a decisive turning point, moving the discussion from abstract ideas to actionable support. It likely spurred immediate interest from founders, investors, and policymakers looking to align with the new capital inflow.
Speaker: Hemant Taneja
Overall Assessment

Hemant Taneja’s remarks shaped the discussion by repeatedly expanding the scope of AI—from a tool for resilience and economic deflation to a catalyst for societal empowerment and leap‑frog innovation. Each key comment introduced a fresh dimension (macro‑resilience, deflationary economics, job narrative, historical leap‑frogging, and a $5 billion investment) that redirected the conversation, deepened analysis, and moved the tone from cautious optimism to decisive action. Collectively, these insights reframed AI as a strategic national asset and set a clear, investment‑backed agenda for India’s AI future.

Follow-up Questions
How can AI be leveraged to enhance national resilience across key industries such as healthcare, defense, and energy?
Understanding concrete use‑cases and implementation pathways is essential to translate the broad claim that AI drives national resilience into actionable policies and investments.
Speaker: Hemant Taneja
What specific policies or frameworks are needed to ensure AI development aligns with human centricity and empowerment?
A clear governance model is required to operationalize the principle of human‑centric AI and to address ethical, privacy, and societal concerns.
Speaker: Hemant Taneja
How can India effectively leapfrog existing digital‑infrastructure paradigms (e.g., building on UPI, Aadhaar) to accelerate AI adoption in other sectors?
Identifying mechanisms for rapid, low‑cost scaling will help replicate past successes and avoid reinventing foundational layers.
Speaker: Hemant Taneja
What mechanisms will facilitate fluid innovation flow between the US, India, Europe, and the broader Western world to scale AI democratically?
Clarifying cross‑border collaboration channels, regulatory harmonisation, and talent exchange is crucial for building a globally resilient AI ecosystem.
Speaker: Hemant Taneja
What evidence supports the claim that AI is deflationary and how will that impact India’s economic uplift, especially in healthcare and education?
Empirical data is needed to validate the deflationary effect of AI and to forecast its macro‑economic implications for critical public services.
Speaker: Hemant Taneja
How can the potential job‑displacement narrative be addressed, and what strategies will empower the millions of young entrants to the workforce with AI tools?
Developing concrete upskilling, reskilling, and AI‑augmentation programs is vital to turn perceived threats into productivity gains.
Speaker: Hemant Taneja
What metrics will be used to assess the productivity gains from AI empowerment of the workforce?
Defining measurable indicators will allow policymakers and investors to track the real impact of AI on economic output.
Speaker: Hemant Taneja
How will the $5 billion investment over five years be allocated across the Indian entrepreneurial ecosystem, and what criteria will guide funding decisions?
Transparency on fund distribution and selection criteria will help ensure the capital drives the intended high‑impact AI startups.
Speaker: Hemant Taneja
What research is needed to evaluate the long‑term societal impacts of AI‑driven startups on global resilience and abundance?
Longitudinal studies will inform whether AI‑focused entrepreneurship delivers sustainable benefits beyond short‑term growth.
Speaker: Hemant Taneja
How can open‑source AI initiatives be expanded in India to support responsible innovation?
Open‑source ecosystems can democratise access, foster transparency, and accelerate collaborative problem‑solving.
Speaker: Hemant Taneja

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs


Session at a glance: Summary, keypoints, and speakers overview

Summary

Speaker 1 opened by thanking the audience and noting the distinguished company of leaders, framing the address as a high-profile discussion on AI’s role in India’s future [1-5]. He praised India’s emergence as a driver of the global AI conversation under Prime Minister Modi’s leadership [7]. Comparing the current era to the Industrial Revolution, he argued that AI is amplifying human cognition more profoundly than past technological shifts [10-15]. He announced the launch of Birla AI Labs, describing it as both an apex AI body for the Aditya Birla Group and a frontier research institute with a dual mandate [19][24-27].


The lab’s first mandate is to embed AI across the group’s businesses, leveraging the conglomerate’s 160-year legacy, $120 billion market cap and extensive operational data to create a competitive moat [30-33]. In the real-estate arm, AI is projected to cut project-concept timelines by 90%, freeing more than 2,000 man-days each year [34-36]. In financial services, AI has halved underwriting time, reduced credit-assessment preparation by 90%, and enabled a sales AI program targeting over $100 million in revenue while pushing first-call resolution above 90% [37]. Hindalco is deploying a proprietary factory-intelligence system that integrates 24 real-time KPIs and is building a digital twin of its smelters to orchestrate the coal-power ecosystem [40-45]. The micro-finance unit Tantra is embedding AI in sales, audit and quality control to raise productivity by at least 30%, thereby expanding credit access for rural women [46-50]. Consumer brands such as Birla Cosmetics are using AI creativity tools for hyper-personalized marketing and rapid asset production, illustrating that AI now works in complex, capital-intensive industries [53-56].


The lab’s second mandate focuses on frontier research, including structured foundation models for time-series data and a study on how language-model usage affects student curiosity and agency [64-71]. It has also launched an AI-native research and productivity platform at IIT Bombay that combines agentic search, real-time data processing and multimodal intelligence to boost day-to-day efficiency [72-76]. Concluding, the speaker emphasized that no single institution can navigate the $4-to-$40 trillion economic transition alone and called for an ecosystem linking academia, industry and policy for responsible AI development [92-99]. He ended by reaffirming Birla AI Labs’ commitment to shaping India’s AI future responsibly and thanked the audience [100].


Keypoints

AI as a transformative force for India’s future – The speaker frames the current AI wave as surpassing the Industrial Revolution, emphasizing that AI amplifies cognitive abilities and will be pivotal in moving India from a $4 trillion to a $40 trillion economy by 2047 [7-15].


Launch of Birla AI Labs with a dual mandate – Birla AI Labs is presented as both an internal “apex AI body” serving the Aditya Birla Group’s businesses and a frontier research lab creating proprietary AI products for the open market [24-27][58-62].


Real-world AI deployments across the Group’s portfolio – Specific use-cases are highlighted: compressing project timelines at Birla Estates, cutting underwriting time and boosting sales in financial services, real-time factory intelligence and digital twins at Hindalco, productivity gains for the micro-finance arm Tantra, and hyper-personalized marketing in consumer brands [33-50][51-56].


Frontier research initiatives and societal impact studies – The lab’s research focuses include structured foundation models for time-series data, probing model understanding of market crashes, and studying AI’s effect on human cognition; it also builds tools such as an AI-native research platform deployed at IIT Bombay [64-70][73-76].


Call for an ecosystem of academia, industry, and policy – The speaker argues that no single institution can navigate the AI epoch alone and urges collaborative, responsible building of institutions and policies to enable India’s AI leadership [92-98].


Overall purpose/goal


The discussion is a keynote announcement that introduces Birla AI Labs, showcases its early successes, outlines its research agenda, and positions the lab as a catalyst for India’s broader AI ecosystem. It seeks to demonstrate the lab’s strategic value, inspire confidence among stakeholders, and rally partners across academia, industry, and government to co-create a responsible AI future for the country.


Overall tone


The tone begins reverent and humble, acknowledging the stature of the audience and the speaker’s own nerves [1-6]. It quickly shifts to confident and visionary as the speaker describes AI’s historic significance and the lab’s ambitious mandate [7-15][24-27]. Throughout the rollout of concrete business examples, the tone is pragmatic and optimistic, emphasizing tangible impact [33-56]. When describing research and societal responsibilities, it becomes thoughtful and earnest, underscoring a sense of duty [64-70][71]. The closing remarks adopt a collaborative, rally-calling tone, stressing partnership and shared responsibility for the nation’s AI journey [92-98].


Speakers

Speaker 1


– Role/Title:


– Area of Expertise:


Additional speakers:


(none)


Full session report: Comprehensive analysis and detailed insights

Speaker 1 opened by thanking the host and acknowledging the distinguished audience, noting that previous speakers had included the Pope, Her Majesty the Queen and Nelson Mandela – a “very high bar” that left him both honoured and nervous [1-6]. He invoked President Obama’s 2011 remark about the calibre of speakers and, despite not being a president himself, felt a similar weight standing before leaders such as Prime Minister Modi, Sundar Pichai, Rishi Sunak, Sam Altman, Mukesh Ambani, N. Murthy and his own father [3-5].


Turning to the national context, he praised India’s emergence as a driver of the global AI conversation under Prime Minister Modi’s leadership [7]. He likened the current AI wave to the Industrial Revolution, arguing that while the latter amplified physical labour, AI amplifies cognitive capability, creating a “Cambrian explosion of possibilities” that rewrites the relationship between human effort and economic output [10-15]. He framed this transformation as essential for moving India from a $4 trillion to a $40 trillion economy by 2047, with technology as the decisive lever [9-15].


Against this backdrop, he announced the launch of Birla AI Labs, an institution with a dual mandate: (i) serving the Aditya Birla Group as an apex AI body, and (ii) operating as a frontier research lab that creates proprietary AI products for the open market [19][24-27].


The first mandate leverages the Group’s 160-year legacy, a $120 billion market capitalisation, operations in 42 countries and a workforce of over 250,000 people, providing a rich trove of real-world data across manufacturing, financial services, commodities and consumer businesses [30-33]. This depth gives the lab a “rare advantage” of enterprise-scale deployment paired with research freedom [27-29].


Concrete AI deployments were illustrated:


* Birla Estates will use AI to compress project-concept timelines by 90%, freeing more than 2,000 man-days annually and allowing architects and developers to iterate without the former time constraints. Contract-intelligence tools will now give teams a unified, accurate view of every agreement, flagging potential claims before they escalate [34-38].


* In Aditya Birla Capital, AI has halved underwriting turnaround, cut credit-assessment preparation by 90%, and powered an AI-enabled sales programme targeting over $100 million in gross sales while pushing first-call resolution beyond 90% [37][38-40].


* Hindalco is deploying a proprietary factory-intelligence system that integrates 24 real-time KPIs, turning static spreadsheets into a living intelligence layer that flags anomalies early; the ambition extends to a digital twin of smelters and furnaces with an AI layer orchestrating the entire coal-sourced power ecosystem [40-45][45-47].


* The micro-finance arm Tantra embeds AI across sales, audit and quality control, aiming for at least a 30% productivity lift, which translates into loan officers reaching more customers and women in villages gaining capital that was previously inaccessible [46-50].


* In the consumer-brand segment, Birla Cosmetics’ labels Love Etc and Contraband employ AI creativity tools to move from campaign ideation to final asset delivery at a fraction of traditional cost, enabling hyper-personalised marketing and real-time inventory intelligence. The speaker added that “to build great brands today, product must be backed by content-driven distribution” [52-56][52-54].


These examples collectively shift the discussion from “whether AI works” to “how fast and how deep we should go” [55-57].


The second mandate focuses on frontier research. The lab’s “North Star” is to become India’s next great institution at the intersection of deep research, applied engineering and market creation [61-63]. A global team of researchers and engineers from institutions such as Oxford, IIT Madras, BITS Pilani, ISRO, Google and Goldman Sachs has been assembled [63].


The first research vertical targets structured foundation models for time-series and tabular data, a market estimated at $600 billion according to a Forbes article [62-64]. The team’s paper “Time to Time”, presented in San Diego in December [64-66], asks whether such models truly understand phenomena like market crashes or merely fit curves; experiments injecting crash signatures into hidden states caused forecasts to shift, suggesting the models learn structural world knowledge rather than simple curve-fitting [64-68].


The second vertical addresses the human-centred impact of AI. A study conducted with Delhi University students measured how large-language-model usage affects curiosity and cognitive agency; the findings will be presented at King’s College in June, underscoring the lab’s belief that AI builders must understand societal consequences as a core responsibility [68-71].


In parallel, the lab launched an AI-native research and productivity platform at IIT Bombay in December 2024. The platform combines agentic search, real-time data processing and multimodal intelligence to deliver contextual insights across the internet, documents and financial data [72-74], and is already being used in the speaker’s own office to improve day-to-day efficiency [75-76]. This illustrates the intended feedback loop where frontier research directly fuels applied deployment [77-78].


Reflecting on the Aditya Birla Group’s historical alignment with India’s economic journey, from independence through liberalisation and globalisation, the speaker highlighted a “muscle memory” for navigating tectonic shifts and noted that his brother and sister have learned to read early tremors and invest before consensus [84-86]. He stressed that no single institution, however large, can navigate the $4 trillion-to-$40 trillion transition alone; instead, a sustained ecosystem linking academia, industry and policy is required [92-96].


Concluding, he reaffirmed Birla AI Labs’ commitment to act as an honest, responsible builder of technology, institutions and the broader AI ecosystem, inviting collaborators to co-author the playbook for the next phase of India’s AI future [97-99]. He emphasized that the group is committed to building a new world, closed with sincere thanks, and expressed honour at having spoken before such a distinguished gathering [100-101].


Session transcript: Complete transcript of the session
Speaker 1

Namaste. Thank you so much for that introduction. Good evening everyone. It is truly an honor to be here today. In his May 2011 address to the British Parliament, President Obama said, and I quote, I am told that the last three speakers here have been the Pope, Her Majesty, the Queen, and Nelson Mandela, which is either a very high bar or the beginning of a very funny joke. Unquote. Even though I am no President Obama, I feel something similar standing before you this evening. Being in the company of leaders like our Honorable Prime Minister Modiji, Sundar Pichai, Rishi Sunak, Sam Altman, Mukesh Ambani, Narayan Murthy, and my father, to name a few, this for sure is a very high bar that I will surely not reach.

I am very nervous and this is clearly not a joke. Under the leadership of our Honorable Prime Minister, it has been extraordinary to watch India step into a driving role in the global AI discourse. As a young leader, I feel extremely grateful to have this platform and I feel that it is our responsibility to make sure that we deliver. India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today to the Viksit Bharat we aspire to be by 2047, technology will play a decisive role. This moment carries a similar taste to that of the Industrial Revolution. A period where the relationship between human labor and economic output was fundamentally rewritten.

And yet, I would argue that what we are witnessing today is even more profound. The Industrial Revolution amplified our physical capabilities. AI is amplifying our cognitive ones. What we are living through is nothing less than a Cambrian explosion of possibilities. A phase where entirely new forms of value and new modes of human potential are emerging at a pace that defies our linear thinking. We are standing at a seminal moment in the history of human progress. A moment of extraordinary possibilities. In our humble attempt to translate these possibilities into reality, I am here to introduce Birla AI Labs. When it comes to the Aditya Birla Group and AI, I want to be clear. My father has been at this for a while.

Deliberately, quietly, and steadily. Not for the spectacle, but to deliver tangible value to our stakeholders. Birla AI Labs has a dual mandate. The first is to serve my father’s direction and act as an apex AI body for the Aditya Birla Group, building solutions alongside our business tech teams to unlock new value across our businesses. The second is to operate as a frontier research lab doing ongoing original research at the cutting edge and translating that science into proprietary AI products for the open market. This dual positioning gives Birla AI Labs a rare advantage: real world data, domain know-how and enterprise scale deployment through the group, paired with the freedom to build category defining products for global markets.

Let me share what this looks like in practice. As a part of our first mandate driven by my father, we are executing AI deployment across the Aditya Birla Group. The group has been operational for the last 160 plus years, and we have decades of operational data across manufacturing, financial services, commodities, consumer businesses, and a growing bench of talent that understands both the science and the business, giving us an undeniable moat. With a $120 billion market cap operating across 42 countries with over 250,000 employees, we are witnessing tangible early gains across our diverse portfolio as advanced analytics and AI reshape everything from supply chains to workforce management. Let me start with Birla Estates. We will be using AI to compress project concept timelines by 90%.

Freeing over 2,000 man-days a year. The immediate impact is efficiency. Architects and developers are no longer constrained by the time it takes to test an idea. Contract intelligence tools will now give our teams a unified, accurate view of every agreement, flagging potential claims before they escalate. Moving into financial services, the transformation takes a completely different shape. Aditya Birla Capital has built one of the most ambitious Gen AI programs in India’s financial sector. Not by picking a single use case, but by going after the entire value chain at once. Underwriting turnaround time is down 50%. Credit assessment preparation has been cut by 90%. A fully AI-enabled sales program is already targeting more than $100 million in gross sales. And it’s not just about the sales, it’s also about the value of the product, while the customer service platform is pushing first call resolution beyond 90%. What makes this remarkable is not any individual number.

It is the concurrence. Then there is Hindalco. And here the story shifts register entirely. This is about applying intelligence in one of the most physically demanding, energy-intensive industries in the world. On the shop floor, a proprietary factory intelligence system integrates 24 operational KPIs in real time, turning what were once static spreadsheets into a living intelligence layer that surfaces anomalies before they escalate. And what we are building is more ambitious still: a digital twin for our smelters and furnaces, and an AI layer on top that will orchestrate the entire coal-sourced power ecosystem. But if there is one place in our portfolio where AI feels most consequential, most human, it is Svatantra, our microfinance business.

Here we are embedding AI across sales, audit and quality control, and we expect it to unlock at least 30% in productivity gains. It means a loan officer can reach more people. It means a woman in a village gets access to capital that she would not have had access to otherwise. That kind of efficiency does not just improve margins, it has the potential to improve lives. The consumer businesses bring this completely full circle. I have come to believe through my experience that to build great brands today, product must be backed by content-driven distribution. Our fashion, retail and jewelry businesses are deploying AI for hyper-personalized marketing and real-time inventory intelligence. Within Birla Cosmetics, our brands Love Etc and Contraband are using AI creativity tools to move from campaign ideation to final asset delivery at a fraction of the traditional cost.

What this tells me is that the question is no longer whether AI can work in complex, capital-intensive real-world industries. We have seen it and we know it can. The real question is how fast and how deep we should go, given the ambiguity that surrounds artificial intelligence. Now, on to our second mandate. And this is where Birla AI Labs operates as a frontier research lab. The conviction here is simple. India’s next great institution will emerge at the intersection of deep research, applied engineering and market creation. That is our North Star. To do this, we have assembled a global team of researchers and engineers, from Oxford and IIT Madras to BITS Pilani, ISRO, Google and Goldman Sachs.

Our first major research vertical is in structured foundation models, a field that a recent Forbes article estimates at a $600 billion market opportunity. Often overlooked, a vast majority of the world’s data sits in time series and tabular formats: stock prices, sensor readings, supply chain signals, weather patterns, energy consumption, patient vitals. This data could actually power predictive intelligence in industry, in finance, in infrastructure, in healthcare. In December, our team was in Europe and in San Diego, where our paper, Time to Time, was accepted. It asks a very provocative question: do these time series foundation models actually understand what a market crash is, or are they just fitting curves? A researcher showed that you can reach inside a model’s hidden states, inject the signature of a historical crash, and watch the forecast shift accordingly. This is not a question of time. This is not curve fitting. This is a model that has learned something about the structure of the world. Our researchers are working at that frontier. This thesis for Birla AI Labs has been presented at the Oxford AI Summit and at the World Summit AI in the Netherlands in 2025. This lab has built, is building I would say, a credible global research presence, presenting at top venues, partnering with leading institutions and attracting talent that could work anywhere in the world but has chosen to build for India. A second research vertical is one that I believe the industry has a moral obligation to pursue. AI now mediates the everyday decisions, relationships and information environments of over 1.7 billion people worldwide, yet the study of what this does to human cognition, agency and daily life remains nascent. We at Birla AI Labs want to do something about this. We are here to help. We conducted a study with Delhi University students to measure how language model usage affects curiosity and cognitive agency among students.

The results of this study will be presented at King’s College this June. This is the kind of research that industry too often leaves to others. But I believe that those of us who are building AI have a responsibility to understand its human consequences, not as an afterthought, but as a core part of the enterprise. Alongside the research, we are also building tools. In December 2024, we launched a beta version of an AI native research and productivity platform at IIT Bombay, combining agentic search, real-time data processing and multimodal intelligence to deliver contextual insights across the Internet, documents and financial data. That platform is now being used across my own office to drive day-to-day efficiency.

It is a tangible example of what happens when frontier research meets applied deployment. And that is exactly the loop that we at Birla AI Labs are designed to close. This approach, building at the frontier while staying rooted in real-world application, is not new to us. The Aditya Birla Group’s history has been very intertwined with the story of our nation. My forefathers have built through every chapter of India’s journey, through independence, through liberalization, through globalization. Ours is a history of reading the moment, adapting with conviction, and building institutions that outlast the disruptions that gave birth to them in the first place. This century of building has given us something invaluable: a muscle memory for navigating tectonic shifts. We are here to build a new world.

The generations before me, my brother and my sister, have learned to read the early tremors, to invest before consensus forms, and to build for decades rather than just quarters. Every generation of our group has faced a moment where the old playbook had to be rewritten. What is different today is the elements of high ambiguity and uncertainty, which, if we look closely, can give rise to immense opportunity. And that is precisely what makes this moment so thrilling and so consequential. But here is what I have come to believe very, very strongly.

No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 trillion to $40 trillion will not be powered by industry acting in isolation. It will require something way more fundamental. It will require us to build an ecosystem that brings academia, industry and policy into genuine, sustained collaboration. As India writes its AI chapter, we intend to be on the front lines, not as observers, not as fast followers, but as honest and truly responsible builders of technology, of institutions and of the ecosystem this country needs to lead. And we will do so with utmost responsibility. The playbook for what comes next has not yet been written. And at the Aditya Birla Group and at Birla AI Labs, we look forward to writing it together.

Thank you all so very much. It’s been an honour.

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“He invoked President Obama’s 2011 remark about the calibre of speakers”

The knowledge base references President Obama’s 2011 address discussing the calibre of speakers, confirming the citation [S4].

Confirmed (high)

“He mentioned leaders such as Prime Minister Modi and Sundar Pichai”

A keynote transcript records Sundar Pichai speaking about Prime Minister Modi, confirming both figures are cited in the context of AI discussions [S50].

Confirmed (high)

“He praised India’s emergence as a driver of the global AI conversation under Prime Minister Modi’s leadership”

The knowledge base notes strong optimism about India’s AI potential and highlights Modi’s leadership in that domain, supporting the claim [S6].

Confirmed (high)

“He likened the current AI wave to the Industrial Revolution, stating AI amplifies cognitive capability and creates a “Cambrian explosion of possibilities””

The source explicitly describes AI as amplifying cognitive abilities and a Cambrian-explosion-like surge of possibilities, mirroring the speaker’s analogy [S9].

External Sources (60)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “AI is amplifying our cognitive ones.”[1]. “The Industrial Revolution amplified our physical capabilities.”[2]. “This mo…
S6
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Thank you, Prime Minister. It’s an honor to be here, and under your leadership, you have elevated technology from a sect…
S7
What policy levers can bridge the AI divide? — **Additional speakers:** – **H.E. Mr. Solly Malatsi**: Role/title not clearly specified in the transcript, but appears …
S8
Importance of Professional standards for AI development and testing — Don Gotterbarn: Thank you, Stephen. The previous assertion to Stephen’s that says essentially, because there’s differenc…
S9
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — Freeing over 2 ,000 man days a year. The immediate impact is efficiency. Architects and developers are no longer constra…
S10
Rights and Permissions — Online work platforms are eliminating many of the geographical barriers previously associated with certain tasks. Bangla…
S11
Press Conference: Closing the AI Access Gap — Moreover, the speakers argue that AI can drive productivity, creativity, and overall economic growth. It has the capacit…
S12
Industries in the Intelligent Age / DAVOS 2025 — Julie Sweet: That’s a good place to be, better be in the lead. Julie, you work with probably a lot of these folks, al…
S13
AI tools fuel smarter and faster marketing decisions — Nearly half of UK marketers surveyed already harnessAIfor essential tasks such as market research, campaign optimisation…
S14
The Global Power Shift India’s Rise in AI & Semiconductors — And one of the changes that has happened, obviously India becoming the larger in terms of GDP size, consumer demand, peo…
S15
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S16
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond discusses specific use cases for quantum computing in drug design and financial portfolio optimi…
S17
A Global AI in Financial Services Survey — AI has various uses in customer acquisition, including making outreach more personalised, speeding up onboarding procedu…
S18
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, …
S19
AI 2.0 Reimagining Indian education system — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers, representing d…
S20
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S21
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Banning such technologies is seen as necessary to safeguard privacy, freedom, and prevent potential violations of human …
S22
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S23
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — Overall, AI governance requires collaboration, inclusivity, transparency, and accountability. It is a complex and evolvi…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — Deliberately, quietly, and steadily. Not for the spectacle, but to deliver tangible value to our stakeholders. Birla AI …
S25
Indias AI Leap Policy to Practice with AIP2 — “This is essentially what we provide for startups.”[16]. “And startups become a very, very important player in this game…
S26
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Namaskar, Pradhan Muthuji. As a world’s leading cybersecurity company, our mission is to deliver this AI vision safely a…
S27
State of play of major global AI Governance processes — Alan Davidson:Well, thank you, Dr. El-Masri. And a quick thank you and congratulations to the ITU and to Secretary Gener…
S28
FOREWORD — – In 2022, UNESCO convened a Southern African sub-regional forum on Artificial Intelligence, attended by se…
S29
Announcement of New Delhi Frontier AI Commitments — “First, advancing understanding of real‑world AI usage through anonymized and aggregated insights to support evidence‑ba…
S30
Secure Finance Risk-Based AI Policy for the Banking Sector — Consumer centric safeguards obviously by way of transparent disclosure clear appeal processes and human intervention mec…
S31
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — In conclusion, the analysis reinforces the potential of digitalisation and emerging technologies, such as artificial int…
S32
African AI: Digital Public Goods for Inclusive Development | IGF 2023 WS #317 — In certain cases, even the CEO can be voted out if they deviate from the principles outlined in the internal constitutio…
S33
A Global AI in Financial Services Survey — – FinTechs have the privilege of starting from scratch, allowing them to build new IT systems that have a significantly …
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — This discussion features a speech by a young leader from the Aditya Birla Group announcing the launch of Birla AI Labs, …
S35
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — – K. Krithivasan- Salil Parekh- C. Vijayakumar Future of Employment and Workforce Transformation References India beco…
S37
Panel Discussion AI in Healthcare India AI Impact Summit — I mean, I might sort of frame it with this. I think, and you talked a little bit about it. Your friends have kind of see…
S38
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — Let me share what this looks like in practice. As a part of our first mandate driven by my father, we are executing AI d…
S39
Who Watches the Watchers Building Trust in AI Governance — Create partnerships between frontier labs and external actors that draw on internal technical knowledge while ensuring t…
S40
From High-Performance Computing to High-Performance Problem Solving / Davos 2025 — Georges Olivier Reymond discusses specific use cases for quantum computing in drug design and financial portfolio optimi…
S41
Fireside Chat Intel Tata Electronics CDAC & Asia Group _ India AI Impact Summit — Thank you. First of all, thank you for inviting me. It’s a privilege to be here and talking in front of such an esteemed…
S42
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — So I think we don’t have the right like we don’t have enough deployment of the cutting edge models in India data centers…
S43
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Dasom Lee leads the AI and Cyber-Physical Systems Policy Lab at KAIST, where they focus on the relationship between AI, …
S44
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S45
WS #270 Understanding digital exclusion in AI era — The speaker stresses the need for collaboration among multiple stakeholders to address AI challenges. No single stakehol…
S46
Bridging the AI innovation gap — The speaker stressed that all stakeholders—government, industry, academia, and civil society—have important roles in sha…
S47
Regional experiences on the governance of emerging technologies NRI Collaborative Session — – Eliamani Isaya Laltaika: Judge of the High Court of Tanzania, academic at Nelson Mandela African Institution of Scienc…
S48
Open Mic & Closing Ceremony — Audience: Good evening, everybody. It’s been a long five days and it’s a very, very kudos to the organizers of the West …
S49
Masterclass#1 — Gratitude was expressed towards both presenters and participants for engaging in the dialogue.
S50
Keynote-Sundar Pichai — -Prime Minister Modi: Role/Title: Prime Minister (of India, based on context); Area of Expertise: Not mentioned (acknowl…
S51
The State of Cyber Diplomacy: Momentum, Inertia, or Something Else Altogether? — Rudolph Lohmeyer:Inertia, or something else all together? His Excellency Massimo Marotti, Ambassador, International Rela…
S52
Subrata K. Mitra Jivanta Schottli Markus Pauli — The chapter also highlights changes that are taking place under the government of Prime Minister Narendra Modi. We asses…
S53
(Day 1) General Debate – General Assembly, 79th session: morning session — Recep Tayyip Erdoğan – Turkey: Mr. President, dear heads of states and governments, Mr. Secretary General, distinguish…
S54
John le Carré: The Biography — Despite telling us nothing new about the subject’s own career as an intelligence officer, in every other regard this luc…
S55
Public Diplomacy and Nation Brand — To my father, for his inspiration to greater heights
S56
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S57
Are we creating alien beings? — The conversation touched on AI’s potential effects on employment. Hinton drew parallels to the Industrial Revolution, no…
S58
Fireside Conversation: 01 — Amodei speculates that AI could unlock unprecedented growth rates in India, far beyond typical expectations, by linking …
S59
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-panel-discussion — And John, very quickly to you, I’m going to ask you a cheeky question. The kind of philanthropy I think that we require …
S60
Empowering Workers in the Age of AI — Tom Wambeke: Good afternoon. This is the last input before we can go a little bit more interactive. As you see from the …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
15 arguments · 135 words per minute · 2035 words · 899 seconds
Argument 1
AI as catalyst for India’s economic future – AI amplifies cognitive capabilities, likened to a new Industrial Revolution (Speaker 1)
EXPLANATION
The speaker likens the current AI era to the Industrial Revolution, emphasizing that while the latter amplified physical labor, AI amplifies cognitive abilities, creating a transformative “Cambrian explosion” of possibilities.
EVIDENCE
He describes the present moment as comparable to the Industrial Revolution, noting that the Industrial Revolution amplified physical capabilities while AI amplifies cognitive ones, leading to a rapid emergence of new forms of value and human potential. [10-15]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote describes AI as expanding human cognitive capacity and directly compares the current AI era to the Industrial Revolution’s amplification of physical labor, confirming the speaker’s analogy [S4].
MAJOR DISCUSSION POINT
AI as a transformative catalyst
Argument 2
AI as catalyst for India’s economic future – Leveraging AI is essential to grow India’s economy from $4 trillion to $40 trillion by 2047 (Speaker 1)
EXPLANATION
The speaker argues that achieving India’s ambition of expanding its economy from $4 trillion to $40 trillion by 2047 hinges on technology, with AI positioned as the decisive driver of that growth.
EVIDENCE
He states that India’s journey from a $4 trillion economy to a $40 trillion economy by 2047 will rely heavily on technology, linking this growth to the AI-driven industrial shift described earlier. [9-15]
MAJOR DISCUSSION POINT
AI‑driven economic growth
Argument 3
Birla AI Labs’ dual mandate – Serve the Aditya Birla Group as an apex AI body to create business value (Speaker 1)
EXPLANATION
Birla AI Labs’ first mandate is to act as the central AI organization for the Aditya Birla Group, collaborating with business‑tech teams to develop solutions that unlock new value across the conglomerate’s diverse businesses.
EVIDENCE
He outlines the first mandate of Birla AI Labs as serving his father’s direction and acting as an apex AI body for the group, building solutions alongside business tech teams to create new value across businesses. [24-27][30-33]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The source explicitly states that Birla AI Labs’ first mandate is to act as an apex AI body for the Aditya Birla Group, aligning with the speaker’s description [S9].
MAJOR DISCUSSION POINT
Internal AI service mandate
Argument 4
Birla AI Labs’ dual mandate – Operate as a frontier research lab to develop original AI science and market‑ready products (Speaker 1)
EXPLANATION
The second mandate positions Birla AI Labs as a frontier research laboratory that conducts cutting‑edge AI research and translates scientific breakthroughs into proprietary products for the open market.
EVIDENCE
The second mandate is described as operating as a frontier research lab doing original cutting-edge research and translating that science into proprietary AI products for the open market. [24-27][59-62]
MAJOR DISCUSSION POINT
External research and product development
Argument 5
Concrete AI deployments across the Birla portfolio – Birla Estates: AI cuts project concept timelines by 90%, saving 2,000+ man‑days annually (Speaker 1)
EXPLANATION
At Birla Estates, AI will compress project concept timelines by 90%, freeing more than 2,000 man‑days each year and dramatically improving efficiency for architects and developers.
EVIDENCE
He says AI will compress project concept timelines by 90%, freeing over 2,000 man-days annually, allowing architects and developers to work faster without being constrained by testing time. [33-36]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim of a 90% reduction in project concept timelines and the saving of over 2,000 man-days per year is documented in the keynote and transcript excerpts [S4][S9].
MAJOR DISCUSSION POINT
Efficiency gains in real estate
Argument 6
Concrete AI deployments across the Birla portfolio – Aditya Birla Capital: AI reduces underwriting time by 50% and credit‑assessment effort by 90%; AI‑driven sales target >$100 M (Speaker 1)
EXPLANATION
In the financial services arm, AI has halved underwriting turnaround, cut credit‑assessment preparation by 90%, and powers an AI‑enabled sales program targeting over $100 million in gross sales, while also boosting first‑call resolution beyond 90% in customer service.
EVIDENCE
He reports that AI has reduced underwriting turnaround time by 50%, cut credit-assessment preparation by 90%, and an AI-enabled sales program is targeting more than $100 million in gross sales, with the customer service platform pushing first-call resolution beyond 90%. [37]
MAJOR DISCUSSION POINT
Financial services transformation
Argument 7
Concrete AI deployments across the Birla portfolio – Hindalco: Real‑time factory intelligence integrates 24 KPIs; digital twin and AI layer for smelters and power ecosystem (Speaker 1)
EXPLANATION
Hindalco is deploying a proprietary factory intelligence system that integrates 24 operational KPIs in real time, turning static spreadsheets into a living intelligence layer, and is building a digital twin for smelters with an AI layer to orchestrate the coal‑sourced power ecosystem.
EVIDENCE
He describes a proprietary factory intelligence that integrates 24 KPIs in real time, turning static spreadsheets into a living intelligence layer, and mentions building a digital twin for smelters and an AI layer for the power ecosystem. [40-45]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Details of Hindalco’s proprietary factory intelligence system that integrates 24 real-time KPIs and the planned digital-twin with an AI orchestration layer are provided in the source material [S4][S9].
MAJOR DISCUSSION POINT
AI in heavy industry
Argument 8
Concrete AI deployments across the Birla portfolio – Svatantra (micro‑finance): AI across sales, audit, quality drives ≥30% productivity, expanding credit access for rural women (Speaker 1)
EXPLANATION
In the micro‑finance business Svatantra, AI is embedded across sales, audit and quality control, projected to deliver at least a 30% productivity boost, enabling loan officers to reach more customers and providing women in villages with previously unavailable capital.
EVIDENCE
He explains that AI across sales, audit and quality control is projected to deliver at least 30% productivity gains, allowing loan officers to serve more customers and giving women in villages access to capital they previously lacked. [45-50]
MAJOR DISCUSSION POINT
Inclusive finance through AI
Argument 9
Concrete AI deployments across the Birla portfolio – Consumer brands (Love Etc, Contraband): AI powers hyper‑personalized marketing and rapid creative asset generation (Speaker 1)
EXPLANATION
The consumer‑facing brands are using AI for hyper‑personalized marketing, real‑time inventory intelligence, and AI creativity tools that accelerate campaign ideation to final asset delivery at a fraction of traditional cost.
EVIDENCE
He notes that fashion, retail and jewelry businesses deploy AI for hyper-personalized marketing and real-time inventory intelligence, and that Love Etc and Contraband use AI creativity tools to move from campaign ideation to final asset delivery at a fraction of traditional cost. [52-55]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry surveys show AI being used for hyper-personalized marketing, real-time inventory insights, and rapid creative asset generation, matching the described deployments [S13].
MAJOR DISCUSSION POINT
AI in marketing and creative processes
Argument 10
Frontier research focus areas – Structured foundation models for time‑series/tabular data; research shows models can internalize market‑crash signatures (Speaker 1)
EXPLANATION
The lab’s research on structured foundation models for time‑series and tabular data demonstrates that such models can learn underlying world structures, as shown by experiments where injecting a historical crash signature shifts the model’s forecast, indicating more than curve fitting.
EVIDENCE
He outlines a research vertical on structured foundation models for time-series/tabular data, citing a paper ‘Time to Time’ that showed injecting a historical crash signature into a model shifts its forecast, suggesting the model has learned structural knowledge. [64-68]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote references research indicating that time-series foundation models can embed historical market-crash signatures, supporting the stated research focus [S4].
MAJOR DISCUSSION POINT
Advanced AI research on time‑series models
Argument 11
Frontier research focus areas – Human‑centred AI study: impact of large language models on curiosity and agency among Delhi University students (Speaker 1)
EXPLANATION
A study conducted with Delhi University students measured how usage of large language models influences curiosity and cognitive agency, with findings slated for presentation at King’s College in June.
EVIDENCE
He mentions a study with Delhi University students to assess the impact of language model usage on curiosity and cognitive agency, with results to be presented at King’s College in June. [68-71]
MAJOR DISCUSSION POINT
Human‑centred AI impact research
Argument 12
Frontier research focus areas – AI‑native research and productivity platform launched at IIT Bombay, combining agentic search, real‑time data, multimodal intelligence (Speaker 1)
EXPLANATION
In December 2024 a beta AI‑native research and productivity platform was launched at IIT Bombay, integrating agentic search, real‑time data processing and multimodal intelligence to deliver contextual insights, and is now being used internally to improve day‑to‑day efficiency.
EVIDENCE
He describes launching a beta AI-native research and productivity platform at IIT Bombay that combines agentic search, real-time data processing and multimodal intelligence to deliver contextual insights, and notes it is now used across his own office to boost efficiency. [72-76]
MAJOR DISCUSSION POINT
Deployment of frontier research tools
Argument 13
Ethical responsibility and ecosystem building – Industry must proactively study AI’s societal and cognitive effects, not treat them as afterthoughts (Speaker 1)
EXPLANATION
The speaker asserts that industry often leaves the study of AI’s human consequences to others, and that AI builders have a responsibility to understand these societal and cognitive impacts as a core part of their work.
EVIDENCE
He criticizes that industry often leaves the study of AI’s human consequences to others, asserting that builders of AI have a responsibility to understand its societal and cognitive impacts as a core part of their work. [70-71]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Discussion of professional codes of ethics and the need for AI developers to address societal impacts underscores the call for proactive industry study of AI’s effects [S8].
MAJOR DISCUSSION POINT
Ethical responsibility in AI development
Argument 14
Ethical responsibility and ecosystem building – Building India’s AI future requires sustained collaboration among academia, industry, and policy makers (Speaker 1)
EXPLANATION
The speaker emphasizes that no single institution can navigate the AI epoch alone; achieving the $4 trillion to $40 trillion journey will require an ecosystem that brings together academia, industry and policy in genuine, sustained collaboration.
EVIDENCE
He states that no single institution can navigate the AI epoch alone, and that achieving the $4 trillion to $40 trillion journey will require an ecosystem that brings together academia, industry and policy in genuine, sustained collaboration. [92-96]
MAJOR DISCUSSION POINT
Need for multi‑stakeholder ecosystem
Argument 15
Ethical responsibility and ecosystem building – Birla AI Labs commits to responsible development of technology, institutions, and the broader AI ecosystem (Speaker 1)
EXPLANATION
Birla AI Labs pledges to act as honest and responsible builders of technology, institutions and the AI ecosystem, emphasizing utmost responsibility in its actions.
EVIDENCE
He affirms that Birla AI Labs will act as honest and responsible builders of technology, institutions and the broader AI ecosystem, emphasizing utmost responsibility. [96-98]
MAJOR DISCUSSION POINT
Commitment to responsible AI
Agreements
Agreement Points
AI is presented as a transformative catalyst for India’s economic future, likened to a new Industrial Revolution and essential for scaling the economy from $4 trillion to $40 trillion by 2047.
Speakers: Speaker 1
AI as catalyst for India’s economic future – AI amplifies cognitive capabilities, likened to a new Industrial Revolution (Speaker 1)
AI as catalyst for India’s economic future – Leveraging AI is essential to grow India’s economy from $4 trillion to $40 trillion by 2047 (Speaker 1)
The speaker repeatedly emphasizes that AI, by amplifying cognitive abilities, will drive a Cambrian-like explosion of value and is the decisive technology needed for India’s ambition to expand its GDP ten-fold by 2047 [10-15][9-15].
POLICY CONTEXT (KNOWLEDGE BASE)
This vision echoes statements at the Leaders’ Plenary where AI was described as enabling a “ten-times industrial revolution” for India and aligns with New Delhi’s Frontier AI Commitments that link AI adoption to economic transformation and growth targets [S26][S29].
Birla AI Labs has a dual mandate: (a) serve the Aditya Birla Group as an apex AI body to create business value, and (b) operate as a frontier research lab producing original AI science and market‑ready products.
Speakers: Speaker 1
Birla AI Labs’ dual mandate – Serve the Aditya Birla Group as an apex AI body to create business value (Speaker 1)
Birla AI Labs’ dual mandate – Operate as a frontier research lab to develop original AI science and market‑ready products (Speaker 1)
The speaker outlines a two-pronged strategy: first, an internal AI hub that works with business-tech teams across the conglomerate; second, an external research arm that pursues cutting-edge science and commercialises it [24-27][30-33][59-62].
POLICY CONTEXT (KNOWLEDGE BASE)
The dual-mandate description matches the official keynote where Birla AI Labs is positioned as both an apex AI body for the Aditya Birla Group and a frontier research lab delivering market-ready solutions [S24].
Concrete AI deployments across the Birla portfolio demonstrate efficiency gains, productivity improvements, and new capabilities in real‑estate, financial services, heavy industry, micro‑finance, and consumer brands.
Speakers: Speaker 1
Concrete AI deployments across the Birla portfolio – Birla Estates: AI cuts project concept timelines by 90%, saving 2,000+ man‑days annually (Speaker 1)
Concrete AI deployments across the Birla portfolio – Aditya Birla Capital: AI reduces underwriting time by 50% and credit‑assessment effort by 90%; AI‑driven sales target >$100 M (Speaker 1)
Concrete AI deployments across the Birla portfolio – Hindalco: Real‑time factory intelligence integrates 24 KPIs; digital twin and AI layer for smelters and power ecosystem (Speaker 1)
Concrete AI deployments across the Birla portfolio – Tantra (micro‑finance): AI across sales, audit, quality drives ≥30% productivity, expanding credit access for rural women (Speaker 1)
Concrete AI deployments across the Birla portfolio – Consumer brands (Love Etc, Contraband): AI powers hyper‑personalized marketing and rapid creative asset generation (Speaker 1)
Across multiple business units, AI is used to slash project timelines by 90% and free 2,000 man-days (Estates) [33-36]; halve underwriting cycles and cut credit-assessment effort by 90% while targeting $100 M sales (Capital) [37]; integrate 24 real-time KPIs and build digital twins for smelters (Hindalco) [40-45]; boost micro-finance productivity by ≥30% and extend credit to women in villages (Tantra) [45-50]; and enable hyper-personalized marketing and low-cost creative production for consumer brands [52-55].
POLICY CONTEXT (KNOWLEDGE BASE)
These deployments reflect broader industry trends highlighted in a global AI-in-Financial-Services survey that notes fintechs leveraging AI for cost efficiencies and productivity, and they resonate with banking-sector AI policy that stresses risk-based, consumer-centric safeguards for responsible AI use in finance [S33][S30].
Frontier research focus areas include structured foundation models for time‑series/tabular data, human‑centred AI studies on cognition and agency, and the launch of an AI‑native research‑productivity platform.
Speakers: Speaker 1
Frontier research focus areas – Structured foundation models for time‑series/tabular data; research shows models can internalize market‑crash signatures (Speaker 1)
Frontier research focus areas – Human‑centred AI study: impact of large language models on curiosity and agency among Delhi University students (Speaker 1)
Frontier research focus areas – AI‑native research and productivity platform launched at IIT Bombay, combining agentic search, real‑time data, multimodal intelligence (Speaker 1)
The lab pursues (a) time-series foundation models that can learn structural world knowledge, demonstrated by crash-signature experiments [64-68]; (b) a human-centred study measuring LLM effects on student curiosity and agency, with results to be presented at King’s College [68-71]; and (c) a beta AI-native platform deployed at IIT Bombay that fuses agentic search, real-time processing and multimodal intelligence for productivity gains [72-76].
The speaker stresses ethical responsibility and the need for a multi‑stakeholder ecosystem (academia, industry, policy) to develop AI responsibly and sustainably.
Speakers: Speaker 1
Ethical responsibility and ecosystem building – Industry must proactively study AI’s societal and cognitive effects, not treat them as afterthoughts (Speaker 1)
Ethical responsibility and ecosystem building – Building India’s AI future requires sustained collaboration among academia, industry, and policy makers (Speaker 1)
Ethical responsibility and ecosystem building – Birla AI Labs commits to responsible development of technology, institutions, and the broader AI ecosystem (Speaker 1)
The speaker argues that AI builders must study human consequences rather than leaving it to others [70-71], that no single institution can navigate the AI epoch alone and a collaborative ecosystem is essential [92-96], and that Birla AI Labs will act as honest, responsible builders of technology and institutions [96-98].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for a multi-stakeholder ecosystem mirrors consensus in AI governance fora: UNCTAD’s emphasis on collaboration and accountability, IGF-style multistakeholder participation, and ITU-led discussions on inclusive AI governance all underline the same principle [S20][S22][S23][S27][S28].
Similar Viewpoints
Both arguments stress that AI is the key driver of a transformative economic shift comparable to the Industrial Revolution and indispensable for achieving the $4 T → $40 T growth target [10-15][9-15].
Speakers: Speaker 1
AI as catalyst for India’s economic future – AI amplifies cognitive capabilities, likened to a new Industrial Revolution (Speaker 1)
AI as catalyst for India’s economic future – Leveraging AI is essential to grow India’s economy from $4 trillion to $40 trillion by 2047 (Speaker 1)
All five deployment examples illustrate how AI is being used to generate efficiency, productivity, and new value across diverse sectors of the Birla conglomerate [33-36][37][40-45][45-50][52-55].
Speakers: Speaker 1
Concrete AI deployments across the Birla portfolio – Birla Estates: AI cuts project concept timelines by 90%, saving 2,000+ man‑days annually (Speaker 1)
Concrete AI deployments across the Birla portfolio – Aditya Birla Capital: AI reduces underwriting time by 50% and credit‑assessment effort by 90%; AI‑driven sales target >$100 M (Speaker 1)
Concrete AI deployments across the Birla portfolio – Hindalco: Real‑time factory intelligence integrates 24 KPIs; digital twin and AI layer for smelters and power ecosystem (Speaker 1)
Concrete AI deployments across the Birla portfolio – Tantra (micro‑finance): AI across sales, audit, quality drives ≥30% productivity, expanding credit access for rural women (Speaker 1)
Concrete AI deployments across the Birla portfolio – Consumer brands (Love Etc, Contraband): AI powers hyper‑personalized marketing and rapid creative asset generation (Speaker 1)
The three statements converge on the need for responsible, human‑centred AI development and a collaborative ecosystem spanning academia, industry and policy [70-71][92-96][96-98].
Speakers: Speaker 1
Ethical responsibility and ecosystem building – Industry must proactively study AI’s societal and cognitive effects, not treat them as afterthoughts (Speaker 1)
Ethical responsibility and ecosystem building – Building India’s AI future requires sustained collaboration among academia, industry, and policy makers (Speaker 1)
Ethical responsibility and ecosystem building – Birla AI Labs commits to responsible development of technology, institutions, and the broader AI ecosystem (Speaker 1)
Unexpected Consensus
Inclusion of rural women through AI‑enabled micro‑finance is framed as both a business productivity gain and an ethical responsibility.
Speakers: Speaker 1
Concrete AI deployments across the Birla portfolio – Tantra (micro‑finance): AI across sales, audit, quality drives ≥30% productivity, expanding credit access for rural women (Speaker 1)
Ethical responsibility and ecosystem building – Industry must proactively study AI’s societal and cognitive effects, not treat them as afterthoughts (Speaker 1)
While the Tantra deployment is presented as a productivity initiative ([45-50]), the speaker simultaneously stresses ethical responsibility to study AI’s societal impacts ([70-71]), revealing an unexpected alignment between profit-driven AI use and a commitment to social equity for women in villages.
POLICY CONTEXT (KNOWLEDGE BASE)
Framing AI-enabled micro-finance for rural women aligns with gender-inclusive AI policy strands, such as banking sector risk-based AI guidelines that embed consumer-centric safeguards and gender considerations, UNCTAD’s analysis of the digital gender gap, and broader human-rights-based AI governance advocating equitable access [S30][S31][S21].
Overall Assessment

Speaker 1 consistently aligns all presented arguments around four core pillars: (1) AI as a transformative engine for India’s macro‑economic ambition; (2) a dual‑mandate structure that couples internal value creation with frontier research; (3) concrete, cross‑sector AI deployments delivering efficiency and inclusive outcomes; and (4) a strong ethical stance calling for responsible AI and multi‑stakeholder ecosystem building.

Given that a single speaker articulates all points, internal consensus is absolute. The coherence across strategic, operational, research, and ethical dimensions suggests a unified vision that, if adopted by other stakeholders, could drive coordinated policy, investment, and regulatory actions to accelerate India’s AI‑led development.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains remarks solely from Speaker 1; no other speakers are present, and therefore no contrasting viewpoints or debates are evident. All arguments presented are consistent with a single perspective on AI’s role in India’s economic future, Birla AI Labs’ dual mandate, concrete deployments, frontier research, and ethical responsibilities. Consequently, there are no points of disagreement, partial agreement, or unexpected divergence to classify.

None – the discussion is monologic, indicating full alignment (by default) rather than conflict. This implies that, within the provided material, there is no contested discourse affecting the identified topics.

Takeaways
Key takeaways
AI is positioned as a catalyst for India’s economic growth, likened to a new Industrial Revolution that amplifies cognitive capabilities.
Birla AI Labs has a dual mandate: (1) serve the Aditya Birla Group as an apex AI body to generate business value, and (2) operate as a frontier research lab to create original AI science and market‑ready products.
Significant AI deployments are already delivering measurable benefits across the Birla portfolio, including dramatic efficiency gains in real‑estate, financial services, heavy industry, micro‑finance, and consumer brands.
Frontier research is focused on structured foundation models for time‑series/tabular data, human‑centred AI impacts on cognition and agency, and an AI‑native research and productivity platform deployed at IIT Bombay.
The speaker emphasizes ethical responsibility, urging industry to study AI’s societal and cognitive effects proactively and to build an ecosystem of sustained collaboration among academia, industry, and policymakers.
Resolutions and action items
Establish and operationalize Birla AI Labs as both an internal AI service hub and an external research institution.
Scale AI solutions across the Aditya Birla Group’s businesses (Birla Estates, Aditya Birla Capital, Hindalco, Tantra, consumer brands).
Advance research on structured foundation models for time‑series and tabular data, including validation of market‑crash understanding.
Conduct and publish human‑centred AI studies (e.g., the Delhi University student study on curiosity and agency).
Deploy the AI‑native research and productivity platform beyond IIT Bombay to broader enterprise use.
Foster a collaborative AI ecosystem in India by engaging academia, industry partners, and policy makers for joint development and responsible governance.
Unresolved issues
The optimal depth and speed of AI adoption across highly regulated and capital‑intensive sectors remains ambiguous.
Specific frameworks or policies for ensuring responsible AI development and mitigating societal impacts have not been detailed.
How to coordinate and sustain long‑term collaboration among diverse stakeholders (academia, industry, government) is mentioned but not concretely outlined.
Suggested compromises
None identified
Thought Provoking Comments
AI is amplifying our cognitive ones. What we are living through is nothing less than a Cambrian explosion of possibilities.
This frames AI not just as a technological tool but as a transformative force comparable to the Industrial Revolution, introducing a macro‑historical perspective that elevates the conversation from corporate initiatives to societal evolution.
It shifts the tone from a personal introduction to a grand narrative, setting up the audience to view subsequent examples (Birla Estates, financial services, etc.) as part of a larger, epoch‑defining wave rather than isolated projects.
Speaker: Speaker 1
Birla AI Labs has a dual mandate: (1) serve the Aditya Birla Group as an apex AI body, and (2) operate as a frontier research lab creating original AI products for the open market.
The dual‑mandate concept bridges the often‑separate worlds of corporate AI deployment and cutting‑edge research, challenging the notion that large conglomerates cannot be innovators at the scientific frontier.
Introduces a new structural idea that guides the rest of the speech, prompting the audience to anticipate concrete business applications followed by research breakthroughs, thereby linking practical impact with thought leadership.
Speaker: Speaker 1
In our micro‑finance business, Tantra, AI can unlock at least 30 % productivity gains – meaning a loan officer can reach more people and a woman in a village gets access to capital she would not have had otherwise.
This ties AI deployment directly to social impact, moving the discussion from profit‑centric metrics to human‑centric outcomes, and challenges listeners to consider AI’s role in inclusive development.
Creates an emotional pivot point; after technical examples, the audience is reminded of the human stakes, deepening the conversation and reinforcing the speaker’s claim that AI can improve lives, not just margins.
Speaker: Speaker 1
Do these time‑series foundation models actually understand what a market crash is? Or are they just fitting curves? A researcher showed you can inject the signature of a historical crash into a model’s hidden states and watch the forecast shift accordingly.
Poses a provocative scientific question about model interpretability versus mere curve‑fitting, highlighting a frontier research challenge and inviting scrutiny of AI’s true understanding of complex phenomena.
Marks a turning point from business‑focused anecdotes to deep technical inquiry, signaling the research‑lab side of the dual mandate and prompting the audience to appreciate the rigor behind the claimed innovations.
Speaker: Speaker 1
We conducted a study with Delhi University students to measure how language‑model usage affects curiosity and cognitive agency. The results will be presented at King’s College this June. Industry too often leaves this to others, but we believe builders of AI must understand its human consequences as a core part of the enterprise.
Challenges the prevailing industry attitude that ethical and cognitive impacts are peripheral, asserting that responsibility for understanding AI’s societal effects belongs to the creators themselves.
Broadens the conversation to include ethics and cognition, prompting listeners to view AI development as a duty rather than a purely commercial venture, and setting up the later call for ecosystem collaboration.
Speaker: Speaker 1
No single institution, no matter how large or well‑resourced, can navigate this epoch alone. The journey from $4 trillion to $40 trillion will require an ecosystem that brings academia, industry and policy into genuine, sustained collaboration.
Calls for a systemic, multi‑stakeholder approach, challenging any siloed mindset and positioning collaboration as the essential catalyst for national AI leadership.
Serves as the concluding turning point, moving the speech from showcasing internal capabilities to a broader, collaborative vision, leaving the audience with a call to action that transcends the speaker’s own organization.
Speaker: Speaker 1
Overall Assessment

The discussion, though delivered by a single speaker, is driven forward by a series of strategically placed, thought‑provoking remarks. Each comment introduces a new dimension—historical context, structural innovation, social impact, scientific rigor, ethical responsibility, and ecosystem collaboration—that progressively expands the conversation’s scope. These pivots transform the speech from a routine corporate announcement into a compelling narrative about AI’s role in India’s economic future and its broader human implications, shaping the audience’s perception and setting a roadmap for collective action.

Follow-up Questions
Do time series foundation models actually understand what a market crash is, or are they merely fitting curves?
Determining whether models capture underlying market dynamics is crucial for reliable AI-driven financial forecasting and risk management.
Speaker: Speaker 1
How fast and how deep should we go with AI deployment given the ambiguity surrounding artificial intelligence?
Guides strategic pacing of AI integration across diverse business units, balancing potential benefits against ethical, operational, and regulatory risks.
Speaker: Speaker 1
Investigate the impact of language model usage on curiosity and cognitive agency among students.
Understanding AI’s effect on human cognition is essential for responsible product design, educational policy, and mitigating unintended societal consequences.
Speaker: Speaker 1
Develop and evaluate structured foundation models for time series and tabular data to power predictive intelligence across sectors.
Leveraging the vast amount of time‑series and tabular data can unlock predictive capabilities in finance, healthcare, supply chains, and energy, creating new economic value.
Speaker: Speaker 1
Create digital twins for smelters and furnaces and an AI layer to orchestrate coal‑sourced power ecosystems.
A digital twin coupled with AI orchestration could dramatically improve efficiency, safety, and sustainability in energy‑intensive manufacturing.
Speaker: Speaker 1
Assess AI‑driven productivity gains in micro‑finance (Tantra) and its social impact on underserved populations.
Quantifying both economic returns and social outcomes will validate the model’s scalability and inform responsible financial inclusion strategies.
Speaker: Speaker 1
Explore AI‑powered hyper‑personalized marketing and real‑time inventory intelligence for consumer brands.
These applications can reduce marketing costs, enhance customer experience, and optimize inventory, directly impacting brand competitiveness.
Speaker: Speaker 1
Study the design of an ecosystem that integrates academia, industry, and policy for AI development in India.
A collaborative ecosystem is needed to sustain AI leadership, ensure responsible innovation, and align regulatory frameworks with industry needs.
Speaker: Speaker 1
Evaluate the AI‑native research and productivity platform launched at IIT Bombay for effectiveness and scalability.
Assessing the platform’s impact on workflow efficiency will determine its value proposition for broader enterprise adoption and future productization.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden


Session at a glance: Summary, keypoints, and speakers overview

Summary

The AI Impact Summit opened with Speaker 1 thanking the keynote speaker and highlighting the growing global challenges of artificial intelligence and the need for fresh perspectives, while noting Sweden’s reputation for innovation and its role at the nexus of energy policy and AI infrastructure [1-9][10-13]. He introduced Her Excellency Ebba Busch, Deputy Prime Minister of Sweden, as a strategic partner in this dialogue [10-13].


In her address, Busch emphasized India’s status as the world’s largest and youngest democracy and argued that the Global South must be fully included in shaping AI governance and standards [19-21][26-28]. She described the long-term, trust-based partnership between Sweden and India, recalling historic cooperation and the shared vision of using AI to lift 1.4 billion people out of poverty [24-31][34-38].


Busch drew a parallel between the printing press and today’s AI revolution, explaining that every major technology passes through fear, then trust, legitimacy, and finally transformation, and warned that AI is not merely an algorithmic upgrade but a shift in control over energy, compute, data, and trust that will determine economic growth and democratic resilience [44-58][60-64][61-68][70-74].


Discussing data centres, she noted their high energy demand and public perception of them as “someone else’s internet,” but argued they can become local job anchors and renewable-energy hubs if managed correctly, and asserted that no nation can build resilient AI infrastructure alone, defining AI sovereignty as the ability to choose dependencies based on jurisdictional control, sovereign compute capacity, and strategic partner choice [76-85][86-89][97-105].


She positioned Sweden as a strategic partner, citing its abundant clean electricity, low-carbon AI training, and industrial depth in sectors such as semiconductor lithography, processor design, and 5G/6G networks, and outlined Sweden’s AI strategy that includes substantial public funding, a roadmap for sustained leadership, and workshops to embed trustworthy AI in the public sector [116-124][125-131][140-145]. Busch highlighted India’s scale and sovereign AI models that can serve diverse languages and sectors, arguing that combining India’s reach with Sweden’s engineering excellence will create inclusive, democratic AI [146-152]. She concluded that fear fades when people understand value, and that a collaborative, open, and competitive AI industry can empower citizens and avoid the pitfalls of past technological anxieties [152-158].


Keypoints


Major discussion points


AI’s legitimacy must be built on public trust and understanding.


Ebba Busch likens the current AI wave to the introduction of the printing press, describing a familiar emotional curve from fear to trust that eventually leads to societal transformation. She stresses that AI is not just an algorithmic upgrade but a shift in control over energy, compute, data, and trust, and that nations need to shape this transformation with clear values [44-60][61-69].


True AI sovereignty requires international cooperation and three concrete pillars.


She argues that no single nation can build resilient AI infrastructure alone; democracies must cooperate. Sovereignty is defined by (1) jurisdictional control of data, (2) sovereign compute capacity, and (3) strategic choice of partners, emphasizing that sovereignty is about choosing dependencies, not isolation [97-106][102-105].


Sweden-India partnership as a model of complementary strengths.


The speech outlines how Sweden’s abundant clean energy, industrial depth, and trusted institutions can combine with India’s massive scale and democratic development to create trustworthy, low-carbon AI infrastructure. Specific examples include Sweden’s low-carbon AI training footprint, its expertise in complex industrial systems, and India’s ability to develop sovereign AI models that serve 1.4 billion people [107-115][116-124][146-151].


Overall purpose / goal of the discussion


The address aims to persuade global leaders, especially those from the Global South, to view AI development as a collaborative, values-driven endeavor. By highlighting the need for legitimacy, outlining a cooperative sovereignty framework, and showcasing the Sweden-India alliance, the speaker seeks to mobilize coordinated action that ensures AI benefits are inclusive, democratic, and environmentally sustainable.


Overall tone and its evolution


– The opening is formal and celebratory, thanking the summit and emphasizing the excitement of shared AI insights [1-8].


– It then shifts to a cautious, analytical tone, using historical analogy (printing press) to warn of fear and mistrust and to stress the urgency of building legitimacy [44-60].


– The speaker moves to a constructive, persuasive tone, outlining concrete pillars of sovereignty and the necessity of cooperation [97-106].


– Finally, the tone becomes optimistic and inspirational, portraying the Sweden-India partnership as a hopeful blueprint for a trustworthy, inclusive AI future [107-115][146-151][152-157].


Throughout, the speech maintains a diplomatic and forward-looking demeanor, but it transitions from acknowledgment of challenges to a confident call for joint action.


Speakers

Speaker 1


– Role/Title: Event moderator / host (introducing the keynote speaker)[S1][S3]


– Area of Expertise: (unspecified in the transcript)


Ebba Busch


– Role/Title: Deputy Prime Minister and Minister for Energy, Business, and Industry, Kingdom of Sweden[S4][S5]


– Area of Expertise: AI governance, digital sovereignty, energy policy, AI infrastructure, international cooperation[S5]


Additional speakers:


(none identified beyond the listed speakers)


Full session report: Comprehensive analysis and detailed insights

The opening of the AI Impact Summit was led by Speaker 1, who thanked the organisers and the keynote speaker for delivering a “very, very interesting session” that offered fresh insights to all participants [1]. He highlighted the growing global awareness of AI-related challenges and argued that such sessions provide new perspectives that deepen understanding of both the difficulties and the future possibilities of artificial intelligence [5-8]. Emphasising Sweden’s reputation as a “quiet powerhouse of innovation” – citing companies such as Ericsson and Spotify – he positioned the country at the critical intersection of energy policy and AI infrastructure, a nexus that will confront every nation as data-centre demand strains national power grids [10-13]. He then introduced Her Excellency Deputy Prime Minister Ebba Busch as a strategic partner in this dialogue [9-10].


Deputy Prime Minister Busch began by expressing gratitude to the European Council, the Government of India, and the summit organizers, and noted the honour of speaking in “beautiful India” [14-17]. She outlined the three pillars of her address: the importance of the summit’s location, reflections on public legitimacy, and the need for cooperation to achieve AI sovereignty [18-19]. Stressing India’s status as the world’s largest and youngest democracy, she argued that the Global South must be fully included in shaping the rules of innovation, technology governance and global standards, because “your leadership matters” and “your perspective matters” for the future global order [20-21][26-28].


She highlighted the long-term, trust-based partnership between Sweden and India, recalling decades of joint industrial activity and the shared commitment to improve the lives of 1.4 billion people through AI-driven development and poverty reduction [24-31][34-38]. In doing so she declared, “Sweden is a proud friend of India” [92]. She also invoked the Samudra Manthan metaphor, describing the collaboration as a joint “churning of the cosmic ocean” that can generate boundless energy for both societies [84-86].


Using a historical analogy, Bush compared today’s AI boom to the introduction of the printing press in the 15th century, noting that every major technological shift follows an emotional curve from fear to trust, then legitimacy, and finally worldwide transformation [44-58]. She warned that AI is not merely an algorithmic upgrade but a fundamental shift involving control over energy, compute capacity, data and trust, and that nations mastering AI infrastructure will dictate future economic growth, industrial competitiveness and democratic resilience for decades [60-68][70-74]. She framed AI as “a power multiplier for human dignity” [102].


Turning to the practicalities of AI infrastructure, Bush described data centres as “very energy-intensive” and often perceived by citizens as “someone else’s internet using our electricity” [76-81]. She reframed this perception by arguing that, if correctly implemented, data centres can become long-term local job anchors, enable renewable-energy investments, and serve critical sectors such as hospitals, research, defence and industry – effectively becoming the “factories of the new economy” [82-89]. She linked this to the political challenge that “people do not vote for technology, they vote for outcomes” and asserted that policymakers must translate AI’s complexity into tangible benefits to earn electoral legitimacy [90-96].


Bush then asserted that no single nation can build resilient AI infrastructure alone; democracies must cooperate to achieve true AI sovereignty, which she defined as the ability to choose dependencies rather than being isolated [97-101]. She presented a three-pillar framework for sovereignty: (1) jurisdictional control over where data is stored and processed; (2) sovereign compute capacity for advanced models; and (3) strategic choice of partners based on shared values and strength [102-105]. This framework echoes international calls for multi-stakeholder governance to build legitimacy and trust in AI deployments [S42][S43].


She next described Sweden’s clean-energy advantage, noting that the country exports more electricity per capita than any other European nation and that AI training in Sweden generates roughly one-third the carbon footprint of typical U.S. hyperscaler operations, turning Sweden from an energy exporter into an “intelligence exporter” [116-120]. She added that Europe’s industrial depth cannot be bypassed in the AI stack, citing world-leading firms such as ASML in the Netherlands (extreme-ultraviolet lithography), ARM in the United Kingdom (processor architectures), SAP in Germany (enterprise systems) and Ericsson in Sweden (5G/6G networks), all essential components of the global AI stack [125-131]. Moreover, Sweden’s trusted institutions and political stability enable the creation of AI gigafactories that combine clean power, near-zero carbon emissions and industrial scale, positioning the Nordics as path-finders for future AI infrastructure [132-138].


In line with these capabilities, Bush announced Sweden’s national AI strategy, which commits substantial public funding to AI research, development and implementation, and outlines concrete steps toward sustained AI leadership [140-144]. The strategy is supported by an AI workshop programme aimed at helping the public sector adopt AI safely and efficiently, thereby moving from slogans to implementation and building trust through action [145].


Regarding India’s contribution, Bush praised the country’s ability to develop sovereign AI models that speak all of its languages and serve its diverse society, thereby ensuring genuine inclusion for 1.4 billion people [146-150]. She argued that AI tools empowering farmers, small businesses, teachers and doctors represent a transformative leap rather than mere innovation, and that the partnership between India’s scale and Sweden’s engineering excellence can create AI systems that strengthen democracy, drive sustainable growth and expand opportunity [151-152].


She emphasized that the future will be shaped not merely by those who build the largest models, but by those who build the most trusted systems [115-117]. Bush concluded that leaders’ task is not merely to regulate AI but to make it legitimate, understandable and beneficial. She then likened the eventual public acceptance of AI to electricity – “invisible, indispensable, but empowering” – and called for a collaborative, open, competitive, democratic and inclusive AI industry that empowers citizens and avoids the pitfalls of past technological anxieties, thereby shaping a future where AI serves humanity rather than the other way round [152-157]. The future of AI must empower our people and uphold democratic values.


Session transcript – Complete transcript of the session
Speaker 1

Thank you so much, Mr. Cristiano Amon, for that very, very interesting session. And I’m sure each one of us must have gained something, some new insights out of it. Are you all excited about such sessions, such keynote speakers? Louder yes would do better. Thank you. I think we all keep reading about AI. We all are aware of the challenges in front of the world when it comes to AI. But, capital but, B-U-T, such sessions are actually adding such new perspectives to our understanding of AI, the challenge, and also the future, what to expect in future. So I think it’s really time to thank our keynote speakers who are adding such great value to our understanding of artificial intelligence, as well as to this AI Impact Summit.

And ladies and gentlemen, now, it’s my honor to invite Her Excellency, Ebba Bush, Deputy Prime Minister and Minister for Energy, Business and Industry of Sweden. Sweden has long been a quiet powerhouse of innovation, from Ericsson to Spotify to some of Europe’s most promising AI startups. As Deputy Prime Minister, Ms. Ebba Bush is navigating the critical nexus between energy policy and AI infrastructure. Now, that’s a challenge I think every nation will face as data centers demand an ever-growing share of national power grids. Ladies and gentlemen, please join me in welcoming Deputy Prime Minister of Sweden, Her Excellency, Ebba Bush.

Ebba Bush

Thank you so much, Excellencies, distinguished guests, dear friends. Namaste, aap kaise hain. And let me begin by expressing my sincere gratitude to the European Council, to the government of India, and to the organizers of this important summit. It is truly an honor to be here in beautiful, beautiful India. Given this unique chance to address you all today, I would like to talk about three points. About why, first of all, it is important to be here, some reflections on public legitimacy, and finally, about cooperation and AI sovereignty. India today is not only the world’s largest democracy, it is a leading voice in shaping the future global order. Your leadership matters, your perspective matters, and the Global South must be fully included when we shape the rules of innovation, technology governance, and global standards.

I am here today as a European, as a proud European, and as a proud Swede that represents the second largest international delegation here at the AI Summit after France. That’s worth an applaud in itself. Thank you so much. The Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic.

It is long-term and built on trust. India is not only the world’s largest democracy, it is also the world’s youngest democracy. And I am impressed with the long-term vision of India for a better life for young people, with a commitment that stretches across generations. And Sweden shares this long-term commitment. Since India first gained independence, Swedish companies have worked alongside Indian partners, and we have grown together. And as India makes strategic investments in sovereign and democratic development, we have developed different AI models and advanced research ensuring that 1.4 billion people can benefit from AI. This is not only industrial policy. It is in many ways poverty reduction. It is empowerment. It is a development leap of historic proportions.

Sweden intends to be a reliable and innovative partner as India continues its economic rise. Prime Minister Modi often speaks of India as Vishwa Mitra, a friend to the world. Today, we stand at a new frontier where that friendship is more vital than ever, the frontier of artificial intelligence. Sweden is a proud friend of India. In the ancient scriptures, we read of the Samudra Manthan, the churning of the cosmic ocean. It teaches us that collaboration is the only way to truly unlock the deepest treasures. Today, the vast ocean of data is our samudra and AI is our churning rod. Clearly. Thank you. So as you understand, clearly, there are very, very good reasons why we are here and why this summit is taking place in India, in New Delhi.

And that brings me to the point of legitimacy. In 1450, with modern time telling, when the printing press was introduced, the reaction from the status quo was not excitement. It was fear. Power had long depended on being able to control information, and suddenly knowledge could scale. And if you look back at the arguments that were heard then, they are a bit familiar, actually. This will spread the wrong ideas. People won’t know what to trust. Society will lose control. And people, especially writers at the time, will lose their jobs. But the printing press wasn’t dangerous. What was dangerous was not understanding it. Those who understood it could soon reach a nation in only two weeks and a whole continent possibly in two months.

Every major technological shift follows the same sort of emotional curve. It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation. We are now living through another such moment. And artificial intelligence isn’t just another digital upgrade. It is a fundamental shift. AI is no longer about algorithms alone. It is about control of energy, compute capacity, data, and trust. Nations that master AI infrastructure will shape economic growth, industrial competitiveness, and democratic resilience for decades. It’s going to make a massive shift. Make no mistake, we are not digitalizing the old economy. We are building an entirely new global AI industry, one that will redefine the foundations of productivity, of healthcare, of defense, energy systems, and of course also public administration.

The nations that lead this transformation, they will prosper. Those that merely consume AI built elsewhere will fall behind. The future will not necessarily be decided by the ones that build the biggest models, but rather by the ones that build the most trusted systems. So for me, the question is not whether or not this transformation will happen. The question is who shapes it and on what values. And that is why I am here. So let’s talk a little bit about something else that is often misunderstood. Data centers. Because AI, much like fire, it is powerful. And in this sense, it is invisible. And it is very energy intensive.

Demanding energy-intense data centers, often in the countryside, rupturing forests and fields. To many citizens, data centers look like someone else’s internet using our electricity. At least that’s the debate in Sweden and, I know, in many other countries. But I believe that perception is incomplete. In reality, they can be long-term local job anchors if implemented and used correctly. They can enable renewable energy investments. They can be infrastructure for hospitals, for research, defense and industry. And they are the factories of the new economy. And this brings us to the core political challenge. People do not vote for technology. People vote for outcomes. A job, a hospital that works, energy they can afford.

If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible benefit. Fear turns into trust when we understand. And when understanding grows. So how do we get there? No nation can build resilient AI infrastructure alone. Democracies have to cooperate. AI sovereignty does not mean isolation. It means choosing your dependencies. To be able to choose our dependencies and the values that shape global AI, we also need a measure of sovereignty over AI. True sovereignty, the way I see it, rests on three pillars. First, jurisdictional control, knowing where your data is stored and processed. Second, infrastructure capacity, having sovereign compute for advanced models. And third, strategic choice, selecting partners from a place of strength, not dependency.

And in a turbulent world, you need to choose your friends carefully. Sweden is choosing India. India provides the incredible scale and speed, the very engine of this movement. Europe and Sweden can provide precision and trust, the filter that ensures that what we extract is the amrit, the nectar of progress for all. Just as Lord Vishwakarma unified divine vision with practical tools, we must unify the human heart with machine power. We must not see AI as a replacement for the human spirit, but as a power multiplier for human dignity. And when we combine India’s digital scale with Sweden’s systematic trust, we do more than build code. We build a future where technology never outweighs. Sweden offers Europe and all of our global partners what the AI transition actually needs.

So now you’ll have a little bit of Swedish bragging, which is not that very common. But first of all, we have an abundance of clean and reliable energy. We export more electricity per capita than any other European country. AI is becoming the most efficient way to export energy without exporting electrons. In Sweden, AI training can run at roughly one third of the carbon footprint of typical US hyperscaler operations. This transforms us from energy exporter to intelligence exporter, a fundamentally more valuable position. But energy alone is not enough. And that brings me to the second Swedish strength, industrial depth. Sweden has deep expertise in scaling complex industrial systems. We are modernizing traditional industry while building new AI infrastructure.

And Europe cannot be underestimated. You cannot bypass the European Union in the AI stack. Consider just ASML in the Netherlands, the only company in the world producing extreme ultraviolet lithography machines essential for advanced chips. Or ARM in the United Kingdom, whose architectures power most of the world’s smartphones and an increasing share of data center processors. Or SAP in Germany, embedded in the mission-critical enterprise systems of the global economy. And of course, Ericsson from Sweden, a global leader in 5G and a frontrunner in 6G, the backbone of edge computing and AI-enabled networks. You cannot build the AI ecosystems without Europe. And you shouldn’t, because we’ll be a reliable partner. Third, but not least, trusted institutions.

When you make a deal with a Swede, that is a handshake that you can trust. And Sweden offers the ability to move from strategy to execution. In the Nordics, Sweden, Norway, Finland and Denmark, we are now building AI gigafactories, manufacturing intelligence at industrial scale with near zero carbon energy. We combine clean power, political stability, rule of law, technological sophistication and a culture of trust. We see ourselves as a sort of pathfinder, helping define the routes that will shape global AI infrastructure for decades. At the same time, we are making strategic commitments. During this parliamentary term, we have committed a substantial amount of funds to AI research, AI development and implementation, therefore ensuring that Sweden seizes the economic and societal benefits of this transformative technology.

Building on that foundation, we are today presenting in Sweden an AI strategy with high ambitions. The strategy will outline concrete steps that will steer Sweden towards sustained AI leadership. Our strategy not only demonstrates the scale of current commitment, but also maps a path forward for Sweden’s future. And we have launched an AI workshop to help the public sector adopt AI safely and efficiently, because trust is built not by slogans, but by implementation. And this implementation brings me back to India. India understands scale. India understands development. Your investments in sovereign AI models ensure that AI speaks all of your languages, reflects your society and serves your people. This is what real inclusion truly looks like. When 1.4 billion people gain access to AI tools that empower farmers, small businesses, teachers and doctors, that is not just innovation, that is transformation.

Partnerships between India and Sweden combine scale with engineering excellence, market dynamism with institutional trust. Together, we can ensure AI strengthens democracy, drives sustainable growth and expands opportunity. I’d like to sum up by saying people fear what they do not understand. But what people understand and see value in, they will defend. Our task as leaders is not merely to regulate AI, it is to make it legitimate, to make it understandable, and most importantly, to make it beneficial. If we succeed, AI will not be feared like the printing press. It will be embraced like electricity, invisible, indispensable, but empowering. Let us shape this new industry together, open, competitive, democratic, and inclusive. The future of AI must empower our people and

Related Resources – Knowledge base sources related to the discussion topics (28)
Factual Notes – Claims verified against the Diplo knowledge base (7)
Confirmed (high)

“Speaker 1 opened the AI Impact Summit, thanked the organisers and the keynote speaker for delivering a “very, very interesting session””

The knowledge base records a moderator/host introducing the session and thanking partners at the AI Impact Summit in India, confirming that a speaker performed this role [S67].

Confirmed (high)

“Deputy Prime Minister Ebba Bush was introduced as a strategic partner in the dialogue”

Ebba Busch is listed as Deputy Prime Minister of Sweden and participated in the opening/plenary segment of the summit, confirming her presence and senior role [S77].

Confirmed (high)

“The summit took place in India and the Deputy Prime Minister thanked the Government of India and the European Council”

Multiple sources identify the event as the India AI Impact Summit and note Indian government involvement, confirming the Indian location and the relevance of thanking the Government of India [S19] and [S68].

Additional Context (medium)

“Sweden is positioned as a “quiet powerhouse of innovation” and a strategic partner for AI infrastructure”

Sweden’s strategic choice to partner with India and its focus on AI and digital sovereignty are described in a keynote by Ebba Busch, providing background for Sweden’s innovation stance [S5]; a separate report of a major Microsoft investment in Swedish cloud and AI infrastructure further illustrates Sweden’s AI capabilities [S73].

Confirmed (medium)

“The summit featured a keynote speaker (unspecified) and a “Powering AI Global Leaders” session”

The knowledge base lists a “Powering AI Global Leaders Session” at the AI Impact Summit, confirming the existence of a keynote-style session where a speaker thanked partners [S67].

Additional Context (low)

“The summit’s agenda includes discussions on AI literacy, global principles, data governance and equitable access”

A separate session summary highlights emphasis on AI literacy, global principles and data governance, adding detail to the broader thematic focus of the summit [S70].

Additional Context (low)

“The United States and India signed the Pax Silica Declaration at the AI Impact Summit”

The knowledge base notes a declaration marking a historic US-India partnership during the India AI Impact Forum, providing additional context to the summit’s international cooperation dimension [S68].

External Sources (85)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Ebba Bush- Deputy Prime Minister and Minister for Energy, Business, and Industry of the Kingdom of Sweden -Sweden- Rep…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — These key comments collectively transform what could have been a standard diplomatic technology speech into a sophistica…
S6
Webinar – session 1 — Emerging technologies, particularly artificial intelligence, were identified as presenting both opportunities and challe…
S7
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Charles Bradley is hosting a session that aims to explore the potential of artificial intelligence (AI) in promoting gen…
S8
Conversation: 01 — Artificial intelligence
S9
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S10
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S11
Keynote by Marcus Wallenberg Chairman SEB & Saab — This comment is insightful because it identifies complementary strengths between two nations’ AI approaches – Sweden’s r…
S12
A Digital Future for All (afternoon sessions) — – Ebba Busch – Minister for Energy, Business and Industry and Deputy Prime Minister of Sweden Ebba Busch: Excellencies,…
S13
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S14
Panel Discussion Data Sovereignty India AI Impact Summit — You will have, and of course capital flows, somebody will have lots of capital and somebody will be waiting for that cap…
S15
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Legal and regulatory | Economic Technological sovereignty means that we can always choose with who and how we are opera…
S16
AI diplomacy — Finally, we must insist on transparency. Much of the work today is focused on solving the “black box” problem by creatin…
S17
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S18
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S19
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The AI race consists of multiple marathons, not just one sprint, and we are only at the starting line. AI must serve peo…
S20
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S21
Open Forum #30 High Level Review of AI Governance Including the Discussion — The EU’s international engagement is built on three pillars: trust/regulation, excellence/innovation, and international …
S22
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S23
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S24
AI Innovation in India — No meaningful disagreements were present. This was a celebratory and supportive environment where speakers complemented …
S25
AI for Good Impact Awards — The tone is celebratory and enthusiastic throughout, with host LJ Rich maintaining an upbeat, sometimes humorous demeano…
S26
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S27
Language (and) diplomacy — A fourth function of historical analogies is as an ‘anti-depressant; a colourful imagery which neutralises a boring and …
S28
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S29
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — However, the discussion was not solely encompassed by scepticism. An audience member provided a positive outlook, sugges…
S30
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S31
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — AI has become accessible to many—you can create an AI chatbot in hours—but unlocking its potential requires far more tha…
S32
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Julie Sweet emphasizes that successful AI scaling requires leaders to have a much deeper understanding of the technology…
S33
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — Parliaments are pivotal to ensuring coherence between domestic legislation, established human rights, and evolving inter…
S34
Internet’s Environmental Footprint: towards sustainability | IGF 2023 WS #21 — Data centers consume significant amounts of power
S36
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -AI’s Massive Energy Demands and Infrastructure Challenges: The discussion highlighted that AI data centers are becoming…
S37
Democratizing AI Building Trustworthy Systems for Everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S38
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — This comment highlights the potential unintended consequences of regulation and policy discussions, suggesting they migh…
S39
WS #283 AI Agents: Ensuring Responsible Deployment — Anne McCormick: Thank you, Anne McCormick, EY, Global Head of Public Policy. I’m interested in this context of policy no…
S40
AI diplomacy — Finally, we must insist on transparency. Much of the work today is focused on solving the “black box” problem by creatin…
S41
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The AI race consists of multiple marathons, not just one sprint, and we are only at the starting line. AI must serve peo…
S42
AI as critical infrastructure for continuity in public services — Multi-stakeholder governance involving government, civil society, technical community, and private sector is crucial for…
S43
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S45
Open Forum #30 High Level Review of AI Governance Including the Discussion — The EU’s international engagement is built on three pillars: trust/regulation, excellence/innovation, and international …
S46
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — India’s approach, according to the speaker, centers on three pillars of sovereignty: data sovereignty, infrastructure so…
S47
AI That Empowers Safety Growth and Social Inclusion in Action — “So we’ve engaged with member states and different stakeholders about their priorities, and let me bring to your attenti…
S48
AI Safety at the Global Level Insights from Digital Ministers Of — Bengio argued that true sovereignty means retaining the ability to make decisions and succeed economically and political…
S49
Keynote by Marcus Wallenberg Chairman SEB & Saab — He explained that Sweden has taken a research-focused approach to AI development through the WASP program, which his fam…
S50
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S51
AI for Good Impact Awards — The tone is celebratory and enthusiastic throughout, with host LJ Rich maintaining an upbeat, sometimes humorous demeano…
S52
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S53
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
2 arguments · 152 words per minute · 247 words · 96 seconds
Argument 1
AI sessions provide new perspectives on AI challenges and future (Speaker 1)
EXPLANATION
Speaker 1 emphasizes that while many are already aware of AI challenges, the sessions bring fresh viewpoints that deepen understanding of both the challenges and the future trajectory of AI. This highlights the educational value of such gatherings.
EVIDENCE
Speaker 1 remarks that although everyone knows the challenges AI poses, “such sessions are actually adding such new perspectives to our understanding of AI, the challenge, and also the future, what to expect in future” [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Speaker 1’s presentation is described as delivering paradigm-shifting insights and a forward-looking vision, illustrating the added perspectives of the sessions [S10].
MAJOR DISCUSSION POINT
AI sessions provide new perspectives on AI challenges and future (Speaker 1)
AGREED WITH
Ebba Busch
Argument 2
High‑level keynote speakers add great value to understanding artificial intelligence (Speaker 1)
EXPLANATION
The speaker thanks the keynote presenters, stating that their expertise significantly enriches the audience’s grasp of artificial intelligence and the summit’s overall impact.
EVIDENCE
Speaker 1 says, “So I think it’s really time to thank our keynote speakers who are adding such great value to our understanding of artificial intelligence, as well as to this AI Impact Summit” [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Keynote remarks at the summit are highlighted as enriching participants’ grasp of AI, confirming their high value [S9].
MAJOR DISCUSSION POINT
High‑level keynote speakers add great value to understanding artificial intelligence (Speaker 1)
Ebba Busch
12 arguments · 125 words per minute · 1934 words · 924 seconds
Argument 1
Sweden‑India partnership is strategic, long‑term, and built on trust (Ebba Busch)
EXPLANATION
Ebba Busch describes the Sweden‑India collaboration as a deliberate, enduring alliance founded on mutual confidence. She positions it as a cornerstone for future AI cooperation.
EVIDENCE
She states, “The Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic” and follows with “It is long-term and built on trust” [24-25].
MAJOR DISCUSSION POINT
Sweden‑India partnership is strategic, long‑term, and built on trust (Ebba Busch)
Argument 2
Combining India’s scale with Sweden’s precision and trust will shape the global AI frontier (Ebba Busch)
EXPLANATION
The speaker argues that India’s massive data and computational scale, paired with Sweden’s reputation for precision and trustworthy systems, can jointly define the next phase of global AI development.
EVIDENCE
She notes, “India provides the incredible scale and speed, the very engine of this movement” and that “Europe and Sweden can provide precision and trust, the filter that ensures that what we extract is the amrit, the nectar of progress for all” [108-109].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Busch explicitly pairs India’s scale and speed with Sweden’s precision and trust as a formula for global AI leadership [S5].
MAJOR DISCUSSION POINT
Combining India’s scale with Sweden’s precision and trust will shape the global AI frontier (Ebba Busch)
Argument 3
Technological shifts follow a curve: fear → trust → legitimacy → transformation (Ebba Busch)
EXPLANATION
Drawing on the historical example of the printing press, Busch outlines a recurring pattern where new technologies first provoke fear, then gain trust, achieve legitimacy, and finally drive widespread transformation.
EVIDENCE
She recounts the reaction to the printing press (fear of losing control and of losing jobs) and explains that “Every major technological shift follows the same sort of emotional curve. It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation” [45-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She references the printing-press analogy and outlines the fear-trust-legitimacy-transformation trajectory for new technologies [S5].
MAJOR DISCUSSION POINT
Technological shifts follow a curve: fear → trust → legitimacy → transformation (Ebba Busch)
AGREED WITH
Speaker 1
Argument 4
AI legitimacy requires public understanding and clear, tangible benefits (Ebba Busch)
EXPLANATION
Busch stresses that citizens vote for concrete outcomes, not technology itself, so AI must be presented in ways that demonstrate direct, understandable advantages to gain legitimacy.
EVIDENCE
She observes that “People do not vote for technology. People vote for outcomes” and that “If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible benefit” while noting that “Fear turns into trust when we understand” [90-94].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speech stresses that technical excellence must be translated into concrete human benefits to secure democratic legitimacy [S5].
MAJOR DISCUSSION POINT
AI legitimacy requires public understanding and clear, tangible benefits (Ebba Busch)
Argument 5
Data centers are energy‑intensive but can become local job anchors and enable renewable energy investments (Ebba Busch)
EXPLANATION
While acknowledging the high energy demand of AI data centers, Busch argues they can also generate local employment, support renewable energy projects, and serve critical public services if properly integrated.
EVIDENCE
She describes data centers as “very energy intensive” and then lists their potential: “They can be long term local job anchors if implemented and used correctly. They can enable renewable energy investments. They can be infrastructure for hospitals, for research, defense and industry” [76-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Data centers are framed as “factories of the new economy” that can create long-term jobs and support renewable projects [S5].
MAJOR DISCUSSION POINT
Data centers are energy‑intensive but can become local job anchors and enable renewable energy investments (Ebba Busch)
Argument 6
Nations that master AI infrastructure will dictate economic growth, industrial competitiveness, and democratic resilience (Ebba Busch)
EXPLANATION
Busch claims that control over AI compute, data, and energy will determine which countries lead economically, maintain an industrial edge, and preserve democratic institutions for decades to come.
EVIDENCE
She states, “Nations that master AI infrastructure will shape economic growth, industrial competitiveness, and democratic resilience for decades” and adds that “It’s going to make a massive shift” [64-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Busch claims control over AI compute, data and energy will shape economic and democratic outcomes for decades [S5].
MAJOR DISCUSSION POINT
Nations that master AI infrastructure will dictate economic growth, industrial competitiveness, and democratic resilience (Ebba Busch)
Argument 7
True AI sovereignty rests on three pillars: jurisdictional control, sovereign compute capacity, and strategic partner selection (Ebba Busch)
EXPLANATION
Busch outlines a framework for AI sovereignty, emphasizing control over where data is processed, having domestic high‑performance compute, and choosing partners from a position of strength.
EVIDENCE
She enumerates the three pillars: “First, jurisdictional control, knowing where your data is stored and processed. Second, infrastructure capacity, having sovereign compute for advanced models. And third, strategic choice, selecting partners from a place of strength, not dependency” [102-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She enumerates jurisdictional control, sovereign compute, and strategic partner choice as the three pillars of AI sovereignty [S5].
MAJOR DISCUSSION POINT
True AI sovereignty rests on three pillars: jurisdictional control, sovereign compute capacity, and strategic partner selection (Ebba Busch)
Argument 8
Sovereignty does not mean isolation; it means choosing dependencies that align with national values (Ebba Busch)
EXPLANATION
She clarifies that AI sovereignty is about selective interdependence, ensuring that any external reliance matches a country’s strategic values rather than being forced into isolation.
EVIDENCE
She says, “AI sovereignty does not mean isolation. It means choosing your dependencies” and adds that true sovereignty rests on “choosing our dependencies and the values that shape global AI” [99-101].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Busch clarifies that AI sovereignty is about selective interdependence, not isolation, a view echoed in a panel discussion on data sovereignty [S5][S14].
MAJOR DISCUSSION POINT
Sovereignty does not mean isolation; it means choosing dependencies that align with national values (Ebba Busch)
Argument 9
Sweden offers abundant clean energy, allowing AI training with roughly one‑third the carbon footprint of typical US hyperscalers (Ebba Busch)
EXPLANATION
Busch highlights Sweden’s surplus of renewable electricity, noting that AI model training in Sweden emits far less CO₂ than comparable operations in the United States, positioning Sweden as an “intelligence exporter”.
EVIDENCE
She notes that Sweden “exports more electricity per capita than any other European country” and that “AI training can run at roughly one third of the carbon footprint of typical US hyperscaler operations” [116-120].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sweden’s surplus renewable electricity enables AI training with about one-third the CO₂ emissions of US hyperscalers [S5].
MAJOR DISCUSSION POINT
Sweden offers abundant clean energy, allowing AI training with roughly one‑third the carbon footprint of typical US hyperscalers (Ebba Busch)
Argument 10
European industrial depth (ASML, ARM, SAP, Ericsson) and Sweden’s trusted institutions enable reliable AI development and deployment (Ebba Busch)
EXPLANATION
She points to Europe’s strong industrial ecosystem (semiconductor lithography, processor design, enterprise software, and telecom equipment) as well as Sweden’s culture of trust, which together support robust AI infrastructure.
EVIDENCE
She cites examples such as “ASML in the Netherlands, ARM in the United Kingdom, SAP in Germany, and Ericsson from Sweden” as key components of the AI stack and adds that “when you make a deal with a Swede, that is a handshake that you can trust” [123-131].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote cites ASML, ARM, SAP, Ericsson and Sweden’s trust culture as foundations for a robust AI stack [S5].
MAJOR DISCUSSION POINT
European industrial depth (ASML, ARM, SAP, Ericsson) and Sweden’s trusted institutions enable reliable AI development and deployment (Ebba Busch)
Argument 11
AI should empower 1.4 billion Indians, providing tools for farmers, small businesses, teachers, and doctors (Ebba Busch)
EXPLANATION
Busch argues that AI must be inclusive, delivering practical applications that improve livelihoods across agriculture, entrepreneurship, education, and healthcare for the vast Indian population.
EVIDENCE
She states that “when 1.4 billion people gain access to AI tools that empower farmers, small businesses, teachers and doctors, that is not just innovation, that is transformation” [147-149].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Summit discussions highlight India’s scale as a model for inclusive AI that reaches farmers, entrepreneurs and public services [S4].
MAJOR DISCUSSION POINT
AI should empower 1.4 billion Indians, providing tools for farmers, small businesses, teachers, and doctors (Ebba Busch)
Argument 12
The goal is to shape an AI industry that is open, competitive, democratic, and inclusive, avoiding fear and fostering empowerment (Ebba Busch)
EXPLANATION
She calls for an AI ecosystem built on openness, competition, democratic values, and inclusivity, stressing that public understanding will replace fear with empowerment.
EVIDENCE
She concludes with, “Let us shape this new industry together, open, competitive, democratic, and inclusive. If we succeed, AI will not be feared like the printing press. It will be embraced like electricity, invisible, indispensable, but empowering” [152-157].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Busch calls for an open, competitive, democratic and inclusive AI ecosystem, likening future acceptance to electricity [S5].
MAJOR DISCUSSION POINT
The goal is to shape an AI industry that is open, competitive, democratic, and inclusive, avoiding fear and fostering empowerment (Ebba Busch)
Agreements
Agreement Points
AI is a transformative shift that requires new perspectives and deeper understanding to navigate its challenges and future impact
Speakers: Speaker 1, Ebba Busch
AI sessions provide new perspectives on AI challenges and future (Speaker 1); Technological shifts follow a curve: fear → trust → legitimacy → transformation (Ebba Busch); AI is a fundamental shift that will reshape economies, industry and democratic resilience (Ebba Busch)
Both speakers stress that AI represents a major, transformative change and that gaining fresh insights, whether through sessions, historical analogies, or broader analysis, is essential for societies to grasp its challenges and future trajectory [7][58-60][64-70].
POLICY CONTEXT (KNOWLEDGE BASE)
The consensus mirrors observations that AI’s rapid accessibility demands a cultural shift and deeper expertise beyond prior digital transformations, as highlighted in forecasts for 2025 and World Economic Forum discussions on scaling AI [S31][S32].
Data centers powering AI are energy‑intensive and pose a significant challenge for national power grids, but can also be leveraged for broader societal benefits
Speakers: Speaker 1, Ebba Busch
Data centers demand an ever‑growing share of national power grids (Speaker 1); Data centers are very energy intensive but can become local job anchors and enable renewable energy investments (Ebba Busch)
Both speakers acknowledge that AI data centres consume large amounts of electricity, creating pressure on national grids, while also highlighting their potential to generate jobs and support renewable energy when properly integrated [12][76-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Energy consumption of AI data centers has been flagged as a major sustainability issue at multiple IGF sessions, underscoring the need for policy measures to manage grid impacts while exploring societal benefits of the infrastructure [S34][S35][S36].
Similar Viewpoints
Both speakers portray Sweden as a key, reliable partner in the AI ecosystem, emphasizing its innovation heritage (Ericsson, Spotify) and its clean‑energy and industrial strengths that can underpin global AI collaboration [10-13][116-124].
Speakers: Speaker 1, Ebba Busch
Sweden is a strategic, innovative partner with strong industrial and energy capabilities (Speaker 1); Sweden offers abundant clean energy, industrial depth and trusted institutions that support AI development (Ebba Busch)
Unexpected Consensus
The need for trust and legitimacy in AI adoption
Speakers: Speaker 1, Ebba Busch
Sessions add new perspectives that help audiences understand AI challenges (Speaker 1); Fear turns into trust when people understand AI, and legitimacy requires clear, tangible benefits (Ebba Busch)
While Speaker 1’s remarks focus on the educational value of the summit, the underlying message aligns with Ebba Busch’s emphasis that public understanding and trust are prerequisites for AI legitimacy, an alignment not obvious from the moderator’s brief introduction [7][90-94].
POLICY CONTEXT (KNOWLEDGE BASE)
Building trust is identified as essential for responsible AI deployment, with calls for deeper understanding, transparent governance, and open-source approaches shaping policy debates on legitimacy and regulation [S32][S38][S37][S39].
Overall Assessment

The speakers show a solid consensus that AI is a disruptive, transformative technology requiring fresh insights, collaborative international effort, and trustworthy implementation. Both highlight Sweden’s strategic role and the energy challenges of AI data centres, while also converging on the importance of public understanding to achieve legitimacy.

Moderate to high consensus: there is clear agreement on the transformative nature of AI, the necessity of cooperation and trust, and on Sweden’s contribution, suggesting a shared strategic outlook that can facilitate coordinated policy and investment actions.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript shows strong alignment rather than conflict. Speaker 1 offers a brief, appreciative framing of AI sessions, while Ebba Busch delivers a detailed policy vision covering sovereignty, partnership, energy, and societal impact. The only point of divergence is the level of detail and focus, not substantive disagreement.

Low – the speakers largely concur on the significance of AI and the need for collaboration, implying that policy discussions can proceed without major contention on the core issues.

Partial Agreements
Both speakers acknowledge that AI presents significant challenges and that understanding and collaboration are essential. Speaker 1 notes that “we all are aware of the challenges in front of the world when it comes to AI” and that sessions add new perspectives to that understanding [6]. Ebba Busch stresses that AI is a fundamental shift and that “no nation can build resilient AI infrastructure alone. Democracies have to cooperate” [97-98], and that mastering AI infrastructure will shape economic and democratic outcomes [64-70]. Thus they share the goal of recognizing AI’s importance and the need for collective effort, though Speaker 1 focuses on the educational value of sessions while Busch emphasizes strategic sovereignty and cooperation.
Speakers: Speaker 1, Ebba Busch
AI sessions provide new perspectives on AI challenges and future (Speaker 1); Nations that master AI infrastructure will dictate economic growth, industrial competitiveness, and democratic resilience (Ebba Busch); No nation can build resilient AI infrastructure alone, democracies have to cooperate (Ebba Busch)
Takeaways
Key takeaways
AI-focused sessions and high‑level keynote speakers provide fresh perspectives on AI challenges and future directions.
The Sweden‑India partnership is portrayed as strategic, long‑term, and built on mutual trust, combining India’s scale with Sweden’s precision and trustworthiness.
Technological revolutions follow a fear → trust → legitimacy → transformation curve; AI legitimacy depends on public understanding and clear, tangible benefits.
AI infrastructure, especially data centers, is energy‑intensive but can become a local job anchor and support renewable energy if managed correctly.
Control over AI infrastructure will shape economic growth, industrial competitiveness, and democratic resilience; nations must aim for AI sovereignty.
True AI sovereignty rests on three pillars: jurisdictional control of data, sovereign compute capacity, and strategic partner selection aligned with national values.
Sweden contributes abundant clean energy (a lower carbon footprint for AI training) and trusted institutions, while Europe’s industrial depth (ASML, ARM, SAP, Ericsson) underpins the global AI stack.
The vision is an inclusive, democratic AI future that empowers billions (e.g., 1.4 billion Indians) with tools for agriculture, business, education, and healthcare.
Resolutions and action items
Sweden will roll out a national AI strategy outlining concrete steps toward sustained AI leadership.
The Swedish government has committed substantial funding for AI research, development, and implementation.
An AI workshop has been launched to help the public sector adopt AI safely and efficiently.
Sweden plans to build AI gigafactories powered by clean, low‑carbon energy, positioning itself as an intelligence exporter.
India and Sweden will deepen cooperation on AI models, leveraging India’s scale and Sweden’s engineering and trust frameworks.
Unresolved issues
How to effectively translate AI complexity into tangible benefits that resonate with voters and the broader public.
Specific mechanisms for ensuring jurisdictional control and data residency in cross‑border AI collaborations.
Balancing the energy demands of data centers with local environmental and community concerns.
Defining concrete criteria for strategic partner selection to maintain AI sovereignty without isolation.
Operational details of the India‑Sweden AI partnership, including governance, funding, and intellectual property arrangements.
Suggested compromises
AI sovereignty is framed not as isolation but as selective dependency: choosing partners that align with national values while still collaborating.
Data centers can be positioned as local job anchors and renewable‑energy enablers, addressing community concerns while meeting AI compute needs.
Thought Provoking Comments
Every major technological shift follows the same sort of emotional curve. It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation. We are now living through another such moment with AI.
She connects the current AI boom to historical patterns like the printing press, framing AI as a societal transition rather than just a technical upgrade. This analogy broadens the discussion to cultural and political dimensions.
This comment reframed the conversation from a purely technical focus to a historical‑sociological perspective, prompting the audience to consider AI’s broader societal impact and setting up later points about legitimacy and public trust.
Speaker: Ebba Busch
AI sovereignty does not mean isolation. It means choosing your dependencies. True sovereignty rests on three pillars: jurisdictional control, infrastructure capacity, and strategic choice of partners.
Introduces a concrete framework for national AI policy, challenging the simplistic notion that sovereignty is about self‑sufficiency alone.
The framework opened a new line of discussion about how countries can balance collaboration with autonomy, influencing the subsequent emphasis on Sweden‑India partnership and the role of Europe in the AI stack.
Speaker: Ebba Busch
Data centers are often seen as foreign internet consuming our electricity, but they can be long‑term local job anchors, enable renewable energy investments, and serve hospitals, research, defense, and industry.
She reframes a common environmental and social criticism into an opportunity narrative, adding nuance to the debate on AI infrastructure.
This shifted the tone from fear of energy consumption to a constructive view of data centers as economic and social assets, paving the way for arguments about political legitimacy and tangible benefits for citizens.
Speaker: Ebba Busch
People do not vote for technology. People vote for outcomes—jobs, affordable energy, functional hospitals. If AI is to become electable, policymakers must translate complexity into tangible benefit.
Highlights the democratic legitimacy challenge of AI, linking technical deployment to electoral politics and public expectations.
Introduced a political dimension that deepened the analysis, leading to the call for making AI understandable and beneficial, and reinforcing the earlier point about trust and legitimacy.
Speaker: Ebba Busch
Sweden can export intelligence: AI training here runs at roughly one third the carbon footprint of a typical US hyperscaler operation, turning us from an energy exporter to an intelligence exporter.
Provides a concrete, data‑driven claim that positions AI as a strategic economic asset tied to sustainability, challenging the notion that AI is purely a cost center.
Supported the argument that clean energy combined with AI expertise creates a competitive advantage, reinforcing the earlier sovereignty discussion and justifying Sweden’s role as a partner for India.
Speaker: Ebba Busch
If we succeed, AI will not be feared like the printing press. It will be embraced like electricity: invisible, indispensable, but empowering.
Concludes with a powerful vision that synthesizes earlier themes of fear, trust, legitimacy, and empowerment, offering a hopeful narrative for the future.
Served as a rhetorical climax that unified the discussion’s strands, leaving the audience with a memorable framing of AI’s potential societal role.
Speaker: Ebba Busch
Overall Assessment

The discussion was shaped primarily by Ebba Busch’s remarks, which moved the conversation from a generic appreciation of AI to a nuanced exploration of its historical parallels, geopolitical sovereignty, infrastructural opportunities, and democratic legitimacy. Her historical analogy set the stage for a deeper analysis of trust and legitimacy, while the introduction of the three‑pillar sovereignty framework and the reframing of data centers turned potential criticisms into strategic opportunities. By linking AI outcomes to electoral politics and highlighting Sweden’s low‑carbon AI advantage, she broadened the dialogue to include economic, environmental, and political dimensions. The concluding vision tied these threads together, leaving the audience with a cohesive, forward‑looking narrative that reoriented the summit’s focus toward collaborative, trustworthy, and inclusive AI development.

Follow-up Questions
How can policymakers translate AI complexity into tangible benefits that voters can understand and support?
She highlighted the challenge of making AI outcomes understandable to the public, noting that people vote for outcomes, not technology.
Speaker: Ebba Busch
What metrics or frameworks can assess AI sovereignty across jurisdictional control, infrastructure capacity, and strategic choice?
She emphasized the need for a measure of sovereignty over AI and outlined three pillars, indicating a gap in concrete assessment tools.
Speaker: Ebba Busch
How can data centers be designed to serve as long‑term local job anchors, enable renewable energy investments, and minimize environmental impact?
She discussed misconceptions about data centers and suggested they could provide local benefits if implemented correctly, implying further study is needed.
Speaker: Ebba Busch
What mechanisms ensure AI models are inclusive, reflect diverse societies, and serve local languages and contexts?
She stressed the importance of sovereign AI models that speak all languages and serve people, pointing to a need for research on inclusivity.
Speaker: Ebba Busch
What international cooperation models allow democracies to build resilient AI infrastructure without creating harmful dependencies?
She stated that no nation can build resilient AI alone and that democracies must cooperate, indicating a need to explore cooperative frameworks.
Speaker: Ebba Busch
What are best practices for constructing AI gigafactories with near‑zero carbon energy?
She mentioned Nordic AI gigafactories that combine clean power and industrial scale, suggesting further investigation into their design and operation.
Speaker: Ebba Busch
How can trust be operationalized in AI deployments, moving from slogans to concrete implementation?
She noted that Sweden runs AI workshops to help the public sector adopt AI safely, highlighting a need to study how trust translates into practice.
Speaker: Ebba Busch
What role can European industrial firms (e.g., ASML, ARM, SAP, Ericsson) play in the global AI stack, and what research is needed to integrate their technologies?
She referenced these companies as essential to AI infrastructure, implying further study on their integration and impact.
Speaker: Ebba Busch
How can AI regulation be shaped to ensure legitimacy, public understanding, and acceptance similar to historical technology transitions?
She compared AI to the printing press and electricity, suggesting research on regulatory approaches that build legitimacy.
Speaker: Ebba Busch
What is the comparative carbon footprint of AI training in Sweden versus typical US hyperscaler operations, and how can this gap be closed?
She claimed Swedish AI training has roughly one‑third the carbon footprint of US operations, indicating a need for detailed comparative analysis.
Speaker: Ebba Busch
What are the socioeconomic impacts of AI‑driven tools on farmers, small businesses, teachers, and doctors in India?
She highlighted AI empowering 1.4 billion people in India, pointing to a need for research on actual outcomes in these sectors.
Speaker: Ebba Busch
How effective are AI workshops in the public sector at ensuring safe and efficient AI adoption?
She mentioned launching an AI workshop for the public sector, suggesting evaluation of its impact and best practices.
Speaker: Ebba Busch

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Digital Democracy Leveraging the Bhashini Stack in the Parliament


Session at a glance: summary, keypoints, and speakers overview

Summary

The session focused on building an inclusive, open-source voice AI ecosystem for India, emphasizing the need to continuously adapt technologies to diverse languages, cultures and users [1-13]. Amitabh Nag highlighted that AI solutions have a short “shelf life” and must be regularly upgraded because, unlike static machines, AI systems carry no warranty, especially given the vast linguistic and cultural diversity across the region [5-8][9-13].


Ariane Ahildur introduced the newly released Policy Report and Developers Toolkit, describing them as a joint German-Indian effort that provides best-practice guidance and embodies a shared vision of digital inclusion through voice technology [24-38][42-44]. She stressed that voice interfaces are crucial for low-literacy populations and that responsible, multilingual voice AI can unlock access to public services, aligning with the Hamburg Declaration on AI for Sustainable Development Goals [36-41][49-52].


Harleen Kaur outlined a four-pillar policy framework: treating foundational data as public goods, institutionalising sustainable open-source infrastructure, building open and representative models, and strengthening responsible deployment [73-78]. The accompanying developer toolkit translates these principles into practice by focusing on representation planning, data-quality assurance, and embedding responsible AI throughout the development lifecycle [90-94][97-101].


In the panel, Nag described two main pathways for sustaining data creation: large-scale “brute” collection of diverse speech samples and the generation of improvement corpora from deployed products, including both open-domain and closed-domain sources [121-138]. Ghosh argued for a smarter, cost-effective approach that leverages intrinsic linguistic components rather than exhaustive data gathering, illustrating this with a Telugu project that covered four dialects by identifying common acoustic features and supplementing them with targeted data [154-168][174-184]. Kritika emphasized that industry adoption requires scalable, edge-ready infrastructure, domain-specific model fine-tuning, and compliance safeguards to ensure reliable deployment across sectors such as healthcare and manufacturing [190-199]. Thomas highlighted the intersecting legal challenges of privacy and copyright, urging robust documentation, privacy-enhancing techniques, and clear licensing from the outset to build a trusted ecosystem [205-224]. Ghosh warned that human transcription variability makes traditional word-error-rate metrics insufficient, proposing multi-layered, subjective-objective evaluation methods and downstream feedback loops [228-241]. Nag reinforced that ultimate acceptance of voice systems rests on audience perception rather than absolute rankings, suggesting that standards should be shaped by what end-users deem understandable and trustworthy [256-272].


The participants agreed on the need for a unified, nationally coordinated evaluation framework, potentially a single leaderboard, to drive continuous improvement while fostering collaborative competition [315-321]. The discussion concluded that aligning policy, technical, legal and evaluation efforts is essential to realize inclusive, responsible voice AI that serves India’s diverse population [24-38][73-78][205-224].


Keypoints


Major discussion points


Dynamic, user-driven data ecosystems are essential for sustainable voice AI.


Amitabh Nag stresses that foundational speech datasets must be continuously created, enriched through user feedback, and treated as digital public goods to keep models improving over time [121-138]. Nihar Desai later summarizes this as “data sets need to be more of lived-in nature… built upon by users” [146-148].


Inclusive language coverage requires smart, cost-effective collection strategies rather than brute-force data gathering.


Prasanta Ghosh explains that Indian linguistic diversity can be addressed by focusing on intrinsic language families (Indo-Aryan, Dravidian) and balancing data volume with coverage [155-168]. He illustrates the approach with the Telugu dialect project, showing how a “region-anchored” method reduces time and budget while preserving diversity [174-183].


A four-pillar policy framework and a developer toolkit translate inclusive AI principles into practice.


Harleen Kaur outlines the policy pillars: treating foundational data as public goods, institutionalising sustainable open-source infrastructure, building open and representative models, and strengthening responsible deployment [73-78]. The accompanying toolkit operationalises these pillars through guidance on representation, data quality, and embedding responsible AI (RAI) throughout the development lifecycle [90-108].


Legal and governance safeguards (copyright, privacy, documentation) are critical to protect trust in the ecosystem.


Thomas Vallianeth highlights the intersecting challenges of copyright and privacy, urging early-stage provenance checks, privacy-enhancing techniques, and robust documentation to enable safe downstream use [208-218][221-224]. He later notes that while the law can accommodate some subjectivity, clear evidence and trust-building measures are needed [286-298].


Evaluation of voice models must move beyond single-metric, objective scores to a multi-layered, ecosystem-wide approach.


Ghosh points out the variability in human transcription and argues that word-error-rate alone is insufficient; instead, multi-output models, subjective human review, and downstream-application feedback should be incorporated [228-240]. Nag adds that ultimate acceptance hinges on audience perception rather than absolute rankings [256-273], and participants call for a national, collaborative benchmarking framework [315-319].


Overall purpose / goal


The session launched the Policy Report and Developers Toolkit “Building on Open and Responsible Voice Technology Ecosystem in India” and served to (1) showcase the Indo-German partnership that produced the report, (2) present a concrete policy framework and practical toolkit for inclusive voice AI, and (3) mobilise stakeholders (government, academia, industry, and civil society) to adopt open, responsible, and culturally diverse voice technologies that advance public services and sustainable development.


Overall tone and its evolution


– The discussion begins with a formal and optimistic tone, celebrating collaboration and the report’s release [24-34].


– It then shifts to a technical and problem-solving tone as participants detail challenges in data collection, linguistic diversity, and legal compliance [65-84][208-218].


– Mid-conversation the tone becomes reflective and candid, acknowledging the inherent uncertainties, subjectivity, and “no-warranty” nature of AI systems [8-15][256-273].


– The closing remarks adopt a constructive and forward-looking tone, urging continued workshops, benchmarking, and ecosystem-wide trust mechanisms [302-319][322-327].


Overall, the dialogue remains collaborative and solution-oriented, moving from celebration to deep analysis and finally to actionable next steps.


Speakers

Ariane Ahildur – Dr.; Director General, Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security, German Federal Ministry for Economic Cooperation and Development; expertise in global health policy, digital technologies, and food security. [S2]


Nihar Desai – Head of JNI; Moderator of the panel discussion; expertise in moderation and digital initiatives. [S3]


Moderator – Session moderator (unnamed); role: moderating the event.


Kritika K.R. – Head Artificial Intelligence and Product Researcher, SanLogic; expertise in applied AI and product research. [S8]


Prasanta Ghosh – Dr.; Associate Professor, Indian Institute of Science; expertise in speech technology research and academia. [S9]


Thomas J. Vallianeth – Counsel, Trilegal; expertise in legal aspects of AI, copyright, and data governance. [S11]


Harleen Kaur – Research Manager, Digital Futures Lab; expertise in policy research and developer-toolkit development. [S12]


Amitabh Nag – CEO of DIBD (also referenced as CEO of Bhashini); expertise in AI ecosystem building and voice technology. [S13]


Additional speakers:


Shailendra Pal Singh – Senior General Manager, Bhashini; role: felicitated the speakers at the event.


Full session report: comprehensive analysis and detailed insights

Opening Remarks – Amitabh Nag


Nag opened by stressing that any AI-driven voice solution must be scalable across regions such as Southeast Asia and Africa and continually refreshed, noting that a model’s “shelf-life” can be as short as three to six months [1-5]. He contrasted AI systems with static machines, pointing out that there is no warranty or guarantee for AI models and that diversity of people, languages and cultures makes inclusion a core design requirement rather than an afterthought [6-13]. Nag concluded that progress will be incremental, moving step-by-step toward higher levels of inclusion [17-19].


Keynote – Ariane Ahildur (Director General of the Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security, German Federal Ministry for Economic Cooperation and Development) [24-26]


Ahildur launched the Policy Report and Developers Toolkit “Building on Open and Responsible Voice Technology Ecosystem in India.” She thanked Digital Futures Lab, Art Park, TriLegal, and NASSCOM as key partners [33-36]. The report, a product of a German-Indian partnership, offers best-practice guidance and hands-on advice for policymakers and the tech community [30-32]. Ahildur framed voice AI as a gateway for low-literacy populations to access public services, health care, education and economic participation, warning that failure to provide multilingual voice interfaces can reinforce exclusion [34-41]. She linked the initiative to the Hamburg Declaration on Responsible AI for the Sustainable Development Goals, underscoring that AI should serve people and the planet [49-52].


Report & Toolkit Presentation – Harleen Kaur (Research Manager, Digital Futures Lab) [73-78]


Kaur outlined the four-pillar policy framework:


1. Treat foundational datasets as public goods;


2. Institutionalise sustainable open-source infrastructure;


3. Build open and representative models;


4. Strengthen responsible deployment.


She explained that treating data as a public good means government funding and convening for languages that are not commercially viable [79-81]; institutionalisation involves standardised documentation, collaborative data-steward models and shared national compute resources [82-85]; the third pillar calls for locally curated benchmarks and representative models [86-88]; and the fourth stresses public-value sharing, community buy-in and literacy to prevent misuse [85-88].


The accompanying developer toolkit translates these pillars into practice, focusing on representation planning, data-quality assurance, and embedding Responsible AI (RAI) throughout the development lifecycle [90-108]. Practical recommendations include maintaining a diversity wish-list, using synthetic data, adopting a layered data strategy (active, passive and synthetic sources), applying robust transcription standards, and implementing continuous post-deployment monitoring [97-111].


Panel Moderation – Nihar Desai (Head, JNI) [122-124]


Desai moderated the discussion and opened with the question: Should foundational datasets be treated as digital public goods, and how can a data-flywheel be created to sustain them?


Data-Creation Strategies – Amitabh Nag [124-148]


Nag described two complementary pathways:


* Traditional “brute-force” field collection that captures diverse speech samples across regions and dialects;


* Product-derived corpora generated automatically from models, including open-domain sources (e.g., YouTube) and closed-domain feedback loops from enterprise or government applications.


He argued that a flywheel of data generation and feedback is essential because datasets must be “lived-in” rather than static [146-148].


Linguistically-Informed Sampling – Prasanta Ghosh [155-184]


Ghosh proposed a cost-effective, language-family-first approach: start from the major families (Indo-Aryan and Dravidian), identify common acoustic components, and then target specific dialects. Using the ResPin Telugu project as an example, his team covered four dialects by first collecting data that captured shared acoustic features and then supplementing with targeted recordings, thereby reducing timeline and budget while preserving diversity [174-184]. This “region-anchored” strategy demonstrates how smart sampling can replace exhaustive data gathering [160-168].


Industry Perspective – Kritika K.R. (Head of AI & Product Research, SanLogic) [190-214]


K.R. highlighted the need for scalable, edge-ready infrastructure and domain-specific model fine-tuning to enable reliable deployment in sectors such as healthcare, manufacturing and automotive. She stressed that model optimisation for device-level intelligence, combined with compliance safeguards, allows open-source models to be deployed on-premise, protecting sensitive data while supporting industry-specific vocabularies [200-207][208-214].


Legal & Governance – Thomas J. Vallianeth [208-224][289-301]


Vallianeth outlined three legal dimensions:


1. Copyright provenance & licensing – even publicly available voice datasets may be subject to copyright and require provenance checks and appropriate licences;


2. Privacy-enhancing techniques at the point of collection to avoid storing personal data;


3. Robust early-stage documentation to provide downstream users with trust and evidentiary support in any legal dispute.


He warned that subjectivity in AI outputs will increasingly surface in courts, and that pre-emptive safeguards and transparent processes can mitigate such flashpoints [289-301].


Evaluation Debate


* Ghosh noted that human transcribers rarely agree word-for-word, making word-error-rate (WER) insufficient; he advocated for multi-layered evaluation that includes multiple hypothesis outputs, subjective human review and downstream task performance [228-244].


* Nag complemented this by asserting that acceptability is determined by whether the end-user understands the output, and that different contexts (e.g., courts versus casual conversation) demand different levels of linguistic purity [256-279].
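Ghosh's observation about transcriber disagreement can be made concrete: two equally plausible human transcriptions of the same utterance score a nonzero WER against each other, so a "perfect" score is often unattainable even in principle. The sketch below is an illustrative, self-contained WER implementation using the standard word-level Levenshtein distance; the sample sentences are invented for illustration and are not from the session.

```python
def wer(ref: str, hyp: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = ref.split(), hyp.split()
    # dp[i][j] = minimum edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        dp[i][0] = i
    for j in range(len(h) + 1):
        dp[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            sub = 0 if r[i - 1] == h[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # match / substitution
    return dp[len(r)][len(h)] / len(r)

# Two hypothetical, equally valid transcriptions of one Hindi utterance:
# spelling conventions alone ("bata do" vs "batado") produce a nonzero WER.
t1 = "mujhe kal ka mausam bata do please"
t2 = "mujhe kal ka mausam batado plij"
print(f"WER between two valid transcriptions: {wer(t1, t2):.2f}")
```

Because both renderings are acceptable to a human reader, treating either one as ground truth penalises the other, which is why the panel argued for supplementing WER with subjective review and downstream-task feedback.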


The panel reached consensus on the need for a national, collaborative benchmarking system – a single leaderboard under “Varshini” – to drive competitive yet cooperative progress across languages and dialects [313-321].


Broad Consensus


Participants agreed that:


(i) Voice technology and speech datasets should be treated as public goods;


(ii) Continuous, feedback-driven data enrichment is essential;


(iii) Open-source governance and sustainable infrastructure must be institutionalised;


(iv) Evaluation must move beyond single-metric scores to multi-dimensional, context-aware frameworks; and


(v) Legal safeguards, documentation and privacy-by-design are prerequisites for trust [1-3][19][73-78][90-108][208-218][256-270][313-321].


Actionable Take-aways


Adopt the four-pillar framework and publish the developer toolkit to embed RAI practices.


Establish a continuous data-flywheel that combines field collection, product-derived improvement corpora, and a layered data strategy.


Convene regular workshops to co-design a national, multi-layered evaluation framework and an annual leaderboard under Varshini.


Implement early-stage documentation, licensing checks, and privacy-by-design measures to satisfy legal requirements.


Encourage governments to act as ecosystem stewards, funding non-commercial language projects and maintaining open-source infrastructure [73-78][79-88][90-108][121-148][208-218][313-321].


Conclusion


The launch of the Policy Report and Developers Toolkit marks a concrete step toward an inclusive, open-source voice AI ecosystem for India that can be replicated globally. By aligning policy, technical, legal and evaluation efforts, participants underscored that continuous, community-driven data creation, responsible governance and user-centred evaluation are the pillars upon which sustainable, equitable voice technologies must be built [24-52][73-78][90-108][121-144][256-270][313-321].


Session transcript
Amitabh Nag

including, you know, Southeast Asia as well as Africa and other places. So from that perspective, it is very important that we scale these solutions. We have policies, standards, toolkits which are developed which can be actually replicated. And frankly speaking, in this area, in this situation, nothing is static. You have a shelf life which is sometimes three months or six months or even less. Yes. So we have to continuously upgrade the things as we go by. You know, we can’t be saying that this is what we have done, unlike a machine which we have built up and it works for six years or five years. There is no guarantee, no warranty in these kind of systems which we are building in AI.

AI, and the reason for this is diversity. You know, each person is different. Each language is different. Each culture is different. So there is… There is huge amount of diversity and we have to live with the diversity unlike the earlier digital systems which used to work on only standards. You know, they had standards and they would perhaps keep the outliers away. Here, inclusion is the name of the, inclusion is part of the design, diversity is part of the design. And we would perhaps have to go step by step to define those diversities so that they start becoming standards. Right. You know, it’s a very different kind of a setup which is there and happy to be part of this journey, happy to, happy and acknowledged to the help which is being provided.

And hopefully we are going to get across to the next level and higher steps in the journey as we go by in future. Thank you very much.

Moderator

Thank you, Mr. Nag for your insightful words and also for your incredible support throughout the last year over the course of the program. Right. Thank you. I will now invite Dr. Ariane Ahildur-Brandt, Director General of the Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security of the German Federal Ministry for Economic Cooperation and Development to deliver the keynote address. Thank you. Thank you.

Ariane Ahildur

Dear Mr. Naack, dear partners, distinguished guests, it is a great pleasure to welcome you to this launch today. We present to you the Policy Report and Developers Toolkit Building on Open and Responsible Voice Technology Ecosystem in India. The report and the toolkit are the impressive result of a very productive partnership between Germany and India. And it is the result of a joint effort involving a group of distinguished partners and experts. This is why I would like to start by thanking you, Mr. Nack, and your colleagues from Ascini, for the excellent cooperation. And I would like to thank the Digital Futures Lab, Art Park, TriLegal, and NASSCOM for their invaluable support. Dear guests, you will find that the report and toolkit that we are presenting today is full of best practices and lessons learned.

It will provide guidance and hands-on advice to policymakers and to the tech community alike. But for me, this report is more than useful and more than practical content. It also conveys a shared conviction, shared values, and a shared vision for digital inclusion. In fact, when it comes to inclusion, voice technology has a key role to play. For millions of people, voice is the most natural and powerful interface to the digital world, especially for those with limited literacy or access to digital devices. When voice AI works in local languages and dialects, it will become a gateway to public services, healthcare, education, and economic participation. When it does not, AI risks reinforcing existing divides and may even become an instrument for exclusion.

This is why responsible, inclusive voice AI is not just a technical issue. As I said, it is part of a shared vision, a shared vision between India and Germany. At a time when artificial intelligence is often framed as a global competition, this report offers a different narrative, and this is a narrative of cooperation. The Indo-German Partnership on AI, and particularly on language and voice technologies, shows what is possible when we join forces. Together with Bhashini and the Indian Institute of Science, our initiative Fair Forward has created open voice technologies for nine Indian languages. These language models can now be used by NGOs, state agencies and companies. For example, they can be integrated into voice assistants for health workers, which in turn can improve health care for women.

Or they can be used to advise farmers on crop management. This collaboration, based on the principles of openness, fairness and responsibility, is the foundation for AI that truly serves the common good. And it contradicts those who claim that only fierce competition can generate prosperity and innovation. Ladies and gentlemen, this approach closely aligns with the principles articulated by the International Cooperation on Climate Change in the Hamburg Declaration on Responsible AI for Sustainable Development Goals. This declaration, presented by BMZ, our ministry, and UNDP last year, has been endorsed by more than 50 stakeholders already, including governments, international organizations, NGOs, and companies. The declaration reminds us that AI should serve the people and the planet, strengthen inclusion, and support sustainable development.

And our report here is a very practical and relevant contribution to that agenda, translating shared principles into concrete guidance. So let us thus deepen cooperation, strengthen trust, and build voice technologies that truly speak to everyone. Thank you for your attention.

Moderator

Thank you so much, Dr. Hillbrand. We shall now move on to the formal launch of the report and toolkit. I’ll invite all the representatives of the consortium from GIZ, Tri-Legal, Art Park, NASSCOM, Digital Futures Lab to please come on stage. And Mr. Nag to present the data. Thank you. Thank you. Thank you. Thank you. Now that we’re done with the formal launch of the report and policy toolkit, just to give you a brief overview, I invite Ms. Harleen Kaur, Research Manager, Digital Futures Lab, to present the report.

Harleen Kaur

Good morning, everyone, and thank you for being present. on a Friday morning for the launch of this report, as well as the developer toolkit. So I’ve linked the outputs in case you’d want to see them. If you can take a quick photo, and I’ll move towards discussing the high points of the findings that we had both for our policy report as well as developer’s toolkit. So when we began this work last year, we found that the challenges that are there in the voice tech arena, they are not limited to data collection alone. So the challenges are multi -layered that start right at the data collection stage and curation stage, but then move on to model development, where we see linguistic diversity gaps, lack of standards, uneven documentation, unclear data ownership and structures being a problem.

But then when we move on to the, hosting and licensing aspect, long -term infrastructure costs, costs, governance of open source assets, as well as sustainability of shared resources is something that we felt was a very important problem that needed to be solved in a certain manner. And the last is downstream deployment and impact, where bias, exclusion and lack of accountability for misuse become more visible. All of these are essentially starting at the data collection stage, but they move on to the life cycle of the voice technology ecosystem in India, specifically when you feel like supporting an open voice ecosystem in India. To lay down our approach for this project, we thought about how can we move on from the traditional government systems where government has primarily acted as a regulator, it enforces rules, it corrects market failures, to a newer active role, and that we have seen with Bhashani.

We encourage governments across the world to adopt this framework where the government acts as a steward of public good. ecosystem convener, as well as a standard setter, not just through licenses, but actually through practice as well. This is the overview of our policy framework. Based on this approach, we have structured our policy framework around the four pillars that you see on the screen. The first is treating foundational data sets as public goods. Second is institutionalizing sustainable open source infrastructure. Third is building open and representative models. And finally, strengthening responsible deployment. And what do we mean when we say this? When we say treat foundational data sets as public good, we are saying that government should be encouraging both funding and convening for public good functions.

For example, supporting languages that are not commercially viable as such. Institutionalizing governance. Governance framework. Thank you. to strengthen RAI practices, for example, through procurement, etc. On open representative models, we believe that local and contextually relevant benchmarks that are curated by government bodies not just at the center, but at the relevant diversity ecosystem, whether it is state, district, etc., is important. Shared national compute infrastructure, preferential treatment to open source ecosystem is something that we propose. On open source infrastructure itself, standardization of documents and promoting collaborative data steward models is something that has already been written in the report. Strengthening responsible deployment, public value sharing is another aspect of the report. We believe that public value sharing comes not just from financial arrangements, but also a buy -in of communities into what kind of…

uses of voice technology are there. And of course, supporting public literacy to protect against misuse and preventing harms is the policy side of our suggestion. Moving on to developer’s toolkit. You know, policy intent alone does not ensure inclusive AI systems. So alongside the policy framework, we’ve developed a developer toolkit that translates some of these principles into practice for developers. So it focuses on three broad areas, representation being the foremost through diversity planning, et cetera. Second being data quality and evaluation. And the third one being embedding RAI practices throughout the lifecycle of development of open voice I’ll just give you a brief overview of what we mean when we say this. So for developers, we have a toolkit that includes best practices that we’ve seen in industry.

And we have a toolkit that we’ve seen in India and outside on what does it mean to ensure adequate representation. Things like having a diversity wish list, making sure that you’re not collecting data from one source, applying linguistic expertise, using synthetic data, training model for linguistic and environmental nuances, and also layered data strategy. Which again means that don’t just use one source of data. Don’t do active or passive collection alone. Use a hybrid layered structure to make your models more diverse.

Once the developers move on from data collection to curation, we suggest many, many ways. This is just a very bird’s eye view overview in which data quality can be enhanced in the constraints that we operate in, in countries like India. And there are suggestions to make the applications inclusive and useful in practice, including robust transcription standards, contextual benchmarks, using data cards, model cards that are standardized, as well as continuous post-deployment monitoring. You can find more details in the report itself. And the last aspect of the developer’s toolkit is actually embedding RAI practices. We’ve taken another lifecycle framework within this where we believe that RAI practices are not the domain of policy alone. At enterprise startup developer level, ensuring a framework that serves to support them by providing them clarity on what does it mean when we say your output should be responsible.

So things like be mindful of engagement with the communities from whom you are taking data, annotation is happening, consent protocols, privacy enhancing techniques. So this report essentially is compliance plus. It actually shares practices that we believe are useful to promote open, responsible AI voice technology ecosystem. Please feel free to engage with the reports. We’ll be very happy to take your comments and suggestions. Thank you so much.

Moderator

Thank you, Harleen. We shall now move on to a short panel discussion on voice technologies in India: unpacking the present and future of the voice AI application ecosystem for India and beyond. Joining us today, I will invite to the stage Mr. Amitabh Nag, CEO of DIBD; Dr. Prasanta Ghosh, Associate Professor at the Indian Institute of Science; Ms. Kritika K.R., Head Artificial Intelligence and Product Researcher, SanLogic; and Mr. Thomas Valunith, Counsel, Trilegal. And this discussion will be moderated by Mr. Nihar Desai, Head of JNI. Thank you.

Nihar Desai

Hello. Hello. Am I audible? Okay. Thanks everybody for joining. So, I just delving right deep into it. My first question to you would be Mr. Nag. As we saw in the toolkit, we were arguing that data set like foundational data sets, speech data sets, must be treated as DPIs and DPGs and hence be available in general. From your experience in driving this ecosystem for about two years since I’ve been a part at least, what does it take to continue creation, ongoing facilitation of such innovations being put up as a digital public good while ensuring trust safety, right? And is there a way for us to have a flywheel of data sorts, data goods of sorts?

Amitabh Nag

Yeah, that’s a very important aspect of what we should be doing. That means continue the creation of data sets because it will then improve the models as we go by. Now, continuation of creation of data sets are, I would say that these are going to be in two or three ways, you know. One is the way which we have been… doing, which is the brute data collection, which is going to the various fields and then picking up the data from there and then creating the diversity which is required to actually build the model. So that is one way of doing it and that will continue. We will have to keep the focus with respect to saying that now I am doing for this particular area, this particular dialect, this particular language, while as it will be for other language in some other way.

The second is to actually look at using the products which have been developed using these models and creating open-domain activities to create the digital data. So you are creating the digital data which you are speaking, automatically creating the parallel corpus, and then finding a way to actually vet this, annotate and label it, and say, okay, this is the improvement corpus. So one, you are creating a primary corpus. Second, you are creating an improvement corpus which can be fed back to the model, saying this is what is to be used, and that is a big area of work as we look at it. Allied to that, a lot of digital data is getting created anyway in the open domain, which we can actually use to build the corpus again.

So, you know, YouTube videos: the world is more digital today than it was yesterday. But a conscious way of looking at it as a program is what is required. How do I look at it as a program where I will be creating a data corpus at various places? And this need not necessarily be an open domain. Open domain is an easy way to work on it, but it can be a closed domain as well: there is an application working in an enterprise or a government, and the people there are given an option to give suggestions on the translations or the answers or the things they have entered, and that can go into a vetting pipeline, and you are able to create that.

So those applications related to this, when we are looking at the AI portfolio, not only languages but the AI portfolio in general, are very important for us to be on a continuous improvement journey. The most important aspect, hence, would be this: if a person, for example, is working on an enterprise mail system, and it is deriving a summary of a document, perhaps in a known language or not, and the summary differs from what he thinks as a manual activity, he should be able to put that down somewhere, and that goes as feedback to the model. Currently that is a concept which may or may not exist; some enterprises would have done it, others would not have.

So looking at these kinds of interventions, which can be run as a program in a conscious way, so that everybody is able to contribute his or her own inputs into the system and the system can take them back to improve the model or improve the AI systems, because they still require a lot of intervention from each and every person. The knowledge is still deficient. Thank you.
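The feedback loop described above (deployed application, user correction, vetting pipeline, improvement corpus) can be sketched in code. The sketch below is purely illustrative; the class names and fields are hypothetical and not part of any Bhashini or DIBD interface:

```python
from dataclasses import dataclass


@dataclass
class FeedbackItem:
    """One user correction captured from a deployed application."""
    utterance_id: str       # links back to the item in the primary corpus
    model_output: str       # what the model produced (translation, summary, ...)
    user_correction: str    # what the user says it should have been
    vetted: bool = False    # flipped to True only after human review


class ImprovementCorpus:
    """Collects corrections; only vetted ones are fed back into training."""

    def __init__(self) -> None:
        self._items: list[FeedbackItem] = []

    def submit(self, item: FeedbackItem) -> None:
        # Every suggestion first lands in a vetting queue.
        self._items.append(item)

    def vet(self, utterance_id: str, accept: bool) -> None:
        # A human annotator accepts or rejects each correction.
        for item in self._items:
            if item.utterance_id == utterance_id:
                item.vetted = accept

    def training_pairs(self) -> list[tuple[str, str]]:
        # Export only vetted (model output, correction) pairs for retraining.
        return [(i.model_output, i.user_correction)
                for i in self._items if i.vetted]
```

In practice such a queue would also carry audio references, annotator IDs and consent flags; the point is simply that user suggestions pass through vetting before becoming an improvement corpus.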

Nihar Desai

So what I’m taking away is that data sets need to be more lived-in in nature. It’s not static; it has to be built upon by users and by others. And also the fact that the feedback itself could lead to better data quality, which is something that enterprises might be doing, but it could definitely be done more. Thank you for that input. But to the point of the first question on data set inclusivity, Prasanta, going back to your research activity, mostly on inclusive data sets: the toolkit also argues that inclusivity must be designed in at the foundational data layer, at the time of designing data sets. But still we do find data sets which lack this aspect.

What’s your take on the gaps here, at the research and academia level, in terms of designing more inclusive data sets that could lead to better applications down the road?

Prasanta Ghosh

That’s a very deep and good question. So to cover the diversity and become more inclusive, one approach would be to cover it in the data, right? But if we think about the diversity that is there in Indian languages, that is a function of culture, caste, local knowledge and everything. And while we see the diversity, these are not independent elements; there are certain commonalities and certain unique aspects in each of these languages and dialects and accents that we talk about. So one important direction in modeling would be to think about the intrinsic basis components that finally lead to this diversity, instead of a brute-force way of covering data from all parts of the country.

So if you can discover, for example, and this is just an example, I’m not an expert in linguistics: if you look at the Indian languages, there are two broad families. One is Indo-Aryan and the other is Dravidian. Now, while there are multiple languages within each of these streams, to cater certain technologies to speakers of these languages, should we go ahead and collect a good amount of data in each of those? That may not be the only way to think about it. How do we balance and make a trade-off between the amount of data we collect, which we know is challenging and costly as well, and a novel modeling approach where we start from those intrinsic basis components and then manifest into those individual diversities?

I think that may help us to jointly think about modeling and collection for catering to this diverse population.

Nihar Desai

If you could help the audience with one example of balancing both aspects. Say we pick one of your initiatives, Syspin, ResPin or Vani, or any other data set: how did you manage or balance inclusivity versus model-building activities versus other factors that might come into play while designing specifications?

Prasanta Ghosh

Yeah, so the aspect of modeling that I brought out is, I would say, not very well established at this moment. But from my experience in the project ResPin, I can give a concrete example. If you take Telugu as a language, we worked with four major dialectal variations: one is in the region of Krishna-Guntur, another is Visakhapatnam (Vizag), another is Anantapur-Chittoor, another is Nalgonda. Now, when you look at their intrinsic variations, we see that there are some commonalities, and then there are some unique aspects in each of those dialects.

So now think about a brute-force approach where I collect a thousand hours in each of them, versus thinking of collecting certain kinds of stimuli to cover the acoustic space of the speakers, maybe from one region, that will automatically cater to the other regions, and then collecting something that will complement it in each of the other regions. That way, our overall timeline, budget and cost will all go down. And there has to be novelty in terms of having a model that starts from the intrinsic components and then naturally diversifies itself to cater to those populations. So that became a region-anchored approach that we started later on in Vani.

Nihar Desai

I see. Okay. Thanks for that input. Just to summarize, what I’m taking away is that instead of a brute-force approach, what we’re essentially saying is: balance across the various parameters on which you would train a model, such as linguistic diversity and acoustic diversity, and then use some sort of smart approach to dissect the target audience and the ways of collecting data, to maximize the output while maximizing bang for the buck. Thanks for that input. But this is coming from the perspective of academia, so I would like to switch to Kritika. From the perspective of an applied AI researcher, you are also one of the people on this panel who has really deployed speech AI solutions. What is your take on the challenges that you faced with inclusivity, either at the data set layer or the application layer?

Kritika K.R

More towards the core enterprise applications: knowledge-repo integrations are coming up, be it healthcare or even manufacturing and automobiles. So voice is becoming the go-to interface for different applications and for enabling the workforce across industries. In that case, again, as I said, consistency across the various user scenarios and, more specifically, specialized domain adaptation is required. That feedback loop is more important while the system is in practice, while the system is in progress; I would stress that point. And a more critical aspect is providing scalable and sustainable infrastructure, which comes with more optimized models and also bringing in edge deployments.

So that real adoption can be scaled across multiple industries and normal usage across various sectors. So I’m talking more from the end-user perspective. Getting the data is one part of it, but making it reliable across the infrastructure and also providing the required scalable model at the device-intelligence level is also important when it comes to real adoption of these AI models.

Nihar Desai

Thanks for the input. So I guess, after all, industry is also using feedback as a tool; it’s a nice validation over here. Maybe coming to Thomas, switching tracks to the legal side. At least in the toolkit, we’ve argued that speech models and speech data sets sit at the intersection of copyright law, data governance, security, et cetera. How do you propose balancing innovation versus caution on this front, especially with all the researchers and practitioners in the room?

Thomas J. Vallianeth

Thanks, Nihar. That’s, again, a very helpful question. I think Harleen articulated it quite well in the beginning: we have to consider the entire ecosystem as a whole. There is a common myth in India that anything that is public is freely available. What we have to think about is that all data sets operate at the intersection of privacy law and copyright law. Under privacy law, most publicly available data sets are essentially freely available to be used, even under the new legislation. But under copyright law, even if data is publicly available, somebody else may own the copyright on it. So careful thought has to be put in place right from the beginning in terms of what data sets you’re collecting, what their copyright provenance is, whether you are able to defer to freely licensed and open-source material to compile that data set, and, if not, whether you are able to obtain the licenses to do so.

So the thought process from the beginning is about how you’re structuring the way to get this data and also how to reduce the surface area of the impact of some of these laws. For instance, in relation to privacy laws, if you’re collecting somewhat more private data sets, you can use privacy-enhancing technologies, or you can extract data such that no personal data is ultimately captured or stored at the point of data collection. All of these are ways in which you can put mechanisms in place right from the start, when the ecosystem begins, to ensure that downstream use cases are also protected. The second big aspect is, of course, the documentation, right?

Now, the data collector, the data creator, is essentially the gateway to the entire ecosystem in some sense. The documentation has to be robust right from the beginning to enable everybody in the downstream chain to use this data and to ensure that a good, safe and trusted ecosystem is created with respect to that specific data set. So yes, there are flexibilities available under the law in terms of how you are able to use voice data sets, but at the same time there is some caution that you have to apply from the beginning and throughout the life cycle, in terms of figuring out how to use these data sets effectively.

Of course, the last related aspect is to think about the various layers at which these legalities operate. You can think of the speech data set itself as being copyrighted, but equally, if speakers are reading out a book passage or giving a specific performance, there may be separate rights allocated in relation to some of these tangential elements as well. All of these have to be accounted for from the very beginning of the ecosystem itself, so that downstream usage is not impacted. So I would say the report’s argument, in that sense, is: think about it as a whole.

Don’t think of each action in isolation. Think about the entire downstream impact as well, and then account for either enabling maneuvers under law, in terms of documentation, privacy-enhancing techniques and so on, or implement the appropriate cautionary mechanisms to ensure that downstream usage is also protected.
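The robust documentation-from-the-beginning that Vallianeth recommends is often implemented as a datasheet-style record attached to each data set. The sketch below is an illustrative minimum, not a legally vetted template; all field and function names are hypothetical:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class DatasetRecord:
    """Datasheet-style provenance record for a speech data set."""
    name: str
    license_id: str          # e.g. "CC-BY-4.0"; records copyright provenance
    consent_obtained: bool   # speakers consented to collection and reuse
    pii_removed: bool        # personal data stripped at the point of collection
    source_description: str  # where and how the audio was gathered
    permitted_uses: str      # downstream uses the license and consent allow


def fit_for_open_release(record: DatasetRecord) -> bool:
    """Minimal gate: release openly only when consent and PII handling are
    documented and the license is an explicit, well-known open one."""
    open_licenses = {"CC0-1.0", "CC-BY-4.0", "CC-BY-SA-4.0"}
    return (record.consent_obtained
            and record.pii_removed
            and record.license_id in open_licenses)
```

A real pipeline would carry this record with the data set through every downstream hand-off, so each user can check provenance before training.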

Nihar Desai

Yeah, in some of the hats that I wear, I am also collecting data sets, and those are important points that we keep in mind. Hopefully we’ll be able to take the learnings out of the toolkit and actually implement them in our processes. Switching tracks slightly to Dr. Prasanta here: without measurement, we don’t really get anywhere in terms of implementing the right frameworks, the right legal processes, or measuring quality. You’ve also spoken about evaluations being broken as far as Indian contexts are concerned. Can you elaborate a little on what challenges we face on a day-to-day basis, where they come up, and how you foresee these challenges either getting resolved or getting amplified? I think this is an important area that all of us together should explore and contribute to.

Prasanta Ghosh

So when we build something like an automatic speech recognition system that is being used in many, many applications, think of it as yet another human who is listening to the audio and trying to spit out what is spoken, in text. Now, if you go out in the real world, as we have realized multiple times and experienced through multiple projects, in ResPin as well as Vani and many other projects that I have done, if you give a piece of audio to two individuals, they never exactly agree on what they hear.

And I’m telling this from my experience, not about two people from two different parts of the country; I’m talking about two people from the same district. In fact, there was an incident where we realized that two people were just three kilometers apart in terms of location, but they still did not agree on how the audio they heard should be written. So what it tells us is that there is an inherent variation or variability in the way I, as an individual, as an Indian, perceive or like to see the text. Now, if we accept that this fact exists today, we need to think of building our systems and system evaluation to cater to that variation.

So we need to think of that variability and be robust to it. As I said in the beginning, if we treat the system also as a human, it will also not agree with another human. So if we just go by a word-by-word comparison of how the system performs compared to some humans, certainly it will not be 100% accurate. In other words, we calculate what we call the word error rate, which is an objective way of evaluating. A word-based comparison is probably not the right way to go at this point. Maybe the ASR system is doing pretty well, but just because it made a slight mistake in one of the words, we are penalizing it and saying that it’s not doing well.

So now we have to think about how to solve this problem. It could be that we have a multiple-evaluation system where we just don’t use word error rate; that’s one aspect. Another way to think about this would be to build the ASR so that it itself can give not just one output but multiple outputs which could potentially be right, and then evaluate that not just objectively but also subjectively, through humans, because a human can absorb that error and say, yes, it’s still okay. A third would be to take it to the downstream application, which, depending on what you are using, could be an LLM or any other Q&A system that can absorb that variability. So I think we need to break down the entire evaluation into multi-layered evaluations, and they are not really independent; we need to take feedback all the way down from the final application back to the ASR, and so on and so forth. So I guess here, individuals from the application areas, individuals from linguistic backgrounds, engineers, everyone has to come together.
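The word error rate Ghosh refers to is the word-level edit distance between a reference transcript and the system's hypothesis, divided by the reference length. A minimal sketch (the example strings are made up) shows why a single spelling-variant substitution is penalized exactly like a genuine error:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i                      # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j                      # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            dp[i][j] = min(substitution, dp[i - 1][j] + 1, dp[i][j - 1] + 1)
    return dp[len(ref)][len(hyp)] / max(len(ref), 1)


# Two annotators spelling one word differently already yields a 20% "error":
print(wer("mera gaon bahut sundar hai", "mera gaaon bahut sundar hai"))  # 0.2
```

This is exactly the brittleness the panel points at: a multi-layered alternative would score n-best hypotheses, subjective human judgments, and downstream task success alongside this single number.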

Nihar Desai

So what I am hearing is that solving this is more of an ecosystem-level challenge, right? And maybe before our ecosystem champion over here, Mr. Nag, comes in on this, I would just like one industry perspective from Kritika: how do you solve this from an application standpoint? Prasanta explained this challenge from more of an academic or foundational-research standpoint, but how does evaluation play a role at your daily application layer?

Kritika K.R

Yeah, so as I said, the applications are varied. Adoption is now at the conversational level, right from bringing analytics out of the data; now it is more about the voice interface and multilingual conversation, and with speech-to-speech translation those things are more prevalent right now. Coming to the industry aspect of it: adapting these models to custom data sets is one way, along with the right pick of data sourcing from the available open sources, so that the model becomes more specialized for the particular tasks and the work it is supposed to do. Now, with LLMs, these models are more adaptable to industry jargon and even the core of industry workflows.

Now, with ASR models enabled alongside an LLM, you have various methods from the data-creation perspective: leveraging open-source data and also custom-tuning the data to the various industry use cases, definitely with the required compliance. These open-source models also enable on-prem deployment, which addresses the security aspect when it comes to creating models for different core industry applications, so that the models can be fine-tuned or trained across the domain while keeping the compliance and security aspects intact.

Nihar Desai

So, having heard both of these perspectives, Mr. Nag, from your experience standpoint, how do we approach resolving this conflict where all of us concur that we need a better framework for evaluation, but it’s also, in some ways, nobody’s problem at the moment? Is there a way to break this?

Amitabh Nag

So let’s step back and evaluate our conversation itself. Is there a framework by which we can say who has spoken the better language, as judged by how well other people understand it? If the audience is able to understand what I am speaking and what I intend to speak, that is going to be the final evaluation in any respect. What we actually have to look at is reaching a level at which it is acceptable to the people sitting in front of me. I don’t think we will ever reach a situation where we will be able to say this is the best, second best, third best.

Ultimately the audience decides whether they are in a position to accept it. We are looking at a few use cases where we have actually deployed these technologies, and we incidentally go through various evaluations. One of them, incidentally, is grievances, and when we were giving it to the person who is actually the owner of the system, the acceptance was supposed to be taken up by various ministries. One ministry would say this model is better; another ministry would perhaps disagree. It’s a question of perception, and ultimately the audience decides. Some would like the tone of speaking, some would like the modality, some would like the pronunciation.

So it’s all based on what the person’s perception is. Now, is there a common way in which we can say that this is the acceptable thing? Even then we will have differences. Many public figures, for example, when they speak Hindi or English or whatever language, there are gaps in the language, but still they are understood; they are able to connect to the people. So we have a difficult challenge. Rather than looking at it only from the perspective of applications or academics, we have to look at it from the perspective of the audience.

But then we also have use cases which require accurate and perfect transcriptions. For example, if I’m arguing a case in a court, I can’t have variations in terms of language. If I am, for example, in a meeting where I am saying something, again, I cannot have variations. But for that also, we will perhaps have to step two steps back and look at purity of language with respect to acceptance, because most of our language has become impure due to the fact that we are code-mixing most of the time, especially in cosmopolitan areas. And in the other areas, even where we have native languages, dialects are taking over.

So it’s a very complex problem; it’s not an easy problem to solve. At this point in time, when we are looking at how to take it forward, I would tend to say that we should look at what is acceptable to the audience and then work back from there to define an acceptable way in which the models can go out to the market.

Nihar Desai

Yeah, that’s an important point. So far we’ve been looking at it, at least I have, mostly from the lens of application versus academia, but maybe we need to go to a what-works point of view, and not just the traditional ranking point of view. But Thomas, and we’ve not talked about this, so this might be a curveball: in a world where evaluation is slightly subjective and no longer objective, how does the law see this? How do you make decisions for procurement? How do you resolve arguments, differences between two opinions, especially in cases where both might be right and it’s a gray area? Do you foresee these sorts of scenarios coming in, especially with Gen AI?

Thomas J. Vallianeth

To be fair, I think the legal principles on this, at least some of the more privacy-facing or copyright-facing ones, are somewhat clearer; they apply much before outputs are produced or any of these methodologies are implemented, and we have a body of law that has existed for many years in India. It’s just a question of how you lead evidence in relation to some of these matters. So if it ever comes to the question of whether a specific output is right, or whether a specific output implies this or that, I think where we haven’t caught up as a country is in how to evaluate the evidentiary standard in relation to that. The principles are fairly well laid out, saying this is how you would decide it, but what you would show the court as evidence is still evolving. But it also brings me to a larger point, which I think we’re making in the report as well: there is a measure of trust that needs to be put in place in the ecosystem as a whole, right?

Irrespective of what the outcome of evaluation may be, there are measures that you can put in place right from the get-go. One example I can give you is in relation to harmful content. Now, if there is a debate about whether content is harmful or not, and it is a subjective determination, you can avoid that question to some degree by putting in place the necessary rails and safeguards right from the beginning, so that trust is engineered into the process already, as opposed to having to face that choice downstream. But yes, to your point, if we come to a place where we need to face that question, I think the principles exist, but how you lead evidence, how you show the court that one interpretation prevails over the other, is still developing and very, very subjective.

I think some of the cases that the prominent AI players have in the country will go a long way toward developing some of those standards, but at least as of now, the court system is still trying to catch up to some of these principles. Documentation goes a long way to show intent. Methodologies that you have implemented that go to the extent of showing that you assumed reasonably high safeguards, reasonably high principles: all of these go a long way to show intent. And so the subjectivity, I think, is far reduced if you put in place some of these measures that bring trust into the entire ecosystem. So that one flashpoint of failure is perhaps tough for the courts to look at as well.

But if you look at it from an ecosystem perspective, I think there’s a lot that may reduce those flashpoints of failure, or those flashpoints of evaluation, at least from a legal perspective.

Nihar Desai

I see. Thanks for that summarization: that the law as such is at a stage where it can accommodate some amount of subjectivity, but there needs to be dialogue and more policy decisions to make it crisper, and of course that has to follow through into the application of the law. Thanks for that input. The last question is leaving the floor open for any inputs. The topic at hand is challenges and best practices for speech models and data sets at the ecosystem level. From your experiences, are there any open points, any arguments you would like to make, or any sort of call-out to the ecosystem?

Amitabh Nag

The call-out is that many of the things which were indeterministic or unknown a few days back have started coming to a point where we are able to crystallize them. So I think we need to get into more workshops, more discussions, to think about how to do it: take more use cases, study more use cases in detail, to figure out a framework by which acceptability and evaluations are properly benchmarked.

Nihar Desai

That’s a good point. Go ahead, Thomas.

Thomas J. Vallianeth

I have a point to add here, which is, you know, I think there is a certain affinity in this ecosystem towards open-source data sets and open models. I would be more thoughtful in terms of how and when these are suitable. Whether there are particular safeguards you need to put in place for open-source data sets is something you need to think about. Are there end-use considerations that need to be tailored? A good example: I’ve seen a case where somebody is training a model to detect hate speech. Now, the safeguards you would put in place for a data set and model that detect hate speech are different from those for a data set and model you would develop for regular speech-to-speech translation.

So the decision as to what licensing frameworks and what documentation frameworks apply needs to be informed by what end-use case you are pursuing, what unique attributes arise as a result of the specific data sets and applications you are considering, and, finally, what downstream users you are expecting. The choice needs to be made, I think, in a little more of a conscious fashion.

Nihar Desai

Prasanta, you wanted to say something?

Prasanta Ghosh

Sure. Your question actually stimulates me to think about English, I mean, models that were built on American English. There has always been standardization in evaluation; in fact, if you look at the NIST evaluations, there have been various protocols, and there is a call-out every year for who beats the best baseline achieved so far. I believe we have to do this in our country, in India, at least for Indian languages, and it’s very diverse, as we just discussed. So first of all, think about how to evaluate, and then create a national-level framework for evaluation.

And every year, let’s assess ourselves, all these stakeholders, right?


It could be a general evaluation, or application-specific, in each language or dialect. And then we really have a leaderboard; of course, there are many individual leaderboards across the country, but let’s have only one, under Bhashini, let’s say. And that should be elaborate enough to cater to all languages and dialects. Maybe that’s not the right way, but think it through and make sure that every year we make progress in each of those. I think that has to be brought into the system to bring in competitiveness, in a collaborative way, of course, and overall that can help improve voice technology in Indian languages. And the reason I’m saying this is mostly from my understanding and experience with what has happened with English in the past.

Nihar Desai

Yeah, interesting points, Prasanta. I hear you speak passionately about evaluation, and now you’re taking it one step further in terms of how we really create a unified framework for evaluation, in a competitive yet collaborative manner for the ecosystem, housed under a central, impartial entity like Bhashini. That’s a great point. I hope the audience found some of these points helpful and enriching. Thank you so much for making time in what is sure to be a very busy event, and I hope you have a good rest of your day. Thank you. I now invite Mr. Shailendra Pal Singh, Senior General Manager, Bhashini, to felicitate the speakers.

Thank you. Mr. Amitabh Nag, Dr. Prasanta Ghosh, Ms. Kritika K.R., Mr. Thomas J. Vallianeth. I’m Ms. Harleen Kaur. Thank you to all our speakers for walking us through this rich tapestry of voice technologies and their life cycle in the Indian context, and we hope you read our report and the toolkit and find them useful. Thank you so much. Thank you so much to the audience for staying with us patiently throughout this entire hour. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (32)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Diversity of people, languages and cultures makes inclusion a core design requirement rather than an after‑thought”

The knowledge base stresses that diversity of languages, cultures and people is essential for inclusive AI systems, as noted in [S10] and reinforced by Yann LeCun’s comment on the need for multilingual training in [S101].

Confirmedhigh

“Voice AI is a gateway for low‑literacy populations to access public services, health care, education and economic participation, and failure to provide multilingual voice interfaces can reinforce exclusion”

Multiple sources describe multilingual voice AI as a way to bridge digital exclusion and serve low-resource users, e.g., the discussion on multilingual AI bridging gaps in [S73] and the emphasis on voice-driven multilingual interfaces for equity in [S113].

Confirmedmedium

“The initiative is linked to the Hamburg Declaration on Responsible AI for the Sustainable Development Goals”

The Hamburg Declaration on Responsible AI for the SDGs is documented in [S17], confirming the report’s reference to this framework.

Additional Contextmedium

“The policy report and developers toolkit are a product of a German‑Indian partnership”

Broader context on Indo-German AI collaboration is provided in [S111] and the German-Asian AI partnership overview in [S108], which illustrate the existence of such bilateral initiatives.

Additional Context (low)

“Institutionalising sustainable open‑source infrastructure is a pillar of the policy framework”

The importance of open-source solutions for governments in the Global South is highlighted in [S104], adding nuance to the report’s emphasis on open-source infrastructure.

External Sources (118)
S1
EQUAL Global Partnership Research Coalition Annual Meeting | IGF 2023 — Ariana is an aerospace engineer and technology policy specialist who is passionate about creating gender-inclusive innov…
S3
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Nihar Desai- Head of JNI, Panel Discussion Moderator
S4
IGF Retrospective – Past, Present, and Future — – **Nitin Desai** – Role/Title: Former MAG chair (approximately 5 years), chaired the working group on Internet governan…
S5
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S6
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S7
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S8
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Kritika K.R.- Head Artificial Intelligence and Product Researcher, SanLogic
S9
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Prasanta Ghosh(Dr. Prasanta Ghosh) – Associate Professor at the Indian Institute of Science
S10
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — Thank you. Mr. Amitabh Nag Dr. Prasanta Ghosh Dr. Krithika K.I. Mr. Thomas Salenat I’m Ms. Harleen Kaur Thank you to all…
S11
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Thomas J. Vallianeth(Thomas Valunith/Thomas Salenat in transcript) – Counsel, Trilegal
S12
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Thomas J. Vallianeth- Harleen Kaur
S13
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S14
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Kritika K.R.- Amitabh Nag – Prasanta Ghosh- Amitabh Nag
S15
Internet standards and human rights | IGF 2023 WS #460 — Furthermore, there is a pressing need for equal access and inclusion in standard-setting bodies, particularly for civil …
S16
Digital democracy and future realities | IGF 2023 WS #476 — Communities continue to build their own tools and generate content, but they face difficulties in gaining a strong footh…
S17
Hamburg Declaration champions responsible AI — TheHamburg Declaration on Responsible AI for the Sustainable Development Goals (SDGs)is a new global initiative jointly …
S18
Day 0 Event #189 Toward the Hamburg Declaration on Responsible AI for the SDG — – CLAIRE: No role/title mentioned – THIAGO MORAES: Works at the Brazilian Data Protection Authority and PhD researcher …
S19
Multistakeholder Partnerships for Thriving AI Ecosystems — And as I mentioned at the beginning, one of the things that we have been doing with the, as part of the Hamburg Sustaina…
S20
WS #254 The Human Rights Impact of Underrepresented Languages in AI — 3. Facilitating Dialogue: Creating platforms for knowledge sharing and discussion among diverse stakeholders. Gustavo F…
S21
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — ATSUSHI YAMANAKA:Thank you, Mineta-san. Actually, that’s actually a nice segue into the public goods discussions. I thin…
S22
Global Internet Governance Academic Network Annual Symposium | Part 1 | IGF 2023 Day 0 Event #112 — However, this policy approach has sparked substantial critique for its disregard of other significant aspects of data ac…
S23
Keynote by Sangita Reddy Joint Managing Director Apollo Hospitals India AI Impact Summit — “These health systems of the future connect public and private, connect primary care with advanced care, connect researc…
S24
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — And here, India is not waiting for permission. India is not waiting for permission. India is showing that it can be done…
S25
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — ## Dr. Rahman’s Three-Pillar Framework for Inclusive ICT Policies ### Policy Recommendations ### Three-Pillar Policy F…
S26
Leveraging AI4All_ Pathways to Inclusion — The report identified three interconnected pillars essential for inclusive AI: design, access, and investment. The desig…
S27
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — So bridging these gaps, which should be a priority for all of us, requires investment in connectivity and other digital …
S28
Connecting open code with policymakers to development | IGF 2023 WS #500 — Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding th…
S29
Dynamic Coalition Collaborative Session — Eleni argues that inclusive OER ecosystems need backing by institutional frameworks, funding and standards, including op…
S30
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S31
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Isadora Hellegren: Thank you so much, Tatiana. It is really a true pleasure to be here with all of you today. And before…
S32
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ## Introduction and Context Abhishek Singh: Thank you for convening this and bringing this very, very important subject…
S33
Main Session | Dynamic Coalitions — Tatevik Grogryan: I would like to start by saying that we have a number of stakeholders in this cluster, the first one o…
S34
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Fabio Senne: No, yes, I agree with this discussion of the cycle. It’s interesting because if you take, there’s a very st…
S35
Operationalizing data free flow with trust | IGF 2023 WS #197 — In summary, data flow is fundamental to our modern society, as it underpins almost all aspects of our lives. Establishin…
S36
Catalyzing Cyber: Stimulating Cybersecurity Market through Ecosystem Development — Cybersecurity plays a critical role in protecting strategic companies and assets from daily attacks. Saudi Arabian Milit…
S37
WS #484 Innovative Regulatory Strategies to Digital Inclusion — Strong consensus exists on core challenges (coverage vs. meaningful access, device affordability, need for skills) and t…
S38
High-level AI Standards panel — Paul Gaskell: Thank you, Bilel. So, I mean, as a government, we recognize that digital standards really matter. So we’re…
S39
AI That Empowers Safety Growth and Social Inclusion in Action — Multi-layered approach is needed including model requirements, application testing, executive review, and post-launch mo…
S40
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — Lastly, inclusive involvement of the technical community in the policy-making process is advocated. The technical commun…
S41
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — Singapore emphasizes the need for capacity building efforts targeted at the leadership level. They argue that such progr…
S42
PERMANENT MISSION OF THE REPUBLIC OF SINGAPORE UNITED NATIONS NEW YORK — – b) Emphasizing that there is no one-size-fits-all solution to capacity-building, States proposed that efforts to tailo…
S43
Operationalizing data free flow with trust | IGF 2023 WS #197 — It emphasizes the need for balance, regulation, and global alignment to ensure that data flows are both efficient and se…
S44
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — But then when we move on to the, hosting and licensing aspect, long -term infrastructure costs, costs, governance of ope…
S45
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — But then when we move on to the, hosting and licensing aspect, long -term infrastructure costs, costs, governance of ope…
S46
WS #106 Promoting Responsible Internet Practices in Infrastructure — This comment broadened the stakeholder discussion to include the open source community as a critical but often invisible…
S47
Connecting open code with policymakers to development | IGF 2023 WS #500 — Conversely, the potential negative effects of open source were also discussed. The speakers raised concerns regarding th…
S48
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — The discussion revealed growing recognition that complex challenges require coordinated responses from multiple stakehol…
S49
How AI Drives Innovation and Economic Growth — High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) sugg…
S50
Strengthen Digital Governance and International Cooperation to Build an Inclusive Digital Future — These key comments fundamentally shaped the discussion by providing concrete frameworks for understanding abstract chall…
S51
Lightning Talk #7 Privacy Redefined: equitable Access in the AI Age — Low to moderate disagreement level. The speakers generally aligned on identifying problems but differed on solutions and…
S52
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — – Integrating DPGs into broader policy discussions on climate change, education, and healthcare Alicia Buenrostro Massi…
S53
Digital divides & Inclusion — Discussion on whether internet should be a human right or a public good In terms of online content, the importance of l…
S54
AI as a tech ally in saving endangered languages — For this reason, language technology should be treated as public infrastructure. Not as a symbolic cultural initiative, …
S55
Contents — 1. Policy-makers could clearly identify intended objectives (e.g. to improve data privacy and ensure proper collection o…
S56
A Primer — –  data creation, –  collection, –  organization, and –  use. such example is drone swarms which are ‘made up of co…
S57
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 137. UNHCR has established a centralized systematic learning centre overseeing all learning solutions across th…
S58
Qatar’s Open data policy — The scope of the policy includes all government entities that create, store, or manage data and information. It requires…
S59
Exploring Digital Transformation for Economic Empowerment in Africa: Opportunities, Challenges, and Policy Priorities (International Trade and Research Centre, Nigeria) — Currently, there is a lack of metrics to evaluate the impact of policies in the policy space. It is important to develop…
S60
Presentation of outcomes to the plenary — This aligns with SDGs 13 and 14, which call for climate action and the conservation of marine life. Overall, the compreh…
S61
Pre 2: The Council of Europe Framework Convention on AI and Guidance for the Risk and Impact Assessment of AI Systems on Human Rights, Democracy and Rule of Law (HUDERIA) — Jordi Ascensi-Sala focused on the practical implementation of the HUDERIA methodology, which bridges the gap between leg…
S62
Open Forum #30 High Level Review of AI Governance Including the Discussion — Lucia Russo: Thank you, Yoichi. Good morning and thank you my fellow panelists for this interesting discussion. As Yoich…
S63
EU Artificial Intelligence Act — 1. Detailed description of the evaluation strategies, including evaluation results, on the basis of available public eva…
S64
Report by the Commission on the Measurement of Economic Performance and Social Progress — – 29) The information relevant to valuing quality of life goes beyond people’s self-reports and perceptions to include…
S65
Driving Social Good with AI_ Evaluation and Open Source at Scale — However, audience questions revealed tension between this contextual approach and institutional needs for standardizatio…
S66
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — This objective evaluation approach eliminates bias and subjectivity that may arise from teachers’ individual assessment …
S67
Global Standards for a Sustainable Digital Future — Dimitrios Kalogeropoulos: Yeah, hello, everyone. Forgive me, but I will read. So the title for me today is Building Brid…
S68
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Devine Salese Agbeti: Thank you. Firstly, we have to align AI with international human rights standards. In that, for ex…
S69
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S70
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ### Community-Led Development Abhishek Singh: One part is that, of course, the way the technology is evolving, there is…
S71
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — This implies that active engagement and participation from individuals are key factors in driving meaningful discussions…
S72
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Implement layered data strategies using multiple sources (active collection, passive collection, synthetic data) rather …
S73
How Multilingual AI Bridges the Gap to Inclusive Access — Moving beyond the initial 22 constitutional languages to serve broader linguistic diversity requires scalable data colle…
S74
Open Forum #29 Advancing Digital Inclusion Through Segmented Monitoring — Fabio Senne: No, yes, I agree with this discussion of the cycle. It’s interesting because if you take, there’s a very st…
S75
AI That Empowers Safety Growth and Social Inclusion in Action — Thank you very much, Peggy, and thanks for having Microsoft here. So, yeah, I want to start with the inception of our re…
S76
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S77
Operationalizing data free flow with trust | IGF 2023 WS #197 — In summary, the fear of government access to data poses a threat to the free flow of data with trust. Microsoft’s statis…
S78
Expert workshop on the right to privacy in the digital age — The perspective of Internet service providers (ISPs) was provided byMrMike Silber, head of the legal and commercial depa…
S79
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — Emily argues that privacy and criminal justice are not in opposition but can coexist within proper legal frameworks. She…
S80
High-level AI Standards panel — Amandeep Singh Gill reinforced this perspective by advocating for multidisciplinary approaches that embrace socio-techni…
S81
Advancing Scientific AI with Safety Ethics and Responsibility — Evaluation must go beyond model‑centric metrics to include institutional practices, DIY science, and broader socio‑techn…
S82
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S83
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S84
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S85
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S86
Panel 5 – Ensuring Digital Resilience: Linking Submarine Cables to Broader Resilience Goals — This comment emphasizes the critical importance of collaboration while also pushing for concrete actions rather than jus…
S87
Multistakeholder digital governance beyond 2025 — The discussion maintained a constructive and collaborative tone throughout, with speakers sharing both challenges and su…
S88
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S89
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S90
Bridging the Digital Divide: Inclusive ICT Policies for Sustainable Development — The discussion maintained a formal, academic tone throughout, characteristic of a research presentation or conference se…
S91
Session — The tone was primarily analytical and forward-looking, with the speaker presenting evidence-based predictions while ackn…
S92
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S93
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — The tone was consistently optimistic yet pragmatic throughout the conversation. Speakers maintained an encouraging outlo…
S94
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S95
Parliamentary Closing Closing Remarks and Key Messages From the Parliamentary Track — The discussion maintained a collaborative and constructive tone throughout, characterized by diplomatic language and mut…
S96
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S97
Building Inclusive Societies with AI — The discussion maintained a constructive and solution-oriented tone throughout, characterized by: The tone remained con…
S98
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S99
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S100
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S101
Debating Technology / Davos 2025 — Yann LeCun: Well, I think the answer to this is diversity. So, again, if you have two or three AI systems that all com…
S102
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — 8 year old prodigy: Sharing is learning with the rest of the world. One, an AI that is independent. From large global A…
S103
Agenda item 6: other matters — Chair: Thank you very much, France, for your statement. Well, thank you also for the confidence-building measure of sp…
S104
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement t…
S105
https://dig.watch/event/india-ai-impact-summit-2026/from-innovation-to-impact_-bringing-ai-to-the-public — So you are saying that when you make a financial decision, when financial industry or system makes a decision, there may…
S106
DPI High-Level Session — Dr. Yolanda Martinez:heat for those at ITU, and I would like to welcome you, WSIS multistakeholders, DPI ecosystem, and …
S107
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — Martin Wimmer:Thank you. Yesterday morning, I went to Ryoen-Chi. This World Heritage Site in Kyoto and yours is one of t…
S108
GermanAsian AI Partnerships Driving Talent Innovation the Future — Ms. Kofler, please come up. There’s no signs. You can choose in the middle. Next panelist, I would really warmly welcome…
S109
Launch of the eTrade Readiness Assessment of Mauritania (UNCTAD) — It is supported by the financial contribution from GIZ on behalf of the German Federal Ministry for Economic Cooperation…
S110
Digital Trade for Development — In summary, the future of trade is digital, with services, green practices, and inclusivity driving its growth. The expa…
S111
IndoGerman AI Collaboration Driving Economic Development and Soc — Building confidence and security in the use of ICTs | Data governance | Artificial intelligence India’s demographic div…
S112
https://dig.watch/event/india-ai-impact-summit-2026/indogerman-ai-collaboration-driving-economic-development-and-soc — And circular economy. that government, academia, and industry work hand -in -hand. By promoting research and development…
S113
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — “It can deal with multilinguality and voice.”[51]. “There’s firstly a lot of opportunity to bridge some of these inequit…
S114
tABle of Contents — rs, including improved health care, better education, access to a greater number of economic opportunities and greater c…
S115
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — Manal Ismail: Thank you, Ram, and from a government perspective, of course, truly multilingual internet is crucial for d…
S116
Digital Policy Perspectives — The strategy advocates for democracy, rights-respecting policies, and inclusivity across the digital landscape. The stra…
S117
Ministerial Roundtable — Rashad Nabiyev: We can – thank you, thank you. So here we – According to the alphabetical order, so we start with Azerba…
S118
Secure Finance Risk-Based AI Policy for the Banking Sector — Ajay Kumar Chaudhary opened by highlighting India’s opportunity to lead in AI development while managing associated risk…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ariane Ahildur
3 arguments · 126 words per minute · 562 words · 266 seconds
Argument 1
Inclusive Voice as Public Good
EXPLANATION
Voice technology can serve as a natural interface for millions of people who have limited literacy or lack access to conventional digital devices. When voice AI operates in local languages and dialects it unlocks public services, healthcare, education and economic participation, whereas failure to do so risks deepening exclusion.
EVIDENCE
She highlighted that voice is the most natural and powerful interface for those with limited literacy or device access, and that local-language voice AI becomes a gateway to essential services, while its absence can reinforce exclusion [34-38]. She also stressed that responsible, inclusive voice AI is a shared vision beyond a mere technical issue [39-41].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice technology is highlighted as a gateway to digital inclusion for low-literacy users and as a public good in the Bhashini discussion and IGF standards dialogue, and the Hamburg Declaration reinforces its role for sustainable development [S2][S15][S17][S20].
MAJOR DISCUSSION POINT
Inclusive Voice as Public Good
AGREED WITH
Amitabh Nag, Harleen Kaur, Nihar Desai
Argument 2
Responsible AI Aligned with Hamburg Declaration
EXPLANATION
The report aligns its responsible AI principles with the Hamburg Declaration, which calls for AI that serves people and the planet, strengthens inclusion and supports the Sustainable Development Goals. This positions the Indo‑German partnership as a model of cooperation rather than competition.
EVIDENCE
She referenced the Hamburg Declaration on Responsible AI for Sustainable Development Goals, noting its endorsement by over 50 stakeholders and its emphasis that AI should serve people, the planet, inclusion and sustainable development [49-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Hamburg Declaration explicitly frames responsible AI for the SDGs and is cited as the reference point for aligning AI principles in the session [S17][S18][S19].
MAJOR DISCUSSION POINT
Responsible AI Aligned with Hamburg Declaration
Argument 3
Open‑source voice models for nine Indian languages empower diverse stakeholders
EXPLANATION
Ahildur highlights that the Fair Forward initiative has released open voice technologies covering nine Indian languages, which can be freely used by NGOs, state agencies, and companies to build inclusive applications.
EVIDENCE
She states that Fair Forward created open voice technologies for nine Indian languages that can now be used by NGOs, state agencies, and companies [43-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The release of open-source voice models covering nine Indian languages is documented in the Bhashini panel, and India’s public compute infrastructure is presented as an enabling backdrop [S2][S24].
MAJOR DISCUSSION POINT
Open‑source multilingual voice models as public assets
AGREED WITH
Amitabh Nag, Harleen Kaur
Nihar Desai
3 arguments · 131 words per minute · 1767 words · 804 seconds
Argument 1
Data as Digital Public Good
EXPLANATION
Foundational speech datasets should be treated as Digital Public Goods (DPIs/DPGs) so that they are openly available for reuse and innovation. This requires mechanisms that ensure trust, safety and continuous enrichment of the data.
EVIDENCE
He asked whether foundational speech datasets can be treated as DPIs/DPGs and made generally available, emphasizing the need for trust and safety in their creation and use [118-119]. He later summarized that datasets must be “lived-in” and continuously improved through user feedback rather than remaining static [146-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Foundational speech datasets are advocated as Digital Public Goods, with calls for trust, safety and continuous enrichment appearing in the policy brief and cross-border data-flow discussions; a contrasting view notes some jurisdictions still treat data purely as private assets [S2][S21][S15][S22].
MAJOR DISCUSSION POINT
Data as Digital Public Good
AGREED WITH
Ariane Ahildur, Amitabh Nag, Harleen Kaur
Argument 2
Flywheel Model for Ongoing Dataset Enrichment
EXPLANATION
A sustainable ecosystem should create a virtuous flywheel where data collection, model improvement and user feedback continuously reinforce each other. This loop keeps datasets fresh and models increasingly accurate.
EVIDENCE
He posed the question about establishing a flywheel for data goods, asking how ongoing creation and facilitation can be achieved while ensuring trust and safety [117-120]. His later remarks about data needing to be “lived-in” and built upon by users echo this flywheel concept [146-148].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a data-creation flywheel is outlined in the Bhashini session and reinforced by the health-AI summit’s description of a self-reinforcing data loop [S2][S23].
MAJOR DISCUSSION POINT
Flywheel Model for Ongoing Dataset Enrichment
AGREED WITH
Amitabh Nag, Harleen Kaur, Prasanta Ghosh
Argument 3
Policy Framework for Sustainable Open‑Source Infrastructure
EXPLANATION
The proposed policy framework rests on four pillars, one of which institutionalises sustainable open‑source infrastructure and standard‑setting. This creates a stable environment for public‑good data and models.
EVIDENCE
He presented the four-pillar policy framework and highlighted the pillar on institutionalising sustainable open-source infrastructure as a core element of the approach [70-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-pillar policy framework that institutionalises sustainable open-source infrastructure is presented in the discussion, while later IGF remarks raise concerns about verification and procurement of open-source code [S2][S15][S28].
MAJOR DISCUSSION POINT
Policy Framework for Sustainable Open‑Source Infrastructure
AGREED WITH
Harleen Kaur, Thomas J. Vallianeth
Harleen Kaur
6 arguments · 143 words per minute · 1036 words · 432 seconds
Argument 1
Policy Pillars for Inclusion
EXPLANATION
The policy report structures its inclusion strategy around four pillars: treating foundational datasets as public goods, institutionalising sustainable open‑source infrastructure, building open and representative models, and strengthening responsible deployment. Together they provide a roadmap for inclusive voice AI.
EVIDENCE
She outlined the four pillars (public-good data, sustainable open-source infrastructure, open representative models, and responsible deployment) and described each pillar's focus in the presentation [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four-pillar inclusion strategy (public-good data, sustainable open-source, open models, responsible deployment) is detailed in the Bhashini report and aligns with broader inclusive ICT policy recommendations [S2][S15][S26].
MAJOR DISCUSSION POINT
Policy Pillars for Inclusion
AGREED WITH
Amitabh Nag, Prasanta Ghosh, Nihar Desai
Argument 2
Treat Foundational Datasets as Public Goods
EXPLANATION
Foundational speech datasets should be funded and convened as public‑good resources, especially for languages that are not commercially viable. This ensures that essential linguistic diversity is preserved and made accessible.
EVIDENCE
She explicitly stated that treating foundational datasets as public goods involves government support for funding and convening, particularly for non-commercial languages [79-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Treating foundational speech datasets as publicly funded, especially for non-commercial languages, is emphasized in the policy brief and echoed in cross-border data-goods discussions [S2][S21].
MAJOR DISCUSSION POINT
Treat Foundational Datasets as Public Goods
AGREED WITH
Ariane Ahildur, Amitabh Nag, Nihar Desai
Argument 3
Embedding RAI Practices in the Toolkit
EXPLANATION
The developer toolkit translates responsible AI (RAI) principles into concrete practices across the entire development lifecycle, covering representation, data quality, and continuous monitoring. This ensures that developers can build inclusive voice systems from the start.
EVIDENCE
She described how the toolkit embeds RAI through best-practice guidance on representation, data quality, evaluation and lifecycle integration, providing concrete steps for developers [90-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The developer toolkit translates Responsible AI principles into concrete practices, drawing on the Hamburg Declaration’s RAI framework and multistakeholder responsible-AI initiatives [S17][S15][S19].
MAJOR DISCUSSION POINT
Embedding RAI Practices in the Toolkit
Argument 4
Institutionalising Open‑Source Governance and Standards
EXPLANATION
Governments should act as stewards and standard‑setters for open‑source voice ecosystems, establishing governance frameworks, documentation standards and collaborative data‑steward models. This creates trustworthy, interoperable resources for the community.
EVIDENCE
She advocated for governments to act as ecosystem conveners and standard setters and to institutionalise sustainable open-source infrastructure, noting the need for standardised documentation and collaborative stewardship models [71-76], with further emphasis on documentation and standardisation in later slides [84-85].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for government stewardship, standard-setting and collaborative data-steward models are supported by IGF calls for inclusive standard-setting, while later remarks highlight verification and procurement challenges for open-source code [S15][S28][S29].
MAJOR DISCUSSION POINT
Institutionalising Open‑Source Governance and Standards
AGREED WITH
Nihar Desai, Thomas J. Vallianeth
Argument 5
Continuous post‑deployment monitoring and standardized documentation
EXPLANATION
Kaur advocates for ongoing monitoring of voice AI systems after deployment, using standardized model cards, data cards, and transcription benchmarks to ensure responsible performance over time.
EVIDENCE
She references robust transcription standards, contextual benchmarks, data cards, model cards, and continuous post-deployment monitoring as part of the toolkit recommendations [103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to use model cards, data cards and benchmark suites for ongoing monitoring aligns with IGF discussions on standardized documentation and legal safeguards for open models [S15][S28].
MAJOR DISCUSSION POINT
Post‑deployment monitoring and documentation
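The documentation and monitoring practices Kaur describes can be illustrated with a minimal sketch. The field names and numbers below are hypothetical, not a schema prescribed by the panel; the point is that a model card records a documented baseline that post-deployment checks can be compared against.

```python
# A minimal, hypothetical model-card record; field names and values are
# illustrative only, not a prescribed schema from the session.
model_card = {
    "model": "voice-asr-demo",
    "languages": ["hi", "ta"],
    "training_data": "see data card: voice-corpus-demo",
    "benchmarks": {"wer_clean": 0.12, "wer_noisy": 0.28},  # documented baselines
    "intended_use": "speech transcription for public services",
    "monitoring": "periodic post-deployment WER checks on fresh audio",
}

def drift_alert(baseline_wer, observed_wer, tolerance=0.05):
    """Flag when post-deployment error drifts beyond the documented baseline."""
    return observed_wer > baseline_wer + tolerance
```

A monitoring job would then call `drift_alert(model_card["benchmarks"]["wer_clean"], observed)` on each evaluation cycle and escalate when it returns True.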
Argument 6
Synthetic and layered data strategies to enhance representation
EXPLANATION
Kaur proposes using synthetic data generation and a hybrid, layered data collection approach to broaden linguistic representation while avoiding reliance on a single data source.
EVIDENCE
She describes having a diversity wish list, using synthetic data, and employing a hybrid layered structure to make models more diverse, emphasizing not to collect data from only one source [97-100].
MAJOR DISCUSSION POINT
Synthetic and layered data for inclusive AI
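The hybrid, layered strategy Kaur outlines — a diversity wish list filled from several sources rather than one — can be sketched as a quota split. The targets and layer shares below are assumed for illustration.

```python
# Hypothetical illustration of a hybrid, layered data-collection plan: a
# per-dialect "diversity wish list" sets targets, and no single source
# (field recordings, product logs, synthetic audio) dominates.
WISH_LIST = {"dialect_a": 1000, "dialect_b": 600}  # target utterances (assumed)
LAYER_MIX = {"field_recordings": 0.5, "product_logs": 0.3, "synthetic": 0.2}

def layered_quotas(wish_list, layer_mix):
    """Split each dialect's collection target across the layers."""
    return {
        dialect: {layer: round(target * share) for layer, share in layer_mix.items()}
        for dialect, target in wish_list.items()
    }

plan = layered_quotas(WISH_LIST, LAYER_MIX)
```

Capping any one layer's share operationalises the warning not to collect data from only one source.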
Amitabh Nag
6 arguments · 162 words per minute · 1513 words · 558 seconds
Argument 1
Inclusion by Design and Continuous Upgrade
EXPLANATION
AI systems must be built with inclusion and diversity baked into their design, recognizing that language, culture and individual differences evolve rapidly. Consequently, models need continual updates rather than static, long‑lived deployments.
EVIDENCE
He explained that diversity of persons, languages and cultures makes inclusion a core design element, and that AI systems lack warranties and must be continuously upgraded, unlike static machines [9-16] and [5-6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Design-first inclusion and the need for continual model updates are echoed in AI4All’s design pillar and IGF calls for inclusive standard-setting [S26][S15].
MAJOR DISCUSSION POINT
Inclusion by Design and Continuous Upgrade
AGREED WITH
Ariane Ahildur, Harleen Kaur, Nihar Desai
Argument 2
Continuous, Feedback‑Driven Data Creation
EXPLANATION
Data creation should be an ongoing process that combines brute‑force collection with feedback loops from deployed products, generating primary and improvement corpora that are fed back into models. This creates a self‑reinforcing data ecosystem.
EVIDENCE
He described two approaches: traditional field collection to capture diversity and leveraging product usage to automatically generate parallel corpora, followed by annotation and feedback pipelines that continuously enrich the model [121-144].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Feedback loops that turn product usage into new training data are described in the flywheel model and reinforced by the health-AI summit’s example of a self-reinforcing data ecosystem [S23][S2].
MAJOR DISCUSSION POINT
Continuous, Feedback‑Driven Data Creation
AGREED WITH
Harleen Kaur, Prasanta Ghosh, Nihar Desai
DISAGREED WITH
Prasanta Ghosh
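The feedback pipeline Nag describes — field-collected primary corpora plus corrections harvested from deployed products — can be sketched minimally. All names and records here are assumed for illustration; the code only shows the shape of the flywheel, not any production system.

```python
# A minimal sketch (assumed names and data) of the feedback flywheel:
# user-flagged mismatches from a deployed product become an improvement
# corpus that is merged with field-collected primary data each cycle.
primary_corpus = [("audio_001", "field-collected transcription")]
flagged_feedback = [
    # (audio id, model output, user-corrected target)
    ("audio_777", "model-generated summary", "user-corrected summary"),
]

def build_improvement_corpus(feedback):
    """Turn flagged mismatches into (input, corrected-target) training pairs."""
    return [(audio, corrected) for audio, _model_out, corrected in feedback]

def next_training_corpus(primary, feedback):
    """Each release cycle retrains on primary plus improvement data."""
    return primary + build_improvement_corpus(feedback)

corpus = next_training_corpus(primary_corpus, flagged_feedback)
```

Annotation would sit between `flagged_feedback` and `build_improvement_corpus` in practice; the loop is what makes the ecosystem self-reinforcing.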
Argument 3
Audience Perception as Evaluation Criterion
EXPLANATION
The ultimate measure of a voice model’s success is whether the audience understands and accepts it; perception, tone and pronunciation matter more than abstract metrics. Different stakeholders may rank models differently based on their own expectations.
EVIDENCE
He argued that evaluation is decided by the audience’s ability to understand the output, noting varied preferences across ministries and contexts, and emphasizing perception over absolute rankings [256-270].
MAJOR DISCUSSION POINT
Audience Perception as Evaluation Criterion
AGREED WITH
Prasanta Ghosh, Thomas J. Vallianeth
DISAGREED WITH
Prasanta Ghosh, Thomas J. Vallianeth
Argument 4
Leveraging Product Feedback Loops for Model Improvement
EXPLANATION
Enterprise users should be able to flag mismatches between AI‑generated summaries and manual expectations, feeding these corrections back into the model to drive continuous improvement. Such conscious feedback programs turn every interaction into a data source.
EVIDENCE
He gave the example of a user noticing a summary discrepancy, recording it, and feeding it back into the model, highlighting the need for systematic feedback mechanisms in enterprise applications [139-144].
MAJOR DISCUSSION POINT
Leveraging Product Feedback Loops for Model Improvement
Argument 5
Scaling inclusive voice AI solutions globally
EXPLANATION
Nag argues that the challenges and solutions discussed should extend beyond India to regions such as Southeast Asia and Africa, requiring scalable policies and toolkits that can address diverse linguistic and cultural contexts worldwide.
EVIDENCE
He mentions the importance of scaling solutions to Southeast Asia, Africa, and other places, and highlights the need for replicable policies, standards, and toolkits that can be adapted across regions [1-3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push to extend Indian-origin voice AI solutions to Southeast Asia and Africa is supported by India’s public compute infrastructure rollout and multistakeholder partnership narratives [S24][S15][S19].
MAJOR DISCUSSION POINT
Global scaling of inclusive voice AI
AGREED WITH
Ariane Ahildur, Harleen Kaur
Argument 6
Replicable policy and toolkit frameworks as enablers
EXPLANATION
Nag stresses that the policies, standards, and toolkits developed for voice AI should be designed for replication, enabling other organizations and countries to adopt proven approaches without reinventing the wheel.
EVIDENCE
He states that there are policies, standards, and toolkits which have been developed and can actually be replicated [3].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The notion that policies, standards and toolkits should be designed for replication is highlighted in multistakeholder partnership reports and the Bhashini policy framework [S19][S2].
MAJOR DISCUSSION POINT
Replicable frameworks for voice AI deployment
AGREED WITH
Ariane Ahildur, Harleen Kaur
DISAGREED WITH
Thomas J. Vallianeth
Prasanta Ghosh
2 arguments · 160 words per minute · 1184 words · 443 seconds
Argument 1
Need for Multi‑Layered, Context‑Sensitive Evaluation
EXPLANATION
Because human annotators disagree on transcriptions, evaluation must move beyond simple word‑error‑rate metrics to multi‑layered, context‑aware approaches that consider variability, downstream task tolerance and subjective judgments. A combination of objective and human‑centric assessments is required.
EVIDENCE
He recounted instances where two annotators from nearby villages produced different transcriptions, arguing that evaluation should accommodate such variability through multi-layered metrics, alternative outputs and downstream task-specific assessments [228-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for multi-layered, context-aware evaluation metrics align with IGF discussions on inclusive standards and the need for robust evaluation frameworks for open models [S15][S28].
MAJOR DISCUSSION POINT
Need for Multi‑Layered, Context‑Sensitive Evaluation
AGREED WITH
Amitabh Nag, Thomas J. Vallianeth
DISAGREED WITH
Amitabh Nag, Thomas J. Vallianeth
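One way to operationalise Ghosh's point about annotator variability — this is a sketch of a possible approach, not a method the panel specified — is to score a hypothesis against several annotators' transcriptions and keep the best match, so legitimate variation is not penalised as error.

```python
def wer(reference, hypothesis):
    """Word error rate via Levenshtein distance over word tokens."""
    r, h = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between r[:i] and h[:j]
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,      # deletion
                          d[i][j - 1] + 1,      # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / max(len(r), 1)

def multi_reference_wer(references, hypothesis):
    """Score against several annotators' transcriptions; keep the best match
    so inter-annotator variability is not counted as model error."""
    return min(wer(ref, hypothesis) for ref in references)
```

A fuller multi-layered framework would add downstream task-specific and human-perception layers on top of such token-level scores.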
Argument 2
Intrinsic linguistic component modeling to reduce data collection costs
EXPLANATION
Ghosh suggests that instead of brute‑force data gathering across all dialects, modeling can start from intrinsic linguistic bases (e.g., Indo‑Aryan vs Dravidian families) and then expand to dialectal variations, lowering cost and time.
EVIDENCE
He explains the concept of intrinsic basis components, using the example of Indian language families and balancing data collection with modeling to achieve coverage with reduced resources [160-168].
MAJOR DISCUSSION POINT
Efficient modeling via intrinsic linguistic components
AGREED WITH
Amitabh Nag, Harleen Kaur, Nihar Desai
DISAGREED WITH
Amitabh Nag
Kritika K.R.
4 arguments · 138 words per minute · 432 words · 186 seconds
Argument 1
Application‑Level Evaluation and Domain Adaptation
EXPLANATION
Different industry domains require tailored evaluation and model adaptation; voice AI must be fine‑tuned to specific jargon, workflows and compliance needs. Leveraging LLMs and custom data sets helps align models with sector‑specific requirements.
EVIDENCE
She described how industry applications need domain-specific data, custom tuning and compliance considerations, and how LLMs can be adapted to industry jargon and workflows, enabling on-premise deployment for security [190-199] and further detailed the process of sourcing open-source data, fine-tuning and compliance for specific use cases [245-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Domain-specific tuning and the use of LLMs for industry jargon are discussed in AI4All’s application pillar and IGF notes on sector-specific AI deployment [S26][S15].
MAJOR DISCUSSION POINT
Application‑Level Evaluation and Domain Adaptation
Argument 2
Scalable, Edge‑Ready Infrastructure for Enterprise Use
EXPLANATION
For widespread adoption, voice models must be lightweight, scalable and capable of running on edge devices, ensuring low latency and data‑privacy for enterprise deployments across sectors such as healthcare, manufacturing and logistics.
EVIDENCE
She highlighted the need for scalable, sustainable infrastructure, optimized models and edge deployments to enable real-world adoption across multiple industries [196-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for lightweight, edge-deployable voice models is reinforced by India’s public compute capacity plan and AI4All’s emphasis on scalable, sustainable AI infrastructure [S24][S26].
MAJOR DISCUSSION POINT
Scalable, Edge‑Ready Infrastructure for Enterprise Use
Argument 3
On‑premise deployment for security and compliance in enterprise AI
EXPLANATION
Kritika emphasizes that deploying voice AI models on‑premises allows enterprises to maintain data security, meet compliance requirements, and tailor models to sector‑specific needs.
EVIDENCE
She notes that open-source models enable on-prem deployment, which supports security and compliance for different core industry applications [254-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
On-premise deployment for data security and compliance is highlighted alongside concerns about procurement law compliance for open-source software in IGF sessions [S28].
MAJOR DISCUSSION POINT
On‑premise deployment for secure enterprise AI
Argument 4
Combining voice AI with large language models for domain‑specific adaptation
EXPLANATION
She argues that integrating voice AI with LLMs facilitates customization to industry jargon and workflows, improving performance in specialized sectors.
EVIDENCE
She describes how LLMs can be adapted to industry jargon and core workflows, enabling models to be fine-tuned for specific use cases [250-254].
MAJOR DISCUSSION POINT
Voice AI + LLM integration for domain adaptation
Thomas J. Vallianeth
5 arguments · 171 words per minute · 1138 words · 397 seconds
Argument 1
Legal Foundations for Data Ownership and Privacy
EXPLANATION
Voice datasets sit at the intersection of privacy and copyright law; therefore, projects must assess provenance, secure appropriate licences and apply privacy‑enhancing technologies from the outset. Robust documentation is essential to maintain a trusted downstream ecosystem.
EVIDENCE
He explained that publicly available data may still be copyrighted, requiring careful provenance checks and licences, and that privacy-enhancing techniques can be used to avoid personal data capture; he also stressed the need for strong documentation to enable safe downstream use [208-214][215-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of provenance, licensing and privacy-enhancing technologies for voice datasets is discussed in IGF standards and legal-risk analyses, with additional critique on data-as-public-good policies in certain jurisdictions [S15][S28][S22].
MAJOR DISCUSSION POINT
Legal Foundations for Data Ownership and Privacy
DISAGREED WITH
Amitabh Nag
Argument 2
Subjectivity, Evidence, and Trust in Legal Evaluation
EXPLANATION
Legal assessment of AI outputs involves subjective judgments; establishing trust requires clear documentation, privacy protections and demonstrable safeguards from the outset. Courts will need evidentiary standards that reflect these safeguards.
EVIDENCE
He discussed how subjectivity in evaluating harmful content can be mitigated by embedding safeguards early, and how documentation and demonstrated high-level safeguards reduce evidentiary uncertainty for courts [285-298].
MAJOR DISCUSSION POINT
Subjectivity, Evidence, and Trust in Legal Evaluation
Argument 3
Documentation, Licensing, and Privacy‑Enhancing Measures
EXPLANATION
Effective legal compliance hinges on clear documentation of data provenance, appropriate open‑source licences and the use of privacy‑enhancing technologies. These measures protect both data subjects and downstream users.
EVIDENCE
He reiterated the importance of documenting copyright provenance, applying suitable licences and employing privacy-enhancing techniques to ensure that downstream usage remains lawful and trustworthy [208-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robust documentation, appropriate open-source licences and privacy-enhancing techniques are emphasized in the open-source legal safeguards discussion [S28].
MAJOR DISCUSSION POINT
Documentation, Licensing, and Privacy‑Enhancing Measures
AGREED WITH
Harleen Kaur, Nihar Desai
Argument 4
End‑Use Safeguards and Licensing Choices for Open Models
EXPLANATION
Licensing and safeguards must be chosen based on the intended downstream application; a model for hate‑speech detection requires different controls than one for speech‑to‑speech translation. Tailoring licences and safeguards to use‑case reduces risk.
EVIDENCE
He gave the example of differing safeguards for hate-speech detection versus speech-to-speech translation, arguing that licensing frameworks and documentation should reflect the specific end-use and downstream user expectations [311-313].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tailoring licences and safeguards to specific downstream applications is recommended in the IGF open-source licensing debate [S28].
MAJOR DISCUSSION POINT
End‑Use Safeguards and Licensing Choices for Open Models
Argument 5
National‑level evaluation framework for Indian language AI
EXPLANATION
Vallianeth calls for the creation of a standardized, country‑wide evaluation framework and regular benchmarking contests to assess Indian language speech models, fostering both competition and collaboration.
EVIDENCE
He mentions the need for a national level framework for evaluation, referencing NIST-style protocols and annual assessments to track progress across languages and dialects [313-314].
MAJOR DISCUSSION POINT
National evaluation framework for Indian language AI
AGREED WITH
Prasanta Ghosh, Amitabh Nag
DISAGREED WITH
Amitabh Nag, Prasanta Ghosh
Moderator
1 argument · 68 words per minute · 267 words · 232 seconds
Argument 1
Multi‑stakeholder convening as catalyst for voice AI ecosystem
EXPLANATION
The moderator stresses that bringing together representatives from government, industry, academia, and civil society is essential to foster collaboration, share expertise, and accelerate the development and deployment of inclusive voice technologies.
EVIDENCE
The moderator invites participants from GIZ, Tri-Legal, Art Park, NASSCOM, and Digital Futures Lab to the stage and later assembles a panel that includes the CEO of DIBD, an academic professor, an AI product researcher, and legal counsel, demonstrating a deliberate effort to create a multi-sector dialogue [57-61][112-113].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of multi-stakeholder dialogue for inclusive voice AI is underscored in IGF calls for broader participation in standard-setting and in multistakeholder partnership reports [S15][S19][S20].
MAJOR DISCUSSION POINT
Multi‑stakeholder convening for ecosystem development
Agreements
Agreement Points
Voice technology and speech datasets should be treated as public goods to promote digital inclusion
Speakers: Ariane Ahildur, Amitabh Nag, Harleen Kaur, Nihar Desai
Inclusive Voice as Public Good · Inclusion by Design and Continuous Upgrade · Treat Foundational Datasets as Public Goods · Data as Digital Public Good
All four speakers stress that voice AI and foundational speech datasets must be openly available and designed for inclusion, especially for low-literacy and underserved populations, so that they become public assets that can be replicated and scaled. [34-38][9-16][79-81][118-119][146-148]
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the Digital Public Goods framework that advocates treating language and AI technologies as shared infrastructure for inclusion [S52] and echoes calls to treat language technology as public infrastructure for preserving endangered languages [S54].
Datasets need continuous, feedback‑driven enrichment (flywheel model) rather than static collection
Speakers: Amitabh Nag, Harleen Kaur, Prasanta Ghosh, Nihar Desai
Continuous, Feedback‑Driven Data Creation · Policy Pillars for Inclusion · Intrinsic linguistic component modeling to reduce data collection costs · Flywheel Model for Ongoing Dataset Enrichment
The speakers agree that data creation must be an ongoing process that combines field collection with automatic generation from product usage and smart linguistic modeling, creating a virtuous flywheel that continuously improves models. [121-144][89-100][160-168][117-120][146-148]
Current evaluation metrics are insufficient; a multi‑layered, context‑sensitive evaluation framework is needed
Speakers: Prasanta Ghosh, Amitabh Nag, Thomas J. Vallianeth
Need for Multi‑Layered, Context‑Sensitive Evaluation · Audience Perception as Evaluation Criterion · National‑level evaluation framework for Indian language AI
All three highlight that simple word-error-rate scores do not capture the variability of human transcription or downstream task requirements, calling for richer, multi-dimensional evaluation methods and a national benchmarking system. [228-244][256-270][313-314]
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects concerns raised about the need for context-sensitive evaluation versus standardized benchmarks, as discussed in the AI governance debate on evaluation frameworks [S65] and the EU AI Act’s requirement for detailed evaluation strategies [S63].
Sustainable open‑source infrastructure and governance must be institutionalised with clear documentation and licensing
Speakers: Harleen Kaur, Nihar Desai, Thomas J. Vallianeth
Institutionalising Open‑Source Governance and Standards · Policy Framework for Sustainable Open‑Source Infrastructure · Documentation, Licensing, and Privacy‑Enhancing Measures
The panel concurs that governments should act as stewards, setting standards, maintaining robust documentation, and applying appropriate licences and privacy-enhancing techniques to build trustworthy open-source voice ecosystems. [71-76][84-85][70-73][208-218]
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes the challenges identified regarding long-term hosting, licensing costs and governance of open-source assets [S44] and the call to recognize the open-source community as a critical stakeholder in policy design [S46].
Policies, toolkits and open models should be designed for replication and global scaling
Speakers: Amitabh Nag, Ariane Ahildur, Harleen Kaur
Scaling inclusive voice AI solutions globally · Open‑source voice models for nine Indian languages empower diverse stakeholders · Replicable policy and toolkit frameworks as enablers
These speakers emphasize that the frameworks, toolkits and open voice models developed in India are intended to be replicated and adapted for other regions such as Southeast Asia and Africa, enabling broader impact. [1-3][19][43-45][3]
POLICY CONTEXT (KNOWLEDGE BASE)
Consistent with the multistakeholder ecosystem approach for scaling digital infrastructure highlighted in IGF discussions on inclusive participation [S48] and the emphasis on capacity-building for scalable solutions [S41].
Similar Viewpoints
Both stress that licensing, safeguards and deployment choices must be aligned with the specific downstream application—e.g., hate‑speech detection versus speech‑to‑speech translation—to ensure security and compliance. [254-259][311-313]
Speakers: Kritika K.R., Thomas J. Vallianeth
On‑premise deployment for security and compliance in enterprise AI · End‑Use Safeguards and Licensing Choices for Open Models
Both frame voice technology and foundational speech data as public goods that require open access, trust, and safety mechanisms. [34-38][118-119]
Speakers: Ariane Ahildur, Nihar Desai
Inclusive Voice as Public Good · Data as Digital Public Good
Both recognize that domain‑specific needs (dialects, industry jargon) demand tailored data collection and modeling strategies to balance cost and performance. [160-168][245-254]
Speakers: Prasanta Ghosh, Kritika K.R.
Intrinsic linguistic component modeling to reduce data collection costs · Application‑Level Evaluation and Domain Adaptation
Unexpected Consensus
Importance of documentation and early safeguards to build trust across legal and technical domains
Speakers: Thomas J. Vallianeth, Amitabh Nag
Legal Foundations for Data Ownership and Privacy · Audience Perception as Evaluation Criterion
While Thomas discusses documentation to satisfy legal evidentiary standards, Nag emphasizes audience understanding as the ultimate metric; both nonetheless converge on the idea that clear, upfront documentation and safeguards are essential for trustworthiness of AI systems. [208-218][256-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Matches the emphasis on building legal certainty and trust in data flows through clear documentation [S43] and the HUDERIA methodology that bridges legal and technical safeguards [S61].
Recognition that evaluation challenges are a shared responsibility rather than belonging to a single stakeholder group
Speakers: Prasanta Ghosh, Harleen Kaur, Amitabh Nag
Need for Multi‑Layered, Context‑Sensitive Evaluation · Embedding RAI Practices in the Toolkit · Audience Perception as Evaluation Criterion
Academic (Ghosh), policy (Kaur) and industry (Nag) participants all agree that evaluation must be multi-dimensional, involve responsible AI practices, and consider end-user perception, indicating a cross-sector consensus on rethinking evaluation. [228-244][90-108][256-270]
POLICY CONTEXT (KNOWLEDGE BASE)
Aligns with the call for multi-stakeholder collaboration in AI governance, noting the open-source community’s role and the need to avoid regulatory overreach [S46].
Overall Assessment

There is strong consensus that inclusive voice AI must be treated as a public good, that data and models require continuous, feedback‑driven enrichment, that open‑source governance and robust documentation are essential, and that evaluation metrics need to evolve beyond simple error rates to multi‑layered, context‑aware frameworks. Participants also agree on the need for scalable, replicable policies and toolkits to extend impact globally.

High consensus across government, academia, industry and legal stakeholders, indicating a solid foundation for coordinated policy action, standard‑setting and investment in sustainable voice AI ecosystems.

Differences
Different Viewpoints
Evaluation methodology for voice AI systems
Speakers: Amitabh Nag, Prasanta Ghosh, Thomas J. Vallianeth
Audience Perception as Evaluation Criterion · Need for Multi‑Layered, Context‑Sensitive Evaluation · National‑level evaluation framework for Indian language AI
Nag argues that the ultimate measure of a model is whether the audience understands and accepts it, rejecting absolute rankings and emphasizing perception [256-270]. Prasanta counters that human annotator variability makes word-error-rate insufficient and calls for multi-layered, context-aware metrics that combine objective and subjective assessments [228-244]. Thomas adds that a standardized, country-wide evaluation framework with regular benchmarking (similar to NIST) is needed to provide comparable, objective scores across languages and dialects [313-314].
POLICY CONTEXT (KNOWLEDGE BASE)
The EU AI Act specifies detailed evaluation criteria and methodologies for AI systems, providing a policy backdrop for debates on voice AI evaluation [S63].
Approach to data collection and corpus creation
Speakers: Amitabh Nag, Prasanta Ghosh
Continuous, Feedback‑Driven Data Creation · Intrinsic linguistic component modeling to reduce data collection costs
Nag proposes a two-pronged strategy: continued brute-force field collection to capture diversity, plus leveraging product usage to generate primary and improvement corpora that feed back into models [124-131]. Prasanta suggests a more efficient route that starts from intrinsic linguistic bases (e.g., Indo-Aryan vs Dravidian families) and then expands to dialects, lowering the need for exhaustive data gathering [160-168].
POLICY CONTEXT (KNOWLEDGE BASE)
Highlights the data lifecycle stages (creation, collection, organization, use) outlined in AI data management primers [S56] and the open-data policy that governs data collection practices while balancing privacy concerns [S58].
Legal safeguards versus open‑source public‑good framing
Speakers: Thomas J. Vallianeth, Amitabh Nag
Legal Foundations for Data Ownership and Privacy · Replicable policy and toolkit frameworks as enablers
Thomas stresses that voice datasets, even if publicly available, may be copyrighted and require careful provenance checks, appropriate licences, privacy-enhancing techniques, and robust documentation to ensure downstream trust [208-218]. Nag treats open-source voice technologies as public goods that can be replicated and scaled globally, focusing on inclusion and continuous upgrade without foregrounding legal constraints [3][16].
POLICY CONTEXT (KNOWLEDGE BASE)
Reflects the tension between fostering open-source commons and ensuring legal compliance, as discussed in the context of data free flow with trust [S43] and concerns about procurement and licensing laws for open-source code [S47].
Unexpected Differences
Subjectivity of evaluation versus demand for objective national benchmarks
Speakers: Amitabh Nag, Thomas J. Vallianeth
Audience Perception as Evaluation Criterion · National‑level evaluation framework for Indian language AI
While both aim for trustworthy AI, Nag dismisses the possibility of objective ranking, stating that evaluation is decided by audience perception and cannot produce a “best” model [256-270]. Thomas, however, calls for a standardized national framework with regular benchmarking contests to create comparable, objective metrics [313-314]. This clash between a perception-based view and a formalized, metric-driven approach was not anticipated given their shared focus on reliability.
POLICY CONTEXT (KNOWLEDGE BASE)
Echoes the observed tension between contextual evaluation approaches and the institutional demand for standardized benchmarks noted in AI governance forums [S65] and the EU AI Act’s objective evaluation requirements [S63].
Open‑source scaling versus legal licensing and privacy concerns
Speakers: Amitabh Nag, Thomas J. Vallianeth
Replicable policy and toolkit frameworks as enablers · Legal Foundations for Data Ownership and Privacy
Nag promotes open-source voice technologies as freely replicable public goods for rapid scaling [3][16], whereas Thomas warns that even open datasets may be subject to copyright and privacy law, requiring careful licensing, provenance checks, and documentation to protect downstream users [208-218][311-313]. The tension between an unrestricted open-source vision and a cautious legal compliance stance was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Corresponds to discussions on long-term licensing costs and sustainability of open-source assets [S44], as well as privacy and legal restrictions on data release in open-data policies [S58] and procurement law constraints on open-source use [S47].
Overall Assessment

The discussion revealed three principal fault lines: (1) how to evaluate voice AI—whether through audience perception, multi‑layered/context‑aware metrics, or standardized national benchmarks; (2) the optimal data‑collection strategy—brute‑force field work plus product feedback versus linguistically‑informed modeling to cut costs; (3) the balance between an open‑source public‑good mindset and the legal safeguards required for copyright and privacy. While participants share common goals of inclusivity, continuous improvement, and multi‑stakeholder collaboration, they diverge on concrete pathways to achieve these goals.

Moderate to high. The disagreements are substantive enough to affect policy design, funding allocations, and implementation road‑maps, requiring coordinated effort to reconcile technical, legal, and evaluation perspectives for a coherent voice AI ecosystem.

Partial Agreements
All participants concur that datasets cannot be static; they should be continuously enriched through feedback loops, post‑deployment monitoring, and community contributions, ensuring models improve over time [124-131][121-144][103][146-148].
Speakers: Amitabh Nag, Prasanta Ghosh, Harleen Kaur, Nihar Desai
Continuous, Feedback‑Driven Data Creation · Flywheel model for ongoing dataset enrichment · Continuous post‑deployment monitoring and standardized documentation · Datasets must be “lived‑in” and built upon by users
The speakers share the goal of treating foundational speech datasets as digital public goods that are openly available to support inclusion and public services, especially for low‑literacy and non‑commercial language communities [118-119][146-148][79-81][34-38].
Speakers: Nihar Desai, Harleen Kaur, Ariane Ahildur
Data as Digital Public Good · Treat Foundational Datasets as Public Goods · Inclusive Voice as Public Good
All agree that a multi‑stakeholder, collaborative approach—bringing together government, industry, academia, and civil society—is essential to accelerate inclusive voice AI development and deployment [57-61][112-113][41-43][26-28].
Speakers: Moderator, Ariane Ahildur, Harleen Kaur
Multi‑stakeholder convening as catalyst for voice AI ecosystem · Indo‑German partnership and cooperation · Joint effort involving distinguished partners and experts
Takeaways
Key takeaways
Inclusive voice AI must be treated as a public good, requiring continuous, diversity‑aware design and regular updates (Nag, Ahildur).
Foundational speech datasets should be created, maintained, and enriched as digital public goods through ongoing collection, user feedback loops, and open‑domain pipelines (Nag, Desai, Kaur).
A four‑pillar policy framework is proposed: treat data as public goods, institutionalise sustainable open‑source infrastructure, build open and representative models, and strengthen responsible deployment (Kaur).
Evaluation of speech systems needs multi‑layered, context‑sensitive metrics that go beyond simple word‑error‑rate, incorporating audience perception, downstream task tolerance, and human judgment (Ghosh, Nag, K.R.).
Legal and governance aspects must address copyright, privacy, licensing, and documentation from the outset to ensure trustworthy, compliant ecosystems (Vallianeth).
Open‑source stewardship, standardisation, and national‑level benchmarking are essential for scaling inclusive voice technologies across India’s linguistic diversity (Ghosh, Desai).
Industry adoption hinges on scalable, edge‑ready infrastructure, domain‑specific model fine‑tuning, and safeguards that align with compliance and security requirements (K.R., Nag).
Resolutions and action items
Commit to treat foundational speech datasets as Digital Public Goods and to fund/convene efforts for under‑served languages (policy recommendation).
Develop and publish a developer toolkit that embeds Responsible AI practices, diversity planning, data‑quality checks, and post‑deployment monitoring (Kaur).
Establish a continuous data flywheel: collect primary corpora, generate improvement corpora from deployed products, and feed back into model training (Nag).
Initiate regular workshops and stakeholder meetings to co‑design a national, multi‑layered evaluation framework and annual benchmarking leaderboard for Indian languages (Ghosh, Desai).
Implement documentation standards, privacy‑enhancing techniques, and clear licensing strategies for all datasets and models from the start (Vallianeth).
Encourage governments to act as ecosystem stewards and standard‑setters, not only regulators, by supporting open‑source infrastructure and public‑good funding mechanisms (Kaur, Desai).
Unresolved issues
Exact methodology for a unified, India‑wide evaluation benchmark that accommodates linguistic variability and subjective audience perception. How to balance cost‑effective data collection with the need for comprehensive dialect coverage without a clear, agreed‑upon trade‑off model. Legal evidentiary standards for disputes over AI outputs and how courts will assess compliance with privacy and copyright requirements. Specific mechanisms for ensuring end‑use safeguards and licensing choices for open‑source models in sensitive applications (e.g., hate‑speech detection). Operational details for scaling edge‑deployment infrastructure across diverse industry sectors.
Suggested compromises
Adopt a hybrid data‑collection strategy: combine brute‑force primary corpus gathering with targeted intrinsic‑component sampling to reduce cost while preserving diversity (Ghosh, Nag). Use both objective metrics (e.g., error rates) and subjective audience‑perception assessments to evaluate models, acknowledging that perfect ranking may be unattainable (Nag, Ghosh). Allow open‑source datasets for general use but require additional licensing or safeguards for high‑risk applications, tailoring the approach to end‑use scenarios (Vallianeth). Blend government stewardship with community‑driven open‑source governance, sharing responsibility for standards, funding, and sustainability (Kaur, Desai). Implement multi‑layered evaluation pipelines that feed back into model improvement, thereby aligning academic rigor with industry practicality (K.R., Ghosh).
Thought Provoking Comments
AI systems have a very short shelf life – sometimes only three to six months – because of the immense diversity of people, languages, and cultures. Unlike static machines, AI must be continuously upgraded and inclusion has to be built into the design.
Highlights the fundamentally dynamic nature of AI compared to traditional technology and stresses that diversity and inclusion are not add‑ons but core design constraints, reframing how stakeholders should think about sustainability.
Set the stage for the entire discussion, prompting participants to consider continuous data collection, feedback loops, and policy mechanisms rather than one‑off solutions. It led directly to Nihar’s question about treating datasets as Digital Public Goods and to Amitabh’s later elaboration on primary vs. improvement corpora.
Speaker: Amitabh Nag
Voice AI is a gateway to public services for millions with limited literacy; when it works in local languages it enables inclusion, but when it doesn’t it can reinforce exclusion. This is a narrative of cooperation, not competition, between India and Germany.
Frames voice technology as a social equity issue and positions the Indo‑German partnership as a model of collaborative, responsible AI, shifting the conversation from technical details to broader societal impact.
Reoriented the audience toward the ethical stakes of the work, reinforcing Harleen’s policy pillars and encouraging the panel to discuss inclusion not just as a technical challenge but as a shared value.
Speaker: Ariane Ahildur
Our policy framework rests on four pillars: treating foundational datasets as public goods, institutionalising sustainable open‑source infrastructure, building open and representative models, and strengthening responsible deployment.
Provides a concrete, actionable structure that bridges high‑level policy with developer‑level practice, making the abstract goals of inclusion and responsibility tangible.
Guided the subsequent panel questions, especially Nihar’s probing about data‑as‑public‑good and Amitabh’s discussion of continuous data creation. It also gave a reference point for the legal and evaluation debates that followed.
Speaker: Harleen Kaur
Instead of brute‑force data collection across every dialect, we can start from intrinsic linguistic families (Indo‑Aryan vs Dravidian) and then strategically collect data that covers common acoustic bases, using smart trade‑offs to reduce cost and time.
Introduces a novel, linguistically informed methodology that could dramatically improve efficiency of dataset creation while preserving coverage, challenging the prevailing assumption that more data is always better.
Shifted the conversation from quantity to strategic quality, prompting Amitabh to elaborate on primary vs. improvement corpora and inspiring later discussion on evaluation metrics that respect linguistic variation.
Speaker: Prasanta Ghosh
Data sets sit at the intersection of privacy and copyright law. Even publicly available data may be copyrighted, so we must verify provenance, use privacy‑enhancing technologies, and maintain rigorous documentation from the start.
Brings a critical legal dimension that many technical participants overlook, emphasizing that compliance is not an afterthought but a design requirement.
Prompted the panel to consider legal safeguards alongside technical solutions, influencing Amitabh’s later remarks about trust and leading Thomas to later discuss how documentation can reduce subjectivity in legal disputes.
Speaker: Thomas J. Vallianeth
Human transcribers rarely agree word‑for‑word; therefore, using word error rate alone is insufficient. We need multi‑layered evaluation, possibly returning multiple hypotheses, and linking ASR performance to downstream task outcomes.
Challenges the dominant evaluation paradigm, exposing its inadequacy for Indian linguistic diversity and proposing a more nuanced, application‑centric assessment approach.
Catalysed a deeper dive into evaluation methods, leading Amitabh to argue that audience acceptance matters more than absolute scores, and setting up the later call for a national leaderboard.
Speaker: Prasanta Ghosh
Evaluation should be judged by whether the audience understands and accepts the output, not by ranking models as first, second, or third. Different contexts (court, meeting) demand different levels of purity and tolerance.
Reframes evaluation from an objective ranking to a user‑centric acceptance model, highlighting the contextual nature of ‘accuracy’ in real‑world deployments.
Steered the discussion toward practical deployment concerns, resonating with Kritika’s focus on scalable infrastructure and prompting Thomas to discuss trust‑by‑design as a way to manage subjective judgments.
Speaker: Amitabh Nag
Legal disputes will increasingly involve subjective judgments about AI outputs. Building trust through upfront safeguards, thorough documentation, and privacy‑enhancing measures can reduce the need for courts to arbitrate nuanced cases.
Offers a forward‑looking solution that links technical governance with legal risk mitigation, acknowledging the evolving nature of AI jurisprudence.
Provided a concluding bridge between the technical, policy, and legal strands of the conversation, reinforcing the earlier call for holistic documentation and influencing the final consensus on needing more workshops and a unified evaluation framework.
Speaker: Thomas J. Vallianeth
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a generic launch event to a deep, interdisciplinary exploration of voice AI in India. Amitabh’s opening remark about AI’s fleeting shelf‑life framed the need for continuous, inclusive data pipelines, which Harleen then codified into a four‑pillar policy. Prasanta’s linguistic‑family approach and critique of word‑error‑rate evaluation introduced strategic efficiency and methodological rigor, prompting the panel to rethink data collection and performance metrics. Ariane’s emphasis on inclusion and cooperation set a moral compass, while Thomas’s legal analysis anchored the conversation in compliance and trust‑by‑design. Together, these comments redirected the dialogue toward user‑centric evaluation, sustainable open‑source ecosystems, and proactive legal safeguards, culminating in a consensus that future progress will require coordinated workshops, national evaluation standards, and a holistic, trust‑engineered approach.

Follow-up Questions
What mechanisms are needed to continuously create and facilitate digital public good voice datasets while ensuring trust and safety, and can a data‑flywheel model be established?
Understanding sustainable data pipelines is crucial for keeping voice AI models up‑to‑date and trustworthy, especially given rapid changes in language use.
Speaker: Nihar Desai
What are the current gaps at the research and academia level in designing inclusive datasets that lead to better downstream applications?
Identifying academic shortcomings will guide targeted research to improve dataset representativeness and model performance across diverse Indian languages.
Speaker: Nihar Desai
Can you provide concrete examples of how initiatives (e.g., ResPin) balanced inclusivity with model‑building constraints and other factors?
Real‑world case studies illustrate practical trade‑offs and inform best‑practice guidelines for future projects.
Speaker: Nihar Desai
What challenges have industry practitioners faced regarding inclusivity at the dataset layer or application layer, and how have they been addressed?
Capturing industry pain points helps align research, policy, and tooling with real deployment needs.
Speaker: Nihar Desai
How can innovation in speech models and datasets be balanced with legal caution concerning copyright, privacy, and data governance?
Balancing rapid development with compliance is essential to avoid legal risks while fostering open innovation.
Speaker: Nihar Desai
What day‑to‑day challenges arise in evaluating ASR systems for Indian languages, and how might these challenges be resolved or mitigated in the future?
Improved evaluation methods are needed to reflect linguistic variability and ensure reliable performance metrics.
Speaker: Nihar Desai
From a legal standpoint, how should subjective evaluation outcomes be handled in procurement decisions, dispute resolution, and evidentiary standards for AI outputs?
Clarifying legal treatment of subjective AI assessments will support fair contracting and judicial review.
Speaker: Nihar Desai
What open points, arguments, or calls to action should the ecosystem prioritize to advance speech models and datasets?
Gathering community‑wide input can shape future research agendas, standards, and collaborative initiatives.
Speaker: Nihar Desai
What specific safeguards and end‑use considerations are required when deploying open‑source speech datasets and models?
Tailored safeguards ensure that open resources are used responsibly across varied applications such as hate‑speech detection versus translation.
Speaker: Nihar Desai
How can India develop a national‑level evaluation framework and annual benchmarking process for voice technologies across its many languages and dialects?
A coordinated national benchmark would drive continuous improvement and comparability across stakeholders.
Speaker: Nishant (participant)
Should a unified leaderboard (e.g., under Bhashani) be created to evaluate models across languages and dialects, and how should it be structured?
A single, comprehensive leaderboard would foster healthy competition and collaborative progress in multilingual ASR.
Speaker: Prasanta Ghosh
What multi‑layered evaluation metrics (beyond word error rate) are needed to capture both objective and subjective performance of ASR systems in Indian contexts?
Current metrics miss linguistic variability; richer evaluation would better reflect real‑world usefulness.
Speaker: Prasanta Ghosh
What documentation practices, privacy‑enhancing techniques, and governance structures are required to ensure legal compliance throughout the voice data lifecycle?
Robust documentation and privacy safeguards are foundational for trustworthy, law‑compliant AI ecosystems.
Speaker: Thomas J. Vallianeth
How can sustainable open‑source infrastructure and governance models be established to support the long‑term viability of the voice technology ecosystem?
Ensuring funding, maintenance, and community stewardship is vital for open resources to remain usable over time.
Speaker: Harleen Kaur
What policies are needed to treat foundational speech datasets as public goods, including mechanisms for funding, convening, and supporting non‑commercial languages?
Public‑good treatment can unlock resources for under‑served languages and promote equitable AI access.
Speaker: Harleen Kaur
How can Responsible AI (RAI) practices be embedded throughout the development lifecycle, including community consent, privacy, and post‑deployment monitoring?
Integrating RAI at every stage reduces bias, misuse, and builds public trust in voice AI applications.
Speaker: Harleen Kaur

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Driving Enterprise Impact Through Scalable AI Adoption

Session at a glanceSummary, keypoints, and speakers overview

Summary

The town-hall convened to examine how the abundance of AI-generated knowledge creates new dilemmas for learners and educators [1][3]. Panelists argued that, while AI makes information instantly available, the real scarcity is now human attention and the ability to judge trustworthiness [40][43][46-47]. Hugo highlighted Herbert Simon’s “poverty of attention” and warned that large language models often provide answers without explaining their sources, eroding trust [40][46-47]. Aidan warned that easy access to surface-level answers fosters a false sense of deep mastery, making rigorous testing essential to verify what learners truly understand [48][50-53]. Debbie reported that the audience’s votes favored critical thinking and sustained attention as the most needed skills in this environment [65-68].


Hugo described Udemy’s evolution from a massive catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform that can assess individuals and personalize feedback [73-78][91-107]. Cohere, explained Aidan, builds enterprise-grade LLMs that stay within a client’s security perimeter and helps organisations shift workers from performing tasks to orchestrating AI agents [110-118][119-121]. Both speakers agreed that AI can augment but not replace teachers, citing the Bloom two-sigma finding that one-on-one coaching dramatically outperforms large classes and that AI could scale such personalised tutoring [149-157][184-188].


They also stressed the need for explainability, noting that future models must provide reasoning traces or retrieval-augmented citations so users can audit answers [346-353][374-382]. Hugo warned that reliance on black-box models could diminish human agency and ethical guardrails, urging societies to retain the ability to question and validate AI output [346-353][369-370]. Aidan added that while reasoning-enabled models are emerging, they remain brittle, so exposing their chain-of-thought is crucial for trust [374-376][386-390].


The panel concluded that education must adapt by emphasizing front-end skills such as asking the right questions and back-end skills like critical evaluation, while leveraging AI for personalization and scalable assessment [274-277]. Overall, the discussion underscored that AI will reshape knowledge delivery, but preserving critical thinking, explainability, and human oversight is essential for effective learning [65-68][311-313].


Keypoints


Major discussion points


Attention, critical thinking and deep mastery are becoming scarce resources in an AI-driven world.


Hugo notes Herbert Simon’s “wealth of information, poverty of attention” and highlights attention and trust as key challenges [40-44][46-47]. Aidan warns that LLMs can give a false sense of deep mastery, making genuine understanding the most at-risk skill [48-53]. Debbie reports that the audience ultimately favored critical thinking and sustained attention as the most valuable traits [65-70].


AI can personalize and scale learning, enabling rapid reskilling and adaptive education.


Hugo describes Udemy’s pivot to an AI platform that uses rapid assessment, role-play simulations and feedback loops to tailor learning to each individual [93-107]. He also cites the “Bloom two-sigma” research and the shift toward bite-size, in-the-flow learning for enterprises [124-141]. Aidan adds that Cohere’s enterprise LLMs focus on secure, on-premise deployment, giving businesses the tools to embed AI into their workforce [110-118].


Rigorous testing and assessment are essential to preserve human judgment and avoid superficial competence.


Aidan stresses that testing without AI tools is the “gold standard” for measuring true understanding [48-53][321-334]. Hugo argues that human teachers remain indispensable as storytellers and mentors, and that AI-driven tutors must augment, not replace, this human element [156-166]. Both panelists agree that without strong assessment, learners can “fake” their way through education.


A tension exists between for-profit ed-tech models and traditional universities, raising questions about the future of degrees and possible unbundling.


Debbie frames the panel as representing “for-profit educational technology” versus the “not-for-profit” university sector [13-15]. Hugo later calls the university degree a “convenient bundle” that may need to be re-examined in light of AI-enabled delivery [408-416] and discusses the need for more adaptable, skill-focused credentials [419-421]. Audience members ask directly about gaps between online platforms like Udemy and accredited colleges [403-406].


Explainability, trust, and agency are major concerns when AI provides answers without transparent reasoning.


Hugo points out that most LLMs do not explain how an answer was derived, threatening trust [44-47]. He later calls for research on explainability and specialized, trusted models [340-368]. Aidan describes emerging “reasoning” models that generate internal monologues and Retrieval-Augmented Generation (RAG) to cite sources, aiming to improve auditability and user confidence [372-383][386-393].


Overall purpose / goal of the discussion


The town-hall was convened to surface and interrogate the “dilemmas around knowledge” that arise as AI makes information instantly accessible. Participants examined how AI reshapes the scarcity of attention, critical thinking, and mastery; explored ways AI can enhance personalized, scalable learning; debated the need for robust assessment and human oversight; and considered the shifting relationship between traditional universities and for-profit ed-tech providers. The ultimate aim was to identify challenges and opportunities for educators, businesses, and policymakers in an AI-infused knowledge ecosystem.


Tone of the discussion


The conversation begins with a formal, inquisitive tone as Debbie introduces the panel and the poll question. As the dialogue progresses, Hugo and Aidan adopt an optimistic, solution-oriented tone, highlighting AI’s potential for personalization and reskilling. Mid-session, the tone shifts to a more cautionary and reflective stance, emphasizing the risks of attention loss, superficial mastery, and loss of agency [40-47][48-53][340-368]. Toward the end, the tone becomes balanced, acknowledging both the transformative promise of AI and the need for rigorous testing, explainability, and thoughtful redesign of educational structures. Throughout, the discussion remains professional and collaborative, with occasional moments of urgency when addressing trust and ethical concerns.


Speakers

Hugo Sarazen – President, Chairperson and Chief Executive Officer of Udemy; expertise in online learning platforms, corporate training, and AI-driven education. [S1]


Debbie Prentice – Professor and Vice Chancellor of the University of Cambridge; expertise in higher-education leadership and the not-for-profit education sector. [S2]


Audience – Various participants representing industry and academia (e.g., Anna Van Niels, Director of the Livium Trust; Nathaniel, founder of an education company in Australia; Pranjal Sharma, author and analyst; Kian, CEO of Workera). Roles/titles as noted. [S3][S4][S5]


Aidan Gomez – Co-founder and Chief Executive Officer of Cohere, an enterprise AI company; expertise in large language models, AI product development, and enterprise AI deployment. [S6][S7][S8]


Additional speakers:


– None


Full session reportComprehensive analysis and detailed insights

The World Economic Forum town-hall opened with Professor Debbie Prentice, Vice-Chancellor of Cambridge, welcoming participants and framing the session as an exploration of “dilemmas around knowledge” that have persisted since the invention of schools and libraries but are now amplified by AI-driven instant access to information [1-4]. She introduced the panel: Aidan Gomez, co-founder and CEO of Cohere, an enterprise AI firm building large language models (LLMs), and Hugo Sarazen, President, Chairperson and CEO of Udemy, a global online-learning platform [5-10]. The moderator highlighted the diversity of perspectives – for-profit ed-tech versus not-for-profit university – and invited the audience to engage via the Slido app and the hashtag #WEF26 [12-22].


Poll question & live results


The first poll asked participants which resource is becoming scarcest in a world of instant AI answers, offering options: sustained attention, independent judgment, deep mastery, motivation, and trust [22-28]. The live results showed critical thinking receiving the most votes, with sustained attention a close second [65-70].


Panel responses


Hugo Sarazen argued that attention is the most pressing shortage, invoking Herbert Simon’s insight that “when you have a wealth of information, you have a poverty of attention” and warning that LLMs often provide answers without explaining their provenance, thereby undermining trust [40-44][46-47].


Aidan Gomez countered that the greatest risk is a false sense of deep mastery: learners can obtain surface-level responses that feel comprehensive, so rigorous testing that removes the tool is essential to verify what the human actually knows [48-53][321-334].


Debbie Prentice rejected all five poll options, noting that without cues about difficulty or interest students cannot gauge their own understanding [54-61]; she then pointed to the audience vote, which favored critical thinking (and sustained attention) as the leading choice [65-70].


Udemy’s evolution (deep-dive)


Hugo described how Udemy has moved from a catalogue of 250,000 courses and 80 million learners to an AI-driven reskilling platform. By assessing each learner quickly, breaking courses into adaptive pathways, and providing real-time feedback loops, including role-play simulations (e.g., sales-pitch practice) and automated scoring rubrics, Udemy can keep users engaged longer than generic courses [73-78][91-107][199-207][144-146][215-217]. He also referenced Bloom’s two-sigma problem, noting that one-on-one tutoring yields a two-sigma improvement over classroom instruction but has been economically infeasible to scale, a gap AI can now begin to fill [149-157][184-188].


Cohere’s approach (deep-dive)


Aidan explained that Cohere supplies enterprise-grade LLMs that run inside a client’s security perimeter, ensuring no data leaves the organisation while enabling workers to shift from performing tasks to orchestrating AI agents [110-118][119-121]. He highlighted recent advances: an “internal monologue” or chain-of-thought reasoning that structures problem-solving before output, and Retrieval-Augmented Generation (RAG) that cites external sources (e.g., the Cambridge library) to improve auditability and user confidence [372-383][386-394]. Both panelists agreed that explainability is crucial; Hugo called for specialised, trusted models and research into transparent reasoning, while Aidan stressed exposing chain-of-thought and source citations as a technical route [340-353][374-383].
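The RAG pattern described above can be sketched in miniature. This is a hypothetical illustration of the general pattern, not Cohere’s API: the corpus, the word-overlap relevance score, and the quoting "answer" step are all toy stand-ins, but the key property (the output carries document IDs so a reader can audit its sources) is the one the panel emphasised.

```python
# Hypothetical sketch of retrieval-augmented generation (RAG): retrieve
# supporting passages first, then answer *with citations* for auditability.
# All names and data here are illustrative stand-ins.

corpus = {
    "doc1": "Bloom's two-sigma finding: one-on-one tutoring outperforms classroom teaching.",
    "doc2": "Retrieval-augmented generation grounds model answers in cited source documents.",
}

def retrieve(query: str, k: int = 1):
    # Toy relevance score: count of shared lowercase words with the query.
    def score(text):
        return len(set(query.lower().split()) & set(text.lower().split()))
    ranked = sorted(corpus.items(), key=lambda kv: score(kv[1]), reverse=True)
    return ranked[:k]

def answer_with_citations(query: str) -> str:
    hits = retrieve(query)
    # A real system would pass the retrieved passages to an LLM; here we simply
    # quote them, keeping the doc IDs so the answer's provenance is visible.
    return " ".join(f"{text} [{doc_id}]" for doc_id, text in hits)

print(answer_with_citations("what is retrieval-augmented generation"))
```

The design choice the panel highlighted is that citations turn an unverifiable answer into one a user (or court, or auditor) can check against its sources.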


Discussion on attention, personalization, assessment, and detection


The conversation returned to attention scarcity, with Hugo emphasizing AI-driven personalization (quick learner assessment, adaptive pathways, and instant feedback) as a way to mitigate the deficit [93-107][144-146][215-217]. Aidan reiterated that the “gold standard” remains testing without AI to gauge true retention, but also recognised that proficiency with AI tools is itself a skill that should be evaluated with the tool in the loop [171-182][321-334]. He warned that current AI-text detectors are unreliable and described a technique of embedding subtle cues in model outputs to enable more robust detection; both panelists agreed that more reliable detection mechanisms are needed [321-330].


An audience member raised a paradox in companies: senior professionals can judge AI output while junior staff cannot, creating concerns about future job security and underscoring the need to train the next generation in critical evaluation of AI [460-470].


Audience Q&A


Motivation: Anna Van Niels asked how AI can sustain motivation without a human teacher; Hugo answered with AI-driven role-play and feedback loops that mimic gym-style repetition to keep learners engaged [194-207][215-217].


Physical classrooms: Nathaniel from Australia queried AI’s role amid a social-media ban for under-16s; Aidan argued AI should be taught as a calculator-like tool with safeguards, while Hugo stressed teaching students to ask the right questions and develop critical judgment [217-250].


Applied knowledge: Pranjal Sharma highlighted the gap between academic credentials and applied knowledge; Aidan noted AI can accelerate programme creation but that skill mapping must remain human-led [247-266][254-266].


Degree bundle: Hugo described the university degree as a “convenient bundle” of credential, rite of passage, and research, suggesting AI-enabled delivery may prompt a re-evaluation or unbundling of these components [408-416]; Debbie defended the broader mission of universities (fostering critical thinking, deep mastery, and research) while acknowledging graduates may need additional AI-supported skill development for the workplace [425-426][267-272].


Closing remarks


The panel concluded that AI will irrevocably reshape knowledge delivery, but effective education in the AI era will require:


– preserving and cultivating human attention, critical thinking, and self-knowledge;


– deploying secure, enterprise-grade LLMs that can be personalised and audited;


– maintaining human teachers as mentors and storytellers; and


– establishing robust, dual-track assessment regimes that combine tool-free testing with AI-enhanced simulations.


Consensus highlights


– Attention and critical thinking are the most endangered cognitive resources.


– AI-driven personalization can help alleviate attention scarcity.


– Explainability and trust are non-negotiable for widespread adoption.


– Human educators remain indispensable, with AI serving as an augmentative tool.


These points reflect the transcript’s emphasis on trust, explainability, and human agency as central pillars for responsible AI integration in education [40-44][65-70][340-353][184-188][169-176].


Session transcriptComplete transcript of the session
Debbie Prentice

Good afternoon, everyone, and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about, which is namely dilemmas around knowledge. This has been a topic for us since schools were first invented, libraries were first invented, and it’s still with us today. It’s extremely relevant today in an age in which AI is changing, making knowledge available broadly to everybody all the time. But it doesn’t mean that there aren’t still dilemmas around knowledge, and we’re going to probe these today. I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panelists for this session.

So we have Aidan Gomez, who is the co-founder and chief executive officer of Cohere, an enterprise AI company developing advanced language models for use by business. And we also welcome Hugo Sarazen, who is president, chairperson, and chief executive officer of Udemy, which provides a wide range of business and leadership development courses, including AI courses, to businesses and organizations around the world in fields such as financial services, higher education, government, manufacturing, and technology. We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise. And we’re going to bring the audience in early and often, so I hope that you’ll all participate with us. We, as panelists, come from very different perspectives.

Aidan and Hugo run very successful businesses selling a product. They are from the for-profit educational technology sector, and I’m from the not-for-profit sector. So there are different pressures, different opportunities, different challenges that we face in this space. Before we get started with… Before we get started with our panel discussion, I’d like to remind the online audience that… If you are sharing with us through your social channels, you should use the hashtag, hashtag WEF26. And whether you’re joining online today or here in person, and it’s great to see so many of you here. Thank you so much for coming. Please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app.

Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource? Okay, the answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or. That could be and. You can choose as many of these as you. as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists, what would you say? So you can see the answers on the screen.

What would you say, Hugo?

Hugo Sarazen

Well, I think it’s a complicated question, and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power. Our countries fought for that. And we also had experts that built knowledge over time, but very few polymath. Very few. Those ones that were were very, very, very important. Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions, millions of polymath. And that becomes a democratization. of that knowledge. The problem is, and there’s an amazing quote from Herbert Simon, when you have a wealth of information, you have a poverty of attention.

And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process, what’s the role of traditional institution in changing, what’s the role corporation need to, and what individual needs to do. So I think attention is one big component. The second is a lot of, when you go to LLM and AI and you ask for a question, it will give you an answer. It will feel very comfortable with that answer. It doesn’t explain. Explainability in AI is a whole field, a whole domain, and most of these LLMs don’t give you that.

So if you have a society that begins to rely on products that give you an answer but don’t tell you where that answer came from, how do you learn, and what do you have in terms of trust? So I think the trust piece is equally important. I’ll stop there; we could go well further.

Aidan Gomez

Yeah, I was looking at the poll up there, and for whatever reason the first one that came to me was deep mastery, which seems to be the most unpopular choice. When you exist in a world where it’s so fast and easy to get answers to whatever question you might have, or to get a very surface-level answer to even a complex question like "how does quantum mechanics work?", it’ll give you a four-paragraph response. But that’s not deep understanding of the subject matter. So I think LLMs and chatbots can fool you into thinking that you understand something when you don’t, and I view that as a core risk as we integrate these LLMs into an education environment: this false sense of mastery or understanding.

We can discuss the different solutions to that. I think that testing is essential: the idea that you need to take away the tool and see what the human alone understands and has retained. To assess depth, you have to take away those tools. That, from my perspective, is what’s most at risk.

Debbie Prentice

That’s interesting. My answer is a variant on yours. I, of course, wanted to reject all five, but I think that’s because of where I come from, the university sector. I wanted to say self-knowledge for the learner. It’s part of what you’re saying: you don’t know if you’ve mastered it, you don’t know if you’re interested in it, you don’t know if you get it. So much of what you learn comes from what is difficult and what is compelling. If those cues are no longer actually useful cues for self-understanding, how will you even know? But that’s my answer, anyway.

So we can see what the... whoops, it went away. I think critical thinking was the one that won out at the end; it looked like that was what the audience preferred. We can keep coming back to this, but I want to use this as a jumping-off point. Oh, there we go. Okay, critical thinking and then sustained attention; they were neck and neck for most of the time, and then trust, and then deep mastery. That’s interesting. So I want to talk a little bit about what each of you does. We can start with you, Hugo. Tell us about Udemy.

Hugo Sarazen

So Udemy is a 15-year-old company that, at the time, did a pretty cool thing in introducing online learning. It was a great innovation that changed accessibility and the cost of reaching millions and millions of people, and it created a creator economy around that. We now have 250,000 courses and 80 million learners on a regular basis. We serve 17,000 large enterprises. We have 85,000 instructors who come to this marketplace to offer their wares; they’re very deeply committed, they know things, and they want to share them with the world. About 40% of our revenues are in the U.S.; the rest is around the world, so we operate in tons of languages, 46-plus. And the funny story is, I’ve only been in the role for less than a year.

At my first town hall, and for the people who may be listening online who were working on this, I came in and said: we’re going to exit online learning. It was a wonderful innovation and it did a bunch of great things, but it doesn’t solve the problem of today. With AI, we can do so many different things, so I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future.

And we can talk about that. I don’t want to take too much time, but there are a lot of ways you can use AI to do some of the things you were suggesting: to help build mastery, to do assessment, to use AI role play to immerse people. And it also does the thing that I think is so, so important. In traditional online learning, and actually traditional learning generally, you’re an instructor and you teach to the average, right? You create your curriculum and you think you’re going to hit most of the people. You can’t cater to the super fast, and you can’t cater to the super slow. The same is true of online learning.

And then different people have different starting points, and we don’t have an easy way to accommodate that. Now with AI, you can do a quick assessment, you can break apart the class, you can have feedback loops and reinforce learning in a very, very powerful way. I think that’s one of the things that’s going to emerge from using AI to reskill the workforce: it’s going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.

Debbie Prentice

Thank you. Aidan?

Aidan Gomez

Yes, so Cohere builds large language models. We’re one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We’re focused purely on the enterprise side of the house, so we work with businesses to put those models to work inside the organization. We give the models access to the internal data and systems that the humans have access to, and then we work with our customers to teach the workforce to shift their role from individually doing the work to managing a team of these models, or agents, that carry out the work. Our big differentiator is on the security side: there’s no data exiting our customer’s perimeter.

Instead, we send all of our models and software to them, and they keep it self-contained.

Debbie Prentice

So you have certain customers who will only subscribe to you, right?

Aidan Gomez

Yeah. Certainly critical industries: financial services, telco, healthcare, and then, of course, government applications as well. Anything that’s a national security concern, and arguably education is within that remit, is a place where we do extremely well.

Debbie Prentice

That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and online education to today’s AI-driven learning?

Hugo Sarazen

I think a few things. The first is that if you look at the traditional learning processes and methods we had, there was a void. That’s why online learning took off, and that’s why there’s a whole industry. It addressed a set of problems around getting to skills, specific skills, getting to certification, and helping organizations reskill. That was a very, very powerful thing. What is now becoming much more of a priority: in the last six months, I spent an enormous amount of time on this, and I spoke to 400 CHROs and heads of learning and development at large enterprises. The pattern I saw is that they had an enormous proliferation of tools and things that were bought during the pandemic.

During the COVID era. And very few could explain the ROI. How do you measure the ROI of learning? It’s a really good question, and everybody kind of defaulted to: did they take the class? Did they complete the class? Hours of learning. As a business leader, that’s not particularly helpful. And it gets even worse with certifications. If you got certified in Google Cloud or AWS or cyber-something two years ago, as a business leader I want to know: are you current? Are you relevant today? So I think the arc in the enterprise is now moving toward in-the-flow-of-work learning, done at bite size, done in an adaptive way (and we can come back to what adaptive means), and with an ROI: an ability to measure what skills people are deploying in real time.

So you’re now beginning to create a workforce management tool that is powered by a learning operating system.

Debbie Prentice

So, Aidan, you said that you were not as worried about sustained human attention as you were about some of the others. How does Cohere solve the attention problem?

Aidan Gomez

Well, I don’t know if Cohere solves the attention problem. It’s definitely a concern; there are lots of pressures on our attention span. I think social media and short-form content are driving a lot of that. I’m certainly on the receiving end of it: after 30 seconds, because of TikTok, my attention span ends and I need to talk about something else. And there’s just the way that we do business now, in these short 30-minute meetings where you completely swap context. So I think those are difficult challenges, not related to AI, that are still applying pressure on human attention span. But it has a pretty strong consequence for how people learn, and how students can learn, when they’re constantly being distracted and when they need to sit with material over time.

I think AI can perhaps assist in resolving that through its ability to personalize the experience to the individual and engage them more effectively. If you have a generic education offering, which bores some part of the population and excites the other, you’re underserving the population that gets bored. But if we can have a very targeted, scalable approach for each individual, giving them something engaging and exciting, whether they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we otherwise would. So AI might be part of the solution as opposed to the source of the problem.

Debbie Prentice

Hugo, does your vision of AI comport with that?

Hugo Sarazen

It completely matches. There’s a well-known piece of research from the ’80s by a University of Chicago professor: Bloom’s two-sigma problem. The research looked at the ability to learn with one-on-one coaching; it was two sigma higher than the classroom. But the economics of doing that weren’t there. That’s why we have these big classrooms, and why there are even bigger classrooms for first years, which doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI you can personalize the experience, you can adapt it, and you can create feedback loops that a professor cannot today. With 40 students, you cannot easily pick up on who’s not following.

Some teachers are amazing, and they have the ability to do incredible things. But now you have the ability to have that feedback. So I think we’re going to see a lot of AI expert tutors and coaches that will have context, and will have been trained on a body of knowledge that is hopefully trusted, hopefully accurate, and will help in the way that you like to learn. If you’re an auditory learner, we’ll give it to you that way; if you’re visual, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view. So we’re going to go to questions from the audience in just a second.

So start thinking about your question. I’m just going to ask one more question of our panelists myself, which is where do humans fit in in this brave new world of AI -based education? I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?

Aidan Gomez

I think they’re the customer. They’re the ones we’re serving with this technology, and so we need to create the best possible product for them. If we just do surface-level education that’s very confirmatory, "oh yeah, you’ve got it, great," a bit sycophantic, then they won’t be effective in the real world when they actually enter the job market. So there’s a burden on us as product creators to create the most effective product to teach people skills and give them knowledge. And I think AI is actually an incredibly effective tool for that. But I do still believe it’s a tool, like a calculator.

It’s something you can lean on to give you faster, more thorough answers. But we still need to ground ourselves in the human without the tool. And so testing, which has always been important, of course, becomes absolutely critical now, because you can fake your way through an education system much more easily. Having very strict testing regimens is going to be essential.

Hugo Sarazen

I have a variation on this. I do think the teachers, the instructors, are partly the customers, but I also think they need to be in the loop. They’re amazing storytellers. If I ask anybody in this room, "who was your favorite teacher in high school?" and I pause for five seconds, there’s somebody in your mind right now. What was special about that person? You cannot replicate that, but you can augment it. You can make that person now able to teach you something they were not expert in. My favorite teacher in high school was a physics teacher. I loved the way he presented, I loved the way he engaged, and it was so motivating. My chemistry teacher was not that. But now I can augment with AI and have the voice, and not just the voice but the way he thought, the way he presented information, applied to a different topic.

And I think that gets pretty exciting as well. You may finally understand chemistry. I may finally understand chemistry. I stayed away from chemistry because of that. But physics I love.

Debbie Prentice

Okay, I want to open up to questions from the audience. I will call on you the old-fashioned way, if you raise your hand. Oh, you have to, sorry, you have to speak into my ear.

Audience

Anna Van Niels, director of the Livium Trust. I guess learning is a bit like working out: it’s got to hurt to be effective. How do you think AI-enabled tech of various kinds can help with that motivation issue? You’ve talked about the teacher being the one that motivates, but in a lot of the systems we’re talking about, in the workplace et cetera, you’re not going to have that human in the loop. So can we do things with AI and tech that could prompt that?

Hugo Sarazen

Yeah, I’m going to offer a few suggestions, and this is not the future: it exists today. You can do AI role-playing in a way that makes you go through the learning process. I’m going to use a business example. If you’re a new salesperson and you have a new product that you need to sell, you can load the specs of that product into an AI role play and practice selling to a person. There will be a rubric against which you’re scored, and we’ll discover whether or not you are competent at selling the product you’re responsible for. So that’s a business example.

I can do the same thing in a call center. We work with one of the largest call center outsourcers; they need to onboard 20,000 call center agents every month. That is incredibly complicated. But now you can load the most common error causes, the most common tickets, the product specs, and instead of taking three weeks to onboard somebody through the process of learning and experimenting, you can do role plays and accelerate that learning with a lot of practice. It’s simulation. So that’s one powerful example. The other is that AI can give you feedback and monitor the progress you’re making, in a way that brings you back to that point in the gym where you’re struggling with whatever exercise you’re doing.

We’re going to make you do that exercise more and more, and get that repetition in, in a way that addresses the gap that you have.

Audience

Hi, I’m Nathaniel. I run an education company in Australia. As a region, Australia has an interesting relationship with technology: as many of you may know, we’ve just recently had a social media ban for young people under 16. In a similar vein, we don’t really have a good consensus around the role of AI. So my question is, what do you believe the role is for AI in physical classrooms? And what would you say to people who might be on the side of banning versus not banning it?

Aidan Gomez

Yeah, I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. A duty of the education system now is also to teach people how to use this AI, how to engage with it, how to most effectively use that tool. So it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch and it can be used to cheat, so we have to come up with ways to ensure that students aren’t misusing it or using it in ways that are unproductive to their learning. I’m excited to hear your answer.

Hugo Sarazen

I’ve got a two-part answer. The first is that in any business process or any endeavor, you have the problem statement, asking the right question; you have the solving; and then you have the quality assurance at the back. It’s a feedback loop that you go through in a circle all the time, and education is no different. What AI does well is that middle part. It doesn’t do a whole lot on the front end and the back end. So what we need to teach young students and adults is how to ask the right question. The critical thinking, I love that it came out at the very top. Super, super important. But, as you said, the calculator is a calculator.

The fact that I can’t do multiplication tables all the way to 100 is not that relevant for my day-to-day job. But the fact that I can be critical in my thinking, that I can summarize and contextualize, those are the skills you want. Second part: for those who are curious (I have no relationship with them, I am just fascinated), there’s a school in the U.S. called Alpha School, and they’ve got a really powerful model. They are using AI, they are encouraging students to use AI, and they are demonstrating (I’m going to get the exact stats wrong) something like twice or three times the learning in half the time. Then in the afternoon the kids go learn how to be a civic leader, or a leader in all sorts of other contexts, instead of spending all their time the way you historically would have, learning various dates. It’s not that relevant to know the dates of specific things, but it is relevant to understand the context of those events, and I think that’s where we can focus a lot of the effort.

Audience

Thank you. Terrific topic to be discussed at Davos. I’m Pranjal Sharma, I’m from India, I’m an author and analyst. We’re looking at a lot of the micro pieces, but I’d like to focus on the macro. We have a situation today where we’re all skilled up but with nowhere to go, right? Last year, I think the ILO says, 7 million fewer jobs were created, not to mention the existing jobs that disappeared. So there is a cry from the industry. Firstly, they don’t know who to hire, why to hire, and what to hire for, and they don’t even know what credentials to test. The second part is that there’s a huge disconnect between what they want and what academia is offering.

Plus, the concept of a degree shouldn’t exist as it does, and even continuous learning in terms of applied knowledge is missing. So I think the core phrase to use here is applied knowledge: how do you create information for a person to be able to earn a livelihood, irrespective of white, gray, or blue collar? That’s the gap: applied knowledge delivered in the right way to the right people at the right time.

Aidan Gomez

From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue we need to figure out how to solve. I think AI can be part of speeding up the delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past. The process of scaling up educational infrastructure to meet a shift in market demand has historically been extremely slow and laborious.

But with AI, we’re able to create programs much faster. The models are infinitely scalable, they’re always awake, 24/7, and they never get annoyed at the student. So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills we need. But I think the issue might be in identifying the skills that we need, and that’s still going to have to come first, from us: the humans, the business leaders, the policymakers. So that might be the core constraint. We need a direction to be set before we start building the solution.

Debbie Prentice

I think, too, what I would say is that universities aren’t necessarily teaching to what businesses need. We’re teaching things that we believe are fundamentally important, and I would defend that. We’re teaching critical thinking, we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, when they actually really need to have a go and learn these skills. They may need additional skills when they go out into the workplace, and that, as far as I’m concerned, is what the kinds of products you’re talking about are for.

Audience

Good, thank you. Let’s go back to critical thinking, because now in the university the students widely use AI assistants and get instant answers.

In that case, how can we teach them to increase their capability for critical thinking, to apply factual checks, logical checks, scientific checks, and ethical checks to the instant answer they got from models?

Hugo Sarazen

The middle part is a foregone conclusion: the AI will outdo the human. So where we can be competitively differentiated versus the AI is on the front end and the back end. We need to adapt the curriculum to make sure that people are asking the right questions with the right context. It is critical thinking, but we need to expand it, and we need a better way to evaluate the level of critical thinking these students have when they hit the workforce, so that you can evaluate. And then the same on assessing. I mean, AI is marvelous right now; it generates code like there’s no tomorrow, but it’s mostly garbage, and we have bottlenecks in quality assurance on the back end.

So how do you create the new tools and teach people to have the critical thinking to see whether this is using the right library, the right pattern, the right data? I think that’s one of the core changes that academic institutions, organizations like mine, and individuals need to make: as you do your self-development, you need to really lean into this ability to ask the right question, because in the middle part you don’t have a competitive advantage. You will be outgunned. And the thing that is even more crazy: historically, people did PhDs. I have a PhD. I went super deep on one little topic and got buried somewhere in a sinkhole.

And it took my entire body of effort to get there. To be a polymath is very hard. I know nothing about chemistry, nothing about biology; psychology, my dad did, so maybe something rubbed off on me. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get good at the front end and the back end.

Aidan Gomez

Yeah, I was going to add another thing, which is that teaching is a skill, in the same way coding is a skill or doing math is a skill. So it’s a core capability that we as model developers need to invest in. And it’s not something that is easily benchmarked, or accurately tracked at the moment. But the more this rolls out, and it’s already in the hands of every student on the face of the planet, it’s going to become imperative that we’re able to track the performance of models on teaching tasks, to ensure that they’re actually effective, and to improve that over time. At a technical level, that is just not done presently.

I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, biology, et cetera.

Audience

All right, it happens from time to time; I think the psychology is rubbing off well. When you say AI is a polymath by design, it’s a brilliant thought, and you articulated it very well. It also means that, by definition, humans cannot compete. So we basically have to end the session and say that doom is nigh.

Hugo Sarazen

Well, I don’t think so; I’m more optimistic. The polymath thing is real. Again, taking a historical perspective, he who had Leonardo da Vinci on his team had an advantage in building a war machine or a better court or whatever. Now there’s going to be a similar dynamic: whoever assembles these polymath AIs has an advantage. That is a foregone conclusion; that’s why there are all these battles. But I don’t think we can, as the human race, give up the ability to influence. As I think you pointed out at the very beginning, these models typically are not designed, though some of them can be, to explain their reasoning.

So if as a society we begin to rely on this thing that is super facile, that gives us an answer, and we don’t do the questioning, the checking, and the validating, we lose agency over important decisions. I think that is one of the things we need to focus on deeply as a society. It also leads to the guardrails, the ethical questions, and all that other stuff. We need to go there, because in the middle it’s going to come up with answers that will be amazing; it will solve things in biology that, as someone trained in the English language, I don’t know about. It’s going to be pretty wild, but we cannot lose agency around this polymath.

I mean, every data center is going to have hundreds of millions of polymaths in there.

Audience

Yeah, I just want to share a thought. I believe there’s a paradox within companies about this critical thinking. Let me say it this way: we senior professionals know how to judge what the AI is doing. I can ask the AI one day to model whatever, and I can judge the result; my juniors were not able to judge, because they don’t have the experience. So to some extent I could fire them, because I don’t need them anymore thanks to these AI technologies. But maybe there will be a gap: at some point AI can greatly enhance what I do, but if you don’t train the new generation, the juniors, who in the future will be able to do this critical thinking on what AI is doing? I don’t have the answers. Obviously companies need to pursue efficiency and do their best to reduce costs, but I think it’s something we as a society will have to think a lot about.

Debbie Prentice

That’s fair, thank you. We’ve got one here. You were up, right? Yeah, I didn’t just call on you.

Audience

Hi, thank you for your insights. I’m Kian, the CEO of an AI company called Workera. I really like what you said about testing the human. In the world of testing right now, there are almost two camps: one says you can test them with the calculator, the other without the calculator. Overlaid on top of that are the risks of proctoring: understanding who’s cheating, who’s not cheating, and what you can tell about it. So how are you thinking about that idea of testing with or without the calculator?

Aidan Gomez

Yeah, on the cheating question: can you tell whether a piece of text was written by AI? It’s really tough. A lot of the detectors out there are total scams; they’ll say 100% AI even when it’s not used at all. They’re extremely overconfident, with very high error rates on both sides, false positive and false negative. But the answer to that question is: you can. You can insert into language models subtle cues that indicate to the reader that this was written by an AI. Instead of sampling from natural language, the language that I’m drawing from right now, you can sample from a slightly shifted distribution and use certain words much more than any human would.

Then, as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so we language modeling companies do that: we shift the distribution of the language model so that when its text gets read, we have some ability to assign a likelihood that it was generated by our model. So you can detect it to some extent, but many of the tools are scams, and we need to make better tools and put them in the hands of educators more readily. On testing with and without the calculator, I have a pretty strong focus on without the calculator.

I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is the gold standard test of what you have learned and retained. But of course, as I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case, of course, you’re going to need the LLM in the loop.
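
The distribution-shifting detection Aidan describes can be illustrated with a toy "green list" watermark, in the spirit of published LLM-watermarking schemes. Everything here is an illustrative assumption, not Cohere's actual method: a 14-word vocabulary stands in for a real tokenizer, and a biased random sampler stands in for a language model.

```python
import hashlib
import random

# Toy vocabulary standing in for a real tokenizer's vocab.
VOCAB = ["the", "model", "writes", "text", "with", "subtle", "cues",
         "that", "reveal", "its", "origin", "to", "a", "detector"]

def green_list(prev_token: str, fraction: float = 0.5) -> set:
    # Seed a PRNG from the previous token so generator and detector
    # agree on the same vocabulary partition at every step.
    seed = int(hashlib.sha256(prev_token.encode()).hexdigest(), 16)
    rng = random.Random(seed)
    shuffled = rng.sample(VOCAB, len(VOCAB))
    return set(shuffled[: int(len(VOCAB) * fraction)])

def generate(n: int, bias: float = 0.9) -> list:
    # Toy "language model": at each step, sample from the green list
    # with probability `bias`, otherwise from the full vocabulary.
    rng = random.Random(0)
    out = ["the"]
    for _ in range(n):
        greens = green_list(out[-1])
        pool = sorted(greens) if rng.random() < bias else VOCAB
        out.append(rng.choice(pool))
    return out

def green_fraction(tokens: list) -> float:
    # Detector: count tokens that fall in the green list implied by
    # their predecessor. Unwatermarked text should hover near 0.5.
    hits = sum(tokens[i] in green_list(tokens[i - 1])
               for i in range(1, len(tokens)))
    return hits / (len(tokens) - 1)

watermarked = generate(200)
rng = random.Random(1)
plain = [rng.choice(VOCAB) for _ in range(200)]
print(green_fraction(watermarked))  # well above 0.5
print(green_fraction(plain))        # near 0.5
```

The detector never sees the generator; it only recomputes the same seeded partition, which is why the signal survives as a statistical likelihood rather than a certainty, exactly as described above.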

Debbie Prentice

Let me seize the chair’s prerogative here to ask a question, because I’m curious what you would both say to this. In this brave new world of polymaths, of not showing your work and not explaining your answers, what happens to expertise and authority? At Cambridge we have library after library of big books that tell you the truth, or that was always the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer possible, in which there’s not a dictionary, there’s not a truth?

Hugo Sarazen

I’ll start. I think most technologies go back and forth; there’s a pendulum. We’re in the part of the pendulum where bigger is better: we’re throwing everything under the sun, every Reddit quote, into training every large language model. And that is good; it’s going to give you an average answer for an average problem. Over time, though, I think we’re going to come back and say you do need specialized, trusted models, and we need confidence that the right sources were used. I think there will be a space for that. At least I want to hope so: that we’re going to come back to specialized models that are not only RAG, but are defined from scratch with the right intent.

And they don’t need a zillion trillion parameters or whatever; they just need to be trained on the expertise. And then you do need to trust it; that’s going to be incredibly important. I think we also need a lot of research on explainability. Yoshua Bengio at the University of Montreal, one of the Turing Award winners, has been very vocal about this: we need to go back and explain a lot more. These are statistical models. That’s all this is: huge matrices, with weights assigned to different things. This is not a piece of software where you say if this, then that.

This is just statistics, so on average it gives good answers, but it depends on the data. And you need to come back and put in a set of tools to add explainability to the model. There are ways to do it; it’s not yet super advanced, and I think we need to invest in it so that we have the confidence to build trust. And I do think it’s part of the learning question you raised, because if the models are black boxes, you lose the ability to learn from their deduction process, which in fact doesn’t exist; it’s just a statistical model, there’s no deduction. So anyway, those are my two ideas.

Aidan Gomez

Yeah, over the course of the last year, there was a paradigm shift in the type of model that gets used now. We don’t just use direct input-output response models like you were alluding to. Every model now is a reasoning model: before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It’s primitive, it’s a year old, but it’s getting much better. So I think exposing that to the user, showing these chains of thought, this reasoning, is an important part of the solution. And then, like you say, there’s RAG, retrieval-augmented generation, where the model isn’t just drawing on its own knowledge but is making direct and specific reference to external knowledge.

So we can plug it into the Cambridge library, or, since I went to Oxford, the Bodleian, and it can cite directly back from those sources. Both the reasoning traces and RAG provide some degree of auditability, so you can have a little more confidence in the response because you can check its work.
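The retrieval step Gomez describes can be sketched in a few lines. The corpus, the naive word-overlap scoring, and the citation format below are illustrative assumptions, not Cohere’s implementation; a real system would embed the query, search a vector index, and pass the retrieved passages to the model as context:

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# retrieve relevant passages first, then answer with citations
# so the reader can audit where the claim came from.

def retrieve(query, corpus, k=2):
    """Rank passages by word overlap with the query; return the top k."""
    q = set(query.lower().split())
    ranked = sorted(corpus,
                    key=lambda p: len(q & set(p["text"].lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query, corpus):
    """Ground the response in retrieved passages and cite their sources."""
    passages = retrieve(query, corpus)
    context = " ".join(p["text"] for p in passages)
    cites = ", ".join(p["source"] for p in passages)
    return f"{context} [sources: {cites}]"

corpus = [
    {"source": "bodleian/1", "text": "RAG grounds answers in retrieved documents"},
    {"source": "bodleian/2", "text": "citations let readers audit a model's claims"},
    {"source": "bodleian/3", "text": "an unrelated note about library opening hours"},
]
print(answer("how does RAG ground answers in retrieved documents", corpus))
```

The citation trail is the point: because the answer names its sources, you can check the model’s work instead of trusting its parametric memory.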

Debbie Prentice

Just out of curiosity, what’s driving that? What’s driving the need for reasoning?

Aidan Gomez

Because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don’t put the same amount of energy into answering every question, but that was the prior expectation on these models. You would ask, what’s 1 plus 1? And it would immediately respond. You would ask it to prove some unsolved Erdős problem, and it would put the same amount of effort into that as into 1 plus 1. That was obviously wrong. There are some problems we should spend days, weeks, months, years, decades putting effort into solving, and there are others that can be answered instantly.

It’s just a better, more robust intelligence.

Debbie Prentice

That’s fascinating. We have time for one more question. Anything pressing in there?

Audience

Thank you. Yeah, I’m very interested to circle back to the beginning, where we said we have the public sector, a university, and a tech platform in the same room. The question on my mind is that right now, in the U.S. especially, education costs are so astronomically high and prohibitive that the narrative goes, “there’s no point going to university anymore.” In that world, a lot of attention would turn to online education, and I think we’re all very familiar with Udemy. What are the gaps between an online education and an accredited college or an elite college?

Has there ever been customer or market demand for online education to move toward, or imitate, the traditional college experience? Has that ever surfaced as a need? I’m just comparing the gaps there.

Hugo Sarrazin

I’m going to say something maybe controversial, but it’s fun. The university degree is a bundle, a convenient bundle that as a society we chose to create. You learn something, you get an accreditation, you get a degree, and you have a rite of passage: these kids, at a certain moment, leave home and go off. And we bundle that with research, because the same people can pass on their knowledge to others. It is a convenient bundle as a society, and it has worked well for a long time. Oxford and Cambridge are examples of long-standing institutions that had a version of this bundle. It changes over time.

Is it time to revisit whether all of these components need to fit together, given the economics and what AI can do to change the economics of delivery? Maybe. I think the second…

Debbie Prentice

Quickly, please.

Hugo Sarrazin

Yeah, quickly. And the second piece is just adaptability. If the labor market moves this fast, you’re going to put more weight on addressing a specific need for a specific skill. So I think that is a reality, in addition to the potential unbundling of the whole experience.

Debbie Prentice

A good word for the university, then, from the university’s perspective. I’ll just end by saying that I think they currently serve very different functions. Right now, a university does so much more than provide knowledge that it is still worth its weight in gold, and it is gold. But we’ll see how the space develops. With that, I’m getting all kinds of signals from the producers, so we’ve got to end it. Thank you very much, thank you for your questions, and thank you to our panelists.

Related Resources — Knowledge base sources related to the discussion topics (20)
Factual Notes — Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Professor Debbie Prentice is the Vice‑Chancellor of the University of Cambridge”

The knowledge base identifies her as “Professor Debbie Prentice, Vice Chancellor of the University of Cambridge” confirming her role [S1] and also refers to her as “Deborah Prentice, Vice-Chancellor of the University of Cambridge” [S17].

Confirmed (high)

“Aidan Gómez is co‑founder and CEO of Cohere”

Both sources list him as the CEO (and co-founder) of Cohere, confirming his position [S1] and [S17].

Confirmed (high)

“Hugo Sarazen is President, Chairperson and CEO of Udemy”

The panelist is identified in the knowledge base as Hugo Sarrazin, President and CEO of Udemy, confirming the organisational role though the surname spelling differs [S1] and [S17].

Confirmed (medium)

“The moderator highlighted the contrast between for‑profit ed‑tech companies and a not‑for‑profit university”

A source explicitly notes the for-profit nature of Aidan and Hugo’s businesses versus the not-for-profit sector of the academic speaker, matching the report’s description [S25].

Correction (high)

“The report misspells Hugo Sarazen’s surname; the correct spelling is Sarrazin”

Both knowledge-base entries list the Udemy executive as Hugo Sarrazin, indicating the report’s spelling “Sarazen” is inaccurate [S1] and [S17].

Additional Context (low)

“The report refers to the panelist as “Debbie Prentice” while the knowledge base uses “Deborah Prentice””

The knowledge base records her full name as Deborah Prentice; “Debbie” is a common diminutive, providing additional naming context but not a factual error [S1] and [S17].

External Sources (89)
S1
Driving Enterprise Impact Through Scalable AI Adoption — – Hugo Sarazen- Aidan Gomez – Hugo Sarazen- Debbie Prentice
S2
Driving Enterprise Impact Through Scalable AI Adoption — -Debbie Prentice- Professor and Vice Chancellor of the University of Cambridge, representing the not-for-profit educatio…
S3
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S4
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S5
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S6
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — – Aidan Gomez: CEO of Cohere Aidan Gomez: And there were definitely indications that it was a promising architecture f…
S7
Lift-off for Tech Interdependence? / DAVOS 2025 — – Aidan Gomez: CEO at Cohere Aidan Gomez: I’ll be quick. So I think, from our perspective, Cohere is focused on prod…
S8
AI expert Aidan Gomez joins Rivian board — Aidan Gomez, co‑founder and chief executive of AI specialist Cohere, has been appointed to the board of electric‑vehicle …
S9
Pre 6: Countering Disinformation and Harmful Content Online — Valentyn Koval: Democracy is by their very nature. open societies, war of censorship, and bound by bureaucratic inertia …
S10
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Eve Gaumond:Thank you very much. I would like to thank you for inviting me to comment . I would like to build upon three…
S11
IGF 2024 Global Youth Summit — AI technology has the capability to create virtual classroom environments and interactions. This can offer educational e…
S12
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But teachers need support. They need professional development around AI literacy, reasonable class sizes that allow for …
S13
Can AI replace the transmission of wisdom? — However, in all these cases, we must keep the role of AI as a supportive tool, not as a teacher. This is because technol…
S14
AI (and) education: Convergences between Chinese and European pedagogical practices — **Norman Sze** (former Chair of Deloitte China) provided industry perspective on AI’s impact on professional work, notin…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S16
How Small AI Solutions Are Creating Big Social Change — Artificial intelligence | Building confidence and security in the use of ICTs Reliability, Safety & Verifiability
S17
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — This is a critical business challenge as organizations struggle to demonstrate the value and impact of their learning in…
S18
From the Parthenon to Patterns: Ancient Greek philosophy for the AI Era — However, some new possibilities emerge as well. For example, AI platforms such as Chat GPT could simulate dialogue aroun…
S19
Keynote-Bejul Somaia — This is psychologically and strategically insightful because it identifies the mental model that has shaped Indian entre…
S20
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S21
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: All right, thank you very much for the session, and I’m very happy to join this panel. I’m from th…
S22
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — By making models publicly accessible, flaws and issues can be identified and fixed by a diverse range of researchers, im…
S23
Keynote-Rishi Sunak — “And for just a few dollars a month, their rate of learning has doubled.”[40]. “These children are being provided with p…
S24
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S25
https://dig.watch/event/india-ai-impact-summit-2026/driving-enterprise-impact-through-scalable-ai-adoption — And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re …
S26
Generative AI in Education — In conclusion, the summary underscores the need for a balanced integration of GAI in education, advocating for its use a…
S27
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S28
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S29
Main Session on Cybersecurity, Trust & Safety Online | IGF 2023 — Another argument put forth is the crucial involvement of stakeholders outside of government in cybercrime discussions. T…
S30
A Guide for Practitioners — – What are the current macroeconomic, political and social environments, and how do they relate to health? A thoro…
S31
BOOK LAUNCH: The law and politics of Global Competition — In regards to developing and least developed economies, the speakers raise a question regarding the approach these econo…
S32
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — A challenge faced by universities is the disconnect between the skills and knowledge they provide and the skills demande…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S34
Artificial intelligence (AI) – UN Security Council — Across different sessions, participants expressed concerns about the lack of transparency in AI algorithms, which can le…
S35
What is it about AI that we need to regulate? — The lack of transparency in AI systems was identified as a fundamental issue requiring regulation. Abel Pires da Silva fr…
S36
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Juliana Sakai: Hi everyone, thank you. So we have like right now the policy question three with the theme enhancing en…
S37
How Trust and Safety Drive Innovation and Sustainable Growth — Alexandra Reeve Givens This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘en…
S38
Tech Transformed Cybersecurity: AI’s Role in Securing the Future — The role of universities and educational institutions is also emphasized. It is noted that many universities still utili…
S39
Driving Enterprise Impact Through Scalable AI Adoption — Audience sentiment suggests a growing narrative that university degrees may no longer be necessary, highlighting a chall…
S40
INTRODUCTION — Given the increasing needs of the workforce for personnel with advanced digital competencies and the curren t gap in…
S41
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S42
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S43
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Workflow woes: Even if an AI model performs well in a lab, integrating it into a real-world radiology workflow is a whol…
S44
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Amal El Fallah Seghrouchni: Thank you very much for the question. Yes, Morocco is Arabic-African. We have we are close t…
S45
The sTaTe of The — Survey respondents felt that the overall quality of aid workers in the field seemed to have improved overall, but not in…
S46
Policies and platforms in support of learning: towards more coherence, coordination and convergence — 35. The first and most meaningful observation that should be highlighted is that, despite general agreement on the princ…
S47
Young voices from Africa – Harnessing digital tools for sustainable trade — The lack of comprehensive understanding and data collection on the informal sector is identified as a major hindrance to…
S48
AI growth faces data shortage — The surge in AI, particularly with systems like ChatGPT, is facing a potential slowdown due to the impending depletion of …
S49
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S50
Upskilling for the AI era: Education’s next revolution — Doreen Bogdan Martin: Good afternoon, ladies and gentlemen. Yesterday morning on this very stage I spoke about skills. I…
S51
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S52
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Legal and regulatory | Sustainable development | Development Reports consistently identify governance of artificial int…
S53
AI (and) education: Convergences between Chinese and European pedagogical practices — – The irreplaceable importance of human emotional intelligence and mentorship 1. **Universities and teachers remain ess…
S54
The National Education Association approves AI policy to guide educators — The US National Education Association (NEA) Representative Assembly (RA) delegates have approved the NEA’s first policy st…
S55
Education meets AI — Artificial intelligence has the potential to revolutionize education by offering personalized learning experiences to ev…
S56
Empowering India & the Global South Through AI Literacy — Okay. So if you want to analyze the transformative bets, the major transformation that AI can bring into the classroom, …
S57
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S58
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Steven:Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a d…
S59
AI-generated ads face new disclosure rules in South Korea — South Korea will require advertisers to label AI-generated or AI-assisted advertising from early 2026, marking a shift in …
S60
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S61
Day 0 Event #261 Navigating Ethical Dilemmas in AI-Generated Content — Human rights | Legal and regulatory | Sociocultural Information Integrity and Human Rights Framework There must be dis…
S62
Democratizing AI Building Trustworthy Systems for Everyone — I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specif…
S63
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — The central question explored what becomes the scarcest resource when AI can provide instant answers to virtually any qu…
S64
Driving Enterprise Impact Through Scalable AI Adoption — Deep understanding and mastery are at risk as LLMs can fool people into thinking they understand when they don’t Sustai…
S65
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — But even those skills can be eroded without regular practice and engagement. Core cognitive capabilities, such as judgme…
S66
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S67
Education meets AI — Additionally, the speakers emphasized the need for personalized learning and adaptive teaching methods. They discussed t…
S68
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S69
Why apprenticeship and storytelling are the future of learning in the AI Era — AI, through approaches such as apprenticeship models and storytelling, can help swing the ‘learning pendulum’ back. It c…
S70
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — The use of Artificial Intelligence (AI) in education has both positive and negative impacts. On one hand, AI has the pot…
S71
Enhancing rather than replacing humanity with AI — People’s judgment remains crucial, particularly for decisions that involve values, context, or individual circumstances.
S72
THIRD SECTION — considers that what is essential for the protection of individuals’ rights in the context of the regime …
S73
One-Person Enterprise — Human oversight is still needed for important decisions
S74
Artificial intelligence and machine learning in armed conflict: A human-centred approach — Human control and judgement will be particularly important for tasks and decisions that can lead to injury or loss of li…
S75
UNSC meeting: Artificial intelligence, peace and security — Brazil:Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S76
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S77
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — Jennifer DeBoer has herself gone through traditional university education and holds multiple formal degrees The readine…
S78
Artificial intelligence (AI) – UN Security Council — Across different sessions, participants expressed concerns about the lack of transparency in AI algorithms, which can le…
S79
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S80
Artificial Intelligence & Emerging Tech — If a computer cannot explain its behavior, people will not trust it Issues mentioned include transparency, explainabili…
S81
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Domenico Zipoli: Thank you very much. It’s always fascinating to be in a room with both stakeholders coming from compani…
S82
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 2. Data privacy: Gong Ke highlighted data privacy concerns as a challenge for transparency. Amal El Fallah Seghrouchni:…
S83
WS #136 Leveraging Technology for Healthy Online Information Spaces — Nighat Dad: Yeah, no, thank you so much. Julia, I would, so I’ll talk a little bit about the UN Secretary General, HLAB,…
S84
WSIS Action Lines for Advancing the Achievement of SDGs | IGF 2023 Open Forum #5 — Interfaces exist between different diverging opinions The diversity of perspectives in defining feminism was also ackno…
S85
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — The panelists and online audience agreed on the equal importance of these ethical principles and called for further disc…
S86
World in Numbers: Risks / DAVOS 2025 — The speakers encouraged engagement with the report using the hashtag #WEF25 and mentioned that Channel 2 was available f…
S87
https://dig.watch/event/india-ai-impact-summit-2026/keynote-n-chandrasekaran — Finally, in conclusion, I just want to say that we are standing here at a very defining moment. It is the age of abundan…
S88
WS #231 Address Digital Funding Gaps in the Developing World — Online moderator: Yeah, sure. Thank you, Neeti. So we have an insight. One of the participants, Maarten, says that in ou…
S89
Filtered data not enough, LLMs can still learn unsafe behaviours — Large language models (LLMs) caninherit behavioural traits from other models, even when trained on seemingly unrelated d…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Hugo Sarrazin
12 arguments · 170 words per minute · 3459 words · 1214 seconds
Argument 1
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource.
EXPLANATION
Hugo argues that the sheer volume of information available today overwhelms individuals, leading to a scarcity of sustained attention. This scarcity, he suggests, is a critical bottleneck for learning in the AI era.
EVIDENCE
He cites Herbert Simon’s observation that “when you have a wealth of information, you have a poverty of attention” and emphasizes that attention is a major component for learners today [40-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes that sustained human attention is becoming scarce due to information wealth creating a poverty of attention [S1] and references scarcity thinking in the AI era [S19].
MAJOR DISCUSSION POINT
Attention scarcity
DISAGREED WITH
Aidan Gomez, Debbie Prentice
Argument 2
Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged.
EXPLANATION
Hugo describes how AI can personalize learning pathways, adapt content to different learner modalities, and provide immediate feedback through simulations. This approach aims to maintain engagement and improve skill acquisition.
EVIDENCE
He explains Udemy’s pivot toward an AI platform for workforce reskilling, highlighting quick assessments, class segmentation, and feedback loops that enhance learning outcomes [94-107]; later he gives concrete role-play examples for sales and call-center training with scoring rubrics [199-207].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Personalized learning experiences enabled by AI are highlighted as a way to engage learners and adapt to modalities [S24]; examples of AI-driven virtual classrooms support role-play and interactive feedback [S11]; rapid personalization that doubles learning rates is also discussed [S23].
MAJOR DISCUSSION POINT
Adaptive AI tutoring
DISAGREED WITH
Aidan Gomez
Argument 3
Human storytelling & augmentation (Hugo) – Teachers remain irreplaceable storytellers; AI can augment their style and expertise but not replace the human connection.
EXPLANATION
Hugo emphasizes the unique role of teachers as storytellers who inspire learners, asserting that AI can only augment—not replace—their personal teaching style. This augmentation could extend a teacher’s influence to new subjects.
EVIDENCE
He recounts how teachers are memorable storytellers and suggests AI could replicate a favorite teacher’s voice and presentation style for different topics, enhancing learning without supplanting the teacher [184-188].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to keep teachers as central, supportive figures while using AI as a tool is emphasized in discussions about AI not replacing the transmission of wisdom [S13].
MAJOR DISCUSSION POINT
Human storytelling & augmentation
Argument 4
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required.
EXPLANATION
Hugo points out that AI systems often provide answers without revealing their reasoning, which undermines user trust. He calls for development of explainable models and dedicated research to restore confidence.
EVIDENCE
He notes that many LLMs give answers without indicating sources, creating a trust issue, and later stresses the necessity for explainable AI, specialized trusted models, and research on model transparency [46-47][340-353].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, “glass-box” AI systems that reveal data sources and training provenance are made in trusted-AI discussions [S15]; reliability, safety, and verifiability of AI outputs are also stressed [S16].
MAJOR DISCUSSION POINT
Explainable AI
DISAGREED WITH
Aidan Gomez
Argument 5
ROI ambiguity and adaptive learning (Hugo) – Companies struggle to measure learning ROI; AI enables bite‑size, in‑flow, skill‑tracking solutions that align with business outcomes.
EXPLANATION
Hugo reports that enterprises find it difficult to quantify the return on investment of learning initiatives. He proposes AI-driven, bite‑size, adaptive learning that can be measured in real time to better align with business goals.
EVIDENCE
He references conversations with 400 CHROs revealing a proliferation of tools with unclear ROI, and describes AI’s ability to deliver bite-size, adaptive, in-flow learning with real-time skill tracking [128-140].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The difficulty of demonstrating learning ROI and the shift toward adaptive, bite-size learning measured in real time are highlighted [S1]; Bloom’s two-sigma problem and its relevance to ROI are discussed [S17]; AI-driven personalized learning that improves outcomes is noted [S23].
MAJOR DISCUSSION POINT
Learning ROI and adaptive solutions
Argument 6
Degree as a societal bundle (Hugo) – Traditional degrees combine credential, rite of passage, and research; AI‑driven economics may prompt a re‑evaluation and possible unbundling of these components.
EXPLANATION
Hugo characterizes university degrees as a societal bundle that includes certification, a rite of passage, and research output. He suggests that AI’s impact on education economics could lead to reconsidering or unbundling these elements.
EVIDENCE
He explains that a degree bundles credential, rite of passage, and research, noting that AI may force a re-evaluation of this structure and potentially lead to unbundling [408-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The degree is described as a “convenient bundle” of credential, rite of passage, and research, with AI prompting reconsideration of this structure [S1]; further analysis of the bundle appears in the Knowledge in the Age of AI discussion [S17].
MAJOR DISCUSSION POINT
Future of degree bundles
DISAGREED WITH
Debbie Prentice
Argument 7
Historical knowledge scarcity conferred power, and AI is reshaping that dynamic.
EXPLANATION
Hugo notes that in earlier eras knowledge was scarce and a source of geopolitical power, with nations fighting over it. He contrasts this with today’s AI, which can make vast amounts of information widely accessible, altering the traditional power structures tied to knowledge.
EVIDENCE
He references a historical perspective where knowledge was scarce, served as a source of power, and few polymaths existed, highlighting the shift brought by LLMs that can learn everything and democratize knowledge [31-36][37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift from scarce knowledge as a source of power to democratized AI-mediated knowledge is discussed in the panel on enterprise impact [S1] and in the context of scarcity thinking [S19].
MAJOR DISCUSSION POINT
Historical knowledge scarcity vs AI democratization
Argument 8
Large language models act as digital polymaths, democratizing access to knowledge across domains.
EXPLANATION
Hugo describes how modern LLMs can acquire expertise in multiple fields simultaneously, effectively becoming polymaths that anyone can query. This widespread availability reduces the exclusivity of expertise.
EVIDENCE
He states that LLMs can learn everything, become polymaths, and each new data center adds millions of such polymaths, leading to a democratization of knowledge [37-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source LLMs are presented as a way to democratize AI access globally, especially for the Global South [S21]; the broader impact of publicly accessible models on knowledge democratization is explored [S22].
MAJOR DISCUSSION POINT
AI as a democratizing polymath
Argument 9
AI can deliver one‑on‑one tutoring comparable to Bloom’s two‑sigma effect, overcoming economic barriers.
EXPLANATION
Citing Bloom’s research on the superior outcomes of individualized coaching, Hugo argues that AI can provide similar personalized instruction at scale, offering the benefits of one‑on‑one tutoring without the prohibitive costs.
EVIDENCE
He mentions the Bloom two-sigma problem, noting that one-on-one coaching yields two-sigma higher learning, and explains that AI can replicate this personalized feedback loop at scale [149-156].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bloom’s two-sigma problem is cited as evidence of the power of individualized tutoring, and AI is positioned as a scalable way to achieve similar gains [S17]; personalized lessons that double learning rates further support this claim [S23].
MAJOR DISCUSSION POINT
AI‑enabled personalized tutoring
Argument 10
AI can help learners formulate the right questions, a prerequisite for effective learning.
EXPLANATION
Hugo emphasizes that teaching students how to ask the correct question is essential, and AI tools can support this skill by prompting, providing feedback, and guiding inquiry, thereby strengthening critical thinking.
EVIDENCE
He asserts that education must teach how to ask the right question and that AI can aid in this process, highlighting the importance of question formulation for learning outcomes [236-239].
MAJOR DISCUSSION POINT
Question‑formulation support by AI
Argument 11
AI‑driven platforms can halve learning time by focusing on contextual understanding rather than rote memorization.
EXPLANATION
Referencing the Alpha School example, Hugo explains that AI can accelerate learning by emphasizing the context of information, allowing students to achieve mastery in a fraction of the traditional time.
EVIDENCE
He describes Alpha School’s use of AI to achieve learning in half the time by prioritizing contextual learning over isolated facts, illustrating a practical outcome of AI-enhanced education [242-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence that AI can double learning speed by emphasizing context over memorization is provided in discussions of personalized, contextual learning [S23] and adaptive learning systems [S24].
MAJOR DISCUSSION POINT
Accelerated learning through contextual AI
Argument 12
Rapid labor‑market changes demand adaptable, bite‑size learning focused on specific skills rather than broad curricula.
EXPLANATION
Hugo argues that because skills become obsolete quickly, education must be flexible and deliver targeted, bite‑size modules that align with immediate business needs, moving away from one‑size‑fits‑all programs.
EVIDENCE
He notes the necessity for adaptability and skill-specific focus due to fast-moving labor markets, emphasizing the shift toward modular, real-time skill development [419-421].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for modular, skill-specific learning to keep pace with fast-moving labor markets is highlighted in analyses of adaptive learning and the future of education [S24]; calls for rethinking traditional curricula appear in broader education reform commentary [S25].
MAJOR DISCUSSION POINT
Adaptability and skill‑specific learning
Aidan Gomez
9 arguments · 166 words per minute · 1973 words · 710 seconds
Argument 1
Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
EXPLANATION
Aidan warns that LLMs provide quick, superficial answers that can give learners an illusion of mastery without true comprehension. This false confidence jeopardizes deep learning.
EVIDENCE
He describes how LLMs deliver brief responses that appear to answer complex questions, creating a false sense of mastery, and argues that testing without tools is essential to assess true depth of understanding [48-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced integration of AI that safeguards deep learning and critical thinking is advocated in discussions of responsible AI use in education [S26].
MAJOR DISCUSSION POINT
Erosion of deep mastery
DISAGREED WITH
Hugo Sarazen, Debbie Prentice
Argument 2
Enterprise LLM deployment (Aidan) – Cohere equips organizations with secure, on‑premise models that let employees manage AI agents, shifting work from manual execution to AI‑augmented decision‑making.
EXPLANATION
Aidan outlines Cohere’s strategy of providing large language models that run within a client’s own infrastructure, ensuring data security while enabling employees to orchestrate AI agents for tasks, thereby augmenting human work.
EVIDENCE
He details Cohere’s development of core LLM technology, its focus on enterprise customers, and the security model that keeps data on-premise without leaving the customer’s perimeter [110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The enterprise-centric AI model, emphasizing on-premise deployment for security, is described in the panel on scalable AI adoption [S1]; industry perspectives on AI reshaping professional work support this view [S14].
MAJOR DISCUSSION POINT
Secure enterprise LLMs
Argument 3
Human grounding & testing (Aidan) – Humans are the ultimate customers; AI is a tool that must be complemented by rigorous, tool‑free testing to ensure genuine competence.
EXPLANATION
Aidan stresses that while AI serves as a powerful assistant, humans remain the end‑users who must be evaluated without reliance on AI to verify true competence. Rigorous testing without AI is therefore crucial.
EVIDENCE
He states that humans are the customers, emphasizes the need for testing without AI tools to gauge real knowledge, and calls testing critical especially as AI can enable cheating [171-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of testing without AI tools to assess true competence is emphasized in recommendations for assessment practices [S26]; the role of educators in validating AI-augmented learning is also noted [S13].
MAJOR DISCUSSION POINT
Testing without AI
DISAGREED WITH
Hugo Sarazen
Argument 4
Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
EXPLANATION
Aidan introduces advanced LLMs that generate internal reasoning steps before answering and can retrieve and cite external documents, making their outputs more transparent and verifiable.
EVIDENCE
He explains that modern models perform internal monologues, reason through problems, and use retrieval-augmented generation to cite sources such as Cambridge or Oxford libraries, enhancing auditability [374-383].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for transparent, auditable AI outputs that reveal reasoning and sources are made in trusted-AI discussions [S15]; safety and verifiability concerns reinforce the need for such mechanisms [S16].
MAJOR DISCUSSION POINT
Reasoning and source citation
DISAGREED WITH
Hugo Sarazen
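Aidan's description of retrieval-augmented generation, in which a model retrieves supporting documents and cites them alongside its answer, can be sketched as a toy pipeline. The corpus, the word-overlap scoring, and all function names below are illustrative assumptions for exposition, not Cohere's actual API or any production retriever:

```python
# Toy sketch of retrieval-augmented generation (RAG) with citations.
# Word-overlap scoring stands in for a real embedding-based retriever.

def tokenize(text):
    return set(text.lower().split())

def retrieve(query, corpus, k=2):
    """Rank documents by word overlap with the query and return the top k
    as (doc_id, text) pairs."""
    scored = sorted(
        corpus.items(),
        key=lambda item: len(tokenize(query) & tokenize(item[1])),
        reverse=True,
    )
    return scored[:k]

def answer_with_citations(query, corpus):
    """Compose a reply that names its retrieved sources, so the output
    is auditable rather than a black-box answer."""
    hits = retrieve(query, corpus)
    sources = ", ".join(doc_id for doc_id, _ in hits)
    return f"Answer based on: [{sources}]"

corpus = {
    "doc1": "Bloom found one-on-one tutoring raises learning outcomes",
    "doc2": "Reasoning models expose an internal monologue before answering",
    "doc3": "Enterprise LLMs run on-premise for data security",
}

print(answer_with_citations("How do reasoning models answer?", corpus))
```

The point of the sketch is the shape of the mechanism: the answer carries references back to retrievable sources, which is what makes the output auditable in the sense Aidan describes.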
Argument 5
Curriculum‑market mismatch (Aidan) – Academic offerings often lag behind labor‑market needs; AI can accelerate creation of new programs but skill identification must come from humans.
EXPLANATION
Aidan highlights the gap between university curricula and the rapidly evolving skill demands of the labor market. He suggests AI can speed up program development, yet the identification of needed skills must be driven by human stakeholders.
EVIDENCE
He points out the mismatch between education and market demand, proposes AI to quickly create new courses, and notes that humans must first define the required skills [254-266].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The gap between university curricula and labor-market demand is highlighted, with AI proposed as a tool to speed program creation while human expertise defines skill needs [S24]; broader calls for curriculum reform appear in education change commentary [S25].
MAJOR DISCUSSION POINT
Education‑industry alignment
Argument 6
Calculator‑free assessment (Aidan) – The gold standard is testing without AI to gauge true retention, while also recognizing AI‑usage as a distinct skill to be evaluated.
EXPLANATION
Aidan argues that the most reliable assessment removes AI tools to measure what learners truly retain, but also acknowledges that proficiency in using AI is itself a skill that should be assessed separately.
EVIDENCE
He critiques existing AI-detectors, then asserts that testing without AI is the gold standard for measuring retention, while also noting that using AI effectively is a skill worth evaluating [321-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recommendations for tool-free assessments as the gold standard, alongside evaluation of AI-usage competence, are discussed in responsible AI assessment guidelines [S26].
MAJOR DISCUSSION POINT
Assessment without AI tools
DISAGREED WITH
Hugo Sarazen
Argument 7
Enterprise focus over formal credentials (Aidan) – Cohere’s enterprise model emphasizes skill development within organizations rather than formal academic certification.
EXPLANATION
Aidan explains that Cohere’s business model targets corporate clients and critical industries, focusing on building employee capabilities directly rather than providing traditional academic degrees.
EVIDENCE
He mentions that Cohere serves critical sectors such as finance, telecom, healthcare, and education, emphasizing enterprise-centric skill development over formal credentials [120-121] (building on the earlier description of Cohere’s enterprise focus [110-118]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The shift toward enterprise-centric skill development over traditional degrees is noted in industry analyses of AI’s impact on professional training [S14]; personalized learning that doubles outcomes supports this enterprise focus [S23].
MAJOR DISCUSSION POINT
Enterprise‑centric skill building
Argument 8
Robust benchmarks are needed to evaluate the teaching effectiveness of AI models.
EXPLANATION
Aidan points out that while teaching with AI is a skill, there are currently no standardized metrics to assess model performance across subjects, calling for the creation of comprehensive benchmarks.
EVIDENCE
He states that teaching is a skill requiring benchmarks, notes the lack of existing teaching benchmarks, and mentions possible benchmark domains such as code, math, and biology [300-307].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of standardized benchmarks for AI teaching effectiveness is identified, with calls for comprehensive evaluation frameworks in AI education research [S26].
MAJOR DISCUSSION POINT
Need for AI teaching benchmarks
Argument 9
The brittleness of early LLMs, which answered all queries with equal confidence, necessitates reasoning and internal monologue mechanisms.
EXPLANATION
Aidan explains that prior models were brittle, providing confident but sometimes incorrect answers regardless of question complexity, prompting the development of reasoning models that incorporate internal deliberation before responding.
EVIDENCE
He describes how models responded with the same effort to simple arithmetic and complex unsolved problems, revealing brittleness, and then introduces reasoning models with internal monologues to improve reliability [386-394].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Early model brittleness and the need for reasoning, internal monologue, and source citation to improve reliability are discussed in trusted-AI and safety literature [S15][S16].
MAJOR DISCUSSION POINT
Brittleness driving reasoning models
Debbie Prentice
7 arguments · 124 words per minute · 1283 words · 616 seconds
Argument 1
Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
EXPLANATION
Debbie notes that the live poll indicated critical thinking as the top choice among participants, underscoring its importance for self‑assessment and autonomous learning.
EVIDENCE
She reports that the audience’s preferred option was critical thinking, with sustained attention close behind, based on the poll results displayed during the session [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Critical thinking and literacy as essential competencies for a positive AI future are emphasized in youth summit and inclusion discussions [S10][S26].
MAJOR DISCUSSION POINT
Critical thinking as priority
DISAGREED WITH
Hugo Sarazen, Aidan Gomez
Argument 2
Attention‑boosting personalization (Debbie) – AI‑driven personalization can counter short‑attention spans by tailoring content to auditory, visual, or other learner preferences.
EXPLANATION
Debbie suggests that AI can mitigate declining attention spans by customizing learning material to match individual learner modalities, thereby keeping learners engaged longer.
EVIDENCE
She argues that AI can personalize experiences for auditory, visual, or other learners, making content more engaging and helping sustain attention [144-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled personalization that adapts to learner modalities to sustain attention is highlighted in adaptive learning research [S24] and virtual classroom innovations [S11].
MAJOR DISCUSSION POINT
Personalized attention
Argument 3
Human intervention necessity (Debbie) – Even with sophisticated technology, educators must intervene to guide, validate, and contextualize learning.
EXPLANATION
Debbie emphasizes that technology cannot replace the role of educators, who need to provide guidance, validation, and context to ensure meaningful learning outcomes.
EVIDENCE
She states that despite advanced tools, educators still need to step in to guide, validate, and contextualize learning experiences [169-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for teacher support, professional development, and human guidance alongside AI tools is discussed in educator-centric AI integration studies [S12][S13].
MAJOR DISCUSSION POINT
Need for educator intervention
Argument 4
Expertise without visible work (Debbie) – Raises concern that future AI may provide answers without showing reasoning, challenging traditional notions of authority and verification.
EXPLANATION
Debbie questions how society will handle AI outputs that lack transparent reasoning or citations, which could undermine established methods of verifying expertise and authority.
EVIDENCE
She asks what to do in a world where AI provides answers without showing the work or citing sources, challenging the traditional reliance on libraries and expert verification [335-339].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concerns about AI outputs lacking transparent reasoning and the call for “glass-box” AI to maintain authority are raised in trusted-AI discussions [S13][S15].
MAJOR DISCUSSION POINT
Opaque AI outputs
Argument 5
University’s broader mission (Debbie) – Universities prioritize critical thinking and deep mastery, offering value beyond immediate job skills despite industry pressure for applied knowledge.
EXPLANATION
Debbie defends the university role in fostering critical thinking and deep mastery, arguing that higher education provides broader societal value beyond immediate vocational training.
EVIDENCE
She argues that universities teach critical thinking and deep mastery, providing essential skills even if employers seek more applied knowledge, and positions this as a core university contribution [267-272].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The university’s role in fostering critical thinking and deep mastery, distinct from vocational training, is defended in higher-education commentary [S13][S25].
MAJOR DISCUSSION POINT
University mission vs. job skills
DISAGREED WITH
Hugo Sarazen
Argument 6
Detecting AI‑generated work (Debbie) – Current AI‑detectors are unreliable; better tools are needed to differentiate human from machine output in assessments.
EXPLANATION
Debbie highlights the inadequacy of existing AI‑detection tools and calls for improved mechanisms to reliably identify AI‑generated content in educational assessments.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The unreliability of existing AI-detectors and the need for improved verification mechanisms are highlighted in trusted-AI and safety literature [S15][S16].
MAJOR DISCUSSION POINT
AI detection reliability
Argument 7
Universities’ enduring value (Debbie) – Despite cost pressures, universities deliver research, critical inquiry, and cultural functions that remain “gold” in society.
EXPLANATION
Debbie asserts that universities continue to provide essential research, critical inquiry, and cultural contributions, making them invaluable despite rising tuition costs.
EVIDENCE
She describes universities as delivering research, critical inquiry, and cultural functions, referring to them as “gold” in society [425-426].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The enduring societal value of universities in research, critical inquiry, and cultural contributions is emphasized in discussions of higher-education purpose [S13][S25].
MAJOR DISCUSSION POINT
Enduring value of universities
Audience
6 arguments · 164 words per minute · 886 words · 323 seconds
Argument 1
AI‑enabled technologies can boost learner motivation through interactive role‑play and gamified feedback loops.
EXPLANATION
An audience member asks how AI can address motivation, and Hugo later illustrates AI role‑play simulations and feedback mechanisms that make learning feel like a workout, keeping participants engaged.
EVIDENCE
The audience member (Anna) questions motivation and AI’s role [194-198]; Hugo responds with AI role-play examples for sales and call-center training, plus feedback loops that act like gym repetitions to reinforce learning [199-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Interactive, gamified AI learning experiences that increase motivation are described in adaptive learning and personalized feedback studies [S24][S23].
MAJOR DISCUSSION POINT
Motivation via AI‑driven interactive learning
Argument 2
Integrating AI into physical classrooms requires teaching critical thinking about AI use and a balanced policy on bans.
EXPLANATION
A participant from Australia raises concerns about AI’s role in classrooms and potential bans, prompting Aidan to argue that AI should be taught as a tool while emphasizing the need to prevent misuse and ensure critical engagement.
EVIDENCE
The audience member (Nathaniel) asks about AI’s role in physical classrooms and bans [217-222]; Aidan replies that AI must be part of schooling, taught as a tool, with safeguards against cheating [223-229].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidelines for integrating AI in classrooms while teaching critical AI literacy and establishing balanced policies are discussed in educator-focused AI integration literature [S12][S13].
MAJOR DISCUSSION POINT
AI in classrooms and policy balance
Argument 3
Education should focus on delivering applied knowledge that directly supports livelihoods across sectors, moving beyond traditional degree structures.
EXPLANATION
An audience speaker highlights the mismatch between academic offerings and job market needs, calling for curricula that provide practical, employable skills for all types of work rather than emphasizing degrees.
EVIDENCE
The audience member (Pranjal Sharma) describes the gap between skills demanded by industry and what academia offers, argues that degrees should be reconsidered, and emphasizes the need for applied knowledge that enables livelihood across white-, gray-, and blue-collar jobs [247-252].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for applied, livelihood-focused curricula and a shift away from traditional degree-centric models appear in education reform commentary [S25][S24].
MAJOR DISCUSSION POINT
Shift toward applied, livelihood‑focused education
Argument 4
Reliable detection of AI‑generated work and balanced testing strategies are essential, combining tool‑free assessment with evaluation of AI‑assisted skills.
EXPLANATION
A CEO asks how testing should work with or without AI (by analogy to calculators) and questions the reliability of detectors; Aidan acknowledges the shortcomings of current detectors and stresses the importance of both strict, tool‑free testing and assessing AI‑usage competence.
EVIDENCE
The audience member (Kian) raises concerns about AI detection tools and testing approaches [320-327]; Aidan discusses the high error rates of existing detectors, the need for better tools, and the value of testing without AI while also recognizing AI-usage as a skill [321-334].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of robust AI-detection tools and the combination of tool-free testing with assessment of AI-usage skills are highlighted in trusted-AI and assessment guidelines [S15][S16][S26].
MAJOR DISCUSSION POINT
AI detection and testing methodology
Argument 5
The emergence of AI polymaths raises existential concerns about human relevance, underscoring the need to preserve human agency.
EXPLANATION
An audience comment expresses a pessimistic view that AI’s polymath capabilities could render humans obsolete, highlighting the urgency of maintaining human decision‑making and oversight.
EVIDENCE
The audience participant remarks that AI’s polymath nature means humans cannot compete and suggests a bleak outlook for humanity [308-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debates about AI democratizing knowledge while preserving human agency and the need for transparent, accountable AI systems are discussed in AI ethics and governance literature [S13][S21].
MAJOR DISCUSSION POINT
Human agency versus AI polymath
Argument 6
Training junior workers in critical thinking about AI outputs is crucial, as senior professionals can assess AI but juniors often cannot.
EXPLANATION
An audience member points out that senior staff can judge AI‑generated results, whereas junior employees lack the experience, emphasizing the need for capacity‑building programs that develop critical evaluation skills in the next generation.
EVIDENCE
The audience comment notes the gap in critical thinking abilities between senior professionals and junior staff, calling for training to bridge this divide [318-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of building AI literacy and critical evaluation skills in junior staff, alongside educator support, is emphasized in professional development and AI education research [S12][S26].
MAJOR DISCUSSION POINT
Building critical AI literacy in junior workforce
Agreements
Agreement Points
Attention scarcity and the need for AI‑driven personalization to sustain it
Speakers: Hugo Sarazen, Debbie Prentice, Aidan Gomez
Attention scarcity (Hugo) · Attention‑boosting personalization (Debbie)
All three speakers note that the abundance of information creates a poverty of attention, making sustained human focus a scarce resource, and suggest that AI-enabled personalization can help keep learners engaged [40-43][144-146][143].
Critical thinking is essential for navigating AI‑generated information
Speakers: Debbie Prentice, Hugo Sarazen
Critical‑thinking priority (Debbie) · Human storytelling & augmentation (Hugo)
Debbie highlights that the audience prioritized critical thinking, and Hugo stresses teaching learners how to ask the right questions and apply critical judgment, indicating shared emphasis on critical thinking as a key competency [65-69][236-239].
AI outputs must be explainable and trustworthy
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Need for explainable AI (Hugo) · Reasoning & citation mechanisms (Aidan) · Expertise without visible work (Debbie)
The panel agrees that AI systems that provide answers without showing reasoning or sources erode trust; they call for models that expose internal reasoning, cite references, and improve auditability [46-47][340-353][374-383][386-394][335-339].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with emerging AI governance frameworks that stress explainability, accountability, and trust, as highlighted in UN AI Security Council discussions on algorithmic transparency [S42] and India’s Secure Finance Risk-Based AI Policy emphasizing predictable, explainable systems [S41]. Ethical AI sessions also call for transparent, trustworthy tools [S44], and broader initiatives aim to revitalize trust in public services through AI governance [S60].
Human educators remain indispensable and should be augmented, not replaced, by AI
Speakers: Hugo Sarazen, Debbie Prentice, Aidan Gomez
Human storytelling & augmentation (Hugo) · Human intervention necessity (Debbie) · Human grounding & testing (Aidan)
All speakers assert that teachers’ storytelling and human guidance are crucial; AI can augment but not supplant the educator’s role in guiding, validating, and contextualizing learning [184-188][169-170][171-176].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements from UNESCO and the NEA underscore that teachers should transition to coaching roles while AI serves as a supportive augmentative tool [S53][S54]. UNICEF’s AI policy guidance further stresses the need for human oversight to protect children’s rights when integrating AI in education [S58].
AI can accelerate learning and provide personalized, one‑on‑one tutoring comparable to high‑impact human coaching
Speakers: Hugo Sarazen, Aidan Gomez
AI can deliver one‑on‑one tutoring comparable to Bloom’s two‑sigma effect (Hugo) · Reasoning & citation mechanisms (Aidan)
Hugo cites the Alpha School example where AI halves learning time, while Aidan notes AI can speed creation of new programs and provide reasoning capabilities, together indicating AI’s potential to dramatically accelerate learning [242-250][254-266][374-383].
POLICY CONTEXT (KNOWLEDGE BASE)
Research collaborations such as those with Stanford demonstrate adaptive learning systems that deliver personalized tutoring at scale [S55]. The IGF Youth Summit also notes AI’s capacity to tailor education to individual student needs, enhancing learning outcomes [S57].
Similar Viewpoints
Both argue that modern LLMs need internal reasoning steps and source citation to become trustworthy and auditable tools [340-353][374-383].
Speakers: Hugo Sarazen, Aidan Gomez
Need for explainable AI (Hugo) · Reasoning & citation mechanisms (Aidan)
Both describe enterprise‑focused AI solutions that embed models within organizations to deliver personalized, secure, and scalable learning experiences for workforce reskilling [94-107][110-118].
Speakers: Hugo Sarazen, Aidan Gomez
Adaptive AI tutoring (Hugo) · Enterprise LLM deployment (Aidan)
Both stress that educators must remain in the loop to provide guidance, motivation, and validation, with AI serving as a supportive tool rather than a replacement [184-188][169-170].
Speakers: Hugo Sarazen, Debbie Prentice
Human storytelling & augmentation (Hugo) · Human intervention necessity (Debbie)
Unexpected Consensus
Attention as the scarcest resource despite differing sectoral perspectives
Speakers: Hugo Sarazen, Debbie Prentice
Attention scarcity (Hugo) · Attention‑boosting personalization (Debbie)
Although Hugo represents a for-profit ed-tech firm and Debbie a non-profit university, both converge on the view that sustained human attention is the most limited resource in the AI era and that personalization is the remedy, a convergence not anticipated given their institutional differences [40-43][144-146].
Overall Assessment

The discussion reveals strong convergence on four main fronts: (1) attention scarcity and the role of AI personalization; (2) the centrality of critical thinking; (3) the necessity for explainable, trustworthy AI; and (4) the enduring, augmentable role of human educators. Additionally, Hugo and Aidan both see AI as a catalyst for faster, individualized learning and for enterprise‑level workforce development.

High consensus across speakers on the challenges posed by AI (attention, trust, critical thinking) and on AI‑enabled solutions (personalization, adaptive tutoring, explainability). This broad agreement suggests a shared understanding that future education policies must balance AI integration with human oversight, prioritize critical thinking, and invest in transparent, learner‑centric AI systems.

Differences
Different Viewpoints
What is the scarcest or most threatened resource for learning in the AI era?
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource. Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding. Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
Hugo argues that sustained human attention is the bottleneck due to information overload [40-43]. Aidan counters that the real danger is a superficial sense of mastery caused by quick LLM answers, eroding deep understanding [48-53]. Debbie notes that the audience prioritized critical thinking, implying that the ability to evaluate one’s own learning may be the most needed skill [65-69].
POLICY CONTEXT (KNOWLEDGE BASE)
AI policy forums have identified trust, stewardship, and human capability as the most scarce resources in the AI decade, echoing concerns about limited attention and capability [S49]. UN learning policy discussions also highlight strategic allocation of learning resources as a critical challenge [S46].
How to restore trust and transparency in AI‑generated answers?
Speakers: Hugo Sarazen, Aidan Gomez
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required. Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
Hugo calls for dedicated research on explainable, specialized models and mechanisms to show sources, stressing that black-box answers undermine trust [46-47][340-353]. Aidan proposes technical solutions: reasoning steps and retrieval-augmented generation that reveal the model’s chain of thought and citations, thereby increasing auditability [374-383][386-394].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple governance initiatives call for algorithmic transparency, rigorous testing, and explainability to rebuild trust, as seen in UN AI Security Council recommendations [S42], India’s AI trust and accountability framework [S41], and dedicated sessions on transparency and explainability [S44]. Recent efforts to revitalize public trust focus on AI governance, disclosure, and oversight mechanisms [S60][S61][S62].
Future role of the university degree versus AI‑driven online credentials.
Speakers: Hugo Sarazen, Debbie Prentice
Degree as a societal bundle (Hugo) – Traditional degrees combine credential, rite of passage, and research; AI‑driven economics may prompt a re‑evaluation and possible unbundling of these components. University’s broader mission (Debbie) – Universities prioritize critical thinking and deep mastery, offering value beyond immediate job skills despite industry pressure for applied knowledge.
Hugo suggests that AI could force a re-thinking of the bundled university degree, potentially unbundling credential, rite of passage, and research functions [408-416]. Debbie defends the university’s broader mission, emphasizing its role in fostering critical thinking, deep mastery, and research, which she views as “gold” for society [425-426][267-272].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry analyses note a growing narrative that traditional university degrees may become less essential, challenging higher-education institutions to demonstrate unique value [S39]. Concurrently, calls for universities to modernize curricula and strengthen AI-focused degree programs are documented [S38][S40], while policy discussions stress the need for higher-education to adapt amid AI-driven credentialing trends [S53].
Approach to assessment: tool‑free testing versus AI‑driven evaluation.
Speakers: Aidan Gomez, Hugo Sarazen
Human grounding & testing (Aidan) – Humans are the ultimate customers; AI is a tool that must be complemented by rigorous, tool‑free testing to ensure genuine competence. Calculator‑free assessment (Aidan) – The gold standard is testing without AI to gauge true retention, while also recognizing AI‑usage as a distinct skill to be evaluated. Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged.
Aidan stresses that the most reliable assessment removes AI tools to measure true knowledge retention and that testing without AI should be the gold standard, while also acknowledging AI-usage as a skill [171-182][321-334]. Hugo promotes AI-enabled quick assessments, class segmentation, and feedback loops as a way to personalize and accelerate learning, implying AI can be part of the assessment process [104-107][199-207].
Unexpected Differences
Optimism about AI as a democratizing polymath versus concern over false mastery.
Speakers: Hugo Sarazen, Aidan Gomez
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource. Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
Both speakers come from AI-focused companies, yet Hugo is upbeat about AI’s ability to democratize knowledge and solve attention problems, while Aidan warns that AI may give learners a misleading sense of mastery, undermining deep learning. The contrast between optimism about AI’s benefits and caution about its superficial impact was not anticipated given their similar industry backgrounds [40-43][48-53].
Different views on the primary bottleneck for learning—attention versus deep mastery.
Speakers: Hugo Sarazen, Aidan Gomez
Attention scarcity (Hugo) – The abundance of information creates a “poverty of attention,” making sustained human focus the most limited resource. Deep‑mastery erosion (Aidan) – Rapid, surface‑level answers from LLMs foster a false sense of mastery, threatening deep understanding.
While both discuss challenges posed by AI, Hugo identifies attention as the scarce resource, whereas Aidan points to the erosion of deep mastery as the core problem. The divergence in diagnosing the primary learning bottleneck was unexpected given the shared context of AI-enhanced education [40-43][48-53].
Overall Assessment

The panel reveals substantive disagreements on which learning resource is most endangered (attention vs deep mastery vs critical thinking), how to secure trust in AI outputs (explainability research vs technical reasoning/citation), the future of the university degree bundle, and the proper role of AI in assessment. While all participants agree AI will reshape education, they diverge on priorities and implementation pathways.

Moderate to high disagreement: the speakers share a common recognition of AI’s transformative potential but differ sharply on strategic focus areas, indicating that consensus on policy and practice will require careful negotiation across academia, industry, and education providers.

Partial Agreements
Both agree that AI should be leveraged to improve learning outcomes, but Hugo emphasizes personalization through tutoring and simulations, whereas Aidan focuses on technical enhancements (reasoning, citation) to make AI outputs trustworthy. Their shared goal is better learning, but the pathways differ [104-107][199-207][374-383].
Speakers: Hugo Sarazen, Aidan Gomez
Adaptive AI tutoring (Hugo) – AI can deliver individualized, multimodal learning experiences, role‑play simulations, and real‑time feedback to keep learners engaged. Reasoning & citation mechanisms (Aidan) – New “reasoning” models with internal monologues and retrieval‑augmented generation can expose chains of thought and cite sources, improving auditability.
Both see critical thinking (and the ability to evaluate information) as essential. Hugo links it to the need for explainable AI to support critical evaluation, while Debbie highlights the audience’s preference for critical thinking as a skill to be cultivated. They converge on the importance of critical evaluation but differ on whether the focus should be on AI transparency or pedagogical emphasis [340-353][65-69].
Speakers: Hugo Sarazen, Debbie Prentice
Need for explainable AI (Hugo) – Reliance on black‑box answers erodes trust; specialized, transparent models and explainability research are required. Critical‑thinking priority (Debbie) – Audience poll shows critical thinking is most valued; self‑knowledge and the ability to judge one’s own learning are essential.
Takeaways
Key takeaways
In the AI era, human cognitive resources—especially sustained attention, deep mastery, and critical thinking—are becoming scarcer than information itself.
Audience consensus places critical thinking as the most valued skill, followed closely by sustained attention.
LLMs provide rapid, surface‑level answers that can create a false sense of mastery, threatening deep understanding.
AI can enable highly personalized, multimodal learning experiences (adaptive tutoring, role‑play simulations, real‑time feedback) that may help mitigate attention deficits.
Enterprise‑focused AI (Cohere) emphasizes secure, on‑premise deployment and shifting workers from manual execution to managing AI agents.
Human teachers remain essential as storytellers and mentors; AI should augment, not replace, the human connection and pedagogical judgment.
Trust and explainability are critical; current black‑box LLM outputs erode confidence, prompting calls for reasoning models, internal monologues, and retrieval‑augmented generation with citations.
There is a pronounced gap between academic curricula and rapidly evolving industry skill demands; AI can accelerate program creation but skill identification must be driven by humans.
Assessment strategies need a dual approach: tool‑free testing to verify true retention and AI‑enhanced simulations for competency verification; existing AI‑detectors are unreliable.
The traditional university degree is a societal bundle (credential, rite of passage, research) that may be reconsidered or unbundled as AI lowers delivery costs, yet universities retain unique value in research and critical inquiry.
Resolutions and action items
Develop and deploy AI‑driven adaptive tutoring and role‑play assessment tools to personalize learning and provide immediate feedback (suggested by Hugo).
Invest in explainable‑AI research, including reasoning models with internal monologues and retrieval‑augmented generation that can cite sources (suggested by Aidan).
Create more robust AI‑generated content detection mechanisms for educators (Aidan).
Implement testing regimes that include both AI‑free assessments for core knowledge retention and AI‑in‑the‑loop assessments for tool proficiency (Aidan).
Align enterprise learning platforms with real‑time skill‑tracking and ROI measurement to demonstrate business impact (Hugo).
Identify emerging labor‑market skill needs through human‑led analysis to guide AI‑generated curriculum development (Aidan).
Maintain human teacher involvement to augment AI outputs, preserving storytelling and mentorship aspects (Hugo).
Unresolved issues
How to reliably ensure trust and explainability of AI answers at scale, especially in high‑stakes contexts.
Standardized methods for measuring learning ROI that go beyond completion metrics.
Effective ways to integrate AI into physical classrooms while preventing misuse or over‑reliance (debate on bans vs. adoption).
Long‑term implications of unbundling the university degree and how credentialing will evolve.
Scalable solutions for detecting AI‑generated work that are accurate and not prone to false positives/negatives.
Clear frameworks for balancing AI‑augmented learning with the development of deep, disciplined mastery.
Suggested compromises
Combine AI personalization with human storytelling and mentorship, using AI to augment rather than replace teachers.
Adopt a dual assessment model: retain traditional, tool‑free exams for core competence while adding AI‑enhanced simulations for applied skills.
Use AI for rapid curriculum creation and skill‑mapping, but keep human experts responsible for defining market‑relevant skill sets.
Maintain the degree bundle for its cultural and research value while allowing modular, AI‑driven skill certifications for specific job needs.
Thought Provoking Comments
When you have a wealth of information, you have a poverty of attention.
He invoked Herbert Simon’s classic insight to frame attention as the scarcest resource in the AI era, shifting the focus from data abundance to human cognitive limits.
This remark redirected the conversation from what knowledge is scarce to how learners cope with overload. It prompted Hugo and Aidan to discuss attention‑related challenges, leading to later dialogue about AI‑driven personalization as a possible remedy.
Speaker: Hugo Sarazen
LLMs can fool you into thinking you understand something when you don’t… testing is essential – you need to take the tool away to see what the human alone retains.
He identified a core risk of AI‑augmented education: a false sense of deep mastery, and proposed rigorous, tool‑free assessment as a safeguard.
This sparked a deeper debate on assessment, influencing both panelists to stress the importance of testing (Aidan on strict testing regimes, Hugo on AI‑augmented feedback) and setting up later discussion about measuring AI‑generated learning outcomes.
Speaker: Aidan Gomez
The Bloom two‑sigma problem shows one‑on‑one tutoring can double learning outcomes, but economics prevented scaling – AI can now provide that personalized coaching at scale.
He connected a well‑known educational research finding to current AI capabilities, suggesting a concrete way AI could overcome historic scalability limits.
This comment opened a new thread about AI‑driven adaptive learning and role‑play simulations, leading to audience questions on motivation and concrete examples of AI‑based practice environments.
Speaker: Hugo Sarazen
Modern models now have an internal monologue – a reasoning step – and can be coupled with retrieval‑augmented generation to cite sources, giving auditability and explainability.
He introduced the technical evolution from pure input‑output LLMs to reasoning and RAG architectures, directly addressing concerns about trust and explainability raised earlier.
This shifted the discussion from philosophical concerns to concrete technical solutions, prompting Hugo to elaborate on the need for specialized, trusted models and influencing the later debate on “front‑end” vs. “back‑end” AI capabilities.
Speaker: Aidan Gomez
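
The reasoning-plus-retrieval idea described above can be sketched in a few lines. The sketch below is a toy illustration only: the corpus, the keyword-overlap retriever, and the answer format are all assumptions for demonstration, and real systems would use vector embeddings and an LLM rather than string concatenation. What it shows is the auditability property itself, that every answer carries citations to the documents it was grounded in.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Corpus and scoring are illustrative assumptions; production systems
# use embedding-based retrieval and an LLM to compose the answer.
from collections import Counter

CORPUS = {
    "doc1": "Reasoning models expose an internal monologue before answering.",
    "doc2": "Retrieval-augmented generation grounds answers in source documents.",
    "doc3": "Tool-free testing measures what a learner retains without AI.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = Counter(query.lower().split())
    def score(text: str) -> int:
        # Counter returns 0 for absent words, so unmatched terms add nothing.
        return sum(q_terms[w] for w in text.lower().split())
    ranked = sorted(CORPUS, key=lambda d: score(CORPUS[d]), reverse=True)
    return ranked[:k]

def answer_with_citations(query: str) -> str:
    """Assemble an answer that cites its retrieved sources, so the output
    is auditable rather than a black-box reply."""
    sources = retrieve(query)
    snippets = " ".join(CORPUS[s] for s in sources)
    citation = ", ".join(f"[{s}]" for s in sources)
    return f"{snippets} (sources: {citation})"

print(answer_with_citations("how does retrieval-augmented generation ground answers"))
```

The citation trail is the point: a reader can check each bracketed source against the retrieved text, which is the auditability Gomez attributes to reasoning and RAG architectures.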
The university degree is a bundle – a convenient social contract – and AI may force us to unbundle its components (knowledge, credential, rite of passage).
He challenged the entrenched notion that a degree must remain a monolithic credential, opening space to reconsider higher‑education economics in the AI age.
This provoked a reflective response from Debbie about the broader role of universities beyond knowledge delivery, and set up the final segment comparing online platforms to elite colleges, influencing the audience’s perception of future education models.
Speaker: Hugo Sarazen
If we rely on black‑box AI that gives answers without reasoning, we lose agency and the ability to validate decisions – we must preserve human critical thinking.
He warned of societal risks of over‑reliance on opaque systems, emphasizing the need for human oversight and critical questioning.
This comment reinforced the earlier theme of critical thinking, leading the panel to stress teaching question‑asking skills, and it resonated with audience concerns about testing and trust, culminating in the discussion of explainability tools.
Speaker: Hugo Sarazen
Testing without the calculator is the gold‑standard; yet using the LLM is itself a skill that should be assessed with the tool in the loop.
He nuanced the earlier stance on tool‑free testing by recognizing AI proficiency as a legitimate competency, bridging the gap between pure knowledge assessment and AI‑augmented skill evaluation.
This nuanced view broadened the conversation about assessment strategies, influencing the audience’s follow‑up question on detecting AI‑generated work and prompting Aidan to discuss detection methods and the need for better testing frameworks.
Speaker: Aidan Gomez
Overall Assessment

The discussion’s trajectory was shaped by a handful of pivotal remarks that reframed the problem space. Hugo’s attention‑poverty observation and his reference to the Bloom two‑sigma study introduced human‑centric constraints and a concrete AI solution, steering the dialogue toward personalization. Aidan’s warnings about false mastery and his exposition of reasoning‑enabled, retrieval‑augmented models supplied both a problem statement and a technical answer, deepening the analysis of trust and explainability. Hugo’s challenge to the traditional degree bundle and his caution about losing agency to black‑box AI broadened the conversation from pedagogy to societal structures. Together, these comments sparked new sub‑topics (assessment design, AI‑driven tutoring, credential unbundling, explainability) and prompted participants and the audience to reconsider assumptions, thereby elevating the discussion from a surface‑level inventory of scarce resources to a nuanced exploration of how AI reshapes learning, evaluation, and the very purpose of higher education.

Follow-up Questions
How can AI-enabled technology help with learner motivation, especially when there is no human teacher in the loop?
Addresses the challenge of sustaining motivation in AI‑driven, largely automated learning environments, which is crucial for effective skill acquisition.
Speaker: Anna Van Niels (Audience)
What is the role of AI in physical classrooms, and how should we address arguments for banning versus not banning AI?
Seeks guidance on policy and pedagogical integration of AI in traditional school settings, a pressing issue given recent regulatory actions.
Speaker: Nathaniel (Audience)
How can we create applied knowledge resources that enable people across all job types to earn a livelihood?
Targets the mismatch between academic offerings and labor‑market needs, emphasizing the need for practical, employable knowledge for diverse occupations.
Speaker: Pranjal Sharma (Audience)
How can universities teach students to increase critical thinking to fact‑check, logically verify, scientifically evaluate, and ethically assess instant AI answers?
Highlights the risk of over‑reliance on AI outputs and the necessity of embedding robust critical‑thinking skills in higher education curricula.
Speaker: Debbie Prentice
How can we train junior employees to develop critical thinking about AI outputs, given senior staff can judge but juniors cannot?
Points to a future workforce gap where younger professionals may lack the experience to evaluate AI, underscoring the need for systematic upskilling.
Speaker: Audience member (unnamed)
What happens in a world of AI polymaths that provide answers without showing work or explaining reasoning, and how do we preserve expertise and authority?
Raises concerns about transparency, trust, and the erosion of expert authority when AI delivers unexplainable answers.
Speaker: Debbie Prentice
What is driving the need for reasoning capabilities in AI models?
Seeks to understand the motivations behind developing reasoning‑oriented models, which impacts model design and user trust.
Speaker: Debbie Prentice
What are the gaps between online education platforms (e.g., Udemy) and accredited elite colleges, and is there market demand for online models that emulate a traditional college experience?
Aims to identify functional differences and potential demand for hybrid or unbundled education models, informing strategic direction for both sectors.
Speaker: Audience member (unnamed)
Research on explainability of AI models to build trust and understand reasoning processes
Explainability is essential for users to validate AI outputs, ensure accountability, and maintain confidence in AI‑augmented learning.
Speaker: Hugo Sarazen
Develop methods to measure ROI of learning interventions and real‑time skill deployment in enterprises
Current tools lack clear ROI metrics; robust measurement is needed for organizations to justify and optimize learning investments.
Speaker: Hugo Sarazen
Create more reliable detection tools for AI‑generated text and improve accuracy of AI‑detectors
Existing detectors have high false‑positive/negative rates, hindering academic integrity and trust in AI‑generated content.
Speaker: Aidan Gomez
Establish teaching benchmarks for AI models to assess their effectiveness in educational tasks
A lack of standardized benchmarks makes it difficult to evaluate and compare AI tutoring systems, limiting their adoption.
Speaker: Aidan Gomez
Identify specific labor‑market skills needed to guide AI‑driven reskilling programs
Aligning AI‑enabled training with actual skill demand is critical to address the mismatch between education outputs and employer needs.
Speaker: Aidan Gomez
Develop evaluation methods for graduates’ critical‑thinking levels as they enter the workforce
Measuring critical‑thinking outcomes is necessary to ensure that education translates into effective workplace performance.
Speaker: Hugo Sarazen, Debbie Prentice
Research specialized, trusted AI models trained on curated expert data (RAG) for reliable answers
Specialized, retrieval‑augmented models could improve answer accuracy and trustworthiness compared to generic LLMs.
Speaker: Hugo Sarazen, Aidan Gomez
Investigate the impact of AI on attention spans and how personalized AI can mitigate distraction
Understanding and counteracting attention fragmentation is vital for effective learning in an AI‑rich environment.
Speaker: Aidan Gomez, Hugo Sarazen
Study the effectiveness of AI role‑play simulations for skill acquisition and learner motivation
AI‑driven simulations could provide immersive, feedback‑rich practice, but empirical evidence of their impact is needed.
Speaker: Hugo Sarazen
Explore ethical implications of reliance on AI without human agency, especially regarding decision‑making and trust
Ensuring human oversight and agency is essential to prevent over‑dependence on opaque AI systems.
Speaker: Hugo Sarazen

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc

Session at a glanceSummary, keypoints, and speakers overview

Summary

Jeetu Patel opened the AI summit by congratulating the organizers and the Indian government for hosting a massive event and noting that AI development is moving faster than ever, now entering its second phase of autonomous agents and soon a third phase of physical AI [3-5][8-13]. He argued that AI has already transformed software creation, citing Cisco’s first product built entirely by AI without human-written code, which he says will turn the innovation curve from exponential to vertical and requires AI to be present in every loop rather than humans [14-16]. Patel identified three fundamental constraints that could hinder AI progress: insufficient infrastructure, a lack of contextual information for agents, and a deficit of trust in AI systems [18-23][26-33]. The infrastructure constraint stems from global shortages of power, compute, network bandwidth, and data-center capacity, which he describes as “oxygen for AI” and warns will limit AI’s potential if not addressed [20-24]. He further explained that as AI shifts from chat-bot style spikes to continuously running agents, the demand for steady-state compute will change the required infrastructure profile, necessitating new “token generation factories” designed for this behavior [38-41]. The context gap, he said, arises because agents lack the trillions of tokens of real-world context humans process, leading to poor decisions when operating without sufficient information [27-30][42-46]. To close this gap, Patel proposed enriching models with proprietary enterprise data and the growing volume of machine-generated time-series data, and redesigning workflows so that agents, not humans, adapt to the processes [51-66][67-72]. The trust deficit, according to Patel, is no longer about incorrect answers but about agents taking harmful actions, requiring protection against jailbreaks, prompt-injection, and data-poisoning as well as runtime guardrails that can intervene dynamically [74-83]. 
He claimed that Cisco is developing solutions across all three areas by building networks for agents, creating context-rich environments, and implementing security and observability throughout the stack [84-91]. Patel highlighted India’s strategic advantage for AI, noting its large pool of young talent, robust digital identity and payment infrastructure, and massive scale that provides abundant data for training models [98-103]. He emphasized that this combination positions India to not only adopt AI but also shape its global direction, and expressed optimism that collaborative, secure AI deployment can address humanity’s biggest challenges [94-105][106]. The discussion concluded with a call for an ecosystem approach to keep AI safe and secure, underscoring that progress will be measured by each nation’s ability to generate tokens safely, securely, and efficiently [36][105]. Overall, Patel’s remarks framed the urgent need to overcome infrastructure, context, and trust hurdles while leveraging India’s strengths to ensure AI’s responsible and transformative impact worldwide [37][107].


Keypoints

Major discussion points


Rapid evolution of AI and its societal impact – Patel describes moving from “intelligent chatbots” to autonomous agents and soon “physical AI,” fundamentally re-imagining work across many dimensions [8-13].


Three fundamental constraints on AI progress – He identifies (1) infrastructure limits (power, compute, bandwidth, data-center capacity) [18-26]; (2) a “context gap” where agents lack the rich, real-time information humans use to make decisions [26-33]; and (3) a “trust deficit” that hinders adoption unless safety and security are built in [31-35].


Mind-set shift: AI in every loop, not humans in every loop – Patel argues that AI must become an “augmented teammate” embedded in every workflow, flipping the traditional “human-in-the-loop” model [14-16].


Cisco’s integrated approach to address the three constraints – The company is developing (a) AI-ready network infrastructure, (b) context-enriched data pipelines (including enterprise and machine data), and (c) runtime security and observability to protect both agents and users [84-91].


India’s strategic advantage and partnership opportunity – Patel highlights India’s large talent pool, strong digital foundations (Aadhaar, UPI), and massive scale of data as key assets for global AI leadership, and pledges collaboration with the nation [92-105].


Overall purpose / goal


The discussion aims to map the current AI landscape, pinpoint the critical barriers (infrastructure, context, trust), propose a new operational paradigm where AI is embedded in every loop, showcase how Cisco is building the necessary technology stack, and rally India’s unique strengths to jointly advance a safe, secure, and globally competitive AI ecosystem.


Overall tone


The tone begins enthusiastic and celebratory, applauding the summit and India’s progress [3-6]. It then shifts to a cautiously analytical stance as Patel outlines the three major constraints and the risks of inadequate context or trust [18-35]. Following this, the tone becomes solution-focused and confident, describing Cisco’s concrete initiatives and the practical steps needed [84-91]. The closing remarks return to an optimistic and collaborative tone, emphasizing partnership with India and the hopeful potential of AI when managed responsibly [98-107].


Speakers

Speaker 1


Role/Title: Event moderator/host (introducing the keynote) [S1][S3]


Area of Expertise:


Jeetu Patel


Role/Title: President and Chief Product Officer, Cisco Inc. [S4] (Representative from Cisco) [S6]


Area of Expertise: Artificial Intelligence, networking, enterprise technology


Additional speakers:


– None identified


Full session reportComprehensive analysis and detailed insights

The summit opened with a warm greeting from the MC (Speaker 1), who welcomed the audience and introduced Jeetu Patel [1-2]. Patel then congratulated Prime Minister Narendra Modi and Minister Vaishnav for delivering a spectacular AI summit that attracted roughly a quarter-million participants [3-6].


Patel framed the current moment as a rapid acceleration of artificial intelligence, describing three successive phases: the early “intelligent chatbots” that seemed magical three years ago, the present wave of autonomous AI agents that perform tasks with minimal human oversight, and an imminent third phase of “physical AI” that will fundamentally re-imagine work across dimensions never before imagined [9-13].


He added that “if you think about what AI is doing, it’s basically forcing us to rethink every assumption that we’ve had in society,” underscoring the broader societal rethink triggered by AI [9-11].


A concrete illustration of this shift is Cisco’s first product that was 100% generated and coded by AI, with no human writing a single line of code [15]. Patel emphasized that AI-first creation turns the traditional exponential innovation curve into a near-vertical line, effectively collapsing years of incremental progress into a rapid surge [14-16]. Consequently, the paradigm moves from a “human-in-the-loop” model to an “AI-in-the-loop” approach, where AI acts as an augmented teammate in every workflow [14-16].


Patel identified three fundamental constraints that could impede AI’s continued progress.


Infrastructure constraint – The world lacks sufficient power, compute, network bandwidth, memory, and data-center capacity, which he described as “oxygen for AI” [20-24]. The shift from spiky chatbot-style inference workloads to continuously operating AI agents creates a steady-state demand for compute [39-41]. This requires new “token-generation factories” – large-scale compute facilities that generate the AI inference tokens needed for continuous agent operation – and AI-ready network designs capable of supporting persistent inference loads [38-41].


Context gap – Human cognition processes trillions of tokens of contextual information each second, a richness that current AI agents lack, leading to sub-optimal decisions [27-30]. Patel proposed three complementary actions to close this gap: (i) enrich AI models with proprietary, non-public enterprise data to turn internal intellectual property into a competitive differentiator [51-58]; (ii) feed agents large volumes of machine-generated time-series data such as weather feeds, sensor logs, and system metrics, which are projected to constitute over half of future data growth [61-71]; and (iii) redesign workflows so that AI agents are embedded in every process, adjusting the processes to the agents rather than expecting agents to fit legacy workflows [67-72].
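
The first two of these actions, enriching agents with proprietary enterprise data and recent machine-generated telemetry, can be illustrated with a short sketch. Everything here is a hypothetical construction for illustration (the document names, sensor log, thresholds, and decision rule are all assumptions, not anything from Cisco): the point is simply that an agent's decision quality depends on the context assembled for it before it acts.

```python
# Illustrative sketch of "context enrichment": before an agent decides,
# fold proprietary enterprise records and a recent slice of machine-
# generated time-series data into its working context.
# All names, values, and rules below are assumptions for illustration.
from datetime import datetime, timedelta, timezone

ENTERPRISE_DOCS = {"maintenance_policy": "Throttle fans above 80C."}

SENSOR_LOG = [  # (timestamp, metric, value) - hypothetical telemetry
    (datetime.now(timezone.utc) - timedelta(minutes=30), "gpu_temp_c", 71),
    (datetime.now(timezone.utc) - timedelta(minutes=1), "gpu_temp_c", 84),
]

def build_context(window_minutes: int = 10) -> dict:
    """Combine enterprise data with only the fresh slice of the
    time-series log, so the agent acts on current, not stale, readings."""
    cutoff = datetime.now(timezone.utc) - timedelta(minutes=window_minutes)
    recent = {metric: value for ts, metric, value in SENSOR_LOG if ts >= cutoff}
    return {"enterprise": ENTERPRISE_DOCS, "telemetry": recent}

def decide(context: dict) -> str:
    """Toy decision rule: apply the policy only when telemetry supports it."""
    if context["telemetry"].get("gpu_temp_c", 0) > 80:
        return "throttle_fans"
    return "no_action"

print(decide(build_context()))
```

With the stale 30-minute-old reading filtered out, the agent sees only the recent 84C sample and acts on the policy; widen or narrow the window and the same agent reaches a different decision, which is the context gap in miniature.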


Trust deficit – The primary risk has shifted from providing wrong answers to taking harmful actions [74-76]. Patel called for two layers of protection: (i) safeguarding AI agents from external attacks such as jail-breaking, prompt-injection, tool abuse, and data-poisoning [79-80]; and (ii) protecting the broader world from rogue agent behaviour by deploying dynamic, runtime guardrails that can intervene in real time, moving governance from static documents to live enforcement [81-84].
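
A runtime guardrail of the kind described, a live check that intervenes before an agent's proposed action executes, can be sketched as a simple policy gate. The tool allow-list, the action format, and the injection patterns below are invented for illustration and are not a description of any Cisco product; real guardrails would combine many such checks with model-based classifiers.

```python
# Minimal runtime guardrail sketch for agent actions.
# Policy rules, action schema, and patterns are illustrative assumptions.
import re

# Tools the agent may invoke, and phrases suggestive of prompt injection.
ALLOWED_TOOLS = {"search", "summarize", "send_report"}
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.IGNORECASE),
    re.compile(r"disable (the )?guardrails", re.IGNORECASE),
]

def check_action(action: dict) -> tuple[bool, str]:
    """Return (allowed, reason). Called at runtime, before the proposed
    action executes, so governance is live enforcement, not a document."""
    tool = action.get("tool")
    payload = str(action.get("payload", ""))
    if tool not in ALLOWED_TOOLS:
        return False, f"tool '{tool}' is not on the allow-list"
    for pattern in INJECTION_PATTERNS:
        if pattern.search(payload):
            return False, "payload matches a prompt-injection pattern"
    return True, "ok"

print(check_action({"tool": "summarize", "payload": "weekly sensor metrics"}))
print(check_action({"tool": "delete_database", "payload": "all records"}))
```

The two layers Patel names map onto the two checks: the injection patterns protect the agent from hostile inputs, while the tool allow-list protects the outside world from the agent.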


Cisco is positioning itself to address all three constraints simultaneously. The company is building AI-ready network infrastructure on which AI agents can run, creating context-rich data pipelines that integrate both enterprise and machine data, and embedding security controls with pervasive observability (from GPU utilisation to model performance and agent actions) across the entire stack [84-91][144-152]. This integrated stack is intended to enable every nation and enterprise to safely, securely, and efficiently generate AI tokens, a metric Patel identified as the new benchmark of global competitiveness [36-37].


Patel highlighted why India is uniquely poised to lead in this AI era. The nation boasts a massive, youthful talent pool, one of the world’s largest cohorts of people under 30 [98-100]; a robust digital foundation exemplified by Aadhaar and UPI that provides common identity and payment infrastructure at scale [101-102]; and an enormous population that supplies the data volume AI systems thrive on [103-105]. These assets enable India not only to adopt AI but also to shape its global direction [94-96][153-159].


In concluding remarks, Patel expressed strong optimism that, if humans can confidently delegate tasks to AI within a safe and secure framework, the technology could help solve humanity’s most pressing challenges (disease, poverty, and education) while urging the community to band together as an ecosystem to keep AI trustworthy [170-176][106-108]. He thanked the audience, reaffirmed Cisco’s partnership with India, and closed the session on a hopeful note [177-179].


Session transcriptComplete transcript of the session
Speaker 1

works without resilient, secure infrastructure is both timely and essential. Ladies and gentlemen, please welcome Mr. Jeetu Patel.

Jeetu Patel

Namaste. I feel very happy to see India’s progress. So firstly, congratulations to all of you for hosting one of the most spectacular AI summits that the world has ever seen, with about 250,000 attendees. And congratulations to His Honorable Prime Minister, Mr. Narendra Modi, as well as Minister Vaishnav, for actually bringing us all together to talk about what the possibilities of the future are with AI. So what I thought I’d do is I wanted to actually walk you through where we are today and what the possibilities are and what the constraints are going to be that we need to overcome. But let me just take a step back and say that we are probably moving at a pace that is faster than we’ve ever either expected or seen before with AI.

And we are now squarely in the second phase of AI. So we started with this kind of notion of intelligent chatbots that answered questions for us that felt like magic three years ago. And now we are at this point where agents are conducting tasks and jobs for us almost fully autonomously. And we are actually soon going to go to the third phase, which is physical AI as well. And what this is going to do is fundamentally reimagine work across a multitude of dimensions and vectors that we had never even imagined before. Now, if you think about what AI is doing, it’s basically forcing us to rethink every assumption that we’ve had in society. And I think a lot of these are going to be positive, and we also need to be mindful of the downsides that might be there. But if you really think deep and long and hard, the first thing that I’d say is the modern development process for software development has completely changed and flipped at this point in time. AI is going to… in fact, at Cisco we have our first product that was 100% built and coded with AI, where there was no human writing a single line of code. What that actually has as an implication is that your exponential curve of innovation is almost going to feel like a vertical line, and we need to adjust for that. Because right now what’s happening is that the rate of change is going to accelerate, but as that acceleration is happening, what you’ll find is the absorption rate of technology is going to increase, and the absorption rate of technology is still not quite at the same level as the innovation rate of the technology itself. And so rather than having a human in every loop, which is the way that we’ve thought about it, we need to flip that model and make sure that AI is in every loop rather than thinking about a human in the loop.

And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. These are going to be augmented teammates into our society where they will be working on behalf of humans for humans to go out and conduct things that we actually need additional capacity for. So then the question to ask is, what could hold progress back for AI? And we think there are three things that could fundamentally be impediments for the progress of AI. The first one is an infrastructure constraint. And what I mean by that is there’s just not enough power, compute, and network bandwidth in the world. Now there’s not enough memory, enough capacity. There’s not enough capacity to build out the data centers.

These are massive constraints, and infrastructure is oxygen for AI. If you don't have enough infrastructure, you will not be able to fulfill and harness the full potential of AI. So infrastructure is the first big constraint that we see. The second big constraint is this notion of a context gap. Each of us, in our own lives, gathers information by taking in trillions of tokens of context every second, assessing them in our brains, and informing ourselves of what we need to do next. These agents are going to need that same level of context enrichment.

And if they don't, they will still make decisions, but they are not going to be very good decisions. So the second constraint is this fundamental context gap. And the third area is a trust deficit. If you don't trust these systems, you are never going to use them, and you will see an impediment to adoption as a result of the absence of trust. So you have to start thinking about safety and security at a fundamentally different level than before. And the new metric for success, the new metric for global competitiveness moving forward, is our ability to safely, securely, and efficiently generate tokens for the use of AI.

And every country and every company is going to be measured by its ability to safely, securely, and efficiently generate tokens, and that will directly impact its economic prosperity as well as national security. So let's go into each of these three constraints and talk about the dynamics and how we need to overcome them. On the infrastructure side, specifically with what is happening with agents, as we move from chatbots to agents the pattern of inferencing is going from very spiky compute consumption to a much more steady-state, persistent demand signal in the market.

And that will have a very different level of infrastructure requirements than what we might have seen before. We have to keep that in mind as we build out these token generation factories: they will need to accommodate the second behavior model rather than the first. Now, as you go into the context gap, imagine this. Imagine you are an ER doctor with an unresponsive patient: you have no charts on the patient, no history, and no known symptoms.

How would you be able to treat that patient? You might still be able to do it, but you are going to make a bunch of guesses. An agent without context is still going to make decisions, but those decisions might not be the kind we want that agent to make. So we have to figure out effective and efficient ways to enrich context for that agent, for the AI. In the absence of that, it is forced to guess, and those guesses are about as good as flipping a coin and hoping for heads. So how do you close that context gap? The first way: these models have been trained on human-generated data that is publicly available on the Internet.

But we are running out of publicly available human-generated data. What you are now seeing is a tremendous amount of enterprise data that is the intellectual property of these companies. Can we enrich these models with this proprietary enterprise data, for the purposes of that organization, so that it can create competitive differentiation? So the first way is this notion of connecting enterprise data to AI and agents. The second big area is enriching agents with machine data. Right now, most of the data these AI models have used is human-generated. We need to make sure that we also use machine data.

What does machine data mean? It is time-series data. All of us humans start our day by consuming machine data: we might check the weather, and that is machine data. As you have more and more agents, 55 % of the growth of data in the world is going to be machine data. As these agents work 24/7, there will be far more logs, metrics, events, and traces that will need to be consumed. So that second area, enriching agents with machine data, is going to be critical for agents operating with a sufficient level of context. And third, you have to embed AI in every workflow. You can't just say, I'm going to use this machine data,

and bolt a tool onto my broken process. You have to fundamentally rethink the process to accommodate these agents. Agents don't adjust to us; we have to adjust our processes to the agents so that they can be effective for us. So that is the second big area, the context gap. And the third area we talked about is this notion of a trust deficit. The risk with AI is no longer that it will give us the wrong answer; the risk is that it will take the wrong action. And when an AI takes the wrong action, the consequences are far graver than just giving you the wrong answer.
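The two-sided context enrichment described above, proprietary enterprise data plus machine time-series data feeding an agent before it acts, can be sketched in a few lines. Everything here (the Agent class, its method names, the toy data) is a hypothetical illustration of the idea, not any real Cisco or vendor API:

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """Toy agent whose decision quality depends on the context it holds."""
    context: dict = field(default_factory=dict)

    def enrich(self, enterprise_docs, machine_metrics):
        # Merge proprietary enterprise data and machine (time-series)
        # data into the working context the agent reasons over.
        self.context["enterprise"] = enterprise_docs
        self.context["machine"] = machine_metrics

    def decide(self, task):
        # Without context the agent still decides, but it is guessing;
        # here the guess is made explicit instead of silent.
        if not self.context:
            return f"GUESS: {task} (no context available)"
        return f"PLAN: {task} using {len(self.context)} context sources"

agent = Agent()
print(agent.decide("triage patient"))            # guessing without context
agent.enrich(["chart.pdf"], {"hr_bpm": [72, 75]})
print(agent.decide("triage patient"))            # informed decision
```

The point of the sketch is the ordering: enrichment runs before every decision, mirroring the ER-doctor analogy where acting without charts or history is a coin flip.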

So what do we have to do? There are two areas we think are going to be really important. The first is that we have to protect these agents from the world: jailbreaking an agent, prompt injection attacks, tool abuse, or data poisoning. We have to make sure we can protect the agents from that happening. And the second is that we have to protect the world from the agents, so that if an agent starts going rogue, exhibiting unintended behavior, we can provide effective guardrails at runtime. Because governance is no longer a document.

It is going to be a runtime implementation, so that as the agent is working, if you see the agent doing something that is not in the best interest of humans, you can inject guardrails dynamically, at runtime, and create a level of trust in the system. Now, it turns out that Cisco is building solutions across all three of these areas. The way we think about ourselves is that we want to invent and innovate so that the critical infrastructure for the AI era is as simple to deploy, as safe and secure, and as context-enriched as it can be.
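The "governance as a runtime implementation" idea can be sketched as a policy check wrapped around every agent action, so an unsafe action is stopped before it executes rather than flagged in a document afterwards. The deny-list, the GuardrailViolation exception, and the execute function below are illustrative assumptions, not Cisco's actual guardrail mechanism:

```python
class GuardrailViolation(Exception):
    pass

# Deny-list policy evaluated at runtime, not a static governance document.
BLOCKED_ACTIONS = {"delete_all_records", "exfiltrate_data"}

def guarded(execute):
    """Wrap an agent's execute function with a runtime policy check."""
    def wrapper(action, *args, **kwargs):
        if action in BLOCKED_ACTIONS:
            # The guardrail is injected dynamically: the unsafe action
            # is blocked before it ever reaches the real system.
            raise GuardrailViolation(f"blocked at runtime: {action}")
        return execute(action, *args, **kwargs)
    return wrapper

@guarded
def execute(action):
    return f"executed {action}"

print(execute("summarize_report"))   # allowed action passes through
try:
    execute("delete_all_records")
except GuardrailViolation as err:
    print(err)                       # rogue action intercepted at runtime
```

A real system would evaluate behavior, not just action names, but the shape is the same: the policy sits in the execution path, so it can be updated while the agent is running.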

So what are we doing? We are building networks that agents will run on. We are building context that makes it richer for these agents to operate, in a way that allows us to delegate to them safely and securely and feel good about the outcomes we are going to get. And we are going to have security that governs these agents. All of this will be done with a tremendous amount of observability and visibility at every layer of the stack, from how the GPU is utilized, to how the model is performing, to how the apps are being built, to how the agents are performing, so that we have observability from bottom to top.
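The bottom-to-top observability described here could be sketched as a single collector receiving uniform telemetry events from each layer, so GPU, model, app, and agent signals can be queried together. StackObserver and the layer and metric names are illustrative assumptions, not a real product API:

```python
import time
from collections import defaultdict

class StackObserver:
    """Collect telemetry from every layer of the AI stack in one place."""
    def __init__(self):
        self.events = defaultdict(list)

    def record(self, layer, metric, value):
        # One uniform record shape per layer: (timestamp, metric, value),
        # so telemetry from the bottom (GPU) to the top (agent) is
        # comparable and queryable in a single store.
        self.events[layer].append((time.time(), metric, value))

    def summary(self):
        # Event count per layer; a real system would aggregate values too.
        return {layer: len(evts) for layer, evts in self.events.items()}

obs = StackObserver()
obs.record("gpu", "utilization_pct", 91)
obs.record("model", "latency_ms", 120)
obs.record("app", "requests", 1)
obs.record("agent", "actions_taken", 3)
print(obs.summary())  # one event recorded at each layer of the stack
```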

Across the entire stack. Because if we can do that, and ensure that every company and every country is generating tokens in the most effective and secure way, you are going to see progress being made. Now, why is this a tremendous opportunity for India? Because India is not just going to use AI. As we have seen over the course of the past week, you are helping shape the direction of the entire world with AI. And I think there are a few key areas we should all feel very hopeful about, and a few reasons why India can be a tremendous contributor to AI for the rest of the world. Not just for India, but for the entire world.

The first is that we have a huge talent pool of young, vibrant, intelligent, educated people within India who contribute to the workforce; we have one of the largest groups of people under 30 contributing to the economy. Number two, we have a very strong digital foundation, with a common identity in Aadhaar and with UPI. These are things that in India you might take for granted, but they are very rare to come by in other countries, especially at scale. And third, India has massive scale. Why is that important? Because AI works best with scale, and AI works best when you have the most data. So the way I think about this is that we have a tremendous opportunity ahead of us. The future is not going to be built by AI alone. In fact, the future gets built when humans can confidently put AI to work and delegate jobs and tasks to AI in a way that feels safe and secure. I am as hopeful as I have ever been; however, I also feel there is a tremendous amount of risk of these things going sideways. So we as a community have to band together and work as an ecosystem to keep AI safe and secure. Because if we do that the right way, we are going to solve the hardest problems humanity has faced and reduce, and hopefully end, suffering in so many areas. We might be able to cure the hardest diseases we have not been able to overcome. We might be able to overcome poverty. We might be able to close the gaps around education so that it can be evenly distributed to people. We can improve people's quality of life. So I think there is a lot to be excited about, but those constraints need to be kept in mind.

And we are so grateful to be partnering with India in this journey ahead. So thank you all. Take care.

Related Resources
Knowledge base sources related to the discussion topics (18)
Factual Notes
Claims verified against the Diplo knowledge base (10)
Confirmed (high)

“The MC (Speaker 1) welcomed the audience and introduced Jeetu Patel.”

The transcript notes that the session introduced Jeetu Patel from Cisco, confirming his introduction as a speaker [S5].

Confirmedhigh

“Jeetu Patel is President and Chief Product Officer at Cisco Inc.”

The keynote listing identifies Jeetu Patel as President and Chief Product Officer of Cisco [S4].

Confirmed (medium)

“Patel congratulated Prime Minister Narendra Modi and Minister Vaishnav for delivering a spectacular AI summit.”

The leaders’ plenary references a Minister Vaishnav (spelled Vesnav) being thanked, confirming a minister named Vaishnav was present; the summit’s opening by a prime minister aligns with the welcome address format [S6] and [S45].

Additional Context (medium)

“Cisco’s first product was 100 % generated and coded by AI, with no human writing a single line of code.”

Cisco’s collaboration with OpenAI to embed agentic AI into enterprise software engineering shows development of AI-native code, supporting the claim of an AI-generated product, though the source does not state that it was 100 % AI-written [S57].

Additional Context (low)

“AI‑first creation turns the traditional exponential innovation curve into a near‑vertical line, collapsing years of incremental progress.”

Discussions of AI-native development treating AI as operational infrastructure illustrate a rapid acceleration of innovation, consistent with this description [S57].

Confirmed (high)

“The paradigm moves from a “human‑in‑the‑loop” model to an “AI‑in‑the‑loop” approach, where AI acts as an augmented teammate in every workflow.”

Speakers describe AI as augmented teammates and a shift from mere productivity tools to collaborative agents in workflows [S59] and [S57].

Confirmed (high)

“Infrastructure constraint – the world lacks sufficient power, compute, network bandwidth, memory, and data‑centre capacity, described as “oxygen for AI”.”

A global AI policy framework highlights infrastructure and compute limitations as key barriers to scaling AI, confirming the described constraint [S10].

Correction (low)

“Human cognition processes trillions of tokens of contextual information each second, a richness that current AI agents lack.”

The knowledge base discusses limits of language models but does not provide a quantitative figure of “trillions of tokens per second”; the specific number is not substantiated [S63].

Additional Context (low)

“Feeding agents large volumes of machine‑generated time‑series data is projected to constitute over half of future data growth.”

While the knowledge base notes challenges in data discovery and rapid data expansion, it does not give the specific projection that >50 % of future growth will be time-series data [S64].

Confirmed (high)

“The primary risk has shifted from providing wrong answers to taking harmful actions.”

Analyses of language model hallucinations and safety concerns describe a move from factual errors to potentially harmful behavior, supporting this risk shift [S63].

External Sources (64)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — -Speaker: No specific role, title, or area of expertise mentioned in the transcript And if they don’t, they’ll still ma…
S5
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Note: The transcript appears to conclude mid-sentence with the introduction of Jeetu Patel from Cisco, suggesting additi…
S7
Agents of Change AI for Government Services & Climate Resilience — Saibal Chakraborty noted that conversations have moved decisively towards end-to-end AI-led execution of business and go…
S8
From Innovation to Impact_ Bringing AI to the Public — “we are all in committed towards agent -first interfaces.”[91]. “The agent will talk to agent.”[82]. Artificial intelli…
S9
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S10
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S11
Setting the Rules_ Global AI Standards for Growth and Governance — Because I’ve been on a lot of panels this week. The fear, uncertainty, and doubt is not only just the policy gap. It’s a…
S12
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S13
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Human-in-the-lead approach rather than human-in-the-loop mentality
S14
The Intelligent Coworker: AI’s Evolution in the Workplace — And so we’re we actually built in an automatic ability to run triage and transfer the call immediately to a human. And s…
S15
Cisco to reinvent network security for the AI era — Cisco hasintroduceda major evolution in security policy management, aiming to help enterprises scale securely without in…
S16
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion highlighted that India’s opportunity in AI and semiconductors is real but time-bound. Success requires st…
S17
Building the Next Wave of AI_ Responsible Frameworks & Standards — What is interesting is India is uniquely positioned in this global AI discourse. Most global AI frameworks are designed …
S18
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S19
Press Conference: Closing the AI Access Gap — Countries need robust data strategies that include sharing frameworks and data protection measures. These strategies are…
S20
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — At Cisco, he’s leading the company’s transformation into an AI -native networking and security powerhouse. In a world ob…
S21
S22
Artificial Intelligence & Emerging Tech — Connectivity issues in developing countries for leveraging AI are also highlighted. This negative sentiment emphasizes t…
S23
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S24
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S25
Discussion Report: Sovereign AI in Defence and National Security — Infrastructure involves resilience at a critical level. And just yesterday, after the Munich drone attack that almost st…
S26
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Patel describes a rapid progression of AI from chat‑based bots to agents that can perform tasks autonomously, and antici…
S27
Comprehensive Summary: The Future of Robotics and Physical AI — The discussion revealed a field in transition, moving from experimental research to practical deployment while grappling…
S28
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — Pentland presented a future where AI agents would handle virtually every business and government process, essentially ad…
S29
Agents of Change AI for Government Services & Climate Resilience — So I’m probably going to jump on the train here. You know, what we were seeing last year was narrow agents able to solve…
S30
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S31
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration T…
S32
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — High costs of energy and commodities, combined with a lack of skilled workforce, represent fundamental obstacles to econ…
S33
Open Forum #70 the Future of DPI Unpacking the Open Source AI Model — **Judith Okonkwo** provided crucial insights into practical challenges of implementing AI technologies across different …
S34
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S35
The Intelligent Coworker: AI’s Evolution in the Workplace — And so we’re we actually built in an automatic ability to run triage and transfer the call immediately to a human. And s…
S36
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Human-in-the-lead approach rather than human-in-the-loop mentality
S37
Keynote-Julie Sweet — This distinction is philosophically profound and practically important. ‘Humans in the loop’ suggests a reactive, compli…
S38
Enterprise AI security evolves as Cisco expands AI Defense capabilities — Cisco has announced a major update to itsAI Defense platformas enterprise AI evolves from chat tools into autonomous age…
S39
Cisco to reinvent network security for the AI era — Cisco hasintroduceda major evolution in security policy management, aiming to help enterprises scale securely without in…
S40
The Global Power Shift India’s Rise in AI & Semiconductors — The discussion highlighted that India’s opportunity in AI and semiconductors is real but time-bound. Success requires st…
S41
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S42
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — India’s advantages in this transformation include demographic energy, linguistic complexity, cultural depth spanning tho…
S43
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Julie Sweet from Accenture highlighted another crucial advantage: India’s human capital. With over 350,000 employees in …
S44
https://dig.watch/event/india-ai-impact-summit-2026/partnering-on-american-ai-exports-powering-the-future-india-ai-impact-summit-2026 — Thank you. …
S45
Welcome Address — Friends, the leader who made this, who gave this vision to the world, I now invite the Honorable Prime Minister for your…
S46
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Sovereignty does not mean solitude. We must work together. But it does mean that we have to work with like -minded count…
S47
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S48
UNSC meeting: Artificial intelligence, peace and security — The impression is created that artificial intelligence as a technology is at its early stages of development
S49
9821st meeting — Ecuador:Mr. President, I thank the United States for convening this important meeting. I also thank the Secretary Genera…
S50
Digital Humanism: People first! — Pavan Duggal: Okay. Thank you for giving this opportunity. Today we are actually undergoing a new revolution. This is an…
S51
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — Siva Prasad Rambhatia:Okay, okay. So what is important for us is generally any technology. Technology discriminates betw…
S52
From Technical Safety to Societal Impact Rethinking AI Governanc — “It’s not going to come automatically.”[3]. “We’re all going to have to insist.”[4]. “I think that’s a great place to en…
S53
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — This presentation is structured as a single, extended keynote rather than a traditional discussion, but Hunter-Torricke’…
S54
AI for Social Empowerment_ Driving Change and Inclusion — “But actually, one of the bigger issues that’s happening is rethinking how to work and ways of working and the disruptio…
S55
WS #271 Data Agency Scaling Next Gen Digital Economy Infrastructure — This fundamentally inverts the traditional tech development paradigm. Instead of expecting users to adapt to technology,…
S56
“Re” Generative AI: Using Artificial and Human Intelligence in tandem for innovation — Joyce Benza:Thank you, Moira. Good morning, everyone. I’m coming from the practical side where I’ve applied AI in the fi…
S57
Cisco and OpenAI push AI-native software development — Cisco hasdeepened its collaborationwith OpenAI to embed agentic AI into enterprise software engineering. The approach re…
S58
Keynote-Bejul Somaia — “In 2008, a small number of entrepreneurs and investors in India looked at a world with very limited internet penetratio…
S59
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-jeetu-patel-president-and-chief-product-officer-cisco-inc — And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. …
S60
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Simon Chesterman:Sure, thanks so much, and again it’s great to be part of this conversation. I think as Carlos said earl…
S61
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S62
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Palsule argues that traditional HR policies designed for humans alone are inadequate for the new reality where work outp…
S63
When language models fabricate truth: AI hallucinations and the limits of trust — AI has come far from rule-based systems and chatbots with preset answers.Large language models (LLMs), powered by vast a…
S64
Networking Session #50 AI and Environment: Sustainable Development | IGF 2023 — Despite the promising opportunities AI and data science offer, there are challenges that need to be addressed for their …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Jeetu Patel
12 arguments · 180 words per minute · 2540 words · 845 seconds
Argument 1
AI entering second phase with autonomous agents (Jeetu Patel)
EXPLANATION
Patel states that AI has moved beyond simple chatbots into a second phase where autonomous agents perform tasks and jobs with little human intervention. This marks a rapid acceleration in AI capabilities compared to earlier expectations.
EVIDENCE
He notes that we are “now squarely in the second phase of AI” and describes the transition from chatbots to agents that conduct tasks almost fully autonomously, citing the shift from magic-like chatbots three years ago to today’s autonomous agents [9-11].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel’s description of a rapid progression from chat-based bots to autonomous agents is confirmed in the keynote, which notes the shift to agents that can perform tasks autonomously [S4].
MAJOR DISCUSSION POINT
Second phase of AI with autonomous agents
Argument 2
Upcoming third phase: physical AI reshaping work across dimensions (Jeetu Patel)
EXPLANATION
Patel predicts an imminent third phase of AI that will involve physical embodiments of intelligence, fundamentally changing how work is performed across many sectors. This physical AI will expand the impact of AI beyond software into tangible operations.
EVIDENCE
He says “we are actually soon going to go to the third phase, which is physical AI as well” and explains that this will “fundamentally reimagine work across a multitude of dimensions and vectors” that were previously unimaginable [12-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same keynote highlights an imminent third phase of “physical AI” that will fundamentally reshape work across industries [S4].
MAJOR DISCUSSION POINT
Third phase – physical AI
Argument 3
Software development now AI‑first: code can be generated entirely by AI (Jeetu Patel)
EXPLANATION
Patel highlights that AI has become the primary driver of software creation, with Cisco’s first product being built and coded 100 % by AI without any human‑written code. This signals a shift where AI accelerates innovation at a near‑vertical rate.
EVIDENCE
He describes Cisco’s product that “was 100 % built and coded with AI where there was no human writing a single line of code,” emphasizing the resulting exponential innovation curve and the need to place AI in every loop rather than a human in the loop [14].
MAJOR DISCUSSION POINT
AI‑first software development
Argument 4
Infrastructure constraint: insufficient power, compute, bandwidth, and memory (Jeetu Patel)
EXPLANATION
Patel identifies a fundamental bottleneck for AI progress: the world lacks enough power, compute capacity, network bandwidth, and memory to support large‑scale AI models and data centers. He likens infrastructure to oxygen for AI, without which its potential cannot be realized.
EVIDENCE
He lists the specific shortages: “not enough power, compute, and network bandwidth,” “not enough memory, enough capacity,” and “not enough capacity to build out the data centers,” describing these as massive constraints and calling infrastructure “oxygen for AI” [19-24].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel identifies global shortages of power, compute, network bandwidth, memory and data-center capacity as a core limitation for AI, echoed in the source’s summary of infrastructure constraints [S4].
MAJOR DISCUSSION POINT
Infrastructure as a limiting factor
AGREED WITH
Speaker 1
Argument 5
Context gap: agents need richer human and machine context to make good decisions (Jeetu Patel)
EXPLANATION
Patel argues that AI agents must have access to the same depth of contextual information that humans process, otherwise their decisions will be poor or random. He stresses the need to enrich agents with both human‑generated and machine‑generated data.
EVIDENCE
He defines the “fundamental context gap” by explaining that humans process trillions of tokens of context each second, and agents lacking this will make sub-optimal decisions [26-31]. He illustrates the problem with an ER-doctor scenario where lack of patient history forces guesswork, showing that agents without context are akin to flipping a coin [42-50].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote explicitly calls out a “fundamental context gap” that hampers agent decision quality, supporting Patel’s claim [S4].
MAJOR DISCUSSION POINT
Need for richer context in AI agents
Argument 6
Trust deficit: lack of safety, security, and runtime guardrails hampers adoption (Jeetu Patel)
EXPLANATION
Patel points out that without trust—ensured through safety measures, security protections, and dynamic guardrails—organizations and societies will resist adopting AI. The risk shifts from wrong answers to potentially harmful actions.
EVIDENCE
He notes that “if you don’t trust these systems, you’re never going to be able to use them” and highlights the need for safety, security, and runtime guardrails, describing how AI can be compromised by jailbreaks, prompt-injection, and data poisoning, and that guardrails must be injected dynamically at runtime [31-36][74-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel discusses a “trust deficit” and the need for dynamic runtime guardrails to ensure safety and security, as detailed in the source [S4].
MAJOR DISCUSSION POINT
Trust and safety as adoption barriers
Argument 7
Deploy AI‑ready network infrastructure and token‑generation factories (Jeetu Patel)
EXPLANATION
Patel outlines Cisco’s strategy to build the physical and logical infrastructure needed for AI agents, including networks optimized for steady‑state inference and dedicated token‑generation factories that can meet the new demand patterns.
EVIDENCE
He describes the shift from spiky compute consumption to a steady, persistent demand as agents evolve, and stresses the need to design token-generation factories that accommodate this behavior [38-41]. Later he mentions Cisco’s work building networks for agents and observability across the stack [85-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The source mentions the need to build “token generation factories” and highlights new infrastructure requirements for steady-state AI inference workloads [S4].
MAJOR DISCUSSION POINT
Infrastructure and token factories for AI
Argument 8
Enrich agents with proprietary enterprise data and machine (time‑series) data (Jeetu Patel)
EXPLANATION
Patel proposes that AI agents should be fed with both proprietary enterprise data and machine‑generated time‑series data to close the context gap and improve decision quality. This dual enrichment leverages data that is not publicly available and the growing volume of machine data.
EVIDENCE
He explains that models have been trained on publicly available human-generated data, but now enterprises have valuable proprietary data to add, and that 55% of future data growth will be machine data such as logs, metrics, and traces, which agents must consume [51-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel notes the depletion of publicly available human-generated data and the rise of enterprise-specific data as a critical source for AI agents, as reflected in the transcript [S4].
MAJOR DISCUSSION POINT
Data enrichment for agents
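The "dual enrichment" idea above can be illustrated with a short sketch. This is purely hypothetical code (the function and field names are invented for illustration, not drawn from Patel's keynote or any Cisco product): it fuses a proprietary enterprise record with a summary of machine-generated time-series data before handing context to an agent.

```python
# Hypothetical sketch of dual enrichment: combining proprietary enterprise
# records with machine-generated time-series data (logs/metrics) into the
# context passed to an agent. All names and structures are illustrative.

from statistics import mean

def build_agent_context(enterprise_record: dict, metrics: list) -> dict:
    """Fuse a private enterprise record with a summary of recent machine data."""
    values = [m["value"] for m in metrics]
    return {
        # Proprietary data that public models were never trained on.
        "customer": enterprise_record,
        # Time-series machine data, summarised so it fits a prompt budget.
        "machine_summary": {
            "samples": len(values),
            "mean": mean(values),
            "latest": values[-1],
        },
    }

record = {"account": "ACME-42", "tier": "enterprise", "open_tickets": 3}
cpu = [{"ts": t, "value": v} for t, v in enumerate([0.61, 0.72, 0.88])]
ctx = build_agent_context(record, cpu)
print(ctx["machine_summary"]["latest"])  # 0.88
```

In practice the summarisation step matters most: raw logs, metrics, and traces arrive at far higher volume than an agent's context window can hold, so some aggregation layer of this kind is implied by the argument.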
Argument 9
Embed AI into workflows and implement dynamic runtime guardrails for security (Jeetu Patel)
EXPLANATION
Patel stresses the need to redesign business processes so that AI agents are integral to workflows, and to deploy runtime guardrails that can intervene when agents behave unexpectedly, thereby building trust and security.
EVIDENCE
He argues that agents cannot adjust to us; instead, processes must be adjusted to agents, and that guardrails must be applied at runtime rather than as static documents, enabling dynamic protection against rogue behavior [67-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote stresses redesigning workflows for AI agents and implementing runtime guardrails, aligning with Patel’s recommendation [S4].
MAJOR DISCUSSION POINT
Workflow integration and runtime guardrails
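The distinction Patel draws between static governance documents and dynamic runtime guardrails can be made concrete with a minimal sketch. The following is an assumption-laden illustration, not Cisco's implementation: guardrail predicates are registered and evaluated at the moment an agent proposes an action, and can be added or swapped while the agent is running.

```python
# Illustrative only (hypothetical names, not a real product API): a runtime
# guardrail wrapper that checks each proposed agent action against a set of
# policies evaluated dynamically, rather than fixed in a static document.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class AgentAction:
    tool: str                      # e.g. "send_email", "delete_records"
    payload: dict = field(default_factory=dict)

# A guardrail is a predicate over actions; returning False blocks the action.
Guardrail = Callable[[AgentAction], bool]

@dataclass
class RuntimeGuard:
    guardrails: list = field(default_factory=list)

    def register(self, rail: Guardrail) -> None:
        # Rails can be registered or replaced while the agent runs,
        # which is what makes the protection "dynamic".
        self.guardrails.append(rail)

    def authorize(self, action: AgentAction) -> bool:
        return all(rail(action) for rail in self.guardrails)

# Example rails: block destructive tools and a crude prompt-injection marker.
def no_destructive_tools(action: AgentAction) -> bool:
    return action.tool not in {"delete_records", "wire_transfer"}

def no_injected_overrides(action: AgentAction) -> bool:
    return "ignore previous instructions" not in str(action.payload).lower()

guard = RuntimeGuard()
guard.register(no_destructive_tools)
guard.register(no_injected_overrides)

print(guard.authorize(AgentAction("send_email", {"body": "weekly report"})))  # True
print(guard.authorize(AgentAction("delete_records", {"table": "users"})))     # False
```

Real deployments would need far richer detection for jailbreaks, prompt injection, and data poisoning than a substring check, but the structural point stands: the checks sit in the execution path, not in a policy PDF.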
Argument 10
Vast, youthful talent pool fuels AI innovation (Jeetu Patel)
EXPLANATION
Patel highlights India’s demographic advantage: a large, young, educated workforce that can drive AI research, development, and implementation, positioning the country as a global AI leader.
EVIDENCE
He notes that India has “a huge talent pool of young, vibrant, intelligent, smart, educated people” and “one of the largest groups of people under 30” contributing to the economy [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel’s claim about India’s large, young, educated workforce is corroborated by the source’s reference to a “huge talent pool of young, educated people under 30” [S4].
MAJOR DISCUSSION POINT
India’s talent advantage
Argument 11
Strong digital foundations (Aadhaar, UPI) enable scalable AI deployment (Jeetu Patel)
EXPLANATION
Patel points out that India’s existing digital infrastructure—national ID (Aadhaar) and digital payments (UPI)—provides a ready-made platform for scaling AI solutions across the population, reducing friction for adoption.
EVIDENCE
He cites India’s “very strong digital foundation, having common identity with Aadhaar, having UPI” as rare at scale in other countries [100-102].
MAJOR DISCUSSION POINT
Digital infrastructure as AI enabler
Argument 12
Massive population provides the data scale essential for AI performance (Jeetu Patel)
EXPLANATION
Patel argues that India’s sheer population size offers the massive data volumes AI systems need to train and improve, making the country uniquely positioned to benefit from AI’s data‑driven nature.
EVIDENCE
He states that India “has massive, massive scale” and explains that “AI works best when you have the most amount of data,” linking scale directly to AI effectiveness [102-105].
MAJOR DISCUSSION POINT
Population scale as data advantage
Agreements
Agreement Points
Both speakers stress that resilient, secure infrastructure is essential for AI progress.
Speakers: Speaker 1, Jeetu Patel
Infrastructure constraint: insufficient power, compute, bandwidth, and memory (Jeetu Patel)
Speaker 1 opens by noting that work on resilient, secure infrastructure is essential [1], and Patel later describes infrastructure as “oxygen for AI” and lists shortages of power, compute, bandwidth, memory and data-center capacity as the first major constraint [19-24].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors authoritative statements from industry leaders that AI systems depend on resilient, secure infrastructure, as highlighted by Cisco executives at the India AI Impact Summit 2026 and in a keynote on trusted AI at scale [S20][S21]. It also aligns with policy discussions framing AI as critical infrastructure that requires resilience, data control, and secure compute to be trustworthy [S24].
Similar Viewpoints
Patel repeatedly emphasizes that AI’s next wave requires new, AI‑first infrastructure – from sufficient power/compute to steady‑state token‑generation factories and observable networks – to meet the shift from spiky to persistent inference demand [19-24][38-41][85-91].
Speakers: Jeetu Patel
Infrastructure constraint: insufficient power, compute, bandwidth, and memory (Jeetu Patel) Deploy AI‑ready network infrastructure and token‑generation factories (Jeetu Patel)
Patel highlights a fundamental "context gap" where agents lack the trillions of tokens humans process, and proposes closing it by feeding agents proprietary enterprise data and the growing volume of machine‑generated time‑series data [26-31][42-50][51-66].
Speakers: Jeetu Patel
Context gap: agents need richer human and machine context to make good decisions (Jeetu Patel) Enrich agents with proprietary enterprise data and machine (time‑series) data (Jeetu Patel)
Patel argues that without trust—ensured through safety, security, and dynamic runtime guardrails—AI adoption stalls, and calls for redesigning workflows and embedding guardrails that protect both agents and the world in real time [31-36][74-84].
Speakers: Jeetu Patel
Trust deficit: lack of safety, security, and runtime guardrails hampers adoption (Jeetu Patel) Embed AI into workflows and implement dynamic runtime guardrails for security (Jeetu Patel)
Unexpected Consensus
Critical role of resilient, secure infrastructure for AI
Speakers: Speaker 1, Jeetu Patel
Infrastructure constraint: insufficient power, compute, bandwidth, and memory (Jeetu Patel)
While Speaker 1 only delivers a brief opening line about the necessity of resilient, secure infrastructure [1], Patel expands this into a detailed argument about global infrastructure shortages as the primary bottleneck for AI [19-24]. The alignment between a terse introductory remark and a comprehensive technical argument was not anticipated given the limited content from Speaker 1.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on infrastructure resilience is echoed in broader policy contexts, including calls for robust data strategies and secure data sharing frameworks to enable responsible AI use [S19], and analyses of connectivity challenges in developing regions that stress infrastructure as a prerequisite for AI adoption [S22]. Additionally, AI is identified as a component of critical infrastructure that must be safeguarded through resilient and secure systems [S23][S25].
Overall Assessment

The discussion shows strong internal coherence in Patel’s presentation: he consistently links infrastructure, context, and trust as three inter‑related constraints on AI, and proposes concrete infrastructure, data‑enrichment, and governance measures. The only cross‑speaker agreement is on the importance of resilient, secure infrastructure, echoing the opening remark of Speaker 1.

High consensus on the three constraint pillars (infrastructure, context, trust) within Patel’s arguments, and moderate consensus across speakers limited to the infrastructure theme. This suggests a unified vision for AI advancement that hinges on building robust, secure, and context‑rich ecosystems.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows strong alignment between the two speakers on the critical role of resilient and secure infrastructure for AI development. No substantive disagreements were evident; the only divergence is the level of detail, with Patel providing a comprehensive analysis of infrastructure, context, and trust constraints, while Speaker 1 offers a brief introductory endorsement.

Minimal – the speakers are largely in agreement, indicating a cohesive perspective on infrastructure needs, which bodes well for coordinated action on AI development and deployment.

Partial Agreements
Both speakers stress that robust and secure infrastructure is essential for AI progress. Speaker 1 describes work on resilient, secure infrastructure as “both timely and essential” [1], while Patel later describes infrastructure shortages as the primary bottleneck for AI and likens infrastructure to oxygen for AI [19-24]. Their viewpoints converge on the goal of strengthening infrastructure, though Patel focuses on global capacity constraints whereas Speaker 1 frames it as a timely priority.
Speakers: Speaker 1, Jeetu Patel
Speaker 1 emphasizes the need for resilient, secure infrastructure (Speaker 1) Patel identifies infrastructure as a fundamental constraint and calls it “oxygen for AI” (Jeetu Patel)
Takeaways
Key takeaways
AI is moving from the chatbot era to an autonomous-agent era (second phase) and will soon enter a physical-AI third phase that will fundamentally reshape work.
Software development has become AI-first; Cisco has demonstrated a product built entirely by AI, indicating a rapid acceleration of innovation.
Three major constraints could impede AI progress: (1) infrastructure limits (power, compute, bandwidth, memory), (2) a context gap where agents lack sufficient human and machine context, and (3) a trust deficit caused by safety, security, and governance concerns.
Cisco’s proposed strategy to overcome these constraints includes deploying AI-ready network infrastructure and token-generation factories, enriching agents with proprietary enterprise data and machine (time-series) data, embedding AI into existing workflows, and implementing dynamic runtime guardrails for security and trust.
India possesses strategic advantages for the AI future: a large, youthful talent pool, strong digital foundations (Aadhaar, UPI), and massive scale that provides the data volume AI systems need.
Resolutions and action items
Cisco will continue building and deploying AI-ready network infrastructure and “token generation factories” to meet the steady-state compute demand of autonomous agents.
Cisco will develop solutions to connect proprietary enterprise data and machine-generated time-series data to AI models, thereby closing the context gap.
Cisco will embed AI into enterprise workflows and create runtime guardrails to protect both agents and users, addressing the trust deficit.
Cisco commits to partnering with India to leverage its talent, digital infrastructure, and data scale for global AI advancement.
Unresolved issues
Insufficient global infrastructure (power, compute, bandwidth, memory) remains a bottleneck; no concrete plan or timeline for scaling it was provided.
How to systematically and securely acquire and integrate large volumes of proprietary enterprise data into AI models is still an open challenge.
Effective mechanisms for real-time, dynamic guardrails and governance of AI agents need further definition and standardisation.
The broader policy, regulatory, and national-security implications of token generation and AI competitiveness were mentioned but not resolved.
Suggested compromises
Shift the paradigm from “human-in-the-loop” to “AI-in-the-loop,” treating AI as an augmented teammate rather than a mere tool.
Adjust existing business processes to accommodate AI agents instead of expecting agents to fit legacy workflows.
Balance rapid AI deployment with security by implementing runtime guardrails that can be injected dynamically, rather than relying solely on static governance documents.
Thought Provoking Comments
We are now squarely in the second phase of AI… agents are conducting tasks and jobs for us almost fully autonomously, and we are soon going to the third phase, physical AI, which will fundamentally re‑imagine work across dimensions we never imagined before.
Frames AI evolution as distinct, observable phases, moving the conversation from hype about chatbots to a concrete roadmap of autonomous agents and physical AI, highlighting the scale of upcoming disruption.
Sets the macro‑level context for the entire talk, prompting the audience to think beyond current applications and preparing them for the deeper discussion of constraints and societal impact that follows.
Speaker: Jeetu Patel
The modern development process for software has completely flipped – we now have a product that was 100 % built and coded with AI, with no human writing a single line of code. This turns the innovation curve into a vertical line and forces us to move from ‘human‑in‑the‑loop’ to ‘AI‑in‑the‑loop’.
Highlights a paradigm shift in software engineering, illustrating how AI can become the primary creator rather than a mere assistant, and introduces the need for a new mindset about responsibility and control.
Triggers a shift in tone from describing what AI can do to questioning how organizations must reorganize processes, leading directly into the three constraints (infrastructure, context, trust) that need to be addressed.
Speaker: Jeetu Patel
There are three fundamental impediments to AI progress: (1) infrastructure – insufficient power, compute, bandwidth, and memory; (2) a context gap – agents lack the rich, real‑time context humans use; (3) a trust deficit – without trust, adoption stalls.
Provides a concise, structured framework that moves the discussion from abstract optimism to concrete challenges, giving the audience clear lenses for evaluating AI initiatives.
Organizes the remainder of the talk into three focused sections, each becoming a mini‑topic that deepens the conversation and invites listeners to consider solutions in their own domains.
Speaker: Jeetu Patel
Imagine an ER doctor with no patient history or symptoms – the doctor would be forced to guess. An AI agent without sufficient context is the same; its decisions become a coin‑flip.
Uses a vivid, relatable analogy to make the abstract ‘context gap’ tangible, emphasizing the real‑world stakes of insufficient data for AI decision‑making.
Deepens audience understanding of why context matters, leading to the subsequent discussion on enriching agents with proprietary enterprise data and machine‑generated time‑series data.
Speaker: Jeetu Patel
The new metric for global competitiveness will be a country’s or company’s ability to safely, securely, and efficiently generate tokens for AI use.
Reframes economic and security competition in terms of AI token generation, linking technical capability directly to national prosperity and security—a novel way to view AI as a strategic asset.
Elevates the conversation from technical hurdles to geopolitical implications, prompting listeners to consider policy, investment, and sovereignty issues alongside engineering challenges.
Speaker: Jeetu Patel
Risk with AI is no longer a wrong answer; it is a wrong action. Therefore we must protect agents from jail‑breaking, prompt‑injection, and data‑poisoning, and also protect the world from rogue agents by injecting runtime guardrails.
Shifts the focus of AI safety from static correctness to dynamic behavior control, introducing the concept of runtime governance rather than static policy documents.
Creates a turning point toward actionable security strategies, influencing the later claim that Cisco is building solutions across these three areas and setting expectations for concrete safeguards.
Speaker: Jeetu Patel
India’s unique advantage comes from three pillars: a massive, youthful talent pool; a strong digital foundation (Aadhaar, UPI); and massive scale of data – AI works best with scale.
Connects the global AI narrative to a specific national context, turning the abstract discussion into a call to action for Indian stakeholders and highlighting how local strengths can address the earlier‑identified constraints.
Shifts the tone from a global overview to a localized opportunity, encouraging Indian participants to see themselves as key contributors to the AI future and aligning the earlier challenges with national capabilities.
Speaker: Jeetu Patel
The future will be built when humans can confidently delegate jobs to AI in a safe and secure way; we must band together as an ecosystem to keep AI safe, so we can solve humanity’s hardest problems like disease, poverty, and education.
Synthesizes the entire discussion into a hopeful, collaborative vision that balances optimism with the earlier‑stated risks, framing responsible AI as a collective mission.
Provides a concluding rallying point that reinforces the earlier themes (trust, context, infrastructure) and leaves the audience with a clear, purpose‑driven call to action.
Speaker: Jeetu Patel
Overall Assessment

Jeetu Patel’s remarks systematically moved the audience from awe at AI’s rapid evolution to a grounded appraisal of the concrete barriers (infrastructure, context, and trust) that must be overcome. Each pivotal comment introduced a new analytical lens (phases of AI, flipped development paradigm, structured constraints, vivid analogies, geopolitical metrics, runtime safety, national strengths, and a collaborative vision) that redirected the conversation, deepened its technical and strategic depth, and ultimately framed the discussion as both a challenge and an opportunity for India and the global community. These insights shaped the flow by creating clear turning points, prompting listeners to reconsider assumptions, and ending with a unifying call to collective responsibility.

Follow-up Questions
What could hold progress back for AI?
Identifying potential impediments (infrastructure, context gap, trust deficit) is crucial to address barriers to AI advancement.
Speaker: Jeetu Patel
How can we close the context gap for AI agents?
Closing the gap between human-level contextual understanding and AI agents is essential for reliable decision‑making.
Speaker: Jeetu Patel
How can enterprise and proprietary data be safely integrated to enrich AI models?
Leveraging internal data can provide competitive differentiation, but requires research on privacy, security, and data governance.
Speaker: Jeetu Patel
What are effective methods to enrich AI agents with machine (time‑series) data?
Machine data will constitute a large portion of future data streams; research is needed on ingestion, normalization, and real‑time use.
Speaker: Jeetu Patel
How should workflows be redesigned to embed AI agents rather than merely augment them?
Rethinking processes to accommodate AI agents is a research area to maximize efficiency and effectiveness.
Speaker: Jeetu Patel
What safeguards are needed to protect AI agents from jailbreaking, prompt‑injection, tool abuse, and data poisoning?
Ensuring agent integrity is vital to prevent malicious manipulation and maintain trust.
Speaker: Jeetu Patel
What runtime guardrails and governance mechanisms are required to protect the world from rogue AI agent behavior?
Dynamic, real‑time controls are needed to prevent harmful actions, a key research focus for safety.
Speaker: Jeetu Patel
What metrics should be used to measure a nation’s or company’s ability to safely, securely, and efficiently generate AI tokens?
Defining quantitative competitiveness indicators will guide policy and investment decisions.
Speaker: Jeetu Patel
What infrastructure designs best support the shift from spiky chatbot inference to steady‑state agent workloads?
Research into networking, compute, power, and bandwidth provisioning is needed to meet evolving demand patterns.
Speaker: Jeetu Patel
How can India leverage its large talent pool, digital identity (Aadhaar), UPI ecosystem, and scale to become a global AI leader?
Understanding how these unique assets translate into AI innovation and deployment requires targeted study.
Speaker: Jeetu Patel

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Collaborative AI Network – Strengthening Skills Research and Innovation


Session at a glance – Summary, keypoints, and speakers overview

Summary

The panel opened by framing AI diffusion as a potential digital public infrastructure that must first demonstrate concrete use cases before delivering value [12-13]. Saurabh Garg argued that for AI to become a trusted, interoperable DPI similar to Aadhaar or UPI, four foundational resources (compute, data sets, talent and models) need to be democratized and governed by appropriate frameworks [14-15]. He outlined four criteria for “AI-ready” data (discoverability, trustworthiness, interoperability and usability) while emphasizing privacy-preserving access and the importance of locally relevant datasets [15-18]. To coordinate this effort, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-stakeholder AI for Resilient Infrastructure) to foster shared development of these resources [23-28].


Building on that, Speaker 2 highlighted that AI, like earlier general-purpose technologies, was invented in the West but must be diffused through the Global South via 100 targeted pathways by 2030, with Kizom acting as a key partner [46-54]. He questioned how such pathways could be operationalized across sectors, noting the gap between invention and impact [39-41].


Representing the UN Development Programme, Speaker 3 described the G7 AI hub as a mechanism to unlock compute, data and talent for low-income regions, stressing the need to build business cases for local data-centers and to retain diaspora talent for co-architecting solutions for farmers and women entrepreneurs [61-68]. Janet Zhou warned that “pilotitis” hampers scale and argued that lasting impact requires governments to be involved from design through implementation, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-86]. Brazil’s Beatriz Vasconcellos illustrated a national approach that creates shared data ecosystems, standardised early-childhood and environmental datasets, and a centralized chatbot platform built on digital ID to move from pilot to transactional services [95-131].


Kizom later explained that existing digital public rails such as UPI, DigiLocker and language stacks like Bhashani enable AI services to become “invisible” and seamlessly integrated into everyday workflows, while new rails are emerging to support multilingual voice interactions [140-158]. She cited the MOSIP open-source ID platform as an example of how technical standards, operational support and financing combine to lubricate adoption of public-good infrastructure across countries [178-193]. Participants agreed that avoiding vendor lock-in and fostering modular, multi-model solutions are essential, and highlighted a jointly developed “use-case adoption framework” that maps vertical sector needs to horizontal data and compute enablers to guide the 100 diffusion pathways [231-236][240-247].


The discussion concluded that scaling AI in the Global South will depend on building trustworthy digital public infrastructure, democratizing core resources, and institutionalising collaborative, standards-based pathways that move pilots to production at population scale [14-15][75-86][240-247].


Keypoints

Major discussion points


AI must be treated as a Digital Public Infrastructure (DPI) and democratized through shared foundational resources.


Saurabh Garg emphasized that AI will only deliver value once concrete use-cases are identified and that, like Aadhaar or UPI, AI needs to become a trusted, interoperable, and shareable public good built on four core resources – data, compute, talent, and models [12-15]. He outlined the need for “AI-ready” data that is discoverable, trustworthy, interoperable and usable, and introduced the METRI platform as a modular, voluntary framework to develop these resources [16-18][23-29].


Making data “AI-ready” is a prerequisite for diffusion.


The speaker detailed four criteria for data readiness: discoverability via common metadata, quality-based trustworthiness, technical interoperability through unique identifiers, and usability enabled by international standards [15-18]. He also stressed privacy safeguards while ensuring data remains locally relevant and can drive context-specific AI solutions [17-22].
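The four readiness criteria above lend themselves to a simple automated check. The sketch below is illustrative only (the field names, threshold, and standards list are assumptions, not part of any official MOSPI or METRI specification): it scores one dataset descriptor against discoverability, trustworthiness, interoperability, and usability.

```python
# Illustrative sketch (hypothetical schema, not an official specification):
# evaluating a dataset descriptor against the four "AI-ready" criteria --
# discoverability, trustworthiness, interoperability, and usability.

REQUIRED_METADATA = {"title", "description", "publisher", "keywords"}  # discoverability

def ai_ready_report(descriptor: dict) -> dict:
    """Return a pass/fail flag per criterion for one dataset descriptor."""
    return {
        # Discoverable: common metadata fields are all present.
        "discoverable": REQUIRED_METADATA <= set(descriptor.get("metadata", {})),
        # Trustworthy: a quality assessment score above a chosen threshold.
        "trustworthy": descriptor.get("quality_score", 0.0) >= 0.8,
        # Interoperable: records carry a declared unique-identifier field.
        "interoperable": bool(descriptor.get("record_id_field")),
        # Usable: described with an internationally recognised standard.
        "usable": descriptor.get("standard") in {"DCAT", "schema.org", "SDMX"},
    }

example = {
    "metadata": {"title": "Soil moisture 2024", "description": "District-level readings",
                 "publisher": "AgriDept", "keywords": ["soil", "moisture"]},
    "quality_score": 0.92,
    "record_id_field": "plot_id",
    "standard": "DCAT",
}
print(ai_ready_report(example))  # all four criteria pass
```

A registry applying checks of this kind at publication time would operationalise the panel's point that readiness must be assessed per dataset, not assumed.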


Existing digital public rails (e.g., UPI, digital IDs, DigiLocker) are the backbone for scaling AI across sectors and borders.


Participants highlighted how public infrastructure that is invisible to users-such as payment systems, identity platforms, and emerging language stacks like “Bhashani”-provides the “rails” on which AI services (e.g., chat-bots for farmers) can be layered [140-152][155-162]. The discussion linked this to the broader vision of 100 AI diffusion pathways by 2030, stressing convergence of multiple national rails into a global ecosystem.


Transitioning from pilots to production requires institutional trust, inclusive governance, and coordinated standards.


Janet Zhou pointed out that “pilotitis” is solved when governments are at the design table, creating trustworthy, inclusive institutions that lower market entry costs for innovators [75-82][84-86]. Brazil’s experience illustrated concrete steps: building shared data ecosystems, standardizing early-childhood and environmental data, and centralising chatbot services under a national digital ID framework [95-104][115-124].


Removing friction through open-source platforms, centralized services, and capacity-building avoids vendor lock-in and builds domestic capability.


The MOSIP open-source ID platform was cited as a model for establishing technical standards, operational support, and financing that enable cross-country adoption [178-190]. Brazil’s Secretariat for Shared Services demonstrates how a single procurement channel and internal innovation units can streamline AI deployment while resisting reliance on external vendors [204-216][222-229].


Overall purpose / goal


The panel was convened to map out concrete pathways for “AI diffusion” – i.e., moving AI from isolated pilots to scalable, inclusive public services worldwide. Participants shared experiences, frameworks (METRI, use-case adoption framework), and policy ideas aimed at establishing AI as a trusted digital public infrastructure that can be leveraged across sectors and geographies by 2030.


Tone of the discussion


– The conversation began with a formal, forward-looking tone, focusing on high-level concepts such as DPI and resource democratization.


– As speakers introduced regional case studies (India, Brazil, Africa), the tone shifted to pragmatic and collaborative, acknowledging real-world constraints and the need for institutional trust.


– Towards the end, the dialogue became more candid and slightly informal, with participants noting operational hurdles, vendor-lock-in concerns, and even a light-hearted “we’ve been kicked out of the room” remark, while still maintaining an overall constructive and solution-oriented spirit.


Speakers

Speaker 1 – Role/Title: Moderator / Host; Area of Expertise: 


Saurabh Garg – Role/Title: Secretary, Ministry of Statistics and Programme Implementation (MOSPI), Government of India; Area of Expertise: AI policy, Digital Public Infrastructure, AI democratization [S12]


Speaker 2 – Role/Title: Moderator / Chair of the panel; Area of Expertise: AI diffusion, inclusive AI development [S6][S7]


Speaker 3 – Role/Title: United Nations Development Programme (UNDP) representative; Area of Expertise: AI implementation in developing regions, AI diffusion pathways [S1]


Beatriz Vasconcellos – Role/Title: Brazilian government official (AI lead); Area of Expertise: AI adoption in the public sector, digital public infrastructure in Brazil [S4]


Janet Zhou – Role/Title: AI adoption specialist; Area of Expertise: Scaling AI pilots, institutional capacity for AI [S5]


Additional speakers:


(none)


Full session report – Comprehensive analysis and detailed insights

The session opened with a brief logistical exchange – the moderator asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, thanked everyone for joining, and then invited the Secretary of India’s Ministry of Statistics and Programme Implementation (MOSPI), Mr Saurabh Garg, to deliver the keynote address [1-8].


In his opening remarks, Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use-cases are identified [12-13]. He positioned AI as a potential digital public infrastructure (DPI), comparable to Aadhaar or UPI, and asserted that for AI to become a trusted, interoperable and shareable public good it must rest on four foundational resources – compute, data sets, talent and models [14-15]. Garg then outlined a four-point rubric for “AI-ready” data: (i) discoverability through common metadata, (ii) trustworthiness via quality assessments, (iii) interoperability enabled by unique identifiers, and (iv) usability ensured by internationally aligned standards [15-18]. He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context-specific AI solutions [17-22]. To coordinate the democratisation of these resources, the AI summit’s working group proposed a voluntary, modular platform called METRI (Multi-Stakeholder AI for Resilient and Trustworthy Infrastructure) that would allow stakeholders to contribute compute, data, models and talent on a non-committal basis [23-29].


The moderator (Speaker 2) then introduced the metaphor of “diffusion pathways”, observing that, like earlier general-purpose technologies, AI was invented in the West but its impact must be realised in the Global South. She announced the ambition of 100 AI diffusion pathways by 2030 and asked how such pathways could be operationalised across sectors [39-41][46-54].


Kizom (UNDP) responded by describing the newly created G7 AI Hub as a mechanism to unlock compute, data and talent for low-income regions [61-68]. He highlighted the need to build business cases for local data-centres, to break data silos despite the abundance of data in the Global South, and to retain diaspora talent so that AI solutions can be co-architected for smallholder farmers and women entrepreneurs [62-66]. In illustrating the scale of talent networks, Kizom named “Selena from Zindi” and noted that Zindi’s community of ≈ 100 000 African data scientists functions as a public-interest infrastructure [150-158]. He also pointed to existing digital public “rails” – such as UPI, DigiLocker and emerging language stacks like Bhashani – that make AI services invisible to end-users, allowing AI-driven chat-bots for agriculture or health to ride on already trusted infrastructure [140-152][155-162]. The MOSIP open-source digital-ID platform was cited as an illustration of how technical standards, operational support and financing together “lubricate” the adoption of public-good infrastructure across countries [178-190].


The moderator then asked Janet Zhou whether the barrier for moving AI from pilot to production was merely funding and how AI could transition from prototype to scale [70-78]. Zhou answered that “pilotitis” – projects stuck at the prototype stage – predates AI and can be overcome only when governments sit at the design table from the outset. She argued that trustworthy, inclusive institutions are essential for lowering market-entry costs for innovators, and that shared infrastructure must be built on standards that create a positive feedback loop of trust and adoption [79-86].


Building on this, Zhou used the MOSIP open-source digital-ID platform as a road analogy: standards are the road surface, rules of the road govern traffic, and financing provides the fuel that keeps the road functional. She stressed that even after a “road” has been built, agreed-upon technical standards, rules of the road and financing are required to sustain adoption [178-190].


When the moderator turned to the Brazilian perspective, Beatriz Vasconcellos presented Brazil’s DPI vision of “one government for each person”. She described the creation of shared data ecosystems – thematic datasets for early-childhood and environmental domains that are standardised, interoperable and linked to a canonical citizen profile [95-110]. A centralised chatbot platform, built on the national digital-ID system (gov.br), has moved from informational pilots to transactional services, enabling citizens to complete service requests securely [115-124][125-131]. To avoid duplication and vendor lock-in, Brazil’s Secretariat for Shared Services now offers a single procurement channel for AI solutions, allowing ministries to acquire services with a simple digital transfer, thereby reducing implementation time and dependence on external vendors [211-216]. Vasconcellos also warned against over-reliance on external vendors, stressing the need to develop domestic AI capability through internal experimentation and capacity-building [222-230].


Later, the moderator asked about safe conversations in agriculture and health and whether reusable playbooks could be created. The panel highlighted the need for safety guardrails and voice-AI playbooks to ensure trustworthy interactions in high-risk domains [165-170].


Kizom (UNDP) then elaborated on the “use-case adoption framework”. She explained that the framework maps vertical sectoral impact (e.g., agriculture, health, education) to horizontal unlocks such as language localisation, compute access and AI-ready data. Co-design between governments, the private sector and civil society is required to fuse vertical needs with horizontal enablers, ensuring that each use-case can scale efficiently [210-225].


Across the discussion, several points of agreement emerged. All speakers concurred that AI must be treated as a DPI, requiring trust, interoperability and shared “rails” to enable cross-border diffusion (Garg, Kizom, Zhou) [14-15][140-152][155-158][179-190]. They also agreed on the four criteria for AI-ready data and on the necessity of standardised metadata, quality checks, unique identifiers and international classifications [15-18]. The need to overcome pilotitis through early government involvement and shared infrastructure was echoed by Zhou, Vasconcellos and the moderator [79-86][211-216][71-74]. Finally, multilingual and voice AI were identified as equalisers that broaden inclusion, a view shared by the moderator and Kizom [264-267][154-156][168-170].


Nevertheless, moderate disagreement was evident on the preferred architecture for AI DPI. Garg framed AI as a government-led DPI with strict standards, whereas the moderator advocated a multi-stakeholder, open-source, multi-model ecosystem to avoid concentration of Western large language models, and Zhou highlighted MOSIP’s open-source, vendor-free model as a practical alternative [14-15][231-236][179-190]. A second tension concerned the scaling mechanism: Vasconcellos promoted a top-down, centralised procurement and shared-service model, while Garg and the moderator emphasised modular, voluntary rails and horizontal enablers such as compute and talent [211-216][231-236][14-15]. A third divergence related to external assistance: Kizom’s G7 AI Hub seeks to import compute and talent to the Global South, whereas Vasconcellos warned that excessive reliance on external vendors could erode domestic capability [61-66][222-230].


From these convergences and divergences, the panel distilled a set of key take-aways:


1. AI should be institutionalised as DPI, with trust, interoperability and shareability as core attributes.


2. The four foundational resources-compute, data, talent and models-must be democratised, and data must satisfy the discoverability, trustworthiness, interoperability and usability criteria.


3. The “100 AI diffusion pathways by 2030” agenda stresses horizontal enablers (language, compute, talent) linked to sector-specific use-cases.


4. Early government participation and the provision of shared public rails (identity, payments, data exchange) are essential to move AI from pilot to production at population scale.


5. Open-source, multi-model platforms such as METRI and MOSIP are preferred to avoid vendor lock-in and to build domestic capability.


6. Multilingual and voice AI are critical equalisers for reaching underserved users.


7. Co-architecting public-private partnerships and multi-stakeholder collaborations (e.g., the G7 AI Hub, XTEP, Gates Foundation) are required for sustainable diffusion [14-15][23-29][53-54][75-86][231-236][178-190][240-247].


The panel also identified concrete actions. The METRI platform will be further developed as a voluntary, modular framework for sharing compute, data, models and talent [23-28]. The G7 AI Hub will continue to unlock resources for Africa, Latin America and Asia [61-66]. Countries are encouraged to adopt the AI use-case adoption framework, which maps vertical sectoral impact to horizontal unlocks, to guide scaling [240-247]. Brazil will proceed with its centralised procurement and shared-service model to streamline AI deployment across ministries [211-216]. Nations are urged to explore open-source digital-ID solutions such as MOSIP, accompanied by operational support, training and financing [178-190]. Finally, stakeholders are asked to promote multi-model, open-source AI solutions to mitigate vendor lock-in [231-236].


Unresolved issues remain. Detailed governance mechanisms for ensuring data trustworthiness and privacy across jurisdictions have yet to be finalised. Viable business models for building compute infrastructure (e.g., data-centres and GPU clusters) in the Global South need further articulation. Clear timelines, metrics and accountability structures for achieving the 100-pathway target by 2030 are still missing. The development of safety guardrails and reusable playbooks for voice-enabled health or agricultural AI interactions requires additional research [165-170]. Finally, the extent of regulatory reforms needed to support an AI-centric DPI while preventing vendor lock-in is an open question [16-22][66-70][84-86][222-230].


In sum, the discussion moved from a high-level framing of AI as a nascent technology to a nuanced blueprint for embedding AI within trusted, interoperable public infrastructure. The pivotal moments were Garg’s articulation of AI-ready data and the DPI metaphor, Zhou’s diagnosis of “pilotitis” with a governance remedy, and the concrete national examples from Brazil and the MOSIP model. These insights reshaped the conversation from problem identification to solution design, culminating in a shared AI adoption framework that links sectoral needs with horizontal resources. The consensus on the importance of DPI, data readiness, early government involvement and open-source, multi-model ecosystems provides a solid foundation for future policy work, while the identified disagreements highlight the need for flexible, context-specific pathways that balance standardisation, sovereignty and openness. Together, the three pillars-digital public infrastructure, democratised foundational resources, and the 100-pathway agenda-chart a clear route forward, with next steps focused on developing METRI, scaling the G7 AI Hub and mainstreaming the use-case adoption framework. [14-15][75-86][140-152][240-247]


Session transcriptComplete transcript of the session
Speaker 1

request all the panelists along with Mr. Shankar and Mr. Saurabh for a picture, please, because everyone has different schedules. So we just want to get a quick photo of this moment before we move ahead. Yeah, content first. All right. Thank you so much. Panelists, you can take your seat. To take us forward, I’d like to invite Mr. Saurabh Garg, Secretary, MoSPI, India, to deliver a keynote. If you can take us forward. Thank you so much.

Saurabh Garg

Thank you. Good afternoon and great to be here on this session. We’re talking of diffusion, AI diffusion. I’ll just speak of one or two aspects of it because I’m sure the panelists would lend a lot more color to this topic. Just to take off where Shankar left off, he was talking about use cases, and that’s very necessary because AI is perhaps something like a solution in search of a problem. So unless and until we find use cases for it, it will not be able to give the value that it potentially can, and I think that’s really, really important. We’re talking of AI being a possible DPI, a digital public infrastructure, and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.

I think those are aspects which a DPI like Aadhaar or UPI has, and I think we are still in early days, but the mechanisms for that, how we can ensure it, would be possible, given that we talk of four resources as foundational AI resources: compute, data sets, talent and models, apart from obviously the frameworks that would be necessary for this. And I mention this because I had the privilege of chairing the democratizing AI resources working group of the AI summit, and the various options that we discussed with other countries on how we can ensure democratization of these four foundational resources. Obviously each of them would have a different mechanism, but one thing I would go into in slightly greater detail is the data sets part, which is also something that we are doing within the Ministry of Statistics, across different ministries and states. And why I am talking of data sets is also because data is the raw material for AI models, so it’s a very foundational resource in that sense, and compute is perhaps something that we can acquire. Therefore we have discussions around models, whether they need to be more efficient – they are right now extremely compute and energy intensive – or whether we can make them lighter going forward. That is work in progress; I think it will take some time before the small and domain-specific models come in, which will perhaps improve diffusion. But data is something that would need to be AI ready going forward, and by AI ready I would mean four things. One is discoverable: how do you ensure that data is easily discoverable, and that’s perhaps by ensuring that the metadata is understood by everyone, which makes it easier for any models also to understand. Second is the trustworthiness of the data, and that’s the quality assessments that we have, whether it’s trustworthy and credible, and that would determine its use. The third is its
interoperability: between two data sets, how interoperable are they, what kind of unique identifiers do they have to be able to identify what they are and to link different data sets. And the fourth is usability across systems, which would depend on the standardization and the classifications that we use being internationally similar, so that different conclusions do not come from the same data set.

And obviously, the focus would have to be on access and dissemination so that it is available for use while preserving the privacy of the data, with the safeguards that would need to be built. And why I am saying this about data is because this would also be where a lot of the local contexts, linguistic contexts, cultural contexts will come in, and that will come in from the data sets. We talk of ensuring that the inferences and the solutions are locally relevant, and I suppose the data would determine that relevance.

So we have to be very careful about that and ensure that it’s useful at different levels. So I’ll stop here, apart from saying that for democratizing AI resources, the working group discussed with the others, and a kind of platform has been suggested going forward, which has been named METRI. METRI in Hindi means friendship, for those who are not aware. And it’s an acronym for multi-stakeholder AI for resilient and, what’s the T for, sorry, trustworthy, infrastructure. So that’s the acronym that we hope to be able to use. But the concept is that, on a modular level, on a voluntary basis, on a non-commitment level,

how we can develop the foundational AI resources of availability of compute, data sets, models and talent. And I think as we are able to develop this and move towards a DPI for AI resources, I am sure diffusion would become all the easier. So thank you for this opportunity, and I look forward to a great time. Thank you.

Speaker 2

Thank you everyone. And we will carry on. We don’t have enough panels in which all of us are women. So three cheers for that. Don’t look at each other. You guys had a great contribution. So a couple of weeks back, some of us got together and we said, invention has happened in the West. Impact has to happen with each one of us. What’s the gap between invention and impact? And that’s where we came out and thought about adoption, and then we said, isn’t it diffusion? And why did we pick diffusion? We actually read a book by Jeffrey Ding. He’s a professor in D.C. Why am I forgetting the name of the institute?

It’s Georgetown, in D.C. Sorry. And we read about AI diffusion and that GPT, general purpose technology, like electricity, diffused into society over several decades. Electricity was created in Europe but actually diffused in the U.S. quite a lot; the U.S. captured it. And also chemical engineering, which Shankar talked about: if you remember chemistry, the Bohr model, you know, all those were Germans, but actually it’s the U.S. who capitalized on that. AI is like that. Invention happened in the West. We all know that. But it’s the global south that is going to have the use cases, that is going to diffuse it into sectors, and the horizontal enablers have to happen across these sectors for us to benefit, for us to have more economic benefit out of AI.

So that’s when all of us said that yes, we will do 100 diffusion pathways by 2030. And one of the partners in crime was Kizom. She is here with us, and Kizom, my first question is to you. Tell us how you think, because Kenya comes in, you are based in Italy, and we did a tripartite with Kenya, Italy and India. How do you think 100 pathways to 2030 pan out for you, and what does it mean for you? How do you think it will happen?

Speaker 3

Absolutely, Shalini. How long do I have to answer this question? Short version, long version? Short version?

Speaker 2

As long as people are okay with stories, you can carry on.

Speaker 3

Um, well, I mean, as Saurabhji, the chair of the working group for democratization of AI, spoke about, there are some fundamental resources or inputs AI needs in order for it to actually work in a way that can help a common citizen or a small business owner, and some of those foundations that he spoke about are AI-ready data and compute. And those are the things where I, in my role at the United Nations Development Programme, working in Africa, in parts of Latin America, in Asia, discovered that there is a constraint on access to some of these foundational resources. And so this G7 AI hub was created to address that constraint by, one, unlocking additional resources from, of course, the friendly G7 countries that wanted to focus on parts of Africa.

But also, as we do that, to think about what is the business case for data centers, for GPUs on the continent. How do you break data silos, even though the global south is so rich in data? As well as how do you orchestrate talent, especially since we saw that much of, you know, let’s say Microsoft’s or big tech’s talent pool, on the continent of Africa and in other parts of the world, was actually coming from global south countries. And over the last year or so, I’ve seen this tremendous momentum of many African people who worked in big tech or large companies moving back to the continent, because they actually don’t want the continent to be…

left behind. They want to be co-architects of the future, this fundamental shift that humanity is going through. And this is where, when we talk about 100 AI diffusion pathways, it is about co-architecting pathways where we look at how we bring not just language data, but voice adoption, into solutions that a smallholder farmer can use, that a woman entrepreneur can use, and not just as pilots, but to think about it from an infrastructure perspective, a digital public infrastructure perspective, where we can scale to millions of farmers, go across national boundaries and be able to look across borders, either as digital public goods or as expansion of private sector innovations or public-private partnerships. So as Shankar said, diffusion pathways could be many, and it’s for

Speaker 1

Thank you, Kizom. I’ll come to you, Janet. You lead global development for AI across multiple geographies. But most of them are stuck in pilots, right? How does AI become production scale? And do you think it’s only funding that they lack? Or are there more diffusion pathways that we can create so that AI pilots actually move to population scale?

Janet Zhou

Hello? Hi. Maybe I would first start by saying the problem of pilotitis is actually one that predates AI. And we have many technologies that are enormously beneficial for humanity that I think are currently still stuck, not having diffused. But when I think about the positive examples, the places where I think, as a global community, we’ve had tremendous scaled impact, right? Reducing child mortality by half since 2000, lifting 170 million people out of extreme poverty. The common threads are often that we’ve managed to figure out how to get both governments and markets to really focus and work for the most vulnerable populations. And so whether it’s vaccines that we’re talking about or instant payment systems, often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.

It is very much focused on making sure that we make it easier for local innovators to be able to enter markets. So whether that’s aggregating low-margin demand or streamlining market entry, really making it easy to lower the cost to serve for the most vulnerable people at the edge. And then it is very much also building institutional capacity and making sure that, you know, there’s playbooks and training and all of that, but really shared infrastructure that allows sort of all boats to rise, and making sure that that infrastructure is trustworthy, is inclusive. That creates, I think, a really positive feedback loop, because I loved what Nandan Nilekani expressed, which is that we really rely on institutions for trust, not on algorithms.

And I think one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that otherwise would be less likely to benefit.

Speaker 1

Yeah, absolutely. I think that’s key: how do you trust the institutions, and the AI output? You know, suppose it’s coming out of an AI advisory application. Do you trust that, or do you trust the institution which gives it in a physical form? Or will the institution adopt this AI advisory so that there’s more trust in the advice itself being given? I mean, that’s quite a hybrid and risky matter, and institutions have to understand AI, adopt it, and first trust the AI output before they say that this is ours. I think that part is key to AI adoption. Bia, tell us about Brazil, you know, a very different perspective. First let us understand, how is AI adoption in that region?

And are you also stuck in this pilot-to-production gap, and how do you see that being bridged?

Beatriz Vasconcellos

Perfect. So I think there are many different ways and perspectives to think about AI. In the Brazilian government we chose to establish a vision of one government for each person. So that means we are going fully on the personalization, and even on the agentic state vision, right? So for that we need to be thinking about some shared infrastructure and shared capabilities. So what we did was starting with the data. We have a project now to not just catalog but also prepare the data sets for training. We are also building some shared platforms for personalization and to understand citizens’ characteristics. Within our state-owned enterprises, we have two large IT state-owned enterprises, and we are making them collaborate on a shared platform in which we have some canonical data sets about citizens, and every ministry contributes with different characteristics, and we are creating different labels for every citizen.

And then one different way in which we are trying to break the data silos, which, of course, is a very big issue, is to think about the data ecosystems. So we came up with this concept, and it doesn’t mean that we’re doing data lakes. It means that we’re thinking about interoperability from a thematic perspective. So one example is the early childhood data ecosystem. We know that a lot of policies related to early childhood have different data requirements, and they need to use similar registries, and we’re going to look at some of these different data systems. So we created this ecosystem.

We brought together five ministries: Ministry of Health, Education, Social Development, Management, and Human Rights. And we cataloged the policies and what kind of data would be needed. And then we started creating the standards for that specific ecosystem. So we prioritized early childhood and environmental and land ecosystems. Land, environmental and climate, it’s in the same group. So we are starting with that, and it seems to be an interesting approach. It seems to be working. The other thing, coming back to the DPI discussion: it is very helpful for us to have the digital ID and authentication to implement this vision. So what we’re doing now, well, a lot of people in the government want to do Gen AI, right?

because I think it’s the easiest and maybe most famous type of AI implementation. So a lot of government entities and ministries wanted to do their own chatbots. So it was being spread all over. So what we did was also to try to centralize that capability. And we started with informational chats, so what kind of policies or information would be helpful. Now we are just starting the transactional part of the chatbots. So the idea is that a citizen will be able to actually complete a service request or get their service done through the chat. And that’s only possible because we have the gov.br authentication. So we know that the person is actually the right person.

And then the third step, which we still haven’t entered, but that’s the vision, is the agentic state: to build the agent specific to that person. And that will only be able to happen once we have the data platform infrastructure. So that’s more or less how we’re thinking about it.

Speaker 1

Okay. And thanks for bringing DPI into the picture, because my next question is on that. Nandan announced yesterday 100 Pathways to 2030, because it comes from a lot of experience on DPI. And Kizom, my next question is to you: you were also in the DPI journey, working with India. Do you think in AI there are rails, like, you know, DPI lays down rails, roads, which other countries can then take? In AI, how do use cases cross borders? What are the pathways? What are the playbooks that different countries can benefit from? How do you think that can happen?

Speaker 3

Shalini, great question. And I’m assuming this room is fully aware of or is a user of digital public infrastructure. Raise your hands if you’re not. Oh my God. One or two people. It’s probably one of the reasons why I don’t. Okay, we’re not going to get into that right now. You use UPI, right? You use DigiLocker. You don’t use DigiLocker, but you use DigiYatra. No? Okay. I think you should. But you use UPI. Okay, so he’s a DPI user. And that’s the beauty of digital public infrastructure. You actually want to be invisible. And one of the sort of design, one of the ambitions that we have as part of the AI diffusion pathways is that we actually don’t want AI to be this noisy, chaotic technology.

We want it to be so invisible because it’s actually part of your life. Part of not just our life, because obviously for us it’s very convenient, we’re English speakers and so we are at the summit, but for a smallholder farmer, for a small micro-entrepreneur business, a woman who is crossing borders between Guinea and Sierra Leone, for example. So to go back to your question, Shalini: one, as Bia was already starting to say, as she is seeing in Brazil and as certainly we are seeing in many parts of the world including in India, when you have data that’s already interoperable and public rails such as identity, payments, data exchange, then the power of AI is much easier to bring to that same service that you wanted to reach out with, that’s now an AI chatbot to a farmer on those rails. So that’s fantastic, but then we are also seeing an emergence of additional rails, and I think that’s a great point. For those of you who are from India, you probably have heard of someone using Bhashani, which is built on AI for Bharat and sort of the Indic language stack.

So that is definitely a public rail. And I know that in different parts of the world, there are many such rails being created. And I hope that we see the emergence of rails, but also the convergence of rails. Because as the French president was saying yesterday, along with Honorable Prime Minister Modi, it’s not that we need to do more, it’s that we need to do better together. So this is where the public rails really need to come together. And then I want to recognize Selena from Zindi here from Africa. She runs a public infrastructure. She runs a network of 100,000 data scientists across Africa. And that’s already infrastructure. It’s public interest, public value. And we’re at a place where we’re trying to figure out what is the business case.

How do we still make them sustainable by creating those innovation layers on top of the public rails that are also emerging on AI? But it’s not like you have to compete between DPI and AI. The DPI principles of interoperability, modularity, reusability, becoming a digital public good, those still remain quite intact, and this is how we might see population scale, the scale towards impact.

Speaker 1

Thank you, Kizom, for explaining it so well, and actually that’s happening, because it’s not just language: multilinguality, voice AI, that is becoming a DPI, because you should be able to interact in voice, and the voice stack is something which should be available for most people to build on top of. Safety, the guardrails, they can become DPI in themselves. How do you do safe conversations in agriculture? How do you do safe conversations if someone is calling up for patient care in health care? And can those conversations become a playbook in themselves? So these are the playbooks which can get created. So thank you so much for talking about it.

I’ll come to you, Janet. The frictions which are there, right? I mean, do you think there could be certain programs or investments to be made to remove such frictions? Because everybody is building the full stack: hey, I’ll do language translation; hey, I will do the compute I need, the data I need. So how do you remove the frictions, and do you think some programs and investments can help this?

Janet Zhou

You know, I was thinking about this question, and the example that came to mind that maybe illustrates it really well was the MOSIP program, which is really an open source platform. It’s inspired by the Aadhaar program, but it is part of a larger effort with the World Bank and many other partners to actually try to take that open source, vendor-lock-in-free national ID system and bring it to many, many countries. And when I thought about the components of that, the programming components, I know a lot of it was around ensuring that there was sort of an open, production-ready reference implementation. And maybe if we’re going to continue on the road analogy, I was trying to think of what that would be.

And you still, if you have a road, you still need to pick sides of the road and agree on which side of the road everyone’s going to drive. And you have to agree that a stop sign means stop and that red means stop. And so there are still, I think, a set of programmatic standards and norms that really makes it easier, not only for the adoption, but for those that have adopted to then be able to benefit from that adoption. And then, you know, a lot of what I think about programmatically in something like MOSIP is, in addition to the technical implementation, there was a lot of operational support, a lot of just examples, countries visiting each other.

And, you know, I think India has sent many delegations to many countries to help explain their story, share their pathway. There’s training that needs to be had, right? You still have to get your driver’s license and prove that you know how to use it. And so, you know, I think even after building the rails, there’s still plenty of program implementation work to actually help facilitate and lubricate that adoption. And, of course, financing as well, which came through the World Bank program. So, you know, there’s no sort of single bullet; even after having the rails set, there’s still, I think, a lot of work to be done on program implementation and operational support.

Speaker 2

Thank you. Bia, what’s the hardest challenge? I mean, this all sounds very easy: have diffusion pathways, go and build it. But it has to be operational. It has to be adopted. There are people, right? The human in the loop is the most important in AI. We can never ignore that. What’s the hardest challenge that you see in this? Just one? Just one? Oh, we’re lucky.

Beatriz Vasconcellos

So obviously, it’s not just creating applications. It’s the same old story of digital transformation, right? It’s just at a different level, but you’ve got to change the processes, the way things work. So there are maybe three interesting things that we are trying to do. Also, I’m not trying to sell it, right? Everything that we are doing, we’re testing, so let’s see what works and what doesn’t. But one thing that we’re doing now is in the Ministry of Management, we have a Secretariat for Shared Services, and they didn’t use to work with AI. So the idea is that we make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.

So for example, with these chatbots that I was telling you about, we’ve centralized the procurement, and we chose one vendor to help us build the solution, whereas each ministry was doing their own. So we said, hey, if you buy it through the centralized service, it takes just a few hours: you just need to sign a document and transfer some money digitally to the Ministry of Management, and you can use the service. So you don’t have to go through any procurement. So that’s one way that we’re trying to overcome the problem of multiple solutions and difficult implementation.

We also came up with what I think is an interesting institutional arrangement. When we’re talking about AI, we’re talking about innovation and new capabilities, and we’re building those innovation capabilities through the Ministry of Management. That means they’re building the whole process: how you first come up with a policy goal, what the AI project is going to target, and how you experiment. They build the process for experimenting, and they have analysts looking at the data and seeing if things are working. That’s something that seems to be working well, and we think it’s going to be good. The other real challenge that we have, I think, is with the vendors. And here I’m using my development hat, from my previous background.

I think everyone is talking about AI and how every agency and ministry needs to be doing something on AI. And obviously there are some big vendors who are saying: you, government, don’t have the capabilities; we have the capabilities, we can do it very fast, we do it at scale. And if you start making those decisions day after day, you’re not going to build any capabilities of your own. I use an analogy with the army: no one thinks it’s reasonable to outsource your army to a country that has a stronger or better army, but in digital we’re doing it every day. For every decision it’s: oh, this company does it better, so we’re just going to outsource. But there are some essential capabilities at stake. It’s not just an AI tool; we’re dealing with national data, and we have some very strategic goals. So if we don’t think about building these capabilities, even if you start small, and it takes a while to build the muscles, we’ve got to build the muscles. That’s why we’re trying to incentivize the agencies to test and experiment and not buy prepackaged solutions, because we’ve got to build our own muscles.

Speaker 2

Yeah, I think you brought up a very valid point, which is what a lot of people are scared of: vendor lock-in. Oh, we’re going to have to do this, and we’re going to have to do this. You would have seen Amul AI, which was launched by the Prime Minister and which the EkStep Foundation made possible. One key thing there was keeping it multi-model: multiple models should be able to do it. Why just one? That’s been a key thing: how you give people choice, how you keep them from being locked into the system, because that’s where diffusion works.

Diffusion is not about taking concentrated Western LLMs and just deploying them all together. It’s about actually walking the path: give choice and replaceability, have domain knowledge, keep your data with you, because the data sits in our enterprise systems and we don’t want models learning from it. How can you separate that? And now this know-how that we have got, we want to share with everybody. That playbook is a diffusion pathway, and it gives an example of that. Kizum, you and I co-authored a paper, which is up at the Atlantic Council, and in it we also talk about the use case adoption framework. Would you like to tell people about the use case adoption framework and how it can be a friction remover?

Speaker 3

Oh, absolutely. And I’m looking for the key author of the use case adoption framework, Tanvi Lal, director at People Plus AI at the EkStep Foundation. So, you know, when we were preparing for the AI Impact Summit many months ago, which feels like many years ago, we started with this idea: adoption. Adoption is proving to be a challenge. What are we learning from our experience? This is where EkStep looked at Mahavish Star, its work with AI4Bharat, and its ongoing conversations with Anthropic and other private sector companies on safety tooling. And I did the same across a number of countries. Together we consulted 20-plus countries and held convenings from South Africa to New York to, I don’t know, many, many more places, along with the Gates Foundation as well.

And what we learned was that the impact of a technology like artificial intelligence sits in sectors, so education, health, climate change, but its ability to move from pilot to scale depends on the horizontal unlocks. Underpinning these 100 AI diffusion pathways is a framework that we call the use case adoption framework. We see impact in sectors where you need contextual data, contextual knowledge, processes, workflows, things that have to change in a department of education, a department of health, and so on. But then the horizontal unlocks are the language data and compute. Generally, how do you make data AI-ready? How do you make data interoperable, because a farmer is going to be buying things, selling things, getting public services?

So we have to think about it from a user-life perspective. That is really, I think, a bit about the use case adoption framework that we’ve built together with countries, the Gates Foundation, and EkStep. And we hope it helps us ground our 100 AI diffusion pathways, because, as Shalini was saying, this is not about just going and saying: I have the solution, you adopt it. We’re not going to see that impact with that approach. We’ll have to co-design some pathways. We’ll have to fuse verticals and horizontals. And this is where, at least when I talk to many innovators and private sector companies in the Global South, I see them saying: aha, this is how we co-architect the future.

This is where, when we develop a voice optimization solution as a public good, that goes out to the world. We are builders of the future too. So this, you know, it’s just such a powerful kind of learning that we’ve put together into this 100 AI diffusion pathways towards impact.

Speaker 2

Thank you. Thank you, Kizum. I’m looking at the time, and I would like to ask the audience for two questions. So please raise your hand. Yeah, I saw yours first. Okay. Would anybody like to take it?

Speaker 3

Yeah, yeah, I was so distracted by the crowd that’s coming in. We’re getting kicked out, guys. So I think your question was how to address diversity in diffusion: if you can’t read, can you hear? This is where I think voice adoption is key to the inclusion agenda and the impact agenda of AI. So, to answer your question: voice adoption.

Speaker 2

Yeah, actually, that’s why AI becomes more of an equalizer and actually bridges the divide, right? There are inequalities, and how do you bring in a new language? Today, bringing a new language into a model has become fairly easy. There is data locked in PDFs across various regions, and people don’t know that today this has become easier. That’s how it is a leveler. That’s a trusted source. So I’ll maybe talk to you later about what Mr. Saurabh Garg said and how evidence fits in. One last question? Yeah, I think you’re talking about a pivotal moment, right? I am not a fortune teller, but what I can do is understand the AI ecosystem. I think multilinguality can be one very big change, because it draws people in. What is change about? Change is always about people. When UPI was initially talked about, the banks said: I have to change my whole system for this. But its user-friendliness, and the fact that it was so easy to deploy and to adopt, is what drew people to it. So any AI moment that draws people in through interoperability and usability will itself become such a moment. Has it happened? No. Can it happen? Yes. And multilinguality is one of those possibilities, but we have to see how it pans out. Okay, thank you so much. Thank you very much. We have been kicked out of the room. A great panel. Thank you, bye.

Speaker 1

Thank you, everyone, for joining us and sharing your thoughtful views. On behalf of the India AI team, we would like to offer a souvenir with our sincere thanks. Thank you so much. Thank you. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (28)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator thanked everyone for joining, asked panelists Mr Shankar and Mr Saurabh to pose for a quick photograph, and invited the Secretary of MOSIP India, Mr Saurabh Garg, to deliver the keynote address.”

The moderator’s thank-you is recorded in the session notes [S86] and the invitation for a group photograph and the presence of Mr Shankar Maruwada are described in the opening remarks [S87].

Confirmed (high)

“Garg characterised artificial intelligence as “a solution in search of a problem” and argued that AI will generate value only when concrete use‑cases are identified.”

A comment highlighting that AI exists but lacks a clear problem-solution fit matches Garg’s statement and is documented in the discussion summary [S4].

Confirmed (high)

“Garg outlined a four‑point rubric for “AI‑ready” data: discoverability through common metadata, trustworthiness via quality assessments, interoperability enabled by unique identifiers, and usability ensured by internationally aligned standards.”

Dr Saurabh Garg’s description of the four essential elements for AI-ready data infrastructure-discoverability, trustworthiness, interoperability and usability-is recorded in the transcript notes [S16].

Confirmed (medium)

“He noted that access must be balanced with privacy safeguards and that locally relevant data are essential for context‑specific AI solutions.”

The need to balance data access with privacy protections is explicitly mentioned in the discussion summary [S93]; the emphasis on local relevance aligns with broader remarks on agency and co-creation in digital public infrastructure [S12].

Confirmed (high)

“The moderator announced the ambition of 100 AI diffusion pathways by 2030.”

The initiative to create 100 AI diffusion pathways by 2030 is documented in the session summary [S100].

External Sources (100)
S1
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S2
S3
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2- Speaker 3 – Speaker 1- Speaker 3- Moderator
S4
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Speaker 1 – Speaker 3- Beatriz Vasconcellos
S5
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Janet Zhou – Speaker 1- Janet Zhou
S6
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S7
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S8
S9
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S10
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S11
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S12
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S13
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S14
S15
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — It is very clear to me that the 2030s will be a chaotic era. There will be disruption. There will be large changes. And …
S16
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S17
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S18
Building Population-Scale Digital Public Infrastructure for AI — This comment reframes AI diffusion from a technology problem to an infrastructure problem, introducing the powerful meta…
S19
AI for agriculture Scaling Intelegence for food and climate resiliance — Shankar Maruwada from EkStep Foundation provided the technical framework for scaling AI solutions through digital public…
S20
WS #189 AI Regulation Unveiled: Global Pioneering for a Safer World — Juliana Sakai: Hi everyone, thank you. So we have like right now the policy question three with the theme enhancing en…
S21
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — By relying heavily on external entities for critical technology infrastructure, the country runs the risk of losing cont…
S22
Latin America struggling to join the global AI race — Currently,Latin America is laggingin AI innovation. It contributes only 0.3% of global startup activity and attracts a m…
S23
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned. And by the …
S24
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S25
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — 3. Contextualising Policies and Technologies: Adamma Isamade: Good afternoon, everyone. The question is very interestin…
S26
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S27
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — But the second aspect of competition is really diffusion or adoption. As each country and the companies from each countr…
S28
Safe and responsible AI — The Czech Republic is one of the most industrialized countries with almost 40% share of value added in the economy. Of t…
S29
Building the AI-Ready Future From Infrastructure to Skills — The progression from proof-of-concept to production represents a critical challenge. Resources like AMD’s Developer Clou…
S30
AI and Data Driving India’s Energy Transformation for Climate Solutions — The expert panel discussion emphasized critical enabling conditions for scaling these solutions beyond pilot projects. K…
S31
Quantum Technologies: Navigating the Path from Promise to Practice — Bogdan-Martin argues that successful quantum technology deployment requires simultaneous progress on multiple fronts bey…
S32
Swiss AI Initiatives and Policy Implementation Discussion — Using open-source models with fine-tuning for public institutions to avoid vendor lock-in while maintaining quality
S33
Connecting open code with policymakers to development | IGF 2023 WS #500 — In conclusion, accessing timely and up-to-date data for development objectives is a significant challenge in developing …
S34
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S35
Collaborative AI Network – Strengthening Skills Research and Innovation — Garg detailed four critical requirements for AI-ready data: discoverable (through proper metadata), trustworthy (through…
S36
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S37
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — At thetechnical level, data needs standards in order to be interoperable. Here, the work of standardisation and technica…
S38
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — And accessibility has to be also broadened in terms of multi -modality and also, where necessary, include a human in the…
S39
The Foundation of AI Democratizing Compute Data Infrastructure — “It needs to be interoperable and shareable.”[37]. “So I think two characteristics of digital public infrastructure, whi…
S40
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade: Good afternoon, everyone. The question is very interesting, but I think it’s not a question that I can a…
S41
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S42
Inclusive AI_ Why Linguistic Diversity Matters — Means India has got about, means we were talking to Survey of India, and they have about 16 lakh places named, which are…
S43
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — ## Introduction and Context ### Data Governance and Collective Approaches ### Framework for Inclusive Development Abh…
S44
WSIS Action Lines C4 and C7:E-employment: Emerging technologies in the world of work: Addressing challenges through digital skills — The strong consensus on key principles—particularly the need for partnerships, human-centred AI integration, and adaptiv…
S45
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S46
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Another important point emphasized in the analysis is the significance of involving users and technical experts in the p…
S47
Artificial intelligence (AI) – UN Security Council — The discussion highlighted that open-source models enable a wide range of entities, from startups to larger corporations…
S48
WS #208 Democratising Access to AI with Open Source LLMs — Bianca Kremer: Hi, everybody hears me? First of all, I’d like to apologize for the delay and other procedures, we’re i…
S49
Open Forum #67 Open-source AI as a Catalyst for Africa’s Digital Economy — Continental Strategy and Coordination Legal and regulatory | Data governance The speaker describes ongoing policy deve…
S50
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S51
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S52
WS #145 Revitalizing Trust: Harnessing AI for Responsible Governance — The level of consensus among the speakers was relatively high, particularly on the benefits and potential applications o…
S53
Driving Social Good with AI_ Evaluation and Open Source at Scale — High level of consensus with complementary perspectives rather than conflicting viewpoints. The speakers built upon each…
S54
Smart Regulation Rightsizing Governance for the AI Revolution — Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, cap…
S55
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S56
RESEARCH PAPERS — developing countries opened up by the adoption of ICTs and destroy the potential for increased access to knowledge. The…
S57
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S58
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S59
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S60
The Foundation of AI Democratizing Compute Data Infrastructure — This connects AI democratization to broader digital infrastructure development, suggesting that individual data empowerm…
S61
WS #257 Data for Impact Equitable Sustainable DPI Data Governance — Chetty Pria: And thank you so much, Payal, and thanks for sharing also or introducing that what we are witnessing here i…
S62
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S63
AI for Social Good Using Technology to Create Real-World Impact — And I think that’s what we’re doing. And to give you another example of how it reduces the complexity, there’s a very in…
S64
https://dig.watch/event/india-ai-impact-summit-2026/fireside-conversation-01 — It’s about institutions. It’s about trust building. It’s about negotiations. It’s about guardrails, which Dario mentione…
S65
Discussion Report: AI Implementation and Global Accessibility — And when you look at deployment, the guardrails of fairness, accountability, privacy, security need to be maintained. An…
S66
Safe and Responsible AI at Scale Practical Pathways — “Deep work on working on fragmented data silos.”[5]. “It can be bridged but we have to think about how to make data inte…
S67
AI as critical infrastructure for continuity in public services — “Distributed software development.”[65]. “At Bilenium, recently we have developed as well one dedicated solution, which …
S68
AI for agriculture Scaling Intelegence for food and climate resiliance — Shankar Maruwada from EkStep Foundation provided the technical framework for scaling AI solutions through digital public…
S69
Scaling AI for Billions_ Building Digital Public Infrastructure — Absolutely, and very rightly said. So it’s becoming a fundamental part of the infrastructure that is being then used to …
S70
Building Population-Scale Digital Public Infrastructure for AI — This comment reframes AI diffusion from a technology problem to an infrastructure problem, introducing the powerful meta…
S71
AI and Data Driving India’s Energy Transformation for Climate Solutions — The speakers demonstrate strong consensus on fundamental challenges around data fragmentation, the need for standardized…
S72
Manufacturing’s Moonshots Are Landing . . . Are You Ready for the Next Wave? — From connected buildings to advanced AI and decarbonization efforts, companies that embrace these changes thrive in the …
S73
Nepal Engagement Session — Open architecture and interoperability are critical for long-term sustainability, avoiding vendor lock-in, and maintaini…
S74
Swiss AI Initiatives and Policy Implementation Discussion — Using open-source models with fine-tuning for public institutions to avoid vendor lock-in while maintaining quality
S75
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S76
Empowering People with Digital Public Infrastructure — Brendan Vaughan: It’s really, really important. And I totally agree. Yeah, so I would add to that email. Pretty good…
S77
Day 0 Event #61 Accelerating progress for unified digital cooperation — The tone of the discussion was largely constructive and forward-looking. Speakers acknowledged challenges but focused on…
S78
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S79
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S80
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The discussion maintained an optimistic and collaborative tone throughout, characterized by constructive problem-solving…
S81
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — The tone was pragmatic and solution-oriented, with speakers expressing both frustration with past failures and cautious …
S82
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S83
Science as a Growth Engine: Navigating the Funding and Translation Challenge — The discussion maintained a consistently thoughtful and collaborative tone throughout. While panelists acknowledged seri…
S84
Pre 12: Resilience of IoT Ecosystems: Preparing for the Future — The discussion maintained a serious, urgent tone throughout, with speakers consistently emphasizing the critical nature …
S85
WS #226 Strengthening Multistakeholder Participation — The discussion maintained a collaborative and constructive tone throughout, with participants openly acknowledging chall…
S86
S87
https://dig.watch/event/india-ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by takin…
S88
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Thomas Davin: Thank you so much. So indeed, an alignment within that notion of DPGs, there is very much a value based sy…
S89
Opening of the session — ### Procedural Arrangements Canada: Thank you, Chair. We thank you for your efforts in seeking to devote tomorrow to th…
S90
An exciting and fearsome tool – Statement by Pope Francis at G7 Summit — Artificial intelligence is designed in this way in order to solve specific problems. Yet, for those who use it, there is…
S91
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S92
How nonprofits are using AI-based innovations to scale their impact — This comment helped establish a key takeaway for the nonprofit audience and shifted the conversation toward practical im…
S93
TradeTech’s Trillion-Dollar Promise — Additionally, current technological interlinkages can create barriers due to excessive data requests, posing a challenge…
S94
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Arun Shetty from Cisco identified three major impediments to AI adoption: infrastructure constraints (power, compute, an…
S95
Internet Governance Forum 2024 — The discussion on moving beyond the dichotomy between data localisation and cross-border data flows was prominently feat…
S96
Discussion Summary: US AI Governance Strategy Under the Trump Administration — Regarding US-China competition, Ball emphasized that America should win through superior adoption and development of AI …
S97
Building Public Interest AI Catalytic Funding for Equitable Compute Access — I mean, let’s not torture the analogy and take something really fun and then try to, like, tie it to AI. But here’s what…
S98
https://dig.watch/event/india-ai-impact-summit-2026/collaborative-ai-network-strengthening-skills-research-and-innovation — Diffusion is not about like concentrated western LLMs all together and just deploy it. It’s about actually walking the p…
S99
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — Technologies created in the West can be double biased in technology transfer and adoption in other regions
S100
Fireside Conversation: 01 — A major announcement was the initiative to create 100 AI diffusion pathways by 2030. As Matthan noted with the catchphra…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Saurabh Garg
2 arguments · 130 words per minute · 866 words · 397 seconds
Argument 1
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
EXPLANATION
Saurabh Garg argues that for AI to deliver its potential value it must be treated as a Digital Public Infrastructure (DPI), requiring trust, interoperability, and shareability akin to existing Indian DPIs such as Aadhaar and UPI. He stresses that establishing these qualities is essential before AI can be widely adopted.
EVIDENCE
He states that AI is being considered as a possible DPI and that mechanisms are needed to ensure it becomes trusted, interoperable and shareable, drawing a parallel with Aadhaar and UPI as examples of such infrastructure [14]. He also notes that this is an early-day effort and that foundational resources like compute, data sets, talent and models need appropriate frameworks to support this vision [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The DPI concept and the need for open-source, adaptable systems for countries are discussed in [S12]; Garg’s emphasis on trust, interoperability and shareability aligns with the description of Aadhaar-like digital public infrastructure in [S14] and [S15]; the metaphor of shared rails for AI diffusion is elaborated in [S18].
MAJOR DISCUSSION POINT
AI as Digital Public Infrastructure (DPI) and foundational resources
AGREED WITH
Speaker 1, Speaker 3, Janet Zhou
DISAGREED WITH
Speaker 2, Janet Zhou
Argument 2
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
EXPLANATION
Garg outlines four key attributes that data must possess to be AI‑ready: discoverability through clear metadata, trustworthiness via quality assessments, interoperability using unique identifiers, and usability ensured by standardized classifications. These criteria are presented as prerequisites for effective AI model training and deployment.
EVIDENCE
He details the four requirements, explaining that discoverable data needs understandable metadata, trustworthy data requires quality assessments, interoperable data must have unique identifiers, and usable data depends on international standardization and classification [15]. He adds that access and dissemination must balance availability with privacy safeguards [16], and that locally relevant data will shape AI relevance [17-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Four essential elements for AI-ready data-discoverability, trustworthiness, interoperability and usability-are enumerated in the same wording in [S16].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Beatriz Vasconcellos, Janet Zhou
DISAGREED WITH
Beatriz Vasconcellos, Speaker 2
Speaker 1
1 argument · 120 words per minute · 647 words · 323 seconds
Argument 1
Establishing “rails” for AI use cases across borders mirrors DPI principles and enables scalable diffusion (Speaker 1)
EXPLANATION
Speaker 1 proposes that, similar to how DPI provides common ‘rails’ for services like UPI, AI should have cross‑border rails that make use cases portable and scalable. These rails would act as standards and pathways that other countries can adopt to accelerate diffusion.
EVIDENCE
During the panel, the moderator asks whether AI can have rails like DPI that other nations can follow, emphasizing the need for cross-border use-case pathways and playbooks [135-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The idea of AI ‘rails’ as a public infrastructure analogue to UPI is presented in the DPI discussion in [S12] and reinforced by the shared-rail metaphor in [S18].
MAJOR DISCUSSION POINT
AI as Digital Public Infrastructure (DPI) and foundational resources
AGREED WITH
Saurabh Garg, Speaker 3, Janet Zhou
Beatriz Vasconcellos
3 arguments, 154 words per minute, 1218 words, 474 seconds
Argument 1
Brazil is building thematic data ecosystems (e.g., early‑childhood, climate) with common standards to break silos and enable AI applications (Beatriz Vasconcellos)
EXPLANATION
Beatriz describes Brazil’s approach of creating sector‑specific data ecosystems, starting with early‑childhood and climate, to standardize data, break silos, and facilitate AI‑driven services. Ministries collaborate to produce canonical datasets and shared standards.
EVIDENCE
She explains that Brazil is cataloguing and preparing datasets for training, building shared platforms across state-owned enterprises, and creating thematic ecosystems such as early-childhood, where five ministries contribute data and standards are defined [99-110].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brazil’s creation of sector-specific data ecosystems and common standards is described in the collaborative AI network report in [S4].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Saurabh Garg, Janet Zhou
Argument 2
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
EXPLANATION
Beatriz outlines a centralized procurement mechanism where a single shared service provides AI tools (e.g., chatbots) to all ministries, cutting down procurement time and costs. This model aims to overcome fragmented implementations and speed up scaling.
EVIDENCE
She notes that the Ministry of Management created a Secretariat for Shared Services, allowing ministries to obtain AI services through a single procurement process that takes only a few hours and a simple digital transfer, eliminating separate procurement for each ministry [211-216].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The centralised chatbot procurement and shared-service approach adopted by Brazil’s Ministry of Management is detailed in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
DISAGREED WITH
Speaker 2, Saurabh Garg
Argument 3
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
EXPLANATION
Beatriz warns that over‑reliance on large vendors can prevent the development of national AI capabilities, likening it to outsourcing a nation’s army. She advocates for building internal expertise and avoiding vendor lock‑in.
EVIDENCE
She describes how big vendors claim superior capabilities, leading governments to outsource AI solutions, which she argues prevents the building of domestic skills and strategic control, using an army analogy to illustrate the risk [222-230].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Risks of heavy reliance on external entities for critical AI infrastructure are highlighted in [S21]; concerns about vendor lock-in and capacity gaps in Latin America are noted in [S22].
MAJOR DISCUSSION POINT
Building local capacity and avoiding vendor lock‑in
AGREED WITH
Speaker 2, Saurabh Garg
DISAGREED WITH
Speaker 3
Janet Zhou
3 arguments, 154 words per minute, 715 words, 277 seconds
Argument 1
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
EXPLANATION
Janet highlights MOSIP as an open‑source, vendor‑free digital ID platform that provides reference implementations and operational support, enabling many countries to adopt a common ID infrastructure quickly. This standardisation reduces lock‑in and speeds up AI‑related services that rely on identity verification.
EVIDENCE
She explains that MOSIP, inspired by Aadhaar, offers an open-source, production-ready reference implementation, with programmatic standards, operational support, country delegations, training, and financing from the World Bank to help nations adopt the platform [179-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
MOSIP’s open-source, production-ready reference implementation and its operational support model are discussed in [S4]; the broader DPI open-source approach is covered in [S12].
MAJOR DISCUSSION POINT
Data readiness and governance for AI diffusion
AGREED WITH
Saurabh Garg, Beatriz Vasconcellos
DISAGREED WITH
Saurabh Garg, Speaker 2
Argument 2
Institutional involvement early in design, coupled with inclusive, trustworthy public infrastructure, is essential for moving pilots to production (Janet Zhou)
EXPLANATION
Janet argues that successful scaling of AI requires governments to be at the design table from the start, ensuring that both public and private actors align with the needs of vulnerable populations. Inclusive institutions build trust, which is crucial for adoption.
EVIDENCE
She notes that the problem of “pilotitis” predates AI and cites examples like vaccines and instant payment systems where governments were involved early, making it easier for local innovators to enter markets and for infrastructure to be trustworthy and inclusive [76-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Janet Zhou’s emphasis on early government involvement and inclusive institutions to avoid “pilotitis” is recorded in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
Argument 3
Pilotitis is a long‑standing issue; successful scaling requires governments and markets to co‑design solutions from the outset and provide shared infrastructure (Janet Zhou)
EXPLANATION
Janet describes “pilotitis” as the tendency for projects to remain in pilot phase due to lack of coordinated design and shared infrastructure. She stresses that co‑design by governments and markets, together with common platforms, is needed to transition to production.
EVIDENCE
She references historical examples where scaling successes (e.g., vaccines, instant payments) were achieved by involving governments early and building shared infrastructure, contrasting this with many AI pilots that remain stuck [76-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The persistent challenge of “pilotitis” and the need for co-design and shared platforms are outlined in [S4].
MAJOR DISCUSSION POINT
Overcoming “pilotitis” and scaling AI to production
AGREED WITH
Beatriz Vasconcellos, Speaker 1
Speaker 2
3 arguments, 122 words per minute, 1028 words, 504 seconds
Argument 1
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
EXPLANATION
Speaker 2 introduces the ambition to create 100 AI diffusion pathways by 2030, emphasizing that horizontal enablers such as multilingual capability, compute resources, and talent are required to translate AI inventions into real‑world impact.
EVIDENCE
She recounts a discussion where the group decided on “100 diffusion pathways by 2030” as a target, linking it to the need for adoption and diffusion rather than just invention [53-54], and earlier she reflects on the gap between invention and impact, citing the book on AI diffusion and the need for adoption [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The target of 100 AI diffusion pathways and the role of horizontal unlocks are mentioned in the collaborative AI network notes in [S4] and reinforced by the pathway framing in [S18].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
AGREED WITH
Saurabh Garg, Speaker 3
Argument 2
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
EXPLANATION
Speaker 2 warns against vendor lock‑in and the dominance of a few Western large language models, advocating for multi‑model, open‑source solutions that give users choice and foster a more diverse AI landscape.
EVIDENCE
She cites concerns about vendor lock-in, mentions the Amul AI initiative, and stresses the importance of multi-model capability, replaceability, and domain knowledge, noting that these principles constitute a diffusion pathway and referencing a co-authored paper with Kizum on the use-case adoption framework [231-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for open-source, multi-model solutions to avoid vendor lock-in are echoed in the DPI open-source discussion in [S12] and the collaborative AI network’s emphasis on open-source models in [S4].
MAJOR DISCUSSION POINT
Building local capacity and avoiding vendor lock‑in
DISAGREED WITH
Beatriz Vasconcellos, Saurabh Garg
Argument 3
Multilingual and voice AI act as equalisers, lowering language barriers and expanding AI benefits to underserved populations (Speaker 2)
EXPLANATION
Speaker 2 highlights that adding new languages to AI models is now relatively easy, making multilingual and voice AI powerful tools for bridging digital divides and reaching marginalized groups.
EVIDENCE
She explains that multilinguality can level the playing field because new languages can be added quickly using existing data, and that this capability can help bring AI to people who previously lacked access, positioning AI as an equaliser [264-267].
MAJOR DISCUSSION POINT
Inclusion through multilingual and voice AI
AGREED WITH
Speaker 3
Speaker 3
3 arguments, 143 words per minute, 1512 words, 631 seconds
Argument 1
The G7 AI Hub and multi‑stakeholder collaboration aim to unlock compute, data, and talent for the Global South, co‑architecting sector‑specific pathways (Speaker 3)
EXPLANATION
Speaker 3 describes the G7 AI Hub as a mechanism to address resource constraints in the Global South by unlocking compute, data, and talent, and by co‑designing sector‑specific diffusion pathways.
EVIDENCE
She notes that the G7 AI Hub was created to address constraints on foundational AI resources, unlocking additional resources from friendly G7 countries, and focusing on co-architecting pathways for Africa, Latin America, and Asia [61-66].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
Argument 2
The AI use‑case adoption framework links sectoral impact (education, health, climate) with horizontal unlocks (data, compute, multilingual capability) to guide scaling (Speaker 3)
EXPLANATION
Speaker 3 outlines a framework that connects vertical sectoral needs with horizontal enablers, arguing that scaling AI from pilot to impact requires both contextual sector data and cross‑cutting resources like compute and multilingual data.
EVIDENCE
She references the development of the AI adoption framework with partners such as the Gates Foundation, describing how it maps sectoral impact (education, health, climate) to horizontal unlocks like language data, compute, and interoperable data, and stresses co-design of pathways [240-255].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The adoption framework that maps sectoral needs to horizontal resources such as data, compute and language is described in the collaborative AI network summary in [S4].
MAJOR DISCUSSION POINT
Strategic diffusion pathways and adoption framework
Argument 3
Voice‑enabled AI, built as a public rail, can deliver safe, context‑aware services in agriculture, health, and other sectors, enhancing inclusivity (Speaker 3)
EXPLANATION
Speaker 3 argues that voice AI should be integrated as an invisible public rail, enabling safe, context‑aware interactions for diverse users such as farmers or patients, thereby expanding AI’s inclusive reach.
EVIDENCE
She mentions that AI should be invisible and part of everyday life, cites examples like Bhashini for Indic languages as a public rail, and discusses the need for safety guardrails in voice interactions for agriculture and healthcare, positioning these as potential playbooks [154-156][168-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The rail analogy for AI services, including voice-enabled solutions for agriculture and health, is presented in the AI for agriculture scaling discussion in [S19] and the shared-rail concept in [S18].
MAJOR DISCUSSION POINT
Inclusion through multilingual and voice AI
Agreements
Agreement Points
AI should be treated as a Digital Public Infrastructure (DPI) with trust, interoperability, and shared “rails” to enable cross‑border diffusion.
Speakers: Saurabh Garg, Speaker 1, Speaker 3, Janet Zhou
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Establishing “rails” for AI use cases across borders mirrors DPI principles and enables scalable diffusion (Speaker 1)
AI should be invisible, built on public rails that make services portable across countries (Speaker 3)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
All four speakers concur that AI needs to be built on a common, trustworthy, interoperable infrastructure, akin to existing DPIs such as Aadhaar/UPI, so that services can be deployed seamlessly across jurisdictions. They cite the rail metaphor and MOSIP’s open-source model as concrete illustrations. [14-15][135-138][152-158][179-190]
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects the emerging DPI policy framework endorsed in WSIS Action Lines and recent AI-in-DPI panels, which stress trust, interoperability and shared standards for cross-border services [S38][S39][S40][S50].
Data must be AI‑ready: discoverable, trustworthy, interoperable, and usable through clear metadata, quality assessments, unique identifiers and standards.
Speakers: Saurabh Garg, Beatriz Vasconcellos, Janet Zhou
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
Brazil is building thematic data ecosystems (e.g., early‑childhood, climate) with common standards to break silos and enable AI applications (Beatriz Vasconcellos)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
The speakers agree on the necessity of establishing robust data governance (metadata, quality, identifiers and standards) to make datasets AI-ready and interoperable, as reflected in India’s four-pillar view, Brazil’s sectoral ecosystems, and MOSIP’s reference implementation. [15-16][99-110][179-190]
POLICY CONTEXT (KNOWLEDGE BASE)
Garg’s four-pillar model for AI-ready data (discoverability, trustworthiness, interoperability and usability) has been documented in multiple policy briefs and aligns with standardisation efforts highlighted by technical bodies [S35][S36][S37].
Overcoming “pilotitis” requires early government involvement and shared infrastructure to move AI from pilots to production at scale.
Speakers: Janet Zhou, Beatriz Vasconcellos, Speaker 1
Pilotitis is a long‑standing issue; successful scaling requires governments and markets to co‑design solutions from the outset and provide shared infrastructure (Janet Zhou)
Centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
How does AI become production scale? (Speaker 1)
All three emphasize that AI projects remain stuck in pilot phases unless governments take a leading role in design and provide common platforms or procurement mechanisms that lower barriers for ministries and innovators. [76-82][211-216][71-74]
POLICY CONTEXT (KNOWLEDGE BASE)
India’s DPI rollout, which couples early public sector coordination with shared services to scale pilots, exemplifies this approach and was cited as a best-practice at the AI Impact Summit [S50][S52][S55].
Multilingual and voice AI are key equalisers that broaden inclusion and reach underserved populations.
Speakers: Speaker 2, Speaker 3
Multilingual and voice AI act as equalisers, lowering language barriers and expanding AI benefits to underserved populations (Speaker 2)
Voice‑enabled AI, built as a public rail, can deliver safe, context‑aware services in agriculture, health and other sectors, enhancing inclusivity (Speaker 3)
Both speakers stress that adding language and voice capabilities makes AI more accessible, turning it into an inclusive tool for farmers, patients and other marginalised users. [264-267][154-156][168-170]
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions on AI in DPI emphasise multimodality and linguistic diversity as inclusion levers, and recent studies on language-specific glossaries underline the need for voice-enabled AI for underserved communities [S38][S42][S44][S57].
Avoiding vendor lock‑in through open‑source, multi‑model, modular platforms is essential for sustainable AI diffusion.
Speakers: Speaker 2, Beatriz Vasconcellos, Saurabh Garg
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive AI ecosystem (Speaker 2)
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
A platform named METRI has been suggested to democratise AI resources on a voluntary, non‑commitment basis (Saurabh Garg)
The participants converge on the need for open, modular solutions (whether through multi-model strategies, centralized procurement reforms, or the METRI platform) to keep AI ecosystems open and locally controllable. [231-236][222-230][23-28]
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses warn against concentration of Western LLMs and promote open-source ecosystems to preserve sovereignty and competition, as reflected in UN and AU deliberations on open-source AI [S41][S47][S48][S49][S43].
Foundational AI resources (compute, data, talent, models) are horizontal enablers that must be addressed to realise the 100 diffusion pathways by 2030.
Speakers: Saurabh Garg, Speaker 2, Speaker 3
AI resources – compute, data sets, talent and models – are foundational for diffusion (Saurabh Garg)
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
The AI use‑case adoption framework links sectoral impact with horizontal unlocks (data, compute, multilingual capability) to guide scaling (Speaker 3)
All three highlight that scaling AI requires addressing the same set of cross-cutting resources (computing power, quality data, skilled people and efficient models) across sectors, forming the backbone of the 100-pathway agenda. [14-15][53-54][38-42][240-255]
POLICY CONTEXT (KNOWLEDGE BASE)
The notion of AI as critical infrastructure, requiring interoperable compute and data layers, is articulated in DPI literature and aligns with calls for capacity-building in emerging economies [S39][S51][S55].
Similar Viewpoints
Both see AI as a layer on top of existing digital public infrastructure, requiring seamless integration and trust. [14-15][152-158]
Speakers: Saurabh Garg, Speaker 3
AI should be invisible, built on public rails that make services portable across countries (Speaker 3)
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Both advocate for standardised, ready‑to‑use platforms backed by operational support to reduce duplication and speed up scaling. [211-216][179-190]
Speakers: Beatriz Vasconcellos, Janet Zhou
Brazil’s centralized procurement and shared‑service model streamlines AI deployment (Beatriz Vasconcellos)
MOSIP’s open‑source, production‑ready reference implementation with operational support accelerates adoption (Janet Zhou)
Both warn against dependence on single external vendors and call for capacity‑building and open solutions. [231-236][222-230]
Speakers: Speaker 2, Beatriz Vasconcellos
Vendor lock‑in must be avoided; promote multi‑model, open‑source solutions (Speaker 2)
Reliance on external vendors risks lock‑in; need to build domestic capability (Beatriz Vasconcellos)
Unexpected Consensus
Use of open‑source, production‑ready platforms (MOSIP in ID space and Brazil’s shared AI service model) as a means to accelerate AI diffusion across sectors.
Speakers: Janet Zhou, Beatriz Vasconcellos
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
Centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
Although one example concerns digital identity and the other AI service procurement, both converge on the principle that open, ready-to-use, centrally managed platforms are critical for rapid, scalable diffusion, an alignment not explicitly anticipated given their different policy domains. [179-190][211-216]
Linking multilingual/voice AI (Speaker 2) with the notion of locally relevant, AI‑ready data (Saurabh Garg) as a joint pathway to inclusion.
Speakers: Speaker 2, Saurabh Garg
Multilingual and voice AI act as equalisers, lowering language barriers (Speaker 2)
Local relevance of AI depends on data that reflects linguistic and cultural contexts (Saurabh Garg)
The connection between language-focused inclusion and the technical requirement for locally relevant datasets was not overtly discussed, yet both speakers implicitly agree that language-specific data is essential for effective, inclusive AI diffusion. [264-267][17-19]
POLICY CONTEXT (KNOWLEDGE BASE)
Combining Garg’s AI-ready data framework with linguistic-diversity initiatives mirrors integrated policy recommendations that link data quality with multilingual service delivery [S35][S42].
Overall Assessment

The panel demonstrates strong convergence around four core themes: (1) framing AI as a Digital Public Infrastructure with shared, trustworthy rails; (2) establishing AI‑ready data through standards, metadata and interoperability; (3) moving beyond pilot projects via early government involvement and shared service models; (4) ensuring inclusivity through multilingual/voice capabilities while avoiding vendor lock‑in by promoting open‑source, modular solutions. Horizontal enablers—compute, talent and models—are repeatedly identified as prerequisites for the 100‑pathway ambition.

High consensus – most speakers echo each other’s positions, indicating a shared understanding that scaling AI responsibly requires DPI‑style governance, data standards, institutional coordination and open, inclusive technology stacks. This broad agreement suggests that future policy initiatives can build on these common foundations to design coordinated diffusion strategies.

Differences
Different Viewpoints
Approach to building AI as a public infrastructure – government‑led DPI with standardized, trusted, interoperable components versus a multi‑stakeholder, open‑source, multi‑model ecosystem to avoid concentration of Western LLMs.
Speakers: Saurabh Garg, Speaker 2, Janet Zhou
AI must become a trusted, interoperable, and shareable public infrastructure, similar to Aadhaar or UPI (Saurabh Garg)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
Open‑source ID platforms like MOSIP illustrate how standardised, production‑ready components and operational support accelerate AI adoption (Janet Zhou)
Saurabh Garg frames AI as a Digital Public Infrastructure that requires trust, interoperability and government-driven standards [14-15]. Speaker 2 argues for a decentralized, open-source, multi-model approach to keep AI from being dominated by a few Western providers [231-236]. Janet Zhou points to MOSIP as an open-source, vendor-free platform that provides standardized building blocks and operational support for adoption [179-190]. The three positions differ on who should lead and how the foundational AI infrastructure should be created and governed.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in the AU open-source AI forum and EuroDIG highlight tension between state-centric DPI models and multi-stakeholder open-source ecosystems aimed at preventing Western model dominance [S41][S43][S47][S48][S49].
Method for scaling AI solutions – centralized, government‑run procurement and shared services versus modular, open‑source rails and voluntary, non‑committal development.
Speakers: Beatriz Vasconcellos, Speaker 2, Saurabh Garg
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
Beatriz describes a top-down, centralized procurement mechanism that lets ministries obtain AI services through a single, fast process [211-216]. Speaker 2 advocates for a bottom-up, open-source, multi-model strategy that avoids vendor lock-in and relies on shared rails [231-236]. Saurabh Garg focuses on data readiness and standards as the basis for diffusion [15-16]. The disagreement lies in whether scaling should be driven by centralized government procurement or by decentralized, open-source, modular development.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs on AI scaling contrast top-down procurement strategies with modular open-source pathways, noting divergent views on effectiveness (see Smart Regulation analysis) [S43][S54].
Reliance on external resources (e.g., G7 AI Hub) versus building domestic capability and avoiding vendor lock‑in.
Speakers: Speaker 3, Beatriz Vasconcellos
The G7 AI Hub was created to address constraints on foundational AI resources in the Global South by unlocking additional resources from friendly G7 countries (Speaker 3)
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
Speaker 3 outlines the G7 AI Hub as a mechanism to bring in compute, data and talent from external partners to support AI pathways in Africa, Latin America and Asia [61-66]. Beatriz warns that over-reliance on large external vendors can prevent the development of national AI capabilities and lead to strategic lock-in, advocating for building internal expertise instead [222-230]. The tension is between leveraging external assistance versus prioritizing self-reliance.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of digital divides and regional AI strategies stress the importance of domestic capacity building over dependence on external hubs, especially for developing economies [S41][S57][S49].
Unexpected Differences
Perceived ease of creating diffusion pathways versus acknowledged operational and institutional challenges.
Speakers: Speaker 2, Beatriz Vasconcellos
It sounds easy: “go and build it” – diffusion pathways can be created without much friction (Speaker 2)
There are significant operational challenges, vendor lock‑in risks, and the need for coordinated procurement and capacity building (Beatriz Vasconcellos)
Speaker 2 suggests that building diffusion pathways is straightforward and mainly a matter of implementation [197-199], whereas Beatriz highlights concrete obstacles such as vendor lock-in, the need for shared services, and capacity constraints [204-207][222-230]. The contrast between an apparently simple rollout and the complex realities on the ground was not anticipated given the overall collaborative tone of the discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent workshops on diffusion pathways acknowledge implementation hurdles and highlight the gap between aspirational roadmaps and on-the-ground institutional capacity [S54][S55].
Overall Assessment

The panel shows broad consensus on the need to scale AI from pilots to production and to avoid vendor lock‑in, but diverges on the preferred architecture and governance model—government‑led DPI with strict standards versus open‑source, multi‑stakeholder rails. Additional tension exists around the role of external assistance (G7 AI Hub) versus building domestic capacity. These disagreements reflect differing national experiences and strategic priorities, suggesting that a one‑size‑fits‑all roadmap may be difficult to achieve without flexible, context‑specific pathways.

Moderate – while all participants share the overarching goal of AI diffusion, they propose distinct routes (centralized government standards, open‑source modularity, external resource hubs). The implications are that policy coordination will need to accommodate multiple models and negotiate trade‑offs between standardisation, sovereignty, and openness.

Partial Agreements
All speakers agree that AI must move beyond pilots to production at scale, but differ on the primary lever: Janet stresses early government co‑design and shared infrastructure; Beatriz promotes centralized procurement; Saurabh focuses on data readiness and standards; Speaker 2 highlights horizontal enablers and a target‑driven pathway framework. The common goal is scaling AI, yet the routes proposed vary. [76-82][211-216][15-16][53-54]
Speakers: Janet Zhou, Beatriz Vasconcellos, Saurabh Garg, Speaker 2
Institutional involvement early in design, coupled with inclusive, trustworthy public infrastructure, is essential for moving pilots to production (Janet Zhou)
Brazil’s centralized procurement and shared‑service model streamlines AI deployment across ministries, reducing duplication and accelerating scale‑up (Beatriz Vasconcellos)
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, with proper metadata, quality assessment, unique identifiers, and standards (Saurabh Garg)
The “100 AI diffusion pathways by 2030” concept stresses horizontal enablers (language, compute, talent) to move AI from invention to impact (Speaker 2)
Both agree that vendor lock‑in is a problem and that AI ecosystems should remain open and diverse. Beatriz proposes building domestic capacity and limiting external vendor dependence, while Speaker 2 calls for multi‑model, open‑source solutions and knowledge sharing to achieve the same end. [222-230][231-236]
Speakers: Beatriz Vasconcellos, Speaker 2
Reliance on external vendors risks lock‑in and hampers domestic capability building; governments should nurture home‑grown AI talent and retain strategic control (Beatriz Vasconcellos)
Promoting multi‑model, open‑source approaches and sharing know‑how prevents concentration of Western LLMs and supports a competitive, diverse AI ecosystem (Speaker 2)
Takeaways
Key takeaways
AI should be treated as Digital Public Infrastructure (DPI), requiring trust, interoperability, and shareability similar to Aadhaar and UPI.
Four foundational AI resources (compute, data sets, talent, and models) must be democratized; data readiness is critical and includes discoverability, trustworthiness, interoperability, and usability.
The “100 AI diffusion pathways by 2030” initiative emphasizes horizontal enablers (language, compute, talent) and sector‑specific use‑cases to move AI from invention to impact.
Early involvement of governments and institutions in design, along with shared public rails (identity, payments, data exchange), is essential to overcome “pilotitis” and achieve production‑scale deployment.
Open‑source, multi‑model approaches (e.g., METRI, MOSIP) are preferred to avoid vendor lock‑in and to build domestic AI capability.
Multilingual and voice AI are seen as key equalizers that can broaden inclusion for underserved populations.
Co‑architecting pathways through public‑private partnerships and multi‑stakeholder collaboration (G7 AI Hub, XTEP, Gates Foundation) is necessary for sustainable diffusion.
Resolutions and action items
Proposal to develop the METRI platform (Multi‑stakeholder AI for Resilient and Trustworthy Infrastructure) as a voluntary, modular framework for sharing compute, data, models, and talent. Commitment to continue the G7 AI Hub effort to unlock compute, data, and talent resources for the Global South. Adoption of the AI use‑case adoption framework (linking sectoral impact with horizontal unlocks) to guide scaling of pilots. Brazil to proceed with centralized procurement and shared‑service model for AI applications across ministries. Encourage nations to adopt open‑source ID platforms like MOSIP and to provide operational support and training for implementation. Promote multi‑model, open‑source AI solutions to mitigate vendor lock‑in and foster local capability building.
Unresolved issues
Specific governance mechanisms and standards for ensuring AI data trustworthiness and privacy across jurisdictions. Detailed financing and business models for building compute infrastructure (e.g., data centers) in the Global South. Concrete timelines, metrics, and accountability structures for achieving the “100 AI diffusion pathways” target by 2030. How to create and operationalize safety guardrails and playbooks for voice/health AI interactions. Mechanisms for cross‑border sharing of AI use‑cases while respecting data sovereignty. Extent of required regulatory reforms to support AI DPI and prevent vendor lock‑in.
Suggested compromises
Adopt a modular, voluntary approach for METRI rather than a mandatory, one‑size‑fits‑all solution. Use centralized procurement and shared services to reduce duplication while allowing ministries to retain flexibility in implementation. Combine public‑sector standards with private‑sector innovation, encouraging co‑design of pathways rather than imposing top‑down solutions. Promote open‑source, multi‑model ecosystems to balance the need for advanced capabilities with the desire to avoid dependence on single vendors.
Thought Provoking Comments
AI is perhaps something like a solution in search of a problem… we need to ensure it becomes a trusted, interoperable and shareable Digital Public Infrastructure (DPI) like Aadhaar or UPI.
Frames AI not just as a technology but as a public infrastructure that must meet standards of trust, interoperability and scalability, shifting the conversation from isolated use‑cases to systemic foundations.
Set the agenda for the rest of the panel, prompting others to discuss foundational resources (data, compute, talent, models) and how to democratize them. It led directly to the detailed discussion of data‑readiness criteria and the METRI platform.
Speaker: Saurabh Garg
We read about AI diffusion – the invention happened in the West, but the impact must happen in the Global South. That’s why we aim for 100 diffusion pathways by 2030.
Introduces the central metaphor of ‘diffusion pathways’ and explicitly positions the Global South as the engine of impact, reframing the problem from technology creation to equitable adoption.
Triggered the round‑table on how different regions (Kenya, Italy, India) can contribute, leading to Kizom’s explanation of G7 AI Hub and the subsequent focus on cross‑border collaboration.
Speaker: Speaker 2 (Shalini)
Data is the raw material for AI models. For it to be AI‑ready it must be discoverable, trustworthy, interoperable and usable across systems while preserving privacy.
Provides a concrete, four‑point framework for data readiness, moving the discussion from abstract resource needs to actionable criteria.
Guided later speakers (Kizom, Janet, Beatriz) to reference data interoperability, data ecosystems, and standards as essential steps, deepening the technical layer of the conversation.
Speaker: Saurabh Garg
The problem of ‘pilotitis’ predates AI. Scaled impact comes when governments are at the design table from the start, making infrastructure trustworthy and inclusive.
Links the recurring issue of pilots stuck in limbo to a systemic solution—early government involvement and institutional trust—offering a clear remedy rather than just diagnosing the problem.
Shifted the tone from lamenting pilots to proposing concrete governance mechanisms, influencing Beatriz’s description of Brazil’s centralized procurement and Janet’s later MOSIP analogy.
Speaker: Janet Zhou
In Brazil we are building a ‘one government for each person’ vision: shared data platforms, thematic data ecosystems (early childhood, environment), and centralized chatbot services built on a common digital ID.
Illustrates a real‑world, nation‑scale implementation of the DPI concepts discussed earlier, showing how data interoperability and shared services can move from pilot to production.
Provided a concrete case study that other panelists referenced when talking about rails, modularity, and the need for centralized services, reinforcing the DPI narrative.
Speaker: Beatriz Vasconcellos
Digital public infrastructure should be invisible. AI should sit on existing rails (UPI, DigiLocker, identity) so that users never notice the AI layer, and we must create new rails (multilingual voice stacks) that converge globally.
Reframes the goal of AI diffusion as seamless integration rather than a visible add‑on, emphasizing the importance of modular, interoperable rails and multilingual accessibility.
Prompted a discussion on the emergence of new rails (voice, language) and the need for convergence, leading to the mention of Zindi’s data‑science network and the broader conversation about cross‑border standards.
Speaker: Speaker 3 (Kizom)
The MOSIP analogy: even after you have a road, you need agreed signs, side‑of‑the‑road rules, operational support, and financing to make it usable for everyone.
Uses a familiar infrastructure metaphor to explain the layers of standards, capacity‑building, and financing needed beyond the technical platform, making the abstract concept tangible.
Reinforced the earlier point about rails and added a practical roadmap for implementation, influencing the later discussion on vendor lock‑in and capacity building.
Speaker: Janet Zhou
Vendor lock‑in is a major risk. Governments must build their own AI muscles instead of repeatedly outsourcing to large vendors, otherwise strategic capabilities are lost.
Highlights a systemic challenge that could undermine the whole diffusion effort, shifting the conversation toward sustainable capability development and procurement reform.
Triggered a response from Speaker 2 about multi‑model approaches and the need for choice, and reinforced the panel’s consensus on building domestic capacity.
Speaker: Beatriz Vasconcellos
The AI adoption framework (use‑case adoption framework) shows that vertical impact (education, health, climate) depends on horizontal unlocks (language data, compute, interoperable data), and we must co‑design pathways that fuse both.
Synthesizes earlier points into a structured framework, providing a roadmap for moving from pilots to scale and linking sectoral needs with foundational resources.
Served as a concluding turning point, aligning all previous contributions into a unified strategy and giving the audience a concrete tool to think about diffusion pathways.
Speaker: Speaker 3 (Kizom)
Overall Assessment

The discussion evolved from a high‑level framing of AI as a nascent technology to a nuanced blueprint for turning AI into a trusted, interoperable public infrastructure. The most pivotal moments were Saurabh Garg’s articulation of AI as DPI and the data‑readiness framework, Janet Zhou’s ‘pilotitis’ diagnosis with a governance solution, and the concrete national examples (Brazil’s shared data ecosystem and the MOSIP analogy). These comments redirected the conversation from abstract possibilities to actionable standards, cross‑border collaboration, and capacity‑building, ultimately converging on a unified AI adoption framework that ties vertical sectoral impact to horizontal foundational resources. The panel’s flow was repeatedly reshaped by these insights, moving the tone from problem‑identification to solution‑design and setting a clear agenda for future diffusion pathways.

Follow-up Questions
How will the 100 AI diffusion pathways to 2030 pan out for the Kenya‑Italy‑India partnership and what does it mean for each partner?
Understanding the concrete implementation steps and expected outcomes for this tripartite collaboration is essential to gauge feasibility and scalability.
Speaker: Speaker 2 (Shalini)
How can AI move from pilot projects to production‑scale deployment across multiple geographies? Is funding the only barrier or are additional diffusion pathways needed?
Identifying the systemic factors beyond financing that prevent pilots from scaling is crucial for achieving widespread impact.
Speaker: Speaker 1
How can trust be established in AI advisory outputs versus the institutions delivering them? What governance models enable institutions to adopt and trust AI advice?
Trust is a prerequisite for adoption; clarifying the relationship between institutional credibility and AI outputs will inform policy and design.
Speaker: Speaker 1
What is the current state of AI adoption in Brazil, are pilots stuck in the pilot‑to‑production gap, and how can that gap be bridged?
Brazil’s experience can provide lessons for other regions; understanding barriers to scaling will help design effective interventions.
Speaker: Speaker 1
Are there ‘rails’ analogous to DPI that can guide cross‑border AI use‑cases? What playbooks or pathways can different countries adopt to leverage shared AI resources?
Defining reusable frameworks and standards can accelerate diffusion and ensure interoperability across nations.
Speaker: Speaker 1
What is the single hardest challenge in operationalising AI diffusion pathways, considering human factors and implementation realities?
Pinpointing the top obstacle helps prioritize interventions and allocate resources efficiently.
Speaker: Speaker 2
Can you describe the ‘use‑case adoption framework’ and how it can act as a friction‑remover for AI diffusion?
A clear articulation of this framework could provide a practical tool for stakeholders to move from pilot to scale.
Speaker: Speaker 2
What mechanisms are needed to make datasets ‘AI‑ready’ (discoverable, trustworthy, interoperable, usable) and how can standards be developed and adopted internationally?
Standardizing data readiness is foundational for trustworthy AI and requires coordinated research and policy work.
Speaker: Saurabh Garg
What is the detailed structure and governance model of the proposed METRI platform, and how will it operationalise multi‑stakeholder AI resources?
Clarifying METRI’s design is necessary to assess its potential as a modular, voluntary infrastructure for AI democratization.
Speaker: Saurabh Garg
What is the business case for building data‑center and GPU capacity on the African continent, and how can data silos be broken despite abundant local data?
Understanding economic incentives and technical solutions for local compute infrastructure is key to reducing reliance on external providers.
Speaker: Speaker 3 (Kizom)
How can public‑private partnerships and digital public goods be structured to ensure sustainable, inclusive AI services without vendor lock‑in?
Research into governance and procurement models can help nations retain strategic AI capabilities while leveraging external expertise.
Speaker: Beatriz Vasconcellos
What are effective strategies for building national AI talent and capability rather than over‑relying on external vendors?
Developing domestic expertise is critical for long‑term sovereignty and resilience of AI deployments.
Speaker: Beatriz Vasconcellos
How can multilingual voice AI be developed as a public good and integrated into existing DPI to promote inclusion and equity?
Voice interfaces can bridge language gaps; research is needed on scalable, open‑source voice models and their deployment.
Speaker: Speaker 3 (Kizom)
What metrics and evaluation frameworks should be used to monitor progress of the ‘100 AI diffusion pathways’ initiative toward 2030?
Measuring impact is essential to validate the approach, adjust strategies, and demonstrate value to stakeholders.
Speaker: Speaker 2 (Shalini)
What operational support, training, and financing mechanisms are required post‑rail construction to ensure effective AI adoption, similar to MOSIP’s model?
Even with technical standards, practical implementation support is needed; studying MOSIP’s experience can inform AI rollout.
Speaker: Janet Zhou

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Democratizing AI: Building Trustworthy Systems for Everyone


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel opened by asking what the greatest obstacle is to coordinating a global AI effort, to which Dr. Saurabh Garg identified governance of sharing mechanisms, the interdependence of hardware-software-protocol ecosystems, and a shortage of talent and institutional capability as the primary challenges [6-12]. He emphasized that while infrastructure can be acquired, expertise must be developed to democratize AI worldwide [9-10].


Microsoft’s Chief Responsible AI Officer, Natasha Crampton, announced a $50 billion commitment to bring AI to the Global South by 2030, framing the initiative around five inter-linked pillars [28-33]. The first pillar focuses on building data-centre and connectivity infrastructure while respecting national sovereignty through configurable public and private cloud controls [34-41]. The second pillar targets large-scale skilling, including a program to teach AI-specific skills to two million Indian teachers, recognizing that education drives rapid technology diffusion [46-53]. The third and fourth pillars address multilingual, multicultural AI and local innovation, with collaborations such as the Lingua Africa project and partnerships with Indian AI firms to share adoption data for policy making [54-69].


Natasha stressed that AI products must be “trusted by design” and offer configurable defaults so that different jurisdictions can apply the same models within their own legal and cultural contexts [80-86]. She noted that conflicts between jurisdictions can be mitigated by open-source models and a robust partner ecosystem that enables local adaptation [90-93].


Peter Mattson of ML Commons argued that the current bottleneck for AI is reliability, which can only be improved through common, industrial-scale benchmarks and metrics [108-124]. He described federated evaluation techniques, such as the MedPerf healthcare benchmark, that allow diverse data sets and legal regimes to be tested securely and at scale [135-137]. Both Mattson and Dr. Garg highlighted that measuring AI performance and energy use, and developing domain-specific models, are essential for widening diffusion and reducing compute costs [310-312][328-330].


The discussion also touched on the role of open data, with Wendy Hall noting that while not all data can be fully open, shared repositories and UN-backed data-governance frameworks are critical for trustworthy AI development [305-311]. Participants concluded that achieving trustworthy, inclusive AI will require coordinated governance, sustained investment in infrastructure and skills, culturally aware technologies, open benchmarking, and rigorous measurement to guide policy and practice [1-5][70-74][94-100][321-324].


Keypoints

Major discussion points


Global coordination and governance challenges for AI collaboration – The panel opened with a question about the biggest hurdles in international AI work, to which Dr. Garg highlighted resource sharing, inter-dependence of hardware-software-protocol layers, governance of sharing mechanisms, and the need for talent and institutional capability [5][6-12]. Later, Justin asked how differing national laws (e.g., the “Brussels effect”) can be reconciled, and Natasha explained Microsoft’s need to embed configurable controls so each jurisdiction can apply the technology safely [75-78][80-90].


Microsoft’s five-pillar strategy for AI diffusion to the Global South – Natasha described a $50 billion commitment structured around (1) infrastructure (data-centres, connectivity, sovereignty controls) [33-40]; (2) skilling (e.g., training 2 million Indian teachers) [46-53]; (3) multilingual & multicultural AI (Lingua Africa, safety benchmarks for Hindi, Tamil, etc.) [54-61]; (4) support for local innovation and data sharing with policy makers [62-69]; and (5) partnerships with governments and other funders [42-45].


Trustworthiness, reliability and the need for robust measurement – Both Dr. Garg and Natasha stressed that trustworthy AI requires reliable systems and governance [7-12][80-90]. Peter Mattson expanded on this, arguing that AI’s biggest barrier is reliability, which must be demonstrated through industrial-grade, multilingual safety and security benchmarks, federated evaluation, and continuous measurement [106-136][138-146][328-331].


Open data, open-source models and collaborative ecosystems – The panel repeatedly linked openness to trust: Microsoft’s open-weight model family and open-source releases empower ecosystems [92-93]; Wendy highlighted the importance of open data while noting that not all data can be fully public, and called for cross-border data-sharing mechanisms and registries [305-311]; Peter echoed that open benchmarks and open-source models are essential for sovereign capability building [92-93][106-112].


Inclusion, equity and societal impact – Wendy warned that AI discussions often exclude women, children and marginalized groups, stressing the need for “all-inclusive” governance [258-270]; the participant from the Gates Foundation emphasized language, edge computing, sustainability, and reaching the “bottom 50%” of the population to avoid new divides [183-229]; Natasha’s teacher-training initiative also illustrated a focus on equitable skill development [51-53].


Overall purpose / goal of the discussion


The panel was convened to explore how the global AI community can democratize and responsibly diffuse AI technologies, especially to the Global South, by addressing governance, infrastructure, talent, measurement, and inclusivity. Speakers presented concrete initiatives (Microsoft’s $50 bn plan, ML Commons benchmarks, UN data-governance work) and debated the policies and technical standards needed to build trustworthy, sovereign AI capabilities worldwide.


Tone of the discussion


The conversation began with a formal, appreciative tone (thanks, acknowledgments) and quickly shifted to a constructive, solution-focused dialogue about challenges and concrete strategies. Throughout, participants remained optimistic and collaborative, interspersed with occasional informal remarks and humor (e.g., jokes about “the man who drank bleach”). By the end, the tone became reflective and rallying, emphasizing collective responsibility and calling for continued measurement and open collaboration, culminating in a warm, appreciative closing.


Speakers

Dr. Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India; AI governance expert focusing on resource sharing, interdependence of AI ecosystem, and talent development [S1].


Natasha Crampton – Microsoft’s first Chief Responsible AI Officer; leads the Office of Responsible AI; drives AI diffusion to the Global South and oversees AI infrastructure, skilling, multilingual AI, and policy measurement [S4].


Participant – Representative of the Gates Foundation (identified in the transcript as “Dr. Aya”); discusses philanthropic support for trustworthy AI in low-infrastructure settings, focusing on health, agriculture, edge computing, sustainability, and open-source models.


Wendy Hall – Dame Wendy Hall, Regius Professor of Computer Science and Associate Vice-President, International Engagement, University of Southampton; Director of the Web Science Institute; former member of the UN high-level expert advisory body on AI; involved in UK AI measurement and security initiatives [S10].


Peter Mattson – President and founder of ML Commons; Senior Staff Engineer at Google; former head of Programming Systems and Applications at NVIDIA; works on open benchmarks, reliability, multilingual safety, and federated evaluation [S12].


Justin Carsten – Moderator and panel host; leads discussion on AI democratization, governance, and measurement.


Additional speakers:


Dr. Clark – Mentioned in the closing rapid-fire round; likely an AI researcher or policy expert (specific role not detailed in the transcript).


Dr. Aya – Gates Foundation representative (identified as the “Participant” above); senior figure in the foundation’s health and agriculture AI initiatives.


Harish – Referred to by name during the rapid-fire segment; appears to be the same individual as the “Participant” representing the Gates Foundation, though the transcript treats the name separately.


Brad – Cited by Justin as having given a speech earlier in the summit; no direct remarks recorded in this transcript.


Tim Berners-Lee – Mentioned by Wendy Hall in reference to the invention of the web; not an active speaker in this session.


Nigel Shadbolt – Referenced by Wendy Hall regarding a prior review; not a speaker in this session.


Vint Cerf – Mentioned as an intended participant who could not attend; not a speaker in this session.


Ms. Asha – Name called by Justin near the end, but no spoken contribution recorded.


Full session report: Comprehensive analysis and detailed insights

The session opened with moderator Justin Carsten thanking the audience and the panelists and framing the discussion around the difficulty of coordinating a truly global AI effort. He asked the working-group chair what the biggest obstacle to such international collaboration might be [5][13-15]. Dr. Saurabh Garg responded that the most pressing problems lie not in the physical hardware alone but in the governance of sharing mechanisms, the inter-dependence of hardware, software and protocol layers, and the scarcity of talent and institutional capability to manage these resources [6-12]. He stressed that while data-centre infrastructure can be purchased, the expertise required to operate it responsibly must be cultivated [9-10].


Carsten then highlighted the political backdrop of the summit – noting the photograph of Prime Minister Modi with tech leaders and the presence of Microsoft – before introducing Microsoft’s first Chief Responsible AI Officer, Natasha Crampton, who leads the Office of Responsible AI [16-20][21-24]. Crampton announced that Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South, organising the effort around five inter-linked components [28-33].


The first component concerns the construction of data-centre and connectivity infrastructure that respects national sovereignty. Microsoft plans to invest in new data-centres and power-grid upgrades while offering both public-cloud and private-cloud options that embed “sovereignty controls” for host countries [34-41]. Crampton stressed that these facilities will be co-designed with government partners to ensure agency for the nations that house them [42-45].


The second component targets large-scale skilling. Recognising that technology diffusion historically follows education, Microsoft will train two million Indian teachers in AI-specific skills, partnering with national standards bodies to embed AI literacy at the grassroots level [46-53].


Components three and four focus on multilingual, multicultural AI and local innovation. Microsoft is collaborating with ML Commons to extend safety benchmarks to Hindi, Tamil, Malay, Japanese and Korean, and has launched the “Lingua Africa” initiative to collect rich, locally-sourced spoken-language data in partnership with the Gates Foundation [54-61][62-69]. These efforts aim to ensure AI systems operate correctly in the languages and cultural contexts of end-users, thereby supporting home-grown solutions and informing policy through shared adoption data [64-69].


The fifth component underlines the necessity of deep partnerships with governments, NGOs and other funders, acknowledging that the scale of required infrastructure cannot be met by the private sector alone [70-74].


When Carsten raised the “Brussels effect”, the tendency of EU regulations such as GDPR to become de facto global standards, Crampton explained that Microsoft builds its models “trusted by design” with configurable defaults, allowing each jurisdiction to adjust controls to meet local legal and cultural requirements [75-79][80-90]. She added that open-weight model families (Microsoft’s “five families of models”) enable ecosystems to adapt technology without compromising sovereignty [91-93].


Peter Mattson of ML Commons shifted the conversation to reliability, arguing that the principal barrier to AI adoption is not capability but trustworthiness. He called for industrial-grade, repeatable benchmarks and described “federated evaluation” – exemplified by the MedPerf healthcare project – which tests models across disparate data sets while preserving privacy through confidential compute [106-124][135-137]. Mattson warned that turning experimental benchmarks into dependable, multilingual safety and security standards is a massive technical undertaking that must be sustained over time [128-136].


Justin then introduced Harish, the Gates Foundation participant, noting a recent blog post he co-authored with Brad (“Brad’s speech yesterday… based upon a recent blog post you and Brad put out”) [70-73]. Harish outlined several practical concerns for the Global South: the need for edge-inference capabilities in low-connectivity settings such as healthcare and agriculture; worries about energy consumption and the importance of lower-parameter, low-energy models for sustainability; exploration of novel hardware architectures (e.g., “multi-parameter, multi-state compute capabilities”) as future enablers of edge AI; the centrality of open-source models because many governments cannot afford large proprietary offerings; the state-level policy variation in India (e.g., differing maternal-risk rules in Uttar Pradesh vs. Telangana) that AI tools must respect; and the broader social impact of creating jobs, avoiding a digital divide within countries, and ensuring AI benefits the bottom 50% of the population [150-180].


Dame Wendy Hall, Director of the Web Science Institute at the University of Southampton, broadened the discussion to AI metrology. She advocated for a new science of AI measurement, likening it to the National Physical Laboratory’s work on weather forecasting, and announced the UK’s Centre for AI Measurement and AI Security Institute as institutional anchors for systematic trust metrics [290-299]. Hall also highlighted the importance of open-data governance, proposing cross-border data-sharing mechanisms and global data registries while acknowledging that not all data can be fully open [305-311]. She noted the conference size (“250,000 people here”) and described a “love-hate relationship” with the event [250-255]. When asked directly about the UK’s sovereign AI strategy, Hall declined to answer and shifted to broader commentary on AI hype [252-267].


Across the panel, several points of agreement emerged. All speakers concurred that effective governance of sharing mechanisms is essential for international AI collaboration [5][6-7][70-71]; that deep, multi-stakeholder partnerships are required to deliver the five-component strategy [70-74][72-74]; that systematic measurement-whether through AI metrology institutes or industrial-grade benchmarks-is vital for trustworthy AI [290-299][306-309][328-330]; that multilingual, culturally adapted AI is a prerequisite for global diffusion [54-61][124-126][192-199]; and that large-scale skilling and talent development underpin sustainable diffusion [46-53][8-12]. Both Crampton and Mattson stressed the role of open-source models and open benchmarks in lowering entry barriers and enabling local customisation [91-93][92-93].


Nevertheless, the panel revealed notable disagreements. Hall argued that while open data is valuable, privacy and sovereignty constraints mean that only “exchangeable, shareable” datasets, rather than fully open ones, should be circulated, and she called for global data registries [305-311]; Crampton, by contrast, presented data sharing as a core component of Microsoft’s five-component plan without foregrounding such limits [64-69]. A second tension arose between Crampton’s $50 billion private-sector investment model and Harish’s view that open-source models are a more affordable route for the Global South [30-33][150-180]. Finally, Hall’s proposal for AI metrology institutions differed from Mattson’s emphasis on industrial-grade benchmark development as the primary path to reliability [290-299][106-124]; Hall also unexpectedly declined to answer the direct question about the UK’s sovereign AI strategy [252-267].


Thought-provoking remarks punctuated the discussion. Garg’s warning that governance and talent, rather than raw compute, are the real bottlenecks reframed the debate [6-7]; Crampton’s concrete $50 bn, five-component roadmap gave the panel a tangible agenda [30-33]; Mattson’s claim that “reliability, not capability, is the real barrier” and his illustration of federated evaluation provided a clear technical solution [116-124][135-137]; Hall’s call for a new science of AI metrology linked measurement to interdisciplinary collaboration [290-299]; and Harish highlighted the practical challenges of edge inference, language diversity, energy consumption and the need for open-source accessibility in low-connectivity settings [150-180][216-220].


Key take-aways


– Global AI collaboration is hampered by complex governance, talent shortages and the inter-dependence of the AI stack [5][6-12].


– Microsoft’s $50 bn, five-component plan seeks to close the North-South AI gap through sovereign-aware infrastructure, massive skilling, multilingual data collection, local innovation and policy-oriented data sharing [28-33][34-45][46-53][54-69][70-74].


– Deep partnerships with governments, NGOs and the broader ecosystem are indispensable for realising each component [70-74][72-74].


– Trustworthy AI hinges on industrial-grade benchmarks, federated evaluation and emerging AI metrology institutions [106-124][290-299][306-309].


– Open data and open-source models can lower barriers but must be balanced against privacy and sovereignty concerns [91-93][305-311].


– Efficient, domain-specific, low-energy models are needed to make AI viable in low-resource environments [150-180].


– Inclusive development-addressing language, cultural norms, gender and age representation-is essential to avoid creating new digital divides [54-61][124-126][190-208].


The panel also identified unresolved issues: designing governance frameworks that reconcile conflicting national regulations while preserving interoperability; securing sustainable financing beyond private investment; delivering reliable edge AI in low-connectivity contexts; establishing concrete, multi-dimensional metrics that link technical performance to societal impact; and creating global data-registry and cross-border sharing mechanisms that respect privacy and sovereignty. Suggested compromises included configurable defaults in AI products, hybrid sovereign-cloud models, leveraging open-source families alongside private investment, matching corporate funds with public and venture-capital contributions, and employing federated evaluation with confidential compute to enable cross-jurisdictional benchmarking.


In closing, the participants reiterated that democratising trustworthy AI will require coordinated governance, sustained investment, robust measurement, multilingual and culturally aware technologies, and inclusive talent development. Carsten praised the panel’s collaborative spirit, thanked the speakers and noted a brief round of applause before ending the session [340-345][70-74][94-100][321-324]. The consensus, though tempered by differing views on data openness and financing models, points toward a shared commitment to build an AI ecosystem that is both globally interoperable and locally trustworthy.


Session transcriptComplete transcript of the session
Justin Carsten

Thank you. Thank you so much, Dr. Garg. It really highlights one of the things about collaboration, and I’ll be talking to a number of the panelists about that. I’ve been so impressed this week at how much people are really coming together for the community. You know, this is a much bigger summit than we’ve had previously, many more people, really opening it up to everyone. But if I can just ask you one thing, because the working group that you’re doing I think is excellent, it’s going to be really important. What do you see as the biggest challenge around that? With the vast experience that you’ve got of bringing people together, do you think there are any particular challenges in coordinating that international effort?

Dr. Saurabh Garg

Of course, there would be a number of challenges. But as I mentioned, one doesn’t need to really control every layer of the resources that is there. While sharing the foundational compute resources would be a major challenge, I think a bigger challenge might be to manage the interdependence of the AI ecosystem, because it spans hardware, software, and the protocols, so to say, or the ethics around that. So I think one of the biggest challenges would be the governance around these sharing mechanisms, sharing protocols, and managing the framework. And the other would be the talent and the institutional capability, which is in a way required. Well, the infrastructure can be acquired, but expertise has to be developed.

And I think that’s critical to ensure, if you want to democratize, that the Global South is integral to that. And I think, you know, we don’t need to focus so much on whether each country is owning each layer of the AI stack, but on how one can do that. What is the capability and confidence in the systems that manage it? Do we have the required methods to ensure that it takes care of the priorities and the values that each country wants to push forward?

Justin Carsten

Thank you so much. And I agree with you. It’s a big challenge, but I’m glad that you’re there to take that forward. And this week, you may have seen the photograph of Modi here with many of the leaders in tech. And it’s a great pleasure that one of the large organizations in the private sector, Microsoft, has got representation here. So I come to you, Natasha. So Natasha Crampton is Microsoft’s first chief responsible AI officer and leads the Office of Responsible AI. And it was interesting how long that’s been going. I heard earlier this week. But she’s putting Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach. to responsible AI. The office also collaborates with internal and external stakeholders to shape new laws, norms, and standards to help ensure that the promise of AI technologies is realized for the benefit of all.

As I said, that’s been a key theme. I saw Brad speak yesterday; it was a fantastic speech, and it was based upon a recent blog post that you and Brad put out just a couple of days ago. So can you tell us a little bit about that in this session, for the people who haven’t had the chance to absorb it, please?

Natasha Crampton

Sure. Thank you, Justin, and it’s a pleasure to be here with the panel and the audience today. So our announcement earlier in the week was about how Microsoft is contributing to bringing AI to the global south, and the headline that you might have seen is that we’re on track to spend 50 billion US dollars to do that by the end of the decade. What we’re seeing from the diffusion data that we have access to, and that we’ve already published publicly, is that there is an urgent need to focus on diffusion, and on what it’s going to take to diffuse AI broadly and beneficially to the global south, because we are already seeing that diffusion in the global north is roughly double what we see in the global south.

And so for Microsoft, as a private sector player here, we think we have a role to play in helping to close that gap, and we see it as being centred on five different components. First, as Dr. Garg mentioned initially, we need to help build out the infrastructure that is needed for broad AI diffusion. So this is both investments in data centres to power AI applications and investments in connectivity as well. There are real electricity needs that need to be met. We’re trying to do that with an eye towards the sovereignty of countries around the world. We realise that the world is a fragmented place, and so we design our data centres, and also the services that run on top of them, with a recognition that there needs to be real agency for the countries hosting those data centres.

And so we have a range of different controls that we put into our data centres, which include sovereignty controls in public clouds. Sometimes we build private clouds. But most importantly, it’s all built on a foundation of collaborating with our government partners around the world. The scale of the infrastructure investment that’s needed is just so great that it’s really hard to see how we’ll achieve what we need to without significant private sector investment, as well as funding from a range of different sources: governments, venture capitalists and others. So the first limb is all about infrastructure. The second limb is all about skilling. What we’ve learnt from the history of diffusion of other general purpose technologies, like electricity, for example, is that the countries that succeed in these really transformative economic moments are not actually the countries that necessarily invent the new general purpose technology.

They’re the countries that diffuse and adopt that technology fastest. And if you look back at history, skilling turns out to be one of the major unlocks to that adoption and broad diffusion. So, as I said, we’ve made a range of skilling announcements. One that I’m particularly energised by myself is a very specific one focused on educating educators, to help them with an AI-driven educational future. And of course, when you teach teachers, you’re teaching students, and therefore the workforce of the future as well. So we committed to teach AI-specific skills to 2 million Indian teachers, in partnership, of course, with Indian national standards and training institutions, which is an exciting thing to me to support the future.

Third, the third limb is all about investments in multilingual and multicultural AI. You know, AI is no good to you if it does not work in the language that you speak and the culture in which you use the system. So we’ve been pleased to collaborate with Peter Mattson from ML Commons on an expansion of some safety benchmarks that ML Commons has played a key role in standing up, to represent Hindi, Tamil, Malay, Japanese and Korean. But we’re working upstream of testing and evaluation as well. So we’re pleased to announce a Lingua Africa initiative, where we are working with local communities, in partnership with the Gates Foundation and others, to really make sure that we’re collecting lots of that really rich local data with and for communities.

Much of that data is not well represented on the internet, and spoken languages in particular require that careful collection. The fourth limb is all about supporting local innovation. I think it’s critically important that, as the private sector, we really deeply understand that AI will only be meaningful in people’s lives if it’s actually solving the local problems that matter to them. So we announced some initiatives here in India and further afield that are designed to really support that local innovation. Last, we announced, as part of the New Delhi Frontier AI commitments that several leading Indian AI companies and frontier AI companies from around the world signed on to yesterday, that we’re going to be contributing our data on what we can see about adoption and usage of AI in the economy into some central projects.

Including one led by the World Bank, so that policymakers are in a good position to understand how AI is being adopted in the economy. Where are the places where it’s going faster than expected? Where are the places where it’s going slower? Because I think that kind of data is incredibly useful for policy making: it allows you to spot those places where you might need a skilling intervention or an infrastructure intervention.

Justin Carsten

That was fantastic. And if you ever want to know about really believing in something, having such a complex blog and then just reeling off the five pillars, and that really just shows that commitment, I think, that we’re seeing from Microsoft taking that leading role. And actually, collaboration has been, since Brad’s presidency really, has been one of the things that he really encouraged about saying, look, we’ve got to work together.

Natasha Crampton

Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those five pillars is really important. The coordination of those partnerships, and deeply investing in them over time, is really what’s going to give us the outsized impact here.

Justin Carsten

And if we think about this, because Microsoft is a global corporation, you’ve got lots of countries, each with, just as Dr Garg said, their own customisations. They’ve got their own local laws and regulations. And in some areas, you know, there’s something called the Brussels effect around GDPR, for example, which went pretty global, but that’s not the case for AI. How do you manage that challenge of trying to make sure that it’s broad enough but focused on the individual needs of nations? Have you come across that challenge?

Natasha Crampton

Yes, that is part of what I work on day in, day out at Microsoft, because part of my role is working very closely with our product teams to make sure that we are building our products and our models in a way that’s trusted and trustworthy by design. And so we are building products and technologies that we aim to share with the world. And it is absolutely true that not every part of the world has the same rules or expectations. And part of what we need to do is to make sure that we’re building technology in a way that has enough controls and choices that people can make, downstream of what we choose to do at Microsoft, to apply that technology in their own context.

So we ourselves do have a point of view about how we want our technology to show up in the world. So, you know, if we’re making available a service that’s got some configurable controls, we do think carefully about what the default should be. But we also really do recognize the need for that agency, and we do deeply understand that not every part of the world is homogenous. Here in India, it’s just a beautiful place to recognize the linguistic and cultural diversity of the world. Quite honestly, if we don’t build technology that can be easily adapted and applied in people’s local contexts, with their values, with their laws, we’re just missing the opportunity to have our technology reach the world.

So there are complex challenges. Sometimes there are direct conflicts between what one jurisdiction wants and what another jurisdiction has declared as a matter of law. They can be worked through, and this is partly why you also need a great partner ecosystem, right? Being able to make models available open source or on an open-weight basis, which Microsoft has long done, for example, with our Phi family of models, is another way of empowering the ecosystem to adapt and build on top of that.

Justin Carsten

Thank you so much. You mentioned ML Commons, and you touched on being culturally sensitive. And it’s interesting: there is a report that’s been released by ML Commons this week on robust and defensible benchmarks. And part of that was some great work from the Singaporean agency IMDA on how the response from an AI has to be culturally sensitive. And that’s the point that you made. I think culture is important, because what is seen as acceptable in one culture may not be in another. So that brings me nicely to Dr. Peter Mattson, who is the president of ML Commons and also a CEO. He’s a senior staff engineer at Google. He founded ML Commons himself and was previously the head of the programming systems and applications group at NVIDIA.

So on that ML Commons, I think it’s done some great work, as we’ve heard. It’s played a major role in benchmarking performance and efficiency of AI. How do you see that open benchmarks can contribute to building sovereign capabilities, Peter?

Peter Mattson

I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specific point. And the broad context I want to start with is: why are trust and reliability so vital for AI? AI has tremendous potential to change everything we do. But in order for it to do that, people need to feel comfortable adopting it. And we’re all smart; we don’t adopt things we don’t trust. You don’t give them your banking information. You don’t give them your business information. You don’t give them your medical information, or trust what they say or do about it, if they’re not reliable. And so the question becomes, how do we make AI reliable?

Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it correct? Is it secure? Is it safe all the time? And if we can make AI truly reliable, the potential for benefits to everyone around the world, and frankly the potential for businesses and markets, is fantastic. But the way that we drive that is with metrics, with evaluations. AI is an incredibly complex black box system. So to make it better, you need common yardsticks that you use to measure progress. And we need those common yardsticks backed widely for all aspects of reliability. So you alluded to the work on security with IMDA. Natasha alluded to some of the work around multilingual safety that we’re collaborating on with Microsoft, and with folks at Google as well.

These are examples of what’s necessary to drive that push towards reliability. But they’re very technically hard. This is something that I don’t think people appreciate enough. They see someone publish a paper: we made a benchmark for something, right? They made a data set and they did it once. But there’s a tremendous amount of technology needed to get to industrial-quality benchmarking, which is what we need for industrial-level reliability. We need to work to take the experiments we’re doing in multilingual benchmarking and turn those into a dependable framework that empowers people around the world to produce very high-quality multilingual safety and security benchmarks, and then to maintain and evolve them over time, right?

If ML Commons can help lift the resources there, so that people can make the choices about language and culture where they have expertise without having to grapple with the really hard technical questions of how you do AI benchmarking, we hope that could be very empowering. An example from the healthcare space: we have a MedPerf project that uses what we call federated evaluation, where it sends models out to different facilities, tests them on a small bit of data at each one, and accumulates the results. This is how you do healthcare benchmarking for reliability, for correctness, against very, very diverse data sets, potentially around the world. It’s technology like that, dependable industrial-scale multilingual safety and security benchmarking, or medical benchmarking made possible with data sets across disparate legal systems through federated evaluation and confidential compute, that we believe really unlocks that future of high-reliability systems.

Justin Carsten

That’s excellent. Thank you. And the repeated use of that term, reliable. So what we need is reliable LLMs, but we need the reliable benchmarks, as you said. And I think this point about healthcare is really interesting, because what we need, you mentioned industrial scale as well, is a process that can be trusted. And that’s one thing that I’ve found working with ML Commons: how we all come together, people from industry, many academics around the world. Just look at any of the papers released, you can go to the website, and see how many authors and how many years of expertise are donated to that effort. Where do you see, Peter, the next sort of big movements for ML Commons?

Because these yardsticks will change. You’ve done healthcare. Where do you think is the important area for you in benchmarking in the near future?

Peter Mattson

I think it’s thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we are getting very in-demand experts from some of the leading companies to contribute to this work. Like, people really care about doing AI right. That is unarguable if you look at, as you say, the author list. What we need to do is leverage that expertise to scale. It’s not enough to do a benchmark and publish a paper. We need to make that benchmark available to the industry. And today, most benchmarks are prompt-response: you ask a question, you look at the answer, you see whether it’s safe or secure or correct.

But the future, as everyone knows, is multi-turn and agentic. And so we need to drive, you know, wider and deeper at the same time. There is tremendous demand for what we do. It is tremendously resource-intensive, and…

Justin Carsten

You mentioned the work of Google, so I’m going to come to Dr. Aya from the Gates Foundation in a moment, just talking about some of the conversations. So we were hoping to have Vint Cerf, who some of you may know. I know, Wendy, you know him very well. But he doesn’t travel so much, does he? No, yeah, that’s the thing. He couldn’t travel; he’s got some issue. Our next panellist’s work aims to improve public health and economic development. He’s a strategic partner between Indian researchers (you’re based over here in India), global partners and Gates Foundation teams, in areas including vaccine-preventable diseases, disease surveillance and modelling. So thank you for joining us today. We’ve heard a little, of course, about how India has really pushed forward with its digital public infrastructure.

And we heard in the last session, which Dr. Garg was in, from Sanjay Jain, your colleague, about MOSIP, which is modelled on Aadhaar in some ways and is an open source initiative. So what I’d like to ask you is: where countries lack foundational infrastructure, what role do philanthropic organisations like the Gates Foundation play in enabling access to trustworthy AI capabilities?

Participant

Thank you so much for inviting me. This is obviously a very complex question, not fully settled, I will say for sure. Most of my experience in this field is in India. So first off, I’d like to start by saying it’s great that India is hosting this summit. It’s fantastic, and it’s showcasing a lot of the work that the country has done, the capability and the use cases that we are very closely supporting. I think the trustworthiness question, and I would say sustainability as well is another question we have to think about, is very much about what sort of models we need to have. Are they large centralized models?

Or are they dispersed, decentralized models on the edge? What do we need in countries with poor connectivity? So trustworthiness has got many aspects to it. Is it going to be ready to work when you want it to work? Suppose, again, a lot of my work is in health and agriculture and things like that. If you are a frontline worker in primary care, how do you make sure that, if they have to make inferences, they can make them, if needed, on the edge? If you are a health system person and you want to improve the working of a health system, making sure the right experts are in the right facility, the right medicines are there, patients are taken care of, there is a great opportunity to make this very high quality. But again, the question becomes: how do you access the compute? How quickly can inferences come? How easy is it to prompt? All of this matters, because if it doesn’t work well, then you lose trust.

That’s it; it just doesn’t work. The next-level question is language. I think Dr. Garg talked about it, the whole Bhashini project in India, and there are similar projects that we’ve been involved in. There’s been a lot of debate, even within the foundation, as to which models can perform well on language. Which systems can interpret something super complex? I think we heard from the other speakers about how complex this is, what works well. So trustworthiness will partly come from how systems respond, and from the lived experience in terms of simple things like: is it accessible? Is it the right language? Is it relevant? I mean, India is a continent on its own; between different states, the health system and approaches are often different based on local policies.

How does it work in terms of policy in a particular state? One thing I’m particularly familiar with is pregnancy risk stratification. We talk a lot about how to reduce maternal mortality, infant mortality, stillbirth. The rules in Uttar Pradesh, for example, may be different from the rules in Telangana. If you have a tool that supports frontline workers in understanding and improving identification of risk for pregnant mothers, how do you make sure that it works in that context? So this context is important. I think trust has all of these things built into it. I’ll also talk a little bit about sustainability questions. Sustainability also requires these kinds of questions to be answered well.

What’s the energy consumption? Are there simpler, lower-parameter, lower-energy-consuming models rather than the giant models? To me, that’s a core question. And it’s nice to know that there are researchers in the country who are thinking about that. Beyond that, can compute hardware itself look different? Beyond digital, let’s say: I saw researchers recently looking at multi-parameter, multi-state compute capabilities, and that was really fascinating. I just saw it two weeks ago because I was prepping for a bunch of meetings. Can those be great opportunities, maybe further in the future, to improve the likelihood of edge computing and edge inferences? So there’s a lot there. And then, finally, open source.

I think open source is going to be, in my mind, a critical aspect of it. You’ll have to see how far the open-source movement gains traction here. I believe this because many governments in the global south may not be able to afford the large amounts of money that may be needed over a long period of time. How do you do these use cases well? So that, I think, is going to be another aspect that allows for adoption and trust at the highest levels. Again, I’m talking about the bottom 50% of the pyramid. The top 10% of the pyramid will do what they have to do. But ultimately, to build trust, you need to get to the bottom 50% of the pyramid.

And so there are different, in quotes, “markets” here as well: people who can pay at different levels. Even within a country like India, obviously there are multiple different levels. How can you make sure that this technology can reach everybody and doesn’t create a divide, not just between global north and global south, but even within countries? You want to make sure that this doesn’t create a divide. And that’s, I think, another important part of building societal trust. The last point, which I think is also important, is: what is the impact on society of this technology? I think this is going to be an important one as well. Are you able to create jobs, employment? And there’s a meta question about how does…

Justin Carsten

Thank you so much. And we’ll come back to some of those points in a minute if I may, Harish. Because, as you may have seen, we’ve just been joined by Dame Professor Wendy Hall, someone I’ve…

Wendy Hall

Professor Dame, but I don’t mind. Carsten, you should know that. You’re a Brit.

Justin Carsten

I’m not a Dame. But if you were a Sir, it would always be Professor Sir. But if I keep being nice to you, maybe you’ll put a word in for me. So, I’ve known Wendy for a long time. She’s Regius Professor of Computer Science and Associate Vice President, International Engagement, at the University of Southampton, where she’s also Director of Web Science. There are so many accolades. She’s been a Dame Commander since 2009 and is a Fellow of the Royal Society, the Royal Academy of Engineering and the ACM, and was President of lots of those organisations, including the British Computer Society (BCS, sorry). And most notably, she was the co-chair of the UK government’s AI review and a member of the AI Council.

We’ve talked also about skills, actually, Wendy. We were both on the review with Nigel Shadbolt into computer science, if you remember. I think you were probably leading it, but I was just a member.

Wendy Hall

No, he did that one. That was Professor Sir Nigel. No, I didn’t.

Justin Carsten

Okay, okay. Anyway, you’ve been involved in advising many governments around the world and could you tell us a little bit about the UK’s approach to developing sovereign AI capabilities?

Wendy Hall

No, I’m not going to answer that question, because this is a trustworthy panel, right? And I want to talk about trustworthiness. Okay. And that’s why I was asking what the panel was about, because I’m doing three panels this morning and I’ve got a lunch date to go to, an important one. So I was asking Peter what the panel was about, and he said it’s about trustworthy AI, right? Yeah. So I want to say, if you don’t mind, Carsten: I could tell you what the UK is doing, but it’s very parochial. I’m very excited that this conference has been in India, but I have a love-hate relationship with it. It’s been a really difficult conference to navigate: 250,000 people here, but you end up talking to rooms of tens of people. Okay, it’s out on YouTube. Does AI need this sort of jamboree in the future? I don’t know. But it is fabulous to have the spotlight on India. I’m a member of the MOSIP…

Justin Carsten

Of course you are.

Wendy Hall

I’ve been involved. I’m in awe of what India has done with Aadhaar and in building the digital public infrastructure, and I want to see how that works. I would love to see how that works in the UK, but it doesn’t translate. It works in developing countries; it’s much harder to translate it to an old world that has long-established rules and regs and ways of working. Anyway, so I’m really excited it’s here. And it was fabulous also to see the young people here, because in the UK, and I think it’s probably true in most of Europe and the US, people are really worried about AI. They’re scared, because that’s what they get: they get scaremongering. They’re scared it’s going to attack them, they’re scared it’s going to wipe the world out, they’re scared they’re going to lose their jobs. Here the kids are going, wow, what an opportunity, right? And for India, I mean, that’s been an eye-opener for me. I’ve been working in India long enough to know; I helped introduce the web into India, right, the web and internet and the work I’ve done here, and I know what you can do with the power of that technology for people who can’t read and write and live in the rural areas. It’s just amazing what it does. Add AI on top of that. They’re not worried about the deepfakes yet; what they want is to get the information to their people in the fields, the farmers in the fields in rural India. I suppose deepfakes, I mean, I don’t know, but that’s not what they’re worried about at the moment. So it has been fabulous, and I love the slogan here: in India, AI is all-inclusive. But it isn’t. AI is missing out 50% of the population, right? This technology, and I’ve been fighting this sort of thing all my career, is totally male-dominated. Totally male-dominated. And I’m very sorry, but the way we talk about women’s safety: women aren’t involved in these discussions, right?

Children aren’t involved in these discussions. 50% of us are women, and we’re not involved in the discussions about keeping us safe. Actually, we need to keep men safe too, right? Men suffer from deepfakes as much as women do. Well, maybe someone’s not agreeing with that, and it could be disproportionately hitting women and children, but I don’t want to exclude the men here. So I have become even more passionate, and I talked about it in my keynote on Wednesday, not in the talk itself but in the conversation: it’s so important that this is really all-inclusive and that women are involved at the top level in the decision-making about what we do. Take, for example, the Australian experiment to stop kids under 16 using social media.

Now that is an experiment. Everything about this world is a global experiment, and people are doing different bits of it. The web was like that. The web itself, from the genius that is Tim Berners-Lee, was a worldwide experiment. There are many different ways that you could have built a hypermedia network on top of the internet. Boy, I tried to do one myself. And it was better than the web. But what Tim did was give it away, make it fantastic, make it open. And actually that led to the rise of its use. But it’s also left us with the stuff we’ve got today, because anyone can do anything on the web. So bad people can do bad things.

And bad things happen unintentionally. The unintended consequences: that’s what I called my talk on Wednesday. So, this ban on social media: we’ve got to be able to study the effects. Now, I know the Australians are. We heard Macron say here that in France it’s going to be under 15. Keir Starmer’s saying under 16, but he changes his mind on a penny, so it’ll probably change. That’s a joke for the Brits. But I think Spain has said under 16. In the US, of course, Trump says no, we won’t need to worry about safety. I made this joke in the other panel: he’s the man that drank bleach during COVID. But the point is, we have to study.

And people say, oh, it’s all moving so fast. The alpha males say that, right? The alpha males say, it’s all moving so fast, and I’m bigger, better, faster, and cheaper than you are. All that sort of alpha-male stuff. We have to think about how we actually measure the effects of what we’re doing. So, two good things have come out of the UK; this is my last point. Just this last month, the National Physical Laboratory (I’m their AI advisor, but that’s beside the point) — it’s like the UK equivalent of NIST; they do our metrology. It’s a word I’ve learnt to say very well: weather forecasting is meteorology, studying the weather. If we can do that, we can do flipping AI, because that’s complicated. The thing about AI, of course, is it’s got people in it, not just physical objects doing things; systems. So it’s harder in that sense. But the National Physical Laboratory announced two weeks ago, backed by the UK government, the Centre for AI Measurement. And the UK AI Security Institute, which was founded by Rishi Sunak at Bletchley Park, is part of the network of security institutes.

And the US (this is the man again who drank bleach during COVID) says no regulation. So we can’t talk about the network being a network of safety institutes. Why would we want to be safe? Sorry, joke. But they’ve renamed it the Network for AI Measurement and Evaluation. Now, this is brilliant. Brilliant. So with my ACM hat on, and everything else I can do in the dying embers of my career (no, it’s not dying yet), the idea is to start a science of AI that’s about AI metrology. But what we’re doing, of course, is measuring the effects of social machines, which is difficult. The social scientists have taught me how you have to gather the data.

How do you gather the evidence? And we can do it. There is time to do this; the world is not going to end at the end of this year because of AI. Other things, yes, but not because of AI. So that’s where I want to leave you, with this thought. I think if we can develop this new science, put in all our compute power and the best brains from social science and computer science and psychology and all the other disciplines we need, the law, everything, we can really start to think about how we measure trust. One of the metrics in AI metrology will be the trust factor. I’ll leave it there. Thank you very much. A round of applause, please. Thank you. And I’m ever so sorry, you can ask me, but I’ve got to go in two minutes.

Justin Carsten

I’ll ask you one thing very briefly, then: open data. You’ve been a proponent of it, right, with Tim and Nigel?

Wendy Hall

Yeah, yeah, yeah.

Justin Carsten

So I just wanted to ask: openness and collaboration are important, and we’ve talked about open source. What role do you think open data has in trustworthiness?

Wendy Hall

Well, there are two things about that. The open data movement has been really important, but not all data can be open; it can’t be. You can have data that is exchangeable and shareable that won’t necessarily be open. Another thing I’m on is the data governance working group of the UN’s CSTD, the Commission on Science and Technology for Development, and I could tell you in much more detail about that. For me, we ignore data governance at our peril when we talk about AI governance, and we’ve really got to build on that. From the UN report we did, the General Assembly accepted all the points we recommended, and they’re being implemented. That’s the other panel I should have been on today; there’s a UN panel. They accepted everything that we recommended: the global scientific panel, the global dialogue, the global fund. And the Secretary-General yesterday asked for three billion, which is not very much, you know, for a global fund to develop AI in the Global South. But our recommendations on data governance were not accepted, because the countries would not vote for them; it’s so difficult, it’s so complicated. So another thing I’m working hard on is how we can actually do cross-border data sharing, how we get the data flows so we can actually share data sets. And another thing we need to do, which is something I want to do, is tell people where the data is. We need data repositories, or at least registries, around the world, so researchers know where the data is and can do this study. I’ll leave you with that; that’s something else on my agenda.

Justin Carsten

Thank you so much, Wendy. Thanks so much. I’m going to go to each of the panelists for just 30 seconds. I’ll start with Dr. Garg, then Harish, then Natasha, and then Peter, just to keep it brief. Just one comment for the audience about how we really push this democratizing of AI and trustworthiness.

Dr. Saurabh Garg

Yes. I think one issue, which I mentioned in the earlier panel, is that we perhaps need to give a lot more attention to the models, because more efficient models will help reduce the requirement for compute and energy, which is among the biggest costs presently. And having models which are more domain-specific would also enable better usage of those models and widen diffusion. Thank you so much.

Justin Carsten

Harish.

Participant

Just very quickly: I think real-world evidence is going to be very important in terms of, is it actually useful? I think we all assume it’s useful, but I’m talking about the social and development sector. I can imagine so many ways it’s useful, but it would be good to make sure we build evidence on how it can be trusted and, of course, be useful, and metricize this a bit more. Thank you.

Justin Carsten

Thank you. Natasha?

Natasha Crampton

Well, I think one of the points that has come out clearly in this discussion is that trustworthy AI diffusion is not going to just happen by itself. We have to make choices that lead to that outcome. And so for that reason, I am excited about these attempts at measurement in multiple dimensions: measurement of the systems, but also measurement of the changes in our economy, so that we can then start to see whether the interventions we’re putting in place are actually having the desired effect. Because we get to write this future, but we have to actively guide it. And I think data in multiple dimensions is really important. Thank you.

Justin Carsten

And the final word on measurement should go to Peter. So, Peter.

Peter Mattson

I’m going to echo the obvious point, which is that measurement is tremendously important. And then the hidden point, which is that the scope of measurement is vast. And so we need to get really good at it, both in terms of quality and cost efficiency: the efficiency with which we can implement it and with which we can evolve it. Thank you.

Justin Carsten

Could you please give a round of applause to an excellent panel. Thank you so much.

Related Resources
Knowledge base sources related to the discussion topics (20)

Factual Notes
Claims verified against the Diplo knowledge base (3)
Confirmed (medium confidence)

“Justin Carsten served as the moderator/host of the panel discussion.”

The knowledge base lists Justin Carsten as the moderator/host of the session [S2].

Correction (high confidence)

“Microsoft will commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South.”

Microsoft’s announcement referenced in the knowledge base states the company is on pace to spend $50 billion by the end of this year, not by the end of the decade [S39].

Additional Context (low confidence)

“The first component involves building data‑centre and connectivity infrastructure that respects national sovereignty, offering public‑cloud and private‑cloud options with “sovereignty controls”.”

The knowledge base discusses Microsoft’s sovereign-cloud approach and the importance of data-centre sovereignty for states, providing additional detail on how such controls are being designed [S26] and [S120].

External Sources (120)
S1
The Foundation of AI Democratizing Compute Data Infrastructure — I’m Saurabh Garg. I’m secretary in the Ministry of Statistics and Program Implementation in the Government of India.
S2
S3
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — And they could be partly technological and partly policy -based or protocol -based. And a combination of this will ensur…
S4
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S5
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those fiv…
S6
Democratizing AI Building Trustworthy Systems for Everyone — – Natasha Crampton- Participant – Peter Mattson- Natasha Crampton
S7
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Thank you so much for inviting me. I think this is obviously a very complex question, not fully settled, I will say for …
S8
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — And I think that’s something that’s very evident in this conversation. So it’s great to be part of this club. So C -Line…
S9
https://dig.watch/event/india-ai-impact-summit-2026/ai-without-the-cost-rethinking-intelligence-for-a-constrained-world — And so it’s very easy for the students who are in a school. You know, they can do their assignments in a minute or in a …
S10
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S11
Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516 — Prateek Waghre:Thank you very much for having me. I was told I have about 10 minutes, so I’ve just started my timer to m…
S12
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it …
S13
Democratizing AI Building Trustworthy Systems for Everyone — – Peter Mattson- Natasha Crampton- Participant – Peter Mattson- Wendy Hall- Participant
S14
https://dig.watch/event/india-ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — And if we think about this, because Microsoft is a global corporation, you’ve got lots of countries, each with, just as …
S15
S16
WS #84 The Venn Intersection of Cyber and National Security — Karsan Gabriel: Thank you very much. My name is Carsten. I work as the coordinator of the African Parliamentarian Ne…
S17
Overcoming the fragmentation of the digital governance: what role for the Global Digital Compact and e-trade rules? (South Centre) — Developing countries, in particular, face challenges in keeping track of discussions and negotiations related to digital…
S18
Multistakeholder Partnerships for Thriving AI Ecosystems — Not only the big players. So all those things need framework and need governance. And we have to make sure that the outc…
S19
Keynote-Surya Ganguli — Energy Efficiency: Learning from Biological Computation So, I work in a unified science of intelligence across both bra…
S20
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sha…
S21
Building Public Interest AI Catalytic Funding for Equitable Compute Access — “computer capability collaboration connectivity compliance and context”[3]. “From these discussions, there were six foun…
S22
Published by DiploFoundation — An important argument of this paper is that traditional diplomatic training is no longer adequate to address the global …
S23
Authors — Governments have a leading role to play in developing cybersecurity norms; however their challenge is that they must do …
S24
Safeguarding Children with Responsible AI — Cultural, contextual, and inclusion considerations She highlights the need for global norms that respect cultural and r…
S25
AI in 2026: Learning to live with powerful systems — Purpose-built models designed for specific domainsbegin to play a more prominent role. In healthcare, education, public …
S26
WS #43 States and Digital Sovereignty: Infrastructural Challenges — Ekaterine Imedadze: Thank you so much for amazing question. Thank you. Actually, you pointed out in the question, the to…
S27
Data centres now deemed critical national infrastructure in the UK — Great Britain has recently designated its data centres as’critical national infrastructure,’a move designed to bolster t…
S28
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — Kossi AMESSINOU: Thank you, Moderator. Data is very important for government, because when we don’t have data, we don’…
S29
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And then we are coupling that with investments in skilling. So we have made some big -number commitments around how we a…
S30
Leaders TalkX: ICT application to unlock the full potential of digital – Part I — Himanshu Rai: So, thank you for the question. You know, I’ll foreground it in a little bit of a fact about what is the m…
S31
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy Lau-Schindewolf: Sure. Yeah, it’s kind of hard to go after, you know, Elena. And that was a very, very good point an…
S32
Conversational AI in low income & resource settings | IGF 2023 — In conclusion, the analysis underscores the potential of conversational AI in addressing healthcare gaps and improving g…
S33
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — The widespread adoption of efficient infrastructure implementations across sectors is supported by arguments that model …
S34
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S35
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S37
Developing capacities for bottom-up AI in the Global South: What role for the international community? — Gurumurthy argues that mainstream AI solutions often fail Global South contexts and advocates for alternative approaches…
S38
Indias AI Leap Policy to Practice with AIP2 — Thanks, Doreen. As you can see, Doreen has spent her career in ensuring. Every country, every community has access to or…
S39
Keynote-Brad Smith — -Infrastructure investment requirements: The need for massive investment in data centers, compute power, connectivity, a…
S40
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S41
Can we test for trust? The verification challenge in AI — **Anja Kaspersen** stressed the importance of bringing technical professional organizations into governance conversation…
S42
Regulating Open Data_ Principles Challenges and Opportunities — It is also evident in the market concentration of hyperscale cloud providers whose global dominance shapes where data is…
S43
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Isabella Hampton:Thank you for the question. So the key consideration that I think organizations should make is framing,…
S44
Connecting open code with policymakers to development | IGF 2023 WS #500 — Building trust in open source was another significant argument put forth. In Nepal, for instance, there was a lack of tr…
S45
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — The research does not provide specific supporting facts in this regard, but it implies that efforts should be made to id…
S46
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — Adamma Isamade stresses the need for a multistakeholder approach in policymaking. She argues that policies often lack in…
S47
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — A recent analysis of different viewpoints on AI technologies has revealed several key themes. One prominent concern rais…
S48
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Children in Uganda primarily focused on the material aspects of fairness, while children in Japan emphasized the psychol…
S49
Driving Indias AI Future Growth Innovation and Impact — Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have hear…
S50
Multistakeholder Partnerships for Thriving AI Ecosystems — And what are the conditions that helped ensure these collaborations? Translated into sustained impact rather than… and…
S51
Building Population-Scale Digital Public Infrastructure for AI — To address this challenge, the Gates Foundation is investing in “scaling hubs” in Rwanda, Nigeria, Senegal, and soon Ken…
S52
Democratizing AI Building Trustworthy Systems for Everyone — “Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right?”[62]….
S53
Building the Next Wave of AI_ Responsible Frameworks & Standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S54
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S55
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:OK. Wonderful. So I really hear your concerns, in fact. And it’s interesting, starting by I was expecting, in …
S56
From India to the Global South_ Advancing Social Impact with AI — Darren Farrant from the United Nations Information Centre, speaking with his characteristic Australian humor about crick…
S57
Democratizing AI: Open foundations and shared resources for global impact — The model incorporates over 1,000 languages, including Swiss minority languages, addressing critical gaps in AI accessib…
S58
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Preserving multilingual societies is essential because different language structures enable different ways of thinking a…
S59
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — And I think that’s true in the short term when the ecosystem is getting prepared. But in longer term, frauds and mis -se…
S60
Building Scalable AI Through Global South Partnerships — In India, we are a subcontinental scale. There are 22 official languages and many other languages which need to be taugh…
S61
Democratising AI: the promise and pitfalls of open-source LLMs — At theInternet Governance Forum 2024 in Riyadh, the sessionDemocratising Access to AI with Open-Source LLMsexplored a tr…
S62
WS #100 Integrating the Global South in Global AI Governance — 4. Leveraging Private Sector Involvement Jill: Thank you, Fadi. I think in a nutshell, I think it’s important to ackno…
S63
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — ## Challenges and Implementation Barriers Anne Rachel: Thank you very much and good afternoon everybody. I’m actually v…
S64
AI and Digital @ WEF 2024 in Davos — AI access and control should not be exclusive to a few corporations but accessible to all, including the developing worl…
S65
The open-source gambit: How America plans to outpace AI rivals by democratising tech — The AI openness approach will spark a heated debate around the dual nature of open-source AI. The benefits are evident i…
S66
WS #208 Democratising Access to AI with Open Source LLMs — Daniele Turra: Thank you so much, Ahitha, for presenting me today. I’m so glad to be here to discuss this very importa…
S67
Artificial Intelligence &amp; Emerging Tech — Another important aspect highlighted in the analysis is the ethical considerations in AI development. It is argued that …
S68
AI adoption reshapes UK scale-up hiring policy framework — AI adoption is prompting UK scale-ups torecalibrateworkforce policies. Survey data indicates that 33% of founders antici…
S69
UK AI plan calls for AI sovereignty and bottom-up developments — The UK government has launched an ambitiousAI Opportunities Action Planto accelerate the adoption of AI to drive economi…
S70
Global AI Policy Framework: International Cooperation and Historical Perspectives — Given your role in leading AI policy at United Nations Office for Digital and Emerging Technologies, what are the AI pri…
S71
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The level of disagreement is moderate but significant for implementation. While speakers share fundamental goals of resp…
S72
Policymaker’s Guide to International AI Safety Coordination — Moderate disagreement with significant implications – while speakers share common concerns about AI safety, their differ…
S73
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Despite technical and economic opportunities, significant policy challenges remain. Chandra identified lack of coordinat…
S74
Skilling and Education in AI — Infrastructure development emerged as crucial, with investments in data centers, subsea cables, and compute capacity to …
S75
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — The convergence on skills development as a critical priority, combined with innovative approaches to infrastructure shar…
S76
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We’re investing to train everyone.”[15]. “We have over 350,000 people here, and we are growing ourselves.”[208]. “And w…
S77
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S78
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S79
Laying the foundations for AI governance — This discussion revealed both the substantial challenges in translating AI governance principles into practice and the s…
S80
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S81
Smart Regulation Rightsizing Governance for the AI Revolution — However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond …
S82
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — -Global AI Governance Alignment: The critical need for international coordination on AI regulation to avoid fragmentatio…
S83
Democratizing AI Building Trustworthy Systems for Everyone — A key component addresses multilingual and multicultural AI development, as “AI is no good to you if it does not work in…
S84
Welfare for All Ensuring Equitable AI in the Worlds Democracies — -Democratizing AI Access and Preventing Digital Divide: Concerns about AI’s economic value concentrating in Western econ…
S85
Microsoft commits $17.5 billion to AI in India — The US tech giant, Microsoft,has announcedits largest investment in Asia, committing US$17.5 billion to India over four …
S86
Building Scalable AI Through Global South Partnerships — “So the examples I’ve given of TB, government has a wonderful platform called Nikshay”[8]. “Rajasthan, as an example, ha…
S87
Discussion Report: AI Implementation and Global Accessibility — $4 billion announcement for enabling capacity building for nearly 20 million people across the world over the next two t…
S88
Can we test for trust? The verification challenge in AI — This comment fundamentally reframes the discussion by deconstructing the oversimplified concept of ‘trust’ in AI. It pro…
S89
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier:If I can say something, because that’s very important. So most of the people that went to work with me did…
S90
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Chris Albon:I think when it comes to regulation, I agree with Jim. I would love to see space for people, particularly pe…
S91
Regulating Open Data_ Principles Challenges and Opportunities — It is also evident in the market concentration of hyperscale cloud providers whose global dominance shapes where data is…
S92
Global Perspectives on Openness and Trust in AI — Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. Okay…
S93
From Technical Safety to Societal Impact Rethinking AI Governanc — This comment created a pivotal moment that shifted the discussion from theoretical safety concerns to examining the very…
S94
Open Forum #30 High Level Review of AI Governance Including the Discussion — These key comments fundamentally shaped the discussion by introducing three critical themes that transformed it from a r…
S95
Global AI Governance: Reimagining IGF’s Role &amp; Impact — – Shamira Ahmed- Paloma Lara-Castro- William Bird AI presents shared challenges and opportunities for humanity, requiri…
S96
AI Governance: Ensuring equity and accountability in the digital economy (UNCTAD) — A lot of efforts are concentrated in a handful of countries and companies Developing countries need to be included for …
S97
Next-Gen Education: Harnessing Generative AI | IGF 2023 WS #495 — Involving different stakeholders, organizations, and companies is emphasized throughout the discussions. This inclusive …
S98
Main Session on Artificial Intelligence | IGF 2023 — Audience:Okay. Hello, everybody. This is Hossein Mirzapour from data for governance lab for the record. Thank you for br…
S99
Leveraging the UN system to advance global AI Governance efforts — Daren Tang:I think the most important thing is that we need to be the platform where we are big tent and we’re inclusive…
S100
WS #97 Interoperability of AI Governance: Scope and Mechanism — 3. The need to streamline UN agencies and define clear duties (Mauricio Gibson) Mauricio Gibson emphasized the need for…
S101
Public-Private Partnerships in Online Content Moderation | IGF 2023 Open Forum #95 — Cross-cultural understanding is also important for translating research into a global aspect. Ethical considerations, in…
S102
Plenary session on CBMs and capacity building — Team Pink:Once again, let me say thanks, Mr Chair, for affording me the privilege and our group to interact. I think one…
S103
Futuring Peace in Northeast Asia in the Digital Era | IGF 2023 Open Forum #169 — The analysis then delves into the challenges of international cooperation, particularly in regions with differing stages…
S104
EU institutions are close to reaching a deal on data sharing between businesses and governments — The Data Act, a landmark law governing howdata is accessed, transferred, and shared, was the subject of an update commun…
S105
Opening of the session — El Salvador: Thank you, Chair. El Salvador, thank you for convening this session. For my country, it is essential to …
S106
https://dig.watch/event/india-ai-impact-summit-2026/building-public-interest-ai-catalytic-funding-for-equitable-compute-access — From these discussions, there were six foundational pillars that we had to address. And we thought need to form the back…
S107
Cybersecurity of Civilian Nuclear Infrastructure | IGF 2023 WS #220 — Building trust and cooperation with industry is crucial for the IAEA. While the organization has purchased commercial pr…
S108
Internet standards and human rights | IGF 2023 WS #460 — Lastly, Perkins asserts that engagement in standard development requires time, effort, and expertise. He emphasizes that…
S109
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S110
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Minister Vaishnav, Excellencies, ladies and gentlemen, let me begin by giving our thanks and expressing our sincere appr…
S111
Shaping the Future AI Strategies for Jobs and Economic Development — So, Narendra, first of all, welcome. Narendra is the MD of RackBank and NeveCloud. Narendra, you’ve heard all the challe…
S112
Microsoft Reimagine Tomorrow Summit — Microsoft held its first virtual summit tilted ‘Forward Together. Reimagine Tomorrow’ on 12-13 October 2020. During the …
S113
Microsoft to boost AI investment in South Africa — Microsoft hasannouncedplans to invest an additional 5.4 billion rand (about $296.81 million) by 2027 to enhance its clou…
S114
Brazil to benefit from major $2.7 billion Microsoft AI Investment — Microsofthas committed $2.7 billionto enhance cloud and artificial intelligence infrastructure in Brazil. The investment…
S115
Microsoft at 50 – A journey through code, cloud, and AI — Microsoft, the American tech giant, wasfounded50 years ago, on 4 April 1975, by Harvard dropout Bill Gates and his child…
S116
Conversation: 01 — Krishnan outlined the Trump administration’s three-pillar strategy developed over 13 months. The first pillar focuses on…
S117
Digital sovereignty: the end of the open Internet as we know it? (Part 1) — Perceptions are changing drastically and fast, becausethe political project of liberalism is being overridden by a neo-m…
S118
Analyst flags potential slowdown in Microsoft’s data centre expansion — Microsoft has reportedlyscrapped leasesfor significant data centre capacity in theUnited States, raising concerns about …
S119
Microsoft to invest in Sweden’s digital transformation and open sustainable data centers in 2021 — Microsoft has announced its plan to invest in Sweden’sdigital transformation and open sustainable data centresin 2021. T…
S120
Day 0 Event #270 Everything in the Cloud How to Remain Digital Autonomous — Bullwinkel acknowledged the legitimacy of sovereignty concerns whilst emphasising that trust forms the foundation of all…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Saurabh Garg
3 arguments · 132 words per minute · 297 words · 134 seconds
Argument 1
Governance complexity and sharing mechanisms (Garg)
EXPLANATION
Garg highlights that coordinating AI resources across nations faces significant governance hurdles, especially around how foundational computing resources are shared. Effective governance structures and sharing protocols are essential to manage the interdependent AI ecosystem.
EVIDENCE
He notes that while foundational computer resource sharing is a major challenge, a larger issue is managing the interdependence of the AI ecosystem across hardware, software, and protocols, and stresses the need for governance around sharing mechanisms and protocols [6-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The challenges of fragmented digital governance and the need for comprehensive frameworks are highlighted in discussions on digital governance fragmentation [S17] and multistakeholder partnership governance [S18].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
AGREED WITH
Justin Carsten
Argument 2
Need for talent and institutional capability to manage AI ecosystems (Garg)
EXPLANATION
Garg argues that merely acquiring infrastructure is insufficient; developing expertise and institutional capacity is crucial for sustainable AI collaboration. Talent development and institutional capability are required to operationalize shared AI resources.
EVIDENCE
He points out that while infrastructure can be acquired, expertise must be developed, emphasizing the importance of talent and institutional capability for managing AI ecosystems [8-12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Garg’s emphasis on capability development and talent is reflected in the AI democratization report that notes his prioritization of talent and domain-specific models [S1] and in the identified talent pillar of public-interest AI funding frameworks [S21].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
Argument 3
More efficient, domain‑specific models reduce compute and energy costs, aiding diffusion (Garg)
EXPLANATION
Garg suggests that creating more efficient, domain‑specific AI models can lower compute and energy requirements, making AI more affordable and easier to diffuse globally. This approach can also help address the high cost of large models.
EVIDENCE
He states that focusing on more efficient models will reduce compute and energy costs, and that domain-specific models enable better usage and wider diffusion across regions [310-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The push for less-power domain models is supported by Garg’s prioritization of such models in the democratizing compute report [S1] and by broader industry trends toward purpose-built domain models [S25].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
N
Natasha Crampton
6 arguments · 140 words per minute · 1432 words · 611 seconds
Argument 1
$50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton)
EXPLANATION
Microsoft has pledged a $50 billion investment by 2030 to accelerate AI diffusion to the Global South, organized around five strategic pillars: infrastructure, skilling, multilingual AI, local innovation, and data for policy. Each pillar targets a specific barrier to AI adoption.
EVIDENCE
She announces a $50 billion commitment and outlines the five pillars: building data-centre and connectivity infrastructure, large-scale skilling, multilingual and multicultural AI, supporting local innovation, and providing data for policy makers, all aimed at closing the AI gap between the Global North and South [30-33][33-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five-pillar strategy with sovereignty-aware infrastructure is described in the democratizing AI report outlining the five-pillar approach [S2] and echoed in the six-pillar framework of catalytic funding initiatives [S21].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
DISAGREED WITH
Participant
Argument 2
Deep partnerships with governments and NGOs are required to deliver each pillar (Crampton)
EXPLANATION
Crampton stresses that the success of each pillar depends on strong collaborations with governments, NGOs, and other stakeholders. Partnerships enable sovereign‑respectful infrastructure, funding, and local relevance.
EVIDENCE
She describes designing data centres with sovereignty controls and collaborating with government partners worldwide, noting that significant private-sector and governmental funding are needed, and later affirms that none of the five limbs is possible without deep partnership [38-42][72-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of deep government and NGO partnerships is emphasized in multistakeholder partnership discussions that call for governance frameworks and open-source outcomes [S18].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
Argument 3
Configurable controls enable AI products to respect diverse legal and cultural contexts (Crampton)
EXPLANATION
Crampton explains that Microsoft builds AI products with configurable controls, allowing downstream users to adapt them to local laws, cultural norms, and values. This flexibility is key to ensuring trust and relevance across jurisdictions.
EVIDENCE
She notes that Microsoft builds technology with enough controls and choices for downstream adaptation, carefully considers defaults while recognizing the need for agency and local context, and emphasizes that without such adaptability the technology would miss global reach [80-87].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI guidelines stress cultural and contextual adaptability, aligning with configurable controls for legal diversity [S24], while sovereignty-aware design of data-centre services also highlights such flexibility [S2].
MAJOR DISCUSSION POINT
Microsoft’s Five‑Pillar Strategy for AI Diffusion to the Global South
Argument 4
Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton)
EXPLANATION
Crampton outlines investments in data centres and connectivity, emphasizing that designs incorporate sovereignty controls so host nations retain agency over their infrastructure. This respects fragmented global regulations and promotes trust.
EVIDENCE
She describes investments in data centres and connectivity, the need to meet electricity requirements, and the inclusion of sovereignty controls and private-cloud options to give countries agency over hosted data centres [33-39].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sovereignty-controlled data-centre investments are detailed in the five-pillar description [S2] and reinforced by discussions on digital sovereignty challenges and the classification of data centres as critical infrastructure [S26][S27].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
AGREED WITH
Wendy Hall
Argument 5
Large‑scale skilling programmes, e.g., training 2 million teachers in India, to drive diffusion (Crampton)
EXPLANATION
Crampton highlights a targeted initiative to educate 2 million Indian teachers on AI, recognizing that teacher training cascades knowledge to students and the future workforce, thereby accelerating AI adoption.
EVIDENCE
She states that Microsoft committed to teach AI-specific skills to 2 million Indian teachers in partnership with national standards and training institutions, linking educator training to broader workforce development [51-54].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Large-scale skilling commitments in India, targeting millions of learners, are reported in the scaling initiative for 10 million Indians by 2030 [S29] and contextualized within higher-education missions [S30].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
Argument 6
Culturally and linguistically adapted AI, with configurable controls, ensures relevance and acceptance in diverse societies (Crampton, Participant)
EXPLANATION
Both speakers stress that AI must work in local languages and cultural contexts, and that configurable controls allow adaptation to varied legal frameworks. This cultural and linguistic alignment is essential for trust and widespread uptake.
EVIDENCE
Crampton details collaborations to expand safety benchmarks for Hindi, Tamil, Malay, Japanese, Korean and the Lingua Africa initiative to collect rich local data, emphasizing the need for AI to work in users’ languages and cultures [54-61]; participants add that language support, local policy differences, and culturally appropriate models are critical for trustworthiness [192-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for multilingual, culturally adapted AI is highlighted in inclusive AI for development discussions [S20], responsible AI cultural considerations [S24], and low-resource AI deployment studies emphasizing language support [S32].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
P
Participant
3 arguments · 158 words per minute · 1007 words · 381 seconds
Argument 1
Edge compute, language support, and sustainable low‑energy models are critical for trust in low‑connectivity settings (Participant)
EXPLANATION
The participant argues that for regions with poor connectivity, AI must run on edge devices, support local languages, and be energy‑efficient to maintain reliability and user trust. Sustainable, low‑parameter models are needed to ensure accessibility.
EVIDENCE
He discusses the need for dispersed, decentralized models on the edge, language suitability, reliable inference for frontline workers, and the importance of lower-parameter, lower-energy models to address sustainability and trust in low-connectivity environments [190-208].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on AI in low-income settings underscores the importance of edge compute, local language support, and low-parameter, energy-efficient models for trustworthy deployment [S32] and links these needs to broader sustainability concerns [S33].
MAJOR DISCUSSION POINT
Infrastructure, Skilling, and Local Innovation as Foundations for AI Adoption
Argument 2
Open‑source models lower cost barriers, making AI accessible to the Global South (Participant)
EXPLANATION
The participant emphasizes that open‑source AI reduces financial barriers for governments in the Global South, enabling broader adoption where funding is limited. Open‑source also fosters local innovation and trust.
EVIDENCE
He notes that many governments in the Global South cannot afford large, long-term costs, and that open-source can help them adopt AI use cases more affordably [216-220].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Investments in open-source to make AI outcomes widely available are discussed in multistakeholder partnership frameworks [S18], and open-source is identified as a key pillar in catalytic funding for equitable compute access [S21].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
DISAGREED WITH
Natasha Crampton
Argument 3
Culturally and linguistically adapted AI, with configurable controls, ensures relevance and acceptance in diverse societies (Crampton, Participant)
EXPLANATION
The participant adds that language and cultural relevance are vital for trust, citing examples of state‑specific policies and the need for AI tools to adapt to local regulations and linguistic nuances. This complements Crampton’s emphasis on multilingual AI.
EVIDENCE
He references the Bhashini project, state-specific rules in Uttar Pradesh vs. Telangana, and the necessity for AI tools to work in the appropriate language and cultural context to be trusted [192-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for multilingual, culturally adapted AI is highlighted in inclusive AI for development discussions [S20], responsible AI cultural considerations [S24], and low-resource AI deployment studies emphasizing language support [S32].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
W
Wendy Hall
2 arguments · 156 words per minute · 1740 words · 667 seconds
Argument 1
Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall)
EXPLANATION
Hall describes the creation of UK AI measurement bodies, such as the Centre for AI Measurement and the AI Security Institute, to develop systematic metrics for AI trustworthiness. She likens this effort to metrology in physical sciences.
EVIDENCE
She explains that the National Physical Laboratory, acting as the UK equivalent of NIST, announced the Centre for AI Measurement and the AI Security Institute, aiming to build a science of AI metrology and develop trust metrics like a ‘trust factor’ [290-299].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
DISAGREED WITH
Peter Mattson
Argument 2
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall)
EXPLANATION
Hall argues that while open data is valuable, not all data can be fully open; instead, mechanisms for cross‑border sharing, data registries, and robust data governance are needed to balance openness with privacy and sovereignty concerns.
EVIDENCE
She notes that the open data movement is important but limited, emphasizes the need for exchangeable but not necessarily open data, mentions her work with the UN CSTD data governance working group, and calls for global data registries to facilitate research while respecting privacy [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The five-pillar approach includes open data governance with sovereignty controls [S2], and broader data-governance pillars emphasize cross-border sharing and registries while balancing privacy [S21].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
DISAGREED WITH
Natasha Crampton
P
Peter Mattson
2 arguments · 172 words per minute · 901 words · 314 seconds
Argument 1
Open, industrial‑grade benchmarks are necessary to make AI systems reliable (Mattson)
EXPLANATION
Mattson stresses that reliable AI requires common, industrial‑grade benchmarks that go beyond academic prototypes. Such benchmarks provide consistent yardsticks for safety, security, and performance across the industry.
EVIDENCE
He explains that AI reliability hinges on common yardsticks and that moving from experimental benchmarks to industrial-quality benchmarking is essential, highlighting the need for dependable multilingual safety and security benchmarks and noting the repeated emphasis that a benchmark must be made available to industry, not just published in a paper [121-124][132-164].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
DISAGREED WITH
Wendy Hall
Argument 2
Open benchmarks and federated evaluation enable reliable, privacy‑preserving testing across jurisdictions (Mattson)
EXPLANATION
Mattson presents federated evaluation, exemplified by the MedPerf project, as a way to test AI models on distributed data while preserving privacy and complying with diverse legal regimes. This approach supports trustworthy, cross‑jurisdictional AI deployment.
EVIDENCE
He describes the MedPerf project that uses federated evaluation to send models to different facilities, test on local data, and aggregate results, illustrating how federated evaluation and confidential compute enable reliable, privacy-preserving benchmarking across varied legal systems [133-137].
MAJOR DISCUSSION POINT
Open Data, Open Source, and Cultural/Legal Adaptation
J
Justin Carsten
2 arguments · 81 words per minute · 1457 words · 1070 seconds
Argument 1
Collaboration across nations is essential for progress (Carsten)
EXPLANATION
Carsten underscores that the scale of the summit and the willingness of many stakeholders to work together illustrate the importance of international collaboration for AI progress. He frames collaboration as a key challenge and opportunity.
EVIDENCE
He remarks on the larger summit, the increased openness, and the need to coordinate international working groups, asking what the biggest challenges are and praising the collaborative spirit [5][70-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Multistakeholder partnership discussions stress the importance of international collaboration and shared governance frameworks for AI ecosystem development [S18].
MAJOR DISCUSSION POINT
Governance and Coordination Challenges in International AI Collaboration
AGREED WITH
Natasha Crampton, Peter Mattson
Argument 2
Systematic measurement is vital for democratizing trustworthy AI (Carsten)
EXPLANATION
Carsten calls for robust measurement frameworks to ensure AI systems are trustworthy and democratically accessible. He emphasizes that measurement guides policy and validates interventions.
EVIDENCE
He urges the panel to focus on democratizing AI and trustworthiness, noting the need for measurement to assess impact and guide policy, and later thanks the panel for their contributions [306-309][332-334].
MAJOR DISCUSSION POINT
Trustworthiness, Reliability, and Measurement of AI Systems
AGREED WITH
Wendy Hall, Peter Mattson, Natasha Crampton
Agreements
Agreement Points
Governance and coordination challenges are a central barrier to international AI collaboration
Speakers: Dr. Saurabh Garg, Justin Carsten
Governance complexity and sharing mechanisms (Garg) · Collaboration across nations is essential for progress (Carsten)
Both speakers highlight that effective governance structures and sharing protocols are critical to manage the interdependent AI ecosystem across countries, and that coordinating such efforts is a major challenge [6-7][5][70-71].
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors calls for multistakeholder governance frameworks highlighted in discussions on AI ecosystem partnerships [S50] and reflects the coordination bottlenecks identified in national data-centre planning in India [S73]. It also aligns with the broader governance disagreement noted in international AI policy analyses [S71].
Deep partnerships with governments, NGOs and ecosystem players are essential to deliver AI diffusion pillars
Speakers: Natasha Crampton, Justin Carsten, Peter Mattson
Deep partnerships with governments and NGOs are required (Crampton) · Collaboration across nations is essential for progress (Carsten) · Partner ecosystem needed for adaptable models (Mattson)
All three emphasize that none of the AI diffusion components can succeed without strong, multi-stakeholder partnerships, including government, NGOs and a vibrant partner ecosystem [72-74][5][70-71][91-92].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of public-private-NGO collaborations is underscored by the multistakeholder partnership principles in [S50] and by the Gates Foundation’s scaling-hub model that channels government funding through regional partners in Africa [S51]. Private-sector involvement in capacity-building is further emphasized in the Global South forum [S62].
Systematic measurement and metrics are vital for trustworthy and democratic AI
Speakers: Wendy Hall, Peter Mattson, Justin Carsten, Natasha Crampton
Establishing AI metrology and measurement institutes (Hall) · Open, industrial‑grade benchmarks are necessary (Mattson) · Systematic measurement is vital for democratizing trustworthy AI (Carsten) · Measurement important for guiding interventions (Crampton)
The panel converges on the need for robust, standardized measurement frameworks, ranging from AI metrology institutes to industrial-grade benchmarks, to assess trustworthiness and guide policy and deployment [290-299][328-330][306-309][320-324].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for common yardsticks and industrial-grade benchmarks for AI reliability are documented in expert testimonies on trustworthy AI [S52], the development of testing frameworks for AI systems [S53], and the emphasis on interoperable protocols for trustworthy AI [S54].
Multilingual and culturally adapted AI is essential for global diffusion
Speakers: Natasha Crampton, Participant, Peter Mattson
Culturally and linguistically adapted AI (Crampton, Participant) · Open, industrial‑grade benchmarks include multilingual safety (Mattson)
All agree that AI must work in local languages and cultural contexts, with benchmarks and initiatives specifically targeting multilingual safety and data collection to ensure relevance and trust [54-61][192-199][124-126].
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual AI initiatives covering over 1,000 languages have been showcased as a way to close accessibility gaps [S57]; the need to preserve multilingual societies and decolonize AI is discussed in [S58]; India’s experience with 22 official languages illustrates large-scale cultural adaptation [S60]; and open-source models are highlighted for supporting diverse linguistic contexts [S64].
Skilling and talent development are crucial for AI diffusion
Speakers: Natasha Crampton, Dr. Saurabh Garg
Large‑scale skilling programmes (Crampton) · Need for talent and institutional capability (Garg)
Both stress that building expertise-through massive teacher training programmes and broader talent development-is a prerequisite for sustainable AI adoption [51-54][8-12].
POLICY CONTEXT (KNOWLEDGE BASE)
Skill-building is a priority in national AI roadmaps, as seen in India’s AI education agenda [S74], the impact of AI on UK hiring and workforce policies [S68], and large-scale training commitments reported by industry leaders [S76].
Efficient, low‑energy models and edge compute are needed for trustworthy AI in low‑connectivity settings
Speakers: Participant, Dr. Saurabh Garg
Edge compute, language support, and sustainable low‑energy models (Participant) · More efficient, domain‑specific models reduce compute and energy costs (Garg)
Both highlight that energy-efficient, possibly edge-deployed models are essential to maintain reliability and trust where connectivity and resources are limited [190-208][310-312].
Open data and open‑source models lower barriers and support trustworthy AI
Speakers: Wendy Hall, Participant, Peter Mattson
Open data governance, cross‑border sharing, data registries (Hall) · Open‑source models lower cost barriers (Participant) · Open, industrial‑grade benchmarks (Mattson)
The speakers concur that openness-whether through data sharing frameworks, open-source models, or publicly available benchmarks-facilitates broader, more affordable, and trustworthy AI deployment [305-311][216-220][92-93].
POLICY CONTEXT (KNOWLEDGE BASE)
The democratizing potential of open‑source large language models is a recurring theme at the Internet Governance Forum and other forums [S61, S64, S65, S66], emphasizing lower entry barriers and broader trust through transparency.
Infrastructure investments must respect national sovereignty and provide configurable controls
Speakers: Natasha Crampton, Wendy Hall
Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton) · Open data governance, cross‑border sharing respecting privacy and sovereignty (Hall)
Both underline that AI infrastructure and data initiatives need to embed sovereignty-aware designs and configurable controls to align with diverse legal and cultural regimes [33-39][305-311].
POLICY CONTEXT (KNOWLEDGE BASE)
The UK AI Opportunities Action Plan explicitly calls for AI sovereignty and bottom-up development of infrastructure [S69]; broader sovereign-aware infrastructure considerations are discussed in UN-level AI policy dialogues [S70] and in calls to decolonize AI systems [S58].
Similar Viewpoints
Both see multilingual safety benchmarks and culturally adapted AI as foundational to trustworthy global AI deployment [54-61][124-126].
Speakers: Natasha Crampton, Peter Mattson
Culturally and linguistically adapted AI (Crampton, Participant) · Open, industrial‑grade benchmarks include multilingual safety (Mattson)
Both argue that standardized, industrial‑grade measurement tools are essential for AI reliability and trustworthiness [290-299][328-330].
Speakers: Wendy Hall, Peter Mattson
Establishing AI metrology and measurement institutes (Hall) · Open, industrial‑grade benchmarks are necessary (Mattson)
Both stress that multi‑stakeholder collaboration is the backbone of successful AI diffusion initiatives [5][70-71][72-74].
Speakers: Justin Carsten, Natasha Crampton
Collaboration across nations is essential for progress (Carsten) · Deep partnerships with governments and NGOs are required (Crampton)
Unexpected Consensus
Both a UK academic (Wendy Hall) and a Microsoft executive (Natasha Crampton) prioritize sovereignty‑aware infrastructure and measurement despite differing regional perspectives
Speakers: Wendy Hall, Natasha Crampton
Open data governance, cross‑border sharing respecting privacy and sovereignty (Hall) · Building data‑centre infrastructure and connectivity while respecting national sovereignty (Crampton)
It is surprising that a UK-based academic, who initially declined to discuss UK sovereign AI, aligns closely with Microsoft’s emphasis on sovereignty-aware data-centre design and measurement, indicating cross-sector convergence on respecting national legal frameworks while promoting openness [305-311][33-39].
POLICY CONTEXT (KNOWLEDGE BASE)
Their focus on sovereignty-aware infrastructure aligns with the UK AI sovereignty strategy outlined in the national AI plan [S69] and with international discussions on sovereign-respectful AI infrastructure in UN forums [S70].
Agreement between a private‑sector leader (Peter Mattson, MLCommons) and a public‑sector academic (Wendy Hall) on the necessity of industrial‑grade benchmarks for AI reliability
Speakers: Peter Mattson, Wendy Hall
Open, industrial‑grade benchmarks are necessary (Mattson) · Establishing AI metrology and measurement institutes (Hall)
Despite coming from different sectors, both converge on the need for rigorous, standardized benchmarking infrastructure to underpin trustworthy AI, a point not explicitly raised by other participants [328-330][290-299].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for industrial-grade benchmarks is reinforced by expert commentary on common yardsticks for AI reliability [S52] and by the development of testing frameworks that enable benchmark-based evaluation of AI systems [S53].
Overall Assessment

The panel shows strong convergence on several fronts: the need for robust governance and coordination mechanisms; the centrality of deep, multi‑stakeholder partnerships; the importance of systematic measurement and benchmarking; the necessity of multilingual, culturally aware AI; and the role of skilling, efficient models, and open data/open‑source approaches. These shared positions cut across private‑sector, public‑sector, academic and civil‑society perspectives.

High consensus across most themes, indicating a shared understanding that trustworthy AI diffusion requires coordinated governance, partnership, measurement, and contextual adaptation. This broad agreement suggests that future policy and investment initiatives are likely to find common ground, facilitating collaborative action toward equitable AI deployment.

Differences
Different Viewpoints
Extent and openness of data sharing for trustworthy AI
Speakers: Wendy Hall, Natasha Crampton
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall) · $50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton)
Wendy Hall argues that while open data is valuable, not all data can be fully open and stresses the need for exchangeable data, cross-border sharing mechanisms and global registries to balance openness with privacy and sovereignty [305-311]. Natasha Crampton emphasizes large-scale data sharing for policy making as part of Microsoft’s five-pillar strategy, presenting data sharing as a key component of AI diffusion without highlighting limits on openness [64-69]. The two positions differ on how openly data should be shared and the mechanisms required.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates over open data versus security concerns are reflected in discussions on the dual nature of open-source AI, including potential vulnerabilities [S65], as well as arguments for open-source models to democratize access while managing trustworthiness [S61, S64].
Preferred mechanism to enable AI adoption in the Global South – massive private‑sector investment vs. open‑source models
Speakers: Natasha Crampton, Participant
$50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton) · Open‑source models lower cost barriers, making AI accessible to the Global South (Participant)
Natasha Crampton outlines a $50 billion private-sector commitment by Microsoft, structured around five strategic pillars to close the AI gap between North and South [30-33]. The Participant counters that open-source AI models can reduce financial barriers for governments in the Global South, suggesting a lower-cost, community-driven approach instead of relying on large private investments [216-220]. This reflects a disagreement on the primary pathway to democratize AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Private-sector scaling-hub investments in Africa illustrate the massive investment pathway [S51], while capacity-building arguments stress the role of private actors [S62]; contrastingly, open-source LLM initiatives advocate for low-cost, locally adaptable models as a complementary route [S61, S64].
Primary approach to measuring and ensuring AI trustworthiness – AI metrology vs. industrial‑grade benchmarks
Speakers: Wendy Hall, Peter Mattson
Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall) · Open, industrial‑grade benchmarks are necessary to make AI systems reliable (Mattson)
Wendy Hall proposes building AI metrology institutions (e.g., Centre for AI Measurement, AI Security Institute) to develop systematic trust metrics akin to physical-science metrology [290-299]. Peter Mattson argues that reliable AI requires common, industrial-grade benchmarks and federated evaluation frameworks, emphasizing the need for robust benchmarking beyond academic prototypes [121-124][132-164]. The two experts differ on whether measurement should focus on metrology institutions or on benchmark development.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between metrology-style measurement and benchmark-driven evaluation mirrors the call for common yardsticks and industrial-grade benchmarks in trustworthy AI discussions [S52, S53] and is highlighted as a source of disagreement in AI governance roadmaps [S71].
Unexpected Differences
Refusal to address UK sovereign AI strategy and off‑topic commentary on AI hype
Speakers: Wendy Hall, Other panelists (e.g., Justin Carsten, Natasha Crampton)
Open data governance, cross‑border data sharing, and global data registries support trustworthy AI while respecting privacy (Hall) · Wendy Hall declines to answer about the UK’s sovereign AI approach, instead making jokes about AI scaremongering and broader societal impacts (Hall)
When asked to describe the UK’s approach to sovereign AI capabilities, Wendy Hall explicitly refuses to answer and shifts the discussion to personal remarks about AI hype, scaremongering, and societal experiments, which was unexpected given the panel’s focus on trustworthy AI diffusion [252-256][258-267]. This departure created a surprising divergence from the expected substantive discussion.
POLICY CONTEXT (KNOWLEDGE BASE)
The relevance of the UK sovereign AI strategy is underscored in the official AI Opportunities Action Plan [S69] and in analyses of UK AI policy impacts on hiring and workforce planning [S68], making avoidance of the topic a notable divergence from established policy discourse.
Overall Assessment

The panel shows broad consensus on the importance of trustworthy AI diffusion, yet diverges on the means to achieve it, ranging from governance and talent development, large private‑sector investments, open‑source models, and benchmarking to AI metrology. Disagreements are most pronounced around data openness, funding models, and measurement strategies, while an unexpected deviation occurs when Wendy Hall sidesteps a direct question about sovereign AI policy.

Moderate disagreement: while all participants share the same overarching goal, they propose distinct pathways, leading to substantive but not antagonistic conflicts. The implications suggest that coordinated policy will need to reconcile these differing approaches—balancing governance, investment, open‑source, and measurement frameworks—to build a cohesive global AI strategy.

Partial Agreements
All speakers agree that trustworthy AI diffusion is essential, but they propose different primary levers: Garg stresses governance and talent development; Crampton focuses on massive investment across five pillars; Mattson emphasizes industrial‑grade benchmarking; Hall calls for AI metrology and trust metrics; the Participant highlights edge compute, language support, and low‑energy models for reliability in low‑connectivity environments [6-7][30-33][121-124][290-299][190-208].
Speakers: Dr. Saurabh Garg, Natasha Crampton, Peter Mattson, Wendy Hall, Participant
Governance complexity and sharing mechanisms (Garg). $50 B investment and five pillars: infrastructure, skilling, multilingual AI, local innovation, data for policy (Crampton). Open, industrial-grade benchmarks are necessary to make AI systems reliable (Mattson). Establishing AI metrology and measurement institutes creates systematic trust metrics (Hall). Edge compute, language support, and sustainable low-energy models are critical for trust in low-connectivity settings (Participant).
Takeaways
Key takeaways
* International AI collaboration faces complex governance and coordination challenges, especially around sharing mechanisms, talent development, and institutional capability.
* Microsoft announced a $50 billion, multi-pillar strategy to accelerate AI diffusion to the Global South, focusing on infrastructure, skilling, multilingual/cultural AI, local innovation, and data for policy making.
* Deep partnerships with governments, NGOs, and other private-sector actors are essential to deliver each pillar and respect national sovereignty and cultural contexts.
* Trustworthiness and reliability of AI depend on industrial-grade, open benchmarks and systematic measurement (AI metrology), as advocated by ML Commons and the UK's new AI measurement institutes.
* Open data, open-source models, and federated evaluation are critical for cross-border testing, privacy preservation, and lowering cost barriers for low-connectivity regions.
* Efficient, domain-specific and low-energy models are needed to reduce compute and energy costs, facilitating broader diffusion.
* Inclusive development, addressing language diversity, cultural norms, and gender and age representation, is necessary to avoid creating new digital divides.
Resolutions and action items
* Microsoft will invest $50 billion by 2030 to build data-centre infrastructure, improve connectivity, and support sovereign cloud options.
* Microsoft commits to up-skill 2 million Indian teachers on AI-driven education in partnership with national standards bodies.
* Launch of the Lingua Africa initiative to collect and curate multilingual data with local communities and the Gates Foundation.
* Microsoft and partner AI companies will contribute adoption and usage data to a World Bank-led central project for policy insight.
* ML Commons will advance industrial-scale, multilingual safety and security benchmarks and develop federated evaluation tools for sectors such as healthcare.
* The UK's National Physical Laboratory will establish the Centre for AI Measurement and the AI Security Institute to create AI metrology standards.
* Participants called for creation of global data registries and cross-border data-sharing frameworks under UN data-governance initiatives.
Unresolved issues
* How to design and enforce governance frameworks that reconcile conflicting national AI regulations while maintaining interoperability.
* Sustainable financing models for the massive infrastructure required in the Global South beyond private-sector investment.
* Technical and policy solutions for delivering trustworthy AI on edge devices in low-connectivity environments.
* Concrete mechanisms for measuring the real-world impact of AI interventions and linking those metrics to policy decisions.
* Balancing open-data benefits with privacy, security, and sovereignty constraints; specifics of cross-border data sharing remain undefined.
* Ensuring inclusive participation (gender, age, regional) in AI governance and development processes.
* Standardizing and scaling benchmark maintenance to keep pace with rapidly evolving AI capabilities.
Suggested compromises
* Implement configurable controls and default settings in AI products so jurisdictions can adapt models to local laws and cultural values.
* Combine sovereign (private) cloud deployments with shared public-cloud resources to respect national data sovereignty while leveraging economies of scale.
* Leverage open-source model families to lower entry barriers for the Global South, while allowing local customization.
* Adopt a partnership model where private investment is matched with public funding and venture capital to spread financial risk.
* Use federated evaluation and confidential compute to enable cross-jurisdictional benchmarking without moving raw data.
* Develop AI measurement institutes that provide common metrics but allow region-specific extensions to address local priorities.
Thought Provoking Comments
One of the biggest challenges would be the governance around sharing mechanisms, sharing protocols, and managing the framework. And the other would be the talent and institutional capability required to develop expertise, not just acquire infrastructure.
Highlights that technical resources alone are insufficient; governance and human capital are critical bottlenecks for global AI collaboration.
Shifted the conversation from purely technical infrastructure to the need for policy frameworks and capacity building, prompting later speakers (Natasha, Peter) to discuss measurement, standards, and partnership models.
Speaker: Dr. Saurabh Garg
Microsoft will spend $50 billion by the end of the decade to close the AI diffusion gap between the Global North and South, focusing on five pillars: infrastructure, skilling, multilingual & multicultural AI, local innovation, and data sharing for policy‑making.
Provides a concrete, multi‑dimensional roadmap that links private‑sector investment to societal outcomes, introducing the notion of sovereign‑controlled data centres and education of 2 million teachers.
Set a concrete agenda that other panelists referenced (e.g., Peter’s benchmarks, Wendy’s call for measurement), and steered the dialogue toward practical implementation and the role of large corporations.
Speaker: Natasha Crampton
Reliability, not capability, is the real barrier to AI adoption. We need industrial‑grade, repeatable benchmarks—like MedPerf’s federated evaluation—to turn experimental datasets into trustworthy, globally‑usable metrics.
Frames the core problem as trustworthiness and introduces federated evaluation as a technical solution, moving the discussion from high‑level policy to concrete evaluation methodology.
Prompted deeper discussion on measurement, inspired Wendy’s remarks on AI metrology, and reinforced the panel’s focus on trustworthy AI as a measurable objective.
Speaker: Peter Mattson
We need a new science of AI metrology – a systematic way to measure trust, safety, and societal impact, similar to how the National Physical Laboratory measures weather. This requires collaboration across computer science, social science, law, and psychology.
Introduces the ambitious concept of AI metrology, linking technical measurement to societal trust and emphasizing interdisciplinary collaboration, while also critiquing current governance gaps.
Created a turning point that broadened the conversation to include measurement standards, data governance, and inclusivity, influencing subsequent remarks about open data and the need for metrics.
Speaker: Prof. Dame Wendy Hall
Trustworthiness must consider edge inference, language diversity, energy consumption, and open‑source accessibility, especially for frontline workers in health and agriculture in low‑connectivity settings.
Brings a ground‑level perspective on practical constraints—connectivity, language, sustainability—that challenge the lofty goals of AI diffusion, emphasizing real‑world usability.
Added nuance to the earlier high‑level strategies, prompting the panel to acknowledge the importance of lightweight models, multilingual benchmarks, and open‑source solutions.
Speaker: Harish (Participant, Gates Foundation)
We should give more attention to developing efficient, domain‑specific models to reduce compute and energy costs, which will also widen diffusion across regions.
Re‑emphasizes the link between model efficiency and equitable access, tying back to earlier points about talent and infrastructure while offering a concrete technical direction.
Reinforced the earlier discussion on sustainability and guided the final round‑up toward actionable research priorities.
Speaker: Dr. Saurabh Garg (closing remark)
Open data is vital but not all data can be open; we need exchangeable, shareable datasets, cross‑border data flows, and global registries so researchers know where data resides.
Balances the ideal of openness with practical privacy and sovereignty concerns, and proposes a concrete mechanism (global data registries) to support trustworthy AI development.
Extended the earlier conversation about governance and measurement, linking data accessibility directly to the ability to create reliable benchmarks and trustworthy systems.
Speaker: Prof. Dame Wendy Hall
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad celebration of collaboration to a focused examination of the concrete levers needed for trustworthy AI diffusion. Dr. Garg’s emphasis on governance and talent reframed the problem beyond hardware. Natasha’s five‑pillar plan supplied a tangible corporate commitment, which Peter then grounded in the technical necessity of reliable, industrial‑scale benchmarks. Wendy’s call for AI metrology and data‑governance frameworks broadened the scope to include interdisciplinary measurement and inclusivity, while Harish’s on‑the‑ground concerns about edge use‑cases and sustainability added practical urgency. These comments collectively redirected the panel toward actionable strategies—standardized metrics, efficient models, multilingual support, and open yet controlled data—thereby deepening the conversation and setting a roadmap for future collaboration.

Follow-up Questions
How do you manage the challenge of ensuring AI solutions are broad enough yet tailored to individual nations’ needs?
Balancing global AI standards with diverse local regulations, cultural contexts, and sovereignty concerns is critical for widespread adoption.
Speaker: Justin Carsten
Where do you see the next big movements for ML Commons, particularly which areas of benchmarking will be important after healthcare?
Identifying future focus areas will guide research priorities, funding, and community effort toward the most impactful benchmarks.
Speaker: Justin Carsten
What role does open data play in building trustworthy AI?
Understanding the benefits and limits of open data is essential for transparency, validation, and responsible AI deployment while respecting privacy and security.
Speaker: Justin Carsten
How can we develop real‑world evidence and metrics to assess the trustworthiness and usefulness of AI in health and development contexts?
Empirical evidence is needed to validate AI interventions, inform policy, and ensure that AI delivers reliable benefits in practical settings.
Speaker: Harish (Participant)
How can we create more efficient, domain‑specific AI models to reduce compute and energy costs and accelerate diffusion?
Reducing resource demands makes AI sustainable and accessible, especially for low‑resource regions, and promotes broader diffusion.
Speaker: Dr. Saurabh Garg
What measurement frameworks are needed to evaluate AI systems across multiple dimensions (technical, economic, societal) and to track the impact of interventions?
Multi‑dimensional metrics are essential for guiding, monitoring, and assessing the effectiveness of AI democratization efforts.
Speaker: Natasha Crampton
How can we improve the quality, cost‑efficiency, and scalability of AI reliability benchmarks?
Robust, affordable benchmarks are foundational for establishing trustworthy AI across industries and for continuous improvement.
Speaker: Peter Mattson
What mechanisms are required for cross‑border data sharing, data repositories, and governance to support AI development while respecting privacy and sovereignty?
Effective data governance and infrastructure enable global collaboration and trust while protecting national interests and individual rights.
Speaker: Wendy Hall
How can multilingual and culturally sensitive AI be advanced through local data collection and community involvement?
Ensuring AI works in diverse languages and cultural contexts is vital for equitable benefits and adoption worldwide.
Speaker: Natasha Crampton
What governance structures and talent development strategies are needed to manage the interdependence of the AI ecosystem globally?
Coordinated governance and capacity building are identified as major challenges for international AI collaboration and responsible deployment.
Speaker: Dr. Saurabh Garg
How can open‑source models and weight spaces empower ecosystems to adapt AI to local laws and values?
Open models enable customization, allowing jurisdictions to apply AI within their regulatory and ethical frameworks.
Speaker: Natasha Crampton
How can a science of AI metrology be developed to measure trust and other social impacts of AI systems?
Standardized metrics for trust and societal effects would support regulation, accountability, and public confidence in AI technologies.
Speaker: Wendy Hall

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Designing India's Digital Future: AI at the Core, 6G at the Edge

Session at a glance: Summary, keypoints, and speakers overview

Summary

The session focused on embedding artificial intelligence at the core of emerging 6G networks and how India can lead this transformation [10][27-30]. Ashok Kumar explained that, unlike earlier generations, the ITU’s 6G framework envisions AI as a native element across all system components, termed “ubiquitous intelligence” [27-30].


He outlined several government measures to build a robust 6G ecosystem, including subsidised TSDSI membership for startups to join 3GPP at a reduced fee of ₹10,000 [42]; the launch of a 6G Accelerated Research Program that has funded over 100 projects in terahertz, AI, semantic communications and related areas [45-48]; support through terahertz and AOC testbeds and a partnership with ANRF to evolve release-18 systems toward release-21, expected within two quarters [52-58]; and collaboration with the Bharat 6G Alliance, the DST’s RDI scheme, plus the rollout of 100 operational 5G labs across institutes to reinforce indigenous technology development [59-66][69-71].


Panelists highlighted that AI-enabled devices, from smart glasses to wearables, will generate far higher uplink traffic, shifting the traditional downlink-to-uplink ratio from around 10:1 to potentially 4:1 by 2033 [115-119][185-190]. AI-driven traffic could account for about 30% of total data volume by 2033, demanding new network capacity and spectral efficiency [126-131]. AI techniques such as DeepRx/DeepTx can improve signal decoding in low-SNR conditions, offering 25-30% capacity gains and enabling higher-order modulation [197-202]. Rajiv Saluja emphasized that most inference workloads will move to the edge, reducing centralized power consumption and creating a sovereign, end-to-end intelligence stack for every citizen [149-158][224-226][278-282]. Sandeep Sharma added that latency, coverage and a token-economy model are critical performance dimensions, and that national frameworks for data exchange, model auditing and safety guardrails are needed to scale AI responsibly [166-173][237-251][262-264].


The discussion converged on the need for an open, API-driven ecosystem, similar to India's UPI model, to ensure interoperability of AI applications across devices and operators [311-320][330-336]. Participants agreed that building a sovereign AI infrastructure, while keeping certain components open for collaboration, will lower costs and support India's goal of a wireless-first economy [267-276][286-287]. Overall, the forum concluded that coordinated government policy, industry research, and open standards are essential to realize AI-native 6G and deliver affordable intelligence to the entire nation [33-35][88-92][363-365].


Keypoints


Major discussion points


Government-driven ecosystem building for 6G and AI – The Department of Telecom (DoT) outlined a suite of initiatives to nurture a home-grown 6G stack: low-cost TSDSI/3GPP membership for startups, the “6G Accelerated Research Program” with 100+ projects, test-beds (terahertz, AOC), collaboration with the Bharat 6G Alliance, and the rollout of 100 5G labs across institutes to seed 6G research [35-44][45-53][55-63][69-73].


AI-native design of 6G – Unlike earlier generations where AI was an after-thought, the ITU 6G framework (released two years ago) embeds AI as one of six usage scenarios and defines “ubiquitous intelligence” as a core design principle, meaning every element-from user equipment to core and applications-will have native AI capabilities [26-31][27-30][28-30].


Technical implications of AI-driven traffic – Panelists highlighted a projected shift toward far higher uplink demand (from a downlink-to-uplink ratio of ~10:1 to possibly 4:1) driven by AI-enabled devices and edge inferencing, requiring larger bandwidth (≈400 MHz) and AI-enhanced RAN functions such as DeepRx/DeepTx to boost spectral efficiency by 25-30 % [114-119][125-132][185-194][195-202][186-190][191-202].


Business, societal and sovereignty considerations – The discussion moved to the need to “democratise intelligence” (making AI affordable for every citizen), the emergence of new enterprise value pools (demand analytics, workflow automation, security), and the call for a sovereign, end-to-end AI ecosystem that is built and operated within India [149-158][267-276][278-286][289-292].


Coordination, standards and open-API ecosystems – Participants stressed the importance of national frameworks, sandbox environments, and open, API-driven architectures to avoid siloed pilots, ensure safety and auditability, and enable interoperability (e.g., across devices like Meta glasses) while leveraging India’s massive data assets [237-252][309-320][330-337][338-343].


Overall purpose / goal


The session aimed to align government, industry, and academia around India’s strategic roadmap for “AI at the Core, 6G at the Edge.” It sought to (i) showcase policy and funding mechanisms that will foster indigenous 6G research and standard-setting, (ii) articulate the technical shift toward AI-native networks, and (iii) explore how this convergence can create economic value, societal benefits, and a sovereign AI-telecom ecosystem for the country.


Tone of the discussion


The conversation maintained a formal, forward-looking tone throughout, marked by optimism and a collaborative spirit. Early remarks from the government highlighted opportunity and pride (“historic opportunity,” [33-34]), while later panel exchanges remained constructive, focusing on technical challenges, shared solutions, and collective action. There was no noticeable shift to contention; the tone stayed consistently positive and solution-oriented from start to finish.


Speakers

Sandeep Sharma – Vice President and Global Head of Emerging Technologies, Network Services at Tech Mahindra; expertise in AI, emerging technologies, and network services. [S1][S2]


Rajeev Saluja – Vice President, 5G Radio at Reliance Jio; expertise in telecommunications, 5G/6G technology development. [S2]


Moderator – (role: session moderator); no specific title or expertise mentioned.


Radhakant Das – Head of Technology Engineering and Innovation Function for Network Solutions and Services (NSS) at Tata Consultancy Services (TCS); expertise in technology engineering, innovation, and network solutions; served as panel discussion moderator. [S6][S7]


Ashok Kumar – Director General, Department of Telecommunications, Government of India; expertise in government policy and telecom regulation. [S8][S9]


Surojeet Roy – Senior Telecommunications Leader, Head of Technology, Technology and Solutions, COE, Nokia India; expertise in telecommunications technology and network solutions. [S10]


Audience – Unnamed audience members who asked questions; no specific titles or expertise provided.


Additional speakers:


Radhika – Mentioned only in the closing remarks for handing over a memento; role and expertise not specified.


Full session report: Comprehensive analysis and detailed insights

Opening & Theme – The moderator opened the session by framing the theme “AI at the Core, 6G at the Edge” as a strategic opportunity for India to shift from a consumer of global technology to a leader in the next intelligence and connectivity frontier [1][2][10].


Keynote – Ashok Kumar


Ashok Kumar, Director-General of the Department of Telecommunications, delivered the keynote. He traced the evolution from 2G-4G (designed mainly to connect people) through NB-IoT (an afterthought machine-to-machine layer) to the 5G IMT-2020 framework, which embedded massive machine connectivity and ultra-low latency as core use cases [12-14][15-24]. He noted that AI was added retrospectively in the 5G release-15-to-release-18 cycle, whereas the ITU’s 6G framework (released two years ago) lists integrated AI as one of six usage scenarios and enshrines “ubiquitous intelligence” as a design pillar, meaning AI will be native to every element of the end-to-end system [26-30][28-30].


Government Initiatives – The Department of Telecommunications (DoT) outlined several measures to realise this vision:


* TSDSI subsidy – Start-ups can join the Telecommunications Standards Development Society, India (TSDSI) and obtain 3GPP membership for a subsidised fee of ₹10,000 (instead of the usual ₹5-6 lakh) [42-44].


* 6G Accelerated Research Program – Launched two years ago, it has funded more than 100 projects covering terahertz hardware, AI/ML algorithms, semantic communications and advanced sensing [45-49][50-52].


* Test-bed ecosystem – Includes a terahertz test-bed, an AOC test-bed, and a partnership with ANRF to evolve a release-18 system through releases 19, 20 and the forthcoming release 21 (the first 6G-specific release), expected within the next two quarters [52-58][55-58].


* Bharat 6G Alliance – Coordinates working groups on technology, spectrum and devices [59-66][63-66].


* DST-RDI inclusion – The Department of Science & Technology’s Research, Development & Innovation scheme now explicitly includes the telecom sector, securing dedicated funding for 6G-related research [60-62].


* 5G laboratory network – 100 operational 5G labs have been rolled out in academic institutes, providing a platform for seeding 6G research; Ashok Kumar urged industry to adopt one or two of these labs for joint development [69-73][71-73][70].


Panel Introduction – The moderator introduced the panelists and set the focus on technical, business and policy implications of an AI-native 6G [80].


Device & Traffic Outlook (Surojeet Roy)


Roy highlighted a new generation of AI-enabled devices-smart glasses, wearables and body-patch sensors-that will off-load inference to edge or central data-centres, creating a substantial increase in uplink traffic [115-119][121-124]. He cited Nokia Bell Labs forecasts that AI-driven traffic could rise from the current 5 % to roughly 30 % of total data volume by 2033, and that the traditional downlink-to-uplink ratio of about 10:1 may compress to around 4:1, thereby demanding higher uplink capacity [126-132][185-190].
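As a rough back-of-the-envelope illustration of what the quoted ratios imply (this calculation and the helper function are ours, not from the session): with a downlink:uplink ratio of r:1, uplink carries 1 part out of (r + 1) total parts of traffic.

```python
def uplink_share(dl_to_ul_ratio: float) -> float:
    """Uplink fraction of total traffic for a given downlink:uplink ratio.

    With a ratio r:1, uplink carries 1 part out of (r + 1) total parts.
    """
    return 1.0 / (dl_to_ul_ratio + 1.0)

# Today's ~10:1 ratio puts uplink at roughly 9% of total traffic;
# a 4:1 ratio would lift it to 20%, more than doubling uplink demand.
print(f"10:1 -> uplink share {uplink_share(10):.0%}")
print(f" 4:1 -> uplink share {uplink_share(4):.0%}")
```

Moving from ~9% to 20% means uplink capacity must more than double even if total traffic stays flat, which is why the panel stressed higher uplink capacity.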


Intelligence-Utility Vision (Rajiv Saluja)


Saluja argued that “democratising intelligence” means placing most simple, latency-sensitive inference at the edge while reserving multi-step, multi-agent workflows for the core or cloud, thus distributing power consumption and avoiding concentration in large data-centres [149-158][158-162][224-226]. He emphasized the need to “build, not rent” intelligence and proposed a sovereign, token-based AI economy in which the entire end-to-end AI value chain is Indian-owned [154-157][278-282][284-287].
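Saluja's edge/core split can be sketched as a simple placement rule. The function name and the 50 ms latency cutoff below are illustrative assumptions of ours, not figures given in the session:

```python
def place_inference(latency_budget_ms: float, workflow_steps: int) -> str:
    """Toy placement policy: simple, latency-sensitive inference runs at
    the edge; multi-step, multi-agent workflows go to the core/cloud.

    The 50 ms cutoff is an assumed illustrative threshold, not a figure
    from the panel discussion.
    """
    if workflow_steps == 1 and latency_budget_ms <= 50:
        return "edge"
    return "core/cloud"

# A single-step, 20 ms request stays at the edge; a 5-step agentic
# workflow is routed to the core regardless of its latency budget.
print(place_inference(20, 1))   # -> edge
print(place_inference(20, 5))   # -> core/cloud
```

The design point is that placement is decided per request, so simple inference load (and its power draw) is spread across edge sites rather than concentrated in large data-centres.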


AI-6G Business Impact (Sandeep Sharma)


Sharma linked AI progress to business outcomes, redefining latency as a productivity KPI and stressing that AI-driven services must be delivered through an open, API-driven architecture modelled on India’s UPI system [312-319][330-336][262-264]. He called for national data-exchange platforms that enable secure, anonymised sharing of industry data for training large language models, and for safety guardrails, model auditability and sandbox environments to ensure responsible AI deployment within telecom networks [337-344][338-344][237-252][229-236][261-264]. Sharma also suggested placing GPUs at cell-tower sites to democratise AI compute and alleviate both latency and energy pressures [219-221].


Technical Enhancements for 6G (Roy continued)


Roy noted that 6G is expected to operate with up to 400 MHz of contiguous spectrum, four times the typical 5G bandwidth, requiring a five-fold increase in spectral efficiency to achieve the projected 20-fold capacity boost [202-206][203-205]. AI-enhanced radio functions such as DeepRx/DeepTx have already demonstrated 25-30% capacity gains and enable higher-order modulation even under low signal-to-noise conditions [195-202][203-206].
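The arithmetic behind that projection is simply capacity ∝ bandwidth × spectral efficiency. A minimal sketch using the figures quoted above (the function name and the 100 MHz baseline carrier are our illustrative assumptions):

```python
def capacity_gain(bandwidth_factor: float, spectral_eff_factor: float) -> float:
    """Relative capacity gain when both bandwidth and spectral efficiency
    scale, since capacity is proportional to their product."""
    return bandwidth_factor * spectral_eff_factor

# 400 MHz versus an assumed typical 100 MHz 5G carrier is a 4x bandwidth
# factor; combined with the cited 5x spectral-efficiency gain it yields
# the projected 20x capacity boost.
print(capacity_gain(400 / 100, 5))  # -> 20.0
```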


Open vs Sovereign Ecosystem


The discussion contrasted Saluja’s vision of a sovereign token economy with Sharma’s advocacy for an open, interoperable AI layer. Both agreed that a hybrid model-open APIs for innovation combined with Indian-owned token mechanisms for critical services-would balance national interests and global collaboration [278-287][330-336].


Socio-Economic Context


Roy cited the Niti Aayog report that targets a ₹30 trillion economy by 2030 and highlighted the 490 million informal workers who could benefit from AI-driven tools in agriculture, skilled-trade assistance and other sectors [140-144].


Audience Q&A


* Interoperability & AI-API – Participants referenced the Meta glasses demo and called for an open, API-driven ecosystem akin to UPI [312-319][330-336][262-264].


* Data for LLMs – A request for a national data-exchange platform to feed large language models was echoed, with Sharma stressing anonymisation and security [337-344][338-344].


* OneEdge / Network-API Monetisation – An audience member asked about Jio/Airtel’s OneEdge initiative; Rajiv Saluja gave a brief answer and promised a detailed offline discussion [350-352].


* GPU-at-Cell-Tower – Sharma reiterated his suggestion to install GPUs at cell sites to democratise AI compute [219-221].


Unanswered / Open Issues – The panel did not quantify the exact split of AI inference across device, edge, core and cloud [120-124]; ROI metrics for AI-6G pilots in priority sectors remain to be defined [161-165]; the full 2030 roadmap-including release-21 timelines, token-economy mechanisms and sovereign data-exchange frameworks-was only sketched [55-58][277-287]; and concrete standards for interoperable AI APIs, safety guardrails and audit mechanisms are still pending [237-252][312-319][261-264].


Conclusion – The forum underscored a historic inflection point for India: AI is now embedded at the core of the forthcoming 6G architecture, and a coordinated ecosystem, spanning low-cost standards participation, research accelerators, test-beds, the Bharat 6G Alliance, DST-RDI support, 5G labs and open-API frameworks, is being assembled to realise this vision. While the panel largely agreed on the strategic direction, the debate over the balance between openness and sovereign token-based control highlights the need for a hybrid approach that safeguards national interests while fostering interoperable innovation. Next steps include finalising technical roadmaps, establishing national data-exchange and safety sandboxes, and aligning industry pilots with the imminent 6G standards to ensure affordable, AI-driven intelligence reaches every Indian citizen [27-30][88-89][45-53][55-62][330-336][262-264].


Session transcript: Complete transcript of the session
Moderator

opportunity, ensuring that India moves from being a consumer of global technology cycles to becoming a shaper of the world’s next intelligence and connectivity frontier. To kick off the discussion, I would like to invite Mr. Ashok Kumar, Deputy Director General, Department of Telecommunications, Government of India, to deliver a keynote address. Thank you.

Ashok Kumar

So my colleague panelist, the expert panelist here, the distinguished dignitaries in the hall, and other participants gathered here, Thank you,

Moderator

Mr. Ashok. Thank you, Mr. Ashok. Thank you, Mr. Ashok. So it’s

Ashok Kumar

my privilege to deliver the keynote address before such a gathering. So although the hall is like empty, but I suppose many of our participants are online. The theme of this session, AI at the Core, 6G at the Edge, captures the transformative journey which we have started now. So let me go slightly back in the history. When we rolled out 2G, 3G and 4G, the vision was to connect we, human beings, and as technology progressed, we started connecting machines and objects through innovations like NB-IoT, as all of us know. Although they were not part of the original vision and we can say that those were evolutions, extensions and maybe we can also call that as afterthought.

When the work on 5G started at ITU way back in 2012, if you recall, after three years of deliberations with all the member states, 190-plus countries, and also the sector members like industry and academia, ITU released a 5G framework, they call it IMT-2020. And for the first time, the three usage scenarios envisioned by ITU included support for massive connectivity of objects and machines and also the applications which required very, very low latency. So, what we should say is that for the first time technology was designed, it was not an afterthought, even for machines, not only for we humans but also for the machines. As we know, the 5G journey started with 3GPP release 15 and that was also delivered in three parts, right?

Just to start early, so they had three parts of releases of release 15, and then every one and a half years or two years we have the next evolution of the 5G technology. And when we reached release 18, that is also called 5G Advanced, basically AI, artificial intelligence, began to be integrated as part of 3GPP, to solve some of the network function requirements. So again, this was some sort of an afterthought, right? Because our vision was not the native integration of AI into the 5G system, but as technology evolved, we started doing that, perhaps the precursor of the 6G. The shift now which we are seeing in 6G, the story is different.

If you look at the ITU framework for 6G, which was released two years back, that has got six usage scenarios, they have envisioned six usage scenarios, and one of the usage scenarios is integrated artificial intelligence and communications. So now, the artificial intelligence is part of the initial thought itself, and more important, along with those six usage scenarios, what ITU conceived is the four overarching principles, the key design principles we can say, and one of the design principles, if you read, is ubiquitous intelligence. So when we say ubiquitous intelligence, what we mean is that every element of our end-to-end 6G system, be it the user equipment or be it the radio or be it the core or be it applications, everyone will use AI embedded natively into the system.

So while the earlier generations talked about connecting humans and objects or machines, 6G will actually connect the intelligence, as it is envisioned in the ITU document. And of course, 3GPP has started working on all those aspects. So this is a kind of historic opportunity for us in India, particularly for our ecosystem, that is, our MSMEs, startups, academia and everyone. It is an opportunity not only to participate in the standard so that our technology, our innovations, become part of the standard, but also to build our own end-to-end 6G technology stack. Since I come from the government, the Department of Telecom, I would also like to touch upon the different efforts the government is making to create a robust ecosystem of 6G research and innovation.

Of course, government alone cannot do everything, but we are making every effort. One of the important aspects is that whatever technology we are trying to develop, whatever IP we are trying to create, if that enters the 3GPP standard itself, it is good for us, because we are shaping the standards. We started doing such activities from 5G onwards; before that, we were not at all participating in telecom technology standards making. So, to support our startups on this: if a startup company wants to, say, participate in 3GPP standards, that company first has to be a member of our TSDSI and also an individual member of 3GPP, and that is a cost, right?

So, at DoT, we are supporting TSDSI so that our startups can be members of TSDSI and 3GPP at a very, very low cost of 10,000, not 5 or 6 lakh, and they can participate. That is a continuous thing which we are trying and doing. In addition to that, as we know, we have to do our own research and technology development even before the standard starts building up, and then take it to the standard. To support that activity, we came out with a scheme called the 6G Accelerated Research Program. That was floated, I think, two years back, and we have selected 100-plus 6G-related projects in different areas. Those include terahertz technology, artificial intelligence, machine learning and semantic communications.

And every aspect of sensing, every aspect of the 6G vision. Those projects are progressing, and we are trying to help them also participate in the standard. In addition to that, we have supported some 6G-related testbeds, like a terahertz testbed and one AOC testbed, which is doing very good work as of now. There are many other programs in progress. For example, recently we worked with ANRF, wherein we are trying to come out with a scheme to build an end-to-end system based on Release 18 and evolve it to Releases 19, 20 and 21. As you know, Release 21 would be the first release of 6G.

So we are trying to do that, and perhaps that will come very soon, maybe in the next two quarters. In addition to that, I would also like to mention the Bharat 6G Alliance here, because as government we are also working closely with them. The Bharat 6G Alliance has created multiple working groups on technology, on spectrum, on devices; some members of the alliance have been working on the technology, and some of the chairs of those working groups are here as part of this session as well. So, basically, the Bharat 6G Alliance is suggesting to government what should be done next to be a leader in 6G, and based on that, we are trying to shape the policies of the government.

In addition to the Department of Telecom, other ministries like MeitY are also supporting various 6G-related projects. I would mention the scheme of DST, which is RDI. Once you have a technology and perhaps you want to scale it, RDI will come in handy. We have taken up with DST that the telecom sector should be included among the sectors supported in the RDI, and the Secretary, DST, has agreed to this particular aspect, so whenever the schemes are floated, our companies and our startups in the field of telecom can actually apply. DST has also been running cyber-physical programs, so they are supporting some of the

5G and 6G related projects. One of the most important things which DoT did in previous years, which was actually announced in the budget and inaugurated by our Honourable Prime Minister, was 100 5G labs in 100 different institutes across the country. Those labs are actually operational, so those are some of the places where 6G research has also started, because once you have good knowledge of 5G and are able to develop use cases or 5G network elements yourself, perhaps you are ready to do something on 6G. So my request to the industry here, and to those who are online, is: please adopt one or two 5G labs and work with them on what more can be done in the technology area.

With this, I want to conclude my address by inviting the esteemed panelists to deliberate and provide some answers, not only to the government but also to the industry, MSMEs, startups and academia, on the way forward on this. Thank you.

Moderator

Thank you, sir. So now we are moving to our very next segment, the panel discussion. Our first speaker is Rajeev Saluja, Vice President, 5G Radio, at Reliance Jio. Also joining us is Surojeet Roy, a senior telecommunications leader and Head of the Technology and Solutions CoE at Nokia India, and Sandeep Sharma, a technology leader and AI innovator, Vice President and Global Head of Emerging Technologies, Network Services, at Tech Mahindra. The dialogue will be moderated by Radhakant Das, who heads the Technology, Engineering and Innovation function for Network Solutions and Services (NSS) at TCS. Before we start this panel discussion, I would like all the speakers to gather for group photographs, please. May I also request Ashok Kumar, sir, to be here?

Thank you. Thank you, sir.

Radhakant Das

Okay. Can we start? Great. So, good morning to our distinguished guests, my colleagues from government, industry and academia, and to all of you who are online as well. The topic, which you can read on the screen, is AI at the Core and 6G at the Edge: Designing India's Next Resilient, Innovative and Efficient Digital Frontier. We are at a historic inflection point where intelligence is the basic infrastructure on which the next evolution of this planet will continue. We have seen this up to 5G, but in 6G there is a lot of hope. We see that 6G emerges not only as a faster network, but as a distributed compute fabric.

It is going to be a platform that enables intelligence everywhere across radio, core and edge, including the satellite, which is the non-terrestrial network, and the sensor ecosystems. Devices in 6G and AI will take a major role. We will talk about how 6G payloads or designs will actually be AI-native, and how that will drive the overall objective of bringing AI and 6G together as a success. The professor has already pointed out that AI-native design will be taken into the 6G standards, which are coming up in maybe the next two quarters. It is quite optimistic, but yes, we are looking forward to it coming faster. And thanks to the government for giving all the support to the industry and academia; the Bharat 6G Alliance is also doing a great job, and our Honourable Minister and Prime Minister are actively supporting this and giving directions from time to time to take it forward.

So we will focus on the edge interfaces at scale. We will talk about semantic communications. As you would have seen, India has put forward a very strategic point of view: we will ensure that AI is energy efficient. It will not be responsible for melting the data centres; it will be power efficient, and we will ensure that every unit of compute capacity is optimally utilized, not an attitude of "we have enough compute, so we will use as much as possible." Data is the strategic fuel for this AI, and the networks, the telecom networks, not only 6G but all kinds of connectivity networks, will carry this strategic fuel to the users, to the sensors, to the cloud, to the computing systems, and deliver it.

So here we go. All our panelists are here, and their names are already on the screen behind us, so I will just start with some of the questions. Maybe we will start with Surojeet. Surojeet, I have the first question for you. In the context of India's 6G vision, where networks are expected to reason, self-organize and optimize across the ecosystem, RAN, core and edge, and of course when we say edge, it includes devices and sensors, how do you see AI transforming the RAN for 6G from day one? You may throw some light on that, please.

Surojeet Roy

Yeah, sure. So, I think we can talk about it in a few steps. The first one is the devices. We have many form factors of devices coming up. We already have smart glasses launched; we had AR and VR glasses earlier through which we could not see outside, but now we have glasses which look more like the normal glasses we wear, and those have AI functionalities. You can do a lot of work in the background, and nobody would know that you are actually looking at something else while you are talking to a person. So from the device perspective, intelligence is being built up in the devices and handsets. There was talk that maybe all these smart glasses and wearables would take away the handset, but I guess handsets are going to stay for a while, I don't know until when, but at least for the next four to five years. We will also have lots of wearables, and maybe some body patches as well which can sense your heart rate. So as a user, I see that we will have multiple devices going forward, not only one: we will have a handset, smart glasses and wearables, and all of these will be AI-enabled devices. But because of the form factor, these devices might not be able to do all the inferencing tasks on their own, which means there will be some inferencing help needed from the data centers, whether centralized or edge data centers. That means there will be lots of traffic requirements towards the network, especially in the uplink.

Radhakant Das

Okay. So, Surojeet, if you would like to expand on it a little further: the inferencing is now tiered or distributed, as I am mentioning. Taking an average application, what percentage will reside on the device or sensor side? What percentage in the RAN? What percentage in the core of the network premises? And what will go to the cloud?

Surojeet Roy

Yeah, I think we do not have those exact numbers, but if I look at the data traffic as such, the WAN traffic is going to grow maybe six to nine times from now until 2033; that is a projection we have from Nokia Bell Labs. And out of the total traffic in 2033, almost 30% will be AI driven. It can be direct AI, which may be slightly less, but there is also indirect AI, where once you use an application it drives you towards some other application, and that increases your data. Maybe right now we have 5% AI traffic, and it will go to 30%. Not in three years; I think the projection is by around 2033.

So it might get to 30%. It is getting embedded into all of our lives faster than we had thought.
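A rough back-of-the-envelope sketch of that projection (the 6-9x total-traffic growth and the 5% to 30% AI share are the speaker's round figures, not measured data): the growth in absolute AI-driven traffic compounds the total growth with the rising AI share.

```python
# Illustrative arithmetic only; all inputs are the round numbers
# quoted in the discussion (Nokia Bell Labs projection to ~2033).

def ai_traffic_multiplier(total_growth: float,
                          share_now: float = 0.05,
                          share_2033: float = 0.30) -> float:
    """How many times larger absolute AI-driven traffic becomes."""
    return total_growth * share_2033 / share_now

low = ai_traffic_multiplier(total_growth=6.0)   # 6x total, share 5% -> 30%
high = ai_traffic_multiplier(total_growth=9.0)  # 9x total, share 5% -> 30%
print(f"AI-driven traffic grows roughly {low:.0f}x to {high:.0f}x")
```

So even though the AI share "only" reaches 30%, the absolute AI-driven volume would be tens of times today's, which is what makes the uplink question below so pressing.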

Radhakant Das

So would any of you like to address how much of the inferencing you would like to see at each tier? For example, of course, the cloud has to do a large part of it.

Surojeet Roy

Yeah, on that, what I can comment, and maybe Sandeep and Rajeev can also add: first, it really depends on the use case. Take physical AI use cases like autonomous vehicles and robots. Autonomous vehicles are definitely picking up in the US and China, but if I look at India, I think it is going to take some time, because we don't follow rules; we have bad driving habits. The AI models will have to be tuned to understand how drivers drive in India. So I think autonomous vehicles will take some time. But those are the use cases, autonomous vehicles, industrial robots, maybe robotic surgeries, where you need much lower latency.

Those are the ones where the inferencing might be needed at the edge. But for normal consumers and normal use cases, we can still manage with inferencing at a central location. The main problem is establishing a centralized data center, because the power consumption, the power requirements and the site infrastructure are a major challenge, and that is why we see a trend of data centers gradually moving towards the edge, driven maybe not only by the use cases but by the infrastructure.

Radhakant Das

Yeah, a heavy dose of these data-center-related concerns has been aired over the last four days of the summit. So Rajeev, if I can come to you, the question is: are you witnessing a shift from telcos as connectivity providers to intelligence utilities, and how does your organization plan to deliver intelligence at the lowest cost?

Rajeev Saluja

Right, you know, the past decade, as Ashok sir also mentioned, was about democratizing connectivity. Today more than 99% of India's population is connected by high-speed broadband. The next decade is going to be about how we can democratize intelligence: how can the last citizen of India have the strongest intelligence ecosystem built for them? That is the whole objective towards which we are working. And as our chairman said yesterday, you cannot rent intelligence. As India, we cannot afford to rent intelligence. We need to build it, and we need to scale it. The complete infrastructure we are building goes from connectivity to the cloud to the edge, and then the intelligence ecosystem on top.

So just to add to your previous question: we believe that most of the simple agentic and inferencing workloads will get handled at the edge, and only the multi-step, multi-agent, complex workflows will get handled at the central location. But our whole focus is how we can create an end-to-end ecosystem which can ensure all-pervasive and affordable intelligence for every citizen of this country. That is the whole focus.

Radhakant Das

Thank you. So Rajeev, what you are actually referring to is that if we distribute the inferencing and the processing, the power requirement will get distributed as well, and we will not have a heavy concentration of power consumption in the data centers themselves. It is a good thought. So maybe, Sandeep, we will come to you: how do you see AI and 6G anchor use cases delivering ROI within, let's say, the next one and a half years in India's priority sectors such as BFSI, manufacturing, healthcare and mobility, and how do you define the metrics of success?

Sandeep Sharma

A fairly good question, honestly speaking. If you look at it, AI and 6G are two parallel things. They are going to merge, but as of today we see a lot of AI traffic getting generated, whether on 6G or 5G or maybe on wireline, and the pattern of AI traffic is also evolving drastically. Until now, through 2G, 3G and 4G, we thought of only voice and data as the traffic for which the network should be designed. Going forward, depending on the different types of use cases and their latency needs, the network has to be designed along three parallel dimensions. The first is latency. Latency is there in the network today, but we do not pay much attention to it because most use cases are not latency sensitive.

The other thing is coverage. Coverage is equally important, the reason being that the uplink sensitivity of the traffic is getting more and more relevant for AI-type traffic. And finally, one thing we should all be aware of: the token economy is what drives all the use cases. How many tokens you are going to consume, at what pace and at what latency, determines whether many of the use cases are efficient or not. If we bifurcate it industry by industry: for the industries which are more sensitive to delay, maybe robotic surgery in the hospital industry, or the factory floor where robots are taking over production control, latency plays an important role.

So we should be using 6G-centric or 5G-centric networks to realize latency as low as possible, so that the tokens which are exchanged are acknowledged well in time and we have a faster time to resolve. We have even observed that if you reduce latency by 10 to 20%, the efficiency improves drastically, so it is no longer just a network KPI; it is a productivity KPI for those use cases. From the coverage perspective, AI traffic is going to be more uplink-heavy and more bursty, and we need persistent traffic around it: requests will keep on coming, and that persistence in the uplink will only be achieved if you have good, reliable coverage. In scenarios where you have to do a lot of tracking and monitoring of assets, you need certain use cases realized on that coverage; those are the immediate use cases industry will look at. And finally, all these AI-specific things will only scale when you have a national framework and national sandboxes around them, so that whatever comes into the ecosystem is well tested across a diverse set of vendors, customers and ecosystem players, because AI use cases may not be like the use cases we have seen so far. So these are the three dimensions we should look at. And once we look at the economics of the token, then, coming back to the question you asked about where the inferencing should happen: I think it is not only about where but at what cost, and that defines how the inferencing traffic will shape up.

Radhakant Das

The professor has also urged some of the industries to adopt a couple of these labs, and what you are suggesting as part of a sandbox on the applications is already happening; I think the Department of Telecom and GoI should work more on that part. Okay, Surojeet, we will come back to you again. Let's say for the next four years, until 2030, how do you see the evolution, starting from the devices, use cases and traffic growth, and how do you see the impact of AI on the networks? One measure is tokens: will the number of tokens become a KPI we start using, as Sandeep has already mentioned? What is your opinion on that?

Yes.

Surojeet Roy

Yeah, I think we touched upon it, but just to be more specific: the uplink traffic is going to see a significant increase. Currently we see a downlink-to-uplink ratio of maybe 10-to-1 or 12-to-1; I don't know the exact number, but that is the range we are talking about. With these AI applications, we are predicting that this pattern will change to maybe 4-to-1: 4 in the downlink and 1 in the uplink. What that means is that you need much higher data rates in the uplink. Today the networks are not really built for that, which means lots of enhancements will be required in the network. Some of this can come from 5G-Advanced, and more enhancement will come when we go to 6G.
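To make the ratio shift concrete (a minimal sketch using the speaker's rough figures of ~10:1 today moving toward ~4:1; these are illustrative, not measured numbers), the uplink's share of total traffic roughly doubles:

```python
# Fraction of total traffic on the uplink for a given DL:UL ratio.
# Ratios are the panel's rough estimates, used here only to illustrate.

def uplink_share(dl_to_ul_ratio: float) -> float:
    """Uplink fraction = 1 part uplink out of (ratio + 1) total parts."""
    return 1.0 / (dl_to_ul_ratio + 1.0)

today = uplink_share(10.0)   # 10:1 downlink-to-uplink
future = uplink_share(4.0)   # 4:1 projected with AI workloads
print(f"uplink share: {today:.1%} today vs {future:.1%} projected")
```

Going from roughly 9% to 20% of a total that is itself growing several-fold is why the uplink needs disproportionate enhancement.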

There will be lots of improvement in spectral efficiency in the uplink, and then, using AI in the RAN, we can improve the coverage. I will give you an example of how that can be done. The communication between the transmitter and receiver involves the received signal, the interference, the noise floor and the scheduling, and there is a huge amount of data involved there. With AI, using deep learning algorithms, we can create logic which helps optimize this entire communication, and with AI this communication can be adaptive as well. We are talking about something called DeepRx and DeepTx, where Nokia is very much engaged.

We have done some initial proofs of concept, and what we have seen is that even in an environment where the signal-to-noise ratio is much worse than what, for example, 5G can decipher, using AI you can actually decipher those signals, and that can give a capacity increase of maybe 25-30%. You can also support higher-order modulation. This is going to increase the capacity of the network. And then, for the multitude of devices which will require lots of low-latency use cases and much higher capacity, we are talking about a minimum of 400 MHz of bandwidth for 6G. Today, 5G networks primarily run with, say, 100 MHz of typical bandwidth.

We are talking about 400 MHz of bandwidth which might be required, and we are talking about five times the spectral efficiency. So 5 into 4: you are talking about 20 times more capacity coming out of 6G networks. But this is an evolution, right? We are doing standalone networks right now, with Voice over NR and slicing; these will get advanced, the entire network will have slicing capabilities, voice will transform to Voice over NR, and then gradually you go towards 6G, where you will be building networks which are more AI-native.
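The "5 into 4 is 20 times" arithmetic above, spelled out (the 100 MHz vs 400 MHz bandwidths and the 5x spectral-efficiency gain are the speaker's indicative figures):

```python
# Relative capacity scales as (bandwidth ratio) x (spectral-efficiency
# gain). Figures below are the panel's indicative numbers for 5G -> 6G.

def capacity_gain(bw_now_mhz: float, bw_next_mhz: float,
                  spectral_eff_gain: float) -> float:
    """Capacity multiplier from wider bandwidth plus better bits/Hz."""
    return (bw_next_mhz / bw_now_mhz) * spectral_eff_gain

gain = capacity_gain(bw_now_mhz=100, bw_next_mhz=400, spectral_eff_gain=5)
print(f"roughly {gain:.0f}x more capacity")
```

Part of that 5x spectral-efficiency figure is what the AI-assisted receiver work (the 25-30% decoding gain and higher-order modulation mentioned above) is meant to contribute.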

Radhakant Das

A very interesting point you brought in, Surojeet. What you mentioned is very interesting; I did not think about it earlier. You were saying that even from a signal below what we can decode today, you can extract more information. Actually, we are pushing towards Shannon's limit. That is a good aspect, right? Exactly. And also, one thing if you can just throw light on: tokens are smaller packets; you just have instructions and some questions. Why should they increase the uplink burst? Ideally, they should not. There is a lot of popular talk that 6G or AI is going to reverse the traffic pattern. But why? They are just tokens.

Surojeet Roy

So I think it depends, because you have to send the contextual information. For example, you are standing somewhere and you want to send a 360-degree view of where you are to the inferencing application, so that it can help you answer whatever question you have. Sending that contextual information upwards will require a lot of data, and we are primarily not doing it today. That is the main reason: these types of tasks will increase, and that is going to increase the uplink requirement.

Radhakant Das

Rajeev, do you want to add something to this?

Rajeev Saluja

No, the only thing I wanted to add on the uplink side is that there are going to be multi-modal agents. Right now, the traffic we see is consumer-initiated, but when agents, and on top of them multi-modal agents orchestrating end-to-end workflows, start initiating the traffic, that is when the uplink also starts growing. So you will have multiple agents.

Radhakant Das

A2A traffic.

Rajeev Saluja

Yep.

Radhakant Das

Good. So Sandeep, the next question for you. There are a lot of AI-6G pilots happening; I think a lot of the organizations we have seen over the last four days of the AI Impact Summit are running them. What specific coordination mechanisms or co-creation models do you think we should work on together as industry, academia and government to ensure that these pilots align with the standards and are not built in silos, while the 6G standard is maybe two quarters, or maybe a couple more quarters, away? How do we keep standardization in perspective so it can be adopted at a later stage? Also, safety guidelines: a lot of safety issues will come. The more excited we are about how great things can happen, the more things are exposed.

And we have seen the goals, we have seen the work, what it is doing and how things can really get out of control. So on outcome-based, AI-native deployment, if you can just throw some light on that.

Sandeep Sharma

Frankly speaking, your question is so long that I am not sure how long the answer should be. I got it, just kidding. So if you look at the overall perspective, a lot of good things are being done in the country, and there are many good organizations, as we heard in the keynote; there is Biosense, there is DSDA as well. So a lot of coordination is already in place. The problem is the pilot-to-scale gap, which is not a technology gap; it is basically a gap in how we put things together in frameworks which are scalable and referenceable. As I mentioned in my previous response, we need to have national frameworks around AI-native architectures. Once that is in place, the quicker thing that can be done is to align on the fundamentals: what types of use cases are being driven, and how the data needs to flow around them.

The other part is that India, as a consumer, has a huge amount of data across industries. Data is like bread and butter for AI: there is no AI if there is no data. But the data today is siloed within industries; if you look across sectors, they do not combine their data. So a national framework for putting the data together, creating national exchanges where data can come in and people or organizations are allowed to train models with that data, would let us have models which are more industry specific. That plethora of varied data, put together, gives a very useful reference for creating frameworks which can be referenced or replicated, not only in India but globally as well.

Certain organizations are already in place to take care of that; I think more and more efforts and programs are needed. Thirdly, on safety guardrails: I think we need a framework for how AI is audited and monitored within the telecom network as well. We cannot allow a model to change a parameter in live traffic if we cannot audit it. We may need policy frameworks for intervention: if certain models are changing a network parameter, how and why they are changing it needs to be explained. And it cannot be done in isolation, because if you do it in isolation, no clarity will come. Having a national policy around this will improve the reliability and explainability of the models, so people will come together rather than creating another differentiating layer of security. Certain things should be known to the larger audience, and certain things we should all do as an industry: let's contribute more and more in these forums which the government has started, like the Bharat 6G Alliance.

I am part of the 6G use case group and work very closely with Ashok ji, and I think many things are already in place. We drafted certain white papers which could be referenced around AI and what type of implications it will bring into the network, and we collaborate well with the 100 5G labs as well. The things we have done already, we should take to the next level. Certain things can be referenced, certain things can be evolved. Not everything may have been done right, but there is an opportunity to do everything right in 6G in terms of coordination and national referenceable frameworks. Good. Have I missed anything? It was a long question.

Radhakant Das

No, no, no. Thanks for that answer. So what we are actually seeing from the security perspective is that work is already happening, and in terms of policy, these things are in place. And if I have to bring in the DPI piece, the digital public infrastructure: you have a lot of learning from all these sectors on how to deal with it. Even in the telcos, even in all the industry sectors, there is data which has not been seen in the telco ecosystems before, but which we are all still responsible for handling. I think with AI coming in, maybe in some way we need to understand the DPI of AI.

Sandeep Sharma

And just to add here, you brought up a very good point. Whenever I give a reference on the importance of an open, interoperable ecosystem, I always give the example of UPI. UPI would not have been a success if we had not promoted the open ecosystem around it. I think the same mindset is needed in the AI era.

Radhakant Das

Good, good. Thanks. So Rajeev, I have the next question for you. How does the AI-native telco change the way enterprises consume technology, and what new value pools will emerge out of it?

Rajeev Saluja

I think it's a very good question. See, Sandeep brought out a very important aspect on latency, and the second was on the uplink. First of all, 6G is the solution to both problems. Enterprises in particular will benefit a lot from the advent and confluence of 6G and AI. There are three major value pools which enterprises can derive from 6G and AI. The first is demand analysis: they will be able to analyze what kind of demand is coming. Today their data is limited to the research they do, but with the new data streams flowing in, they will be able to understand what new services they can provide to their customers and how they can embed intelligence into those new services.

The second important value pool enterprises will be able to tap is workflow automation. A lot of work which is manual today will get automated; they will be able to orchestrate end-to-end workflows, and humans will move up the value chain. The third and most important part enterprises can derive from 6G and AI is making their end-to-end processes and their end-to-end security framework more robust. See, until now, whenever we spoke about digitization of enterprises, it stopped at ERP implementation or basic business process automation.

Now, AI and 6G are going to take it to a completely different level. India is a wireless economy; we don't have deep fiber penetration in this country, so the only way enterprises can reach the last mile here is through the confluence of 6G and AI.

Radhakant Das

So, another extension to this question: how do you see sovereignty in the entire ecosystem we should deploy? When you say sovereignty, it is a very complex question. On one side we think that only if we make it open will it grow; at the same time, sovereignty is asked for by every country, or maybe the entire European continent will have only one sovereign stack. What do you see on that? How much should we make sovereign, and how much should we keep open? What is your viewpoint on this?

Rajeev Saluja

See, in this token economy into which we are going over the next five to seven years, sovereignty is going to be token sovereignty, in my point of view. It will be very important for us to build our own intelligence and then deploy and scale it. We cannot be dependent on the world to deliver intelligence to us, because that will simply be too expensive for us to handle. So, in order to make sure that intelligence reaches the last person in the most remote area and the last remote enterprise in India, the most important thing we need is to be token sovereign. We need a sovereign AI ecosystem, an end-to-end ecosystem starting from the device to the cloud to the edge to the intelligence layers on top.

This end-to-end ecosystem has to be sovereign, and we don't have an option in this.

Radhakant Das

So you are saying the end-to-end ecosystem platforms or stacks should be sovereign?

Rajeev Saluja

Yes.

Radhakant Das

The token may or may not be sovereign, or you can classify it as sovereign or as a general public one?

Rajeev Saluja

Correct, but we are basically calling it a sovereign token. What I basically mean is that right from the time a request gets initiated by an agent or by a human, to the time an inference happens and the value gets delivered to the human or to the enterprise, this entire value chain has to be made in India, has to be sovereign.

Radhakant Das

So you have some view on it?

Sandeep Sharma

I think I was just supporting him with the gesture, but honestly speaking, the level of intelligence that our country needs may not be a priority for other regions who are creating their own intelligence. So having a sovereign AI makes economic sense and also matters for the social values that we build into the system. AI is not only about telecom; AI is a bigger base. The new generation consuming these query outputs should be very well aware of what is right, and that can be ascertained only if we develop a certain sovereignty in the ecosystem for our own nation.

Radhakant Das

So while we are talking about sovereignty, we should be very specific about it. There are certain things which we need to keep open for learning from each other, for community learning across the country and across the planet. Would you like to comment on that? And maybe I would request Surojeet to comment as well.

Sandeep Sharma

I think, just to start, and Surojeet will elaborate more: the era going forward will not be an absolute one or an absolute zero. We need to look at a hybrid ecosystem that works best as a mix of types of AI, types of compute and types of inferencing. Industry and the economy are going to be more use-case and efficiency driven, so we should leverage whatever best satisfies a particular use case.

Radhakant Das

Surojeet, thank you.

Surojeet Roy

Yeah, just to add, I was reading one NITI Aayog report, where we are aiming for a 30 trillion dollar economy by 2047 as part of the Viksit Bharat initiative. It was very much mentioned there that there are approximately 490 million informal workers. For example, all these carpenters and drivers are informal workers, and they are not yet equipped with the applications which might enhance their productivity. So from that perspective, AI use cases can be significantly helpful here. We can have smart robots working in the fields to help with agriculture. Then there may be use cases where an electrician or a carpenter sends a video of the work, and AI can generate a list of the tools they need and the steps they have to come prepared with, right?

But for all this, I think the most important part is that the model needs to be trained on data which is coming from India. Because if you train the models on data coming from outside India, then maybe they are not tuned for the India-specific use cases; there will be a bias. So I think that's why it is important.

Radhakant Das

So the cultural perspective you are hitting upon, the cultural perspective has to be understood by us. So there is the bell. Are we going to have a Q&A session? Any questions? We have just two minutes left.

Audience

Can you hear? Yeah, we can. In fact, I had a lot of questions. Okay, so my first question would be around interoperability. In the mobile world, we see that whatever user equipment you buy from the market works on all the operators, right? When we are moving towards AI-related applications, we see there is a problem. I was looking at the Meta glasses they were exhibiting. A Meta glass coming out in the market will only work with Meta. So should we not think of creating some AI API sort of architecture, wherein a product created by one vendor works with different applications? It should work with Google.

It should work with Jio applications and so on. That's the first question I have. The second question is about the model training which Surojeet was trying to address. The advantage of India is that any application can scale to a billion users. That is one. And the second advantage is that we have a huge data set on many aspects. So how do we leverage these two for AI? Although we may not yet be good at the various LLMs we have today, of course new companies have started; Sarvam and others have started working on it. But we are good at having the data and the market. So how do we leverage that so that models are trained here and models are utilized here? These are the two questions I have in mind. Thank you.

Rajeev Saluja

Sir, I will attempt to answer your question. On the first part, I think Sandeep also mentioned this: the entire end-to-end ecosystem has to be open, has to be API driven and loosely coupled, so that there is no proprietary interface from one point to another. The whole work going on right now, at least in our organization, is to make sure this end-to-end ecosystem can become open, can become efficient and can scale. You brought out a second very important point about India's scale, and this scale is going to reduce the cost of intelligence. That affordability is also a very important factor for us to deliver value to our 140 crore people; that reduction in cost is a very, very important factor. The third important point which I want to make here is that when you talk of LLMs, they are important.

But delivering intelligence is not about LLMs or training LLMs; it is about delivering this entire ecosystem to the last mile. When it comes to LLMs, the way we are building intelligence in India is that these models have to be trained in every language. So every person, whether from the south, from Kerala, or from Assam, or from any state in the northeast, should be able to get this intelligence in their local language. That is the work we are doing right now as part of Jio.
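The "open, API driven and loosely coupled" design Saluja describes can be sketched in a few lines. This is an illustrative sketch only; the class and vendor names below are hypothetical and do not correspond to any actual Jio or partner interface.

```python
from abc import ABC, abstractmethod

class InferenceProvider(ABC):
    """Vendor-neutral contract: any conforming provider can be swapped in."""
    @abstractmethod
    def infer(self, prompt: str) -> str: ...

class VendorA(InferenceProvider):
    # One hypothetical provider behind the shared interface.
    def infer(self, prompt: str) -> str:
        return f"A:{prompt}"

class VendorB(InferenceProvider):
    # A second, interchangeable provider.
    def infer(self, prompt: str) -> str:
        return f"B:{prompt}"

def serve_request(provider: InferenceProvider, prompt: str) -> str:
    # The caller depends only on the open interface,
    # never on a proprietary vendor API.
    return provider.infer(prompt)
```

The point of the sketch is that `serve_request` never changes when the provider does, which is what "no proprietary interface from one point to another" implies in practice.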

Sandeep Sharma

Thank you, Rajeev. Just to answer the second part of the question: there is a lot of data, and the question is how we can ensure it is used well. I think the framework is to have centralized data exchanges and centralized training exchanges, where enterprises can port their data with a certain anonymization so that no confidential data passes out, but industries can come and train models on the data which is available from the enterprises or from the end users within India. A central exchange mechanism needs to be put in place.
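A minimal sketch of the anonymization step Sharma describes, under the assumption that records are flat dictionaries; the field names and the salting scheme are illustrative assumptions, not a description of any actual exchange.

```python
import hashlib

def anonymize(record,
              id_fields=("customer_id",),
              drop_fields=("name", "phone")):
    """Return a copy of a record that is safe to port to a shared
    training exchange: direct identifiers are replaced with salted
    hashes (so joins across an enterprise's own data still work),
    and confidential fields are dropped entirely."""
    out = {}
    for key, value in record.items():
        if key in drop_fields:
            continue  # confidential data never leaves the enterprise
        if key in id_fields:
            digest = hashlib.sha256(f"demo-salt:{value}".encode()).hexdigest()
            out[key] = digest[:12]  # pseudonymous, not reversible
        else:
            out[key] = value  # non-identifying features pass through
    return out
```

In a real exchange the salt would be a per-enterprise secret and the field policy would come from a data-governance catalogue rather than function defaults.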

Surojeet Roy

Just to add, I think democratizing AI is also very important. It should be accessible to everybody at much lower cost, and in that direction, putting GPUs at cell towers can be one way of doing it. When the network is not very busy and the resources are free, those resources can be given to users to train their models or do some inferencing, because those resources are there at every site. So that can be one way of helping in this direction.
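Roy's idea of lending idle cell-site GPUs during off-peak hours can be sketched as a simple admission rule. The threshold, reserve and scaling policy below are illustrative assumptions, not an actual operator policy.

```python
def lendable_gpus(total_gpus, network_load, reserve=1, busy_threshold=0.7):
    """How many of a cell site's GPUs can be lent out for third-party
    training or inference right now.

    network_load is the current radio/compute utilization in [0, 1].
    Above busy_threshold the site keeps everything for network
    functions; below it, all but a small reserve is lent out, scaled
    down linearly as load approaches the threshold."""
    if network_load >= busy_threshold:
        return 0  # peak hours: network functions get every GPU
    spare = total_gpus - reserve
    if spare <= 0:
        return 0  # nothing beyond the always-on reserve
    # Fraction of headroom remaining below the busy threshold.
    headroom = 1 - network_load / busy_threshold
    return int(spare * headroom)
```

A real scheduler would also need preemption (reclaiming lent GPUs when traffic spikes), which is why off-peak edge capacity suits batch training better than latency-critical serving.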

Radhakant Das

Thank you. So you have another question? All right.

Audience

Morning. My name is Sidhu. I'm from AT&T. One quick question, now that Rajeev is also here. Across the world, telecom companies are realizing that not having a network API exchange, and not monetizing it, is becoming a problem for many large enterprise customer use cases. For example, if a bank wants to understand their customers' behavior, the customers are on multiple networks, so the bank doesn't get the visibility, right? With OneEdge, I think Jio and Airtel also joined hands last year. On the U.S. side some work is happening, but I wanted to understand how much of this monetization of the network-API-centric economy is materializing from India's standpoint. I know Jio covers almost 40-50% of the overall population in India, so you might throw some light on this.

I don’t know if you’re going to throw some. I don’t know if you’re going to throw some. I don’t know if you’re going to throw some. I don’t know if you’re going to throw some.

Rajeev Saluja

I will try to answer this quickly because time is up, and we can take this discussion offline. But see, we are committed to an open AI ecosystem to drive value. And like I said, enterprise value cannot be delivered unless the end-to-end ecosystem is open and connected. I think they are ringing the bell, so I will take this discussion offline. Thank you, sir. Thank you.

Radhakant Das

So we are out of time.

Moderator

Yes, we have to stop for time. Any other questions before we close? Thank you, everyone. May I request Radhakant Das to hand over the mementos to all our speakers? May I also request Ashok, sir, to please come on stage and kindly collect your memento? Thank you so much, everyone.

Related Resources: Knowledge base sources related to the discussion topics (16)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The moderator opened the session by framing the theme “AI at the Core, 6G at the Edge” as a strategic opportunity for India to shift from a consumer of global technology to a leader in the next intelligence and connectivity frontier.”

The knowledge base describes the discussion as focusing on India’s strategic approach to integrating AI with 6G under the same tagline, confirming the moderator’s framing.

Confirmed (high)

“AI was added retrospectively in the 5G release‑15‑to‑release‑18 cycle.”

Source S6 explains that artificial intelligence began to be integrated with the rollout of release 18 (5G‑Advanced), confirming the retrospective addition of AI.

Additional Context (medium)

“The ITU’s 6G framework (released two years ago) lists integrated AI as one of six usage scenarios and enshrines “ubiquitous intelligence” as a design pillar, meaning AI will be native to every element of the end‑to‑end system.”

S58 notes that the ITU introduced a new framework for 6G development, highlighting AI as a key component and emphasizing broader design considerations such as energy efficiency, providing additional context to the claim.

Confirmed (medium)

“Bharat 6G Alliance – Coordinates working groups on technology, spectrum and devices.”

S17 confirms that the Department of Telecommunications launched the Bharat 6G Alliance to develop a roadmap for 6G, bringing together industry, academia, research institutions and standards bodies, which aligns with the claim of coordinated working groups.

Confirmed (low)

“The moderator introduced the panelists and set the focus on technical, business and policy implications of an AI‑native 6G.”

S70 indicates that the session moderator introduced the panelists and managed the discussion format, confirming this aspect of the report.

External Sources (78)
S1
S2
Designing Indias Digital Future AI at the Core 6G at the Edge — -Rajeev Saluja: Vice President, 5G Radio at Reliance Jio – expertise in telecommunications and 5G/6G technology developm…
S3
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S4
Keynote-Vinod Khosla — -Moderator: Role/Title: Moderator of the event; Area of Expertise: Not mentioned -Mr. Jeet Adani: Role/Title: Not menti…
S5
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S6
https://dig.watch/event/india-ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — Thank you, sir. So now we are moving to our very next segment, the panel discussion. Our first speaker is Rajiv Seluja, …
S7
Designing Indias Digital Future AI at the Core 6G at the Edge — -Radhakant Das: Heads the Technology Engineering and Innovation Function for Network Solutions and Services (NSS) at TCS…
S8
Designing Indias Digital Future AI at the Core 6G at the Edge — These key comments fundamentally shaped the discussion by elevating it from a technical conversation about 6G and AI int…
S9
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos -Shri Atul Kumar Singh: Title -…
S10
Designing Indias Digital Future AI at the Core 6G at the Edge — -Surojeet Roy: Senior Telecommunications Leader, Head of Technology, Technology and Solutions, COE, at Nokia India – exp…
S11
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S14
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Artificial intelligence and telecommunications complement each other to form the backbone for the intelligence era. Tele…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Cristiano Amon — The equipment was different. The use case is different. We’re heading to the next big transformation of the telecom sect…
S16
The geopolitics of digital standards: China’s role in standard-setting organisations — 5G, the fifth-generation mobile network, is key in unlocking the potential of advanced technologies such as AI, the IoT,…
S17
The Indian Department of Telecommunications launches Bharat 6G Alliance — The Department of Telecommunications (DoT) in Indiahas launched the Bharat 6G Alliance(B6GA) to develop a roadmap for 6G…
S18
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — Abhimanyu Gosain worked with National Science Foundation and 35 global industry member companies on a flagship project f…
S19
Global telecommunication and AI standards development for all — India has been chosen to host the distinguished World Telecommunication Standardisation Assembly (WTSA 2024), set to tak…
S20
IndoGerman AI Collaboration Driving Economic Development and Soc — Several emerging technology areas were identified as prime candidates for enhanced collaboration. India’s successful dev…
S21
India’s comprehensive strategy to revolutionise telecommunications and foster inclusive growth — The Indian government hasmadeconnectivity a cornerstone of its vision for a digitally empowered nation. The government i…
S22
AI for Good Technology That Empowers People — “So, you know, AI being available at the edge, not from, you know, the very basic thing that we all use every day is you…
S23
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S24
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Suggests governments should use procurement to ensure companies provide safe products that have human rights as core des…
S25
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S26
5G traffic surges under growing AI usage — AI-driven applications are reshaping mobile data norms, and5G networks are feeling the pressure. Analysts warn that upli…
S27
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S28
Building Indias Digital and Industrial Future with AI — This comment introduced nuance to the sovereignty debate and influenced the conversation toward finding balance between …
S29
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S30
Responsible AI in India Leadership Ethics & Global Impact — “Techniques you use for responsible AI should be interoperable, open, and standardized”[20]. “We are built on an open st…
S31
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Collaboration among different countries and stakeholders is seen as a key driver for advancing regulatory sandboxes and …
S32
Nepal Engagement Session — “So either from a technology point of view, we have the interoperability, the standards which we have chosen, the models…
S33
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion highlighted the importance of policy interoperability rather than uniform global governance, recognizing …
S34
Designing Indias Digital Future AI at the Core 6G at the Edge — “API sort of architecture wherein a product created by one.”[111]”… this entire ecosystem end -to -end ecosystem has t…
S35
Open Forum #26 High-level review of AI governance from Inter-governmental P — 5. Balancing Global and Local Needs: The discussion highlighted the need to balance global standards with local needs an…
S36
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: Thank you. Yeah, I mean, just building on what Chet was saying, I think, and what you were saying, Olg…
S37
AI Meets Agriculture Building Food Security and Climate Resilien — Chief Minister Devendra Fadnavis presented Maharashtra’s Maha Agri AI Policy 2025-2029, emphasizing the shift from demon…
S38
AI for agriculture Scaling Intelegence for food and climate resiliance — The policy adopts a government‑led, ecosystem‑driven approach to foster AI solutions for agriculture across Maharashtra….
S39
From Human Potential to Global Impact_ Qualcomm’s AI for All Workshop — It’s like shop something for me, check my bank balance. If I have enough over there, I want to buy that thing and then w…
S40
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Artificial intelligence | Information and communication technologies for development Arun advocates for moving inferenc…
S41
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Artificial intelligence | Data governance He explains that remote, low‑connectivity scenarios benefit from edge deploym…
S42
Opening remarks — Such an ecosystem should promote the development and application of technology within an environmentally conscious, fair…
S43
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — And that promotes open… The third pillar is based on sovereignty of our infrastructure. The fourth pillar is based on …
S44
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Good evening, distinguished guests. Welcome to the session on powering AI. As AI scales at speed, so does its infrastruc…
S45
Omnipresent Smart Wireless: Deploying Future Networks at Scale — An ethical and responsible approach to 6G technology is emphasized to ensure its positive use and avoid potential negati…
S46
Challenges and Opportunities: Emerging Technologies and Sustainability Impacts  — Concurrently, the importance of standardisation is being emphasised in the context of emerging technologies, particularl…
S47
Future Network System as Open Platform in Beyond 5G/6G Era | IGF 2023 Day 0 Event #201 — Advocacy for alterative business models, drawing upon the S-line model by Docomo, were seen as more adaptable with the p…
S48
AI Infrastructure and Future Development: A Panel Discussion — -Audience- Audience member asking a question
S49
Main Session | Policy Network on Artificial Intelligence — 3. Interoperability and Global Cooperation Anita Gurumurthy: Sure, I can do that. Am I audible? Okay. Thank you. I jus…
S50
What policy levers can bridge the AI divide? — *This summary reflects the content available in the provided transcript, which contained significant portions of unclear…
S51
WS #208 Democratising Access to AI with Open Source LLMs — To improve AI models for specific regions, there is a need for high-quality local data. This includes data on local lang…
S52
Al and Global Challenges: Ethical Development and Responsible Deployment — Alfredo Ronchi:Most interesting presentation from the standpoint of China. Thanks a lot for this date. And now we will t…
S53
Designing Indias Digital Future AI at the Core 6G at the Edge — “And we have selected 100 plus 6G related projects in different area.”[10]”So to support that activity, we had come out …
S54
Trusted Connections_ Ethical AI in Telecom & 6G Networks — And let’s do it. India can show the direction forward. For whole world. There is a tradition for great. collaboration, g…
S55
https://dig.watch/event/india-ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — As you know that release 21 would be the first release of 60. So we are trying to do that and perhaps that will come ver…
S56
Artificial intelligence as a driver of digital transformation in industries (HSE University) — The analysis offers a comprehensive examination of artificial intelligence (AI) and its impact on various sectors. One s…
S57
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Suggests governments should use procurement to ensure companies provide safe products that have human rights as core des…
S58
High-level dialogue on Shaping the future of the digital economy (UNCTAD) — As a result of these discussions, a treaty with a four-year effectiveness was established. In terms of future advancemen…
S59
5G traffic surges under growing AI usage — AI-driven applications are reshaping mobile data norms, and5G networks are feeling the pressure. Analysts warn that upli…
S60
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S61
AI for Good Technology That Empowers People — “this is a use case … for a traffic prediction … predicting certain traffic spikes when they had a football match …..
S62
Open Forum #33 Building an International AI Cooperation Ecosystem — This comment established a new analytical framework for the entire discussion. It shifted the conversation from traditio…
S63
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S64
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S65
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — This comment introduced a crucial tension between the massive scale of change and the need for distributed, democratic a…
S66
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Collaboration and Interoperability as India’s Strategic Advantage: Professor Ganesh Ramakrishnan highlighted interopera…
S67
Responsible AI in India Leadership Ethics & Global Impact — “Techniques you use for responsible AI should be interoperable, open, and standardized”[20]. “We are built on an open st…
S68
Nepal Engagement Session — “So either from a technology point of view, we have the interoperability, the standards which we have chosen, the models…
S69
Keynote-Rishad Premji — Opening framing by the moderator
S70
The Global Power Shift India’s Rise in AI & Semiconductors — -Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed t…
S71
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — – Anwar Ibrahim: Prime Minister of Malaysia – Joo-Ok Lee: Head of Asia-Pacific from the World Economic Forum Anwar Ibr…
S72
High Level Session 2: Digital Public Goods and Global Digital Cooperation — Amandeep Singh Gill: I think cooperation, collaboration, that’s a no-brainer. In fact, the term digital cooperation is o…
S73
The WSIS Moon Shot: Celebrating 20 years and crystal-balling the next 20! — Intro:And I said coach, you are going to lose, and encourage them. And I said, no, coach. I’m free. I’ll do everything, …
S74
5G Transformation: The power of good policy  — The global rollout of5G networkshas been met with considerable excitement, and rightly so. While the promise of faster d…
S75
Bridging the Digital Divide: Achieving Universal and Meaningful Connectivity (ITU) — In conclusion, the South African government’s efforts to promote connectivity and economic parity are commendable. Initi…
S76
DoT and TRAI to enhance telecom services with new measures — The Department of Telecommunications (DoT) and the Telecom Regulatory Authority of India (TRAI) are taking significantst…
S77
The Ministry of Information, Communications, and the Digital Economy (MICDE) Strategic Plan for 2023-2027 — To realise these goals, several initiatives are planned, such as:
S78
Challenges and solutions for broadband infrastructure deployment in developing countries, rural and remote areas — Robin Zuercher:that also could be covered by wireless and also fiber ring topologies and then the breakdown for like the…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Ashok Kumar
9 arguments · 129 words per minute · 1524 words · 706 seconds
Argument 1
AI as an after‑thought in 5G, becoming native in 6G (Ashok Kumar)
EXPLANATION
Ashok explains that AI was initially added to 5G as an after‑thought, but the design philosophy has shifted for 6G where AI is embedded from the outset. This marks a transition from retrofitting AI to making it a core component of the network.
EVIDENCE
He notes that AI began to be integrated as part of 3GPP releases and was considered an after-thought, whereas the 6G story is different with AI being part of the initial design. He references the evolution from 5G releases 15 to 18 and the emerging 6G vision. [22-27]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ashok’s claim that AI was an after-thought in 5G and is native to 6G is corroborated by the design narrative in the Indian 6G roadmap, which contrasts 5G releases 15-18 with the AI-native 6G vision [S1] and notes the early integration of AI starting only in later 5G releases [S6].
MAJOR DISCUSSION POINT
Shift from AI after‑thought to native integration
AGREED WITH
Radhakant Das
Argument 2
ITU’s 6G framework explicitly includes integrated AI and “ubiquitous intelligence” (Ashok Kumar)
EXPLANATION
Ashok points out that the ITU’s 6G framework lists integrated AI as one of six usage scenarios and defines “ubiquitous intelligence” as a key design principle, meaning AI will be embedded in every network element. This formal inclusion signals a strategic priority for AI in future standards.
EVIDENCE
He describes the ITU 6G framework released two years ago, which envisions six usage scenarios including integrated AI, and highlights the design principle of ubiquitous intelligence that requires AI in user equipment, radio, core, and applications. [27-30]
MAJOR DISCUSSION POINT
ITU embeds AI in 6G design
Argument 3
Low‑cost 3GPP/TSDSI membership for startups to enable standard participation (Ashok Kumar)
EXPLANATION
Ashok explains that the Department of Telecom subsidises TSDSI and 3GPP membership for startups, reducing the fee from several lakh rupees to just 10,000 rupees, thereby lowering the barrier for Indian innovators to contribute to global standards.
EVIDENCE
He states that a startup wishing to join 3GPP must be a member of TSDSI and 3GPP, and DOT supports this by offering membership at a very low cost of 10,000 rupees instead of 5-6 lakh. [41-43]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The reduced 3GPP/TSDSI membership fee for startups is documented in the Department of Telecom’s support policy for standards participation [S1].
MAJOR DISCUSSION POINT
Affordable standards participation for startups
Argument 4
6G Accelerated Research Program, testbeds (terahertz, AOC) and collaboration with Bharat 6G Alliance (Ashok Kumar)
EXPLANATION
Ashok outlines a suite of government‑backed initiatives: a 6G Accelerated Research Program that has funded over 100 projects across terahertz, AI, ML and semantic communications; dedicated testbeds; and a partnership with the Bharat 6G Alliance to shape policy and technology roadmaps.
EVIDENCE
He mentions the launch of the 6G Accelerated Research Program two years ago, selection of 100+ projects covering terahertz, AI, ML, semantic communications, and the establishment of terahertz and AOC testbeds, as well as close work with the Bharat 6G Alliance and its working groups. [45-53] and [55-62]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 6G Accelerated Research Program, its terahertz and AOC testbeds, and partnership with the Bharat 6G Alliance are described in the government’s 6G strategy overview [S1] and the alliance announcement [S17].
MAJOR DISCUSSION POINT
Government‑driven research and testbeds for 6G
Argument 5
Expansion of 5G labs across institutes as a foundation for 6G research (Ashok Kumar)
EXPLANATION
Ashok notes that the government inaugurated 100 5G labs in 100 institutes, which are now operational and serve as a knowledge base and test environment for transitioning to 6G research and use‑case development.
EVIDENCE
He refers to the budget-announced initiative of establishing 100 5G labs across the country, which are currently operational and provide a platform for advancing to 6G. [69-72]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The establishment of 100 operational 5G labs across institutes is detailed in the national 5G rollout plan [S1].
MAJOR DISCUSSION POINT
5G labs as stepping stones to 6G
Argument 6
Industry should adopt existing 5G labs to accelerate 6G use‑case development and testing (Ashok Kumar)
EXPLANATION
Ashok calls on industry participants to partner with one or two of the operational 5G labs to co‑develop and test 6G technologies, leveraging existing infrastructure to speed up innovation.
EVIDENCE
He concludes his address by requesting industry players to adopt one or two 5G labs and collaborate on further technology development. [72-73]
MAJOR DISCUSSION POINT
Leveraging 5G labs for 6G innovation
Argument 7
Active participation in international standards is essential for embedding Indian innovations into 6G and building a domestic end‑to‑end stack.
EXPLANATION
Ashok stresses that joining standardisation bodies allows Indian technology to become part of global specifications and enables the country to develop its own complete 6G solution, rather than merely adopting foreign standards.
EVIDENCE
He says, “It’s an opportunity not only to like participate in the standard so that our technology, our innovations becomes part of the standard, but also to build our own end-to-end 6G technology stack” (sentences [34-35]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s role in international standards, highlighted by hosting WTSA 2024, underscores the importance of active participation for embedding domestic innovations [S19]; the DOT’s emphasis on standards engagement is also noted [S1].
MAJOR DISCUSSION POINT
Standard participation to embed Indian tech
Argument 8
Collaboration with the Bharat 6G Alliance and other ministries helps shape policy and accelerates India’s leadership in 6G.
EXPLANATION
Ashok notes that close work with the Bharat 6G Alliance’s working groups and coordination with ministries such as DST creates a policy framework that guides research, spectrum allocation, and device development, positioning India as a 6G leader.
EVIDENCE
He mentions, “we are also closely working with Bharat 6G Alliance… they have created multiple working groups… we are also working with DST RDI scheme” (sentences [60-62] and [63-68]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Collaboration with the Bharat 6G Alliance and ministries such as DST is outlined in the alliance’s charter and joint RDI initiatives [S17] and the broader 6G strategy document [S1].
MAJOR DISCUSSION POINT
Policy co‑creation via Bharat 6G Alliance
Argument 9
The DST RDI scheme, now including the telecom sector, provides funding and support for scaling research, startups, and academia in 6G.
EXPLANATION
Ashok explains that the Research, Development and Innovation (RDI) scheme of the Department of Science & Technology has been extended to cover telecom, allowing companies and research institutions to apply for grants that accelerate 6G‑related projects.
EVIDENCE
He states, “We have taken up with DST that telecom sector should be included as part of the sector which will be supported. The RDI and Secretary DST had agreed… our companies, our startup in the field of telecom can actually apply” (sentences [63-68]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Inclusion of telecom in the DST RDI scheme and the associated funding mechanisms are mentioned in the government’s RDI expansion briefing [S1].
MAJOR DISCUSSION POINT
RDI scheme supporting telecom R&D
Surojeet Roy
7 arguments, 151 words per minute, 1614 words, 639 seconds
Argument 1
AI‑enabled devices (smart glasses, wearables) will generate heavy uplink traffic, requiring edge inferencing (Surojeet Roy)
EXPLANATION
Surojeet describes the emergence of AI‑powered wearables such as smart glasses and body patches, which cannot perform all inference locally and will therefore rely on edge or centralized data centres, creating substantial uplink traffic demands.
EVIDENCE
He lists various form factors (smart glasses, AR/VR glasses, wearables, body patches) that embed AI functions but may need inferencing support from edge or central data centres, leading to high uplink traffic requirements. [115-124]
MAJOR DISCUSSION POINT
Uplink pressure from AI‑enabled devices
Argument 2
Projected shift from a downlink‑dominant ratio (~10:1) to a more balanced 4:1 ratio, demanding higher uplink capacity (Surojeet Roy)
EXPLANATION
Surojeet cites forecasts that overall traffic will grow 6‑9× by 2033, with AI‑driven traffic rising to about 30 %, and predicts the downlink‑to‑uplink ratio will move from roughly 10:1 to 4:1, necessitating significant uplink capacity upgrades.
EVIDENCE
He references Nokia Bell Labs projections of WAN traffic growth and AI traffic reaching 30 % by 2033, and later notes that the current downlink-to-uplink ratio of about 10:1 could shift to 4:1, implying much higher uplink data rates are needed. [125-132] and [185-190]
MAJOR DISCUSSION POINT
Changing traffic asymmetry toward uplink
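The uplink implication of these figures can be checked with a back-of-envelope calculation (illustrative only, combining the 6-9× total-growth and 10:1 → 4:1 ratio figures cited above; the session itself does not walk through this arithmetic):

```python
# If total traffic grows 6-9x and the downlink:uplink ratio moves from
# about 10:1 to about 4:1, the uplink share of traffic rises from 1/11
# to 1/5, multiplying the implied uplink growth accordingly.

def uplink_growth(total_growth: float, old_ratio: float, new_ratio: float) -> float:
    """Factor by which uplink traffic grows when total traffic grows by
    `total_growth` and the DL:UL ratio shifts from old_ratio to new_ratio."""
    old_ul_share = 1 / (old_ratio + 1)   # 10:1 -> uplink is 1/11 of total
    new_ul_share = 1 / (new_ratio + 1)   # 4:1  -> uplink is 1/5 of total
    return total_growth * new_ul_share / old_ul_share

low = uplink_growth(6, 10, 4)
high = uplink_growth(9, 10, 4)
print(f"Implied uplink growth: {low:.1f}x to {high:.1f}x")  # ~13.2x to ~19.8x
```

On these assumptions, uplink traffic would grow roughly 13-20×, which is why the speakers treat uplink capacity as the binding constraint rather than total throughput.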
Argument 3
AI‑driven techniques such as DeepRx/DeepTx can boost spectral efficiency and capacity by 25‑30 % (Surojeet Roy)
EXPLANATION
Surojeet explains that AI‑based signal processing (DeepRx, DeepTx) can decode signals under poor SNR conditions, increase spectral efficiency, support higher‑order modulation, and thereby raise network capacity by roughly a quarter to a third.
EVIDENCE
He details how deep learning algorithms can optimize communication parameters, improve decoding in low SNR environments, and deliver a 25-30 % capacity increase along with higher-order modulation support. [195-202]
MAJOR DISCUSSION POINT
AI‑enhanced radio performance
Argument 4
Indian‑specific data is crucial to avoid bias and to tailor models to local contexts (Surojeet Roy)
EXPLANATION
Surojeet stresses that AI models must be trained on Indian data to reflect local usage patterns, languages, and conditions; otherwise they risk bias and poor performance for Indian users and informal sector workers.
EVIDENCE
He gives examples of AI assisting carpenters, drivers, and agricultural workers, and argues that models need Indian data to avoid bias and be relevant to local contexts. [300-304]
MAJOR DISCUSSION POINT
Need for domestic data in AI training
AGREED WITH
Audience
Argument 5
6G will require substantially larger bandwidth (≈400 MHz) and fivefold spectral efficiency to deliver roughly twenty‑times the capacity of 5G.
EXPLANATION
Surojeet outlines the quantitative leap needed in spectrum and efficiency, indicating that moving from typical 100 MHz 5G bands to 400 MHz for 6G, together with a 5× increase in spectral efficiency, will enable the projected 20× capacity growth.
EVIDENCE
He says, “we are talking about minimum 400 MHz of bandwidth when we are talking about 6G… we are talking about 5 times spectral efficiency… which means 5 into 4, you are talking about 20 times more capacity coming out from 6G networks” (sentences [202-206]).
MAJOR DISCUSSION POINT
Bandwidth and spectral efficiency requirements for 6G
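The 20× figure is the product of the two gains Surojeet cites, as this short calculation shows (the 100 MHz baseline is the typical 5G carrier width he implies):

```python
# Capacity scales with bandwidth times spectral efficiency:
# 100 MHz -> 400 MHz is a 4x bandwidth gain, and a 5x spectral
# efficiency gain multiplies on top of it.

bw_5g_mhz = 100
bw_6g_mhz = 400
spectral_efficiency_gain = 5

bandwidth_gain = bw_6g_mhz / bw_5g_mhz            # 4.0
capacity_gain = bandwidth_gain * spectral_efficiency_gain
print(f"Projected 6G capacity gain: {capacity_gain:.0f}x")  # 20x
```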
Argument 6
AI‑driven signal processing (DeepRx/DeepTx) can enable higher‑order modulation and increase network capacity by 25‑30 % even under poor SNR conditions.
EXPLANATION
He describes how deep‑learning‑based receivers and transmitters can decode signals that traditional methods cannot, supporting more complex modulation schemes and thereby boosting overall throughput.
EVIDENCE
He explains, “using AI you can actually decipher those signals… this can give a capacity increase maybe 25-30 % and you can also have higher order modulation supported” (sentences [195-202]).
MAJOR DISCUSSION POINT
AI‑enhanced radio performance
Argument 7
The shift toward AI‑enabled edge computing will invert traditional traffic asymmetry, creating a surge in uplink demand that requires network redesign.
EXPLANATION
Surojeet predicts that as more AI inference moves to the edge, devices will generate far more uplink traffic, changing the downlink‑to‑uplink ratio from roughly 10:1 to 4:1 and necessitating upgrades in uplink capacity and architecture.
EVIDENCE
He notes, “currently we see a downlink to uplink ratio of maybe 10s to 1… with AI applications we are predicting that this pattern will change to maybe 4s to 1… you need much higher data rates in the uplink” (sentences [185-190]).
MAJOR DISCUSSION POINT
Uplink traffic surge due to edge AI
Rajeev Saluja
6 arguments, 163 words per minute, 1153 words, 423 seconds
Argument 1
Simple, latency‑sensitive inference should be handled at the edge; complex multi‑agent workflows remain in the core/cloud (Rajeev Saluja)
EXPLANATION
Rajeev proposes a split architecture where straightforward, time‑critical AI inference is performed close to the user at the edge, while more elaborate, multi‑step processes are processed centrally, balancing performance and resource use.
EVIDENCE
He states that most simple agentic inference workloads will be handled at the edge, whereas multi-step, multi-agent complex workflows will stay at the central location, aiming for pervasive and affordable intelligence. [158-162]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge-centric AI inference for latency-sensitive tasks and centralised handling of complex workflows are advocated in AI-at-the-edge studies and the Indian 6G vision, which emphasizes edge inference for low-latency services [S22] and the role of AI as the intelligence layer of telecom [S14].
MAJOR DISCUSSION POINT
Edge vs. core AI inference split
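The split Rajeev describes amounts to a placement policy: latency-sensitive single-step inference goes to the edge, multi-step agentic workflows go to the core. A minimal sketch of such a policy follows; the request fields and the 50 ms threshold are hypothetical, not values from the session:

```python
# Sketch of an edge/core workload placement rule. A real scheduler
# would also weigh load, energy, and data locality.

from dataclasses import dataclass

@dataclass
class InferenceRequest:
    latency_budget_ms: int   # how quickly the caller needs an answer
    num_steps: int           # 1 = simple inference, >1 = multi-agent workflow

def place_workload(req: InferenceRequest) -> str:
    """Return 'edge' for simple, time-critical requests, else 'core'."""
    if req.num_steps == 1 and req.latency_budget_ms <= 50:
        return "edge"
    return "core"

print(place_workload(InferenceRequest(latency_budget_ms=20, num_steps=1)))   # edge
print(place_workload(InferenceRequest(latency_budget_ms=500, num_steps=6)))  # core
```

The design choice mirrors his stated goal: edge placement keeps latency and data-centre load down, while the core retains the capacity for complex multi-agent orchestration.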
Argument 2
Distributing inference reduces power consumption and eases data‑center load (Rajeev Saluja)
EXPLANATION
By moving inference tasks to the edge, power demand is spread across many devices, lessening the concentration of energy use in large data centres and improving overall efficiency.
EVIDENCE
He links the distribution of inference to reduced power consumption and alleviation of data-centre load, supporting the edge-centric approach. [158-162]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy-efficient AI deployment and reduced data-centre load are highlighted as design goals in the ethical AI framework for telecom [S14] and in edge-AI efficiency discussions [S22].
MAJOR DISCUSSION POINT
Power efficiency through distributed inference
Argument 3
India must “build, not rent” intelligence to ensure affordability and self‑reliance (Rajeev Saluja)
EXPLANATION
Rajeev argues that India cannot depend on foreign AI solutions; instead it must develop its own intelligence capabilities to keep costs low and guarantee widespread access.
EVIDENCE
He quotes the sentiment that “you cannot rent intelligence,” emphasizing the need to build and scale domestic AI for affordability. [154-157]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call to ‘build, not rent’ intelligence aligns with India’s digital sovereignty agenda and initiatives to develop indigenous AI capabilities [S21] and with Indo-German collaboration emphasizing local AI development [S20].
MAJOR DISCUSSION POINT
Domestic AI development over import
Argument 4
A sovereign token‑based AI economy is required for end‑to‑end control of data, inference and value delivery (Rajeev Saluja)
EXPLANATION
Rajeev envisions a token‑driven AI ecosystem where every step—from request initiation to inference delivery—is owned and operated within India, ensuring sovereignty over data and AI services.
EVIDENCE
He describes a token-based sovereign AI economy where the entire value chain, from request to inference, is kept inside India to avoid dependence and high costs. [278-282]
MAJOR DISCUSSION POINT
Sovereign token‑driven AI ecosystem
AGREED WITH
Radhakant Das
Argument 5
New value pools: demand analysis, workflow automation, and enhanced security for enterprises (Rajeev Saluja)
EXPLANATION
Rajeev identifies three primary benefits for enterprises from AI‑native 6G: better demand forecasting, automation of end‑to‑end workflows, and stronger security frameworks, all enabled by the high‑speed, low‑latency network.
EVIDENCE
He lists demand analysis, workflow automation, and improved security as the three major value pools enterprises can tap into with 6G-AI convergence. [267-273]
MAJOR DISCUSSION POINT
Enterprise value creation from AI‑6G
Argument 6
Open, loosely‑coupled API architecture is required so AI applications work across vendors and platforms (Rajeev Saluja)
EXPLANATION
Rajeev stresses that an open, API‑driven architecture, similar to the UPI model, is essential for interoperability, allowing AI services to operate across different devices, operators, and ecosystems without proprietary lock‑in.
EVIDENCE
He explains that the end-to-end ecosystem must be open, API-driven, and loosely coupled to avoid proprietary interfaces, citing the UPI example as a successful model. [330-336]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for an open, loosely-coupled API ecosystem mirrors the open platform approach promoted for future networks [S18] and the interoperable AI-telecom framework discussed in the AI-telecom synergy report [S14].
MAJOR DISCUSSION POINT
API‑centric open AI ecosystem
AGREED WITH
Sandeep Sharma, Audience
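The "loosely coupled" architecture Rajeev invokes can be sketched as a vendor-neutral interface with provider adapters behind it, so application code never names a concrete vendor, much as UPI apps never hard-code a bank. All class and method names below are hypothetical illustrations, not an API from the session:

```python
# Sketch of a loosely-coupled AI service layer: callers depend on one
# neutral contract, and each vendor plugs in behind an adapter.

from abc import ABC, abstractmethod

class InferenceProvider(ABC):
    """Vendor-neutral contract, analogous to UPI's common payment API."""
    @abstractmethod
    def infer(self, prompt: str) -> str: ...

class VendorA(InferenceProvider):
    def infer(self, prompt: str) -> str:
        return f"vendor-a:{prompt}"

class VendorB(InferenceProvider):
    def infer(self, prompt: str) -> str:
        return f"vendor-b:{prompt}"

def run_app(provider: InferenceProvider, prompt: str) -> str:
    # Application code is written against the interface only,
    # so providers are interchangeable at deployment time.
    return provider.infer(prompt)

print(run_app(VendorA(), "hello"))  # vendor-a:hello
print(run_app(VendorB(), "hello"))  # vendor-b:hello
```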
Sandeep Sharma
4 arguments, 165 words per minute, 1520 words, 551 seconds
Argument 1
Open, interoperable ecosystem (e.g., UPI model) is essential for scaling AI services (Sandeep Sharma)
EXPLANATION
Sandeep argues that, like the Unified Payments Interface, an open and interoperable framework is crucial for widespread adoption and scaling of AI services across the country.
EVIDENCE
He cites the UPI example, noting that its success hinged on an open ecosystem, and asserts the same mindset is needed for AI. [262-264]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of an open, interoperable ecosystem akin to UPI is discussed in the open platform vision for beyond-5G/6G [S18] and the AI-telecom synergy analysis [S14].
MAJOR DISCUSSION POINT
Open ecosystem model for AI scaling
AGREED WITH
Rajeev Saluja, Audience
Argument 2
Need for national frameworks, data‑exchange platforms, and audit mechanisms to align pilots with emerging standards (Sandeep Sharma)
EXPLANATION
Sandeep calls for coordinated national frameworks that provide data‑exchange mechanisms, auditing, and safety guardrails, ensuring AI‑6G pilots are interoperable, secure, and aligned with forthcoming standards.
EVIDENCE
He outlines existing coordination, the necessity for national referenceable frameworks, safety guardrails, and alignment with standards, emphasizing that pilots should not be siloed. [237-252]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
National coordination mechanisms, data-exchange platforms, and audit frameworks are recommended in the 6G policy documents and standard-alignment strategies [S18][S19].
MAJOR DISCUSSION POINT
National coordination and safety for AI‑6G pilots
Argument 3
Avoiding siloed development by co‑creating referenceable frameworks and safety guardrails (Sandeep Sharma)
EXPLANATION
Sandeep stresses that collaborative, referenceable frameworks and clear safety guidelines are needed to prevent fragmented development and ensure trustworthy AI deployments.
EVIDENCE
He mentions the need for co-creation of referenceable frameworks, safety guardrails, and national policies to avoid isolated silos. [237-252]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Co-creation of referenceable frameworks and safety guardrails to avoid siloed AI pilots is emphasized in the open-platform and governance recommendations for 6G [S18].
MAJOR DISCUSSION POINT
Co‑creation to prevent siloed AI development
Argument 4
Centralized data exchange enables secure, anonymized model training across industries (Sandeep Sharma)
EXPLANATION
Sandeep proposes a national data‑exchange platform where enterprises can share anonymized data, allowing cross‑industry model training while preserving confidentiality and security.
EVIDENCE
He describes a framework of centralized data exchanges and training exchanges that permit anonymized data sharing for model training across sectors. [338-344]
MAJOR DISCUSSION POINT
Secure national data exchange for AI model training
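One ingredient of the anonymized sharing Sandeep proposes is replacing direct identifiers with salted pseudonyms before records leave an enterprise. The toy sketch below illustrates only that step; field names and salt handling are hypothetical, and a production exchange would need far stronger guarantees (k-anonymity, differential privacy, access audit):

```python
# Toy pseudonymization pass: identifying fields are replaced with
# salted SHA-256 digests; non-identifying fields pass through.

import hashlib

def pseudonymize(record: dict, id_fields: set, salt: str) -> dict:
    out = {}
    for key, value in record.items():
        if key in id_fields:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            out[key] = digest[:16]   # truncated pseudonym
        else:
            out[key] = value
    return out

record = {"customer_id": "C-1042", "region": "north", "usage_gb": 12.4}
shared = pseudonymize(record, {"customer_id"}, salt="per-exchange-secret")
print(shared["region"], shared["usage_gb"])  # north 12.4
```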
Moderator
1 argument, 42 words per minute, 258 words, 364 seconds
Argument 1
India should transition from a consumer of global technology cycles to a creator of the next intelligence and connectivity frontier.
EXPLANATION
The moderator frames the session as an opportunity for India to move beyond merely using foreign technologies and to become a driver of future AI and communications innovations. This positioning sets a strategic agenda for the discussion.
EVIDENCE
In the opening remarks the moderator states that the goal is to ensure “India moves from being a consumer of global technology cycles to becoming a shaper of the world’s next intelligence and connectivity frontier” (sentence [1]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s ambition to move from technology consumer to creator is reflected in its hosting of WTSA 2024 and leadership in 6G standardisation efforts [S19] and the national digital future roadmap [S1].
MAJOR DISCUSSION POINT
Strategic shift from technology consumer to creator
Radhakant Das
7 arguments, 150 words per minute, 1679 words, 670 seconds
Argument 1
Intelligence should be regarded as the basic infrastructure for the next wave of digital evolution.
EXPLANATION
Radhakant describes intelligence as the foundational layer upon which future networks, services, and applications will be built, likening it to traditional physical infrastructure. This view underlines the centrality of AI in upcoming technology roadmaps.
EVIDENCE
He says, “We are at a historic inflection point where the intelligence is the basic infra, based on which the next evolution of this planet will actually continue” (sentences [88-89]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The positioning of AI as the foundational infrastructure is articulated in the ethical AI in telecom analysis, which describes AI as the intelligence layer of future networks [S14] and the Indian 6G vision [S1].
MAJOR DISCUSSION POINT
Intelligence as foundational infrastructure
Argument 2
AI deployments must be energy‑efficient to avoid excessive power consumption in data centres.
EXPLANATION
Radhakant stresses that AI should not cause data centres to overheat or consume disproportionate energy, calling for power‑efficient AI designs. This reflects concerns about sustainability as AI scales.
EVIDENCE
He notes, “we will ensure that AI is kind of energy efficient. It will not be responsible for melting the data centres. It will be power efficient” (sentences [98-101]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Energy-efficient AI and avoiding data-centre overload are highlighted in the trusted AI guidelines for telecom [S14] and edge AI efficiency studies [S22].
MAJOR DISCUSSION POINT
Energy‑efficient AI
Argument 3
Semantic communications and AI should be strategically managed to maximise compute utilisation and minimise waste.
EXPLANATION
He argues that AI workloads need to be orchestrated so that compute capacity is used optimally, avoiding idle resources. This ties AI performance to overall network efficiency.
EVIDENCE
He adds, “we will ensure that every compute capacity is being optimally utilized, not like we have enough compute and we will use it as much as possible” (sentences [102-104]).
MAJOR DISCUSSION POINT
Optimising compute utilisation for AI
Argument 4
Industry should adopt existing 5G labs to accelerate 6G use‑case development and testing.
EXPLANATION
Radhakant urges companies to partner with operational 5G labs, leveraging their infrastructure to fast‑track 6G research and prototype validation. This is a call for practical collaboration.
EVIDENCE
He says, “has also urged some of the industries to take over or adopt a couple of these labs” (sentences [176-179]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The recommendation to leverage existing 5G labs for 6G development is supported by the government’s 5G lab rollout description [S1].
MAJOR DISCUSSION POINT
Leveraging 5G labs for 6G innovation
Argument 5
The emergence of small “tokens” (lightweight data packets) will reshape traffic patterns and must be carefully managed.
EXPLANATION
He highlights that token‑based communication could alter uplink/downlink dynamics, while questioning whether small tokens alone would really reverse traffic patterns. This points to a need for new traffic engineering approaches.
EVIDENCE
He discusses tokens, stating, “There are a lot of popular talks like it is going to the 6Gs or the AI is going to reverse the traffic pattern. But why? Just tokens” (sentences [208-217]).
MAJOR DISCUSSION POINT
Impact of token‑based traffic on network design
Argument 6
A coordinated national framework, including safety guardrails and standard‑alignment mechanisms, is essential for AI‑6G pilots to avoid siloed development.
EXPLANATION
Radhakant asks how industry, academia, and government can co‑create referenceable frameworks and safety guidelines so that pilots are interoperable and ready for upcoming standards. This stresses governance and collaboration.
EVIDENCE
He asks, “what specific coordination mechanisms or co-creation models do you think we all should work together as industry, academia, government to ensure that these pilots, they align to the standards… also safety guidelines” (sentences [229-236]).
MAJOR DISCUSSION POINT
National coordination and safety for AI‑6G pilots
Argument 7
Sovereignty of the AI ecosystem should be achieved through a token‑based model that balances openness with national control.
EXPLANATION
He raises the question of how much of the AI stack should be open versus sovereign, proposing a token economy that keeps the entire value chain within India. This frames AI sovereignty as both technical and policy‑driven.
EVIDENCE
He asks, “how much of influencing would you like to see… sovereignty… token economy… we need a sovereign token” (sentences [277-287]).
MAJOR DISCUSSION POINT
Token‑based AI sovereignty
Audience
3 arguments, 152 words per minute, 438 words, 172 seconds
Argument 1
AI‑related applications need interoperable APIs across devices and platforms to prevent vendor lock‑in.
EXPLANATION
The audience member points out that emerging AI hardware (e.g., Meta glasses) may only work within a single ecosystem, calling for a common AI API architecture that works across operators and service providers.
EVIDENCE
The audience asks, “Should we not think of creating some AI API sort of architecture wherein a product created by one user side should work in different applications… should work with Google, should work with geo-applications” (sentences [312-319]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for interoperable AI APIs across ecosystems is advocated in the open platform blueprint for 6G [S18] and the AI-telecom interoperability report [S14].
MAJOR DISCUSSION POINT
Interoperable AI API architecture
Argument 2
India’s massive data sets and market scale should be leveraged to train AI models locally, reducing dependence on foreign LLMs.
EXPLANATION
The audience highlights India’s advantage of a billion‑user market and abundant data, asking how to use this to train and deploy models domestically, emphasizing self‑reliance in AI.
EVIDENCE
The audience states, “the advantage of India is like any applications we can scale to a billion users… we have a huge data set… how to leverage these two for AI… models are trained here, models are utilized here” (sentences [322-328]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Leveraging India’s large data sets for domestic AI model training aligns with the digital sovereignty and Indo-German collaboration initiatives [S20] and the national AI strategy [S21].
MAJOR DISCUSSION POINT
Domestic training of AI models using Indian data
Argument 3
Monetisation of a network‑API economy is a priority; telecom operators need an open AI ecosystem to enable enterprise use cases.
EXPLANATION
Sidhu from AT&T asks how India is materialising revenue from network‑API exchanges, noting that enterprises (e.g., banks) need visibility across multiple networks, and queries Jio’s role in this emerging market.
EVIDENCE
He asks, “how much of this monetisation of the network API-centric economy is materialising from India’s standpoint… Jio covers… 40-50% of the population” (sentences [346-355]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Monetisation of network-API services and the push for an open AI ecosystem are discussed in the future network open platform proposals [S18] and the WTSA leadership context [S19].
MAJOR DISCUSSION POINT
Network API monetisation and open AI ecosystem
Agreements
Agreement Points
AI must be a core, native component of 6G networks rather than an afterthought
Speakers: Ashok Kumar, Radhakant Das
AI as an afterthought in 5G, becoming native in 6G (Ashok Kumar) Intelligence should be regarded as the basic infrastructure for the next wave of digital evolution (Radhakant Das)
Both speakers stress that AI is now embedded in the design of 6G – the ITU framework explicitly includes integrated AI and the principle of ubiquitous intelligence, and intelligence is described as the foundational infrastructure for future networks [27-30][88-89].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with India’s 6G roadmap that positions AI as a native function of the network and stresses AI-at-the-core in standardisation work for future 6G systems [S34][S46].
An open, API‑driven and loosely‑coupled ecosystem is essential for scaling AI services and ensuring interoperability
Speakers: Rajeev Saluja, Sandeep Sharma, Audience
Open, loosely‑coupled API architecture is required so AI applications work across vendors and platforms (Rajeev Saluja) Open, interoperable ecosystem (e.g., UPI model) is essential for scaling AI services (Sandeep Sharma) AI‑related applications need interoperable APIs across devices and platforms to prevent vendor lock‑in (Audience)
All three emphasize that a UPI-style open API layer is needed so AI applications can operate across operators, devices and ecosystems without proprietary lock-in [330-336][262-264][312-319].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for an open, API-driven architecture matches recommendations from the Designing India’s Digital Future workshop and broader AI governance discussions that prioritize interoperable APIs across stakeholders [S34][S49][S33].
Domestic Indian data is crucial to train AI models that are relevant and unbiased for local use‑cases
Speakers: Surojeet Roy, Audience
Indian‑specific data is crucial to avoid bias and to tailor models to local contexts (Surojeet Roy) India’s massive data sets and market scale should be leveraged to train AI models locally, reducing dependence on foreign LLMs (Audience)
Both point out that AI models must be trained on Indian data to reflect local languages, behaviours and conditions, otherwise they risk bias and poor performance [300-304][322-328].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers on democratising AI stress the need for high-quality local datasets to build region-specific models, underscoring the importance of Indian data for unbiased AI outcomes [S51].
Edge‑centric AI inference reduces latency, distributes power consumption and eases pressure on central data‑centres
Speakers: Surojeet Roy, Rajeev Saluja, Radhakant Das
AI‑enabled devices will generate heavy uplink traffic, requiring edge inferencing (Surojeet Roy) Simple, latency‑sensitive inference should be handled at the edge; complex workflows remain in the core (Rajeev Saluja) Intelligence should be energy‑efficient and compute utilisation optimised (Radhakant Das)
All agree that moving inference to the edge improves latency, spreads power demand and makes better use of compute resources, while keeping more complex processing in the core/cloud [115-124][185-190][158-162][98-101].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on heterogeneous compute and edge-centric AI highlights latency reduction and lower data-centre load as key benefits of moving inference to the edge [S40][S41][S39].
A coordinated national framework—including standards participation, testbeds, funding schemes and safety guardrails—is needed to align pilots with emerging 6G standards
Speakers: Ashok Kumar, Sandeep Sharma, Radhakant Das
6G Accelerated Research Program, testbeds and collaboration with Bharat 6G Alliance (Ashok Kumar) Need for national frameworks, data‑exchange platforms and audit mechanisms to align pilots with standards (Sandeep Sharma) A coordinated national framework, safety guidelines and standard‑alignment mechanisms are essential for AI‑6G pilots (Radhakant Das)
Each speaker calls for a unified national approach-government-backed research programs, testbeds, data-exchange and safety frameworks-to ensure AI-6G pilots are interoperable and standards-ready [45-53][55-62][237-252][229-236].
POLICY CONTEXT (KNOWLEDGE BASE)
This recommendation echoes calls for a national AI framework that integrates standards, testbeds and safety mechanisms, as emphasized in 6G standardisation and multi-stakeholder governance initiatives [S46][S45][S33].
A sovereign, token‑based AI economy is required to keep the entire value chain within India
Speakers: Rajeev Saluja, Radhakant Das
A sovereign token‑based AI economy is required for end‑to‑end control of data, inference and value delivery (Rajeev Saluja) Sovereignty of the AI ecosystem should be achieved through a token‑based model that balances openness with national control (Radhakant Das)
Both advocate a token-driven model that ensures AI services, data and inference remain domestically owned and controlled, framing it as essential for national sovereignty [278-282][277-287].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on sovereignty mirrors AI governance discussions that balance open ecosystems with national control over AI infrastructure and data assets [S43][S33].
Similar Viewpoints
Both see the edge as the appropriate place for latency‑critical AI inference, while more complex processing can stay in the core/cloud [115-124][158-162].
Speakers: Surojeet Roy, Rajeev Saluja
AI‑enabled devices will generate heavy uplink traffic, requiring edge inferencing (Surojeet Roy) Simple, latency‑sensitive inference should be handled at the edge; complex workflows remain in the core (Rajeev Saluja)
Both stress the necessity of coordinated national programmes, testbeds and governance frameworks to steer AI‑6G development and avoid siloed pilots [45-53][55-62][237-252].
Speakers: Ashok Kumar, Sandeep Sharma
6G Accelerated Research Program, testbeds and collaboration with Bharat 6G Alliance (Ashok Kumar) Need for national frameworks, data‑exchange platforms and audit mechanisms to align pilots with standards (Sandeep Sharma)
Both propose a token‑centric model to achieve AI sovereignty, balancing openness with national control [278-282][277-287].
Speakers: Rajeev Saluja, Radhakant Das
A sovereign token‑based AI economy is required for end‑to‑end control (Rajeev Saluja) Sovereignty of the AI ecosystem should be achieved through a token‑based model (Radhakant Das)
Unexpected Consensus
Both the moderator (Radhakant Das) and industry speakers highlighted the impact of token‑based traffic patterns on network design, a topic usually reserved for technical specialists
Speakers: Radhakant Das, Surojeet Roy, Rajeev Saluja
Tokens may reshape traffic patterns and must be carefully managed (Radhakant Das) AI‑driven traffic will shift downlink‑to‑uplink ratio from ~10:1 to ~4:1, creating uplink pressure (Surojeet Roy) Token economy will drive AI‑based value creation and sovereignty (Rajeev Saluja)
The convergence of a high-level policy discussion on tokens with detailed technical forecasts about uplink traffic and economic token models was not anticipated, indicating a cross-disciplinary consensus on tokens shaping 6G architecture [208-217][185-190][278-282].
The audience’s concern about interoperable AI APIs found immediate resonance with the government’s and industry’s calls for open, API‑driven ecosystems
Speakers: Audience, Rajeev Saluja, Sandeep Sharma, Ashok Kumar
AI‑related applications need interoperable APIs across devices and platforms (Audience) Open, loosely‑coupled API architecture is required (Rajeev Saluja) Open, interoperable ecosystem (e.g., UPI) is essential (Sandeep Sharma) Government programmes aim to build an end‑to‑end 6G stack that can be open and collaborative (Ashok Kumar)
While audience members typically raise implementation questions, their demand for a common AI API aligned directly with the speakers’ strategic vision for an open ecosystem, showing unexpected alignment between end-user concerns and policy/industry strategy [312-319][330-336][262-264][45-53].
POLICY CONTEXT (KNOWLEDGE BASE)
Audience demand for API-centric interoperability was explicitly voiced in the Designing India’s Digital Future workshop and aligns with global AI interoperability agendas [S34][S49].
Overall Assessment

The discussion revealed strong convergence around five major themes: (1) AI as a native, foundational element of 6G; (2) the necessity of an open, API‑driven ecosystem; (3) the importance of Indian data and sovereign token‑based models; (4) edge‑centric AI inference for latency, power and uplink efficiency; and (5) the need for coordinated national frameworks, testbeds and safety guardrails. These points were echoed across government, industry and academic representatives, indicating a high level of consensus on the strategic direction for India’s 6G and AI roadmap.

High – most speakers, including the moderator, aligned on the same strategic priorities, suggesting that policy, standards participation and industry investment are likely to move forward in a coordinated manner.

Differences
Different Viewpoints
Openness vs sovereignty of the AI ecosystem (open, API‑driven architecture versus a sovereign token‑based model)
Speakers: Rajeev Saluja, Sandeep Sharma, Radhakant Das
- Open, loosely-coupled API architecture is required so AI applications work across vendors and platforms (Sandeep Sharma) [262-264]
- Open, API-driven, loosely-coupled ecosystem similar to UPI is essential for scaling AI services (Rajeev Saluja) [330-336]
- A sovereign token-based AI economy is required for end-to-end control of data, inference and value delivery (Rajeev Saluja) [278-282][284-287]
- Sovereignty of the AI ecosystem should be achieved through a token-based model that balances openness with national control (Radhakant Das) [277-287]
Rajeev and Sandeep argue that an open, interoperable API ecosystem (like UPI) is crucial for scaling AI across devices and operators, while Rajeev also promotes a sovereign token-based AI economy that keeps the entire value chain within India, a view echoed by Radhakant. The two positions clash over whether openness or national sovereignty should dominate the AI stack design [262-264][330-336][278-282][284-287][277-287].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open API models and sovereign token-based approaches reflects broader AI governance debates that seek to reconcile global interoperability with local sovereignty objectives [S43][S33].
Allocation of AI inference workload between edge and core/cloud
Speakers: Rajeev Saluja, Surojeet Roy
- Simple, latency-sensitive inference should be handled at the edge; complex multi-agent workflows remain in the core (Rajeev Saluja) [158-162]
- AI-enabled devices will generate heavy uplink traffic, requiring edge inferencing and a shift in traffic asymmetry (Surojeet Roy) [114-124][185-190]
Rajeev proposes a split where most simple AI tasks run at the edge and only complex workflows stay in the core, whereas Surojeet emphasizes that the surge in uplink traffic from AI devices will force a broader move of inference to the edge and redesign of the network, suggesting a more extensive edge shift than Rajeev envisions [158-162][114-124][185-190].
POLICY CONTEXT (KNOWLEDGE BASE)
Ongoing policy and technical discussions address how to split AI inference between edge and cloud, as highlighted in edge-centric AI workshops and analyses of cloud-edge workload balance [S39][S40][S41].
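The edge‑versus‑core split debated here can be illustrated with a minimal dispatch rule. This is a sketch only: the latency cutoff and the task categories are illustrative assumptions, not figures from the session.

```python
# Illustrative sketch of routing AI inference between edge and core.
# The 50 ms cutoff and the task categories are hypothetical assumptions.

def route_inference(latency_budget_ms: float, is_multi_agent: bool) -> str:
    """Decide where an inference request should run.

    Simple, latency-sensitive tasks go to the edge; complex
    multi-agent workflows stay in the core/cloud.
    """
    EDGE_LATENCY_CUTOFF_MS = 50.0  # assumed threshold for edge placement
    if is_multi_agent:
        return "core"          # complex workflows remain centralized
    if latency_budget_ms <= EDGE_LATENCY_CUTOFF_MS:
        return "edge"          # latency-critical, simple inference
    return "core"              # everything else can tolerate the round trip

print(route_inference(10.0, False))   # latency-critical, simple -> edge
print(route_inference(200.0, True))   # complex multi-agent -> core
```

A real policy would weigh more dimensions (power budget, uplink load, model size), which is exactly where Rajeev's and Surojeet's positions diverge.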
Unexpected Differences
Same speaker (Rajeev Saluja) simultaneously promotes an open API ecosystem and a sovereign token‑based AI economy
Speakers: Rajeev Saluja
- Open, loosely-coupled API architecture is required … (Rajeev Saluja) [330-336]
- A sovereign token-based AI economy is required for end-to-end control … (Rajeev Saluja) [278-282][284-287]
It is unexpected that a single participant advocates both a fully open, interoperable API model and a closed, nationally‑controlled token economy, positions that are logically at odds. This internal tension highlights the difficulty of reconciling openness with sovereignty in the Indian AI‑6G strategy.
Audience raises concern about AI‑hardware interoperability (Meta glasses) while panelists focus on network‑level AI integration
Speakers: Audience, Panel (Ashok Kumar, Surojeet Roy, Rajeev Saluja, Sandeep Sharma)
- AI-related applications need interoperable APIs across devices and platforms to prevent vendor lock-in (Audience) [312-319]
- Panel discussion centers on AI integration in the network, edge computing and standards, with no direct response to device-level API standardisation.
The audience’s request for a cross‑vendor AI API for hardware (e.g., Meta glasses) was not addressed by the panel, revealing an unexpected gap between hardware‑level interoperability concerns and the panel’s network‑centric focus.
Overall Assessment

The discussion shows broad consensus that AI must be native to 6G and that coordinated national effort is needed. However, substantive disagreements arise around the degree of openness versus national sovereignty of the AI stack, and the optimal placement of AI inference (edge vs core). These tensions reflect competing priorities of fostering an open, interoperable ecosystem while protecting strategic autonomy and managing infrastructure load.

Moderate – while participants share the same strategic vision, the clash over openness versus sovereignty and edge‑core allocation could slow consensus on policy and standard‑setting, requiring careful balancing in future road‑maps.

Partial Agreements
All speakers concur that AI must be a core, native component of future 6G networks, but they diverge on where the AI functionality should be placed (device‑level, edge, core) and on the balance between building domestic capability versus leveraging existing standards. The shared goal is AI‑native 6G; the disagreement lies in implementation pathways.
Speakers: Ashok Kumar, Surojeet Roy, Rajeev Saluja, Radhakant Das
- AI is being embedded natively in 6G from the outset (Ashok Kumar) [27-30]
- AI-enabled devices will require AI at the edge and will change traffic patterns (Surojeet Roy) [115-124]
- Intelligence is the basic infrastructure for the next digital evolution (Radhakant Das) [88-89]
- India must build, not rent, intelligence to ensure affordability and self-reliance (Rajeev Saluja) [154-157]
All agree that a national coordination mechanism is essential for successful AI‑6G pilots, but Ashok focuses on funding and testbeds, while Sandeep and Radhakant stress the creation of referenceable frameworks, data‑exchange mechanisms and safety guardrails. The goal of coordinated development is shared; the means (funding vs framework design) differ.
Speakers: Ashok Kumar, Sandeep Sharma, Radhakant Das
- Government-backed 6G Accelerated Research Program, testbeds and collaboration with Bharat 6G Alliance (Ashok Kumar) [45-53][55-62]
- Need for coordinated national frameworks, data-exchange platforms and safety guardrails to align pilots with standards (Sandeep Sharma) [237-252]
- Call for coordinated national framework, safety guidelines and standard-alignment for AI-6G pilots (Radhakant Das) [229-236]
Takeaways
Key takeaways
- AI transitioned from an afterthought in 5G to a native, “ubiquitous intelligence” element in the emerging 6G standards (ITU, 3GPP).
- The Indian government is actively building a 6G ecosystem through low‑cost 3GPP/TSDSI membership for startups, the 6G Accelerated Research Program, terahertz and AOC testbeds, and the Bharat 6G Alliance, plus a network of 5G labs in academia.
- AI‑enabled edge devices (smart glasses, wearables, sensors) will generate much higher uplink traffic, shifting the traditional downlink‑dominant pattern (≈10:1) toward a more balanced ratio (≈4:1) and requiring new uplink capacity and spectral‑efficiency improvements.
- AI techniques such as DeepRx/DeepTx can increase spectral efficiency and overall capacity by 25‑30% and enable higher‑order modulation, supporting the larger bandwidth (≈400 MHz) envisioned for 6G.
- Intelligence delivery will be split: latency‑sensitive, simple inference at the edge; complex multi‑agent workflows in the core/cloud, reducing data‑center power load and distributing energy consumption.
- India aims to “build, not rent” intelligence, creating a sovereign, token‑based AI economy that controls data, inference and value delivery end‑to‑end while keeping costs affordable.
- An open, API‑driven, interoperable ecosystem (similar to UPI) is essential for scaling AI services across vendors, devices and applications.
- National frameworks are needed for data exchange, model auditing, safety guardrails and alignment of pilots with forthcoming 6G standards to avoid siloed development.
- Enterprise value from AI‑native 6G will arise from demand analysis, workflow automation and enhanced security, especially for sectors like BFSI, manufacturing, healthcare and mobility.
- Leveraging existing 5G labs and testbeds is a practical pathway for industry and academia to prototype and validate 6G use‑cases.
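The spectral‑efficiency figures in the takeaways can be turned into a back‑of‑envelope capacity estimate. Only the 25‑30% gain and the ≈400 MHz bandwidth come from the session; the baseline spectral efficiency of 10 bps/Hz is an assumed illustrative value.

```python
# Back-of-envelope capacity uplift from the AI receiver/transmitter gains
# cited in the takeaways (25-30% spectral-efficiency improvement, ~400 MHz).
# The baseline spectral efficiency is an assumption for illustration only.

BANDWIDTH_HZ = 400e6               # ~400 MHz, as cited for 6G
BASELINE_SE_BPS_PER_HZ = 10.0      # assumed baseline, not from the session

baseline_gbps = BANDWIDTH_HZ * BASELINE_SE_BPS_PER_HZ / 1e9
for gain in (0.25, 0.30):
    boosted = baseline_gbps * (1 + gain)
    print(f"+{gain:.0%} spectral efficiency: "
          f"{baseline_gbps:.1f} -> {boosted:.1f} Gbps")
```

Under these assumptions a 25‑30% efficiency gain lifts cell capacity from 4.0 Gbps to roughly 5.0‑5.2 Gbps without any extra spectrum, which is the practical appeal of techniques like DeepRx/DeepTx.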
Resolutions and action items
- DOT to continue subsidising TSDSI and 3GPP membership for Indian startups (cost ~₹10,000).
- Maintain and expand the 6G Accelerated Research Program and associated testbeds (terahertz, AOC).
- Strengthen collaboration with the Bharat 6G Alliance and incorporate its working‑group recommendations into policy.
- Encourage industry participants to adopt one or two of the 100 existing 5G labs for 6G research and use‑case development.
- Develop a national data‑exchange platform to enable secure, anonymised sharing of industry data for AI model training.
- Create audit and safety‑guardrail frameworks for AI models operating within telecom networks.
- Promote an open, loosely‑coupled API architecture for AI applications to ensure cross‑vendor interoperability.
- Explore deployment of edge compute (e.g., GPUs at cell towers) to democratise AI inferencing and training resources.
Unresolved issues
- Exact quantitative split of AI inference workload among device, edge, core and cloud (no definitive percentages provided).
- Detailed roadmap and timeline for the Release 21 (first 6G‑specific) standard and its adoption.
- Specific mechanisms for governing the proposed token‑based AI economy, including pricing, settlement and regulatory oversight.
- Concrete standards or guidelines for AI safety, model explainability and real‑time intervention in live networks.
- Implementation plan for ensuring AI models are trained on India‑specific data to avoid bias, beyond the general call for a national data exchange.
- How to achieve full interoperability of AI‑driven applications across different vendor ecosystems (e.g., Meta glasses vs other platforms).
- Funding and resource allocation details for scaling the proposed national frameworks and sandbox environments.
Suggested compromises
- Adopt a hybrid ecosystem: keep core AI infrastructure and the token economy sovereign to protect national interests while maintaining open, API‑driven interfaces for broader innovation and interoperability.
- Balance edge and central processing by assigning latency‑sensitive tasks to the edge and more complex workloads to the core, thereby distributing power consumption and reducing data‑center load.
- Leverage existing 5G labs as a stepping stone to 6G, allowing immediate research while the full 6G standards and testbeds are still under development.
- Combine open‑source collaboration (e.g., a UPI‑style model) with sovereign data‑exchange platforms to enable both innovation and control over sensitive data.
Thought Provoking Comments
Ubiquitous intelligence – every element of our end‑to‑end 6G system, be it user equipment, radio, core or applications, will have AI embedded natively into the system.
Marks a fundamental shift from treating AI as an afterthought (as in 5G) to making it a core design principle of 6G, redefining how the whole network will be built.
Set the conceptual foundation for the whole panel. It prompted speakers to discuss AI‑native RAN designs, edge inference, and the need for new standards, steering the conversation toward integration rather than retro‑fitting.
Speaker: Ashok Kumar
By 2033, about 30% of traffic will be AI‑driven and the downlink‑to‑uplink ratio will change from roughly 10:1 to about 4:1.
Provides a concrete, data‑driven forecast that highlights a dramatic reversal in traffic patterns, emphasizing the upcoming uplink pressure caused by AI workloads.
Shifted the discussion from abstract AI benefits to concrete network engineering challenges. It led to deeper talks on uplink capacity, spectrum needs, and the necessity of edge compute to handle the surge.
Speaker: Surojeet Roy
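The ratio shift cited above can be made concrete with simple arithmetic: at 10:1, uplink is about 9% of total traffic; at 4:1 it is 20%, more than doubling the uplink share. A minimal sketch of that calculation:

```python
# Worked arithmetic for the downlink:uplink shift cited in the session
# (ratio moving from roughly 10:1 today to about 4:1 by 2033).

def uplink_share(dl_to_ul_ratio: float) -> float:
    """Fraction of total traffic carried on the uplink for a DL:UL ratio."""
    return 1.0 / (dl_to_ul_ratio + 1.0)

today = uplink_share(10.0)    # 1/11, roughly 9% of all traffic
future = uplink_share(4.0)    # 1/5, i.e. 20% of all traffic

print(f"uplink share at 10:1 -> {today:.1%}")
print(f"uplink share at 4:1  -> {future:.1%}")
print(f"uplink share grows by {future / today:.2f}x")
```

Even before any absolute traffic growth, this relative shift implies more than twice the uplink load per unit of total traffic, which is the engineering pressure behind the edge‑inference and spectral‑efficiency arguments.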
We cannot rent intelligence. We must build and democratize it so that the last citizen of India has affordable, pervasive intelligence.
Frames intelligence as a public utility rather than a commercial service, introducing a policy‑level perspective on self‑reliance and inclusivity.
Redirected the conversation toward sovereignty, cost‑effective deployment, and the role of government‑backed ecosystems, influencing later remarks on sovereign AI tokens and open standards.
Speaker: Rajeev Saluja
Latency is no longer just a network KPI; it becomes a productivity KPI for AI‑driven use cases. We need a national framework, data exchanges, and safety guardrails for AI in telecom.
Re‑defines performance measurement, linking network metrics directly to business outcomes, and stresses governance, data sharing, and safety – dimensions often overlooked in technical talks.
Expanded the dialogue to include economic (token) models, regulatory sandboxes, and audit mechanisms, prompting other panelists to address sovereignty and open‑ecosystem concerns.
Speaker: Sandeep Sharma
The AI‑native telco must be a sovereign end‑to‑end ecosystem – from device to cloud to edge – we need a token sovereign.
Introduces the novel concept of a “token sovereign” where the entire AI value chain is domestically owned and controlled, blending technical, economic, and geopolitical considerations.
Created a turning point that sparked debate on openness vs. sovereignty, with other speakers (Sandeep, Surojeet) weighing in on hybrid models and the need for both global interoperability and national control.
Speaker: Rajeev Saluja
Putting GPUs at cell towers can democratize AI by letting idle compute be used for training or inference when the network is not busy.
Offers a concrete, infrastructure‑level solution to distribute AI compute, linking edge resources to the earlier discussed uplink surge and edge inference needs.
Provided a practical implementation path, reinforcing the earlier points about edge AI and influencing the conversation toward feasible deployment strategies.
Speaker: Surojeet Roy
We need an open, API‑driven architecture so AI applications created by one vendor can work across different platforms and operators.
Highlights interoperability as a critical barrier to AI adoption, calling for standardized, loosely‑coupled APIs rather than proprietary silos.
Re‑emphasized the theme of open ecosystems, tying back to earlier remarks on standards participation, and set the stage for concluding remarks about collaborative frameworks.
Speaker: Rajeev Saluja (answer to audience question)
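The loosely‑coupled, vendor‑neutral API the panel called for can be sketched as a minimal service contract. All names and fields below are hypothetical illustrations for the idea of cross‑vendor portability, not a proposed standard.

```python
# Hypothetical sketch of a vendor-neutral AI service contract, in the
# spirit of the open, loosely-coupled API the panel called for.
# Every name and field here is illustrative, not an actual standard.

from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class InferenceRequest:
    model_id: str          # which model the application wants
    payload: bytes         # opaque input, e.g. sensor or camera data
    max_latency_ms: float  # the application's latency budget


@dataclass
class InferenceResult:
    output: bytes
    served_by: str         # "edge" or "core", wherever the call ran


class AIService(ABC):
    """Contract any vendor implements so applications stay portable."""

    @abstractmethod
    def infer(self, request: InferenceRequest) -> InferenceResult: ...


class LocalEchoService(AIService):
    """Trivial stand-in implementation, useful for testing the contract."""

    def infer(self, request: InferenceRequest) -> InferenceResult:
        return InferenceResult(output=request.payload, served_by="edge")


svc: AIService = LocalEchoService()
result = svc.infer(InferenceRequest("demo-model", b"hello", 50.0))
print(result.served_by)
```

Because applications depend only on the abstract contract, a device vendor, an operator edge node, or a cloud provider could each supply an implementation without locking the application to any of them, which is the UPI‑style property the speakers invoked.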
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level vision of AI‑enabled 6G to concrete technical, economic, and policy challenges. Ashok Kumar’s framing of “ubiquitous intelligence” established AI as a native design pillar, which was then quantified by Surojeet Roy’s traffic forecasts, prompting a shift toward network capacity and edge‑compute considerations. Rajeev Saluja’s emphasis on building (not renting) intelligence and his later sovereignty argument introduced a national self‑reliance narrative, while Sandeep Sharma reframed latency as a productivity metric and called for governance frameworks. Together, these comments redirected the panel toward actionable topics—standard participation, open APIs, distributed compute at cell sites, and the need for a sovereign token economy—thereby deepening the analysis and steering the dialogue toward both technical implementation and strategic policy direction.

Follow-up Questions
What is the precise percentage breakdown of AI inference workloads across device, edge, core network, and cloud environments?
Understanding the distribution is crucial for capacity planning, resource allocation, and designing AI-native 6G architecture.
Speaker: Radhakant Das
How much influence or processing should be allocated to cloud versus edge versus other layers for AI-driven traffic?
Quantifying the influence of each layer helps optimize latency, power consumption, and cost efficiency.
Speaker: Radhakant Das
What specific ROI metrics and success criteria should be used to evaluate AI‑6G anchor use cases in priority sectors (BFSI, manufacturing, healthcare, mobility) within the next 1.5 years?
Clear metrics are needed to justify investments and track the economic impact of early 6G deployments.
Speaker: Radhakant Das
What is the detailed evolution roadmap for 6G (devices, use cases, traffic growth, token economy) up to 2030?
A forward‑looking roadmap will guide research, standardisation, and industry investment decisions.
Speaker: Radhakant Das
What specific coordination mechanisms, co‑creation models, and safety‑guideline frameworks should be established among industry, academia, and government to align pilots with upcoming 6G standards?
Ensuring pilots are standards‑compliant and safe prevents siloed development and accelerates scalable deployment.
Speaker: Radhakant Das
How can an interoperable AI‑API architecture be created so that AI‑enabled devices (e.g., Meta glasses) work across different platforms and applications?
Standardised APIs would prevent vendor lock‑in and enable a broader ecosystem of AI services.
Speaker: Audience (unidentified participant)
What framework should be used to leverage India’s massive data for training large language models locally, including data exchange, anonymisation, and governance?
A national data‑exchange framework would unlock AI potential while protecting privacy and encouraging domestic model development.
Speaker: Audience (unidentified participant)
What are the viable monetisation models for a network‑API‑centric economy (e.g., OneEdge) in India, and how are they being materialised?
Understanding monetisation pathways is essential for telecom operators to create new revenue streams from AI services.
Speaker: Audience (Sidhu, AT&T)
What national framework is needed for AI‑native architectures, including sandbox environments, reference models, and audit mechanisms?
A coordinated framework will ensure pilots are scalable, interoperable, and compliant with future standards.
Speaker: Sandeep Sharma
How should the concept of token sovereignty be defined and operationalised within India’s AI‑6G ecosystem?
Clarifying token sovereignty impacts economic models, data ownership, and regulatory policies.
Speaker: Rajeev Saluja
What is the optimal distribution of power consumption between centralized data centres and edge compute for AI workloads in 6G?
Balancing power use affects sustainability, cost, and network performance.
Speaker: Surojeet Roy
What are the projected uplink traffic growth rates and required spectral‑efficiency enhancements to support AI‑driven use cases?
Accurate traffic forecasts guide spectrum allocation and technology upgrades for 6G.
Speaker: Surojeet Roy
How will AI impact latency and coverage requirements for latency‑sensitive applications such as robotic surgery and autonomous vehicles?
Quantifying these impacts is necessary to design networks that meet strict performance guarantees.
Speaker: Sandeep Sharma
What are the bandwidth requirements (e.g., 400 MHz) and spectral‑efficiency targets (e.g., 5×) for 6G, and how can they be achieved?
Defining these technical targets is essential for hardware design and spectrum policy.
Speaker: Surojeet Roy
What AI use cases can be developed for India’s informal sector workers, and how should models be trained on India‑specific data to avoid bias?
Tailoring AI to the informal economy can boost productivity, but requires culturally relevant data.
Speaker: Surojeet Roy
What safety and audit frameworks are required for AI models operating within telecom networks to ensure explainability and prevent unintended parameter changes?
Robust governance is critical to maintain network reliability and public trust.
Speaker: Sandeep Sharma
Is it feasible to democratise AI by deploying GPUs at cell‑tower sites for distributed training and inferencing, and what are the associated challenges?
Edge compute could lower costs and increase accessibility, but needs evaluation of resource management and security.
Speaker: Surojeet Roy
How should the balance between open (interoperable) and sovereign (nationally controlled) AI ecosystems be defined, and what hybrid approaches are recommended?
Finding the right mix will enable innovation while protecting national interests and data sovereignty.
Speaker: Rajeev Saluja, Sandeep Sharma, Surojeet Roy

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.