Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider

Session at a glanceSummary, keypoints, and speakers overview

Summary

Thomas Schneider opened the session by thanking the Indian government and the global audience for gathering at the AI Impact Summit in Delhi, emphasizing that the event’s focus on “people, progress, planet” reflects a shared ambition for inclusive AI development [1-3]. He reiterated that AI must be harnessed to deliver economic and societal benefits for everyone while safeguarding human dignity and the environment [4-6]. Switzerland announced it will host the next AI Summit in Geneva in 2027, noting the strong enthusiasm of Swiss and international stakeholders who are already contributing ideas for the agenda [8-10].


Schneider stressed that Switzerland’s motivation is not to stage a show but to make a substantive contribution to ensuring AI’s transformative power-comparable to the printing press or the combustion engine-raises global quality of life rather than diminishes it [12-14]. He outlined a plan to build on existing governance mechanisms such as the UN Internet Governance Forum, AI for Good Summit, ITU-UNESCO forums, OECD, and prior AI summits, thereby avoiding duplication and leveraging proven platforms [20-21]. Drawing an analogy with the two-century evolution of engine regulation, he argued that a complex mix of technical, legal, and societal norms-already evident in transport and manufacturing-must be developed for AI [27-34].


The speaker highlighted the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law as a principle-based framework that can be adapted globally, offering flexibility for diverse legal traditions while promoting interoperable standards [46-49]. Concluding, Schneider called for broad collaboration, positioning Switzerland as a facilitator that will bridge stakeholders from all regions to create pragmatic, trustworthy structures that enable AI to support peace, prosperity and human dignity, and expressed anticipation for the Geneva summit in 2027 [55-58].


Keypoints

Major discussion points


Inclusive, human-centred AI for the benefit of all peoples and the planet – The speaker stresses that AI must be developed “so that everyone in the world can benefit” while respecting human dignity, autonomy and the environment [4-6][14-15].


Switzerland’s facilitating role and the 2027 Geneva AI Summit – Switzerland will host the next summit, intends to “build on” existing platforms (UN-IGF, AI for Good, OECD, etc.) rather than reinvent them, and will act as a neutral facilitator that brings together diverse stakeholders [8-10][20-23][55-56].


A multi-layered governance approach modeled on the regulation of engines – The talk draws an analogy to the historic governance of combustion engines, arguing that AI will require a “set of thousands of technical, legal, and also non-written societal norms” and a mix of binding and non-binding instruments [27-34][41-45].


The Vilnius Convention as a cornerstone and the need for additional norms – The newly-negotiated Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law is highlighted as a “principle-based framework” that can be adapted globally, while acknowledging that many more sector-specific norms will be required [46-52].


Call for global, especially under-resourced, participation and gap-filling before the summit – The speaker pledges to help “less resourced communities” navigate the complex governance ecosystem and to use the time until Geneva to identify and close gaps in global and regional AI governance [23][42-53][55-56].


Overall purpose / goal


The address aims to set the agenda for the forthcoming AI Impact Summit in Geneva (2027), positioning Switzerland as a collaborative host that will coordinate existing international forums, advance a human-rights-based governance framework (exemplified by the Vilnius Convention), and mobilize worldwide stakeholders-including those from the global north, south, east and west-to co-create pragmatic, interoperable norms that ensure AI’s benefits are shared equitably and its risks are mitigated.


Tone of the discussion


The tone is consistently respectful, optimistic, and constructive. The speaker repeatedly expresses gratitude, enthusiasm, and a willingness to listen (“we are very keen to hear your ideas” [15]), while emphasizing partnership and collective problem-solving. There is no noticeable shift toward negativity or confrontation; the discourse remains collaborative from start to finish.


Speakers

Thomas Schneider


Areas of expertise: Global technology governance, artificial intelligence policy, internet governance, human rights and democracy in AI.


Roles and titles:


– Ambassador and Director of International Relations at Ofcom Switzerland[S3]


– Vice-Chair of the Council of Europe’s Committee on Artificial Intelligence[S3]


– Former Chair of ICANN’s Governmental Advisory Committee (2014-2017)[S1]


Additional speakers:


(None identified in the transcript)


Full session reportComprehensive analysis and detailed insights

Thomas Schneider opened the plenary by thanking the Indian government and the global audience for convening the AI Impact Summit in Delhi, and he framed the event’s three-fold focus on “people, progress, planet” as a shared ambition for inclusive AI development [1-3]. He emphasized that the promise of artificial intelligence must be harnessed so that everyone, regardless of geography, can share in economic and societal progress while safeguarding human dignity, personal autonomy and the health of the planet [4-6].


He then announced that Switzerland will host the next AI Impact Summit in Geneva in 2027. Schneider highlighted the strong enthusiasm already evident among Swiss stakeholders and the positive reactions from international partners, noting that many governments and civil-society actors have begun submitting ideas for the agenda [8-11]. The Swiss motivation for organising the next summit is “not to make a show”, but to make a substantive contribution to the global good-use of AI [12-14].


Drawing on history, he compared AI’s transformative capacity to inventions such as the printing press, radio, television, the internet and the combustion engine, arguing that-like those technologies-AI should raise rather than lower the quality of life for all peoples [13-14]. He also used an industry-regulation analogy, pointing out that, unlike the highly harmonised global airline sector, car regulations remain fragmented, underscoring that AI governance will likely exhibit similar variations across domains [60].


Schneider said the Geneva summit will carry a “Swiss flavour”, meaning it will be grounded in Switzerland’s tradition of constructive, neutral facilitation and will build on, rather than duplicate, existing multistakeholder mechanisms [18-23]. He listed the platforms that will be leveraged, including the UN Internet Governance Forum, the AI for Good Summit, the ITU-UNESCO Global Forum on Ethics of AI, OECD initiatives, the Global Partnership on AI (GPI) and other regional bodies [20-22][61].


To ensure that less-resourced communities can participate, he pledged continued support through long-standing partners such as the Diplo Foundation and the Geneva Internet Platform, enabling these groups to navigate the complex governance ecosystem and have their voices heard [23][55-56].


Schneider then described a layered governance architecture, noting that societies have never relied on a single institution to regulate transformative technologies. He illustrated this with the evolution of engine regulation, where “thousands of technical, legal and non-written societal norms” now coexist to manage transport, manufacturing and energy [27-34][41-45]. Accordingly, Switzerland is analysing existing AI governance instruments, identifying gaps, and drafting both technical norms and binding or non-binding legal instruments [42-46].


Central to these efforts is the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, which he chaired negotiations for among 55 countries. The convention provides a principle-based, flexible framework that can be adopted globally while allowing states to embed the principles within their own legal traditions, thereby promoting interoperable rather than identical regulations [46-51][49-50].


Schneider acknowledged that the Vilnius Convention alone will not be sufficient; additional sector-specific binding and non-binding instruments will be needed to ensure coherence across the AI governance landscape [52-53]. He called for the period leading up to the 2027 summit to be used for systematic gap-identification, the development of pragmatic norms, and the coordination of existing initiatives so that AI can continue to drive innovation while legitimate concerns are addressed [53-54].


Emphasising Switzerland’s role as a neutral facilitator, he said the Swiss team will act as bridge-builders, identifying areas of shared vision among stakeholders from the global north, south, east and west, and translating those consensuses into workable steps [55-56]. He reiterated the importance of inclusive participation, noting that the summit will strive to create “trustworthy cooperation” that respects dignity, promotes peace, prosperity and security, and ultimately allows AI to serve humanity and the planet [57-58].


In closing, Schneider expressed confidence that the collaborative approach-grounded in historical lessons, anchored by the Vilnius Convention, and reinforced by existing multistakeholder platforms-will enable the Geneva AI Impact Summit to produce concrete, interoperable outcomes. He thanked the audience for their support and looked forward to meeting the global community again in Geneva in 2027 [58-59].


Session transcriptComplete transcript of the session
Thomas Schneider

So, dear friends and colleagues from India and from all around the world, it is an honor and pleasure to be here with you in Delhi at this pivotal moment for global AI governance. And first, of course, I want to express my gratitude to the government of India for bringing together a diverse and distinguished group of leaders, innovators, researchers, civil society representatives from all around the world. Switzerland very much welcomes and supports the focus of the AI Impact Summit, which is well presented in the three sutras, people, progress, planet, as we all have learned in the past weeks and months. And we fully agree that we need to develop and use AI in a way that everyone in the world can benefit from the potential that AI offers.

This includes economic and societal social progress for everyone. At the same time, of course, we need to make sure that we are able to develop and use AI in a way that everyone in the world can benefit from the that we respect human dignity and autonomy, as well as our planet, which is the basis for all life that we know, at least so far. We haven’t found other life elsewhere. So we are honored and very proud to be hosting the next AI Summit in Geneva in 2027. It is overwhelming to see already now and feel the momentum and the enthusiasm that we sense on national level among all Swiss stakeholders, as well as the very positive reactions from our partners from all around the world, who are all eager and willing to cooperate with us and contribute to the summit in Geneva.

Already now, we are approached by many governmental and other stakeholders that share their ideas with us about what the Geneva Summit and the road leading up to it should focus on and what it should achieve. And let me assure you that this is very welcome and helpful to us. The Swiss motivation for organizing the next summit is to, not to make a show, it is to substantially and meaningfully contribute to achieving the goal that mankind and the world want to achieve. it is to substantially and meaningfully contribute to achieving the goal that mankind uses the unprecedented potential of AI to achieve the goal that mankind uses for good and not for bad. This potential of AI, which may be at least as transformative as the invention of the printing press, radio, television and the internet, as well as the invention of the combustion and other engines together, this potential must be used to raise and not lower the quality of life of all people in the world and not just a few.

AI must strengthen and not weaken the dignity and autonomy of all people in the global north, south, east and west or whatever we call the region where we live and help us all to live together in peace and prosperity. So we are very keen to hear your ideas about what we could and should do together to achieve this goal. Of course, we do have some ideas on our own, but we have not decided yet about the focus of the Geneva Summit. We will discuss it with you together, shape it together. Of course, there will be a Swiss flavor to the Geneva Summit, which is based on the way we work and what we understand, our role in the international community.

We will try to be constructive. Thank you. creative and innovative and try to find pragmatic and fair solutions through bringing together all stakeholders in their respective roles and with their respective experience and at the same time we will try not to reinvent the wheel and duplicate processes and instruments that already exist and that work but rather we will try to build on them because we do already have a number of dialogue platforms for AI governance and for sharing good practices such as the UN Internet Governance Forum and its national and regional initiatives, the AI for Good Summit and the Global Forum on Ethics of AI organized by ITU, UNESCO and many other UN related processes and forum.

We have other forum like the OECD, GPI and other international and regional organizations and of course we will build on the outcomes of the previous summit in the UK, Korea, Japan, sorry Paris, Japan will follow at some point in time, UK, Korea, Paris and of course here in Delhi and we should not forget There are many academic and other networks that provide expertise and solutions. So we will do our best to bring them all together. And with the help of our longstanding partners from the Diplo Foundation and the Geneva Internet Platform, we will also try to facilitate the orientation in this complex governance ecosystem, in particular for less resourced communities, so that also they know better about what is going on where and where we need to raise our voice so that they are actually heard.

At the same time, we consider the transformative power of AI to be too big, broad and context -specific so that no one single institution and no single instrument will allow us to seize all opportunities and will solve all problems. So we will have to learn to live with a certain complexity of the governance of this transformation. But also, this is not a completely new situation. If we look at how we have governed the transformative power of combustion and other engines in the past 200 years, there are some lessons that we can also apply to AI. While today we are developing AI to automate cognitive labor, we have developed engines to automate physical labor. We have put engines in vehicles or machines to move goods or people from one place to another.

And we have put engines in machines to produce food or other goods automatically. And we do not expect one single institution or instrument to govern all of this. But we have developed a set of thousands of technical, legal, and also non -written societal norms that guide us in the use of these machines. We have regulated also the infrastructure that these machines use. We are setting requirements and liabilities for the people that develop, handle, and steer these machines. And we have developed instruments to protect people that are affected by the impact of these machines. And we are seeing different levels of harmonizations when it comes to regulating machines and engines. As an example, of course, we know that the airline industry is much more harmonized because it’s global than the way we regulate cars.

Cars driving in our streets on one side or the other side, where there’s more diversity possible. So after 200 years, we are still continuing to adapt the governance framework for engine driven machines, depending on the context of use. And we need to do exactly the same with AI. We need to develop appropriate technical, legal and societal frameworks and norms that allow us to develop and use AI for good in many different ways. And this work has already begun. We have analyzed our existing governance frameworks, have started to identify and fill the gaps. We have started to work on technical norms for AI systems. We have started to work on binding and non -binding legal instruments. And of course, in this regard, I’d like to particularly highlight the Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law, for which I had the honor to lead the negotiations among 55 countries from all over the world at the Council of Europe in Strasbourg.

This provides for a principle based framework, not just for Europe, but for all countries. It provides for a principle based framework, not just for Europe, but for all countries on our planet that value human rights, democracy and the rule of law. so that our societies and economies can use AI to innovate, while at the same time we uphold our respect to human dignity and autonomy, also in the context of AI. The principles set out by the Vilnius Convention are simple and clear, but the Convention leaves enough leeway to participating states in order to allow to embed these principles in their existing legal and regulatory institutions and traditions. This will allow many countries to become parties to this global convention and to make sure that their governance frameworks may, although not become identical, but at least interoperable.

This Convention, which we hope will be ratified and enter into force very soon, will become one important instrument to make sure that AI is used for the good and not the bad. But of course, there will have to be many more binding and non -binding norms and more sector -specific norms and instruments to complement it, which hopefully will be… at least coherent in their logic and spirit. So we will use the time until the Geneva Summit next year to continue to identify gaps in global and regional governance of AI and achieve our shared objectives so that AI is used for innovation, while at the same time legitimate concerns and risks are appropriately addressed. Switzerland will be the host of the next summit, but we know that we will not be able to achieve anything on our own.

So we look forward to collaborating with all of you, with all countries and all stakeholders from the global north, south, east and west, and we will first try to identify areas where there’s a willingness and a shared vision to make progress together and then work with all of you on pragmatic and workable steps towards this vision. We will only be the facilitators trying to build bridges and build a climate of open and respectful and constructive dialogue, trying to offer pragmatic structures for trustworthy cooperation so that we can all use the potential AI, to say it again, to live together in peace, prosperity and security. Dignity. Dignity. The Swiss Summit team and I personally are looking forward to collaborating with all of you in the coming months, and we look forward to seeing you all in Geneva in 2027.

Thank you for your support and attention.

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (6)
Confirmedhigh

“Thomas Schneider opened the plenary by thanking the Indian government and the global audience for convening the AI Impact Summit in Delhi, and he framed the event’s three‑fold focus on “people, progress, planet” as a shared ambition for inclusive AI development.”

The knowledge base records Schneider’s opening remarks thanking India and highlighting the three sutras – people, progress and planet – as core principles of the summit [S5] and [S4].

Confirmedmedium

“He emphasized that the promise of artificial intelligence must be harnessed so that everyone, regardless of geography, can share in economic and societal progress while safeguarding human dignity, personal autonomy and the health of the planet.”

Sources note that AI is seen as a tool for equitable development and that its benefits should be broadly distributed across humanity, aligning with Schneider’s statement [S45] and [S10].

Confirmedhigh

“He announced that Switzerland will host the next AI Impact Summit in Geneva in 2027.”

Both the S4 and S46 entries confirm that Schneider announced Switzerland as the host of the 2027 summit in Geneva.

Confirmedmedium

“He used an industry‑regulation analogy, pointing out that, unlike the highly harmonised global airline sector, car regulations remain fragmented, underscoring that AI governance will likely exhibit similar variations across domains.”

The parallel with fragmented engine (car) regulation versus more unified sectors is documented in the knowledge base discussion of AI governance analogies [S13].

Confirmedmedium

“Schneider said the Geneva summit will carry a “Swiss flavour”, meaning it will be grounded in Switzerland’s tradition of constructive, neutral facilitation and will build on, rather than duplicate, existing multistakeholder mechanisms.”

S7 describes Switzerland’s approach of leveraging existing policy architectures and multistakeholder mechanisms rather than creating new institutions, matching the “Swiss flavour” description.

Confirmedmedium

“Schneider then described a layered governance architecture, noting that societies have never relied on a single institution to regulate transformative technologies, illustrated with the evolution of engine regulation where “thousands of technical, legal and non‑written societal norms” now coexist.”

The knowledge base explicitly draws the same analogy between AI governance and the multifaceted regulation of engines, emphasizing the need for layered, diverse norms [S13].

External Sources (54)
S1
Thomas Schneider — From 2014 to 2017, Schneider was the chair of ICANN’s Governmental Advisory Committee (GAC) and in this role negotiated …
S2
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Cedric Sabbah:Sedgwick, Shomael? Hi. Hi. Yeah? You guys hear me? Yes, we can hear you. Awesome. Is now a good time to st…
S3
Day 0 Event #61 Accelerating progress for unified digital cooperation — – Thomas Schneider: Ambassador and Director of International Relations at Ofcom Switzerland, Vice Chair of the Council o…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — “And we fully agree that we need to develop and use AI in a way that everyone in the world can benefit from the potentia…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Schneider argues that AI development should be inclusive and beneficial for all people worldwide, not just a select few….
S6
Main Topic 3 – Keynote — Marija Pejčinović Burić:Deputy Secretary General Lamanauskas, distinguished guests, ladies and gentlemen. It is a great …
S7
Unpacking the High-Level Panel’s Report on Digital Cooperation: Geneva policy experts propose action plan — Referring to the contributions, Amb. Thomas Schneider, Head of International Relations, Swiss Federal Office of Communic…
S8
WS #98 Towards a global, risk-adaptive AI governance framework — Thomas Schneider: Thank you very much. And actually, yeah, it’s good that somebody, one of the sessions actually trie…
S9
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S10
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — I don’t remember when it was discussed in the past, but this conversation are opening up the floor for bringing the valu…
S11
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S12
Democratizing AI: Open foundations and shared resources for global impact — ## Introduction and Switzerland’s Strategic Position Bernard Maissen: Yes, thank you. Hello, everybody, dear panelists….
S13
State of play of major global AI Governance processes — The speaker advocates for a nuanced perspective on AI governance, drawing a parallel with the multifaceted regulation of…
S14
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S15
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S16
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Another observation relates to the capacity and resources of civil society, especially in marginalized groups and global…
S17
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S18
Secure Finance Risk-Based AI Policy for the Banking Sector — The moderator emphasizes that AI governance should not be viewed through a completely different lens but should be integ…
S19
WS #97 Interoperability of AI Governance: Scope and Mechanism — Mauricio Gibson: People hear me? Yes. Thank you all for having me. It’s a pleasure to be here. I’m going to build on w…
S20
What is it about AI that we need to regulate? — The Role of International Institutions in Setting Norms for Advanced TechnologiesThe discussions across IGF 2025 session…
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — The Vilnius Convention approach of providing principle-based framework while allowing countries flexibility to embed pri…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — “It provides for a principle based framework, not just for Europe, but for all countries on our planet that value human …
S23
Comprehensive Report: UN General Assembly High-Level Meeting on the 20-Year Review of the World Summit on the Information Society (WSIS) Outcomes — responsibilities. We must ensure that artificial intelligence is developed and used in ways that respect human dignity, …
S24
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Ante este panorama, los países del sur global debemos priorizar estrategias y normativas para un uso ético y responsable…
S25
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — In conclusion, the extended analysis highlights the need for a balanced approach to digitalization and climate change. E…
S26
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S27
Closing remarks – Charting the path forward — Bouverot emphasizes that AI governance must address environmental concerns by incorporating sustainability measures. Thi…
S28
Navigating the Double-Edged Sword: ICT’s and AI’s Impact on Energy Consumption, GHG Emissions, and Environmental Sustainability — It is argued that understanding the environmental consequences can catalyse more efficient methods for reducing and mana…
S29
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Legal and regulatory | Sustainable development | Development Reports consistently identify governance of artificial int…
S30
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Schneider argues that AI development should be inclusive and beneficial for all people worldwide, not just a select few….
S31
WS #270 Understanding digital exclusion in AI era — The speaker advocates for a human-centered approach in AI design to ensure inclusivity and accessibility. This approach …
S32
Open Forum #33 Building an International AI Cooperation Ecosystem — Ethical Considerations and Inclusivity Human rights principles | Children rights | Privacy and data protection Pelayo …
S33
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — **Ernst Noorman**, Cyber Ambassador for the Netherlands and co-chair of the FOC Task Force on AI and Human Rights, share…
S34
Planetary Limits of AI: Governance for Just Digitalisation? | IGF 2023 Open Forum #37 — It is crucial for these countries to take responsibility and work towards mitigating their impact on the environment. Ad…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Amb Thomas Schneider — Already now, we are approached by many governmental and other stakeholders that share their ideas with us about what the…
S36
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Thank you for inviting me to this important summit. It is an honor to be here in India at this pivotal moment for global…
S37
State of play of major global AI Governance processes — The speaker advocates for a nuanced perspective on AI governance, drawing a parallel with the multifaceted regulation of…
S38
WS #98 Towards a global, risk-adaptive AI governance framework — Thomas Schneider: Thank you very much. And actually, yeah, it’s good that somebody, one of the sessions actually trie…
S39
From principles to practice: Governing advanced AI in action — Discussion of different governance approaches being implemented across regions and stakeholder groups Governance Approa…
S40
Comprehensive Report: European Approaches to AI Regulation and Governance — International Cooperation and Standards The Council of Europe Convention establishes general principles similar to Huma…
S41
Multistakeholder Partnerships for Thriving AI Ecosystems — “And I would say it’s not an innovation gap, it’s a power gap.”[19]. “So all those things need framework and need govern…
S42
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — **Implementation Gaps**: Questions from participants highlighted the challenge of connecting high-level policy discussio…
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Esther Tetruashvily responded by describing OpenAI’s efforts to evaluate model performance across various languages and …
S44
UN Tech Envoy: AI report to bridge AI governance gaps — Last week, the United Nations (UN) established a 39-memberHigh-Level Advisory Bodyto address global concerns regarding t…
S45
High Level Dialogue with the Secretary-General — He mentions the potential of artificial intelligence as a tool for development if used equitably.
S46
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — The World Meteorological Organization harnesses AI for climate prediction. The International Labor Organization explores…
S47
Ad Hoc Consultation: Monday 5th February, Morning session — Switzerland has been actively engaged in discussions regarding the adoption of an international resolution, expressing a…
S48
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Matthew Prince Cloudflare — Prince uses the historical example of the printing press to illustrate how transformative technologies succeed when they…
S49
Secure Finance Risk-Based AI Policy for the Banking Sector — Economic Advisor Sanjeev Sanyal fundamentally challenged prevailing approaches to AI regulation, arguing that traditiona…
S50
One-Person Enterprise — Richard Socher argues that AI will lead to the creation of entirely new job categories that we currently cannot predict….
S51
Impact the Future – Compassion AI | IGF 2023 Town Hall #63 — The analysis highlights the role of technology in historical transformations. Throughout history, technology has played …
S52
Steering the future of AI — Legal and regulatory | Cybersecurity Thompson challenges LeCun’s aviation safety analogy by highlighting that the aviat…
S53
Keynotes — O’Flaherty acknowledges that the regulatory work is not finished and that current regulatory models will likely be insuf…
S54
Laying the foundations for AI governance — – **Industry perspective on regulation**: Companies, particularly startups, actually want regulation but need clarity an…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
T
Thomas Schneider
12 arguments184 words per minute1721 words558 seconds
Argument 1
Universal benefit and respect for human dignity, autonomy, and the planet (Thomas Schneider)
EXPLANATION
Schneider stresses that AI should be developed and deployed so that all people worldwide can share its benefits, while safeguarding human dignity, personal autonomy, and the health of the planet. He links economic and societal progress to these ethical imperatives.
EVIDENCE
He states that AI must be used in a way that everyone in the world can benefit, emphasizing economic and societal progress for all [4-5], and adds that this must be done while respecting human dignity, autonomy, and the planet as the basis for life [6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider stresses that AI should be developed for the benefit of everyone worldwide while upholding human dignity, personal autonomy and environmental protection, as highlighted in his keynote remarks [S4][S5].
MAJOR DISCUSSION POINT
Inclusive and Ethical AI Development
Argument 2
AI’s transformative power likened to historic inventions, must raise global quality of life (Thomas Schneider)
EXPLANATION
Schneider compares AI’s potential impact to that of the printing press, radio, television, the internet, and combustion engines, arguing that AI should be harnessed to improve the quality of life for all humanity rather than exacerbate inequalities.
EVIDENCE
He describes AI as potentially as transformative as the invention of the printing press, radio, television, the internet, and combustion engines, and insists that this potential must raise, not lower, global quality of life [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He compares AI’s potential impact to the printing press, radio, television, the internet and combustion engines, arguing that this transformative power must raise, not lower, global quality of life [S4][S5].
MAJOR DISCUSSION POINT
Inclusive and Ethical AI Development
Argument 3
Hosting the Geneva Summit to make a substantive contribution, not a showcase (Thomas Schneider)
EXPLANATION
Schneider explains that Switzerland’s motivation for organizing the 2027 Geneva AI Summit is to make a meaningful, substantive contribution to global AI governance rather than simply staging a high‑profile event.
EVIDENCE
He explicitly says the Swiss motivation is “not to make a show, it is to substantially and meaningfully contribute” to the goal of using AI for good [12], and notes that Switzerland will host the next summit in Geneva in 2027 [8].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider explicitly states that Switzerland’s motivation for the 2027 Geneva AI Summit is to make a substantive, meaningful contribution rather than a mere showcase [S4].
MAJOR DISCUSSION POINT
Switzerland’s Role and the Geneva AI Summit 2027
Argument 4
“Swiss flavor” emphasizing constructive, pragmatic, fair solutions built on existing initiatives (Thomas Schneider)
EXPLANATION
Schneider promises that the Geneva Summit will carry a distinct Swiss approach characterized by constructive, pragmatic, and fair problem‑solving, leveraging and extending existing multistakeholder platforms rather than reinventing them.
EVIDENCE
He mentions a “Swiss flavor” based on constructive work, and outlines a plan to be creative, innovative, pragmatic and fair while building on existing platforms such as the UN IGF, AI for Good, ITU, UNESCO, OECD, etc. [18-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He promises a distinct “Swiss flavour” characterised by constructive, pragmatic and fair problem-solving, building on existing multistakeholder platforms [S5][S4].
MAJOR DISCUSSION POINT
Switzerland’s Role and the Geneva AI Summit 2027
Argument 5
Build on UN IGF, AI for Good, ITU, UNESCO, OECD and other platforms; avoid duplicating efforts (Thomas Schneider)
EXPLANATION
Schneider argues that the upcoming summit should integrate and reinforce the many existing AI governance forums and initiatives, avoiding duplication and leveraging the work already done by these bodies.
EVIDENCE
He lists the UN Internet Governance Forum, AI for Good Summit, ITU, UNESCO, OECD, GPI and other international and regional organizations, emphasizing that the summit will build on their outcomes rather than reinvent the wheel [20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider argues that the summit should integrate and reinforce existing initiatives such as the UN IGF, AI for Good, ITU, UNESCO and OECD, avoiding duplication of effort [S5][S4].
MAJOR DISCUSSION POINT
Leveraging Existing Governance Ecosystems
Argument 6
Support less‑resourced communities through partners like the Diplo Foundation and Geneva Internet Platform (Thomas Schneider)
EXPLANATION
Schneider highlights the need to help under‑resourced stakeholders navigate the complex AI governance landscape by partnering with organizations that can provide orientation and amplify their voices.
EVIDENCE
He notes that, together with the Diplo Foundation and the Geneva Internet Platform, Switzerland will facilitate orientation for less-resourced communities so they can understand where to raise their voice and be heard [23].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights partnerships with the Diplo Foundation and the Geneva Internet Platform to help under-resourced stakeholders navigate AI governance and have their voices heard [S5].
MAJOR DISCUSSION POINT
Leveraging Existing Governance Ecosystems
Argument 7
No single institution can govern AI; require technical, legal, and societal norms akin to engine regulation (Thomas Schneider)
EXPLANATION
Schneider contends that AI’s breadth and context‑specificity mean that governance must be multi‑layered, involving technical standards, legal rules, and societal norms, similar to how societies have regulated combustion engines over the past two centuries.
EVIDENCE
He explains that the transformative power of AI is too broad for a single institution, drawing parallels with the historical governance of engines and describing the development of thousands of technical, legal, and societal norms that guide machine use [24-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider draws a parallel with the historical regulation of combustion engines, noting that AI’s breadth requires a multi-layered governance system of technical, legal and societal norms rather than a single institution [S4][S5].
MAJOR DISCUSSION POINT
Need for Multi‑Layered Governance Frameworks
Argument 8
Develop binding, non‑binding, and sector‑specific instruments ensuring interoperability (Thomas Schneider)
EXPLANATION
Schneider calls for a suite of governance tools—including binding treaties, non‑binding standards, and sector‑specific rules—that are interoperable across jurisdictions, allowing diverse legal traditions to work together.
EVIDENCE
He mentions work on technical norms, binding and non-binding legal instruments, and the need for many more sector-specific norms that remain coherent in logic and spirit, emphasizing interoperability [45-52].
MAJOR DISCUSSION POINT
Need for Multi‑Layered Governance Frameworks
Argument 9
Provides a principle‑based, flexible framework for AI, human rights, democracy, and rule of law (Thomas Schneider)
EXPLANATION
Schneider presents the Vilnius Convention as a principle‑based instrument that offers a flexible, rights‑focused framework applicable beyond Europe, guiding AI development in line with human rights, democracy, and the rule of law.
EVIDENCE
He highlights the Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law, noting its principle-based nature and its relevance for all countries that value these values [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cites the Vilnius Convention as a principle-based, flexible instrument that can guide AI development in line with human rights, democracy and the rule of law [S4][S5].
MAJOR DISCUSSION POINT
The Vilnius Convention as a Foundational Instrument
Argument 10
Enables global adoption while allowing states to embed principles within their own legal traditions (Thomas Schneider)
EXPLANATION
Schneider explains that the Convention’s flexibility lets states incorporate its principles into existing legal and regulatory frameworks, fostering broad participation and interoperable governance without demanding identical laws.
EVIDENCE
He states that the Convention leaves enough leeway for participating states to embed principles in their own institutions, allowing many countries to become parties and achieve interoperable frameworks, and that it is expected to be ratified soon [49-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider explains that the Convention’s design leaves sufficient leeway for states to incorporate its principles into their own legal and regulatory frameworks, facilitating broad participation [S4].
MAJOR DISCUSSION POINT
The Vilnius Convention as a Foundational Instrument
Argument 11
Identify shared vision areas, pursue pragmatic steps, act as facilitators and bridge‑builders (Thomas Schneider)
EXPLANATION
Schneider proposes that Switzerland will work with all stakeholders to pinpoint common goals, take practical actions, and serve as a neutral facilitator that builds bridges among diverse actors.
EVIDENCE
He says Switzerland will look for willingness and shared vision, then work on pragmatic steps, acting as facilitators to build bridges and a climate of respectful dialogue [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He states that Switzerland will act as a neutral facilitator, building bridges and fostering respectful, constructive dialogue among diverse stakeholders [S5].
MAJOR DISCUSSION POINT
Collaborative Path Forward to the Summit
Argument 12
Use the interim period to close governance gaps, ensuring AI drives innovation while mitigating risks (Thomas Schneider)
EXPLANATION
Schneider urges that the time before the 2027 summit be used to identify and fill gaps in global and regional AI governance, so that AI can foster innovation while addressing legitimate concerns and risks.
EVIDENCE
He notes that the period until the Geneva Summit will be used to continue identifying gaps in AI governance and to ensure AI is used for innovation while appropriately addressing risks [53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Schneider urges using the time before the 2027 summit to identify and fill AI governance gaps, aligning with calls for a risk-adaptive governance framework and an improved policy architecture built on existing mechanisms [S8][S7].
MAJOR DISCUSSION POINT
Collaborative Path Forward to the Summit
Agreements
Agreement Points
AI should be developed and used so that everyone worldwide benefits while respecting human dignity, autonomy and the planet
Speakers: Thomas Schneider
Universal benefit and respect for human dignity, autonomy, and the planet (Thomas Schneider) AI’s transformative power likened to historic inventions, must raise global quality of life (Thomas Schneider) Support less‑resourced communities through partners like the Diplo Foundation and Geneva Internet Platform (Thomas Schneider) Use the interim period to close governance gaps, ensuring AI drives innovation while mitigating risks (Thomas Schneider)
Schneider repeatedly stresses that AI must be inclusive, ethically grounded and environmentally conscious, linking economic and societal progress to respect for human rights and the planet, and calling for concrete actions before the 2027 summit [4-6][13][23][53].
POLICY CONTEXT (KNOWLEDGE BASE)
This vision echoes the UN General Assembly’s call for AI that respects human dignity, ethical principles and the rule of law, as highlighted in the WSIS 20-year review outcomes [S23], and aligns with recent calls to embed sustainability and planetary health considerations into AI governance [S25][S27][S28].
Governance of AI requires a multi‑layered, pragmatic approach that builds on existing multistakeholder platforms and avoids duplication
Speakers: Thomas Schneider
“Swiss flavor” emphasizing constructive, pragmatic, fair solutions built on existing initiatives (Thomas Schneider) Build on UN IGF, AI for Good, ITU, UNESCO, OECD and other platforms; avoid duplicating efforts (Thomas Schneider) No single institution can govern AI; require technical, legal, and societal norms akin to engine regulation (Thomas Schneider) Develop binding, non‑binding, and sector‑specific instruments ensuring interoperability (Thomas Schneider)
Schneider outlines a Swiss-flavoured, constructive agenda that leverages the UN IGF, AI for Good, ITU, UNESCO, OECD and other bodies, argues that AI governance must be multi-layered like historic engine regulation, and calls for a suite of interoperable binding and non-binding instruments [18-20][24-32][45-52].
POLICY CONTEXT (KNOWLEDGE BASE)
The recommendation mirrors the risk-based, embedded-layer approach advocated for AI in the banking sector, which stresses building on existing regulatory infrastructure rather than creating parallel regimes [S18], and reflects broader calls for interoperable, multi-stakeholder AI governance discussed at IGF sessions [S19][S20].
The Vilnius Convention provides a principle‑based, flexible framework that can be adopted globally while allowing states to embed its principles in their own legal traditions
Speakers: Thomas Schneider
Provides a principle‑based, flexible framework for AI, human rights, democracy, and rule of law (Thomas Schneider) Enables global adoption while allowing states to embed principles within their own legal traditions (Thomas Schneider)
Schneider presents the Vilnius Convention as a rights-focused, principle-based instrument applicable beyond Europe, offering flexibility for national implementation and promoting interoperable governance [46-48][49-51].
POLICY CONTEXT (KNOWLEDGE BASE)
Ambassador Thomas Schneider explicitly described the Vilnius Convention as a principle-based, flexible framework that enables countries to integrate its norms into national law while upholding human rights and the rule of law [S21][S22].
Switzerland will act as a neutral facilitator, identifying shared visions and taking pragmatic steps toward the 2027 Geneva AI Summit
Speakers: Thomas Schneider
Hosting the Geneva Summit to make a substantive contribution, not a showcase (Thomas Schneider) Identify shared vision areas, pursue pragmatic steps, act as facilitators and bridge‑builders (Thomas Schneider) Use the interim period to close governance gaps, ensuring AI drives innovation while mitigating risks (Thomas Schneider)
Schneider emphasizes that Switzerland’s role is to facilitate collaboration, not to stage a show, by pinpointing common goals, building bridges among stakeholders and using the time before the summit to fill governance gaps [8][12][55-56][53].
Similar Viewpoints
Both arguments stress that AI’s historic‑scale impact must be harnessed to improve quality of life for all, linking technological progress with ethical imperatives [4-6][13].
Speakers: Thomas Schneider, Thomas Schneider
Universal benefit and respect for human dignity, autonomy, and the planet (Thomas Schneider) AI’s transformative power likened to historic inventions, must raise global quality of life (Thomas Schneider)
Both highlight a pragmatic, constructive approach that leverages existing multistakeholder platforms rather than creating new parallel structures [18-20].
Speakers: Thomas Schneider, Thomas Schneider
“Swiss flavor” emphasizing constructive, pragmatic, fair solutions built on existing initiatives (Thomas Schneider) Build on UN IGF, AI for Good, ITU, UNESCO, OECD and other platforms; avoid duplicating efforts (Thomas Schneider)
Both argue for a layered governance architecture combining technical standards, legal rules and sector‑specific norms to achieve interoperable outcomes [24-32][45-52].
Speakers: Thomas Schneider, Thomas Schneider
No single institution can govern AI; require technical, legal, and societal norms akin to engine regulation (Thomas Schneider) Develop binding, non‑binding, and sector‑specific instruments ensuring interoperability (Thomas Schneider)
Both present the Vilnius Convention as a flexible, principle‑based instrument that can be adopted worldwide while respecting national legal diversity [46-48][49-51].
Speakers: Thomas Schneider, Thomas Schneider
Provides a principle‑based, flexible framework for AI, human rights, democracy, and rule of law (Thomas Schneider) Enables global adoption while allowing states to embed principles within their own legal traditions (Thomas Schneider)
Unexpected Consensus
Linking AI governance directly to environmental protection and planetary health
Speakers: Thomas Schneider
Universal benefit and respect for human dignity, autonomy, and the planet (Thomas Schneider) AI’s transformative power likened to historic inventions, must raise global quality of life (Thomas Schneider)
While AI discussions often focus on ethics and socio-economic impacts, Schneider explicitly integrates the planet as a foundational element for AI’s beneficial use, an uncommon convergence of AI governance and environmental stewardship [4-6][13].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple recent reports stress the need to couple AI policy with climate and sustainability goals, noting that AI governance must incorporate environmental safeguards to meet planetary limits [S25][S27][S28][S29].
Overall Assessment

Thomas Schneider consistently advocates for an inclusive, rights‑based, and environmentally conscious AI ecosystem, proposes a multi‑layered governance model that builds on existing multistakeholder platforms, highlights the Vilnius Convention as a flexible global instrument, and positions Switzerland as a neutral facilitator for the 2027 Geneva Summit.

High internal consensus – all arguments are mutually reinforcing, indicating a coherent strategic vision that can facilitate broad stakeholder alignment and practical progress on AI governance.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains remarks only from Thomas Schneider; no other speakers are present, and all listed arguments are attributed to him. Consequently, there are no identifiable points of contention, no partial agreements, and no unexpected disagreements within the provided material. The discussion reflects a single, coherent perspective on AI governance, inclusive benefit, multi‑layered norms, and the upcoming Geneva Summit.

None – the absence of multiple speakers means the dialogue is unanimous in its goals and approaches, suggesting smooth consensus building for the topics addressed.

Takeaways
Key takeaways
AI development must be inclusive, benefiting all people while respecting human dignity, autonomy, and the planet. Switzerland will host the Geneva AI Summit in 2027, aiming for substantive contributions rather than a showcase, with a distinct “Swiss flavor” of constructive, pragmatic, and fair solutions. Existing governance ecosystems (UN IGF, AI for Good, ITU, UNESCO, OECD, etc.) should be leveraged to avoid duplication and to support less‑resourced communities via partners like the Diplo Foundation and Geneva Internet Platform. AI governance requires multi‑layered, sector‑specific frameworks—technical, legal, and societal norms—similar to the historical regulation of engine‑driven machines. The Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law provides a flexible, principle‑based foundation that can be adapted by diverse legal systems and promote interoperability. The interim period before the Geneva Summit will be used to identify governance gaps, build consensus on shared vision areas, and develop pragmatic steps toward trustworthy cooperation.
Resolutions and action items
Switzerland will coordinate with global stakeholders to shape the agenda and focus of the 2027 Geneva AI Summit. The Swiss team will map existing AI governance initiatives and ensure the summit builds on them rather than duplicating efforts. Partner organizations (Diplo Foundation, Geneva Internet Platform) will be engaged to facilitate participation of less‑resourced communities. A process will be launched to identify gaps in global and regional AI governance frameworks and propose concrete normative instruments. Stakeholders are invited to submit ideas and proposals for the summit’s thematic priorities and practical work‑streams.
Unresolved issues
Specific thematic focus and concrete work‑program for the Geneva Summit have not been decided. How to operationalize interoperability between the Vilnius Convention and other existing or future AI norms remains open. Mechanisms for ensuring meaningful participation and voice of less‑resourced communities are not yet defined. Details on funding, resource allocation, and timeline for gap‑analysis activities were not addressed. The balance between binding and non‑binding instruments, and sector‑specific regulations, needs further clarification.
Suggested compromises
Adopt a “build‑on‑existing” approach: use current platforms and norms instead of creating entirely new structures. Apply the flexible, principle‑based framework of the Vilnius Convention, allowing states leeway to integrate principles within their own legal traditions. Focus on pragmatic, fair solutions that respect diverse regional contexts while aiming for overall coherence in spirit and logic.
Thought Provoking Comments
AI may be as transformative as the invention of the printing press, radio, television, the internet, and the combustion engine, and like those technologies we must develop appropriate technical, legal, and societal frameworks and norms to guide its use.
The historical analogy frames AI not as a novel anomaly but as part of a continuum of transformative technologies, prompting participants to think about governance lessons from past industrial revolutions rather than inventing entirely new solutions.
This analogy shifted the conversation from abstract AI concerns to concrete, familiar governance challenges, opening space for discussion about leveraging existing regulatory experiences (e.g., engine regulation) and setting the stage for later references to harmonisation and sector‑specific norms.
Speaker: Thomas Schneider
We will not expect one single institution or instrument to govern AI; instead we must learn to live with a certain complexity of the governance of this transformation, just as we have done with engines over the past 200 years.
By explicitly rejecting the notion of a monolithic governing body, the comment challenges any simplistic, top‑down approaches and underscores the need for a multi‑layered, distributed governance ecosystem.
This statement acted as a turning point, steering the dialogue toward the importance of coordination among existing platforms (UN IGF, AI for Good, OECD, etc.) and encouraging participants to think about how to interlink rather than replace current mechanisms.
Speaker: Thomas Schneider
The Vilnius Convention on Artificial Intelligence, Human Rights, Democracy and the Rule of Law provides a principle‑based framework that is flexible enough for states to embed the principles in their own legal traditions, aiming for interoperable rather than identical regulations.
Introducing a concrete, near‑term instrument (the Vilnius Convention) moves the discussion from high‑level ideals to actionable policy, while the emphasis on interoperability respects diversity of legal systems.
This comment redirected the conversation toward concrete next steps—ratification, implementation, and complementarity with other binding and non‑binding norms—prompting participants to consider how their own jurisdictions could adopt the convention and what gaps remain.
Speaker: Thomas Schneider
We will work with longstanding partners like the Diplo Foundation and the Geneva Internet Platform to facilitate orientation in this complex governance ecosystem, especially for less‑resourced communities, so that they know where to raise their voice and are actually heard.
Highlighting the inclusion of under‑represented stakeholders brings equity to the forefront and challenges any assumption that governance will be dominated by well‑resourced actors.
This comment broadened the scope of the discussion to include capacity‑building and outreach, prompting participants to think about mechanisms for inclusive participation and potentially influencing agenda items for the upcoming Geneva Summit.
Speaker: Thomas Schneider
We will try not to reinvent the wheel or duplicate processes that already exist and work, but rather build on them—leveraging platforms such as the UN Internet Governance Forum, AI for Good Summit, ITU, UNESCO, OECD, and existing academic networks.
The call for building on existing structures rather than creating parallel ones introduces a pragmatic, efficiency‑driven perspective that challenges any impulse to start from scratch.
This statement reinforced the earlier theme of complexity management and guided the conversation toward mapping current initiatives, identifying overlaps, and defining how the Geneva Summit can act as a coordinating hub rather than a redundant forum.
Speaker: Thomas Schneider
Overall Assessment

Thomas Schneider’s remarks shaped the discussion by moving it from a generic, aspirational framing of AI governance to a nuanced, historically informed, and pragmatically grounded roadmap. His analogies to past technological revolutions, rejection of a single‑institution solution, introduction of the Vilnius Convention, emphasis on inclusivity for less‑resourced actors, and commitment to building on existing platforms collectively redirected participants toward concrete, collaborative actions for the 2027 Geneva Summit. These pivotal comments created turning points that deepened the analysis, broadened stakeholder considerations, and set a clear agenda for future coordination.

Follow-up Questions
What should be the focus and objectives of the Geneva AI Summit in 2027?
Defining the summit’s agenda is essential to ensure that the gathering addresses the most pressing AI governance challenges and delivers tangible outcomes.
Speaker: Thomas Schneider
Which specific gaps exist in current global and regional AI governance frameworks that need to be addressed before the Geneva Summit?
Identifying these gaps will guide targeted work, prevent duplication, and help prioritize actions for the next year.
Speaker: Thomas Schneider
How can less‑resourced communities be better oriented within the complex AI governance ecosystem and have their voices heard?
Ensuring inclusive participation is crucial for legitimacy, equity, and for capturing diverse perspectives on AI impacts.
Speaker: Thomas Schneider
What sector‑specific norms and instruments are required to complement the Vilnius Convention and ensure coherent AI governance?
Sector‑focused rules are needed to translate high‑level principles into practical, enforceable standards across industries.
Speaker: Thomas Schneider
What mechanisms can ensure interoperability of national AI governance frameworks while respecting diverse legal traditions?
Interoperability enables cross‑border cooperation and reduces regulatory friction without imposing a one‑size‑fits‑all model.
Speaker: Thomas Schneider
What are the most effective ways to build on existing platforms (UN IGF, AI for Good, GPI, OECD, etc.) without duplicating efforts?
Leveraging existing initiatives maximizes resources and avoids fragmentation of the governance ecosystem.
Speaker: Thomas Schneider
What lessons from the governance of combustion engines and other transformative technologies can be applied to AI governance?
Historical analogues can provide proven governance structures and highlight pitfalls to avoid in the AI context.
Speaker: Thomas Schneider
What steps are needed to achieve ratification and entry into force of the Vilnius Convention on AI, Human Rights, Democracy and the Rule of Law?
The Convention is a cornerstone principle‑based framework; its swift adoption is vital for global normative alignment.
Speaker: Thomas Schneider
How can we develop and harmonize technical norms for AI systems across different regions and sectors?
Technical standards are foundational for safety, trustworthiness, and cross‑jurisdictional compatibility of AI technologies.
Speaker: Thomas Schneider
What metrics or indicators should be used to assess whether AI is strengthening or weakening human dignity, autonomy, and planetary health?
Clear measurement tools are needed to monitor AI’s societal and environmental impact and to guide policy adjustments.
Speaker: Thomas Schneider
What pragmatic, workable steps can be taken in the next year to move toward the shared vision before the 2027 summit?
Identifying short‑term actions maintains momentum, builds trust among stakeholders, and demonstrates progress toward long‑term goals.
Speaker: Thomas Schneider

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst

Session at a glanceSummary, keypoints, and speakers overview

Summary

Speaker 1 opened the session by introducing Hemant Taneja, CEO of General Catalyst, as a prominent proponent of “responsible innovation,” a stance that bridges venture capital with ethical conscience [1-6]. Taneja thanked Prime Minister Modi for gathering AI leaders and emphasized that AI should be designed for human centricity and empowerment [7-11].


He framed the chief opportunity for modern capitalism as “global resilience,” arguing that artificial intelligence is the key driver of national resilience across sectors such as healthcare, data, defense, and energy infrastructure [12-21]. Citing recent shocks-including the pandemic and geopolitical conflicts-he asserted that AI can transform these critical areas [13-20]. Taneja further claimed that India is uniquely positioned to lead, pointing to its status as the world’s strongest growth market and noting AI’s deflationary nature as a catalyst for large-scale uplift [22-25].


According to him, solving health and education challenges for over a billion people would generate worldwide benefits [26]. He highlighted India’s history of leapfrogging digital infrastructure-exemplified by UPI and Aadhaar-as a template for re-imagining other industries [28-30]. Continued infrastructure investment, open-source work, and the U.S.-India technology corridor reinforce the country’s capacity to deploy AI at scale [31-36]. A youthful, tech-savvy demographic adds further momentum, providing a vast talent pool ready to adopt AI [39-41].


Rejecting the narrative that AI will displace jobs, Taneja urged that every new worker be equipped with AI tools to amplify productivity [41-46]. He stressed entrepreneurship as the engine of AI leadership, citing Indian startups such as Sepato, Rafi, and PolicyBazaar Health as proof of the ecosystem’s potential [47-52]. To accelerate this vision, General Catalyst announced a $5 billion, five-year investment in India’s entrepreneurial landscape-the largest commitment of its kind-signaling confidence that Indian innovators will build next-generation companies that bolster both domestic resilience and global competitiveness [52-57].


Keypoints


AI as the engine of national resilience and growth – Taneja frames artificial intelligence as the primary solution for “national resilience” across sectors such as healthcare, defense, data and energy, arguing that AI-driven transformation is the biggest opportunity for capitalism today [12-19][22-24].


India’s structural advantages for AI leadership – He points to the country’s ability to “leapfrog” (citing the UPI and Aadhaar experience), massive infrastructure investment, a young demographic, a vibrant open-source ecosystem, and a strong US-India corridor that together create a fertile environment for scaling AI [28-33][34-38][39-41].


AI as a workforce-empowering tool, not a job-killer – Confronting the narrative that AI will displace young workers, Taneja urges India to “reject that narrative and lean into it,” emphasizing that every new entrant to the labour market should be equipped with AI to boost productivity [41-46].


Entrepreneurship and venture-capital commitment as the catalyst – He stresses that startups are the “most important institutions of the future,” highlighting examples of Indian AI-driven companies and announcing a $5 billion, five-year investment fund to back the Indian entrepreneurial ecosystem [47-55].


Call for responsible, human-centric AI and global collaboration – The speech opens with gratitude to Prime Minister Modi for championing “human centricity” in AI and stresses the need for democratic, open-source collaboration across the US, Europe and India to ensure AI develops responsibly [8-11][34-38].


Overall purpose/goal


The discussion is a strategic advocacy piece aimed at positioning India as a global AI hub. Taneja seeks to rally governmental, corporate and venture-capital support by highlighting AI’s role in national resilience, showcasing India’s unique strengths, dispelling fear-based narratives, and committing substantial capital to nurture home-grown AI entrepreneurship-all under the banner of responsible, human-centric innovation.


Overall tone


The tone is consistently upbeat, confident and persuasive, celebrating India’s potential and the opportunities AI presents. When addressing the job-loss narrative, the tone becomes slightly defensive but quickly returns to optimism, reinforcing a forward-looking, rally-the-troops atmosphere throughout the speech.


Speakers

Speaker 1


– Role/Title: Event moderator / host introducing speakers [S1][S3]


– Area of expertise:


Hemant Taneja


– Role/Title: Chief Executive Officer and Managing Director, General Catalyst USA [S5]; CEO of General Catalyst [S4]


– Area of expertise: Venture capital, responsible innovation, AI strategy, entrepreneurship [S4]


Additional speakers:


(none)


Full session reportComprehensive analysis and detailed insights

Speaker 1 opened the session by situating the forthcoming address within the broader discourse on “responsible innovation” [1-3]. He introduced Hemant Taneja, chief executive of General Catalyst and “one of the most vocal advocates” for a venture-capital model coupled with ethical conscience, thereby setting the tone for a discussion that bridges finance and societal responsibility [4-6].


Taneja began by thanking Prime Minister Narendra Modi for convening the world’s AI thought-leaders and for championing a design principle centred on “human centricity” and “human empowerment” [7-11].


The core argument of the speech was that the biggest opportunity in capitalism today is what Taneja terms “global resilience” [12]. He argued that the succession of shocks over the past five to seven years – a pandemic, geopolitical wars and the rapid emergence of AI – have exposed systemic fragilities, and that artificial intelligence is the primary tool for building “national resilience” across key sectors [13-16]. Those sectors include healthcare, data, deterrence and defense, and energy infrastructure [17-21].


Within this framework, India is presented as uniquely positioned to lead the AI-driven resilience agenda. Taneja described India as “the world’s strongest growth market” and noted that AI’s inherently deflationary character can help uplift a massive, complex economy by lowering costs and expanding access to essential services [22-25]. He further asserted that solving healthcare and education challenges for a population of over a billion would generate solutions with global relevance [26-27].


A central theme was India’s capacity to “leapfrog” traditional development pathways. By recalling the rapid rollout of the Unified Payments Interface (UPI) and the Aadhaar biometric identity system, Taneja illustrated how the country can rethink paradigms in other industries and accelerate AI adoption far beyond the pace seen in more mature economies [28-30].


He reinforced this optimism with concrete structural advantages: sustained investment in physical and digital infrastructure over recent years [31-34]; a vibrant open-source ecosystem; and a deepening US-India corridor that facilitates fluid innovation flows between the United States, Europe and the democratic world [35-38]. He highlighted today’s “packed silica” announcement as a key milestone for the US-India partnership [??]. Coupled with a youthful, tech-savvy demographic, these factors create a fertile environment for scaling AI capabilities at national scale [39-41].


Addressing a common fear that AI will displace jobs, Taneja explicitly rejected the narrative that “artificial intelligence can take the jobs of young people” [41-43]. He urged India to “lean into” AI, equipping every new entrant to the labour market – “a million Indians … every month” – with AI tools to dramatically boost productivity and unlock economic opportunity [44-46][??].


Entrepreneurship was portrayed as the engine of this transformation. Taneja declared startups to be “the most important institutions of the future”, citing Indian AI-driven companies such as Sepato, Rafi and Policy Bazaar Health as evidence that home-grown ventures are already reshaping core societal pillars [47-49]. He expressed confidence that these entrepreneurs will not only reinforce domestic resilience but also emerge as global market leaders [50-52].


To operationalise the vision, General Catalyst announced a $5 billion, five-year investment programme targeting the Indian entrepreneurial ecosystem – described as “the largest of its kind” [52-55][??]. The pledge reflects a deep belief that Indian talent will create the next generation of AI-enabled companies, and it concludes with a direct invitation to “come build with us” [56-57].


The speech moved from outlining opportunities to addressing job-displacement concerns, and then returned to emphasizing entrepreneurship and the $5 billion investment [41-46][52-57].


Session transcriptComplete transcript of the session
Speaker 1

Ladies and gentlemen, moving on. Our next speaker is from one of Silicon Valley’s most influential venture capital firms, General Catalyst. And he has been among the most vocal advocates for what he calls responsible innovation. The idea that companies building the future also bear the greatest responsibility for its consequences. And well, I must say that his perspective bridges the worlds of capital and conscience. Please welcome the CEO of General Catalyst, Mr. Hemant Taneja.

Hemant Taneja

Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders together in this world. And delivering the message around making sure we shape AI for human centricity. For human centricity. For human empowerment. I think that’s a really important design principle. And stepping up and embracing that and enforcing that as a world leader is exactly what we need today as we work on embedding AI into our society. So the biggest opportunity in capitalism today is what I call global resilience. If you think about the last five, seven years, we have gone through so much on the planet. We’ve had a pandemic. We’ve had wars. We have been learning how to embrace artificial intelligence as an enormous technological shift.

Many, many interesting shocks that have happened to us over the last several years. And the answer to embracing sort of resiliency and delivering transformation is actually artificial intelligence. That is the answer for actually driving what I call national resilience in all the key industries. Whether it’s healthcare. Whether it’s data. Whether it’s deterrence and defense. Whether it’s… scaling of the energy infrastructure so we can deploy AI, all those capabilities to present enormous opportunity and artificial intelligence is the answer for all of them. It’s India’s time to lead when it comes to delivering national resilience. It’s the strongest growth market in the world. And as we have learned over the last few years, when you think about diffusion of AI, growth is an enormous lever for it because it creates opportunity to embrace new technologies and new solutions.

The other thing that’s really interesting is because AI is deflationary by nature, it matches well to what’s required to uplift the opportunities here in India. Solving for needs in healthcare and education and other parts of what we deliver to society, at large, with the complexity of over a billion people, that is, if you can go solve that, you’re going to go solve the problems for the entire planet. So I do think India’s got all the dynamics going for it to lead in using AI to transform different industries. The other thing I would say is the way I expect India to deliver these transformations by leapfrogging. If you go back to the digital infrastructure revolution in India and what we saw with UPI and Aadhaar, the opportunity to completely rethink what the paradigms are going to be in these other industries is what lies ahead.

India has a lot of things going for it when it comes to resiliency and being able to deploy AI. First of all, you’ve got increasing investment in infrastructure. We saw that over the last couple of years. There’s a lot of infrastructure investment. There’s work being done around open source. I think the U .S.-India corridor is incredibly interesting. The packed silica announcement today was an important one. We need to make sure the innovation flows fluidly between US, India, Europe, across all parts of the Western world so that AI can thrive in the democratic world. That is where we want to see this technology come to scale. And it’s got a young demographic. It’s got a lot of potential in terms of being able to deploy a lot of these capabilities.

One topic that is very much top of mind for me is there’s this narrative that artificial intelligence can take the jobs of young people and we need to slow down progress. And my biggest advice on India’s leadership in AI is to reject that narrative and lean into it. I think everybody entering the workforce, and there’s a million Indians that enter the workforce every month. Everybody that enters the workforce is a young person. Everybody that enters the workforce should be fully empowered with AI. Because if you have that kind of productivity behind every single human being, entering the workforce, imagine the productivity we create in every company, in every industry, and how it’s going to unleash the opportunity in the world.

The way India is going to lead in artificial intelligence, from my perspective, is through entrepreneurship. Ultimately, startups are the most important institutions of the future. We’re rebuilding every core pillar of society with new businesses, and India has got an enormous talent pool. So many of you came in for the AI Summit, and we are actively building companies here with many of the entrepreneurs. I think just watching businesses like SEPTO and Rafi and Policy Bazaar Health and others that are transforming these industries, we have great confidence that the Indian entrepreneurs are going to build the next generation companies that not only drive abundance and resilience here in India. but are going to be positioned to be the global leaders in different markets.

So to that end, one of the announcements that I made in our roundtable with Prime Minister Modi yesterday was that we’re increasing our investment. We’re going to be investing $5 billion over the next five years in the Indian entrepreneurial ecosystem. It’s the largest of its kind, and thank you. And it comes from a deep belief that Indian entrepreneurs are going to create some of the most interesting companies of the next generation. So come build with us. Thank

Related ResourcesKnowledge base sources related to the discussion topics (13)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Hemant Taneja is the chief executive of General Catalyst and has been described as one of the most vocal advocates for responsible innovation, bridging capital and conscience.”

The knowledge base identifies Hemant Taneja as CEO of General Catalyst and notes he is a vocal advocate for responsible innovation and for bridging capital with conscience in technology development [S4] and [S6].

Confirmedhigh

“Taneja thanked Prime Minister Narendra Modi for convening the world’s AI thought‑leaders and for championing a design principle centred on “human centricity” and “human empowerment”.”

A transcript excerpt records Taneja’s gratitude to Prime Minister Modi for gathering AI thought-leaders and emphasizing human-centric and human-empowerment goals [S40].

Confirmedhigh

“India is uniquely positioned to lead the AI‑driven resilience agenda and is described as “the world’s strongest growth market”.”

The knowledge base highlights India’s unique position-technical talent, diverse data, vibrant startup ecosystem and supportive policy-that enables it to lead in AI, confirming the claim of a distinctive advantage [S50] and [S51].

Additional Contextmedium

“Key sectors for AI‑enabled national resilience mentioned are healthcare, data, deterrence and defence, and energy infrastructure.”

The source lists healthcare and energy as recognised critical infrastructure sectors, providing partial confirmation for two of the four sectors cited; data and defence are not explicitly covered in the knowledge base [S46] and [S45].

Additional Contextmedium

“India’s sustained investment in physical and digital infrastructure, a vibrant open‑source ecosystem, and a deepening US‑India corridor facilitate fluid innovation flows.”

While the knowledge base does not mention specific investment figures, it does describe an ecosystem that promotes technology development in a sustainable, fair growth-oriented context, which aligns with the reported structural advantages [S35].

External Sources (52)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — -Hemant Taneja: CEO of General Catalyst (venture capital firm), advocate for responsible innovation, focuses on bridging…
S5
Sticking with Start-ups / DAVOS 2025 — – Hemant Taneja: Chief Executive Officer and Managing Director at General Catalyst USA Hemant Taneja and Mohit Bhatnaga…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argues that artificial intelligence is the primary solution for achieving national resilience across critical sec…
S7
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S8
Keynote-Rishad Premji — This comment transforms the discussion by repositioning India’s challenges as strengths. It provides the logical foundat…
S9
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Gaud explained Google’s rationale for heavy investment in India beyond the obvious market size. India’s young, tech-eage…
S10
Empowering Workers in the Age of AI — Economic | Development While AI presents both opportunities and challenges, it is neither the primary cause nor the sol…
S11
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — General Catalyst pledged $5 billion investment in Indian entrepreneurial ecosystem over next five years
S12
Welcome Address — Prime Minister Narendra Modi The speech emphasizes that with proper direction, ethical frameworks, and global cooperati…
S13
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — The discussion maintained a consistently optimistic and collaborative tone throughout. It was highly aspirational, with …
S14
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S15
WS #123 Responsible AI in Security Governance Risks and Innovation — The discussion maintained a professional, collaborative, and constructive tone throughout. It began with an informative …
S16
Building the Next Wave of AI_ Responsible Frameworks & Standards — Thank you. Good afternoon, everyone. I know it’s Friday afternoon, almost end of a fantastic Global AI Summit. And good …
S17
Leaders TalkX: Accelerating global access to information and knowledge in the digital era — These key comments fundamentally shaped the discussion by establishing a human rights framework, introducing innovative …
S18
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S19
AI That Empowers Safety Growth and Social Inclusion in Action — So capital can definitely incentivize innovation and responsibility, but capital alone cannot do that. We published our …
S20
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S21
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — These key comments fundamentally shaped the symposium by establishing a framework for responsible, human-centric AI adop…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argues that artificial intelligence is the primary solution for achieving national resilience across critical sec…
S23
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders togethe…
S24
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — So I think the opportunity is massive, and we are just at the beginning, and the ability to transform this throughout th…
S25
AI 2.0 Reimagining Indian education system — The discussion positioned India’s educational AI integration within broader national aspirations for global AI leadershi…
S26
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Giordano Albertazzi — Albertazzi positioned India as central to the AI evolution, citing several key advantages that make the country particul…
S27
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — India’s advantages in this transformation include demographic energy, linguistic complexity, cultural depth spanning tho…
S28
The Global Power Shift India’s Rise in AI & Semiconductors — Building India’s AI and Semiconductor Ecosystem: The panel discussed India’s positioning in the global AI and semiconduc…
S29
How AI Is Transforming Indias Workforce for Global Competitivene — And those are risks that we need to make sure that we have the right solutions or the right thought process because it i…
S30
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — General Catalyst pledged $5 billion investment in Indian entrepreneurial ecosystem over next five years
S31
Welcome Address — Prime Minister Narendra Modi The speech emphasizes that with proper direction, ethical frameworks, and global cooperati…
S32
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Prime Minister Modi emphasizes his philosophy of working collaboratively with international partners and industry leader…
S33
(Plenary segment) Summit of the Future – General Assembly, 5th plenary meeting, 79th session — The Prime Minister advocates for the responsible development and use of artificial intelligence. This argument stresses …
S34
AI Governance Dialogue: Presidential address — Ettore Balestrero: On behalf of His Holiness Pope Leo XIV, I would like to extend his cordial greetings to all participa…
S35
Opening remarks — Such an ecosystem should promote the development and application of technology within an environmentally conscious, fair…
S36
Any other business /Adoption of the report/ Closure of the session — The address commenced with expressions of gratitude extended to the Chair for her diligent and guiding role in a strenuo…
S37
Opening plenary session and adoption of the agenda — ICT’s widespread use has inadvertently opened doors for malign actors to harness its capabilities for activities that je…
S38
A Digital Future for All (morning sessions) — 7. Space Sustainability and Innovation Christopher Burns: Each year, more than 10 million students graduate from tech…
S39
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Exactly. One is a big time. Thank you. Is it not? And this is a not, because it’s not something that you learn in textb…
S40
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-hemant-taneja-general-catalyst — Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders togethe…
S41
Keynote-Nikesh Arora — The central thesis of Arora’s presentation revolves around a critical imbalance in AI development priorities. He argues …
S42
AI and Digital in 2023: From a winter of excitement to an autumn of clarity — These geopolitical tensions are having a rippling effect:
S43
9821st meeting — Mr. President, it is an honor to address this council to discuss the critical implications of artificial intelligence in…
S44
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S45
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 4 — Available on the think-tank’s website, this map offers a clear, global perspective on the current state of affairs and a…
S46
Opening of the session — The delegation is particularly concerned about the impact of malicious ICT activities and advocates for broader recognit…
S47
Opening of the session/OEWG 2025 — Multiple speakers highlighted the growing threat of cyberattacks on critical infrastructure, including healthcare, energ…
S48
Agenda item 5: discussions on substantive issues contained inparagraph 1 of General Assembly resolution 75/240 (continued) – session 5 — Bangladesh:Mr. Chair, thank you very much for your extraordinary hard work in presenting the final draft of the third AP…
S49
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Addressing potential concerns about technological nationalism, Mazumdar-Shaw emphasised that “sovereignty is not isolati…
S50
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — India’s unique position—combining technical talent, diverse datasets, a vibrant startup ecosystem, and supportive policy…
S51
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — And India is definitely leading the way in terms of application layer. There’s no doubt about that. Now, of course, with…
S52
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Okay, good. Thank you. Thank you all for joining and I appreciate it. I am being pitched against my boss, so I’m going t…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
H
Hemant Taneja
16 arguments147 words per minute887 words361 seconds
Argument 1
AI as the engine of national and global resilience
EXPLANATION
Hemant argues that artificial intelligence is the key driver for building resilience at both national and global levels. By leveraging AI, societies can better withstand shocks such as pandemics, wars, and rapid technological change.
EVIDENCE
He describes the recent series of global shocks-including a pandemic and wars-and states that AI is the answer for delivering transformation and national resilience across all key industries [12-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Taneja describes AI as the primary solution for achieving national resilience across critical sectors in his keynote presentation [S6].
MAJOR DISCUSSION POINT
AI as a catalyst for resilience
Argument 2
AI drives resilience across key sectors such as healthcare, defense, and energy (Hemant Taneja)
EXPLANATION
He specifies that AI can strengthen critical sectors like healthcare, defence, and energy, making them more robust against future disruptions.
EVIDENCE
He lists healthcare, data, deterrence and defense, and scaling of energy infrastructure as areas where AI can create enormous opportunity and drive resilience [19-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights AI’s role in strengthening healthcare, defence and energy systems as part of a broader resilience strategy [S6].
MAJOR DISCUSSION POINT
Sector‑specific AI resilience
Argument 3
AI’s deflationary nature helps uplift opportunities in a large, complex market like India (Hemant Taneja)
EXPLANATION
Hemant notes that AI’s inherent deflationary effect can lower costs and expand access, which is especially valuable for a populous and complex market such as India.
EVIDENCE
He states that AI is deflationary by nature and matches what is required to uplift opportunities in India, especially in healthcare and education for over a billion people [25-26].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that AI’s deflationary effect aligns with India’s development needs, making it a catalyst for large-scale uplift [S6] and [S4].
MAJOR DISCUSSION POINT
Deflationary impact of AI
Argument 4
India’s strategic advantage to lead in AI adoption
EXPLANATION
He claims that India possesses several systemic advantages—digital infrastructure, demographics, and international partnerships—that position it to become a global AI leader.
EVIDENCE
He references India’s existing digital infrastructure, young population, growing infrastructure investment, and strong US-India collaboration as foundations for AI leapfrogging and democratic AI growth [29][31-34][39-40][35-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Taneja points to India’s unique positioning-young demographics, digital infrastructure and growth market status-as a strategic advantage for AI leadership [S4] and [S6].
MAJOR DISCUSSION POINT
India’s AI leadership potential
Argument 5
Existing digital infrastructure (UPI, Aadhaar) enables AI leapfrogging (Hemant Taneja)
EXPLANATION
Hemant points to India’s prior digital breakthroughs—UPI and Aadhaar—as proof that the country can rapidly adopt and scale new AI paradigms.
EVIDENCE
He explicitly mentions the digital infrastructure revolution in India, citing UPI and Aadhaar as examples that allow a complete rethinking of paradigms in other industries [29].
MAJOR DISCUSSION POINT
Digital foundations for AI
Argument 6
Young demographic and growing infrastructure investment create a fertile AI ecosystem (Hemant Taneja)
EXPLANATION
He argues that India’s youthful workforce combined with substantial infrastructure spending creates an environment where AI can thrive.
EVIDENCE
He notes increasing investment in infrastructure over recent years and highlights India’s young demographic as a source of potential AI capability [31-34][39-40].
MAJOR DISCUSSION POINT
Demographic and investment drivers
Argument 7
Strong US‑India collaboration and open‑source initiatives support democratic AI growth (Hemant Taneja)
EXPLANATION
Hemant emphasizes the importance of cross‑border cooperation, especially between the US and India, and open‑source work to ensure AI develops within democratic values.
EVIDENCE
He mentions the US-India corridor, the “packed silica” announcement, and the need for fluid innovation flow across the Western world to let AI thrive in a democratic context [35-38].
MAJOR DISCUSSION POINT
International partnership for AI
Argument 8
AI’s impact on employment and the need to empower the workforce
EXPLANATION
Hemant contends that AI should be seen as a tool to augment, not replace, human labour, and that every new entrant to the workforce must be equipped with AI capabilities.
EVIDENCE
He challenges the narrative that AI will take jobs, urging a rejection of that view and advocating for AI empowerment of every new worker, citing the massive monthly influx of Indian workers [41-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He argues for universal AI empowerment of the workforce, emphasizing that every new entrant should be equipped with AI tools [S6].
MAJOR DISCUSSION POINT
AI‑augmented employment
AGREED WITH
Speaker 1
Argument 9
Reject the narrative that AI will eliminate jobs; instead, empower every new worker with AI tools (Hemant Taneja)
EXPLANATION
He calls for a shift in mindset away from fear of job loss toward proactive upskilling of the workforce with AI technologies.
EVIDENCE
He directly addresses the narrative that AI can take jobs and advises India to reject it, emphasizing empowerment of every new entrant with AI tools [41-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Taneja explicitly calls for rejecting the job-loss narrative and upskilling every new worker with AI capabilities [S6].
MAJOR DISCUSSION POINT
Narrative shift on AI and jobs
Argument 10
AI‑augmented productivity of the massive monthly influx of Indian workers will unleash massive economic opportunity (Hemant Taneja)
EXPLANATION
Hemant predicts that coupling AI with the millions of new workers each month will dramatically boost productivity and generate large‑scale economic benefits.
EVIDENCE
He explains that with AI-driven productivity behind each new worker, the cumulative effect will unleash unprecedented opportunity across companies and industries [44-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He projects that AI-driven productivity gains for India’s monthly workforce influx will generate unprecedented economic opportunity [S6].
MAJOR DISCUSSION POINT
Productivity boost from AI‑enabled labour
Argument 11
Entrepreneurship as the catalyst for AI‑driven transformation
EXPLANATION
He positions start‑ups as the primary engines that will rebuild societal pillars using AI, driving both national resilience and global competitiveness.
EVIDENCE
He states that startups are the most important institutions of the future, rebuilding core pillars with new businesses, and cites examples of Indian AI-driven companies such as SEPTO, Rafi, and Policy Bazaar Health [47-52].
MAJOR DISCUSSION POINT
Start‑ups driving AI transformation
Argument 12
Start‑ups are the primary institutions rebuilding core societal pillars with AI (Hemant Taneja)
EXPLANATION
Hemant asserts that entrepreneurial ventures are the key agents reshaping sectors like health, finance, and others through AI innovation.
EVIDENCE
He remarks that startups are the most important institutions of the future and that they are rebuilding every core pillar of society with AI [47-49].
MAJOR DISCUSSION POINT
Start‑ups as societal rebuilders
Argument 13
Indian entrepreneurs are poised to create next‑generation companies that become global leaders (Hemant Taneja)
EXPLANATION
He expresses confidence that Indian founders will launch companies that not only serve domestic needs but also become leaders in international markets.
EVIDENCE
He points to companies like SEPTO, Rafi, and Policy Bazaar Health as examples of transformation, and states confidence that Indian entrepreneurs will build next-generation global leaders [50-52].
MAJOR DISCUSSION POINT
Global potential of Indian AI start‑ups
Argument 14
Commitment of capital to accelerate AI innovation in India
EXPLANATION
Hemant announces a major financial pledge from General Catalyst to fuel AI‑driven entrepreneurship in India, underscoring the role of capital in scaling innovation.
EVIDENCE
He reveals that General Catalyst will invest $5 billion over the next five years in the Indian entrepreneurial ecosystem, describing it as the largest pledge of its kind and a reflection of deep belief in Indian talent [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote underscores a major capital pledge to fuel AI-driven entrepreneurship in India, linking investment to accelerated innovation [S4] and [S6].
MAJOR DISCUSSION POINT
Capital commitment for AI
Argument 15
General Catalyst will invest $5 billion over five years in the Indian entrepreneurial ecosystem, the largest pledge of its kind (Hemant Taneja)
EXPLANATION
He specifies the size, duration, and uniqueness of the investment, positioning it as a historic commitment to Indian AI entrepreneurship.
EVIDENCE
He states the $5 billion five-year investment, calling it the largest of its kind and linking it to confidence in Indian entrepreneurs [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Taneja announces a $5 billion, five-year investment by General Catalyst, described as the largest of its kind for India’s AI ecosystem [S4] and [S6].
MAJOR DISCUSSION POINT
Scale of investment
Argument 16
This investment reflects confidence that Indian talent will generate abundant, resilient AI solutions (Hemant Taneja)
EXPLANATION
He ties the financial pledge to a belief that Indian innovators will produce AI solutions that are both plentiful and resilient, benefiting India and the world.
EVIDENCE
He concludes by saying the investment comes from a deep belief that Indian entrepreneurs will create the most interesting next-generation companies, delivering abundance and resilience [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He ties the $5 billion commitment to a deep belief that Indian entrepreneurs will deliver abundant, resilient AI solutions for India and the world [S6].
MAJOR DISCUSSION POINT
Confidence in Indian AI talent
Agreements
Agreement Points
Emphasis on responsible, human‑centric AI innovation that balances capital with societal responsibility
Speakers: Speaker 1, Hemant Taneja
AI’s impact on employment and the need to empower the workforce Reject the narrative that AI will eliminate jobs; instead, empower every new worker with AI tools
Speaker 1 introduces Hemant Taneja as a vocal advocate for responsible innovation, linking capital and conscience [2-5]. Hemant reinforces this by thanking the Prime Minister for shaping AI for human centricity and empowerment, and by urging rejection of the job-loss narrative in favour of AI-enabled empowerment of every new worker [8-11][41-46].
POLICY CONTEXT (KNOWLEDGE BASE)
This emphasis aligns with emerging policy roadmaps that place human welfare, accountability and equitable economic growth at the core of AI strategy, such as the AI Policy Research Roadmap which lists human and planetary welfare and equitable growth as key principles [S20]. It also reflects discussions that capital alone cannot ensure responsible outcomes, highlighted by the Davos assessment of 2,000 companies where only 40 % disclosed AI principles, underscoring the need to balance financial incentives with societal duties [S19]. The theme is reinforced by the human-centric governance tool presented at Day 0 Event #173 on Ethical AI [S18] and by broader frameworks for responsible, human-centric AI adoption in governance [S21].
Similar Viewpoints
Both speakers stress that AI development must be guided by responsibility toward people, ensuring that economic benefits do not come at the expense of workers but rather enhance human capability [2-5][8-11][41-46].
Speakers: Speaker 1, Hemant Taneja
AI’s impact on employment and the need to empower the workforce Reject the narrative that AI will eliminate jobs; instead, empower every new worker with AI tools
Unexpected Consensus
Alignment between a moderator’s framing of ‘responsible innovation’ and the CEO’s detailed call for human‑centric AI deployment
Speakers: Speaker 1, Hemant Taneja
AI’s impact on employment and the need to empower the workforce Reject the narrative that AI will eliminate jobs; instead, empower every new worker with AI tools
While Speaker 1 only provided an introductory remark, the content of that remark (responsible innovation, capital’s responsibility) directly mirrors Hemant’s later emphasis on human centricity and workforce empowerment, showing an unanticipated convergence of viewpoints across roles [2-5][8-11][41-46].
POLICY CONTEXT (KNOWLEDGE BASE)
The convergence mirrors recent panel discussions where moderators and industry leaders jointly champion responsible innovation, as seen in the “Building the Next Wave of AI – Responsible Frameworks & Standards” session that emphasized responsible AI deployment [S16]. It is further echoed in the Day 0 Event #173 presentation on a policy tool for human-centric and responsible AI governance [S18] and in the AI-Driven Enforcement symposium that highlighted a framework for responsible, human-centric AI adoption in governance [S21].
Overall Assessment

The brief exchange reveals a clear shared commitment to responsible, human‑centred AI that empowers workers and aligns capital with societal good. Both speakers, despite their different functions, converge on the need to reject fear‑based narratives about AI‑driven job loss and to promote AI as a tool for empowerment.

High consensus on the ethical framing of AI; this alignment strengthens the credibility of calls for AI‑driven resilience and suggests that policy and investment discussions can proceed on a common foundation of human‑centric values.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only an introductory remark by Speaker 1 and a single, extensive presentation by Hemant Taneja. Consequently, there is no direct clash of viewpoints; the speakers are largely aligned on the overarching aim of leveraging AI for societal benefit. The only nuanced difference lies in the framing of that aim—ethical responsibility versus strategic resilience.

Minimal. The lack of opposing positions suggests a consensus on the importance of AI-driven development, implying that policy discussions can move forward without needing to resolve major conflicts among the participants.

Partial Agreements
Both speakers emphasize that AI development should be guided by a purpose beyond profit—Speaker 1 highlights "responsible innovation" and the responsibility of companies building the future [2-5], while Hemant stresses shaping AI for "human centricity" and using it to build resilience for society [8-11][12-21]. They share the goal of aligning AI with societal benefit, but differ in focus: Speaker 1 frames it as an ethical stance for innovators, whereas Hemant frames it as a strategic lever for national resilience and economic growth.
Speakers: Speaker 1, Hemant Taneja
Responsible innovation and the need for AI to serve human centricity AI as the engine of national and global resilience
Takeaways
Key takeaways
Artificial intelligence is positioned as the primary engine for national and global resilience, driving transformation across healthcare, defense, energy, and other critical sectors. AI’s inherently deflationary nature can help uplift opportunities in a large, complex market like India, addressing needs of over a billion people. India possesses strategic advantages for AI leadership, including mature digital infrastructure (UPI, Aadhaar), a young and growing demographic, increasing infrastructure investment, and strong US‑India collaboration with open‑source initiatives. The narrative that AI will eliminate jobs should be rejected; instead, every new entrant to the workforce should be empowered with AI tools to boost productivity and economic opportunity. Entrepreneurship and start‑ups are seen as the main catalysts for rebuilding core societal pillars with AI, and Indian entrepreneurs are expected to create next‑generation, globally‑leading companies. General Catalyst commits to a $5 billion investment over the next five years in the Indian entrepreneurial ecosystem, the largest pledge of its kind, reflecting confidence in Indian talent to deliver resilient AI solutions.
Resolutions and action items
General Catalyst will invest $5 billion over five years in the Indian entrepreneurial ecosystem. Commitment to deepen US‑India AI collaboration and promote open‑source initiatives to ensure democratic AI growth.
Unresolved issues
Specific mechanisms for empowering the rapidly growing Indian workforce with AI tools were not detailed. How to operationalize and govern AI deployment in sensitive sectors such as defense and healthcare remains unclear. Concrete policies or frameworks to ensure AI development aligns with democratic values were not outlined. Details on scaling infrastructure (e.g., energy, data centers) to support AI at national scale were not provided. Potential macro‑economic effects of AI’s deflationary impact and how to mitigate any adverse outcomes were not addressed.
Suggested compromises
None identified
Thought Provoking Comments
The biggest opportunity in capitalism today is what I call global resilience.
Introduces a novel framing that ties the health of the global economy to the capacity of societies to withstand shocks, positioning resilience as the central value proposition for capital markets rather than traditional growth metrics.
Sets the thematic foundation for the entire speech, shifting the conversation from generic AI hype to a purpose‑driven narrative. It primes the audience to view subsequent points about AI, infrastructure, and investment through the lens of building resilience.
Speaker: Hemant Taneja
Artificial intelligence is the answer for actually driving what I call national resilience in all the key industries – healthcare, data, defence, energy, etc.
Positions AI not merely as a tool but as the primary engine for strengthening a nation’s critical sectors, expanding the discussion from abstract benefits to concrete, sector‑wide transformation.
Broadens the scope of the dialogue, prompting listeners to consider AI’s cross‑sectoral impact. It leads to the later mention of specific Indian successes (UPI, Aadhaar) as precedents for AI‑enabled resilience.
Speaker: Hemant Taneja
AI is deflationary by nature, it matches well to what’s required to uplift the opportunities here in India.
Highlights an economic characteristic of AI—its tendency to lower costs—that is often overlooked in policy debates focused on job displacement or ethical concerns.
Introduces a counter‑argument to the fear‑based narrative about AI, paving the way for the next turning point where he directly addresses job‑loss anxieties.
Speaker: Hemant Taneja
There’s this narrative that artificial intelligence can take the jobs of young people and we need to slow down progress. My biggest advice on India’s leadership in AI is to reject that narrative and lean into it. Everybody entering the workforce should be fully empowered with AI.
Directly challenges a prevalent fear about AI and reframes it as an empowerment opportunity, urging a proactive rather than protective stance.
Marks a clear turning point in tone—from descriptive to prescriptive. It shifts the conversation toward human capital development and sets up the argument that entrepreneurship and upskilling are the pathways to harness AI’s productivity gains.
Speaker: Hemant Taneja
The way India is going to lead in artificial intelligence, from my perspective, is through entrepreneurship. Ultimately, startups are the most important institutions of the future.
Elevates startups from being merely participants to being the primary drivers of societal transformation, adding a layer of strategic focus on ecosystem building.
Steers the discussion toward ecosystem dynamics and the role of venture capital, preparing the audience for the concrete investment pledge that follows.
Speaker: Hemant Taneja
We’re increasing our investment. We’re going to be investing $5 billion over the next five years in the Indian entrepreneurial ecosystem – the largest of its kind.
Provides a tangible, high‑stakes commitment that operationalizes the earlier visionary statements, turning rhetoric into measurable action.
Serves as the climax of the speech, converting abstract ideas about resilience, deflationary AI, and entrepreneurship into a concrete financial pledge. It reinforces the earlier call to “come build with us” and signals to the audience that the proposed transformation is backed by significant capital.
Speaker: Hemant Taneja
Overall Assessment

Hemant Taneja’s remarks moved the discussion from a broad, inspirational framing of AI as a catalyst for ‘global resilience’ to a focused, actionable agenda centered on entrepreneurship and massive capital deployment. Key turning points—such as the challenge to the job‑loss narrative and the announcement of a $5 billion investment—reoriented the conversation from theoretical benefits to concrete strategies for India’s AI leadership. These pivotal comments not only introduced fresh perspectives but also shaped the tone, directing the audience toward a proactive, investment‑driven vision of AI‑enabled national resilience.

Follow-up Questions
How can India ensure that AI-driven productivity gains are equitably distributed among its young workforce?
Addressing concerns about job displacement and maximizing inclusive growth is essential for widespread adoption of AI.
Speaker: Hemant Taneja
What mechanisms are needed to facilitate fluid innovation flow between the U.S., India, Europe, and the broader democratic world?
Ensuring cross‑regional collaboration is critical for scaling AI responsibly and maintaining democratic values.
Speaker: Hemant Taneja
What specific strategies will be employed to leverage AI for national resilience across sectors such as healthcare, defense, and energy infrastructure?
Operationalizing AI’s role in resilience requires concrete plans and sector‑specific roadmaps.
Speaker: Hemant Taneja
How will the $5 billion investment over the next five years be allocated among Indian startups, and what metrics will be used to assess impact?
Transparency and clear impact metrics are needed to evaluate the effectiveness of the large capital commitment.
Speaker: Hemant Taneja
What role can open‑source initiatives play in India’s AI ecosystem, and how can they be supported?
Open‑source development can accelerate innovation and democratize access to AI technologies.
Speaker: Hemant Taneja
How can India ‘leapfrog’ traditional digital infrastructure models (e.g., UPI, Aadhaar) to create new AI paradigms in other industries?
Understanding leapfrogging opportunities can guide policy and investment to reshape industry standards.
Speaker: Hemant Taneja
What evidence supports the claim that AI is deflationary, and how can this effect be harnessed to uplift healthcare and education for a billion‑plus population?
Empirical data is needed to validate the economic impact of AI and to design policies that leverage deflationary benefits.
Speaker: Hemant Taneja
What policies or initiatives are needed to counter the narrative that AI will take jobs and instead empower the workforce with AI tools?
Shaping public perception and providing upskilling pathways are crucial for successful AI integration into the labor market.
Speaker: Hemant Taneja

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Digital Democracy Leveraging the Bhashini Stack in the Parliamen

Digital Democracy Leveraging the Bhashini Stack in the Parliamen

Session at a glanceSummary, keypoints, and speakers overview

Summary

The event focused on advancing inclusive voice AI in India through a new policy report and developer toolkit [24-26]. Amitabh Nag emphasized that AI solutions must be continuously updated because their “shelf life” can be as short as three months, and that the diversity of language, culture and individuals requires inclusion to be built into design [5-13]. Ariane Ahildur highlighted the Indo-German partnership that has produced open voice models for nine Indian languages, enabling applications such as health-worker assistants and farmer advice, and linked the work to the Hamburg Declaration on responsible AI for the SDGs [42-48][49-52]. Harleen Kaur presented the policy framework’s four pillars-treating foundational data as public goods, institutionalising sustainable open-source infrastructure, building open and representative models, and strengthening responsible deployment-along with a developer toolkit that addresses representation, data quality and embedding responsible-AI practices [73-78][90-93]. In the panel, Nag described a two-stage data strategy: initial “brute-force” collection to create diverse primary corpora, followed by generation of improvement corpora from user-generated digital interactions, both in open and closed domains [124-131][133-138]. Ghosh argued that instead of exhaustive data gathering, modelling should start from intrinsic linguistic components (e.g., Indo-Aryan vs Dravidian families) and then diversify, illustrating this with a Telugu project that covered four dialects using a cost-effective, region-anchored approach [160-167][174-182]. Kritika K.R. stressed that industry adoption depends on scalable, edge-ready infrastructure, domain-specific model tuning, and compliance-driven on-prem deployment to ensure security and reliability [196-199][252-254]. Thomas Vallianeth warned that voice datasets sit at the intersection of privacy and copyright law, requiring early documentation, licensing checks and privacy-enhancing techniques to protect downstream use [208-213][214-218]. He added that legal safeguards and thorough documentation can reduce subjectivity in disputes over harmful content or licensing, though courts are still catching up with evidentiary standards [292-298][300-301]. Ghosh highlighted the inherent variability in human transcription, recommending multi-layered evaluation that goes beyond word-error rate and incorporates multiple possible outputs and downstream task feedback [228-237][240-244]. Nag echoed this by suggesting that ultimate evaluation should be based on audience comprehension and acceptance rather than absolute ranking, acknowledging contextual differences such as legal or courtroom settings [256-262][267-272]. The participants agreed that a national, collaborative evaluation framework-potentially a single leaderboard managed by Bhashani-could drive continuous improvement and benchmark progress across languages and dialects [313-319][315-319]. They concluded that coordinated policy, technical and legal measures are essential to build a trustworthy, inclusive voice-AI ecosystem in India, with the report and toolkit serving as concrete tools for stakeholders [31-33][90-93][302-304].


Keypoints


Major discussion points


Inclusion and diversity are core design requirements for voice AI.


Nag stresses that AI systems have a short “shelf-life” and must be continuously upgraded because “each person is different… each language is different… each culture is different” [9-13]. He adds that “inclusion is the name of the… design” and that diversity must become a standard [15-16]. Ariane reinforces this by stating that voice AI is a gateway to public services only when it works in local languages and dialects, otherwise it risks exclusion [36-39].


A four-pillar policy framework and accompanying developer toolkit were presented to guide the ecosystem.


Harleen outlines the policy pillars: treating foundational data sets as public goods, institutionalising sustainable open-source infrastructure, building open and representative models, and strengthening responsible deployment [73-78]. The toolkit translates these principles into practice, focusing on representation, data quality, and embedding responsible AI (RAI) throughout the development lifecycle [90-107].


The voice-technology lifecycle faces multi-layered challenges, from data collection to downstream impact.


The report identifies problems at every stage-data collection, curation, model development (linguistic gaps, lack of standards), hosting/licensing (costs, governance), and deployment (bias, exclusion) [66-69]. Nag expands on how continuous data creation (primary and improvement corpora) and user-feedback loops are needed to keep the ecosystem alive [121-138].


Current evaluation methods are inadequate; a new, ecosystem-wide evaluation approach is required.


Ghosh explains that human annotators often disagree on transcriptions, making word-error-rate an insufficient metric and calling for multi-layered, subjective-objective evaluation pipelines [228-241]. Later participants propose a national benchmark/leader-board to drive collaborative competition and yearly progress across languages and dialects [315-319].


Legal, privacy, and trust considerations must be embedded from the start.


Thomas highlights the intersection of copyright and privacy law, the need for clear provenance, licensing, privacy-enhancing techniques, and robust documentation to ensure downstream safety [208-218]. He also notes that building trust through safeguards (e.g., for harmful-content detection) reduces subjectivity in legal disputes [286-293].


Overall purpose / goal of the discussion


The session was the formal launch of a Policy Report and Developers Toolkit on “Open and Responsible Voice Technology Ecosystem in India.” It aimed to showcase the collaborative Indo-German effort, present concrete policy recommendations and practical tools, and mobilise stakeholders (government, industry, academia, NGOs) to adopt inclusive, open-source voice AI that serves public services and sustainable development goals.


Overall tone and its evolution


– The opening remarks (Nag, Ariane) are optimistic and collaborative, emphasizing partnership, shared values, and the promise of inclusive voice AI.


– The middle segment (Harleen, Nag, Ghosh) shifts to a problem-solving, technical tone, detailing concrete challenges across the data-to-deployment pipeline and proposing actionable frameworks.


– The later exchanges (Thomas, Ghosh, Nag) become reflective and cautious, acknowledging legal complexities, evaluation uncertainties, and the need for ongoing dialogue and trust-building.


Overall, the tone remains constructive and forward-looking, moving from enthusiasm about possibilities to a realistic appraisal of the work still required.


Speakers

Speakers from the provided list


Ariane Ahildur


– Expertise: Global health, digital technologies, AI policy and inclusion


– Role/Title: Director General, Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security, German Federal Ministry for Economic Cooperation and Development[S2]


Prasanta Ghosh


– Expertise: Speech technology research, inclusive data-set design, AI evaluation


– Role/Title: Associate Professor, Indian Institute of Science[S4]


Thomas J. Vallianeth


– Expertise: Legal aspects of AI, copyright, data governance, privacy law


– Role/Title: Counsel, Trilegal[S6]


Amitabh Nag


– Expertise: Voice AI, inclusive AI systems, digital public goods, policy implementation


– Role/Title: CEO, DIBD (Digital Innovation for Bharat Development)[S8]


Kritika K.R.


– Expertise: Applied AI for enterprise, voice interfaces, multilingual conversational systems


– Role/Title: Head of Artificial Intelligence & Product Research, SanLogic[S10]


Moderator


– Expertise: Session moderation (no specific domain mentioned)


– Role/Title: Event Moderator[S12]


Nihar Desai


– Expertise: AI policy, ecosystem coordination, panel moderation


– Role/Title: Head, JNI (Joint Network Initiative)[S15]


Harleen Kaur


– Expertise: Research on voice technology policy, developer toolkit creation


– Role/Title: Research Manager, Digital Futures Lab[S18]


Additional speakers (not in the supplied list)


Shailendra Pal Singh


– Expertise: Voice technology ecosystem development, open-source initiatives


– Role/Title: Senior General Manager, Bhashani (mentioned in the closing remarks of the transcript)


Full session reportComprehensive analysis and detailed insights

The event opened with Amitabh Nag emphasizing that voice-AI solutions cannot be treated as static products because their “shelf-life” may be as short as three months, necessitating continual upgrades[4-6]. He argued that inclusion and diversity must be built into design rather than treated as outliers, and that these qualities should eventually become standards[9-13][15-16]. He concluded with optimism about the collaborative journey ahead[17-19].


The moderator thanked Mr Nag and introduced Dr Ariane Ahildur-Brandt, Director General of the Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security of the German Federal Ministry for Economic Cooperation and Development, to deliver the keynote[20-23]. Dr Ahildur-Brandt announced the launch of the “Policy Report and Developers Toolkit Building on Open and Responsible Voice Technology Ecosystem in India”, a product of the Indo-German partnership[24-26]. She highlighted that open voice models have already been created for nine Indian languages, enabling applications such as health-worker assistants and farmer advisory services[42-47]. Emphasising social impact, she described voice as the most natural interface for low-literacy populations and warned that when voice AI does not work in local languages it can reinforce exclusion[34-39]. The initiative is positioned as a concrete contribution to the Hamburg Declaration on Responsible AI for the Sustainable Development Goals[49-52] and a call was made to deepen cooperation and build trust[53-54].


Harleen Kaur, Research Manager at Digital Futures Lab, then presented the consortium findings[55-61]. She outlined challenges across the entire voice-technology lifecycle-from data collection and curation, through model development (where linguistic gaps, lack of standards and unclear data ownership prevail), to hosting, licensing and downstream deployment (where bias and lack of accountability emerge)[66-69]. To address these, she introduced a four-pillar policy framework: (1) treat foundational data sets as public goods, (2) institutionalise sustainable open-source infrastructure, (3) build open and representative models, and (4) strengthen responsible deployment[73-78]. The accompanying developer toolkit translates these pillars into practice by focusing on (a) representation through diversity planning, (b) data-quality and evaluation, and (c) embedding Responsible AI (RAI) practices throughout the development lifecycle[90-107]. Concrete practices such as diversity wish-lists, synthetic data generation, layered data strategies, robust transcription standards, model-cards, and continuous post-deployment monitoring were highlighted[97-104][105-110].


The panel discussion began with Nihar Desai, Head of JNI, welcoming participants and asking Mr Nag how foundational speech data can be treated as a digital public good and whether a “flywheel” of continuous data creation is feasible[113-119]. Mr Nag responded that data creation must proceed on two fronts. First, traditional “brute-force” field collection continues to generate primary corpora covering specific languages, dialects and regions[124-127]. Second, products built on these models can generate an “improvement corpus” by automatically capturing user-generated parallel data, which is then vetted, annotated and fed back into the models[128-132]. He added that both open-domain sources (e.g., YouTube) and closed-domain applications (e.g., enterprise or government tools that solicit user corrections) can be harnessed as systematic programmes to keep the ecosystem alive[133-138][139-144]. Nihar summarised this as a shift from static datasets to lived, feedback-driven resources[146-149].


Dr Prasanta Ghosh then addressed gaps in inclusive dataset design, noting that Indian linguistic diversity stems from cultural, caste and local-knowledge factors and that many languages share intrinsic components (e.g., Indo-Aryan vs Dravidian families)[155-163]. He advocated starting from these intrinsic bases and then adding targeted data to capture unique regional features, thereby reducing cost and timeline while preserving coverage[164-167]. As a concrete illustration, he described the ResPin project for Telugu, which covered four major dialects (Krishna-Guntur, Vishakapatnam-Vizag, Anandpur-Chittoor, Nalgonda). By identifying common acoustic and linguistic traits, the team collected a core set of stimuli that served multiple dialects and then supplemented each region with a smaller, complementary set, achieving a “region-anchored” approach that lowered budget and accelerated development[174-184][185-188].


Ms Kritika K.R. highlighted industry-level challenges, stating that voice is becoming the primary interface for sectors such as healthcare, manufacturing and automotive and that domain-specific adoption requires consistent user-scenario design, knowledge-repo integrations, beta-healthcare pilots, and the combination of ASR with large language models[190-194]. She stressed the need for scalable, sustainable infrastructure-including edge-ready models-to enable wide-scale adoption, and noted that reliable data pipelines and device-level intelligence are essential for real-world use[196-199]. Regarding compliance, she advocated on-premise deployment of fine-tuned open-source models, which allows organisations to meet security and regulatory requirements while still benefiting from community-driven advances[252-254].


Thomas Valunith, Counsel at Trilegal, discussed legal considerations, explaining that voice datasets sit at the intersection of privacy law and copyright law; even publicly available recordings may be subject to third-party rights, necessitating early provenance checks, appropriate licensing and, where possible, privacy-enhancing technologies[200-204][208-213]. He argued that robust documentation from the outset is crucial to enable downstream users to trust the data and to reduce legal exposure[214-218], and that embedding safeguards such as content-moderation rails into the data-collection pipeline can pre-empt disputes over harmful content, thereby engineering trust into the ecosystem[291-298]. When asked about subjective evaluation outcomes, he noted that existing statutes provide mechanisms for privacy and copyright, but evidentiary standards for AI-generated outputs are still evolving; thorough documentation and demonstrable safeguards can satisfy courts and procurement bodies even when evaluation involves a degree of subjectivity[289-301][302-304].


A participant identified as Nishant then called for a national, standardized evaluation framework/leaderboard for Indian languages, referencing NIST-style metrics to ensure comparability across models[315-321].


Dr Ghosh subsequently proposed a single, unified leaderboard under the “Varshini” (Bhashini) umbrella to benchmark progress across languages and to coordinate national evaluation efforts[322-324].


Mr Nag returned to the theme of audience-centric assessment, stating that the ultimate test of a voice system is whether the intended audience understands and accepts the output, not whether it ranks first on a technical leaderboard[256-262]. He acknowledged that different contexts-such as courtroom proceedings versus casual conversation-demand varying levels of linguistic purity and accuracy[267-272], and advocated defining acceptability thresholds based on audience needs and then working backwards to shape model development[273-283].


The panel collectively agreed that (i) inclusive voice AI is a gateway to public services for low-literacy populations, (ii) the four-pillar policy framework provides a roadmap for treating data as a public good, institutionalising open-source infrastructure, building representative models and ensuring responsible deployment, (iii) continuous data creation through primary and improvement corpora is essential, (iv) legal and privacy safeguards must be baked in from the start, and (v) evaluation must move beyond single-metric rankings to audience-centric, multi-layered approaches[15-16][73-78][121-144][208-218][228-244][256-262].


Specific action items were articulated by individual speakers: the moderator called for publishing the report and toolkit and invited participants to study them[326-331]; Harleen urged governments to fund and convene foundational speech data as public goods and to institutionalise open-source infrastructure[73-78]; Mr Nag highlighted the need to implement continuous data-creation pipelines based on both field collection and improvement corpora[124-132]; Ms Kritika emphasized promoting on-premise fine-tuning of open-source models for compliance[252-254]; Thomas advocated organising workshops to refine metrics, licensing models and documentation to support legal robustness[208-218]; and Nishant, reinforced by Dr Ghosh’s Varshini proposal, called for establishing a central, national evaluation platform or leaderboard for Indian languages[315-321][322-324].


Unresolved issues remain, notably the precise mechanisms for scaling feedback-driven data collection across India’s linguistic landscape, balancing coverage of low-resource languages with budget constraints, finalising legally robust evaluation criteria that satisfy procurement and court standards, and defining the governance structure for the proposed national leaderboard[124-127][155-167][284-286][315-321].


In closing, the moderator thanked all speakers, invited Mr Shailendra Pal Singh to felicitate the participants, and urged the audience to study the report and toolkit to advance an inclusive, trustworthy voice-AI ecosystem in India[326-331].


Session transcriptComplete transcript of the session
Amitabh Nag

including, you know, Southeast Asia as well as Africa and other places. So from that perspective, it is very important that we scale these solutions. We have policies, standards, toolkits which are developed which can be actually replicated. And frankly speaking, in this area, in this situation, nothing is static. You have a shelf life which is sometimes three months or six months or even less. Yes. So we have to continuously upgrade the things as we go by. You know, we can’t be saying that this is what we have done, unlike a machine which we have built up and it works for six years or five years. There is no guarantee, no warranty in these kind of systems which we are building in AI.

AI, and the reason for this is diversity. You know, each person is different. Each language is different. Each culture is different. So there is… There is huge amount of diversity and we have to live with the diversity unlike the earlier digital systems which used to work on only standards. You know, they had standards and they would perhaps keep the outliers away. Here, inclusion is the name of the, inclusion is part of the design, diversity is part of the design. And we would perhaps have to go step by step to define those diversities so that they start becoming standards. Right. You know, it’s a very different kind of a setup which is there and happy to be part of this journey, happy to, happy and acknowledged to the help which is being provided.

And hopefully we are going to get across to the next level and higher steps in the journey as we go by in future. Thank you very much.

Moderator

Thank you, Mr. Nag for your insightful words and also for your incredible support throughout the last year over the course of the program. Right. Thank you. I will now invite Dr. Ariane Ahildur -Brandt, Director General of the Department for Global Health, Equality of Opportunity, Digital Technologies and Food Security of the German Federal Ministry for Economic Cooperation and Development to deliver the keynote address. Thank you. Thank you.

Ariane Ahildur

Dear Mr. Naack, dear partners, distinguished guests, it is a great pleasure to welcome you to this launch today. We present to you the Policy Report and Developers Toolkit Building on Open and Responsible Voice Technology Ecosystem in India. The report and the toolkit are the impressive result of a very productive partnership between Germany and India. And it is the result of a joint effort involving a group of distinguished partners and experts. This is why I would like to start by thanking you, Mr. Nack, and your colleagues from Ascini, for the excellent cooperation. And I would like to thank the Digital Futures Lab, Art Park, TriLegal, and NASSCOM for their invaluable support. Dear guests, you will find that the report and toolkit that we are presenting today is full of best practices and lessons learned.

It will provide guidance and hands -on advice to policymakers and to the tech community alike. But for me, this report is more than useful and more than practical content. It also conveys a shared conviction, shared values, and a shared vision for digital inclusion. In fact, when it comes to inclusion, voice technology has a key role to play. For millions of people. Voice is the most natural and powerful interface to the digital world, especially for those with limited literacy or access to digital devices. When voice AI works in local languages and dialects, it will become a gateway to public services, healthcare, education, and economic participation. When it does not, AI risks reinforcing existing devices and may even become an instrument for exclusion.

This is why responsible, inclusive voice AI is not just a technical issue. As I said, it is part of a shared vision, a shared vision between India and Germany. At a time when artificial intelligence is often framed as a global competition, this report offers a different narrative, and this is a narrative of cooperation. The Indo -German Partnership on AI, and particularly on language, and voice technologies shows what is possible when we join forces. Together with Bashini and the Indian Institute of Science, our initiative Fair Forward has created open voice technologies for nine Indian languages. These language models can now be used by NGOs, state agencies and companies. For example, they can be integrated into voice assistance for health workers, which in turn can improve health care for women.

Or they can be used to advise farmers on crop management. This collaboration, based on the principles of openness, fairness and responsibility, is the foundation for AI that truly serves the common good. And it contradicts those who claim that only fierce competition can generate prosperity and innovation. Ladies and gentlemen, this approach, closely aligns with the principles articulated by the International Cooperation on Climate Change. in the Hamburg Declaration on Responsible AI for Sustainable Development Goals. This declaration, presented by BMZ, our ministry, and UNDP last year, has been endorsed by more than 50 stakeholders already, including governments, international organizations, NGOs, and companies. The declaration reminds us that AI should serve the people and the planet, strengthen inclusion, and support sustainable development.

And our report here is a very practical and relevant contribution to that agenda, translating shared principles into concrete guidance. So let us thus deepen cooperation, strengthen trust, and build voice technologies that truly speak to everyone. Thank you for your attention.

Moderator

Thank you so much, Dr. Hillbrand. We shall now move on to the formal launch of the report and toolkit. I’ll invite all the representatives of the consortium from GIZ, Tri -Legal, Art Park, NASSCOM, Digital Futures Lab to please come on stage. And Mr. Nag to present the data. Thank you. Thank you. Thank you. Thank you. Now that we’re done with the formal launch of the report and policy toolkit, just to give you a brief overview, I invite Ms. Harleen Kaur, Research Manager, Digital Futures Lab, to present the report.

Harleen Kaur

Good morning, everyone, and thank you for being present. on a Friday morning for the launch of this report, as well as the developer toolkit. So I’ve linked the outputs in case you’d want to see them. If you can take a quick photo, and I’ll move towards discussing the high points of the findings that we had both for our policy report as well as developer’s toolkit. So when we began this work last year, we found that the challenges that are there in the voice tech arena, they are not limited to data collection alone. So the challenges are multi -layered that start right at the data collection stage and curation stage, but then move on to model development, where we see linguistic diversity gaps, lack of standards, uneven documentation, unclear data ownership and structures being a problem.

But then when we move on to the, hosting and licensing aspect, long -term infrastructure costs, costs, governance of open source assets, as well as sustainability of shared resources is something that we felt was a very important problem that needed to be solved in a certain manner. And the last is downstream deployment and impact, where bias, exclusion and lack of accountability for misuse become more visible. All of these are essentially starting at the data collection stage, but they move on to the life cycle of the voice technology ecosystem in India, specifically when you feel like supporting an open voice ecosystem in India. To lay down our approach for this project, we thought about how can we move on from the traditional government systems where government has primarily acted as a regulator, it enforces rules, it corrects market failures, to a newer active role, and that we have seen with Bhashani.

We encourage governments across the world to adopt this framework where the government acts as a steward of public good. ecosystem convener, as well as a standard setter, not just through licenses, but actually through practice as well. This is the overview of our policy framework. Based on this approach, we have structured our policy framework around the four pillars that you see on the screen. The first is treating foundational data sets as public goods. Second is institutionalizing sustainable open source infrastructure. Third is building open and representative models. And finally, strengthening responsible deployment. And what do we mean when we say this? When we say treat foundational data sets as public good, we are saying that government should be encouraging both funding and convening for public good functions.

For example, supporting languages that are not commercially viable as such. Institutionalizing governance. Governance framework. Thank you. to strengthen RAI practices, for example, through procurement, etc. On open representative models, we believe that local and contextually relevant benchmarks that are curated by government bodies not just at the center, but at the relevant diversity ecosystem, whether it is state, district, etc., is important. Shared national compute infrastructure, preferential treatment to open source ecosystem is something that we propose. On open source infrastructure itself, standardization of documents and promoting collaborative data steward models is something that has already been written in the report. Strengthening responsible deployment, public value sharing is another aspect of the report. We believe that public value sharing comes not just from financial arrangements, but also a buy -in of communities into what kind of…

uses of voice technology are there. And of course, supporting public literacy to protect against misuse and preventing harms is the policy side of our suggestion. Moving on to developer’s toolkit. You know, policy intent alone does not ensure inclusive AI systems. So alongside the policy framework, we’ve developed a developer toolkit that translates some of these principles into practice for developers. So it focuses on three broad areas, representation being the foremost through diversity planning, et cetera. Second being data quality and evaluation. And the third one being embedding RAI practices throughout the lifecycle of development of open voice I’ll just give you a brief overview of what we mean when we say this. So for developers, we have a toolkit that includes best practices that we’ve seen in industry.

And we have a toolkit that we’ve seen in India and outside on what does it mean to ensure adequate representation. on what does it mean to ensure adequate representation. So we have a toolkit that we’ve seen in India and outside So we have a toolkit that we’ve seen in India and outside on what does it mean to ensure adequate representation. Things like having a diversity wish list, making sure that you’re not collecting data from one source, applying linguistic expertise, using synthetic data, training model for linguistic and environmental nuances, and also layered data strategy. Which again means that don’t just use one source of data. Don’t do active or passive collection alone. Use a hybrid layered structure to make your models more diverse.

Once the developer move on from data collection to curation, we suggest many, many ways. This is just a very bird’s eye view overview in which data quality can be enhanced in the constraints that we operate in, in countries like India. And there are suggestions to make the applications inclusive and useful in practice, including robust transcription standards, contextual benchmarks. using data cards, model cards that are standardized, as well as continuous post -deployment monitoring. You can find more details in the report itself. And the last aspect of the developer’s toolkit is actually embedding RAI practices. We’ve taken another lifecycle framework within this where we believe that RAI practices are not the domain of policy alone. At enterprise startup developer level, ensuring a framework that serves to support them by providing them clarity on what does it mean when we say your output should be responsible.

So things like be mindful of engagement with the communities from whom you are taking data, annotation is happening, consent protocols, privacy enhancing techniques. So this report essentially is compliance plus. It actually shares practices that we believe are useful to promote open, responsible AI voice technology ecosystem. Please feel free to engage with the reports We’ll be very happy to take your comments, suggestions Thank you so much

Moderator

Thank you, Harleen We shall now move on to a short panel discussion On voice technologies in India Unpacking the present and future Of the voice AI application ecosystem For India and beyond Joining us today, I will invite to the stage Mr. Amitabh Nag, CEO of DIBD Dr. Prasanta Ghosh, Associate Professor At the Indian Institute of Science Ms. Kritika K .R., Head Artificial Intelligence And Product Researcher, SanLogic Mr. Thomas Valunith, Counsel Trilegal And this discussion will be moderated By the Board of Directors of the Indian Institute of Science And Product Researcher, SanLogic Mr. Nihar Desai, Head of JNI Thank you.

Nihar Desai

Hello. Hello. Am I audible? Okay. Thanks everybody for joining. So, I just delving right deep into it. My first question to you would be Mr. Nag. As we saw in the toolkit, we were arguing that data set like foundational data sets, speech data sets, must be treated as DPIs and DPGs and hence be available in general. From your experience in driving this ecosystem for about two years since I’ve been a part at least, what does it take to continue creation, ongoing facilitation of such innovations being put up as a digital public good while ensuring trust safety, right? And is there a way for us to have a flywheel of data sorts, data goods of sorts?

Amitabh Nag

Yeah, that’s a very important aspect of what we should be doing. That means continue the creation of data sets because it will then improve the models as we go by. Now, continuation of creation of data sets are, I would say that these are going to be in two or three ways, you know. One is the way which we have been… doing, which is the brute data collection, which is going to the various fields and then picking up the data from there and then creating the diversity which is required to actually build the model. So that is one way of doing it and that will continue. We will have to keep the focus with respect to saying that now I am doing for this particular area, this particular dialect, this particular language, while as it will be for other language in some other way.

The second is to actually look at using the products which have been developed using these models and creating such open domain activities to create the digital data. So you are creating the digital data which you are speaking, automatically creating the parallel corpus and then finding a way to actually vet this out and annotate and label and saying that, okay, this is the improvement corpus. That is the second thing. So one, you are creating a primary corpus. Second is… you are creating an improvement corpus which can be again fed back to the model and say that this is what is to be used and that is a big area of work as we look at. Allied to that is a lot of also the digital data is getting created any which way in the open domain which we can actually use to build the corpus again.

So you know YouTube videos today the world is more digital than it was yesterday. But the conscious way of looking at it as a program is what is required. How do I look at it as a program that I will be creating a data corpus at various places and this need not be necessarily an open domain. Open domain is kind of an easy way to work upon it. It can be a closed domain as well that there is an application which is working in an enterprise or a government and the people there are given an option to give suggestions to the translations or the answers or the things which you have gone in and that can get into a wetting pipeline and you are able to create that.

So those applications which are related to this when we are looking at AI portfolio not only languages but otherwise AI portfolio is very important for us to be on a continuous improvement journey. The most important aspect hence would be that if a person for example is working on a enterprise system of mails for example and it is actually deriving some summary of a document in perhaps a known language also or not a known language. The summary differs from what he thinks as a manual activity. He should be able to put that down somewhere and that goes as a feedback to the model. Currently that is something which is a concept which which may or may not exist, some enterprise would have done it, other enterprise would not have done it.

So looking at these kind of interventions which can be run as a program in a conscious way that everybody is able to contribute into the system his or her own things and then take it back from the, you know, improve the model or improve the AI systems, because they still require a lot of interventions from each and every person. The knowledge still is deficient. Thank

Nihar Desai

So what I’m taking away is that data sets need to be more of lived in nature. It’s not static. It has to be built upon by users and by others. And also just the fact that the feedback itself could lead to better data quality and which is something that enterprises might be doing, but it could definitely be done more. Thank you for that input. But to his point on the first question on data set inclusivity, Prashanta, like in going back to your research activity. mostly on inclusive data sets. The toolkit also argues that inclusivity must be designed at the foundational data layer at the time of designing data sets. But still we do find data sets which do lack this aspect.

What’s your take on what are the gaps over here at the research and academia level in terms of designing better inclusive data sets that could hence lead to better applications down the road?

Prasanta Ghosh

That’s a very deep and good question. So to cover the diversity and become more inclusive, one approach would be to cover in the data, right? But if we think about the diversity that is there in Indian languages, right, that is a function of the culture, caste, local knowledge and everything, right? And while we see the diversity, they are not independent elements. There are certain commonalities and certain uniqueness in each of these languages and dialects and accents that we talk about. So one important direction in modeling would be to think about this intrinsic basis components that finally leads to this diversity. Instead of a brute force way of covering data from all parts of the country.

So if you can discover, for example, just an example, I’m not an expert of linguistics, but if you look at the Indian languages, there are two broad, right? One is Indo -Aryan and the other is Dabirian. Now, while there are multiple languages within each of the streams, we may say, well, can we go and then to cater certain technologies? Two speakers of these languages. should we go ahead and collect a good amount of data in everything, each of those. That may not be the only way to think about. How do we balance and make a trade -off between the amount of data we collect? We know that’s challenging and costly as well, to a novel modeling where we start from those intrinsic basis components and then manifest into those individual diversities.

I think that may help us to jointly think about modeling and collection for catering to this diverse population.

Nihar Desai

If you could help the audience with one example of when you say balance both aspects. Let’s say if we could pick up one of your initiatives, Syspin, Respin or Wani or any other data set. How did you manage or balance inclusivity versus model building activities versus maybe other factors that might be coming into factor while designing specifications?

Prasanta Ghosh

Yeah, so the aspect of modeling that I brought out is something I would say not very well established at this moment. But from my experience in the project ResPin, I can give a concrete example. For example, if you take the Telugu as a language, right, there are, we worked with four major dialectal variations. One is in the region of Krishna Guntur, another is Vishakapatnam Vizag, another is Anandpur Chittoor, another is Nalgonda. Now, when you look at their intrinsic variations, we see that there are some commonalities. And then there are some unique aspects in each of those dialects. So now think about a brute force approach that I collect thousand hours in each of them. Versus think of collecting certain kind of stimuli to cover the actuality.

Acoustics case of the speakers, maybe from one region that will automatically cater to the other region. And then collect something that will complement. in each of the other regions, right? So that way, our overall timeline, budget, cost will all go down. And there has to be a novelty in terms of having a model that will start from the intrinsic one and then naturally diversify itself to cater to those populations. So that has to become a region -anchored approach that we started later on in one.

Nihar Desai

I see. Okay. Thanks for that input. Just to summarize, what I’m taking away is that instead of having brute force approach, what we’re essentially saying is balancing across various parameters on the basis of which you would train a model, such as linguistic diversity, acoustic diversity, and then using some sort of a smart approach to dissect the current audience, ways of collecting data, to maximize the output while maximizing bang for the buck. Thanks for that input. But this… This is also slightly… you are coming from the perspective of academia I would like to switch to Dr. Krithika from the perspective of as an applied AI researcher you are also one of the people in this panel who has really deployed speech AI solutions what is your take on challenges that you faced with inclusivity either at the data set layer or the application layer

Kritika K.R

More towards on core of the enterprise applications, knowledge repo integrations are coming up, beta healthcare, or even the manufacturing automobiles. So voice being the go -to interface for different applications and enabling the workforce across the industries is coming up. So in that case, again, as I said, on the consistency with the various user scenario and more specific to the domain adoption. Specialized domain adoption is required. That feedback loop is more important while the system is in the practice or while the system is in progress. I would say that point. And more critical aspect is on giving the scalable and sustainable infrastructure that comes with more optimized models and also like bringing the edge deployments also.

So that the real adoption can be scaled across multiple… industries and the normal usage for… various sectors across the industry. So I’m talking more on the end user perspective and using, getting the data. Data is one source of it, but making it reliable across the infrastructure and also giving the required scalable model at the device intelligence level is also important when it comes to the real adoption of these AI models.

Nihar Desai

Thanks for the input. So I guess after all, industry is also using feedback as a tool. It’s a nice validation over here. Yeah, maybe coming to Thomas, switching tracks to slightly legal sites. We’ve seen that, at least in the toolkit also, we’ve argued that speech models and speech data sets are at the intersection of copyright law, you know, data governance and security, etc. And how do you propose, how do you propose balancing sort of innovation? versus caution on these sites, especially with all the researchers and practitioners in the room?

Thomas J. Vallianeth

Thanks, Nihal. That’s, again, a very helpful question. I think Harleen had articulated it quite well in the beginning when we have to consider the entire ecosystem as a whole. There is a common myth in India that anything that is public is freely available. I think what we have to think about is also that, you know, all data sets operate at the intersection of privacy law and copyright law. Under privacy law, most publicly available data sets are essentially freely available to be used under, you know, even the new legislation. But under copyright law, even if it is publicly available, somebody else may own the copyright on that. So there has to be careful thought put in place right from the beginning itself in terms of what data sets you’re collecting, what is the copyright provenance of it, are you able to defer to, you know, freely licensed and open source kind of material to compile it, compile that data set, and if not, are you able to obtain the licenses to do so?

So the thought process from the beginning in terms of how you’re structuring the way to get this and also how to reduce the surface area of the impact of some of these laws. So for instance, in relation to privacy laws, if you’re collecting somewhat more private data sets, if you can use privacy enhancing technologies or you’re able to extract data such that no personal data is ultimately captured or stored at the point of data collection, all of these are various ways in which you can put in place mechanisms right from the start of when the ecosystem begins to ensure that downstream use cases are also protected in that sense. The second big aspect is, of course, the documentation, right?

Now, the data collector, the data creator is essentially the person who is the gateway to the entire ecosystem in some senses. The documentation has to be robust right from the beginning to enable everybody in the downstream chain to be able to use this data and to ensure that there’s a good and safe and trusted ecosystem created. with respect to that specific data set. So yes, there are flexibilities that are available under the law in terms of how you are able to use voice data sets, but at the same time, there’s some caution that you have to put in place right from the beginning and throughout the life cycle of this in terms of figuring out how to be able to use these data sets effectively.

Of course, the last kind of related aspect to this is to think about the various layers in which these legalities operate. So of course, you can think of the speech data set itself as being copyrighted, but equally, if they are reading out of a book passage or if they’re reading specific performance and so on, there may be separate rights that are allocated in relation to some of these other tangential elements as well. All of these are to be accounted for from the very beginning of the ecosystem itself such that downstream usage is not… in that sense impacted. So I would say, you know, the report’s argument in that sense is that think about it as a whole.

Don’t think of each action in isolation. Think about the entire impact downstream as well. And then account for both either enabling maneuvers under law in terms of documentation, privacy enhancing techniques and so on, or implement the appropriate cautionary mechanisms to ensure that downstream usage is also protected.

Nihar Desai

Yeah, at least in some of the hats that I wear, I am also collecting data sets and those are important points that we keep in mind. And hopefully we’ll be able to take the learnings out of toolkit to actually implement in our processes. Switching tracks slightly to Dr. Prashanta here, we’ve without measurement, right, we don’t really get anywhere in terms of implementing the right frameworks, implementing the right legal processes, etc., in terms of implementing, measuring quality. what you’ve also spoken about evaluations being broken as far as Indian context are concerned can you elaborate a little bit on what challenges we face on a day to day basis where do they come across and how do you foresee this sort of challenges either getting resolved or getting amplified again I think this is an important area that all of us together should explore and contribute to

Prasanta Ghosh

so when we build something like an automatic speech recognition system that is being used in many many applications think of this to be yet another human who is listening to the audio and trying to spit out what is spoken in text now if you go out in the real world as we have realized that multiple number of times and experienced through multiple projects in ResPin as well as Vani and many other projects that I have done and that we are able to do and that we are able to do and that we are able to do and that we are able to do is that if you give a piece of audio to two individuals, they never exactly agree on what they hear.

And I’m telling from my experience, not from two different parts of the country, I’m talking in terms of, you know, two people from the same district. In fact, there was an incident where we realized that these two people were just three kilometers away in terms of their location, but still they did not agree how that should be written from the audio they hear. So what it tells us is there is an inherent variation or variability in the way as an individual, as an Indian, I perceive or I like to see the text as, right? Now, if we accept that fact that exists today, we need to think of building our systems and system evaluation to cater to that variation.

So we need to think of that variability and to be… robust to that variability. So if, as I said in the beginning, if we treat the system also as a human, it will also not agree with another human. So if we just go by word -by -word comparison of how the system performs compared to some of the humans, certainly it will not be 100 % accurate. Or in other words, we calculate using what we call word error rate, which is objective way of evaluating. So a word -based comparison is not probably the right way to go at this point. Maybe the ASR system is doing pretty well, but just because it made a mistake slightly in one of the words, we are penalizing and telling that it’s not doing well.

So now we have to think about how do we solve this problem. It could be that we have a multiple evaluation system where we just don’t use word error rate. That’s one aspect. Another way to think about this will be to build ASR so that it itself can give not just one output, rather multiple outputs. which could be potentially right and then evaluate that not just objectively but also subjectively through human because human can absorb that error and say yes still it’s okay third will be to take that to the downstream application where depending on what you are using could be an LLM or any other QNA system that can absorb that robustness so I think we need to break down the entire evaluation system into multi -layered evaluations and then they are not really independent we need to take feedback all the way down to the final application back to ASR and so on so forth so I guess here individuals from the application areas, individuals from the linguistic background, engineers everyone has to come together and

Nihar Desai

so what I am hearing is that to solve this is more of an ecosystem level challenge right and then And maybe before our ecosystem champion over here, Mr. Nag, before you come in on this, I would just like one industry perspective of Dr. Krithika, how do you solve this from an application standpoint? Prashanta explained this challenge from more of an academic or foundational research standpoint. But how does evaluation play a role in your daily application layer?

Kritika K.R

Yeah, so as I said, right, so the applications are varied. So now the adoption is at the conversational level, right from bringing the analytics out of the data. Then now it is more on the voice interface and the multilingual conversation. Now with the speech -to -speech translation, those things are more prevalent with the conversation right now. Now coming to the industry application, industry aspect of it, yeah, adopting these models to the custom data set is one way. And also right pick of sourcing the data. From the available open source so that this model will be more specialized to those particular tasks. and the work they are supposed to do it. So now coming with the LLMs, these models are more adaptable to the industry jargons or even the core of the industry workflow.

Now making AI with the ASR models also enabling with the LLM, you have various methods from the data creation perspective, leveraging the open source data, and also like custom tuning the data to the various industry use cases. Definitely with the required compliance and these open source models are also enabling the on -prem deployment of these models, which enables the security aspect when it comes to creating the model for different core industry applications so that the models can be much more fine -tuned or trained across the domain, keeping the compliance aspect and the security aspects intact.

Nihar Desai

so having heard both of these perspectives Mr. Nag, how do you just from your experience standpoint, how do we approach resolving this conflict where all of us sort of concur that evaluations need we need a better framework to evaluation but it’s also in some ways nobody’s problem at the moment so is there a way to break this

Amitabh Nag

so let’s step back and let’s evaluate our conversation itself you know, is there a framework by which we can say that who was saying, who has spoken better language right, it was as good as other people understand it you know, if the audience is able to understand what I am speaking and what I am intending to speak that is what is going to be the final evaluation by any aspect. What we have to actually look at it is that we have to reach a level by which it is acceptable to the people who are sitting in front of me. I don’t think we will be able to ever reach a situation where we will be able to say that this is the best, second best, third best.

It is a situation, ultimately the audience decide whether they are in a position to do that. We are looking at few of the use cases where we have actually deployed these technologies and we incidentally, you know, go to various evaluations. One of them is grievance incidentally and when we were giving it to the last, to the person who is actually the owner of the system, the acceptance was supposed to be taken up by various ministries. So one ministry would say that this model is better. The other ministry would perhaps display. It’s a question of perception and ultimately the audience would decide. And some would like the tone of speaking, some would like the modality, some would like the pronunciation.

So it’s all based on what the person’s perception is. Now, is there a common way in which we can say that this is the acceptable thing? Right? But then also we will have differences. You know, many of the public figures, for example, when they speak, you know, Hindi or English or whatever language, there are gaps in the language, but still, you know, they are understood. They are able to connect to the people. So we have a difficult challenge. Rather than looking at it only from a perspective of application or academics, we would have to look at it from a perspective of audience. But then we also have some issues. You know, we have situations where we have a lot of people who are not aware of the situation.

And we have to look at the situation from a perspective of the audience. which require accurate and perfect transcriptions. Like, for example, if I’m arguing a case in a court, you know, I can’t have variations in terms of languages. If I am, for example, trying to be in a meeting where I am saying something, again, I cannot have variations. But for that also, we will perhaps have to step two steps back and look at purity of language with respect to the acceptance. Because most of our language has become impure because of the fact that we are, you know, using mixed code most of the time, especially in the cosmopolitan area. And in the other areas, even if we are having native language, dialects are taking over.

So it’s a very complex problem. It’s not an easy problem to solve. At this point in time, when we are looking at how do we actually take it forward, I would tend to say that we should look at what is acceptable to the audience and then start working back to define an acceptable way by which in which the models can go out in the market.

Nihar Desai

Yeah, that’s an important point that so far we’ve been looking at mostly, at least I have been looking at mostly from the lens of application versus academia, but maybe we need to go what works point of view and not really from just the traditional ranking point of view. But Thomas, in a world where, and this is, we’ve not talked about this and this might be a curveball, but in a world where evaluation is slightly subjective and no longer objective, how does law see this? How do you make decisions for procurement? How do you resolve arguments, differences between two opinions, and especially in cases where both might be right and it’s a gray area? Like, do you foresee these sort of scenarios coming in, especially with Gen AI, which is like…

Like, do you foresee these sort of scenarios coming in, especially with Gen AI, to be fair I think the legal principles at least on this are somewhat more clear at least in terms of some of the more privacy facing or copyright facing principles they occur much before outputs for instance are produced or any of these methodologies are implemented and we have a body of law that existed for many years in India it’s just a question of how do you lead evidence in relation to some of these matters so if it ever comes to the question of is a specific output right or is a specific output implying this or a specific output implying that I think where we haven’t caught up as a country is in terms of how to evaluate the evidentiary standard in relation to that the principles of course are fairly laid out saying that this is how you would decide it but what you would show the court to say this is the evidence for that that’s something I think that’s still evolving but I think it also brings me to I think a larger point and I think we’re making on the in the report as well is that you know there is a measure of trust that needs to be put in place in the ecosystem as a whole, right?

Irrespective of what the outcome of evaluation may be, there are measures that you can put in place right from the get -go. And one example I can give you is in relation to harmful content, right? Now, if there is a debate in relation to whether content is harmful or not, and it is a subjective determination, you can avoid that question to some degree by putting in place the necessary rails and safeguards right from the beginning itself so that trust is engineered into the process already as opposed to having to face that choice kind of downstream. But yes, to your point, and if we’re coming to a place where we need to face that question, I think the principles exist, but how you lead evidence, how you show the court that one is the interpretation over the other, still developing and very, very subjective.

I think some of the cases that, you know, the prominent AI players have in the country will go a long way to develop. Some of those standards, but at least as of now, the court system is still trying to catch up. to some of these principles. Documentation goes a long way to show intent. Methodologies that you have implemented that go to the extent of showing that you assumed reasonably high enough safeguards, reasonably high enough principles. All of these go a large extent to show intent. And so the subjectivity, I think, in that sense is far reduced if you put in place some of these measures that bring trust in the entire ecosystem. So I think at that one flashpoint of failure perhaps is tough to look at for the courts as well.

But if you look at it from an ecosystem perspective, I think there’s a lot of that that may reduce those flashpoints of failure or those flashpoints of evaluation at least from a legal perspective.

Thomas J. Vallianeth

I see. Thanks for that summarization. That the law as such is at a stage where it can accommodate some amount of subjectivity but there needs to be dialogue and more policy decisions to make it crisper and of course follow on into the application of the law. Thanks for that input. last question is leaving the floor open in terms of any inputs we do have the topic at hand is challenges and best practices for speech models and data sets at the ecosystem level or from your experiences any open points any arguments that you would like to make or any sort of a call out that you would like to make to the ecosystem right here it means the call out is means like you know many of the things which were indeterministic or unknown a few days back have started coming into a situation where we are able to crystallize it so I think we need to get into more workshops more discussions to think about it as how to do it take more use cases study more use cases in detail to figure out a framework by which acceptability and evaluations are properly benchmarked.

That’s a good point. Go ahead, Thomas. I have a point to add here, which is, you know, I think there is a certain sense of affinity in this ecosystem towards open source data sets or open models. I would be more thoughtful in terms of how and when these are suitable. Are there particular safeguards you need to put in place for open source data sets is something you need to think about. Are there end -use considerations that need to be tailored? And a good example is, you know, we have, I’ve seen an example where somebody is training a model to detect hate speech, right? Now the safeguards you would put in place to detect hate speech in a model is different from a data set and a model that you would develop to detect regular speech -to -speech translation.

So the decision as to what licensing frameworks, what documentation frameworks are fairly, need to be informed by what end -use case you’re doing, what unique… Thank you. attributes arise as a result of the specific data sets and applications that you are considering and finally on the basis of what downstream users you are expecting the choice needs to be made I think in a little bit more of a conscious fashion Nishant you wanted to say sure this your question actually stimulates me to think about you know English I mean you know sort of models that were built on American English so there have been always a standardization on evaluation in fact NIST evaluation if you look at there have been various protocols and there have been call out every year who beats the best baseline so far achieved I believe we have to do in our country in India at least for Indian languages and it’s very diverse as we just discussed so first of all thinking about how to evaluate and then creating a national level framework for evaluation.

And every year, let’s assess ourselves, all these stakeholders, right?

Prasanta Ghosh

It could be general evaluation, could be application specific in each language or dialect. And then we really have a leaderboard, which, of course, you know, there are many individual leaderboards across the country, but let’s have only one under Varshini, let’s say, right? And that should be elaborate enough to cater to all languages and dialects. And maybe that’s not the right way, but you think through and make sure every year we make progress in each of those three. I think that has to be brought in in the system to bring competitiveness in a collaborative way, of course. And overall, that can help improve the voice technology in Indian languages. And the reason I’m saying it

Nihar Desai

mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, Prasanta, in terms of I hear you sort of speak passionately about evaluation and now you’re taking it one step further in terms of how do we really create a unified framework for evaluation within competitive but yet collaborative manner for the ecosystem housed under a central, unpartial entity like Bhashani. This is a great point. I hope the audience found some of these points helpful and enriching. Thank you so much for making time in what is sure to be a very busy event and hope you have a rest of a good day. Thank you. Invite Mr. Shailendra Pal Singh, Senior General Manager, Bhashani to felicitate the speakers.

Thank you. Mr. Amitabh Nag Dr. Prasanta Ghosh Dr. Krithika K .I. Mr. Thomas Salenat I’m Ms. Harleen Kaur Thank you to all our speakers for walking us through this rich tapestry of voice technologies and their life cycle in the Indian context and we hope you read our report and the toolkit and find it useful. Thank you so much. Thank you so much to the audience for staying with us patiently throughout this entire hour. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (27)
Factual NotesClaims verified against the Diplo knowledge base (5)
Confirmedhigh

“Voice‑AI solutions cannot be treated as static products because their “shelf‑life” may be as short as three months, necessitating continual upgrades.”

The knowledge base states that AI systems in voice technology require continuous upgrades every 3‑6 months or less, confirming the short shelf‑life claim.

Additional Contextmedium

“Voice‑AI solutions need upgrades roughly every 3‑6 months rather than being static for years.”

S3 adds nuance by specifying the upgrade interval as 3‑6 months (or less), providing a broader range than the three‑month figure alone.

Confirmedhigh

“Inclusion and diversity must be built into design rather than treated as outliers, and these qualities should eventually become standards.”

Both S2 and S86 emphasize that diversity and inclusion should be integrated into systems from the start and become standard practice.

Confirmedhigh

“Voice is the most natural interface for low‑literacy populations and when voice AI does not work in local languages it can reinforce exclusion.”

S95 and S97 highlight voice as the most natural user interface, especially for native‑language interaction, and warn that lack of local‑language support can lead to exclusion.

Confirmedmedium

“The initiative is positioned as a concrete contribution to the Hamburg Declaration on Responsible AI for the Sustainable Development Goals.”

S98 explicitly links the discussed work to the Hamburg Declaration on responsible AI for sustainable development.

External Sources (99)
S1
EQUAL Global Partnership Research Coalition Annual Meeting | IGF 2023 — Ariana is an aerospace engineer and technology policy specialist who is passionate about creating gender-inclusive innov…
S3
S4
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Prasanta Ghosh(Dr. Prasanta Ghosh) – Associate Professor at the Indian Institute of Science
S5
https://dig.watch/event/india-ai-impact-summit-2026/digital-democracy-leveraging-the-bhashini-stack-in-the-parliamen — Thank you. Mr. Amitabh Nag Dr. Prasanta Ghosh Dr. Krithika K.I. Mr. Thomas Salenat I’m Ms. Harleen Kaur Thank you to all…
S6
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Thomas J. Vallianeth(Thomas Valunith/Thomas Salenat in transcript) – Counsel, Trilegal
S7
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Thomas J. Vallianeth provided legal insights, highlighting a critical misconception: “there is a common myth in India th…
S8
Inclusive AI_ Why Linguistic Diversity Matters — -Amitabh Nag- CEO of Bhashini
S9
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — mostly from my understanding and experience with the English that has happened, in the past. Yeah. interesting points, P…
S10
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This discussion centered on the launch of a policy report and developer toolkit for building open and responsible voice …
S11
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — The panel discussion revealed several critical challenges in the voice technology ecosystem. Dr. Prasanta Ghosh highligh…
S12
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S13
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S14
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S15
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — -Nihar Desai- Head of JNI, Panel Discussion Moderator
S16
IGF Retrospective – Past, Present, and Future — – **Nitin Desai** – Role/Title: Former MAG chair (approximately 5 years), chaired the working group on Internet governan…
S17
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Agreed with:Amitabh Nag, Nihar Desai — Continuous data creation and improvement through feedback loops Agreed with:Amit…
S18
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — – Thomas J. Vallianeth- Harleen Kaur
S19
How can Artificial Intelligence (AI) improve digital accessibility for persons with disabilities? — Calls for contributions from the private sector and startups to facilitate the inclusion of persons with disabilities. …
S20
A Global Human Rights Approach to Responsible AI Governance | IGF 2023 WS #288 — Community involvement is crucial, and efforts have been made to make content available in multiple languages to ensure i…
S21
Artificial General Intelligence and the Future of Responsible Governance — Yeah, I just, sorry. I just want to build on that. How it’s not just targeted manipulation or the things that we see in …
S22
Building Sovereign and Responsible AI Beyond Proof of Concepts — AI systems must deliver genuine value that goes beyond technical metrics to create meaningful improvements in people’s l…
S23
AI Algorithms and the Future of Global Diplomacy — For example, AI in healthcare is a fantastic opportunity for. Indo -German cooperation, there is fantastic data availabl…
S24
Building Inclusive Societies with AI — Kumar advocates for strong partnerships between public and private sectors to drive national development. He emphasizes …
S25
Keynote-Ankur Vora — “Technologists can choose whether we use AI to take on the world’s greatest challenges or just the most precious.”[1]. “…
S26
How nonprofits are using AI-based innovations to scale their impact — Responsible AI and Safety Integration: Unlike typical software development where quality comes later, this program embed…
S27
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously th…
S28
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Crampton argues that while pre-deployment testing remains necessary, the shift toward agentic AI systems that can plan, …
S29
High-level AI Standards panel — Sung Hwan Cho: Thank you. I think I fully agree with what Mr. Amandeep told us. I think the coordinated approach, coordi…
S30
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S31
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in Oslo. I wanted to come specifically to your quest…
S32
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S33
Creating Eco-friendly Policy System for Emerging Technology — In conclusion, the interrelationship among technology, society, and the environment is intricate and multifaceted, with …
S34
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Digital platforms and ecosystems are fundamentally transforming economies and presenting new challenges for competition …
S35
Strategy for digital transformation in the higher education sector — Considerations of privacy place great demands on the proper sharing and  processing of personal data. The privacy princi…
S36
Contents — Security and privacy must be designed in from the start. According to a Cisco/Jasper study, a mere 9% of consumers trust…
S37
Operationalizing data free flow with trust | IGF 2023 WS #197 — Maarit Palovirta:I’ll try and respond to the exact angles that you’re asking, but thankfully, we also have Jakob after m…
S38
AI-Driven Enforcement_ Better Governance through Effective Compliance & Services — Disagreement level:Very low level of disagreement with high implications for successful AI implementation. The consensus…
S39
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — These key comments fundamentally shaped the discussion by establishing a sophisticated framework that moved beyond simpl…
S40
AI and Cybersecurity  — Furthermore, the implications of these discussions for Sustainable Development Goals (SDGs) are emphasised, particularly…
S41
How AI Drives Innovation and Economic Growth — Consensus level:High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, developmen…
S42
What is it about AI that we need to regulate? — The discussions revealed that effective data collection for persons with disabilities requires comprehensive stakeholder…
S43
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This introduces a sophisticated approach to data collection that balances inclusivity with practical constraints. Rather…
S44
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Bonnita Nyamwire: Thank you so much, Christelle. So a gender-inclusive data is one that is representative of all genders…
S45
5th ‘Road to Bern via Geneva’ dialogue: On data and Tech4Good — Prof. Karl Aberer(Professor, Distributed Information Systems Laboratory EPFL) talked about data sourcing. He emphasised …
S46
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — The discussion revealed several unresolved challenges requiring continued attention. The fundamental question of evaluat…
S47
Advancing Scientific AI with Safety Ethics and Responsibility — Policy evaluation must expand beyond model-centric assessment to include broader socio-technical factors. This includes …
S48
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Certain types of data, such as climate and forest data, were highlighted as global public goods that can benefit society…
S49
Global Internet Governance Academic Network Annual Symposium | Part 2 | IGF 2023 Day 0 Event #112 — Some people argue for the need to establish a mechanism for treating data as a global public good.
S50
Driving Social Good with AI_ Evaluation and Open Source at Scale — However, audience questions revealed tension between this contextual approach and institutional needs for standardizatio…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S52
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S53
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — The UN High Commissioner for Human Rights argues that AI systems should advance human rights by design, requiring alloca…
S54
Global AI Policy Framework: International Cooperation and Historical Perspectives — Despite coming from different backgrounds (diplomatic/legal vs academic), both speakers advocate for patience and carefu…
S55
Open Forum: Liberating Science — It calls for a multifaceted approach to address these issues and emphasizes the need for robust and reliable information…
S56
Creating Eco-friendly Policy System for Emerging Technology — Additionally, the analysis embraces a more globalised, holistic approach to learning. It backs strategies that encourage…
S57
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — This comment fundamentally reframes how we think about AI system design – moving from standardization that excludes outl…
S58
High-level AI Standards panel — Sung Hwan Cho: Thank you. I think I fully agree with what Mr. Amandeep told us. I think the coordinated approach, coordi…
S59
Harmonizing High-Tech: The role of AI standards as an implementation tool — By inviting active engagement from developing countries, ISO and IEC ensure a diverse and inclusive standardisation proc…
S60
International multistakeholder cooperation for AI standards | IGF 2023 WS #465 — Inclusion of all relevant stakeholders is seen as crucial for effective AI standards. The inclusivity of diverse perspec…
S61
WS #270 Understanding digital exclusion in AI era — The discussion underscored the urgency of taking action to prevent further widening of the digital divide as AI technolo…
S62
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Good morning, everyone, and thank you for being present. on a Friday morning for the launch of this report, as well as t…
S63
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — Various aspects of life are impacted by these technologies
S64
Who Watches the Watchers Building Trust in AI Governance — Agreed with:Shana Mansbach — Current evaluation and testing methods are inadequate
S65
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Digital platforms and ecosystems are fundamentally transforming economies and presenting new challenges for competition …
S66
Opening Ceremony — Innovation must be guided by responsibility, with safety and privacy designed into products from the start
S67
Enhancing Digital Resilience: Cybersecurity, Data Protection, and Online Safety — Abitogun highlights the importance of involving end-users in the development of platforms and products. He suggests that…
S68
WEF Business Engagement Session: Safety in Innovation – Building Digital Trust and Resilience — Cybersecurity | Legal and regulatory | Sociocultural Safety and security must be built into products from the beginning…
S69
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S70
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S71
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S72
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S73
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S74
AI and Data Driving India’s Energy Transformation for Climate Solutions — The tone was collaborative and solution-oriented throughout, with speakers building on each other’s insights rather than…
S75
Advancing Scientific AI with Safety Ethics and Responsibility — The discussion maintained a collaborative and constructive tone throughout, characterized by technical expertise and pol…
S76
Safe and Responsible AI at Scale Practical Pathways — The tone was collaborative and solution-oriented, with industry experts and government representatives working together …
S77
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S78
National Disaster Management Authority — The discussion maintained a collaborative and solution-oriented tone throughout, with participants sharing both challeng…
S79
Ad Hoc Consultation: Wednesday 31st January, Morning session — Alongside this consensus, the mention of Paragraph F provides a key insight into the Convention’s mechanics, pointing to…
S80
Ad Hoc Consultation: Thursday 8th February, Afternoon session — By advocating clear and stringent legal duties, a basis for strengthened governance systems is established, underpinning…
S81
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — This comment injected realism into what had been a largely theoretical discussion. It forced participants to confront th…
S82
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S83
Mastering Diplomatic Competencies for an Ever-Changing World — 1. Failure to anticipate Russia’s actions in Ukraine: Ischinger reflected on the 2007 Munich Security Conference speech …
S84
Internet standards and human rights | IGF 2023 WS #460 — To address these issues, it is argued that there is a need to integrate multiple stakeholders into the standards process…
S85
Multilingual inclusion and universal acceptance for all communities — Edmon Chung emphasizes the need to integrate multilingual support natively into various systems and platforms. He argues…
S86
WS #166 Breaking Barriers: Empowering Women in Internet Network — This comment introduces a novel approach to ensuring diversity, suggesting it should be built into systems and processes…
S87
AI for food systems — Seizo Onoe argues that standards serve as more than just technical tools – they facilitate international collaboration a…
S88
HIGH LEVEL LEADERS SESSION I — Moderator:Thank you, Honorable Minister. That is quite an optimistic view, where he says it’s pretty much doable to fill…
S89
Keynote-Nikesh Arora — Overall Tone:The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S90
Opening of the session — They regarded the APR as an inclusive summary that effectively encapsulated the advancement of discussions over the year…
S91
Comprehensive Report: World Economic Forum Panel Discussion on Cybersecurity Resilience — The overall tone of the discussion was urgent but constructive, with panellists demonstrating confidence that with prope…
S92
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Moderator – Lea Gimpel:Good morning and a warm welcome. Since the room is not full yet, if you want, please also take a …
S93
Opening — Moderator: Thank you very much, Minister. Thank you for your support. Your presence here actually shows very much the su…
S94
High Level Session 4: Securing Child Safety in the Age of the Algorithms — Shivanee Thapa: Please welcome to the stage, the moderator, Ms. Shivanee Thapa, Senior News Editor, Nepal Television. A …
S95
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Durga Malladi argues that voice is the most natural user interface to devices, but it must work in native languages rath…
S96
Artificial intelligence (AI) – UN Security Council — Additionally, the development of AI systems should involve collaboration with local communities to better understand cul…
S97
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Voice interfaces exemplified this distributed approach’s practical applications. Malladi emphasized voice as “the most n…
S98
Multistakeholder Partnerships for Thriving AI Ecosystems — And as I mentioned at the beginning, one of the things that we have been doing with the, as part of the Hamburg Sustaina…
S99
Multistakeholder Partnerships for Thriving AI Ecosystems — Invitation extended for organizations to endorse the Hamburg Declaration with concrete, measurable commitments rather th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ariane Ahildur
3 arguments126 words per minute562 words266 seconds
Argument 1
Voice AI as a natural gateway for digital inclusion, especially for low‑literacy populations
EXPLANATION
Ariane emphasizes that voice technology provides the most intuitive interface for people who cannot read or afford digital devices. When voice AI supports local languages and dialects, it can unlock access to public services, health care, education and economic opportunities.
EVIDENCE
She states that “voice is the most natural and powerful interface to the digital world, especially for those with limited literacy or access to digital devices” and that “when voice AI works in local languages and dialects, it will become a gateway to public services, healthcare, education, and economic participation” [34-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Bhashini stack report highlights that “voice is the most natural and powerful interface to the digital world, especially for those with limited literacy or access to digital devices” and that voice AI in local languages can unlock public services [S3].
MAJOR DISCUSSION POINT
Voice AI as a natural gateway for digital inclusion, especially for low‑literacy populations
Argument 2
Responsible, inclusive voice AI is a societal issue, not merely a technical problem.
EXPLANATION
Ariane stresses that inclusion goes beyond engineering; it requires shared values, policy alignment and societal commitment to ensure voice AI benefits everyone.
EVIDENCE
She states that “responsible, inclusive voice AI is not just a technical issue” and links it to a shared vision between India and Germany [39-40].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI is framed as a societal challenge that must deliver long-term well-being and align with community values, not just technical metrics [S22]; a human-rights-focused approach stresses multilingual accessibility and inclusivity [S20]; governance safeguards for inclusive outcomes are also emphasized [S25].
MAJOR DISCUSSION POINT
Inclusivity as a societal challenge
Argument 3
Cooperative models, exemplified by the Indo‑German partnership, can outperform competition‑driven narratives.
EXPLANATION
Ariane argues that collaboration between countries demonstrates a viable alternative to the view of AI as a global competition, highlighting joint achievements in open voice technologies.
EVIDENCE
She notes that the Indo-German partnership on AI “shows what is possible when we join forces” and contrasts it with the claim that only fierce competition can generate prosperity [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report positions the Indo-German partnership on AI as a concrete example of cooperation over competition [S2][S3]; it cites joint data resources and investment opportunities in healthcare as proof of collaborative potential [S23]; broader public-private partnership benefits are discussed as a model for inclusive development [S24].
MAJOR DISCUSSION POINT
Cooperation versus competition in AI development
H
Harleen Kaur
6 arguments143 words per minute1036 words432 seconds
Argument 1
A four‑pillar policy framework treating foundational data as public goods, institutionalising open‑source infrastructure, building open representative models, and strengthening responsible deployment
EXPLANATION
Harleen outlines a policy approach built around four pillars: making foundational data sets public goods, creating sustainable open‑source infrastructure, ensuring models are open and representative, and enforcing responsible deployment practices. This framework is intended to guide both policymakers and technologists toward inclusive voice AI.
EVIDENCE
She presents the four pillars on screen, describing them as “treating foundational data sets as public goods”, “institutionalizing sustainable open source infrastructure”, “building open and representative models”, and “strengthening responsible deployment” [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Bhashini report outlines the same four-pillar framework, detailing public-good data, sustainable open-source infrastructure, representative model building, and responsible deployment guidance [S2][S3].
MAJOR DISCUSSION POINT
A four‑pillar policy framework treating foundational data as public goods, institutionalising open‑source infrastructure, building open representative models, and strengthening responsible deployment
Argument 2
Foundational speech data should be funded, convened, and governed as public goods to support non‑commercial languages
EXPLANATION
Harleen argues that governments need to finance and coordinate the creation of foundational speech datasets, especially for languages that lack commercial viability, treating them as public goods. This ensures that linguistic diversity is preserved and made available for open‑source AI development.
EVIDENCE
She explains that treating foundational data sets as public goods means “government should be encouraging both funding and convening for public good functions” and cites supporting “languages that are not commercially viable” as a target [79-81].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The policy discussion stresses that governments should fund and convene the creation of foundational speech datasets for languages lacking commercial viability, treating them as public goods [S2][S3].
MAJOR DISCUSSION POINT
Foundational speech data should be funded, convened, and governed as public goods to support non‑commercial languages
AGREED WITH
Amitabh Nag
Argument 3
Government should act as a steward of the public good, setting standards and convening ecosystems rather than merely regulating
EXPLANATION
Harleen proposes a shift from a traditional regulator role to a proactive steward that convenes stakeholders, sets standards, and guides open‑source ecosystems. This stewardship involves both licensing and practical implementation to foster inclusive voice technologies.
EVIDENCE
She describes moving “from the traditional government systems where government has primarily acted as a regulator” to a newer role where the government “acts as a steward of public good, ecosystem convener, as well as a standard setter” [70-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report calls for a shift from a regulator-only role to an ecosystem steward that sets standards, convenes stakeholders, and supports non-viable languages [S2][S3].
MAJOR DISCUSSION POINT
Government should act as a steward of the public good, setting standards and convening ecosystems rather than merely regulating
AGREED WITH
Ariane Ahildur, Moderator
Argument 4
Policy intent alone is insufficient; a developer toolkit operationalises inclusive AI principles for practitioners.
EXPLANATION
Harleen points out that without concrete tools, policy guidance cannot be translated into real‑world inclusive systems, so the toolkit provides actionable best practices.
EVIDENCE
She says “policy intent alone does not ensure inclusive AI systems. So alongside the policy framework, we’ve developed a developer toolkit that translates some of these principles into practice” [89-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A developer toolkit is presented as the bridge that translates policy intent into concrete best-practice guidance for developers [S2][S3].
MAJOR DISCUSSION POINT
Need for practical tooling to implement policy
Argument 5
Embedding Responsible AI (RAI) practices throughout the development lifecycle ensures ethical outcomes.
EXPLANATION
The speaker emphasizes that RAI must be integrated at every stage—from data collection to deployment—to guarantee responsible behaviour of voice systems.
EVIDENCE
She describes a lifecycle framework where “RAI practices are not the domain of policy alone” and lists actions such as mindful community engagement, consent protocols and privacy-enhancing techniques [105-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Responsible AI is advocated to be embedded from data collection to deployment, with guardrails and privacy-enhancing techniques introduced early in the pipeline [S26][S22].
MAJOR DISCUSSION POINT
Lifecycle integration of RAI
Argument 6
Use of data cards, model cards and continuous post‑deployment monitoring enhances transparency and accountability.
EXPLANATION
Harleen recommends standardized documentation (data/model cards) and ongoing monitoring after release to keep systems trustworthy and improve performance over time.
EVIDENCE
She mentions “robust transcription standards, contextual benchmarks, using data cards, model cards that are standardized, as well as continuous post-deployment monitoring” as part of the toolkit [103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Standardised data/model cards and ongoing post-deployment monitoring are recommended to maintain trust and improve system performance [S28][S27].
MAJOR DISCUSSION POINT
Transparency mechanisms for AI systems
N
Nihar Desai
3 arguments131 words per minute1767 words804 seconds
Argument 1
Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking
EXPLANATION
Nihar stresses that the success of voice AI should be judged by whether the intended audience understands and accepts the output, rather than by trying to rank models objectively. Perception and usability for end‑users become the primary metric of evaluation.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The report advises that evaluation prioritize audience acceptability, recognizing language use varies by context and application [S2].
MAJOR DISCUSSION POINT
Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking
AGREED WITH
Prasanta Ghosh, Thomas J. Vallianeth
DISAGREED WITH
Prasanta Ghosh, Thomas J. Vallianeth
Argument 2
Evaluation frameworks should be pragmatic and work‑oriented rather than purely ranking‑centric.
EXPLANATION
Nihar argues that the focus ought to be on what works for the audience and real‑world use cases, moving away from traditional hierarchical rankings of models.
EVIDENCE
He says, “maybe we need to go what works point of view and not really from the traditional ranking point of view” while summarising his take-aways [284-286].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A pragmatic, work-focused evaluation approach is recommended over traditional hierarchical ranking of models [S2].
MAJOR DISCUSSION POINT
Pragmatic, work‑focused evaluation
Argument 3
Legal and procurement processes must accommodate subjective evaluation outcomes through robust documentation and trust mechanisms.
EXPLANATION
Nihar highlights that when evaluation results are not purely objective, clear documentation of safeguards and intent is needed to satisfy legal scrutiny and procurement requirements.
EVIDENCE
He notes that “the principles exist, but how you lead evidence… documentation goes a long way to show intent” and stresses the need for trust-engineered safeguards [286-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robust documentation and trust-engineered safeguards are highlighted as essential for meeting legal and procurement requirements when evaluation outcomes are subjective [S27][S28].
MAJOR DISCUSSION POINT
Documentation to support subjective evaluation in legal contexts
A
Amitabh Nag
4 arguments162 words per minute1513 words558 seconds
Argument 1
Data sets must be continuously created and refined through feedback loops from real‑world use
EXPLANATION
Amitabh explains that voice AI datasets need ongoing collection and improvement, using both primary data gathering and feedback‑driven refinement. He highlights mechanisms such as creating an improvement corpus from product usage and incorporating user corrections back into models.
EVIDENCE
He describes three ways of continuing data creation: “brute data collection” from diverse fields, generating digital data via product usage to build a parallel corpus, and leveraging open-domain sources like YouTube; he also details a feedback loop where enterprise users correct summaries and feed those corrections back to the model [121-144].
MAJOR DISCUSSION POINT
Data sets must be continuously created and refined through feedback loops from real‑world use
AGREED WITH
Harleen Kaur
DISAGREED WITH
Thomas J. Vallianeth
Argument 2
Continuous improvement of models through enterprise feedback (e.g., summarisation corrections) supports practical deployment
EXPLANATION
Amitabh highlights that enterprises can improve AI models by reporting mismatches between system outputs and manual expectations, such as summary errors, which are then fed back into the training pipeline. This creates a virtuous cycle of model refinement aligned with real‑world needs.
EVIDENCE
He gives the example of a user correcting a document summary generated by an AI system and feeding that correction back into the model as part of a feedback pipeline [139-144].
MAJOR DISCUSSION POINT
Continuous improvement of models through enterprise feedback (e.g., summarisation corrections) supports practical deployment
Argument 3
AI systems lack warranties and have short shelf lives, requiring continuous upgrades to keep pace with diversity.
EXPLANATION
Nag explains that unlike static machines, AI solutions quickly become outdated due to rapid changes in language, culture and technology, so they must be regularly refreshed.
EVIDENCE
He states, “There is no guarantee, no warranty in these kind of systems which we are building in AI” and notes the need for continual upgrades because of diversity [8-9][4-6].
MAJOR DISCUSSION POINT
Need for ongoing updates due to AI’s short shelf life
Argument 4
Inclusion and diversity must be embedded in system design from the outset rather than treated as an afterthought.
EXPLANATION
Nag argues that designing for inclusion and diversity should be a core part of the architecture, ensuring that voice AI can serve varied languages, cultures and users.
EVIDENCE
He says, “Inclusion is the name of the, inclusion is part of the design, diversity is part of the design” and calls for step-by-step definition of diversities to become standards [15-16].
MAJOR DISCUSSION POINT
Design‑time inclusion and diversity
P
Prasanta Ghosh
5 arguments160 words per minute1184 words443 seconds
Argument 1
Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering
EXPLANATION
Prasanta suggests that instead of collecting massive amounts of data for every dialect, modeling should start from intrinsic linguistic bases (e.g., language families) and then extend to specific variations. This approach reduces cost and time while still covering diversity.
EVIDENCE
He discusses using intrinsic components of Indian languages, noting the two broad families Indo-Aryan and Dravidian, and proposes balancing data collection by focusing on shared acoustic characteristics before adding region-specific data, thereby lowering timeline and budget [155-167].
MAJOR DISCUSSION POINT
Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering
DISAGREED WITH
Amitabh Nag
Argument 2
Human transcriptions vary widely; reliance on word‑error‑rate alone is insufficient, requiring multi‑layered, both objective and subjective, evaluation methods
EXPLANATION
Prasanta points out that even human annotators disagree on transcriptions, making word‑error‑rate an inadequate sole metric. He advocates for layered evaluation that combines objective scores with subjective human judgment and downstream task performance.
EVIDENCE
He recounts instances where two annotators from nearby locations produced different transcriptions, explains the limitations of word-error-rate, and proposes multi-layered evaluation including multiple outputs, human assessment, and downstream application feedback [228-240].
MAJOR DISCUSSION POINT
Human transcriptions vary widely; reliance on word‑error‑rate alone is insufficient, requiring multi‑layered, both objective and subjective, evaluation methods
AGREED WITH
Nihar Desai, Thomas J. Vallianeth
DISAGREED WITH
Nihar Desai, Thomas J. Vallianeth
Argument 3
A unified national evaluation framework or leaderboard for Indian languages and dialects would foster collaborative competition and track progress
EXPLANATION
Prasanta recommends establishing a single, comprehensive leaderboard (e.g., under the Bhashani initiative) that evaluates models across all Indian languages and dialects annually. This would encourage healthy competition while promoting collaboration.
EVIDENCE
He proposes creating “only one under Varshini” leaderboard that is elaborate enough for all languages and dialects, with yearly progress assessments to drive improvement [315-321].
MAJOR DISCUSSION POINT
A unified national evaluation framework or leaderboard for Indian languages and dialects would foster collaborative competition and track progress
Argument 4
Human transcription variability requires ASR systems to be robust to differing interpretations.
EXPLANATION
Ghosh points out that even annotators from nearby locations disagree on transcriptions, so ASR evaluation must accommodate this inherent variability rather than rely on a single reference.
EVIDENCE
He recounts an incident where two annotators three kilometres apart produced different transcriptions, highlighting intrinsic variation and the need for robust systems [228-235].
MAJOR DISCUSSION POINT
Robustness to human transcription variability
Argument 5
A balanced approach that combines brute‑force data collection with smart, intrinsic‑component modelling optimises cost and coverage.
EXPLANATION
Ghosh suggests that instead of collecting massive data for every dialect, modelling should start from shared linguistic bases and then add targeted data, reducing timeline and budget while preserving diversity.
EVIDENCE
He discusses covering diversity by focusing on intrinsic components of Indian languages, noting the two broad families and proposing a trade-off between brute-force collection and smart modelling [155-167].
MAJOR DISCUSSION POINT
Cost‑effective inclusive data collection
T
Thomas J. Vallianeth
4 arguments171 words per minute1138 words397 seconds
Argument 1
Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential
EXPLANATION
Thomas explains that voice datasets are subject to both privacy and copyright regulations, requiring careful provenance checks, licensing decisions, and the use of privacy‑enhancing technologies from the outset. Proper documentation is crucial to ensure downstream compliance.
EVIDENCE
He notes that “all data sets operate at the intersection of privacy law and copyright law”, stresses the need to verify copyright provenance and obtain licenses, and recommends privacy-enhancing techniques and robust documentation from the beginning of the ecosystem [208-218].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Voice datasets are described as operating at the intersection of privacy and copyright law, requiring provenance checks, licensing, and privacy-enhancing techniques from the start [S27].
MAJOR DISCUSSION POINT
Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential
DISAGREED WITH
Amitabh Nag
Argument 2
Trust‑engineered safeguards (e.g., content‑moderation rails) reduce downstream legal ambiguity
EXPLANATION
Thomas argues that embedding safeguards such as content‑moderation rails early in the development process builds trust and minimizes legal disputes later. Documentation of intent and safeguards can demonstrate compliance to courts and regulators.
EVIDENCE
He gives the example of implementing “content-moderation rails” to pre-empt harmful-content debates, stating that documentation and demonstrated safeguards reduce downstream legal ambiguity and support evidentiary standards [291-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Embedding content-moderation rails and documenting safeguards are presented as ways to lower legal ambiguity and demonstrate compliance [S28].
MAJOR DISCUSSION POINT
Trust‑engineered safeguards (e.g., content‑moderation rails) reduce downstream legal ambiguity
AGREED WITH
Harleen Kaur, Nihar Desai, Ariane Ahildur
Argument 3
Subjective evaluation outcomes must be supported by robust documentation and trust mechanisms to satisfy legal and procurement standards
EXPLANATION
Thomas highlights that when evaluation results are subjective, thorough documentation and pre‑installed trust mechanisms become essential to meet legal and procurement requirements. This approach helps demonstrate that reasonable safeguards were in place.
EVIDENCE
He emphasizes that “documentation goes a long way to show intent” and that “methodologies… showing that you assumed reasonably high enough safeguards” can reduce subjectivity for legal scrutiny [292-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for thorough documentation and pre-installed trust mechanisms to validate subjective evaluation results for legal scrutiny is emphasized [S27][S28].
MAJOR DISCUSSION POINT
Subjective evaluation outcomes must be supported by robust documentation and trust mechanisms to satisfy legal and procurement standards
AGREED WITH
Prasanta Ghosh, Nihar Desai
Argument 4
Open‑domain data can be leveraged responsibly if provenance and licensing are verified early in the pipeline.
EXPLANATION
Thomas emphasizes that before using publicly available voice data, it is essential to check copyright provenance and obtain appropriate licenses, ensuring downstream compliance.
EVIDENCE
He notes that “all data sets operate at the intersection of privacy law and copyright law” and stresses verifying copyright provenance and licensing before compilation [208-210].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Verification of copyright provenance and appropriate licensing of open-domain voice data is recommended before inclusion in datasets [S27].
MAJOR DISCUSSION POINT
Early verification of open‑domain data provenance
K
Kritika K.R.
3 arguments138 words per minute432 words186 seconds
Argument 1
Scalable, sustainable infrastructure—including edge deployment and optimized models—is critical for wide‑scale industry adoption
EXPLANATION
Kritika stresses that for voice AI to be adopted across sectors, it must run on scalable, sustainable infrastructure, including edge devices and models optimized for performance. This ensures reliability and broad reach in diverse industrial settings.
EVIDENCE
She mentions the need for “scalable and sustainable infrastructure that comes with more optimized models” and highlights “edge deployments” to enable real-world adoption across multiple industries [196-199].
MAJOR DISCUSSION POINT
Scalable, sustainable infrastructure—including edge deployment and optimized models—is critical for wide‑scale industry adoption
AGREED WITH
Amitabh Nag
Argument 2
Domain‑specific fine‑tuning of open‑source models, combined with on‑prem deployment, ensures compliance and data security
EXPLANATION
Kritika explains that tailoring open‑source voice models to specific industry domains and deploying them on‑premises addresses both compliance requirements and data security concerns. This approach allows models to be customized while keeping sensitive data within organizational firewalls.
EVIDENCE
She describes “domain-specific fine-tuning of open-source models” and notes that “on-prem deployment… ensures compliance and data security” for core industry applications [250-254].
MAJOR DISCUSSION POINT
Domain‑specific fine‑tuning of open‑source models, combined with on‑prem deployment, ensures compliance and data security
Argument 3
On‑premise deployment of fine‑tuned open‑source models addresses compliance and data‑security concerns for sensitive industry applications.
EXPLANATION
Kritika argues that keeping models within an organization’s firewall while customizing them for specific domains satisfies regulatory requirements and protects proprietary data.
EVIDENCE
She explains that “on-prem deployment… ensures compliance and data security” for core industry applications, linking domain-specific fine-tuning with on-premise hosting [250-254].
MAJOR DISCUSSION POINT
Compliance‑focused on‑premise model deployment
M
Moderator
3 arguments68 words per minute267 words232 seconds
Argument 1
Public acknowledgment of contributors reinforces collaborative spirit and sustains partnership momentum.
EXPLANATION
By thanking Mr. Nag for his insightful words and year‑long support, the moderator highlights the importance of recognizing contributions to keep partners engaged and motivated.
EVIDENCE
The moderator expresses gratitude to Mr. Nag, stating that his support over the last year has been invaluable, and thanks him again before moving on to the next speaker [20-21].
MAJOR DISCUSSION POINT
Recognition of contributions sustains collaboration
Argument 2
A formal launch with representatives from multiple organisations showcases the multi‑stakeholder nature of the initiative.
EXPLANATION
Inviting consortium members from GIZ, Tri‑Legal, Art Park, NASSCOM and the Digital Futures Lab to the stage emphasizes that the report and toolkit are the product of a broad partnership.
EVIDENCE
The moderator calls on all the representatives of the consortium to come on stage for the formal launch of the report and toolkit, listing each partner organization [57-58].
MAJOR DISCUSSION POINT
Multi‑stakeholder involvement in the launch
Argument 3
Opening a panel discussion with diverse experts signals the need for multi‑perspective dialogue to address voice AI challenges.
EXPLANATION
By transitioning from the formal launch to a short panel that includes academia, industry and legal experts, the moderator underlines the value of cross‑sector conversation for shaping the ecosystem.
EVIDENCE
The moderator thanks Harleen and announces a short panel discussion on voice technologies, naming the various experts who will participate [112].
MAJOR DISCUSSION POINT
Multi‑perspective dialogue for voice AI
Agreements
Agreement Points
Evaluation of voice AI should move beyond single metrics to multi‑layered, audience‑centric approaches, supported by robust documentation for legal and procurement contexts.
Speakers: Prasanta Ghosh, Nihar Desai, Thomas J. Vallianeth
Human transcriptions vary widely; reliance on word‑error‑rate alone is insufficient, requiring multi‑layered, both objective and subjective, evaluation methods Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking Subjective evaluation outcomes must be supported by robust documentation and trust mechanisms to satisfy legal and procurement standards
All three speakers stress that traditional word-error-rate or ranking-centric metrics are inadequate; evaluations must consider user acceptability, incorporate objective and subjective layers, and be documented to meet legal requirements [228-240][256-260][292-298].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for multi-layered, audience-centric evaluation in AI governance discussions, such as the need for multi-layered evaluation approaches and audience-centric metrics highlighted at the Bhashini Stack forum [S46] and the broader recommendation to expand policy evaluation beyond model-centric metrics [S47]. It also reflects the tension between contextual evaluation and standardized benchmarking noted by government agencies [S50].
Foundational speech data should be treated as a public good and continuously expanded through real‑world feedback loops.
Speakers: Harleen Kaur, Amitabh Nag
Foundational speech data should be funded, convened, and governed as public goods to support non‑commercial languages Data sets must be continuously created and refined through feedback loops from real‑world use
Harleen argues for public-good funding and governance of foundational datasets, while Amitabh describes ongoing data collection and improvement via user feedback, together emphasizing a sustainable, evolving data ecosystem [79-81][121-144].
POLICY CONTEXT (KNOWLEDGE BASE)
Treating data as a global public good is advocated in policy dialogues, e.g., the proposal to treat climate and forest data as public goods and integrate free data flows [S48], and calls for mechanisms to treat data as a global public good [S49].
Embedding Responsible AI practices throughout the lifecycle and establishing trust‑engineered safeguards are essential for ethical and legally sound voice AI systems.
Speakers: Harleen Kaur, Thomas J. Vallianeth, Nihar Desai, Ariane Ahildur
Embedding Responsible AI (RAI) practices throughout the development lifecycle ensures ethical outcomes Trust‑engineered safeguards (e.g., content‑moderation rails) reduce downstream legal ambiguity Legal and procurement processes must accommodate subjective evaluation outcomes through robust documentation and trust mechanisms Responsible, inclusive voice AI is a societal issue, not merely a technical problem
Harleen highlights lifecycle integration of RAI, Thomas and Nihar stress documentation and safeguards to meet legal standards, and Ariane frames inclusive voice AI as a societal challenge, collectively underscoring the need for ethical, trustworthy design and deployment [105-108][291-298][286-298][39-40].
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible AI lifecycle practices and trust-engineered safeguards echo the UN High Commissioner’s emphasis on human-rights-by-design AI with safety, transparency, participation and accountability frameworks [S53], as well as the governance-compliance distinction highlighted in AI governance panels [S39] and the core principles of ethical AI roadmaps [S51].
Cooperation and multi‑stakeholder engagement are critical for advancing inclusive voice technology ecosystems.
Speakers: Ariane Ahildur, Moderator, Harleen Kaur
Cooperative models, exemplified by the Indo‑German partnership, can outperform competition‑driven narratives A formal launch with representatives from multiple organisations showcases the multi‑stakeholder nature of the initiative Government should act as a steward of the public good, setting standards and convening ecosystems rather than merely regulating
Ariane promotes international cooperation, the Moderator highlights the multi-partner launch, and Harleen calls for government stewardship, together emphasizing collaborative governance and partnership across sectors [41-42][57-58][70-73].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder engagement is emphasized in discussions on inclusive data collection for persons with disabilities, which call for comprehensive stakeholder involvement across domains [S42], and in broader AI governance consensus across agencies and industry partners [S38].
Scalable, sustainable infrastructure and continuous model updates are necessary to meet diverse user needs and ensure long‑term relevance.
Speakers: Kritika K.R., Amitabh Nag
Scalable, sustainable infrastructure—including edge deployment and optimized models—is critical for wide‑scale industry adoption AI systems lack warranties and have short shelf lives, requiring continuous upgrades to keep pace with diversity
Kritika stresses the need for robust, edge-ready infrastructure for industry uptake, while Amitabh points out the rapid obsolescence of AI models, together calling for ongoing technical refreshes and scalable deployment strategies [196-199][4-6][8-9].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for scalable, sustainable infrastructure aligns with the linkage of AI to Sustainable Development Goals on industry, innovation and infrastructure (SDG 9) and the importance of robust legal frameworks for long-term relevance [S40].
Similar Viewpoints
Both emphasize that speech datasets should be treated as public resources that are continuously expanded and improved through active feedback mechanisms [79-81][121-144].
Speakers: Harleen Kaur, Amitabh Nag
Foundational speech data should be funded, convened, and governed as public goods to support non‑commercial languages Data sets must be continuously created and refined through feedback loops from real‑world use
Both propose smarter, iterative data collection strategies that go beyond brute‑force methods, leveraging linguistic insights and user feedback to build inclusive datasets [155-167][121-144].
Speakers: Prasanta Ghosh, Amitabh Nag
Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering Data sets must be continuously created and refined through feedback loops from real‑world use
Both stress that thorough documentation and pre‑emptive safeguards are essential to bridge technical evaluation with legal and procurement requirements [291-298][286-298].
Speakers: Thomas J. Vallianeth, Nihar Desai
Trust‑engineered safeguards (e.g., content‑moderation rails) reduce downstream legal ambiguity Legal and procurement processes must accommodate subjective evaluation outcomes through robust documentation and trust mechanisms
Both highlight the importance of verifying data provenance/licensing and using on‑prem, domain‑tuned deployments to meet compliance and security needs, linking legal diligence with technical implementation [250-254][208-210].
Speakers: Kritika K.R., Thomas J. Vallianeth
Domain‑specific fine‑tuning of open‑source models, combined with on‑prem deployment, ensures compliance and data security Open‑domain data can be leveraged responsibly if provenance and licensing are verified early in the pipeline
Unexpected Consensus
Alignment between legal safeguards and industry deployment strategies on on‑premise, compliant AI models.
Speakers: Thomas J. Vallianeth, Kritika K.R.
Trust‑engineered safeguards (e.g., content‑moderation rails) reduce downstream legal ambiguity Domain‑specific fine‑tuning of open‑source models, combined with on‑prem deployment, ensures compliance and data security
While Thomas approaches the topic from a legal-policy perspective and Kritika from an industry-technical angle, both converge on the need for early verification of data provenance and on-prem, secure deployment to satisfy compliance, a convergence not explicitly anticipated in the agenda [291-298][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Aligning legal safeguards with industry deployment mirrors the consensus on AI governance principles across government and industry [S38] and the distinction between governance and compliance that underpins alignment strategies [S39].
Both academic researchers and practitioners advocate for smarter, feedback‑driven data collection rather than exhaustive brute‑force methods.
Speakers: Prasanta Ghosh, Amitabh Nag
Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering Data sets must be continuously created and refined through feedback loops from real‑world use
Prasanta proposes intrinsic linguistic modeling to reduce collection costs, while Amitabh emphasizes continuous, feedback-based data generation; together they reveal a shared belief in iterative, intelligent data strategies beyond simple mass collection [155-167][121-144].
POLICY CONTEXT (KNOWLEDGE BASE)
This perspective matches the Bhashini Stack’s recommendation for linguistically-modelled sampling over brute-force gathering [S43] and Prof. Aberer’s emphasis on participative, opportunistic data sourcing as an alternative to exhaustive collection [S45].
Overall Assessment

The discussion shows strong convergence among government, academia, industry, and legal experts on four core themes: (1) the need for multi‑layered, audience‑centric evaluation supported by documentation; (2) treating foundational speech data as a public good with continuous, feedback‑driven enrichment; (3) embedding responsible AI practices and trust safeguards throughout the lifecycle; (4) fostering cooperative, multi‑stakeholder governance and scalable infrastructure for sustainable deployment.

High consensus across diverse stakeholder groups, indicating a solid foundation for coordinated policy formulation, joint funding mechanisms, and shared technical standards that can accelerate inclusive voice AI development in India.

Differences
Different Viewpoints
Approach to inclusive data collection – brute‑force field gathering vs linguistically‑modelled sampling
Speakers: Amitabh Nag, Prasanta Ghosh
Data sets must be continuously created and refined through feedback loops from real‑world use Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering
Amitabh stresses continuing data creation through large-scale field collection, product-driven parallel corpora and open-domain sources as the main ways to keep datasets fresh [124-133]. Prasanta argues that instead of brute-force collection for every dialect, modelling should start from intrinsic linguistic bases (e.g., language families) and then add targeted data, reducing cost and timeline [155-167].
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors the Bhashini Stack’s proposal for linguistically-modelled sampling to balance inclusivity and practicality [S43] and Prof. Aberer’s discussion of participative, non-conventional data sources as a quality-focused alternative to brute-force methods [S45].
Preferred evaluation methodology – audience‑centric acceptability vs multi‑layered objective‑subjective framework vs national leaderboard
Speakers: Nihar Desai, Prasanta Ghosh, Thomas J. Vallianeth
Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking Human transcriptions vary widely; reliance on word‑error‑rate alone is insufficient, requiring multi‑layered, both objective and subjective, evaluation methods Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential
Nihar proposes that success be judged by whether the intended audience understands and accepts the output, de-emphasising hierarchical rankings [284-286]. Prasanta points out that even human annotators disagree, making word-error-rate inadequate, and calls for layered evaluation including multiple outputs and downstream task performance [228-240]; he also suggests a unified national leaderboard to track progress [315-321]. Thomas focuses on the legal dimension, arguing that because evaluation can be subjective, robust documentation, licensing checks and privacy safeguards must be built in from the start to satisfy legal and procurement requirements [208-218][291-298].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between audience-centric metrics and standardized leaderboards is reflected in the call for multi-layered evaluation approaches and audience-centric metrics [S46], as well as the documented need for standardized benchmarking by agencies [S50] and the recommendation to broaden policy evaluation beyond single metrics [S47].
Openness of data sources – open‑domain as easy way to generate data vs need for strict licensing and privacy safeguards
Speakers: Amitabh Nag, Thomas J. Vallianeth
Data sets must be continuously created and refined through feedback loops from real‑world use Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential
Amitabh treats open-domain sources such as YouTube as a convenient way to harvest data for continuous improvement, describing it as an “easy way to work upon it” [136-138]. Thomas counters that even publicly available voice data may be subject to copyright and privacy constraints, requiring provenance checks, licences and privacy-enhancing technologies before use [208-218].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on open-domain data versus licensing and privacy echo stakeholder-engagement recommendations for using existing open data while ensuring privacy safeguards in disability data collection [S42] and broader debates on treating data as a public good with appropriate safeguards [S48].
Unexpected Differences
Speed of data acquisition vs legal caution on open‑domain data
Speakers: Amitabh Nag, Thomas J. Vallianeth
Data sets must be continuously created and refined through feedback loops from real‑world use Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential
Amitabh treats open-domain sources (e.g., YouTube) as a quick, low-friction way to generate new corpora for continuous improvement [136-138]. Thomas, however, warns that even such publicly available data may be encumbered by copyright and privacy obligations, demanding rigorous provenance checks before use [208-218]. The tension between rapid data harvesting and legal diligence was not anticipated given the overall collaborative tone of the event.
POLICY CONTEXT (KNOWLEDGE BASE)
Balancing rapid data acquisition with legal caution aligns with the emphasis on robust legal frameworks for AI implementation and the need for careful compliance highlighted in AI governance panels [S40] and inter-agency consensus on governance principles [S38].
Use of a national leaderboard for evaluation versus audience‑centric, non‑ranking approach
Speakers: Prasanta Ghosh, Nihar Desai
A unified national evaluation framework or leaderboard for Indian languages and dialects would foster collaborative competition and track progress Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking
Prasanta proposes a single, country-wide leaderboard to benchmark models across languages, implying a competitive ranking system [315-321]. Nihar argues that ranking is less useful than measuring whether the audience finds the system acceptable, suggesting a shift away from hierarchical scores [284-286]. The clash between a formal ranking mechanism and a purely acceptability-based view was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
The preference for non-ranking, audience-centric evaluation versus national leaderboards is discussed in the Bhashini Stack’s call for multi-layered, audience-centric evaluation [S46] and the documented tension between contextual approaches and institutional benchmarking requirements [S50].
Overall Assessment

The discussion revealed consensus on the importance of inclusive, continuously improving voice AI, but significant divergence on how to achieve it. The main fault lines concern data‑collection strategy (brute‑force vs linguistically‑modelled), evaluation methodology (audience acceptability vs multi‑layered metrics and national leaderboard), and the balance between rapid open‑domain data harvesting and legal safeguards. These disagreements reflect differing priorities of practitioners (industry), researchers (academia) and legal experts, and they could affect the speed and coherence of policy implementation and tool‑kit adoption.

Moderate to high – while all participants share the same overarching goal of inclusive voice AI, the contrasting technical, methodological and legal approaches indicate that reaching a unified implementation framework will require substantial negotiation and cross‑sector coordination.

Partial Agreements
All three agree that voice AI systems need ongoing improvement and that evaluation must reflect real‑world usage, but they diverge on the means: Amitabh favours continuous data harvesting, Prasanta advocates linguistically‑guided sampling, while Nihar stresses a pragmatic, audience‑focused evaluation rather than technical ranking [124-133][155-167][284-286].
Speakers: Amitabh Nag, Prasanta Ghosh, Nihar Desai
Data sets must be continuously created and refined through feedback loops from real‑world use Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering Evaluation should be audience‑centric, focusing on perceived acceptability rather than absolute ranking
Harleen, Amitabh and Prasanta all endorse the goal of inclusive, sustainable voice AI, but differ on implementation: Harleen proposes a policy‑driven four‑pillar approach with public‑good data; Amitabh focuses on continuous data pipelines and feedback loops; Prasanta suggests a more technical, linguistic‑model‑centric data‑collection strategy to reduce cost [73-78][15-16][155-167].
Speakers: Harleen Kaur, Amitabh Nag, Prasanta Ghosh
A four‑pillar policy framework treating foundational data as public goods, institutionalising open‑source infrastructure, building open representative models, and strengthening responsible deployment Inclusion and diversity must be embedded in system design from the outset rather than treated as an afterthought Efficient inclusive data collection can be achieved by modelling intrinsic linguistic components rather than brute‑force gathering
Both agree that documentation and trust‑engineered safeguards are crucial for legal compliance, but Thomas frames it primarily as a pre‑emptive legal safeguard, whereas Nihar emphasises its role in supporting subjective evaluation outcomes in procurement and court contexts [208-218][291-298].
Speakers: Thomas J. Vallianeth, Nihar Desai
Voice data sits at the intersection of privacy and copyright law; early documentation, clear licensing, and privacy‑enhancing techniques are essential Legal and procurement processes must accommodate subjective evaluation outcomes through robust documentation and trust mechanisms
Takeaways
Key takeaways
Voice AI is a critical gateway for digital inclusion, especially for low‑literacy and underserved populations. A four‑pillar policy framework was proposed: treat foundational speech data as public goods, institutionalise sustainable open‑source infrastructure, build open and representative language models, and strengthen responsible deployment and public‑value sharing. Speech data should be regarded as a living public good that is continuously expanded and refined through user feedback and real‑world deployments. Inclusive data collection can be made more efficient by modelling intrinsic linguistic components rather than brute‑force gathering of every dialect. Legal and ethical safeguards are essential: early documentation, clear licensing, privacy‑enhancing techniques, and trust‑engineered safeguards reduce downstream legal risk. Industry adoption requires scalable, edge‑friendly infrastructure, domain‑specific fine‑tuning of open models, and on‑prem deployment to meet security and compliance needs. Current evaluation practices (e.g., word‑error‑rate) are insufficient for India’s linguistic diversity; a multi‑layered, audience‑centric evaluation approach is needed. A unified national evaluation framework or leaderboard for Indian languages and dialects would foster collaborative competition and track progress.
Resolutions and action items
Publish and disseminate the Policy Report and Developer Toolkit to policymakers, NGOs, and the tech community. Governments (central and sub‑national) to act as stewards of the public‑good ecosystem: fund foundational data, set standards, and convene stakeholders. Implement continuous data‑creation pipelines: primary corpus collection plus improvement‑corpus feedback loops from deployed applications. Adopt the suggested four‑pillar framework in national AI strategies and incorporate it into procurement guidelines. Develop and maintain a centralized, India‑wide evaluation platform/leaderboard (potentially under Bhashani) for speech models across languages and dialects. Organise follow‑up workshops and working groups to refine evaluation metrics, licensing models, and trust‑engineered safeguards. Encourage open‑source model fine‑tuning for specific industry domains with on‑prem deployment options. All consortium members to provide comments on the report and toolkit and to share best‑practice case studies.
Unresolved issues
Concrete mechanisms for scaling continuous, user‑driven data collection across the vast linguistic landscape remain undefined. How to balance inclusivity (coverage of low‑resource languages) with cost and timeline constraints in practice. Standardised, legally robust evaluation criteria that satisfy both technical and procurement requirements are still lacking. Clarification is needed on evidentiary standards for AI‑generated outputs in courts and how subjective evaluations will be adjudicated. Specific licensing models and documentation templates for open‑source speech datasets that satisfy both copyright and privacy law have not been finalised. The exact governance structure for the proposed national leaderboard and how competition will be managed collaboratively is not settled.
Suggested compromises
Use a hybrid data‑collection strategy: combine brute‑force primary corpus gathering with improvement‑corpus feedback from deployed systems. Adopt a mixed evaluation methodology that blends objective metrics (e.g., modified error rates) with subjective, audience‑centric assessments. Implement a collaborative‑competition model: a shared national leaderboard that encourages progress while maintaining open‑source principles. Governments to act both as regulators and as ecosystem conveners/standard‑setters, providing guidance without stifling innovation. Apply privacy‑enhancing technologies and clear licensing at the outset to satisfy legal requirements while still enabling open‑source sharing. Prioritise domain‑specific fine‑tuning of open models on‑premises to meet security/compliance needs without restricting broader community access.
Thought Provoking Comments
AI systems in voice technology have no static shelf‑life; they must be continuously upgraded because of the immense diversity of people, languages and cultures. Inclusion and diversity have to be built into the design rather than treated as outliers.
Highlights the fundamental difference between traditional digital standards and AI‑driven voice tech, emphasizing that static standards cannot capture linguistic and cultural variability.
Set the thematic foundation for the whole panel, prompting subsequent speakers to address how policies, data collection, and model design must accommodate ongoing change rather than rely on fixed standards.
Speaker: Amitabh Nag
Voice AI is a gateway to public services for millions with limited literacy; when it works in local languages it enables inclusion, but when it does not it can reinforce exclusion. This is a narrative of cooperation, not competition, between India and Germany.
Frames voice technology as a social inclusion tool and positions the Indo‑German partnership as a model of collaborative, responsible AI, contrasting the dominant competition narrative.
Re‑oriented the discussion from technical challenges to broader societal impact, encouraging participants to consider policy and ethical dimensions alongside technical solutions.
Speaker: Ariane Ahildur‑Brandt
Our policy framework rests on four pillars: treating foundational data sets as public goods, institutionalising sustainable open‑source infrastructure, building open and representative models, and strengthening responsible deployment.
Provides a clear, structured roadmap that links high‑level policy goals to concrete actions, bridging the gap between abstract principles and practical implementation.
Guided the conversation toward concrete policy levers and gave other panelists reference points for discussing data, governance, and deployment challenges.
Speaker: Harleen Kaur
Beyond brute‑force data collection, we can generate an ‘improvement corpus’ from the products that use our models—capturing user corrections and feedback to continuously refine the system.
Introduces the concept of a feedback‑driven data pipeline, turning deployed applications into sources of high‑quality training data, thus addressing sustainability of data resources.
Shifted the dialogue from static data gathering to a dynamic, user‑informed loop, prompting further discussion on how enterprises can embed such mechanisms.
Speaker: Amitabh Nag
Instead of collecting massive amounts of data for every dialect, we should model the intrinsic linguistic families (e.g., Indo‑Aryan vs Dravidian) and use smart sampling to capture commonalities and unique features, reducing cost while preserving coverage.
Proposes a theoretically grounded, cost‑effective strategy for handling India’s linguistic diversity, challenging the prevailing brute‑force approach.
Prompted a re‑evaluation of data collection strategies, leading to concrete examples (Telugu dialects) and influencing later suggestions about evaluation and benchmarking.
Speaker: Prasanta Ghosh
Legal compliance must be baked in from the start: verify copyright provenance, apply privacy‑enhancing technologies, and maintain robust documentation so downstream users have clear, trustworthy data.
Brings the often‑overlooked legal dimension into the technical discussion, emphasizing proactive risk mitigation rather than retroactive fixes.
Introduced a new thread on legal safeguards, causing other participants to acknowledge the need for early documentation and influencing later remarks on trust engineering.
Speaker: Thomas J. Vallianeth
Word‑error‑rate is insufficient for evaluating Indian ASR because human transcribers disagree; we need multi‑layered, possibly subjective, evaluation that can handle multiple plausible outputs and downstream task tolerance.
Critically examines a core metric, exposing its limitations in a multilingual, dialect‑rich context and suggesting richer evaluation frameworks.
Steered the conversation toward rethinking evaluation standards, leading to further debate on audience perception, legal implications, and the proposal for a national leaderboard.
Speaker: Prasanta Ghosh
Ultimately, evaluation should be judged by the audience’s ability to understand the output; there is no absolute ranking—acceptability varies by context, such as courts versus casual conversation.
Shifts the focus from technical precision to user‑centric acceptability, highlighting the subjective nature of “correctness” in real‑world applications.
Prompted participants to consider context‑specific standards and reinforced the earlier point about audience‑driven design, influencing the concluding call for a unified, yet flexible, evaluation framework.
Speaker: Amitabh Nag
We need a national, collaborative evaluation framework—perhaps a single leaderboard under Bhashani—that annually benchmarks models across languages and dialects, fostering healthy competition while aligning with shared standards.
Synthesises earlier insights into a concrete institutional proposal, addressing the need for systematic progress tracking.
Provided a tangible next step for the ecosystem, tying together technical, policy, and legal strands into an actionable roadmap.
Speaker: Nihar Desai (summarising Prasanta’s suggestion)
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a generic launch announcement to a deep, multidisciplinary exploration of voice AI in India. Amitabh Nag’s opening remarks framed AI as a living, diverse system, prompting the panel to focus on continuous improvement. Ariane Ahildur‑Brandt broadened the lens to societal inclusion and international cooperation, while Harleen Kaur supplied a concrete policy scaffold. Subsequent contributions from Nag (feedback‑driven data), Ghosh (linguistic‑family‑based data strategy), and Vallianeth (legal safeguards) each introduced new dimensions—data lifecycle, cost‑effective collection, and compliance—that redirected the conversation toward actionable mechanisms. The critique of traditional evaluation metrics by Ghosh and the audience‑centric perspective offered by Nag reframed success criteria, culminating in a consensus for a national, collaborative evaluation framework. Collectively, these comments transformed the dialogue from descriptive to prescriptive, aligning technical, policy, legal, and user‑experience considerations into a coherent roadmap for an inclusive, sustainable voice technology ecosystem.

Follow-up Questions
How can we continue creation and facilitation of foundational speech datasets as digital public goods while ensuring trust and safety, and is it possible to establish a data‑flywheel for continuous improvement?
Sustaining and scaling inclusive voice data is essential for long‑term ecosystem health and for keeping models up‑to‑date with diverse user needs.
Speaker: Nihar Desai (to Amitabh Nag)
What are the gaps at the research and academia level in designing more inclusive foundational data sets, and how can these gaps be addressed?
Identifying academic shortcomings is crucial to produce datasets that enable equitable AI applications across India’s linguistic diversity.
Speaker: Nihar Desai (to Prasanta Ghosh)
Can you provide a concrete example of how you balanced inclusivity, model‑building constraints, and other factors in a specific dataset (e.g., ResPin)?
Practical case studies illustrate trade‑offs and guide future projects in efficiently allocating resources while maintaining inclusivity.
Speaker: Nihar Desai (to Prasanta Ghosh)
How should we balance innovation with legal caution (copyright, privacy, data governance) when developing speech models and datasets?
Ensuring compliance while fostering rapid development is key to avoiding legal setbacks and maintaining trust among stakeholders.
Speaker: Nihar Desai (to Thomas Vallianeth)
What day‑to‑day challenges do we face in measuring and evaluating ASR systems in the Indian context, and how might these challenges be resolved or mitigated in the future?
Robust evaluation metrics are needed to track progress, detect bias, and improve model performance across diverse languages and dialects.
Speaker: Nihar Desai (to Prasanta Ghosh)
In a setting where evaluation becomes partly subjective, how does the law view this, how should procurement decisions be made, and how can disputes between differing AI outputs be resolved?
Legal clarity on evidentiary standards and procurement criteria is necessary to manage risk and ensure fair competition in AI deployments.
Speaker: Nihar Desai (to Thomas Vallianeth)
Develop a unified national evaluation framework and centralized leaderboard for Indian language speech models to benchmark progress and foster healthy competition.
A standardized, transparent benchmarking system would drive continuous improvement and enable cross‑project comparability.
Speaker: Prasanta Ghosh, Thomas Vallianeth
Create multi‑layered evaluation metrics that go beyond word error rate, incorporating subjective judgments and downstream application impact.
Current objective metrics miss nuances of Indian linguistic variation; richer metrics would better reflect real‑world performance.
Speaker: Prasanta Ghosh, Kritika K.R.
Organize more workshops and detailed use‑case studies to crystallize frameworks for acceptability and evaluation of voice technologies.
Collaborative learning events can surface best practices, identify gaps, and accelerate consensus on standards.
Speaker: Thomas Vallianeth
Establish safeguards and licensing frameworks for open‑source datasets that are tailored to specific end‑uses (e.g., hate‑speech detection vs. speech‑to‑speech translation).
Different applications pose distinct ethical and legal risks; customized licensing ensures responsible deployment.
Speaker: Thomas Vallianeth
Institutionalize sustainable open‑source infrastructure and governance models for the voice ecosystem to ensure long‑term viability.
Stable infrastructure and clear governance are prerequisites for maintaining and scaling open voice resources.
Speaker: Harleen Kaur
Treat foundational speech datasets as public goods, with government funding and convening, especially for languages that are not commercially viable.
Public‑good status encourages investment in under‑served languages, promoting digital inclusion.
Speaker: Harleen Kaur
Develop standardized documentation practices (e.g., data cards, model cards) to enhance transparency and responsible deployment of voice AI.
Clear documentation supports accountability, reproducibility, and trust among developers and regulators.
Speaker: Harleen Kaur
Apply privacy‑enhancing technologies during data collection to avoid capturing personal data and comply with privacy legislation.
Proactive privacy measures reduce legal risk and protect user rights from the outset.
Speaker: Thomas Vallianeth
Create a feedback loop (‘flywheel’) where user corrections and real‑world usage continuously feed back into improving datasets and models.
A systematic feedback mechanism ensures models evolve with changing language use and user needs, maintaining relevance over time.
Speaker: Amitabh Nag

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Driving Enterprise Impact Through Scalable AI Adoption

Driving Enterprise Impact Through Scalable AI Adoption

Session at a glanceSummary, keypoints, and speakers overview

Summary

The town-hall convened by Cambridge’s vice-chancellor explored how the abundance of AI-generated information creates new dilemmas around knowledge [1-4]. The panel brought together a university leader, the CEO of Udemy, and the co-founder of Cohere, representing nonprofit academia, for-profit online learning, and enterprise AI respectively [5-9].


When asked what resource is becoming scarcest in an era of instant answers, Hugo highlighted a “poverty of attention” and a loss of trust in AI-provided answers that lack provenance [40-46]. Aidan warned that large language models can give users a false sense of deep mastery, arguing that testing without AI assistance is essential to gauge true understanding [48-53]. Debbie reported that the live poll showed critical thinking and sustained attention tied for the top concerns among the audience [65-70].


Hugo described Udemy’s evolution from a massive catalog of 250 000 courses to an AI-driven platform that can quickly assess learners, personalize pathways, and provide role-play simulations for skill acquisition [73-78][92-107]. Cohere, according to Aidan, builds secure enterprise LLMs that stay within a client’s perimeter and helps organisations shift workers from performing tasks to orchestrating AI agents [110-118]. Both companies agreed that AI must be used as a tool rather than a replacement, emphasizing the need to teach students how to ask the right questions and to maintain rigorous testing to prevent cheating [171-183][224-228].


Hugo added that AI-enabled tutors can adapt content to auditory or visual learners, augmenting but not replacing the storytelling role of human teachers [156-165][184-188]. Audience members raised practical concerns, such as using AI role-play to motivate learners and the challenge of detecting AI-generated work, which Aidan said requires better detection cues and a focus on “no-calculator” assessments [199-206][321-334].


In response to worries about loss of agency, Hugo argued that future education will need trusted, specialized models, explainability research, and a renewed focus on front-end questioning and back-end validation [340-353][273-280]. Aidan noted that newer reasoning models with internal monologues and retrieval-augmented generation can provide audit trails that support critical evaluation of AI answers [372-383]. The discussion concluded that while AI will excel at middle-layer processing, human expertise in framing problems, critical thinking, and ensuring trust remains indispensable, and the traditional university degree may need to be re-examined in light of these shifts [407-419][273-280].


Keypoints


Major discussion points


Human attention and trust are the new scarce resources in an AI-saturated knowledge landscape.


Hugo notes that “when you have a wealth of information, you have a poverty of attention” and highlights the emerging “trust piece” as equally important [40-44]. He later adds that most LLMs “don’t give you… where that answer came from” [46-47], underscoring the trust dilemma.


AI can personalize and adapt learning, but it must be coupled with human oversight and robust assessment.


Hugo describes how AI can “do a quick assessment… break apart the class… feedback loop” to tailor instruction to individual learners [92-107]. Aidan echoes that AI’s ability to “personalize the experience… engage them more effectively” can help sustain attention [144-147]. Both agree that human teachers remain essential for framing questions and providing the “storytelling” element [156-165].


Critical thinking and rigorous testing become essential safeguards against false mastery.


The audience’s poll showed “critical thinking… sustained attention” as top concerns [65-70], and Hugo later stresses the need to “expand… evaluate the level of critical thinking” [273-277]. Aidan stresses that “testing is essential… you need to take away the tool and see what the human alone understands” [50-53][171-183], positioning testing as the “gold standard” for authentic learning.


The relationship between traditional universities and for-profit online platforms is shifting, raising questions about the future of the degree bundle.


Hugo outlines Udemy’s scale (250 k courses, 80 M learners) and its pivot “to become an AI platform to reskill the workforce” [73-81][92-108]. Debbie counters that universities focus on “critical thinking, deep mastery… fundamentals” that may not align directly with business needs [267-270]. Hugo later frames the university degree as a “convenient bundle” that may need to be “re-visited” in light of AI-driven economics [407-416].


Explainability and transparency of LLMs are critical to maintaining agency and trust.


Hugo warns that reliance on “black-box” answers erodes the ability to “learn from their deduction process” [340-353]. Aidan adds that newer models are being built with internal reasoning and Retrieval-Augmented Generation (RAG) to provide “chains of thought” and cite sources, improving auditability [372-382].


Overall purpose / goal of the discussion


The town-hall was convened to surface and interrogate the “dilemmas around knowledge” that arise as AI makes information instantly accessible. Panelists explored how AI reshapes education, what skills (attention, trust, critical thinking) become scarce, and how institutions-both traditional universities and for-profit platforms-might adapt their models, curricula, and assessment practices to preserve learning quality and societal trust.


Overall tone and its evolution


The conversation began with a formal, exploratory tone, introducing the problem space and inviting diverse perspectives [1-12]. As the dialogue progressed, optimism about AI’s potential (personalization, rapid reskilling) emerged [92-107][144-147]. Mid-session, the tone shifted to caution and critical reflection, emphasizing attention scarcity, trust deficits, and the need for rigorous testing [40-47][50-53][273-277]. Toward the end, the tone became balanced and forward-looking, acknowledging both the transformative promise of AI and the enduring value of human judgment, explainability, and the university’s broader mission [340-353][407-416]. Throughout, the panel maintained a collaborative, solution-oriented demeanor while oscillating between optimism and prudent concern.


Speakers

Debbie Prentice – Professor and Vice Chancellor of the University of Cambridge; represents the not-for-profit education sector [S5].


Hugo Sarazen – President, Chairperson and Chief Executive Officer of Udemy, a global online learning platform.


Aidan Gomez – Co-founder and Chief Executive Officer of Cohere, an enterprise AI company developing large language models [S7].


Audience – General audience members participating via questions and comments (no specific titles provided).


Additional speakers:


Anna Van Niels – Director of the Livium Trust (audience member).


Nathaniel – Founder/operator of an education company in Australia (audience member).


Pranjal Sharma – Author and analyst from India (audience member).


Kian – CEO of the AI company Workera (audience member).


(Unnamed audience members) – Various participants who asked questions or made comments without providing a name or title.


Full session reportComprehensive analysis and detailed insights

Opening remarks and panelist introductions


Professor Debbie Prentice, Vice-Chancellor of the University of Cambridge, opened the town-hall by framing the “dilemmas around knowledge” that have existed since the invention of schools and libraries but are now amplified by AI, which makes information instantly available to everyone [1-4]. She introduced the panel: Aidan Gómez, co-founder and CEO of Cohere, an enterprise AI firm building large language models (LLMs) for business use [5-9]; and Hugo Sarazen, President, Chairperson and CEO of Udemy, a for-profit online-learning platform that supplies courses to corporations and individuals worldwide [5-9].


Poll on the scarcest resource


The first interactive question asked participants to identify which resource is becoming scarcest in a world of instant answers and AI assistance, offering options such as sustained attention, independent judgment, deep mastery, motivation, and trust [22-30]. The live Slido poll showed “critical thinking” and “sustained attention” tied for the top responses [65-70]. The panel repeatedly emphasized that critical thinking and sustained attention are the scarcest cognitive resources in an AI-rich environment [65-70].


Individual viewpoints on scarcity


Hugo Sarazen argued that the real scarcity is human attention. Citing Herbert Simon, he noted that “when you have a wealth of information, you have a poverty of attention” [40-41] and warned that LLMs create a “trust piece” because they often provide answers without indicating provenance [44-47]. He stressed that learners are overwhelmed by data volume and that traditional teaching methods must evolve to address this attention deficit [42-44][41-44].


Aidan Gómez focused on the danger of a false sense of deep mastery. He explained that LLMs can generate concise, surface-level explanations that make users feel they understand a subject when they have not achieved genuine depth [48-49]. He advocated for testing that removes the AI tool to verify authentic knowledge [48-53] and likened AI to a calculator-useful for speed and accuracy but still requiring human grounding [143-148]. He also noted that AI-personalised content can help retain attention, but only if learners are equipped with strong critical-thinking skills [143-147].


Debbie Prentice offered a complementary view centred on self-knowledge. She argued that learners no longer have reliable cues to gauge mastery, undermining self-awareness and motivation [57-63]. She reiterated the poll’s outcome, emphasizing critical thinking as the audience’s primary concern [66-68].


Company overviews and AI pivots


Udemy hosts roughly 250 000 courses, serves 80 million learners, works with 17 000 enterprises, and is supported by 85 000 instructors across more than 46 languages [73-81]. Hugo announced a strategic shift from a pure marketplace to an AI-driven reskilling platform, using AI for rapid assessment, class segmentation, and continuous feedback loops [92-108][104-107]. He illustrated this with AI-enabled role-play simulations that let a new salesperson practise selling a product while receiving rubric-based scoring, thereby accelerating skill acquisition [199-206][202-214]. Hugo also referenced the U.S. “Alpha School” as a concrete example of AI accelerating learning [230-242].


Cohere builds secure, on-premise LLMs for enterprise customers, ensuring data never leaves the client’s perimeter [110-118]. Their focus is on helping organisations move workers from manual tasks to orchestrating AI agents, with particular attention to sectors where data security is paramount, such as finance, telecoms, healthcare, and government [120-121].


Discussion of attention, personalization, and trust


Aidan highlighted that AI-driven personalization can mitigate attention loss by adapting to auditory or visual preferences and delivering instant feedback [143-147]. All three speakers agreed that human educators remain indispensable: Hugo described teachers as “storytellers” whose motivation-driving presence can be augmented, not replaced, by AI [184-188]; Debbie stressed the university’s role in fostering critical thinking and self-knowledge beyond mere content delivery [425-426]; Aidan reiterated the calculator analogy, insisting that human judgment must always ground AI-generated answers [143-148].


Trust, explainability, and reasoning models


Hugo warned that black-box answers prevent learners from seeing the reasoning process, jeopardising agency, and called for specialised, purpose-built models that can cite trusted sources [340-353][346-352]. Aidan described newer reasoning models that generate internal “chains of thought” and employ Retrieval-Augmented Generation (RAG) to cite external sources, thereby deepening understanding while offering auditability [260-280][386-393]. He also noted that many AI-generated answer detectors are unreliable and explained how Cohere can embed subtle cues to identify AI-generated text [450-456].


Human-centered assessment debate


Aidan argued that the “gold-standard” assessment must be conducted without AI to verify authentic knowledge [48-53] and contrasted this with AI-driven rapid assessment and continuous feedback, which Hugo championed as a way to personalise learning pathways [104-107][273-277]. Both agreed on the need for rigorous evaluation but differed on whether AI should be excluded from the testing environment.


Labour-market alignment and ROI concerns


Hugo reported that many corporate learning programmes purchased during the pandemic lack clear ROI, with organisations often measuring only completion rates or hours [128-136]. He suggested AI can deliver bite-size, in-flow learning that is measurable and directly tied to business outcomes [140-141][420-421]. Aidan echoed this, noting that AI can accelerate the creation of new programmes to meet shifting market demand, though identifying required skills still depends on human policymakers [254-262][265-266].


Future of the university degree “bundle”


Hugo framed the traditional degree as a “convenient bundle” of education, accreditation, and rite-of-passage that may need to be reconsidered as AI reshapes delivery economics [408-416]; he suggested the possibility of “unbundling” rather than asserting it will happen. Debbie defended the broader societal value of universities, arguing they continue to teach critical thinking and deep mastery, which remain “worth its weight in gold” [425-426]. Aidan saw AI as a tool that can help universities quickly develop relevant curricula, narrowing the gap between academia and industry [254-262].


Audience Q&A highlights


Motivation: Anna Van Niels asked how AI could motivate learners without a human instructor; Hugo responded with AI-driven role-play and feedback mechanisms that simulate a “gym” of repeated practice [199-206][215-216].


AI in physical classrooms: Nathaniel queried the role of AI amid policy debates on bans; Hugo stressed teaching students to ask the right questions and develop critical-thinking skills before using AI tools [230-242][237-238].


Applied knowledge: Pranjal Sharma emphasized the need for “applied knowledge” bridging white-, gray- and blue-collar jobs; Aidan argued that AI-generated, instantly scalable programmes can deliver such knowledge, provided the right skill signals are identified [254-266].


Testing and detectors: Aidan discussed the limitations of AI-generated answer detectors and advocated for tool-free testing to ensure authentic mastery [450-456].


Degree “bundle”: Hugo suggested reconsidering the bundled degree model in light of AI-driven alternatives [408-416].


AI outpacing humans: Hugo warned that loss of agency would be dangerous and that specialised, trustworthy models are essential to preserve human influence [308-317].


Consensus and open challenges


The panel reached strong consensus that critical thinking, sustained attention, and trustworthy, explainable AI are the scarce resources that must be protected as AI democratizes knowledge. AI is viewed as a powerful augmentative tool capable of personalising learning, providing rapid assessment, and generating source-backed answers, but it cannot replace human educators who provide narrative, motivation, and ethical judgement. Unresolved issues include designing dual-track assessments (with and without AI), building specialised transparent models for high-stakes domains, and re-thinking credentialing to balance the historic university “bundle” with the efficiency of AI-driven platforms.


Session transcriptComplete transcript of the session
Debbie Prentice

Good afternoon, everyone, and thank you for joining this town hall discussion where we will be talking about a topic that university and education leaders are all buzzing about, which is namely dilemmas around knowledge. This has been a topic for us since schools were first invented, libraries were first invented, and it’s still with us today. It’s extremely relevant today in an age in which AI is changing, making knowledge available broadly to everybody all the time. But it doesn’t mean that there aren’t still dilemmas around knowledge, and we’re going to probe these today. I’m Professor Debbie Prentice, and I’m the Vice Chancellor of the University of Cambridge. I’m very pleased to introduce you to our panelists for this session.

So we have Aidan Gomez, who is the co -founder and chief executive officer of Cohere, an enterprise AI company developing advanced language models for use by business. And we also welcome Hugo Sarazen, who is president and chairperson. Chief Executive Officer of Udemy, which provides a wide range of business and leadership development courses, including AI courses, to businesses and organizations around the world in fields such as financial services, higher education, government, manufacturing, and technology. We have some fascinating questions to discuss this afternoon around knowledge, misinformation, AI, attention spans, and even the nature of expertise. And we’re going to bring the audience in early and often, so I hope that you’ll all participate with us. We, as panelists, come from very different perspectives.

Aidan and Hugo run very successful businesses selling a product. They are from the for -profit educational technology sector, and I’m from the not -for -profit sector. So there are different pressures, different opportunities, different challenges that we face in this space. Before we get started with… Before we get started with our panel discussion, I’d like to remind the online audience that… If you are sharing with us through your social channels, you should use the hashtag, hashtag WEF26. And whether you’re joining online today or here in person, and it’s great to see so many of you here. Thank you so much for coming. Please feel free to get involved in the session by reacting to the questions we discuss in our conversation and also by submitting questions to panelists via the Slido app.

Okay? Okay, so our first question is, in a world of instant answers and AI assistance, what is becoming the scarcest resource? Okay, the answers are from a list of options. Is it sustained human attention, independent judgment and critical thinking, deep understanding and mastery, motivation to learn in the first place, or trust in what we know and who to believe? And actually I said or. That could be and. You can choose as many of these as you. as you want. Okay? So you can see on the screen, actually, as people are responding via the Slido app, but I want to ask our panelists, what would you say? So you can see the answers on the screen.

What would you say, Hugo?

Hugo Sarazen

Well, I think it’s a complicated question, and I think there’s a lot of all of the above. If you take a historical perspective, knowledge was scarce. That was a source of power. Our countries fought for that. And we also had experts that built knowledge over time, but very few polymath. Very few. Those ones that were were very, very, very important. Now today, you have LLMs that can learn everything, and they can learn across different domains, and they can become the polymath. So every data center, every time we say there’s a new infrastructure that’s being added, we’re adding millions and millions, millions of polymath. And that becomes a democratization. of that knowledge. The problem is, and there’s an amazing quote from Herbert Simon, when you have a wealth of information, you have a poverty of attention.

And I think that’s what’s happening for a lot of learners, and that’s why traditional methods need to change. And we’re going to come up and talk, I’m sure, about how learning needs to evolve, what the process, what’s the role of traditional institution in changing, what’s the role corporation need to, and what individual needs to do. So I think attention is one big component. The second is a lot of, when you go to LLM and AI and you ask for a question, it will give you an answer. It will feel very comfortable with that answer. It doesn’t explain. Explainability in AI is a whole field, a whole domain, and most of these LLMs don’t give you that.

So if you have a society that begins to rely on product, that give you an answer but don’t tell you where that answer is, answer came from how do you learn and what do you have in terms of trust so i think the trust piece is also equally important so i’ll stop at that we can go well further but

Aidan Gomez

yeah i was looking at the uh the poll up there and i for whatever reason the first one that came to me was deep mastery which seems to be the most unpopular choice so i think um you know when you exist in a world where it’s so fast and easy to get answers to whatever question you might have or to get a very surface level answer to even a complex question like whatever like how does quantum mechanics work it’ll give you a four paragraph response um but that’s not deep understanding of the subject matter and so i think llms can’t chat bots they can fool you into thinking that you understand something when you don’t and i view that as a core risk as we integrate these LLMs into an education environment, is this false sense of mastery or understanding.

We can discuss the different solutions to that. I think that testing is essential to it. The idea that you need to take away the tool and see what the human alone understands and has retained. The ability for you to assess depth has to take away those tools. I think that is, from my perspective, what’s most at risk.

Debbie Prentice

That’s interesting. My answer is a variant on yours. I, of course, wanted to reject all five. But I think it’s because of where I come from, coming from the university sector. I wanted to say self -knowledge for the learner. It’s part of what you’re saying. You don’t know if you’re mastered and you don’t know if you’re interested in it. You don’t know if you get it. It comes to you. So much of what you learn, so much of what you learn, what you learn comes from what is difficult and what is compelling. So for those cues to no longer be actually useful cues for self -understanding means how will you even know, but that’s my answer anyway.

So we can see what the – whoops, it went away. I think critical thinking was the one that won out at the end. It looked like critical thinking was actually the audience preferred. We can keep coming back to this, but I want to use this as a jumping – oh, there we go. Okay, yeah, critical thinking and then sustained attention. They were neck and neck for most of the time, yeah, and then trust and then deep mastery, right. That’s interesting. So I want to talk a little bit about each of what you do. So we can start with you, Hugo. Tell us about Udemy.

Hugo Sarazen

So Udemy is a 15 -year -old company that, at the time, did a – pretty cool thing around introducing online learning. It was a great innovation to change accessibility and the cost of reaching out to millions and millions of people and created a creator economy around that. So we now have 250 ,000 courses, 80 million learners on a regular basis. We serve 17 ,000 large enterprise. We have 85 ,000 instructors that kind of come to this marketplace to offer their wear. They’re very deeply committed. They know stuff and they want to share it to the world. And we do it in about 40 % of our revenues are in the U .S. The rest is around the world. So we’re in tons of languages, 46 plus. And the funny story, I’ve only been in the world for less than a year.

When I came in my first on -haul and the people who may be listening online who were working on this, they were like, oh, I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. I’m going to do this. town hall, I came in and I said, we’re going to exit online learning. That is a wonderful innovation. It did a bunch of great things, but it doesn’t solve the problem of today. And with AI, we can do so many different things. So I want to make a hard pivot of the business toward becoming an AI platform to reskill the workforce of the future.

And we can talk about that. And I don’t want to take too much time, but there’s a lot of ways you can use AI to do some of the things you were suggesting to kind of help build the mastery, how to do assessment using AI, how to use AI role play to immerse people. And it also does the thing that I think is so, so important. Traditional online learning and actually traditional learning. You’re an instructor and you teach to the average, right? You create your curriculum and you think you’re going to hit like the most of the people. You can’t get for the super fast. You can’t get for the super slow. You’re on online learning.

And then different people have different starting points, and we don’t have an easy way to accommodate that. Now with AI, you can do a quick assessment. You can break apart the class. You can have feedback loop and reinforce that in a very, very powerful way. And I think that’s one of the things that’s going to emerge of using AI to kind of re -skill the workforce. It’s going to build on that previous generation of online learning to do something pretty remarkable and quite different moving forward.

Debbie Prentice

Thank you. Aidan?

Aidan Gomez

Yes, so Cohere builds large language models. So we’re one of the developers of this core piece of technology that powers things like ChatGPT and all these different applications. We’re focused purely on the enterprise side of house, and so we work with businesses to put those models to work inside the organization. We give them access to internal data and systems that the humans have access to. And then we teach or we work with our customer to teach. the workforce to shift their role from being the ones individually doing the work to managing a team of these models or agents to carry out that work. Our big differentiator is on the security side. So there’s no data exiting our customer’s perimeter.

Instead, we send all of our models and software to them, and they keep it self -contained. Yeah.

Debbie Prentice

So you have certain customers who will only subscribe to you, right?

Aidan Gomez

Yeah. Certainly critical industries, financial services, telco, healthcare, and then, of course, government applications as well. Anything that’s a national security concern, and arguably education is within that remit, that’s a place that we do extremely well.

Debbie Prentice

That’s interesting. So, Hugo, what can we learn from the arc of progress from MOOCs and… online education to now AI -driven?

Hugo Sarazen

I think a few things. The first one is, you know, if you look at the traditional learning processes and methods that we had, there was a void. And that’s why online learning took off and that’s why there’s a whole industry. And it addressed a bunch of problems around, you know, getting to skills, specific skills, and also getting to certification and then helping organization rescale. So that was a very, very, very powerful thing. What is now becoming a lot more a priority, and in the last six months, I spent an enormous amount of time, I spoke to 400 CHROs and head of learning and development in a large enterprise. So the pattern that I saw is they had an enormous proliferation of tools and things that were bought during the pandemic.

During the COVID era. very few could explain the ROI. How do you measure the ROI of learning? It’s a really good question. And everybody kind of defaulted to, did they take the class? Did they complete the class? Hours of learning. And as a business leader, it’s not particularly helpful. And it gets even worse. When they get certification in Google Cloud or AWS or Cyber something, to know that you’ve certified yourself two years ago, I’m a business leader. I want to know, are you current? Are you relevant today? So I think the arc now is moving in the enterprise to an ability to do in the flow of work learning, do it at bite size, do it in an adaptive way, and then we can come back to what adaptive means, and with an ROI, an ability to measure what skills people are deploying in real time.

So you’re now beginning to create a workforce management tool that is powered by an operating learning system.

Debbie Prentice

so Aidan you said that you said that you were not as worried about uh sustained human attention as you were some of the others how does Coherence solve the attention problem

Aidan Gomez

um well I mean I don’t know if Coherence solves the attention problem I think it it’s definitely a concern there’s lots of pressures on our attention span I think um social media short -form content is uh driving a lot of that um I’m certainly on the receiving end of that you know after 30 seconds because of TikTok my attention span ends and I need to talk about something else and also just the way that we do business now are in these short 30 minute meetings where you completely swap context and so I think those are difficult challenges not related to AI that are still applying pressure on human attention span um but it has a pretty good impact on the the pretty strong consequence on how people learn and how students can learn when they’re constantly being distracted when they to sit with material over time.

I think AI can perhaps assist in resolving that by its ability to personalize the experience to the individual and engage them more effectively. And so if you have a generic education offering, which, you know, bores some part of the population, excites the other, you’re missing, you’re underserving that population that gets bored. But if we can have a very targeted, scalable approach for each individual, giving them something that’s engaging, exciting, if they are auditory learners or visual learners, we can tailor it to them and hopefully keep their attention better than we might otherwise would. So AI might be part of the solution as opposed to the source of the problem.

Debbie Prentice

Yugo, does your vision of AI comport with that?

Hugo Sarazen

It completely matches. And I think, you know, there is a well -known piece of research from the 80s from a university, a Chicago professor. It’s the Bloom two -sigma problem. And they did some research where they looked at the ability to learn with one -on -one coaching. It was two -sigma higher than the classroom. But the economics of doing that was not there. That’s why we have these big classrooms, and that’s why there are bigger classrooms for first years. It doesn’t deliver the same learning experience. Now, to Aidan’s point, with AI, you can personalize the experience. You can adapt it, and you can create feedback loops that a professor cannot today. You’ve got 40 students. You cannot pick up who’s not easily.

Some teachers are amazing, and they have the ability to do incredible things. But now you have the ability to have that feedback. So I think we’re going to see a lot of AI expert tutors and coaches that will have context and that will have been trained. on a body of knowledge that is hopefully trusted, hopefully accurate, and will help in the way that you like to learn. So if you’re an auditory learner, we’re going to give it to you that way. And if you’re a visual, we’ll give it to you that way. I think that’s a really exciting and promising world we’re entering from that point of view. So we’re going to go to questions from the audience in just a second.

So start thinking about your question. I’m just going to ask one more question of our panelists myself, which is where do humans fit in in this brave new world of AI -based education? I think all of us who are educators know that at some point we need human intervention in the process, even with the most fabulous technology. Where do you think they need to come in?

Aidan Gomez

I think they’re the customer. So they’re the ones that we’re serving with this technology. And so we need to be able to serve them. We need to create. the best possible product for them. If we just do surface -level education that’s very confirmatory, oh yeah, you’ve got it, great, you know, a bit sycophantic, then they won’t be effective in the real world when they actually enter the job market. And so there’s a burden on us as product creators to create the most effective product to teach people skills and give them knowledge. And I think that AI is actually an incredibly effective tool towards that. But I do still believe that it’s a tool. It’s like a calculator.

It’s something that you can lean on to give you faster answers, more thorough answers. But we still need to ground ourselves in the human without the tool. And so testing becomes, it’s always been important, of course, but I think it becomes absolutely critical now because you can fake your way through an education system much more easily. And so having very strict testing regimens is going to be essential.

Hugo Sarazen

I have a variation on this I do think the teachers, the instructors are partly the customers but I do think they need to be in the loop they’re amazing storytellers they have a way if I ask anybody in this room who was your favorite teacher in high school and I pause for 5 seconds, there’s somebody in your mind right now what was special about that person and you cannot replicate that but you can augment that you can make that person now be able to maybe teach you on something that they were not like my favorite teacher in high school was a physics teacher I loved the way he presented, I loved the way he engaged and it was so motivating my chemistry teacher was not that but now I can augment with AI and have the voice, not just the voice but the way he thought, the way he presented the information apply to a different topic.

And I think that gets pretty exciting as well. You may finally understand chemistry. I may finally understand chemistry. I stayed away from chemistry because of that. But physics I love.

Debbie Prentice

Okay, I wanna open up to questions from the audience. So I will call on you the old fashioned way. If you raise your hand. Oh, you have to, sorry, you have to speak into my ear.

Audience

Anna Van Niels, director of the Livium Trust. I guess learning is a bit like working out it’s got to hurt to be effective. How do you think AI enabled tech of various kinds can help with that motivation issue? You’ve talked about the teacher being the one that absolutely the motivates, but a lot of the systems we’re talking about in the workplace, et cetera, you’re not gonna have that human in the loop. So can we do things with AI and tech that could prompt that?

Hugo Sarazen

Yeah, I’m gonna offer a few suggestions. And this is not like future, this exists today. So you can do AI role -playing and you can do AI role -playing. in a way that makes you go through the learning process. And I’m going to use a business example. So if you’re a new salesperson and you have a new product that you need to sell, you can load up the specs of that product into an AI role play and practice selling to a person. And there will be a rubric against which we’re going to score you. And we’re going to discover whether or not you are competent at selling this product that you’re responsible for. So that’s a business example.

I can do the same thing in a call center. You know, we have one of the largest call center outsourcers. There are 20 ,000 call center agents they need to onboard every month. That is incredibly complicated. But now you can load, you know, the most common error cause, the most common tickets, the product specs. And instead of taking three weeks to onboard somebody, through the process of learning, of experimenting, you can load up the specs of the product that you need to sell. You can do a role play and get to accelerate that learning by doing a lot of practice. So it’s simulation. So that’s one powerful example. I think the other one is AI can give you feedback and monitor the progress you’re making in a way that we can bring you back to that point in the gym where you’re struggling with whatever exercise you’re doing.

We’re going to make you do that exercise more and more and get that repetition in a way that reinforces the gap that you have.

Audience

Hi, I’m Nathaniel. I run an education company in Australia. Now, as a region, Australia has an interesting relationship with technology. As many of you may know, we’ve just recently had a social media ban for young people under 16. And in a similar vein, we don’t really have a good consensus around the role of AI. So my question is, what do you believe the role is for AI in physical classrooms? And what would you say to people who might be on the side of banning versus not banning it?

Aidan Gomez

Yeah, I think I’m interested to hear your answer. But from my side, I think it’s a tool like a calculator. I think also a duty of the education system now is to teach people how to use this AI, how to engage with it, how to most effectively use that tool. And so it certainly should exist as part of the classroom and as part of schooling. But like I said, it can become a crutch and it can be used to cheat. And so we have to come up with ways to ensure that students aren’t misusing it or using it in the ways that are unproductive to their learning. I’m excited to hear your answer.

Hugo Sarazen

I’ve got two -part answer. The first one is any business process or any endeavor, you have the problem statement asking the right question, you have the solving, and then you have the quality assurance in the back. It’s a feedback loop that you go through a circle all the time. And education is… No different. What AI does well is that middle part. It doesn’t do a whole lot in the front end and the back end. So what we need to teach young students and adults is how to ask the right question. The critical thinking, I love that it came out at the very top. Super, super important. But you can, as you said, the calculator is a calculator.

The fact that I can’t do multiplication table all the way to 100 is not that relevant for my day -to -day job. But the fact that I can be critical in my thinking, I can summarize, I can contextualize, I think those are the skills you want. Second part, for those who are curious, I have no relationship, but I am just fascinated. There’s a school in the U .S. called Alpha School. And they’ve got a really powerful model. They are using AI. They are encouraging students to use AI. And they are demonstrating that I’m going to get all the stats wrong, but they get two. the learning in half the time or three times the learning half the time and then the kids in the afternoon they go learn and learn how to be a civic leader or you know a leader in all sorts of other contexts instead of spending all their time where you know historically you would have learned you know various dates it’s not that relevant to know the dates of specific things but it’s relevant to understand the context of those events and I think that’s where we can focus a lot of the effort

Audience

Thank you Terrific topic to be discussed at Davos I’m Pranjal Sharma I’m from India I’m an author and analyst we’re looking at a lot of the micro pieces but I’d like to focus on the macro we have a situation today where we’re all skilled up but nowhere to go right last year I think ILO says 7 million fewer jobs were created not to mention the existing jobs that disappeared So there is a cry from the industry. Firstly, they don’t know who to hire and why to hire and what to hire, and they don’t even know what to test that credentials on. The second part is there’s a huge disconnect between what they want and what academia is offering.

Plus, the concept of a degree shouldn’t exist, and even continuous learning in terms of applied knowledge is missing. So I think the core phrase to be used here is applied knowledge. How do you create information for a person to be able to earn a livelihood, irrespective of white, gray, blue collar? And I think that’s the gap of applied knowledge delivered in the right way to the right people at the right time.

Aidan Gomez

From a labor market perspective, I think there’s a good case to be concerned about the impact of AI and what might happen, and reskilling is going to be an essential component of that. Thank you. The mismatch in the market between what education institutions are offering and what the market is demanding, I think that is a major issue that we need to figure out how to solve. I think AI can be a part of speeding up delivery of new programs and courses and keeping up with changes in demand much faster than we have in the past. The process of scaling up educational infrastructure to meet a shift in market demand has been historically extremely slow and laborious.

But with AI, we’re able to create programs much faster. The models are infinitely scalable. They’re always awake 24 -7. They never get annoyed at the student. So we have these incredibly compelling tutors to deploy at scale against the problem of teaching the population the skills that we need. But I think the issue might be in identifying the skills that we need, and that’s still going to have to come first. From us, the humans, the business leaders. the policymakers. So that might be the core constraint. We need a direction to be set against to start building the solution.

Debbie Prentice

I think too, I mean, what I would say is I think that, you know, universities aren’t teaching to what businesses need necessarily. We’re teaching things that we believe are fundamentally important, and I would defend that. I mean, we’re teaching critical thinking, and we’re teaching deep mastery, and we’re teaching them to people at a critical moment in their lives, most of them, where they actually really need to have a go and learn these skills. They may need additional skills when they go out into the workplace, and that, as far as I’m concerned, is what the kinds of products that you’re talking about are for. Good, thank you. Let’s go back to the critical thinking because now in the university the students widely use the AI assistance and get the instant answer.

In that case, how can we teach them to increase their capability of critical thinking to make factual check, logical check, scientific check, ethical check to the instant answer they got from models?

Hugo Sarazen

conclusion. The AI will outdo the human. So where we can be competitively differentiated versus the AI is in the front and in the back end. So we need to adapt the curriculum to make sure that people are asking the right questions with the right context. And it is critical thinking. It is critical thinking, but we need to expand and we need to have a better way to evaluate the level of critical thinking these students have when they hit the workforce so that you can evaluate. And then the same on assessing. I mean, AI is marvelous right now. It generates codes like there’s no tomorrow, but it’s mostly garbage. It is, you know, we have bottlenecks and quality assurance in the back end.

So how do you kind of create the new tools and you teach people to have, you know, the critical thinking to see if this is using the right library, is it using the right pattern? Is it using the right data? I think that’s one of the core, you know, change that. academic institution, organization like me, an individual need to do, as you do your self -development, you need to kind of really lean into this ability to ask the right question. Because the middle part, you don’t have a competitive advantage. You will be outgunned. And the thing that is even more crazy, historically, like people did PhD. I have a PhD. I went like super deep on one little topic and I got buried somewhere in the sinkhole.

And it took my entire body of effort to get there. And to be a polymath is very hard. To be able to understand, I know nothing about chemistry. I know nothing about biology, psychology. My dad did that. So I got something rubbed up on me, maybe. But AI is a polymath by design. It has the data set across all of that. So the middle part is a foregone conclusion, folks. You need to get, get good at the front and the back end.

Aidan Gomez

Yeah, I was going to say another thing, which is teaching is a skill in the same way coding is a skill or doing math is a skill. And so it’s a core capability that we as model developers need to invest in. And it’s not something that is easily benchmarked. And it’s not something that is accurately tracked at the moment. But I think the more this rolls out, I mean, it’s already in the hands of every student on the face of the planet. It’s going to become imperative that we’re able to track the performance of models in teaching tasks to ensure that they’re actually effective and improve that over time. That’s just so like a technical level that is not done presently.

I don’t know of a teaching benchmark, but I can point to probably 30 code ones, 50 math ones, you know, biology, et cetera.

Audience

All right. it happens from time to time I think that psychology is rubbing off well when you say AI is a polymath by design, it’s a brilliant thought you know, it was you articulated it very well which also means that by definition humans cannot compete so we basically have to end the session and say that doom is nigh

Hugo Sarazen

well, I don’t think so I mean, I’m more optimistic so the polymath thing is real I mean, if you do, again, historical perspective he who had Leonardo da Vinci on his team had an advantage to build a war machine or a better court or whatever now there’s going to be a similar debate, like who assembles these polymath AI thingy has an advantage that is a foregone conclusion, that’s why there’s all these battles for a But I think we cannot, as the human race, give up that ability to influence. I think that we made a point, I think you did at the very beginning. Like, these models typically are not designed, though some of them can be designed, to explain their reasoning.

So if as a society we begin to rely on this thing that is super facile, that gives us an answer, and we don’t have the questioning, and we don’t kind of do the checking and the validating, we lose agency on important decision. And I think that is one of the things that we need to focus on deeply as a society. It also leads to the guardrail, the ethical things, and all that other stuff. We need to go there, because in the middle, it’s going to come up with answers that will be amazing in biology, and will solve things in biology, because I got trained in English language, I don’t know. But it’s going to be pretty wild, but we cannot lose agency around this polymath.

I mean, every data center is going to have… hundreds of millions of polymaths in there.

Audience

yeah I just want to shed a thinking I believe there’s a type of paradox within companies about this critical thinking let me say it this way we senior professionals we know how to judge what the AI is doing so I ask them one day for the AI to model whatever and I could judge my juniors they were not able to judge because they don’t have the experience so but to some extent I could fire them because I don’t need them anymore because of these AI technologies but maybe there will be a gap so at some point in time AI can enhance a lot what I do but if you don’t train let’s say the new generation the junior who will be the future who will be in the future able to do this critical thinking on what AI is doing I don’t have the answers obviously companies need to take efficiency and we need to do our best to reduce cost whatever but I think it’s something we as a society will have to think a lot about

Debbie Prentice

it’s fair thank you here we’ve got one here you wanted you were up right yeah i didn’t just call call you

Audience

hi thank you for your insights i’m i’m kian i’m the CEO of an AI company called workera um i really like what you said even on um testing the human and i think in in the world of testing right now there’s almost two camps one that says you can test them with the calculator we can test them without the calculator and there’s also overlaid on top of it the risks of proctoring and understanding um who’s cheating who’s not cheating and what can you tell about it so how are you thinking about that idea of testing with or without the calculator

Aidan Gomez

yeah the uh the cheat like can you tell whether a piece of text was written by AI it’s really tough a lot of the detectors out there are total scams they’ll say 100 % AI even when it’s not used at all so they’re extremely overconfident very high error rate on both sides uh false positive false negative. And but the answer to that question is, you can. Like, you can insert into language models subtle cues to indicate for the reader, this was written by an AI. You can not sample from natural language, language that I’m drawing from right now. You can sample from a slightly shifted distribution and use certain words much more than any normal like any human would use.

And then as soon as those words appear, you have a good piece of evidence that this was written by a language model. And so us language modeling companies do that. We shift the distribution of the language model so that when its text gets read, we have some ability to say, you know, I can assign a likelihood that that was generated by my model. So you can detect that to some extent, but many of the tools are scams. And so I think we need to make better tools and put them in the hands of educators more readily. Thank you. On testing with and without the calculator, I have a pretty strong focus on without the calculator.

I think everything needs to be ripped away, and you, standing alone as yourself, need to prove your knowledge. That is like the gold standard test of what you have learned retained. But of course, like I was saying earlier, using the language model is a skill itself, and we should have space to test that, in which case, of course, you’re going to need the LLM in the loop.

Debbie Prentice

Let me seize the chair’s prerogative here to ask, because I’m curious what you would both say to this question. What happens in this brave new world of polymaths and not showing your work and not explaining your answer to expertise or authority? So, you know, we have at Cambridge, you know, library after library of big books that tell you the truth, or that was always the… the idea, right? You would go look it up somewhere. What do you do in a world in which looking it up is no longer… there’s not a dictionary, there’s not a truth?

Hugo Sarazen

I’ll start. I think most technology go back and forth. There’s a pendulum. We’re in the pendulum that bigger is better. We’re throwing everything under the sun. Every Reddit quote is now part of training every large language model. And that is good. It’s going to give you an average answer for average problem. Now, over time, I think we’re going to come back and say, you do need specialized, trusted, and we need to have confidence that we did use the right source. And I think there will be a space for that. At least I want to hope that that will be the case, that we’re going to come back and we’re going to have these specialized model that will not only be rag, but they are going to be defined from scratch with the right intent.

And they don’t need to be a zillion, trillion function points and whatever. I mean, they just need to be trained on the expertise. And then you do need to trust it. It’s going to be incredibly important. I think we also need a lot of research on explainability. And Ben Gio at the University of Montreal, one of the guys who got the Turin Awards, has been very vocal around this. We need to kind of go back and explain a lot more. These are statistical models. This is all this is. These are huge matrices, and they’re like weights assigned to different things. So this is not a piece of software where you say if, then, this, that.

This is just statistics. So it, on average, gives good answers. But it depends on the data. And you need to come back and put a bunch of tools to put the explainability into the model. And there are ways to do it. It’s not yet super advanced. And I think we need to invest in that so that we do have the confidence, build a trust. And I do think it’s part of the learning. The learning question you have. Because if the models are black box, you lose. the ability to learn from their deduction process, which doesn’t exist. It’s just a statistical model. There’s no deduction. So anyway, those are my two ideas.

Aidan Gomez

Yeah, over the course of last year, there was a paradigm shift in the type of model that gets to use now. We don’t just use input -output direct response models like you were alluding to. Every model now is a reasoning model. And so before it actually responds, it has an internal monologue where it thinks through the problem, tries to reason about it, and then delivers a response. It is primitive. It’s a year old, but it’s getting much better. And so I think exposing that to the user and showing these chains of thought, this reasoning is an important solution. And then like you say, RAG, which is retrieval augmented generation, where the model isn’t just drawing on its own knowledge, but it’s actually making direct and specific reference to external knowledge.

So we can plug it in into the Cambridge library. I went to Oxford, so the Bodley. And it can cite directly back from those sources. And that provides some degree of both reasoning and RAG provides some degree of auditability. So you can have a little bit more confidence in the response because you can check its work.

Debbie Prentice

Just out of curiosity, what’s driving that? What’s driving the need for reasoning?

Aidan Gomez

Because the models were brittle. They would very confidently answer with the wrong solution. And it turns out humans don’t put the same amount of energy into answering every question. But that was the prior expectation on these models. You would ask them, what’s 1 plus 1? And it would immediately respond and put the same amount of effort into answering that question. And you would ask it to prove some unsolved Erdos problem or something. And it would put the same amount of effort as 1 plus 1 into that. That was obvious. You know, there are some problems that we should spend. days, weeks, months, years, decades, putting effort in to solve, and there are others that can be responded to instantly.

It’s just a better, more robust intelligence.

Debbie Prentice

That’s fascinating. We have time for one more question. Anything pressing in there?

Audience

Thank you. Yeah, I’m very interested to ask the question of just circling back to the beginning where we said we have like public sector university as well as a technology, a tech platform being in the same room. The question I have on my mind is that with right now, like in the U .S. especially, education cost is so astronomically high and prohibitive. Lots of people are saying the narrative goes as like there’s no point going to university anymore. And I would see in that world, there would be a lot of attention turned to online education. I think we’re all very familiar with Udemy. What is the gaps between an online education and an accredited college or an elite college?

Has there ever been customer or market demand for online education to move towards a model or imitate a traditional college experience? Has that ever surfaced as a need? And just comparing the gaps there.

Hugo Sarazen

I’m going to say something maybe controversial, but it’s fun. The university degree is a bundle. It’s a convenient bundle that as a society we chose to create. So you learn something, you get an accreditation, and you get a degree. have a rite of passage. You know, these kids are at a moment, they leave home, they go, and they, and that bundle is a convenient, and we bundle that with research, because the same people could now pass on their knowledge to others. It is a convenient bundle as a society. It has worked well for, you know, a long time. Oxford and Cambridge are examples of long -standing institutions that had a version of this bundle. It changes over time.

Is it time to revisit whether all of these components need to fit together because of the economics and what AI can do to change the economic of delivery? Maybe. I think the second…

Debbie Prentice

Think it quickly.

Hugo Sarazen

Yeah, quickly. And the second piece is just the adaptability. If you have the labor market that moves so fast, you’re now going to begin to put more weight on the addressing a specific need for a specific skill. So I think that is a reality in addition to that potential unbundling of that whole experience.

Debbie Prentice

You have a good word for the university. I’m actually interested to hear from the university’s perspective. Then I’ll just end by saying I think that they are currently serving very different functions. Right now, university does so much more than provide knowledge that it still is worth its weight in gold, and it is gold. But we’ll see how the space develops. With that, I’m getting all kinds of signals from the producers, so we’ve got to end it. But thank you very much. Thank you for your questions, and thank you to our panelists. Thank you. To be continued… To be continued… Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (18)
Factual NotesClaims verified against the Diplo knowledge base (3)
Confirmedhigh

“Professor Debbie Prentice is Vice‑Chancellor of the University of Cambridge and opened the town‑hall discussion on the “dilemmas around knowledge”.”

The knowledge base identifies Deborah Prentice as Vice-Chancellor of the University of Cambridge and notes that the town-hall was framed around dilemmas around knowledge [S10] and [S4].

Confirmedmedium

“Aidan Gómez is the CEO (and co‑founder) of Cohere, an enterprise AI firm building large language models for business use.”

Sources list Aidan Gómez as CEO of Cohere, confirming his leadership role and the company’s focus on AI and LLMs, though the co-founder detail is not mentioned [S8] and [S7].

Confirmedmedium

“The town‑hall discussion was introduced as a conversation about the long‑standing “dilemmas around knowledge” that have existed since schools and libraries were invented.”

The introductory remarks in the knowledge base explicitly state that the town-hall would address “dilemmas around knowledge,” echoing the report’s framing [S4] and [S5].

External Sources (84)
S1
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S2
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S3
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S4
Driving Enterprise Impact Through Scalable AI Adoption — 1283 words | 124 words per minute | Duration: 616 secondss Good afternoon, everyone, and thank you for joining this tow…
S5
Driving Enterprise Impact Through Scalable AI Adoption — -Debbie Prentice- Professor and Vice Chancellor of the University of Cambridge, representing the not-for-profit educatio…
S6
Driving Enterprise Impact Through Scalable AI Adoption — – Hugo Sarazen- Aidan Gomez – Hugo Sarazen- Debbie Prentice
S7
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — – Aidan Gomez: CEO of Cohere Aidan Gomez: And there were definitely indications that it was a promising architecture f…
S8
Lift-off for Tech Interdependence? / DAVOS 2025 — – Aidan Gomez: CEO at Cohere Aidan Gomez: I’ll be quick. So I think, from our perspective, Cohere is focused on prod…
S9
AI expert Aidan Gomez joins Rivian board — Aidan Gomez, co‑founder and chief executive of AI specialist Cohere, has been appointed to theboard of electric‑vehicle …
S10
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Critical thinking as essential human skill Prentice defends the university model by arguing that universities teach ski…
S11
Certifying humanity: Labeling content amid AI flood — As a result, trust is no longer formed through close inspection. Few readers have the time, expertise, or tools to verif…
S12
Artificial intelligence (AI) – UN Security Council — Across different sessions, participants expressed concerns about the lack of transparency in AI algorithms, which can le…
S13
Artificial intelligence (AI) and the human condition — The need for a superior intelligence has accompanied us since the dawn of humanity. So far, religious beliefs have survi…
S14
DiploFoundation — Ambassador Petru Dumitriu provides a reality check on persuasion in diplomacy. Nowadays, in multilateral conferences, or…
S15
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Arun and good morning everyone I represent the systems which champions the cause of industrial AI platforms. Now to this…
S16
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S17
Keynote-N Chandrasekaran — Evidence:We are at a defining moment in the age of abundant intelligence where trust, stewardship, and human capability …
S18
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S19
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Eve Gaumond:Thank you very much. I would like to thank you for inviting me to comment . I would like to build upon three…
S20
Safeguarding Children with Responsible AI — As Davin concluded with “measured optimism,” the discussion highlighted both tremendous potential and significant risks,…
S21
Open Forum: Liberating Science — It advocates for open discussions and debates, the accountability of politicians, transparency in science, and the ident…
S22
Policymaker’s Guide to International AI Safety Coordination — Teo emphasizes that policymakers must understand what actually works in practice, not just what appears effective on pap…
S23
Policymaker’s Guide to International AI Safety Coordination — Minister Teo’s aviation safety comparison focused on Singapore’s experience with A380 aircraft operations, describing ho…
S24
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — A challenge faced by universities is the disconnect between the skills and knowledge they provide and the skills demande…
S25
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S26
7th edition — The Internet has opened new possibilities for education. Online/e-learning initiatives use the Internet as a medium for …
S27
Building Trust through Transparency — Conversely, the rise of disinformation and fake news erodes trust, undermining the very foundation of society. Trust rem…
S28
HIGH LEVEL LEADERS SESSION IV — This approach ensures that technology is used as a means to empower individuals and promote equality. Mitigating risks a…
S29
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Eve Gaumond:Thank you very much. I would like to thank you for inviting me to comment . I would like to build upon three…
S30
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Steven:Thanks, Vicky. And good afternoon, everyone. Good morning to those online. It’s a pleasure to be here. So I’m a d…
S31
AI tools deployed to set tailored attendance goals for English schools — England will introduceAI-generated attendance targetsfor each school, setting tailored improvement baselines based on th…
S32
AI, smart cities, and the surveillance trade-off — The Barcelona model demonstrates that AI in cities doesn’t have to mean surrendering decision-making to algorithms. Mach…
S33
WS #219 Generative AI Llms in Content Moderation Rights Risks — All speakers agree that despite technological advances, human oversight and involvement in content moderation remains cr…
S34
Generative AI in Education — In conclusion, the summary underscores the need for a balanced integration of GAI in education, advocating for its use a…
S35
Enhancing rather than replacing humanity with AI — Development is guided by principles of dignity, fairness, and flourishing, rather than solely by technical capabilities….
S36
How nonprofits are using AI-based innovations to scale their impact — Summary:There was remarkably strong consensus among all speakers on key principles: the value of cohort-based learning, …
S37
Driving Enterprise Impact Through Scalable AI Adoption — Debbie Prentice defended the continued value of universities, arguing that they serve “so much more than provide knowled…
S38
Driving Indias AI Future Growth Innovation and Impact — In the Indian context, as the audience is aware, we had a lot of catching up to do. And it’s fair to say that a lot of w…
S39
AI (and) education: Convergences between Chinese and European pedagogical practices — – Hao Liu- Norman Sze- Jovan Kurbalija- Donis Sadushaj Multiple audience members argued that universities serve purpose…
S40
AI for Safer Workplaces & Smarter Industries_ Transforming Risk into Real-Time Intelligence — Explanation:There was unexpected consensus that fear about AI is widespread across different age groups and demographics…
S41
Defying Cognitive Atrophy in the Age of AI: A World Economic Forum Stakeholder Dialogue — Moderate to high disagreement with significant implications. While speakers agreed on the importance of human developmen…
S42
Welfare for All Ensuring Equitable AI in the Worlds Democracies — It’s fascinating maybe we’ll come back to it as we talk to a close. Let me shift gears a little bit and talk a bit more …
S43
From summer disillusionment to autumn clarity: Ten lessons for AI — In contrast, the focus on existing harms – education, discrimination, job loss, etc. – frames the problem in terms of ac…
S44
WSIS Action Line Facilitators Meeting: 20-Year Progress Report — Access to Information and Knowledge Society Development Sociocultural | Content policy | Online education The digital …
S45
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S46
Keynote-N Chandrasekaran — Evidence:We are at a defining moment in the age of abundant intelligence where trust, stewardship, and human capability …
S47
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Gong Ke: Thank you. Based on the observation of my Institute in the past years to the Chinese practices, I think there a…
S48
Keynote-Martin Schroeter — Organizational and public trust in AI systems is established through implementing clear operational boundaries and ensur…
S49
Secure Finance Risk-Based AI Policy for the Banking Sector — Trust is built when systems are predictable, explainable, and accountable. Trust deepens when innovation aligns with pub…
S50
IndoGerman AI Collaboration Driving Economic Development and Soc — Yes, thank you for inviting me. Yeah, what is Fraunhofer? doing in the field of AI. You all know AI is on one hand large…
S51
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S52
Diplomatic policy analysis — Overreliance on technology:While machine learning and analytics are powerful tools, they are not infallible. Overdepende…
S53
Can we test for trust? The verification challenge in AI — Anja Kaspersen discussed the role of technical professional organizations like IEEE in AI governance conversations. She …
S54
Why science metters in global AI governance — Finally, let us be clear. Science informs, but humans decide. Our goal is to make human control a technical reality, not…
S55
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S56
Advancing Scientific AI with Safety Ethics and Responsibility — And this cannot come just from the policy side, right? So we need to bring in all the participatory approach which will …
S57
How Multilingual AI Bridges the Gap to Inclusive Access — I’ll also say one last thing, which is I myself grew up in Beirut in Lebanon, a very tiny country, but that everybody ha…
S58
Keynote-N Chandrasekaran — “It is the age of abundant intelligence where the scarce resources are trust, stewardship, and human capability.”[39]. “…
S59
Driving Enterprise Impact Through Scalable AI Adoption — This comment reframes AI not just as a tool but as creating millions of polymaths – historically rare individuals with e…
S60
Keynote-N Chandrasekaran — Evidence:We are at a defining moment in the age of abundant intelligence where trust, stewardship, and human capability …
S61
Artificial intelligence (AI) – UN Security Council — Firstly, synthetic data offers a solution to privacy issues byallowing developers to create data that mimics the statist…
S62
Rethinking learning: Hope, solutions, and wisdom with AI in the classroom — But this adaptation won’t happen without effort. It requires educators willing to experiment with new approaches even wh…
S63
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Avendis Consulting: I thank my esteemed co-chair. We will continue with our list of speakers. I now give the floor to …
S64
Education, Inclusion, Literacy: Musts for Positive AI Future | IGF 2023 Launch / Award Event #27 — Eve Gaumond:Thank you very much. I would like to thank you for inviting me to comment . I would like to build upon three…
S65
IGF 2024 Global Youth Summit — AI has the potential to tailor education to each student’s specific requirements. This personalization can enhance the l…
S66
Truth vs Myth in Elections / DAVOS 2025 — Critical thinking and education are key to combating disinformation
S67
Viewing Disinformation from a Global Governance Perspective | IGF 2023 WS #209 — Misinformation poses a significant threat to the stability of state institutions, as it undermines citizens’ confidence …
S68
Driving Enterprise Impact Through Scalable AI Adoption — “…when you exist in a world where it’s so fast and easy to get answers to whatever question you might have or to get a…
S69
Open Forum: Liberating Science — The importance of critical thinking and being cautious of fake news is another crucial aspect discussed in the analysis….
S70
The Gig Economy: Positioning Higher Education at the Center of the Future of Work (USAID Higher Education Learning Network) — A challenge faced by universities is the disconnect between the skills and knowledge they provide and the skills demande…
S71
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Traditional education system faces challenges as students question value of expensive degrees
S72
ISSN 1011-6702 — A related trend has been the increasing involvement of public universities in revenuegenerating activities. 38 While hig…
S73
HIGH LEVEL LEADERS SESSION IV — In conclusion, while AI and new technologies offer immense potential, it is crucial to address concerns such as inequali…
S74
Large Language Models on the Web: Anticipating the challenge | IGF 2023 WS #217 — Lastly, more transparency is needed in the selection and curation of training data for LLMs. The value of training data …
S75
Building Trust through Transparency — Conversely, the rise of disinformation and fake news erodes trust, undermining the very foundation of society. Trust rem…
S76
How Small AI Solutions Are Creating Big Social Change — At the end, what we are reaching out in Europe is the constitution of a large European health data space. 450 million ci…
S77
Making AI less ‘thirsty’ — A new studyof the University of Colorado Riverside and the University of Texas Arlington uncovers the ‘secret water foot…
S78
Inclusive AI Starts with People Not Just Algorithms — This discussion at the India AI Summit focused on scaling human potential through artificial intelligence, featuring lea…
S79
The cognitive cost of AI: Balancing assistance and awareness — The rapid integration of AI tools like ChatGPT into daily life has transformed how we write, think, and communicate. AI …
S80
Keynote-Bejul Somaia — Scarcity of capital. Scarcity of infrastructure. Scarcity of opportunity. but perhaps the deepest and most consequential…
S81
Town Hall: How to Trust Technology — Furthermore, the previous mental model of default trust in technology may not apply in the case of LLM technology. On th…
S82
From Innovation to Impact_ Bringing AI to the Public — The discussion addresses concerns about AI’s impact on education with nuanced perspectives on institutional evolution. S…
S83
Future-Ready Education: Enhancing Accessibility & Building | IGF 2023 — 2. It is crucial to incorporate digital literacy, digital skills, and re-skilling in the education system. Pedagogical c…
S84
DIGITAL DIVIDENDS — In addition to foundational skills, workers are being required to use more critical thinking and problem solving, commu…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Debbie Prentice
7 arguments124 words per minute1283 words616 seconds
Argument 1
Critical thinking emerges as the most valued scarce skill (Debbie Prentice)
EXPLANATION
Debbie observes that among the options presented to the audience, critical thinking received the most votes, indicating it is seen as the most valuable scarce skill in an AI‑rich environment. She links this to the need for learners to assess their own understanding and motivation.
EVIDENCE
She notes that the poll results showed critical thinking won out, with it and sustained attention neck-and-neck, followed by trust and deep mastery [65-68]. She also emphasizes that critical thinking helps learners know whether they have truly mastered material [70-73].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Poll results and panel discussion highlighted critical thinking as the top scarce skill, aligning with the World Economic Forum’s emphasis on critical thinking as essential [S10] and the discussion’s mention of teaching critical thinking at a pivotal moment [S5].
MAJOR DISCUSSION POINT
Critical thinking emerges as the most valued scarce skill
AGREED WITH
Hugo Sarazen, Aidan Gomez, Audience
DISAGREED WITH
Hugo Sarazen, Aidan Gomez
Argument 2
Lack of explainability undermines confidence in AI‑provided knowledge (Debbie Prentice)
EXPLANATION
Debbie argues that when AI systems cannot explain how they arrived at an answer, users lose trust in the information provided, which hampers effective learning and decision‑making.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust erosion due to AI’s lack of source attribution is documented in analyses of AI transparency and trust challenges [S11][S12][S15].
MAJOR DISCUSSION POINT
Lack of explainability undermines confidence in AI‑provided knowledge
AGREED WITH
Hugo Sarazen, Aidan Gomez
DISAGREED WITH
Hugo Sarazen, Aidan Gomez
Argument 3
Universities cultivate critical thinking and self‑knowledge beyond mere content delivery (Debbie Prentice)
EXPLANATION
Debbie stresses that universities aim to develop learners’ self‑knowledge and critical thinking, not just deliver factual content. She argues that these deeper competencies are essential for understanding one’s own learning progress.
EVIDENCE
She explains that self-knowledge for the learner is crucial because learners cannot tell if they have truly mastered or are interested in material, and that critical thinking was the audience’s preferred answer [57-63][65-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The integrated value of universities in fostering critical thinking and self-knowledge is discussed in the WEF town hall and in remarks defending university missions beyond content delivery [S10][S4].
MAJOR DISCUSSION POINT
Universities cultivate critical thinking and self‑knowledge beyond mere content delivery
Argument 4
Universities often teach for intrinsic value rather than direct business relevance (Debbie Prentice)
EXPLANATION
Debbie points out that universities prioritize teaching fundamental concepts such as critical thinking and deep mastery, which may not align directly with immediate corporate skill needs, but she defends this intrinsic educational mission.
EVIDENCE
She states that universities teach what they believe is fundamentally important-critical thinking and deep mastery-rather than tailoring to business needs, and that additional workplace skills can be provided by other products [267-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Debate on universities focusing on intrinsic skills rather than immediate business needs is reflected in the panel’s contrast between academic and corporate priorities [S4][S5].
MAJOR DISCUSSION POINT
Universities often teach for intrinsic value rather than direct business relevance
Argument 5
Despite cost pressures, universities provide broader societal functions that remain valuable (Debbie Prentice)
EXPLANATION
Debbie concludes that even though university degrees are expensive, they deliver more than knowledge, offering societal benefits such as research, accreditation, and a rite of passage that remain valuable.
EVIDENCE
She remarks that universities do much more than provide knowledge, describing them as “worth its weight in gold” and highlighting their broader societal role [425-426].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
University’s broader societal role, including research, accreditation and social rites, is highlighted in the discussion of unbundling and integrated value [S10][S4].
MAJOR DISCUSSION POINT
Despite cost pressures, universities provide broader societal functions that remain valuable
DISAGREED WITH
Hugo Sarazen, Aidan Gomez
Argument 6
Sectoral differences between for‑profit ed‑tech firms and not‑for‑profit universities shape distinct pressures, opportunities and approaches to knowledge dilemmas
EXPLANATION
Debbie points out that the panelists come from very different organisational models – commercial companies that sell products versus a public university. These differing incentives influence how each sector tackles issues such as knowledge scarcity, monetisation and public good.
EVIDENCE
She notes that “Aidan and Hugo run very successful businesses selling a product… they are from the for-profit educational technology sector, and I’m from the not-for-profit sector” highlighting the contrasting pressures and opportunities each faces [13-15].
MAJOR DISCUSSION POINT
Sectoral differences affect knowledge‑related strategies
Argument 7
Universities must explicitly teach learners how to critically evaluate AI‑generated answers, including fact‑checking, logical, scientific and ethical validation
EXPLANATION
Debbie stresses that as AI tools provide instant answers, students need structured training to assess the reliability of those answers. Critical thinking should be taught as a set of verification skills rather than assumed to emerge automatically.
EVIDENCE
She asks, “how can we teach them to increase their capability of critical thinking to make factual check, logical check, scientific check, ethical check to the instant answer they got from models?” indicating a demand for explicit instruction on evaluating AI output [272-276].
MAJOR DISCUSSION POINT
Teaching verification of AI outputs
H
Hugo Sarazen
11 arguments170 words per minute3459 words1214 seconds
Argument 1
Attention as the new scarce commodity (Hugo Sarazen)
EXPLANATION
Hugo cites the classic Herbert Simon quote that an abundance of information creates a poverty of attention, arguing that human attention has become the scarcest resource for learners in the age of AI.
EVIDENCE
He references Herbert Simon’s observation that “when you have a wealth of information, you have a poverty of attention” and notes that this scarcity of attention is a key problem for learners today [40-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The scarcity of human attention amid information overload is noted in the discussion and aligns with observations on sustained attention becoming scarce [S5].
MAJOR DISCUSSION POINT
Attention as the new scarce commodity
AGREED WITH
Debbie Prentice, Aidan Gomez
DISAGREED WITH
Aidan Gomez, Debbie Prentice
Argument 2
Trust in AI outputs becomes a limiting factor (Hugo Sarazen)
EXPLANATION
Hugo warns that AI systems often provide answers without indicating their source, which erodes users’ trust and hampers learning because learners cannot verify the provenance of information.
EVIDENCE
He explains that AI products give answers without explaining where they came from, making trust a crucial issue for society [46-47].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust concerns arising from AI’s opaque answers are supported by literature on trust erosion and the need for explainability [S11][S12][S15].
MAJOR DISCUSSION POINT
Trust in AI outputs becomes a limiting factor
AGREED WITH
Aidan Gomez, Debbie Prentice
Argument 3
Personalised, adaptive tutoring can sustain engagement (Hugo Sarazen)
EXPLANATION
Hugo argues that AI can tailor learning experiences to individual preferences—visual, auditory, or other learning styles—thereby keeping learners’ attention and motivation higher than generic content.
EVIDENCE
He states that AI can deliver targeted, scalable content that matches a learner’s modality (auditory or visual), which helps maintain attention better than generic offerings [145-146].
MAJOR DISCUSSION POINT
Personalised, adaptive tutoring can sustain engagement
AGREED WITH
Aidan Gomez, Debbie Prentice
Argument 4
AI‑enabled role‑play simulations accelerate skill acquisition (Hugo Sarazen)
EXPLANATION
Hugo describes how AI‑driven role‑play can simulate real‑world scenarios, such as sales or call‑center interactions, providing rapid, repeatable practice and immediate feedback, thus speeding up skill development.
EVIDENCE
He gives examples of loading product specs into an AI role-play for sales training and for onboarding 20,000 call-center agents, noting that this simulation accelerates learning through practice and scoring rubrics [202-214].
MAJOR DISCUSSION POINT
AI‑enabled role‑play simulations accelerate skill acquisition
Argument 5
Need for specialised, trusted models that cite sources (Hugo Sarazen)
EXPLANATION
Hugo calls for AI models that are purpose‑built, trained on vetted data, and capable of citing their sources, so users can trust the outputs and verify their accuracy.
EVIDENCE
He explains that future specialized models will be trained from scratch with the right intent, will cite expertise, and that trust and explainability will be essential for reliable use [346-352].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for purpose-built, source-citing models correspond with calls for trusted AI systems that provide provenance and transparency [S11][S12][S15].
MAJOR DISCUSSION POINT
Need for specialised, trusted models that cite sources
DISAGREED WITH
Aidan Gomez
Argument 6
Teachers as storytellers and mentors; AI should augment, not replace them (Hugo Sarazen)
EXPLANATION
Hugo emphasizes that teachers bring narrative and motivational qualities that AI cannot replicate, but AI can augment teachers by extending their voice and presentation style to new subjects.
EVIDENCE
He recounts how a favorite high-school teacher’s storytelling could be amplified with AI, allowing the teacher’s voice and approach to be applied to topics they never taught, such as chemistry [184-188].
MAJOR DISCUSSION POINT
Teachers as storytellers and mentors; AI should augment, not replace them
AGREED WITH
Aidan Gomez, Debbie Prentice
DISAGREED WITH
Aidan Gomez
Argument 7
Current ROI of corporate learning is unclear; AI can deliver bite‑size, measurable skill outcomes (Hugo Sarazen)
EXPLANATION
Hugo notes that many enterprises cannot demonstrate the return on investment of traditional learning programs, and proposes AI‑driven, bite‑size, in‑flow learning that can be measured in real time for skill deployment.
EVIDENCE
He reports speaking with 400 CHROs, finding that most learning tools lack clear ROI, and suggests AI can provide adaptive, bite-size learning with real-time skill measurement [128-141].
MAJOR DISCUSSION POINT
Current ROI of corporate learning is unclear; AI can deliver bite‑size, measurable skill outcomes
Argument 8
The traditional degree bundles education, accreditation and rite‑of‑passage; AI may prompt unbundling (Hugo Sarazen)
EXPLANATION
Hugo describes the university degree as a historical bundle of education, accreditation, and social rite of passage, and suggests that AI‑driven delivery could lead to a re‑evaluation and possible unbundling of these components.
EVIDENCE
He explains that the degree bundles learning, accreditation, and a rite of passage, and questions whether AI-enabled economics will force a revisit of this bundle [408-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of degree unbundling is directly addressed in the World Economic Forum town hall, where panelists discuss separating learning, accreditation, and social rites [S10][S5].
MAJOR DISCUSSION POINT
The traditional degree bundles education, accreditation and rite‑of‑passage; AI may prompt unbundling
DISAGREED WITH
Debbie Prentice, Aidan Gomez
Argument 9
AI democratizes knowledge by turning every data centre into a polymath, fundamentally reshaping power structures that historically linked knowledge scarcity to authority
EXPLANATION
Hugo argues that large language models can learn across domains and, when deployed at scale, make vast amounts of expertise instantly available, thereby eroding the traditional link between knowledge ownership and power.
EVIDENCE
He describes how “LLMs can learn everything… become the polymath… every data centre… we’re adding millions of polymath… that becomes a democratization of that knowledge” [37-40] and contrasts this with the historical view that “knowledge was scarce… a source of power” [32-35].
MAJOR DISCUSSION POINT
Democratization of knowledge via AI
Argument 10
With abundant AI‑generated answers, education must pivot to teaching people how to ask the right questions, because inquiry, not just answers, preserves human agency
EXPLANATION
Hugo emphasizes that the real educational challenge is helping learners formulate effective queries, as AI excels at providing answers but does not guide the questioning process.
EVIDENCE
He notes that “the pattern… they had an enormous proliferation of tools… How do you measure the ROI of learning? … we need to teach people to ask the right question” and later says “we need to adapt the curriculum to make sure that people are asking the right questions with the right context” [128-141][236-238].
MAJOR DISCUSSION POINT
Teaching question‑formulation in an AI‑rich world
Argument 11
AI enables rapid, scalable assessment and feedback loops that can replace one‑size‑fits‑all curricula, allowing personalized learning paths for both fast and slow learners
EXPLANATION
Hugo describes how AI can instantly assess learner proficiency, segment classes, and provide continuous feedback, thereby accommodating diverse learning speeds without the constraints of traditional batch teaching.
EVIDENCE
He explains that “with AI you can do a quick assessment… break apart the class… you can have feedback loop and reinforce that in a very, very powerful way” [104-107].
MAJOR DISCUSSION POINT
AI‑driven assessment and feedback for personalized learning
A
Aidan Gomez
10 arguments166 words per minute1973 words710 seconds
Argument 1
Deep mastery is at risk of being replaced by surface answers (Aidan Gomez)
EXPLANATION
Aidan warns that large language models can give quick, superficial responses that give learners a false sense of mastery, thereby endangering true deep understanding.
EVIDENCE
He observes that LLMs can produce a four-paragraph answer to complex questions like quantum mechanics, which feels like mastery but lacks depth, creating a false sense of understanding [48-49].
MAJOR DISCUSSION POINT
Deep mastery is at risk of being replaced by surface answers
DISAGREED WITH
Hugo Sarazen, Debbie Prentice
Argument 2
Rigorous testing without AI is essential to verify true understanding (Aidan Gomez)
EXPLANATION
Aidan stresses that to assess genuine learning, assessments must be conducted without AI assistance, ensuring that learners retain knowledge independently.
EVIDENCE
He states that testing is essential, requiring the tool to be removed so that human understanding can be measured, and later reiterates the importance of testing without the calculator [50-53][331-334].
MAJOR DISCUSSION POINT
Rigorous testing without AI is essential to verify true understanding
AGREED WITH
Hugo Sarazen, Debbie Prentice
DISAGREED WITH
Hugo Sarazen
Argument 3
AI reasoning and retrieval‑augmented generation improve depth of learning (Aidan Gomez)
EXPLANATION
Aidan explains that newer AI models incorporate internal reasoning (chain‑of‑thought) and retrieval‑augmented generation (RAG), which allow them to cite sources and provide more transparent, deeper answers.
EVIDENCE
He describes reasoning models that generate an internal monologue before answering and RAG that pulls directly from external knowledge bases like Cambridge or Oxford libraries, enhancing auditability [374-382].
MAJOR DISCUSSION POINT
AI reasoning and retrieval‑augmented generation improve depth of learning
AGREED WITH
Hugo Sarazen, Debbie Prentice
DISAGREED WITH
Hugo Sarazen
Argument 4
Retrieval‑augmented generation and chain‑of‑thought reasoning increase auditability (Aidan Gomez)
EXPLANATION
Aidan argues that exposing the model’s reasoning steps and linking answers to cited sources makes AI outputs more auditable and trustworthy for users.
EVIDENCE
He notes that reasoning models now produce chain-of-thought explanations and that RAG enables direct citation of external sources, providing a degree of auditability and confidence [374-382].
MAJOR DISCUSSION POINT
Retrieval‑augmented generation and chain‑of‑thought reasoning increase auditability
AGREED WITH
Hugo Sarazen, Debbie Prentice
Argument 5
Human judgment remains necessary; AI is a tool like a calculator (Aidan Gomez)
EXPLANATION
Aidan likens AI to a calculator: a powerful aid that speeds up work but still requires human oversight and grounding, emphasizing that human judgment cannot be replaced.
EVIDENCE
He says AI is a tool like a calculator, useful for faster and more thorough answers, but stresses the need for human grounding and strict testing to prevent cheating [175-182].
MAJOR DISCUSSION POINT
Human judgment remains necessary; AI is a tool like a calculator
AGREED WITH
Hugo Sarazen, Debbie Prentice
DISAGREED WITH
Hugo Sarazen
Argument 6
Rapid creation of AI‑generated programs can close the gap between academia and industry (Aidan Gomez)
EXPLANATION
Aidan points out that AI can accelerate the development of new educational programs, allowing institutions to keep pace with fast‑changing labor‑market demands.
EVIDENCE
He explains that AI can create programs quickly, scale infinitely, and operate 24/7, helping to address the mismatch between academic offerings and market needs, though skill identification must come from humans first [254-262].
MAJOR DISCUSSION POINT
Rapid creation of AI‑generated programs can close the gap between academia and industry
DISAGREED WITH
Hugo Sarazen, Debbie Prentice
Argument 7
Credential relevance depends on continuous testing and up‑to‑date skill verification (Aidan Gomez)
EXPLANATION
Aidan argues that for credentials to stay meaningful, they must be continuously validated through testing that reflects current skill requirements, rather than relying on outdated certifications.
EVIDENCE
He notes that business leaders want to know if a certification is current, emphasizing the need for ongoing testing and up-to-date skill verification [138-140] and reiterates the importance of strict testing regimens [175-182].
MAJOR DISCUSSION POINT
Credential relevance depends on continuous testing and up‑to‑date skill verification
Argument 8
Enterprise‑focused AI deployments must guarantee data security by keeping models and data within the customer’s perimeter, preventing any data exfiltration
EXPLANATION
Aidan highlights that Cohere’s competitive edge is its security architecture, which ensures that proprietary data never leaves the client’s environment, addressing privacy and compliance concerns for sensitive sectors.
EVIDENCE
He states, “Our big differentiator is on the security side… there’s no data exiting our customer’s perimeter. Instead, we send all of our models and software to them, and they keep it self-contained” [115-118].
MAJOR DISCUSSION POINT
Data‑centric security for enterprise AI
Argument 9
Current AI‑generated text detectors are unreliable, often producing false positives and negatives; the industry needs better provenance‑embedding techniques to identify AI output
EXPLANATION
Aidan points out that many existing detection tools are essentially scams, misclassifying human‑written text as AI‑generated and vice‑versa, and calls for more robust methods that embed identifiable cues in model outputs.
EVIDENCE
He remarks that “the detectors out there are total scams… they’ll say 100 % AI even when it’s not used… very high error rate on both sides” and then describes how “you can insert into language models subtle cues… shift the distribution… so we have some ability to say… it was generated by my model” [321-330].
MAJOR DISCUSSION POINT
Improving detection of AI‑generated content
Argument 10
Embedding subtle provenance cues into model outputs creates a measurable likelihood that a piece of text was generated by a specific AI, offering a path toward trustworthy detection
EXPLANATION
Aidan suggests that by deliberately altering the sampling distribution, AI systems can leave traceable signatures, enabling downstream tools to assess the probability of AI authorship.
EVIDENCE
He explains that “you can shift the distribution of the language model… when its text gets read, we have some ability to say… I can assign a likelihood that that was generated by my model” [323-329].
MAJOR DISCUSSION POINT
Provenance‑based AI output identification
A
Audience
5 arguments164 words per minute886 words323 seconds
Argument 1
Traditional degree structures are becoming obsolete; education should focus on delivering applied knowledge that directly equips individuals for diverse livelihoods
EXPLANATION
The audience member argues that the concept of a degree no longer serves the labor market, and that learning should be oriented toward practical, applied skills that can be quickly translated into employment across sectors.
EVIDENCE
He states, “the concept of a degree shouldn’t exist… the core phrase to be used here is applied knowledge… how do you create information for a person to be able to earn a livelihood… irrespective of white, gray, blue-collar?” [247-252].
MAJOR DISCUSSION POINT
Shift from degree bundles to applied knowledge
Argument 2
AI‑enabled role‑play and feedback mechanisms can sustain learner motivation by simulating real‑world tasks, replicating the motivational role of a human instructor
EXPLANATION
The audience raises the problem of motivation without a human teacher, and suggests that AI‑driven simulations can provide the engagement and practice needed to keep learners motivated.
EVIDENCE
Anna Van Niels asks, “How do you think AI enabled tech of various kinds can help with that motivation issue?” and Hugo responds with examples of AI role-play for sales and call-center training that provide practice and scoring [194-214].
MAJOR DISCUSSION POINT
AI‑driven simulations as motivational tools
Argument 3
Reliance on AI may erode critical thinking among junior employees, creating a future skills gap; organisations must deliberately cultivate critical thinking despite AI efficiencies
EXPLANATION
The audience member warns that senior professionals can still judge AI output, but juniors may lose the habit of critical evaluation, leading to a long‑term deficit in analytical capability.
EVIDENCE
He observes, “we senior professionals we know how to judge what the AI is doing… juniors… not able to judge… you could fire them… there will be a gap… companies need to think about this” [318-326].
MAJOR DISCUSSION POINT
Preserving critical thinking in an AI‑augmented workforce
Argument 4
Testing frameworks must evolve to balance the use of AI tools (the “calculator”) with integrity safeguards, requiring better detection of AI‑generated work and clear policies on when tools may be used
EXPLANATION
The audience raises concerns about whether assessments should allow AI assistance, noting the tension between testing with and without calculators and the need for reliable cheating detection.
EVIDENCE
Kian asks, “how are you thinking about that idea of testing with or without the calculator… what can you tell about it?” and Aidan later discusses the difficulty of detecting AI-generated text and the need for robust tools [320-327][321-330].
MAJOR DISCUSSION POINT
Designing AI‑aware assessment regimes
Argument 5
The emergence of AI polymaths raises existential concerns about human relevance, suggesting society must retain agency over decision‑making and not surrender to opaque, black‑box outputs
EXPLANATION
An audience participant expresses a bleak view that AI’s polymath capabilities could render human contribution meaningless, highlighting the need for societal safeguards.
EVIDENCE
The audience remarks, “by definition humans cannot compete so we basically have to end the session and say that doom is nigh” after Aidan’s description of AI as a polymath [308-312].
MAJOR DISCUSSION POINT
Human agency versus AI polymath dominance
Agreements
Agreement Points
Critical thinking is the most valued scarce skill and essential for evaluating AI-generated answers
Speakers: Debbie Prentice, Hugo Sarazen, Aidan Gomez, Audience
Critical thinking emerges as the most valued scarce skill (Debbie Prentice) Trust in AI outputs becomes a limiting factor (Hugo Sarazen) Rigorous testing without AI is essential to verify true understanding (Aidan Gomez) Preserving critical thinking in an AI‑augmented workforce (Audience)
All participants highlighted critical thinking as the key capability needed in an AI-rich environment: Debbie noted the poll showed critical thinking won out and stressed its role in self-knowledge [65-68][272-276]; Hugo emphasized its importance for asking the right questions and maintaining trust [237-238][236-238]; Aidan argued that rigorous testing without AI is crucial, underscoring critical thinking’s role in assessment [182-183]; audience members warned that junior workers may lose critical thinking without deliberate cultivation [318-326].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions emphasize critical thinking as a core human capability that must be preserved amid AI proliferation, highlighted in university advocacy for critical thinking skills [S37] and in frameworks identifying human capability as a scarce resource alongside trust and stewardship [S45].
Human attention has become a scarce resource in the age of abundant information
Speakers: Debbie Prentice, Hugo Sarazen, Aidan Gomez
Attention as the new scarce commodity (Hugo Sarazen) Critical thinking emerges as the most valued scarce skill (Debbie Prentice) – includes sustained attention AI can assist by personalising experiences to keep attention (Aidan Gomez)
The panel agreed that attention is limited: Hugo cited Herbert Simon’s observation that information wealth creates a poverty of attention [40-43]; Debbie referenced sustained attention as a poll option and its close ranking with critical thinking [68-69]; Aidan described how short-form media erodes attention spans, noting personal attention ends after 30 seconds of TikTok content [143-144].
POLICY CONTEXT (KNOWLEDGE BASE)
The shift from information scarcity to abundance has been documented as creating attention scarcity, a concern noted in digital policy analyses of the information-age transition [S44] and echoed in debates on cognitive atrophy in AI contexts [S41].
Explainability and trust in AI outputs are essential for effective learning
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Trust in AI outputs becomes a limiting factor (Hugo Sarazen) Lack of explainability undermines confidence in AI‑provided knowledge (Debbie Prentice) Retrieval‑augmented generation and chain‑of‑thought reasoning increase auditability (Aidan Gomez)
All three stressed the need for AI systems to be transparent: Hugo warned that AI often gives answers without source, eroding trust [46-47] and called for models that cite sources [346-352]; Aidan highlighted reasoning models and RAG that provide citations and auditability [374-382]; Debbie linked trust to the poll option and expressed concern about lack of explainability (implicit in her focus on trust) [66-69].
POLICY CONTEXT (KNOWLEDGE BASE)
International AI governance sessions stress transparency, explainability, and accountability as prerequisites for trust in AI systems, as outlined in multistakeholder guidelines on AI transparency [S47] and organizational trust frameworks requiring explainable decisions [S48][S49][S42].
AI should augment, not replace, human educators and judgment
Speakers: Aidan Gomez, Hugo Sarazen, Debbie Prentice
Human judgment remains necessary; AI is a tool like a calculator (Aidan Gomez) Teachers as storytellers and mentors; AI should augment, not replace them (Hugo Sarazen) Universities provide broader societal functions that remain valuable (Debbie Prentice)
The speakers concurred that AI is a supportive tool: Aidan likened AI to a calculator, emphasizing the need for human grounding [175-182]; Hugo argued teachers bring narrative and motivation that AI can only augment, and that human intervention remains necessary [169-170][184-188]; Debbie defended the university’s broader role beyond content delivery, underscoring the continued need for human institutions [425-426].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy statements across sectors advocate AI as a tool to support human judgment rather than supplant it, exemplified by the Barcelona model for augmenting planners [S32], content-moderation best practices retaining human oversight [S33], and principles of enhancing rather than replacing humanity [S35][S52].
Personalised, adaptive AI-driven learning can improve engagement and skill acquisition
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Personalised, adaptive tutoring can sustain engagement (Hugo Sarazen) AI reasoning and retrieval‑augmented generation improve depth of learning (Aidan Gomez) AI can personalize experiences to keep attention (Debbie Prentice)
All agreed on the promise of personalised AI: Hugo described AI tailoring content to auditory/visual styles and role-play simulations that keep learners engaged [145-146][202-214]; Aidan explained reasoning and RAG models that provide deeper, source-backed answers, enhancing learning quality [374-382]; Debbie noted AI’s potential to deliver targeted, engaging material to maintain attention [144-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Educational policy reports call for balanced integration of generative AI to personalize learning while maintaining human oversight, highlighting the need for supportive tools that enhance engagement without becoming crutches [S34] and stressing responsible AI design focused on real user needs [S36].
Rigorous assessment, often without AI assistance, is needed to verify true mastery
Speakers: Aidan Gomez, Hugo Sarazen, Debbie Prentice
Rigorous testing without AI is essential to verify true understanding (Aidan Gomez) AI‑driven assessment and feedback loops can personalize learning (Hugo Sarazen) Self‑knowledge for the learner requires assessment of mastery (Debbie Prentice)
The panel stressed the importance of assessment: Aidan advocated testing without tools to gauge retained knowledge [50-53][331-334]; Hugo highlighted AI’s ability to quickly assess and segment learners, providing feedback loops [104-107]; Debbie emphasized learners’ inability to know if they have mastered material without assessment [57-63].
POLICY CONTEXT (KNOWLEDGE BASE)
Verification and testing frameworks argue for assessment methods that can operate independently of AI to ensure authentic mastery, as discussed in AI verification challenges [S53] and recommendations for cautious AI integration in education assessments [S34].
Similar Viewpoints
Both see AI as a massive scaling force that makes expertise widely available: Hugo describes LLMs as polymaths that democratise knowledge across domains [37-40]; Aidan notes AI can create and deploy educational programs instantly, helping institutions keep pace with market demand [254-262].
Speakers: Hugo Sarazen, Aidan Gomez
AI democratizes knowledge by turning every data centre into a polymath (Hugo Sarazen) Rapid creation of AI‑generated programs can close the gap between academia and industry (Aidan Gomez)
Both acknowledge the historic role of university degrees while recognizing that AI could pressure a re‑evaluation of this bundled model: Hugo questions whether the bundle should stay together in light of AI economics [408-416]; Debbie affirms universities still deliver essential societal functions beyond knowledge [425-426].
Speakers: Hugo Sarazen, Debbie Prentice
The traditional degree bundles education, accreditation and rite‑of‑passage; AI may prompt unbundling (Hugo Sarazen) Despite cost pressures, universities provide broader societal functions that remain valuable (Debbie Prentice)
Both stress the necessity of provenance and explainability: Hugo calls for purpose‑built models that can cite expertise [346-352]; Aidan describes RAG and reasoning that provide source citations and audit trails [374-382].
Speakers: Aidan Gomez, Hugo Sarazen
Need for specialised, trusted models that cite sources (Hugo Sarazen) Retrieval‑augmented generation and chain‑of‑thought reasoning increase auditability (Aidan Gomez)
Unexpected Consensus
Both for‑profit ed‑tech leaders and a not‑for‑profit university champion the need for human‑centered learning despite AI’s capabilities
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Teachers as storytellers and mentors; AI should augment, not replace them (Hugo Sarazen) Human judgment remains necessary; AI is a tool like a calculator (Aidan Gomez) Universities provide broader societal functions that remain valuable (Debbie Prentice)
Despite representing different business models, all three emphasized that AI cannot replace human educators or judgment and that human-centric learning remains essential. This cross-sector alignment was not anticipated given their differing incentives [169-170][184-188][175-182][425-426].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder dialogues underline the shared commitment across profit models to preserve human-centered learning, with university leaders emphasizing critical thinking development [S37] and cross-cultural pedagogical discussions stressing human interaction as essential [S39], alongside calls for responsible AI from nonprofit sectors [S36].
Overall Assessment

The discussion revealed strong convergence on several fronts: critical thinking and trust/explainability are seen as the primary scarce competencies; attention is recognized as limited; AI is viewed as a powerful augmentative tool rather than a replacement for human educators; personalised, adaptive learning and rigorous assessment (often without AI) are essential to ensure genuine mastery. While participants come from diverse sectors, they share a common belief that human agency, judgment, and institutional roles remain indispensable alongside AI.

High consensus across speakers on the need to preserve human‑centric skills (critical thinking, attention, judgment) and to embed explainability and trust in AI systems. This consensus suggests that future policy and practice should focus on hybrid models that combine AI’s scalability with robust human oversight, assessment frameworks, and curricula that foreground critical thinking.

Differences
Different Viewpoints
What is the scarcest resource in an AI‑rich learning environment
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Attention as the new scarce commodity (Hugo Sarazen) Deep mastery is at risk of being replaced by surface answers (Aidan Gomez) Critical thinking emerges as the most valued scarce skill (Debbie Prentice)
Hugo argues that human attention is the most limited resource when information is abundant [40-44]. Aidan contends that deep mastery is disappearing because LLMs give superficial answers that create a false sense of understanding [48-53]. Debbie reports that the audience voted critical thinking as the top scarce skill, suggesting she sees critical thinking as the key resource to protect [65-68].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks identify trust, stewardship, and human capability as the most limited assets in the AI era [S45][S46], while analyses of the digital transition highlight human attention as a newly scarce resource [S44].
How to achieve trustworthy, explainable AI outputs
Speakers: Hugo Sarazen, Aidan Gomez, Debbie Prentice
Need for specialised, trusted models that cite sources (Hugo Sarazen) AI reasoning and retrieval‑augmented generation improve depth of learning (Aidan Gomez) Lack of explainability undermines confidence in AI‑provided knowledge (Debbie Prentice)
Hugo calls for purpose-built models that are trained on vetted data and can explicitly cite their sources to build trust [346-352]. Aidan proposes that newer reasoning models with chain-of-thought and RAG can provide auditability and source attribution, thereby improving trust [374-382]. Debbie (argument) stresses that without explainability users lose confidence, highlighting the same problem from a policy perspective. The speakers differ on the technical route to trustworthy AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines propose multi-step processes for transparency and explainability, including consensus building and technical safeguards, as detailed in AI transparency sessions [S47] and organizational trust standards emphasizing explainable decisions [S48][S49][S42].
Future role and structure of university degrees versus AI‑driven education
Speakers: Hugo Sarazen, Debbie Prentice, Aidan Gomez
The traditional degree bundles education, accreditation and rite‑of‑passage; AI may prompt unbundling (Hugo Sarazen) Despite cost pressures, universities provide broader societal functions that remain valuable (Debbie Prentice) Rapid creation of AI‑generated programs can close the gap between academia and industry (Aidan Gomez)
Hugo suggests AI could unbundle the historic university degree into separate learning, accreditation and social-rite components [408-416]. Debbie defends the university’s broader societal role, arguing that degrees remain “worth its weight in gold” beyond mere knowledge delivery [425-426]. Aidan sees AI as a way for universities to quickly develop new curricula, narrowing the market-academia mismatch [254-262]. The three positions diverge on whether the degree model should be retained, re-shaped, or accelerated via AI.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the evolving function of universities stress their role beyond knowledge transmission, focusing on critical thinking and social learning, challenging purely AI-driven credentialing models [S37][S39][S34].
Assessment methodology – testing with versus without AI assistance
Speakers: Aidan Gomez, Hugo Sarazen
Rigorous testing without AI is essential to verify true understanding (Aidan Gomez) AI‑enabled rapid assessment and feedback loops can personalize learning (Hugo Sarazen)
Aidan stresses that assessments must be conducted without AI tools to ensure genuine knowledge retention [50-53][331-334]. Hugo argues that AI can instantly assess learners, segment classes, and provide continuous feedback, enabling personalized pathways [104-107]. They share the goal of accurate assessment but disagree on whether AI should be excluded or integrated.
POLICY CONTEXT (KNOWLEDGE BASE)
AI governance literature calls for ongoing verification of AI systems and suggests assessment designs that can operate independently of AI to ensure reliability [S53], while educational policy recommends balanced AI integration without over-reliance [S34].
Extent to which AI should replace or augment human teachers
Speakers: Hugo Sarazen, Aidan Gomez
Teachers as storytellers and mentors; AI should augment, not replace them (Hugo Sarazen) Human judgment remains necessary; AI is a tool like a calculator (Aidan Gomez)
Hugo emphasizes the unique narrative and motivational role of teachers, proposing AI as an augmenting layer that can extend a teacher’s voice to new subjects [184-188]. Aidan likens AI to a calculator-useful but always requiring human grounding and judgment, warning against over-reliance [175-182]. The disagreement centers on how far AI can go in supplanting the teacher’s core functions.
POLICY CONTEXT (KNOWLEDGE BASE)
Cross-sector policy consensus maintains AI should augment rather than replace educators, citing models where AI supports human judgment in planning [S32], content moderation [S33], and broader AI ethics principles advocating human oversight [S35][S52].
Preferred technical approach to building trustworthy AI models
Speakers: Hugo Sarazen, Aidan Gomez
Need for specialised, trusted models that cite sources (Hugo Sarazen) AI reasoning and retrieval‑augmented generation improve depth of learning (Aidan Gomez)
Hugo calls for purpose-built, domain-specific models that are trained from scratch and can provide source citations to ensure trust [346-352]. Aidan focuses on enhancing generic models with internal reasoning (chain-of-thought) and retrieval-augmented generation to increase auditability and depth [374-382]. Both aim for trustworthy AI but propose different architectural solutions.
POLICY CONTEXT (KNOWLEDGE BASE)
Technical roadmaps outline essential steps for trustworthy AI, including transparency mechanisms, accountability structures, and verification processes as outlined in AI transparency guidelines [S47] and organizational trust frameworks [S48][S49].
Unexpected Differences
Optimism about AI democratizing knowledge versus existential dread of human irrelevance
Speakers: Hugo Sarazen, Audience (comment after Aidan’s polymath description)
AI democratizes knowledge by turning every data centre into a polymath (Hugo Sarazen) By definition humans cannot compete so we basically have to end the session and say that doom is nigh (Audience)
Hugo views AI-driven polymaths as a democratizing force that will reshape power structures but still leaves room for human agency [37-40][309-317]. An audience member reacts with a fatalistic view that AI’s polymath capability renders humans obsolete, a perspective Hugo does not share, revealing an unexpected clash between optimism and pessimism about AI’s societal impact [308-312].
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder dialogues capture both hopeful narratives of AI expanding access and concerns about human relevance, reflecting documented fears and calls for education-driven mitigation [S40] and discussions on cognitive atrophy and human development in the AI age [S41][S45].
Unbundling of university degrees versus defense of the degree bundle
Speakers: Hugo Sarazen, Debbie Prentice
The traditional degree bundles education, accreditation and rite‑of‑passage; AI may prompt unbundling (Hugo Sarazen) Despite cost pressures, universities provide broader societal functions that remain valuable (Debbie Prentice)
While both discuss the future of higher education, Hugo suggests AI could lead to a re-evaluation and possible separation of learning, accreditation and social rites, whereas Debbie defends the existing bundled model as still “worth its weight in gold” [408-416][425-426]. The disagreement on whether the bundle should be retained or dismantled was not anticipated given their shared focus on education.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates weigh the traditional bundled degree model against modular, AI-enabled learning pathways, with university advocates emphasizing holistic education and social interaction [S37][S39] and educational AI reports urging balanced integration rather than disaggregation [S34].
Overall Assessment

The panel exhibits substantive disagreement on which competency is most scarce (attention, deep mastery, or critical thinking), on the technical route to trustworthy AI (specialised source‑citing models vs reasoning/RAG), on the future architecture of university degrees, and on assessment methodology (AI‑free testing vs AI‑driven personalization). They also differ on the extent to which AI should replace versus augment teachers. These divergences reflect contrasting sectoral incentives (for‑profit ed‑tech vs not‑for‑profit academia) and varying risk tolerances regarding AI’s societal impact.

High – the speakers frequently clash on core priorities and implementation strategies, indicating that consensus on shaping AI‑enhanced education will require bridging sectoral perspectives and aligning on shared goals such as critical thinking, trust, and equitable access.

Partial Agreements
All three agree that fostering critical thinking and ensuring trustworthy outcomes are essential for future education. However, Debbie emphasizes poll results and self‑knowledge, Aidan stresses testing without AI, while Hugo focuses on model design and source citation. The shared goal is robust critical reasoning, but the pathways differ [65-68][50-53][346-352].
Speakers: Debbie Prentice, Aidan Gomez, Hugo Sarazen
Critical thinking emerges as the most valued scarce skill (Debbie Prentice) Rigorous testing without AI is essential to verify true understanding (Aidan Gomez) Need for specialised, trusted models that cite sources (Hugo Sarazen)
Both see AI as a means to improve learning outcomes: Hugo through adaptive, modality‑specific tutoring and role‑play simulations, Aidan through reasoning chains and source‑cited answers. They differ on the primary mechanism—personalisation vs reasoning/auditability—but concur on AI’s positive educational impact [145-146][374-382].
Speakers: Hugo Sarazen, Aidan Gomez
Personalised, adaptive tutoring can sustain engagement (Hugo Sarazen) AI reasoning and retrieval‑augmented generation improve depth of learning (Aidan Gomez)
Both value the human element in education—Hugo highlights teachers’ narrative power, Debbie stresses universities’ role in developing self‑knowledge and critical thinking. They differ on whether AI should primarily augment teachers or be used to deliver content at scale, yet share the objective of preserving human‑centric learning [184-188][57-63].
Speakers: Hugo Sarazen, Debbie Prentice
Teachers as storytellers and mentors; AI should augment, not replace them (Hugo Sarazen) Universities cultivate critical thinking and self‑knowledge beyond content delivery (Debbie Prentice)
Takeaways
Key takeaways
In an AI‑rich environment, human attention and critical thinking become the scarcest resources; deep mastery and trust in information are also under threat. AI can transform learning through personalised, adaptive tutoring, role‑play simulations and real‑time skill measurement, but it must be coupled with rigorous testing to verify true understanding. Explainability, source citation and retrieval‑augmented generation are essential to build trust in AI‑generated knowledge. Human educators remain vital as mentors, storytellers and judges of AI output; AI should augment rather than replace them. There is a growing mismatch between university curricula and labour‑market needs; AI can accelerate creation of relevant programs and provide measurable ROI for corporate learning. The traditional university degree bundles education, accreditation and a rite‑of‑passage; AI may prompt a re‑evaluation and possible unbundling of these components, though universities still offer broader societal value.
Resolutions and action items
Udemy will pivot toward an AI platform for workforce reskilling, incorporating AI‑driven assessment, adaptive learning paths and role‑play simulations. Cohere will continue to focus on secure, on‑premise deployment of large language models for enterprises, including education‑related use cases. Both companies acknowledge the need to develop specialised, trusted models that can cite sources and provide explainability; research and product work in this area was proposed. Aidan highlighted the need for new testing frameworks and teaching‑performance benchmarks that evaluate both AI‑assisted and unaided competence. Panelists agreed to emphasise curricula that teach critical‑question formulation and verification skills, positioning these as core educational outcomes.
Unresolved issues
How to reliably assess and certify critical‑thinking ability and trustworthiness of AI outputs at scale. Effective policies for integrating AI into physical classrooms while preventing misuse and cheating. Standardised methods for measuring ROI of corporate learning programmes beyond completion metrics. Designing credentialing systems that reflect applied knowledge and remain relevant across white‑, gray‑ and blue‑collar jobs. Ensuring junior employees develop the ability to evaluate AI recommendations, given senior staff may rely heavily on AI. Robust detection of AI‑generated work and fair proctoring solutions; current tools are unreliable.
Suggested compromises
Use AI to personalise and engage learners while keeping human teachers in the loop for storytelling, motivation and contextual guidance. Combine AI‑driven front‑end (question formulation) and back‑end (verification, explainability) with human oversight for the middle reasoning stage. Adopt a dual testing approach: assess knowledge without AI to ensure foundational understanding, and also evaluate proficiency in using AI as a tool. Blend AI‑enabled skill‑specific training with traditional university education that cultivates critical thinking and self‑knowledge. Develop specialised, trusted AI models for particular domains while maintaining broader, open models for general use.
Thought Provoking Comments
“When you have a wealth of information, you have a poverty of attention.” (Herbert Simon quote)
Highlights that the real scarcity in the AI era is not information but human attention, reframing the core challenge from data access to cognitive capacity.
Shifted the conversation from what knowledge is scarce to how attention limits learning. It prompted Hugo and others to discuss attention‑related solutions (personalisation, AI‑driven engagement) and set up later debates about whether AI exacerbates or alleviates the attention problem.
Speaker: Hugo Sarazen
“LLMs can fool you into thinking you understand something when you don’t… testing is essential – you need to take the tool away to see what the human alone understands and has retained.”
Identifies a concrete risk of AI—false sense of mastery—and proposes a concrete mitigation (removing the tool for assessment), moving the discussion from abstract concerns to actionable educational practice.
Prompted a deeper dive into assessment methods, leading to later exchanges about rigorous testing, AI‑generated text detection, and the need for new benchmarks. It also influenced Hugo’s remarks on role‑playing simulations as a way to test competence.
Speaker: Aidan Gomez
“Self‑knowledge for the learner… we don’t know if we’ve mastered, if we’re interested, if we get it. Those cues are no longer useful.”
Introduces the idea that learners lack meta‑cognitive feedback in an AI‑rich environment, expanding the debate beyond content delivery to learner self‑awareness.
Steered the panel toward discussing how AI could provide or hinder self‑assessment, linking to Hugo’s Bloom two‑sigma discussion and Aidan’s emphasis on testing. It also set up the later focus on critical thinking as a way to regain self‑knowledge.
Speaker: Debbie Prentice
“The Bloom two‑sigma problem shows one‑on‑one coaching yields dramatically higher learning, but economics prevented scaling. AI can give us that personalised tutoring at scale.”
Connects a classic educational research finding with current AI capabilities, offering a concrete vision of how AI could solve a long‑standing scalability issue.
Created a turning point where the conversation moved from problem‑statement to solution‑orientation, leading to discussions of AI‑driven adaptive learning, role‑play simulations, and the potential for AI tutors to replicate the benefits of individualized instruction.
Speaker: Hugo Sarazen
“LLMs now have an internal monologue – a reasoning chain‑of‑thought – and can be coupled with retrieval‑augmented generation to cite sources, giving auditability.”
Introduces a technical advancement that directly addresses the earlier concern about explainability and trust, showing how AI can become more transparent.
Redirected the debate from distrust of black‑box models to concrete pathways for explainability, influencing Hugo’s later remarks about the need for specialized, trusted models and prompting audience questions about testing and verification.
Speaker: Aidan Gomez
“The university degree is a bundle – a rite of passage, accreditation, research community – but AI may force us to unbundle those components.”
Challenges the traditional higher‑education paradigm by suggesting that AI could separate credentialing from knowledge delivery, opening a strategic discussion about the future role of universities.
Prompted a broader reflection on the purpose of elite institutions versus online platforms, leading to audience questions about cost, relevance, and the gap between online education and accredited degrees. It also set the stage for the final discussion on the societal value of universities.
Speaker: Hugo Sarazen
“Testing with and without the calculator – we should have a gold‑standard test where the tool is removed, but also recognise using the tool is a skill itself.”
Frames assessment as a dual‑track problem: measuring pure knowledge and measuring effective tool use, thereby enriching the conversation about how to evaluate learning in an AI‑augmented world.
Influenced the audience’s follow‑up on AI‑generated text detection and sparked a nuanced dialogue about integrity, proctoring, and the design of new benchmarks for AI‑assisted learning.
Speaker: Aidan Gomez
Overall Assessment

The discussion’s trajectory was shaped by a handful of pivotal insights that moved it from a broad framing of ‘knowledge dilemmas’ to concrete educational strategies. Hugo’s attention‑scarcity observation reframed the problem space, while Aidan’s warning about false mastery and his proposal for tool‑free testing introduced a practical countermeasure. Debbie’s focus on self‑knowledge added a meta‑cognitive layer, prompting the panel to consider learner awareness. Hugo’s reference to the Bloom two‑sigma problem offered a historic solution that AI could finally scale, steering the conversation toward personalized tutoring and role‑play simulations. Aidan’s technical update on chain‑of‑thought reasoning and retrieval‑augmented generation provided a tangible path to explainability, directly addressing earlier trust concerns. Finally, Hugo’s challenge to the bundled university degree model opened a strategic debate about the future of higher education. Together, these comments acted as turning points that deepened analysis, redirected the dialogue toward actionable solutions, and shaped the overall narrative from problem identification to envisioning a re‑imagined learning ecosystem.

Follow-up Questions
How can we improve explainability of AI models to build trust and support learning?
Explainability is essential for users to understand where answers come from, which impacts trust and the ability to learn from AI-generated content.
Speaker: Hugo Sarazen
How can enterprises effectively measure the ROI of learning initiatives, especially with AI-driven tools?
Understanding ROI is critical for justifying investments in learning technologies and ensuring they deliver measurable business value.
Speaker: Hugo Sarazen
What methods can be used to deliver adaptive, bite‑size, in‑flow learning and measure skill deployment in real time?
Adaptive learning aligned with work tasks can improve relevance and efficiency, but requires tools to track skill usage continuously.
Speaker: Hugo Sarazen
How can AI‑enabled technology help motivate learners, especially when human instructors are not present?
Motivation is a key driver of effective learning; identifying AI‑driven ways to sustain it could enhance outcomes in workplace and online settings.
Speaker: Anna Van Niels (audience)
What should be the role of AI in physical classrooms, and how should policymakers respond to calls for banning versus allowing AI?
Clarifying AI’s place in traditional education informs policy decisions and balances innovation with concerns about misuse.
Speaker: Nathaniel (audience)
How can we create and deliver applied knowledge that enables people across white‑, gray‑ and blue‑collar jobs to earn a livelihood, bridging the gap between academia and industry?
Addressing skill mismatches and providing practical, market‑relevant knowledge is vital for employment and economic stability.
Speaker: Pranjal Sharma (audience)
How can universities teach students to apply critical thinking—fact‑checking, logical, scientific, and ethical evaluation—to instant AI‑generated answers?
Ensuring students don’t accept AI outputs uncritically preserves deep learning and intellectual rigor.
Speaker: Debbie Prentice
What tools or frameworks can be developed to evaluate the level of critical thinking that learners possess when they enter the workforce?
Assessing critical‑thinking skills post‑training is necessary to gauge effectiveness and inform future curriculum design.
Speaker: Hugo Sarazen
How can we benchmark the teaching capability of AI models to ensure they are effective educators?
Standardized benchmarks would allow comparison of AI tutors, driving improvements and ensuring educational quality.
Speaker: Aidan Gomez
How should testing be designed— with or without AI tools (the “calculator”)—to detect cheating and assess true competence?
Balancing tool‑assisted assessment with integrity safeguards is crucial for fair evaluation of learner abilities.
Speaker: Kian (audience)
What research is needed to develop specialized, trusted AI models and improve explainability for high‑stakes applications?
Specialized, transparent models are required for domains where accuracy and source verification are paramount.
Speaker: Hugo Sarazen
How can reasoning models (internal monologue) and Retrieval‑Augmented Generation (RAG) be advanced to provide auditability and source citation in AI responses?
Enhancing reasoning and citation capabilities would increase confidence in AI outputs and support learning verification.
Speaker: Aidan Gomez
What processes are needed to accurately identify emerging skill demands before building AI‑driven reskilling solutions?
Skill identification guides the development of relevant training programs and ensures alignment with labor market needs.
Speaker: Aidan Gomez
How can the performance of AI teaching models be continuously tracked and improved over time?
Ongoing performance monitoring is essential for maintaining educational effectiveness and adapting to learner feedback.
Speaker: Aidan Gomez
How can organizations train junior employees to develop critical thinking about AI outputs, given that senior staff may already possess this ability?
Ensuring the next generation can evaluate AI critically prevents skill erosion and supports long‑term workforce competence.
Speaker: Audience member (unnamed)
What are the gaps between online education platforms (e.g., Udemy) and accredited elite colleges, and is there market demand for unbundling the traditional college experience?
Understanding these gaps informs strategic decisions about the future of higher education and potential hybrid models.
Speaker: Audience member (unspecified)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden

Session at a glanceSummary, keypoints, and speakers overview

Summary

The opening remarks introduced the AI Impact Summit and highlighted the value of keynote sessions for deepening understanding of artificial intelligence challenges and opportunities [1-8]. Speaker 1 then welcomed Her Excellency Ebba Bush, Deputy Prime Minister of Sweden, noting Sweden’s reputation as an innovation powerhouse and its role at the nexus of energy policy and AI infrastructure [9-13].


In her address, Bush said she would discuss why the summit matters, the need for public legitimacy, and the importance of cooperation for AI sovereignty [17-19]. She emphasized India’s status as the world’s largest and youngest democracy and argued that the Global South must be fully included in shaping technology governance and global standards [19-21][26-28]. Bush recalled the historical reaction to the printing press-fear, loss of control, and job anxiety-to illustrate the recurring emotional curve that accompanies disruptive technologies [45-53][55-58]. She argued that AI is not merely an algorithmic upgrade but a fundamental shift involving energy, compute capacity, data, and trust, and that nations mastering AI infrastructure will determine future economic growth and democratic resilience [60-66][68-71].


Turning to data centers, she described them as energy-intensive “invisible” facilities that can nevertheless become long-term local job anchors, support renewable investment, and serve hospitals, research, defense and industry [76-84][85-88]. Bush warned that citizens vote for tangible outcomes, not technology, and that policymakers must translate AI’s complexity into benefits to earn trust [90-94][95-96]. She outlined three pillars of true AI sovereignty: jurisdictional control of data, sovereign compute capacity, and strategic choice of partners [102-105].


Sweden, she said, offers abundant clean energy that reduces AI training carbon footprints, deep industrial expertise in scaling complex systems, and trusted institutions that can move from strategy to execution [116-124][125-133][134-136]. She highlighted Europe’s critical role through companies such as ASML, ARM, SAP and Ericsson, noting that collaboration with Europe is essential for a robust AI stack [127-131]. Bush concluded that the partnership between India’s scale and Sweden’s precision can create inclusive AI that strengthens democracy, drives sustainable growth and expands opportunity [146-152]. She urged leaders to make AI legitimate, understandable and beneficial so that it will be embraced like electricity rather than feared like the printing press, calling for a collaborative, open and democratic AI future [152-158].


Keypoints

Major discussion points


AI legitimacy requires public understanding and trust.


The speaker draws a parallel between the printing press and today’s AI, noting that every major technology first provokes fear, then gains trust, legitimacy, and finally transforms society - a pattern we are now experiencing with AI [45-58]. She stresses that AI is a “fundamental shift” that must be made understandable to avoid the fear that once surrounded the printing press [60-66].


AI sovereignty is about strategic cooperation, not isolation.


No single nation can build resilient AI infrastructure alone; democracies must cooperate and choose their dependencies wisely. True sovereignty rests on three pillars: jurisdictional control of data, sovereign compute capacity, and strategic partner selection [97-106].


Sweden-India partnership leverages complementary strengths.


Sweden offers abundant clean energy, deep industrial expertise, and trusted institutions that can power low-carbon AI training [116-124]; India contributes massive scale, diverse data, and sovereign AI models that serve 1.4 billion people [146-151]. Together they aim to combine “scale with engineering excellence” to build trustworthy, inclusive AI systems.


Data centers are both a challenge and an opportunity.


While AI-driven data centers are energy-intensive and can impact local environments, they can also become long-term job anchors, enable renewable investments, and support critical services such as hospitals and research [76-88].


Overall purpose / goal


The discussion seeks to rally international leaders around a shared vision of responsible AI development: establishing legitimacy and public trust, defining a cooperative model of AI sovereignty, and forging a concrete Sweden-India partnership that pairs clean-energy-rich, trustworthy European capabilities with India’s scale and societal reach. The ultimate aim is to shape a democratic, inclusive, and sustainable global AI industry.


Tone of the discussion


– The opening is formal and celebratory, welcoming the keynote and emphasizing the significance of the summit [1-9].


– It then shifts to a reflective and cautionary tone, using historical analogies to warn of fear and misunderstanding [45-58].


– The speaker adopts a strategic and collaborative tone, outlining cooperation, sovereignty, and the specific strengths each country brings [97-106][116-124][146-151].


– The concluding remarks become optimistic and inspirational, urging collective action to make AI “empowering” rather than feared [152-157].


Overall, the tone moves from courteous introduction to thoughtful analysis, then to constructive partnership framing, and finally to hopeful exhortation.


Speakers

Speaker 1


– Role/Title: Moderator / Event host (introducing the keynote speaker)


– Area of Expertise: Not specified


Ebba Bush


– Role/Title: Deputy Prime Minister and Minister for Energy, Business, and Industry of Sweden


– Area of Expertise: AI policy, energy infrastructure, business and industry strategy, AI sovereignty


Additional speakers:


(none identified)


Full session reportComprehensive analysis and detailed insights

The AI Impact Summit opened with Speaker 1, who thanked the previous keynote and noted how such sessions deepen participants’ grasp of artificial-intelligence challenges and opportunities [1-8]. He highlighted the growing public awareness of AI and the specific difficulty of integrating AI-driven data centres into national power grids, before introducing the next speaker – Deputy Prime Minister of Sweden, Her Excellency Ebba Bush – and underscoring Sweden’s reputation as an “innovation powerhouse” that sits at the crossroads of energy policy and AI infrastructure [9-13].


Bush began with a warm greeting, saying “Namaste, ap kärsahein.” [14] She outlined the three themes of her address – “why it is important to be here, reflections on public legitimacy, and cooperation and AI sovereignty” [14-19]. Framing India as the world’s largest and youngest democracy and a leading voice in shaping the future global order, she argued that the Global South must be fully represented in the development of technology-governance standards [19-21][26-28]. She also noted Sweden’s sizeable delegation – the second-largest after France – as evidence of a strategic, long-term partnership between the two regions [22-24].


Drawing a historical parallel, Bush recalled the reaction to the printing press in the 15th century, noting that fear, concerns about loss of control and job displacement were the initial responses [45-53]. She argued that, as with the press, the “danger” lies not in the technology itself but in a lack of understanding, and that societies typically move through a cycle of fear, trust, legitimacy and finally worldwide transformation [55-58]. This analogy set the stage for her claim that artificial intelligence is a “fundamental shift” that extends beyond algorithms to encompass energy, compute capacity, data and trust [60-64]. She warned that nations that master AI infrastructure will dictate future economic growth, industrial competitiveness and democratic resilience, whereas those that merely consume externally built AI will fall behind [65-71]; she phrased this as “the future will be decided by those that build the largest, most trusted AI models.” [65-71]


Addressing the practical implications of AI, Bush described data-centres as “invisible” yet highly energy-intensive facilities that often provoke public opposition because they appear to consume electricity without clear local benefit [76-84]. She counter-pointed this perception by outlining how, if properly managed, data-centres can become long-term job anchors, stimulate renewable-energy investment and support critical services such as hospitals, research, defence and industry [85-88]. This reflects the two speakers’ different emphases: Speaker 1 foregrounds the grid-stress narrative [12], while Bush highlights the socioeconomic-benefit narrative [85-88].


Bush then stressed that citizens vote for tangible outcomes, not for technology per se, and that policymakers must translate AI’s complexity into understandable benefits to convert fear into trust [90-96]. She linked this requirement to the broader concept of AI legitimacy, echoing the policy-level emphasis on transparency, explainability and stakeholder inclusivity found in contemporary AI-governance literature [S49][S50][S17].


To operationalise legitimacy, Bush presented a three-pillar model of true AI sovereignty: (i) jurisdictional control over where data are stored and processed; (ii) sovereign compute capacity for advanced models; (iii) strategic partner choice based on strength rather than dependency [97-105]. She argued that no single democracy can build resilient AI infrastructure alone, and that cooperative frameworks among like-minded democracies are essential [97-99].


She enriched her economic framing with cultural metaphors, describing the data-ocean as “samudra” and AI as the “churning rod” that yields “amrit, the nectar of progress” [38-42]. She also asserted, “We must not see AI as a replacement for the human spirit, but as a power multiplier for human dignity.” [120-122] Sweden, she said, is “a sort of pathfinder, helping define the routes that will shape global AI infrastructure for decades.” [135-137]


Sweden’s clean-energy advantage was then detailed: the country exports more electricity per capita than any other European nation and can run AI-training workloads with roughly one-third the carbon footprint of a typical US hyperscaler [116-120]. Coupled with deep industrial expertise in scaling complex systems, this enables Sweden to modernise traditional industry while constructing new AI infrastructure [122-124]. Swedish institutions are portrayed as highly trustworthy, allowing swift movement from strategy to execution [134-136].


India, by contrast, contributes massive scale, rapid digital deployment and sovereign AI models that reflect the linguistic and cultural diversity of its 1.4 billion-person population [146-151]. Bush argued that combining India’s “engine” of scale with Sweden’s “filter” of precision and trust will produce inclusive AI that empowers farmers, small businesses, teachers and doctors, thereby delivering a societal transformation rather than mere technological innovation [147-150][148-150].


She reinforced the European dimension by citing key players in the AI stack: ASML’s extreme-ultraviolet lithography machines, ARM’s processor architectures, SAP’s enterprise systems and Ericsson’s leadership in 5G and a frontrunner in 6G [127-131]. This underscores the necessity of European collaboration to complement Swedish capabilities [125-133].


Sweden’s policy commitments were outlined next. During the current parliamentary term, substantial funds have been earmarked for AI research, development and implementation, and a high-ambition AI strategy has been launched to chart concrete steps toward sustained leadership [140-144]. An AI workshop targeting the public sector aims to foster safe and efficient adoption, while AI “gigafactories” in the Nordics are being built with near-zero-carbon energy, leveraging political stability, rule of law and a culture of trust [136-138][139-144]. Moreover, “Sweden offers Europe and all of our global partners what the AI transition actually needs.” [130-132] These actions translate rhetoric into measurable initiatives, aligning with broader calls for transparent, accountable AI governance [S18][S10].


In her concluding remarks, Bush reiterated that fear stems from misunderstanding, but once people see value they will defend AI [152-154]. She urged leaders to move beyond regulation toward making AI legitimate, understandable and beneficial, envisioning a future where AI is embraced like electricity – invisible, indispensable and empowering [155-157]. The speech closed with a call for a collaborative, open, competitive, democratic and inclusive AI ecosystem [158].


Agreements and points of contention – Both speakers concur that AI’s disruptive nature demands public legitimacy and trust [6][45-58]; however, they differ in emphasis, with Speaker 1 focusing on grid-stress challenges [12] and Bush emphasizing the socioeconomic benefits of data-centres [85-88]. Additionally, while Speaker 1 suggests that knowledge-sharing sessions are a primary route to legitimacy, Bush stresses sovereign infrastructure and strategic partnerships as the core mechanism [7-8][97-105].


Thought-provoking comments – Bush’s repeated analogies – the printing-press emotional curve, AI as “control of energy, compute capacity, data and trust”, the “samudra” metaphor and the “amrit” churning rod – each reframed abstract concerns into concrete policy lenses, steering the audience from hype to a nuanced understanding of AI’s societal role [45-58][60-64][38-42][41-42].


Open questions – The address raised several unresolved issues, including how to operationalise the three pillars of AI sovereignty, how to balance data-centre energy demand with grid stability, and how to design metrics that translate AI complexity into tangible public benefits [97-105][90-96][85-88]. These questions align with broader research agendas on explainable AI, digital sovereignty and sustainable AI infrastructure [S49][S55][S56].


Session transcriptComplete transcript of the session
Speaker 1

Thank you so much, Mr. Cristiano Amon, for that very, very interesting session. And I’m sure each one of us must have gained something, some new insights out of it. Are you all excited about such sessions, such keynote speakers? Louder yes would do better. Thank you. I think we all keep reading about AI. We all are aware of the challenges in front of the world when it comes to AI. But, capital but, B -U -T, such sessions are actually adding such new perspectives to our understanding of AI, the challenge, and also the future, what to expect in future. So I think it’s really time to thank our keynote speakers who are adding such great value to our understanding of artificial intelligence, as well as to this AI Impact Summit.

And ladies and gentlemen, now, it’s my honor to invite Her Excellency, Ebba Bush, Deputy President of the United States, and the President of the United States, Prime Minister and Minister for Energy and Business, Sweden. Sweden has long been a quiet powerhouse of innovation, from Ericsson to Spotify to some of Europe’s most promising AI startups. As Deputy Prime Minister, Ms. Ebba Bush is navigating the critical nexus between energy policy and AI infrastructure. Now, that’s a challenge I think every nation will face as the data centers demand ever -growing share of national power grids. Ladies and gentlemen, please join me in welcoming Deputy Prime Minister of Sweden, Her Excellency, Ebba Bush.

Ebba Bush

Thank you so much, Excellencies, distinguished guests, dear friends. Namaste, ap kärsahein. And let me begin by expressing my sincere gratitude I am very grateful to the European Council for the towards the government of India and to the organizers of this important summit. It is truly an honor to be here in beautiful, beautiful India. Given this unique chance to address you all today, I would like to talk about three points. About why, first of all, it is important to be here, some reflections on public legitimacy, and finally, about cooperation and AI sovereignty. India today is not only the world’s largest democracy, it is a leading voice in shaping the future global order. Your leadership matters, your perspective matters, and the Global South must be fully included when we shape the rules of innovation, technology governance, and global standards.

I am here today as a European, as a proud European, and I am here today as a European, as a proud European, and I am here today as a European, as a proud European, and I am here today as a European, as a proud European, and I am here today as a European, as a proud European, and I am here today as a European, as a proud European, and I am here today as a European, as a proud European, Swede that represents the second largest international delegation here at the AI Summit after France. That’s worth an applaud in itself. Thank you so much. The Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic.

It is long -term and built on trust. India is not only the world’s largest democracy, it is also the world’s youngest democracy. And I am impressed with the long -term vision of India for a better life for young people, with a commitment that stretches across generations. And Sweden shares this long -term commitment. Since India first gained independence, Swedish companies have worked alongside Indian partners, and we have grown together. And as India makes strategic investments in sovereign and democratic development, we have developed different AI models and advanced research ensuring that 1 .4 billion people can benefit from AI. This is not only industrial policy. It is in many ways poverty reduction. It is empowerment. It is development leap of historic proportions.

Sweden intends to be a reliable and innovative partner as India continues its economic rise. Prime Minister Modi often talks about and speaks of India as Vishmamitra, a friend to the world. Today, we stand at a new frontier where that friendship is more vital than ever, the frontier of artificial intelligence. Sweden is a proud friend of India. In the ancient scriptures, we read of the Samudramanta. The churning of the cosmic ocean. It teaches us that collaboration is the only way to truly unlock the deepest treasures. Today, the vast ocean of data is our samudra and AI is our churning rod. Clearly. Thank you. So as you understand, clearly, there are very, very good reasons why we are here and why this summit is taking place in India, in New Delhi.

And that brings me to the point of legitimacy. In 1450, with modern time telling, when the printing press was introduced, the reaction from the status quo was not excitement. It was fear. Power had long depended on being able to control information and suddenly knowledge could scale. And if you look back at the argument. That we heard that. They’re a bit familiar, actually. This will spread the wrong ideas. People won’t know what to trust. Society will lose control. And people, especially writers at the time, will lose their jobs. But the printing press wasn’t dangerous. What was dangerous was not understanding it. Those who understood it could soon reach a nation in only two weeks and a whole continent possibly in two months.

Every major technological shift follows the same sort of emotional curve. It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation. We are now living through another such moment. And artificial intelligence isn’t just another digital upgrade. It is a fundamental shift. AI is no longer about algorithms alone. It is about control of energy, compute capacity, data, and trust. Nations that master AI infrastructure will shape economic growth, industrial competitiveness, and democratic resilience for decades. It’s going to make a massive shift. Make no mistake, we are not digitalizing the old economy. We are building an entirely new global AI industry, one that will redefine the foundations of productivity, of healthcare, of defense, energy systems, and of course also public administration.

The nations that lead this transformation, they will prosper. Those that merely consume AI built elsewhere will fall behind. The future will not be decided necessarily by the ones that builds the biggest models. But rather, the future will be decided by the ones that build the biggest models. the ones that build the most trusted systems. So for me, the question is not whether or not this transformation will happen. The question is who shapes it and on what values. And that is why I am here. So let’s talk a little bit about something else that is often misunderstood. Data centers. Because AI, much like fire, it is powerful. And in this sense, it is invisible. And it is very energy intensive.

Demanding of energy intense data centers, often on the countryside, rupturing forests and fields. To many citizens, data centers look like someone else’s internet using our electricity. At least that’s the debate in Sweden and I know in many other countries. But I believe that that’s the debate. And I think that’s the debate. That perception is incomplete. In reality, they can be long term. local job anchors if implemented and used correctly. They can enable renewable energy investments. They can be infrastructure for hospitals, for research, defense and industry. And they are the factories of the new economy. And this brings us to the core political challenge. People do not vote for technology. People vote for outcomes. A job, a hospital that works, energy they can afford.

If AI is to become electable in our democracies, policymakers must find a way to translate complexity into tangible benefit. Fear turns into trust when we understand. And when understanding grows. So how do we get there? No nation can build resilient AI infrastructure alone. Democracies have to cooperate. AI sovereignty does not mean isolation. It means choosing your dependencies. To be able to choose our dependencies and the values that shape global AI, we also need a measure of sovereignty over AI. True sovereignty, the way I see it, rests on three pillars. First, jurisdictional control, knowing where your data is stored and processed. Second, infrastructure capacity, having sovereign compute for advanced models. And third, strategic choice, selecting partners from a place of strength, not dependency.

And in a turbulent world, you need to choose your friends carefully. Sweden is choosing India. India provides the incredible scale and speed, the very engine of this movement. Europe and Sweden can provide precision and trust, the filter that ensures that what we extract is the amrit, the nectar of progress for all. Just as Lord Vishwakarma unified divine vision with practical tools, we must unify the human heart with machine power. We must not see AI as a replacement for the human spirit, but as a power multiplier for human dignity. And when we combine India’s digital scale with Sweden’s systematic trust, we do more than build code. We build a future where technology never outweighs. Sweden offers Europe and all of our global partners what the AI transition actually needs.

needs. So now you’ll have a little bit of Swedish bragging, which is not that very common. But first of all, we have an abundant of clean and reliable energy. We export more electricity per capita than any other European country. AI is becoming the most efficient way to export energy without exporting electrons. In Sweden, AI training can run a roughly one third of the carbon footprint of a typical US hyperscaler operations. This transforms us from energy exporter to intelligence exporter, a fundamentally more valuable position. But energy alone is not enough. And that brings me to the second Swedish strength, industrial depth. Sweden has deep expertise in scaling complex industrial systems. We are modernizing traditional industry while building new AI.

Infrastructure. And Europe cannot be underestimated. You cannot bypass the European Union in the AI stack. Consider just ASML in the Netherlands, the only company in the world producing extreme ultraviolet lithography machines essential for advanced ships. Or ARM in the United Kingdom, whose architectures power most of the world’s smartphones and an increasing share of data center properties. Processors. Or SAP in Germany, embedded in the mission -critical enterprise systems of the global economy. And of course, Ericsson from Sweden, a global leader in 5G and a frontrunner in 6G, the backbone of edge computing and AI -enabled networks. You cannot build the AI ecosystems with Europe. And you shouldn’t, because we’ll be a reliable partner. Third, but not least, trusted institutions.

When you make a deal with a Swede, that is a handshake that you can trust. And Sweden offers the ability to move from strategy to execution. In the Nordics, Sweden, Norway, Finland and Denmark, we are now building AI gigafactories, manufacturing intelligence at industrial scale with near zero carbon energy. We combine clean power, political stability, rule of law, technological sophistication and a culture of trust. We see ourselves as a sort of pathfinder, helping define the routes that will shape global AI infrastructure for decades. At the same time, we are making strategic commitments. During this parliamentary term, We have committed a substantial amount of funds to AI research, AI development and implementation, therefore ensuring that Sweden seizes the economic and societal benefits of this transformative technology.

Building on that foundation, we are today presenting in Sweden an AI strategy with high ambitions. The strategy will outline concrete steps that will steer Sweden towards sustained AI leadership. Our strategy not only demonstrates the scale of current commitment, but also maps a path forward for Sweden’s future. And we have launched an AI workshop to help public sector adopt AI safely and efficiently, because trust is built not by slogans, but by implementation. And this implementation brings me back to India. India understands scale. India understands development. Your investments in sovereign AI models ensures that AI speaks all of your languages, reflects your society and serves your people. This is what real inclusion truly looks like. When 1 .4 billion people gain access to AI tools that empower farmers, small businesses, teachers and doctors, that is not just innovation, that is transformation.

Information partnerships between India and Sweden combine scale with engineering excellence, market dynamism with institutional trust. Together, we can ensure AI strengthens democracy, drives sustainable growth and expands opportunity. I’d like to sum up by saying people fear what they do not understand. But what people understand and see value in. They will defend. Our task as leaders is not merely to regulate AI, it is to make it legitimate, to make it understandable, and most importantly, to make it beneficial. If we succeed, AI will not be feared like the printing press. It will be embraced like electricity, invisible, indispensable, but empowering. Let us shape this new industry together, open, competitive, democratic, and inclusive. The future of AI must empower our people and

Related ResourcesKnowledge base sources related to the discussion topics (32)
Factual NotesClaims verified against the Diplo knowledge base (8)
Confirmedhigh

“Speaker 1 thanked the previous keynote and introduced the next speaker, performing a standard event transition.”

The knowledge base records that Speaker 1 performed a standard event transition, thanking the previous speaker and introducing the next keynote presenter [S81] and also expressed gratitude to the organizers [S82].

Confirmedhigh

“Deputy Prime Minister Ebba Bush opened her remarks with the Hindi greeting “Namaste, ap kärsahein.””

The source notes that Deputy Prime Minister Bush opened her remarks with a greeting in Hindi, specifically “Namaste, ap kärsahein” [S6].

Confirmedhigh

“Sweden’s delegation was the second‑largest after France at the AI Impact Summit.”

The knowledge base states that Sweden represented the second-largest international delegation at the AI Summit, behind France [S6].

Confirmedhigh

“Bush compared the reaction to the 15th‑century printing press to current AI concerns, noting fear, loss of control and job displacement as the initial responses.”

The source describes the historical reaction to the printing press as fear, concerns about loss of control and job displacement, matching Bush’s analogy [S90].

Confirmedhigh

“Nations that master AI infrastructure will dictate future economic growth and competitiveness, while those that merely consume externally built AI will fall behind; the future will be decided by those that build the largest, most trusted AI models.”

Ebba Busch’s keynote contains the same message: nations that lead the AI transformation will prosper, those that only consume AI built elsewhere will fall behind, and the future will be decided by the ones that build the biggest, most trusted models [S5].

Additional Contextmedium

“Data‑centres are highly energy‑intensive infrastructure that affect a country’s AI capacity and require both technical and human resources.”

The knowledge base links AI capacity to access to compute, data-centres, and other infrastructure, emphasizing that capacity depends on both technical resources and people’s ability to make informed decisions [S94].

Additional Contextmedium

“Societies typically move through a cycle of fear, trust, legitimacy and then worldwide transformation when confronting new technologies.”

The source notes that public perception of technology is dominated by negative aspects, mistrust and fear, which adds nuance to the described fear-trust-legitimacy cycle [S92].

Additional Contextmedium

“Artificial intelligence represents a fundamental shift that goes beyond algorithms to include energy, compute capacity, data and trust.”

The knowledge base highlights that AI capacity is tied to infrastructure such as compute power and data-centres, underscoring the broader dimensions of AI beyond pure algorithms [S94].

External Sources (96)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Ebba Bush- Deputy Prime Minister and Minister for Energy, Business, and Industry of the Kingdom of Sweden -Sweden- Rep…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — This transcript contains a single keynote speech by Deputy Prime Minister Ebba Bush with only brief introductory comment…
S7
Webinar – session 1 — Emerging technologies, particularly artificial intelligence, were identified as presenting both opportunities and challe…
S9
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to talk in this very impor…
S10
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S11
Skilling and Education in AI — In two significant areas, one is in agriculture, which is the highest employer, biggest employer anywhere. It’s also one…
S12
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “AI governance faces a fundamental challenge”[17]. “Trust in public administration is built on fairness and safety”[44]….
S13
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -AI’s Massive Energy Demands and Infrastructure Challenges: The discussion highlighted that AI data centers are becoming…
S14
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S15
Resilient infrastructure for a sustainable world — Cross-sectoral collaboration necessary because no single actor can make society resilient alone
S16
Opening & Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — – Ebba Busch, Deputy Prime Minister of Sweden Wavel Ramkalawan: Mr. President of the General Assembly, distinguished g…
S17
AI diplomacy — Finally, we must insist on transparency. Much of the work today is focused on solving the “black box” problem by creatin…
S18
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S19
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S20
Keynote-Jeet Adani — Distinguished global leaders, innovators and friends, good afternoon and namaste. We gather here today at a decisive inf…
S21
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S22
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen articulated sovereignty as “having choice in partnerships, not being forced into dependencies,” emphasizing st…
S23
Keynote by Marcus Wallenberg Chairman SEB & Saab — Marcus Wallenberg delivered a comprehensive discussion on AI development and the potential for Sweden-India collaboratio…
S24
NRIs MAIN SESSION: DATA GOVERNANCE — Liu Chuang:Yeah, ladies and gentlemen, that is a very good opportunity for me to share our information to you. I think t…
S25
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — Local data centers contribute to economic growth and job creation in developing countries. They provide opportunities fo…
S26
WS #43 States and Digital Sovereignty: Infrastructural Challenges — Korstiaan Wapenaar: Thank you very much, colleagues for having me. Thank you. in and for the opportunity to participa…
S27
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Muhammad Taufik: Well, certainly, I think central to any national oil company’s duty is to ensure energy security, a…
S28
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S29
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S30
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S31
Keynote-António Guterres — The moderator provides a ceremonial introduction of Antonio Guterres, highlighting his role as UN Secretary General and …
S32
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S33
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Government AI carries higher risks than private sector applications, including ethical, operational, exclusion, and publ…
S34
AI as critical infrastructure for continuity in public services — This comment provides a concrete, measurable example of how AI exclusion occurs, moving beyond abstract discussions of i…
S35
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — “AI governance faces a fundamental challenge”[17]. “Trust in public administration is built on fairness and safety”[44]….
S36
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S37
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues crea…
S38
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and sta…
S39
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and st…
S40
Shaping the Future AI Strategies for Jobs and Economic Development — -Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI depl…
S41
Shaping the Future AI Strategies for Jobs and Economic Development — Infrastructure and Energy Challenges: Significant discussion around the massive infrastructure requirements for AI deplo…
S42
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Ho emphasised that this represents a fundamental shift in venture capital focus, with investors now examining decades-ol…
S43
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biolo…
S44
The Intelligent Coworker: AI’s Evolution in the Workplace — Kallot advocates for countries to maintain control over critical citizen data by building sovereign AI systems. This inv…
S45
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — So I would start with a couple of technical aspects. So to James’ point, the model size is indeed not only the only thin…
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “Sweden has deep expertise in scaling complex industrial systems.”[33]. “And of course, Ericsson from Sweden, a global l…
S47
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — Bush described the Nordic countries as building “AI gigafactories” that manufacture intelligence at industrial scale wit…
S48
Smaller Footprint Bigger Impact Building Sustainable AI for the Future — Mensch highlighted the strategic advantage of training models in regions with clean energy, specifically mentioning trai…
S49
AI diplomacy — Finally, we must insist on transparency. Much of the work today is focused on solving the “black box” problem by creatin…
S50
AI as critical infrastructure for continuity in public services — Inclusivity of all affected stakeholders creates legitimacy and trust. Transparency, public comment periods and accounta…
S51
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — “It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation.”[45]. “Every major technologic…
S52
Artificial intelligence (AI) – UN Security Council — During another session, one speaker highlighted that”Technical explainability is crucial for ensuring transparency and a…
S53
Keynote-Jeet Adani — Distinguished global leaders, innovators and friends, good afternoon and namaste. We gather here today at a decisive inf…
S54
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S55
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen articulated sovereignty as “having choice in partnerships, not being forced into dependencies,” emphasizing st…
S56
Panel Discussion Data Sovereignty India AI Impact Summit — Summary:Both speakers agree that sovereignty should involve strategic partnerships and collaboration rather than complet…
S57
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Sovereignty doesn’t mean isolation – need cooperation, open science and shared global ethics
S58
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — She positioned the partnership as combining complementary strengths: India provides scale and speed (the engine), while …
S59
Keynote by Marcus Wallenberg Chairman SEB & Saab — Marcus Wallenberg delivered a comprehensive discussion on AI development and the potential for Sweden-India collaboratio…
S60
Keynote by Marcus Wallenberg Chairman SEB & Saab — He explained that Sweden has taken a research-focused approach to AI development through the WASP program, which his fam…
S61
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — – Dr. Toshikazu Sakano highlighted the opportunity for data centers to help grow local engineering talent. Ulandi Exner…
S62
Scaling AI for Billions_ Building Digital Public Infrastructure — I think as what happens with all technologies, and AI is no different in that sense. It is, of course, as we’ve been hea…
S63
The Geoeconomics of Energy and Materials/ DAVOS 2025 — Muhammad Taufik: Well, certainly, I think central to any national oil company’s duty is to ensure energy security, a…
S64
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S65
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S66
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S67
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S68
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S69
Pathways to De-escalation — The overall tone was serious and somewhat cautious, reflecting the gravity of cybersecurity challenges. While the speake…
S70
Diplomacy and technology: An introduction — We need to be careful with historicalanalogiesas they can be both useful and misleading.
S71
Language (and) diplomacy — A fourth function of historical analogies is as an ‘anti-depressant; a colourful imagery which neutralises a boring and …
S72
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S73
Any other business /Adoption of the report/ Closure of the session — It illustrates a nation that is both inward-looking concerning its policy developments and outwardly grateful for the ai…
S74
Conversation: 01 — The conversation maintained a consistently diplomatic and collaborative tone throughout. It was optimistic about AI’s po…
S75
Welcome address — The tone is formal, diplomatic, and consistently optimistic throughout. The speaker maintains an authoritative yet colla…
S76
Upskilling for the AI era: Education’s next revolution — The tone is consistently optimistic, motivational, and action-oriented throughout. The speaker maintains an enthusiastic…
S77
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S78
Keynote by Uday Shankar Vice Chairman_JioStar India — The tone is consistently optimistic and visionary throughout, beginning with congratulatory remarks and maintaining an i…
S79
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — The tone is consistently optimistic, confident, and inspirational throughout. The speaker maintains an enthusiastic and …
S80
Keynote by Uday Shankar Vice Chairman_JioStar India — Overall Tone:The tone is consistently optimistic and visionary throughout, beginning with congratulatory remarks and mai…
S81
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Speaker 1 performs a standard event transition, thanking the previous speaker and introducing the next keynote presenter…
S82
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Naveen Tiwari begins his presentation by expressing gratitude to the event organizers, specifically mentioning the AI Im…
S83
Powering AI Global Leaders Session AI Impact Summit India — -Speaker: Role/title not specified, appears to be a moderator or host introducing the session and thanking partners A n…
S84
Responsible AI for Shared Prosperity — -David Lammy- Deputy Prime Minister of the UK
S85
Keeping up with Smart Factories / DAVOS 2025 — Gan Kim Yong: Can I ask a question? Can I ask a question? Yeah, please. Maybe I can ask, Mr. Park. Mr. Deputy Prime…
S86
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ebba-busch-deputy-prime-minister-sweden — Infrastructure. And Europe cannot be underestimated. You cannot bypass the European Union in the AI stack. Consider just…
S87
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Thank you so much, Your Excellency Ashwini Vaishnav, and colleagues, ladies and gentlemen, friends of AI. First of all, …
S88
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — In addition, the analysis underscores the crucial role of development cooperation in shaping trade frameworks, building …
S89
https://app.faicon.ai/ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — Thank you. And I think with the regional cooperation integration agenda being also top of mind for ADB, I just ask my co…
S90
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ebba-busch-deputy-prime-minister-sweden — And that brings me to the point of legitimacy. In 1450, with modern time telling, when the printing press was introduced…
S91
Importance of Professional standards for AI development and testing — Havey believes that failures like the Post Office scandal result from poor implementation practices, inadequate testing,…
S92
Wrap up — Public perception of technology is dominated by negative aspects, and mistrust/fear of technology makes societies less s…
S93
Driving Indias AI Future Growth Innovation and Impact — These key comments fundamentally shaped the discussion by expanding it beyond technical infrastructure to encompass trus…
S94
AI Governance Dialogue: Steering the future of AI — Capacity is linked to being connected to infrastructure, of course. And that includes access to compute, data centers, a…
S95
WS #97 Interoperability of AI Governance: Scope and Mechanism — Olga Cavalli: Thank you very much about trust. as we know, artificial intelligence is based off a big amount of data,…
S96
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — Both speakers challenge traditional medical paradigms – Zhavoronkov argues aging should be classified as a treatable dis…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
3 arguments152 words per minute247 words96 seconds
Argument 1
AI sessions provide new perspectives on AI challenges and the future (Speaker 1)
EXPLANATION
Speaker 1 claims that the AI keynote sessions give participants fresh insights into the difficulties surrounding AI and help them anticipate future developments. The remark emphasizes the educational value of such forums for a better understanding of AI’s trajectory.
EVIDENCE
Speaker 1 says that such sessions are “adding such new perspectives to our understanding of AI, the challenge, and also the future, what to expect in future” [7] and notes that “We all are aware of the challenges in front of the world when it comes to AI” [6].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Webinar and policy sessions highlight AI opportunities, challenges, and the need for fresh perspectives, aligning with the claim [S7][S9][S13].
MAJOR DISCUSSION POINT
AI sessions provide new perspectives on AI challenges and the future (Speaker 1)
Argument 2
Global awareness of AI challenges highlights the need for legitimacy and public trust (Speaker 1)
EXPLANATION
Speaker 1 points out that everyone is increasingly aware of AI‑related challenges, implying that this awareness creates a demand for legitimate governance and public confidence in AI systems. The statement suggests that broad recognition of AI issues underpins calls for trustworthy frameworks.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN Security Council and Leaders’ Plenary emphasize transparency, accountability, and fairness as foundations for public trust in AI systems [S10][S12].
MAJOR DISCUSSION POINT
Global awareness of AI challenges highlights the need for legitimacy and public trust (Speaker 1)
AGREED WITH
Ebba Bush
Argument 3
AI‑driven data centers demand an ever‑growing share of national power grids, creating energy‑policy challenges (Speaker 1)
EXPLANATION
Speaker 1 warns that the rapid expansion of AI‑intensive data centres is putting increasing pressure on national electricity networks, turning energy policy into a critical issue for governments. This highlights the infrastructural strain caused by AI compute demands.
EVIDENCE
Speaker 1 observes that “the data centers demand ever-growing share of national power grids” and calls it “a challenge I think every nation will face” [12].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI Impact Summit India and Open Forum discussions note the massive electricity consumption of AI data centers and the resulting environmental and policy concerns [S13][S14][S12].
MAJOR DISCUSSION POINT
AI‑driven data centers demand an ever‑growing share of national power grids, creating energy‑policy challenges (Speaker 1)
AGREED WITH
Ebba Bush
DISAGREED WITH
Ebba Bush
E
Ebba Bush
5 arguments125 words per minute1934 words924 seconds
Argument 1
Sweden‑India partnership is strategic, long‑term, and built on trust (Ebba Bush)
EXPLANATION
Ebba Bush describes the collaboration between Sweden and India as a deliberate, enduring alliance founded on mutual confidence. She frames it as a cornerstone for joint AI development and broader strategic cooperation.
EVIDENCE
She states that “The Nordics are deeply engaged here in India, and we are here because we believe this partnership is strategic” and follows with “It is long-term and built on trust” [24-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Keynote remarks explicitly describe Sweden as a proud, reliable, long-term partner of India, and her plenary appearance reinforces this strategic relationship [S5][S16].
MAJOR DISCUSSION POINT
Sweden‑India partnership is strategic, long‑term, and built on trust (Ebba Bush)
Argument 2
AI transformation follows a fear‑trust‑legitimacy curve; societies must understand AI to embrace it (Ebba Bush)
EXPLANATION
Ebba Bush argues that, like past disruptive technologies, AI first provokes fear, then gains trust, and finally achieves legitimacy, after which it transforms societies. She stresses that public understanding is essential for acceptance and responsible adoption.
EVIDENCE
She draws a historical parallel with the printing press, describing a sequence of “fear, then trust, then legitimacy, and finally, a worldwide transformation” [45-58] and adds that “We are now living through another such moment” with AI representing a “fundamental shift” [59-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bush’s keynote draws a historical parallel with the printing press, outlining the fear-trust-legitimacy progression for disruptive technologies [S5].
MAJOR DISCUSSION POINT
AI transformation follows a fear‑trust‑legitimacy curve; societies must understand AI to embrace it (Ebba Bush)
AGREED WITH
Speaker 1
Argument 3
When properly managed, data centers can become local job anchors, enable renewable energy, and support critical sectors like health and defense (Ebba Bush)
EXPLANATION
Ebba Bush highlights that data centres, if integrated wisely, can generate stable employment, stimulate renewable‑energy projects, and provide essential computing power for hospitals, research, defence and industry, turning them into assets rather than burdens.
EVIDENCE
She notes that data centres can be “long term local job anchors if implemented and used correctly” and can “enable renewable energy investments” while serving as “infrastructure for hospitals, for research, defense and industry” and describing them as “the factories of the new economy” [85-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bush characterises data centres as “factories of the new economy” that can create jobs and support essential services, while Leaders’ Plenary cites eco-efficient data centre models as examples [S6][S12].
MAJOR DISCUSSION POINT
When properly managed, data centers can become local job anchors, enable renewable energy, and support critical sectors like health and defense (Ebba Bush)
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1
Argument 4
True AI sovereignty rests on jurisdictional control, sovereign compute capacity, and strategic partner choice (Ebba Bush)
EXPLANATION
Ebba Bush defines AI sovereignty as comprising three pillars: knowing where data is stored and processed, possessing independent high‑performance compute resources, and selecting partners from a position of strength rather than dependency. These elements together ensure autonomous AI development.
EVIDENCE
She enumerates the three pillars as “First, jurisdictional control, knowing where your data is stored and processed. Second, infrastructure capacity, having sovereign compute for advanced models. And third, strategic choice, selecting partners from a place of strength, not dependency” [102-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bush’s keynote outlines the three pillars of AI sovereignty-jurisdictional control, infrastructure capacity, and strategic choice-providing a clear framework for autonomous AI development [S6][S5].
MAJOR DISCUSSION POINT
True AI sovereignty rests on jurisdictional control, sovereign compute capacity, and strategic partner choice (Ebba Bush)
Argument 5
Democracies must cooperate because no single nation can build resilient AI infrastructure alone (Ebba Bush)
EXPLANATION
Ebba Bush asserts that building robust AI infrastructure requires collective effort, stating that individual nations lack the resources to do it alone and therefore democracies need to work together. Cooperation is presented as essential for achieving AI resilience and sovereignty.
EVIDENCE
She says “No nation can build resilient AI infrastructure alone. Democracies have to cooperate” and adds that “AI sovereignty does not mean isolation” [97-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Resilient infrastructure literature stresses cross-sectoral collaboration, and policy sessions call for global AI cooperation, supporting the need for democratic collaboration [S15][S9].
MAJOR DISCUSSION POINT
Democracies must cooperate because no single nation can build resilient AI infrastructure alone (Ebba Bush)
Agreements
Agreement Points
AI challenges require legitimacy and public trust
Speakers: Speaker 1, Ebba Bush
Global awareness of AI challenges highlights the need for legitimacy and public trust (Speaker 1) AI transformation follows a fear‑trust‑legitimacy curve; societies must understand AI to embrace it (Ebba Bush)
Both speakers stress that the disruptive nature of AI generates fear and that societies must move through trust to achieve legitimacy, making public understanding and trust essential for responsible AI adoption [6][45-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks treat AI as critical infrastructure and require inclusive stakeholder engagement, transparency and accountability to build legitimacy and public trust, as outlined in OECD and related policy briefs [S32][S33][S34].
AI‑driven data centres create energy‑policy challenges but can also deliver socioeconomic benefits if managed well
Speakers: Speaker 1, Ebba Bush
AI‑driven data centers demand an ever‑growing share of national power grids, creating energy‑policy challenges (Speaker 1) When properly managed, data centers can become local job anchors, enable renewable energy, and support critical sectors like health and defense (Ebba Bush)
Speaker 1 points out the growing electricity demand of AI data centres, while Ebba Bush highlights that, with proper policies, those centres can become sources of jobs, renewable-energy investment and essential services, showing a shared view of both risk and opportunity [12][85-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple forums highlight the rising energy demand of AI data centres and the need for renewable integration, while also noting potential socioeconomic gains such as job creation and regional development when managed responsibly [S37][S40][S46][S47].
Similar Viewpoints
Both recognise that AI’s societal impact follows a fear‑trust‑legitimacy trajectory and that building public trust is crucial for its acceptance [6][45-58].
Speakers: Speaker 1, Ebba Bush
Global awareness of AI challenges highlights the need for legitimacy and public trust (Speaker 1) AI transformation follows a fear‑trust‑legitimacy curve; societies must understand AI to embrace it (Ebba Bush)
Both see AI data centres as a major energy consumer that poses policy challenges, yet also acknowledge their potential to generate jobs and support critical infrastructure when governed responsibly [12][85-88].
Speakers: Speaker 1, Ebba Bush
AI‑driven data centers demand an ever‑growing share of national power grids, creating energy‑policy challenges (Speaker 1) When properly managed, data centers can become local job anchors, enable renewable energy, and support critical sectors like health and defense (Ebba Bush)
Unexpected Consensus
Sweden’s dual role as an innovation hub and a source of clean, low‑carbon AI compute
Speakers: Speaker 1, Ebba Bush
Sweden has long been a quiet powerhouse of innovation, from Ericsson to Spotify to some of Europe’s most promising AI startups (Speaker 1) But first of all, we have an abundant of clean and reliable energy… AI training in Sweden can run at roughly one third of the carbon footprint of a typical US hyperscaler operations (Ebba Bush)
While Speaker 1 highlights Sweden’s innovation ecosystem, Ebba Bush adds a complementary dimension of clean energy and low-carbon AI training, revealing an unexpected consensus that Sweden can contribute both cutting-edge AI expertise and sustainable compute capacity [10][116-120].
POLICY CONTEXT (KNOWLEDGE BASE)
Sweden is repeatedly cited as a model for coupling advanced AI innovation with near-zero-carbon electricity (hydro, nuclear) and industrial scaling, described as “AI gigafactories” that combine clean power, stability and technical expertise [S46][S47][S48].
Overall Assessment

The speakers converge on three core ideas: (1) AI’s transformative power creates societal fear that must be turned into trust and legitimacy; (2) AI data centres pose significant energy challenges but also offer socioeconomic opportunities if managed responsibly; (3) International cooperation, exemplified by Sweden’s innovative and clean‑energy strengths, is essential for building resilient AI infrastructure.

High consensus on the need for legitimacy, energy‑policy attention, and collaborative approaches, suggesting that future policy discussions are likely to focus on joint governance frameworks, sustainable AI infrastructure, and trust‑building measures.

Differences
Different Viewpoints
Framing of AI data centers – burden on national power grids vs opportunity for jobs, renewable energy and essential services
Speakers: Speaker 1, Ebba Bush
AI‑driven data centers demand an ever‑growing share of national power grids, creating energy‑policy challenges (Speaker 1) When properly managed, data centers can become local job anchors, enable renewable energy, and support critical sectors like health and defense (Ebba Bush)
Speaker 1 warns that expanding AI data centres will strain electricity networks and pose a policy challenge [12]. Ebba Bush counters that data centres, if deployed correctly, can create jobs, spur renewable investment and serve hospitals, research and defence, describing them as “factories of the new economy” [85-88]. The two speakers therefore disagree on whether data centres are primarily a problem or a strategic asset.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between grid strain and economic opportunity mirrors expert discussions on AI data-centre energy impacts, renewable integration and job creation in sustainable AI deployment strategies [S37][S40][S46][S47].
Unexpected Differences
Role of AI‑focused events versus sovereign infrastructure in building trustworthy AI
Speakers: Speaker 1, Ebba Bush
AI sessions provide new perspectives on AI challenges and the future (Speaker 1) True AI sovereignty rests on jurisdictional control, sovereign compute capacity, and strategic partner choice (Ebba Bush)
It is unexpected that Speaker 1 places the primary solution for AI legitimacy in educational sessions and knowledge-sharing [7][8], while Ebba Bush argues that legitimacy comes from building sovereign technical capacity and choosing partners wisely [102-105]. The two approaches target the same goal (trustworthy AI) but diverge sharply on the means, revealing an unanticipated tension between soft-skill awareness and hard-infrastructure sovereignty.
POLICY CONTEXT (KNOWLEDGE BASE)
Trustworthy AI is linked to sovereign infrastructure and robust governance; policy analyses stress national data-sovereignty, audit trails and public participation as essential beyond ad-hoc events or conferences [S33][S44][S32].
Overall Assessment

The discussion shows limited direct conflict; the main disagreement centres on the characterization of AI data centres, while both speakers converge on the need for legitimacy, trust and international cooperation. The disagreement is moderate and mainly technical, suggesting that policy coordination will need to reconcile energy‑policy concerns with the strategic vision of data‑centre‑driven AI ecosystems.

Moderate – the speakers share common goals (trustworthy, inclusive AI) but differ on the framing of data‑centre impacts and the primary pathway to legitimacy, which could affect policy alignment on energy and infrastructure.

Partial Agreements
Both speakers agree that public legitimacy and trust are essential for AI adoption. Speaker 1 stresses that growing awareness of AI challenges creates a demand for legitimacy [6][7], while Ebba Bush describes a historical fear‑trust‑legitimacy trajectory and the need for understanding to turn fear into trust [45-58][94-96]. However, they differ on how to achieve that legitimacy: Speaker 1 points to AI sessions and awareness‑raising, whereas Ebba Bush stresses sovereign infrastructure, strategic partnerships and democratic cooperation [97-99][102-105].
Speakers: Speaker 1, Ebba Bush
Global awareness of AI challenges highlights the need for legitimacy and public trust (Speaker 1) AI transformation follows a fear‑trust‑legitimacy curve; societies must understand AI to embrace it (Ebba Bush)
Takeaways
Key takeaways
AI forums and international summits provide fresh perspectives on AI challenges and help shape future policy. The Sweden‑India partnership is presented as a strategic, long‑term collaboration built on mutual trust and complementary strengths. AI adoption follows a fear‑trust‑legitimacy curve; societies must understand and see value in AI to accept it. Data‑center energy demand is a critical policy issue, but properly managed centres can create jobs, support renewable energy, and serve essential sectors such as health and defense. True AI sovereignty relies on three pillars: jurisdictional control of data, sovereign compute capacity, and strategic choice of partners. No single democracy can build resilient AI infrastructure alone; coordinated international cooperation is essential. Sweden highlights its clean energy surplus, industrial depth, and trusted institutions as assets for building AI gigafactories and exporting AI intelligence. Implementation matters: Sweden has committed funding, launched an AI strategy and a public‑sector AI workshop to move from rhetoric to concrete action.
Resolutions and action items
Sweden will allocate a substantial amount of funds during the current parliamentary term to AI research, development, and implementation. Sweden will publish and roll out a high‑ambition AI strategy outlining concrete steps toward sustained AI leadership. An AI workshop for the public sector will be launched to promote safe and efficient AI adoption. Sweden commits to cooperate with India on building sovereign AI models that serve India’s linguistic and societal diversity. Both Sweden and India will explore joint development of AI‑enabled data‑center infrastructure that leverages renewable energy and creates local employment.
Unresolved issues
Specific mechanisms for ensuring jurisdictional control of data and transparent data‑location policies were not detailed. How to balance the growing energy demand of AI data centres with national grid stability and environmental concerns remains open. The exact criteria and processes for selecting strategic AI partners beyond the Sweden‑India example were not defined. Regulatory frameworks needed to translate AI complexity into tangible public benefits were discussed but not concretized. Methods for measuring and maintaining public legitimacy and trust in AI deployments were not fully addressed.
Suggested compromises
AI sovereignty is framed as strategic choice rather than isolation—countries can retain autonomy while still partnering with trusted allies. Data centres can be positioned as both energy‑intensive facilities and local economic anchors, combining renewable energy integration with job creation. Sweden offers its clean energy and trusted institutional framework to offset concerns about AI’s environmental footprint, while India contributes scale and rapid deployment capability.
Thought Provoking Comments
Every major technological shift follows the same sort of emotional curve. It goes from fear, then trust, then legitimacy, and finally, a worldwide transformation.
She connects the current AI wave to historical patterns (e.g., the printing press), providing a conceptual framework that helps the audience understand the stages of societal acceptance.
This analogy set the tone for the rest of the speech, prompting the audience to think beyond hype and consider the psychological journey of technology adoption. It paved the way for later discussion on legitimacy and trust.
Speaker: Ebba Bush
AI is no longer about algorithms alone. It is about control of energy, compute capacity, data, and trust.
The statement expands the definition of AI from a purely software‑centric view to an infrastructure‑centric perspective, highlighting the material foundations of the technology.
It shifted the conversation from abstract AI capabilities to concrete concerns such as energy consumption and data sovereignty, leading directly into the subsequent focus on data‑center debates and AI infrastructure.
Speaker: Ebba Bush
The nations that lead this transformation will prosper. Those that merely consume AI built elsewhere will fall behind.
This bold claim underscores the strategic geopolitical stakes of AI, framing it as a matter of national competitiveness rather than just a technical advancement.
It introduced a competitive narrative that reframed the discussion as a call for self‑reliance, influencing the audience to consider policy choices and prompting the later elaboration on AI sovereignty.
Speaker: Ebba Bush
AI sovereignty does not mean isolation. It means choosing your dependencies.
She clarifies a commonly misunderstood term, turning ‘sovereignty’ from a protectionist slogan into a nuanced strategy of selective partnership.
This clarification acted as a turning point, moving the dialogue from fear of dependence to a constructive discussion about strategic alliances, which set up the three‑pillar framework that follows.
Speaker: Ebba Bush
True sovereignty, the way I see it, rests on three pillars: jurisdictional control, infrastructure capacity, and strategic choice.
Providing a concrete three‑pillar model transforms an abstract concept into actionable policy areas, giving listeners a clear roadmap.
The framework organized the remainder of the speech, guiding the audience through specific policy levers and enabling deeper analysis of each pillar in later sections (e.g., data‑center location, compute resources, partner selection).
Speaker: Ebba Bush
Data centers are often seen as someone else’s internet using our electricity, but they can be long‑term local job anchors, enable renewable energy investments, and serve hospitals, research, defense and industry.
She reframes a common public concern into an opportunity narrative, linking AI infrastructure to socioeconomic benefits and sustainability.
This comment directly addressed a potential objection from the audience, shifting the tone from defensive to optimistic and opening space for discussion on how AI can be a public good.
Speaker: Ebba Bush
Sweden offers clean, reliable energy and can run AI training with roughly one third of the carbon footprint of a typical US hyperscaler.
By quantifying Sweden’s environmental advantage, she ties national strengths to global AI competitiveness and sustainability goals.
The statement reinforced the partnership argument with India, positioning Sweden as an attractive partner and moving the conversation toward concrete collaboration possibilities.
Speaker: Ebba Bush
When 1.4 billion people gain access to AI tools that empower farmers, small businesses, teachers and doctors, that is not just innovation, that is transformation.
She connects AI deployment to inclusive development, emphasizing scale and human impact rather than purely economic metrics.
This broadened the discussion to social inclusion, reinforcing the earlier point about legitimacy and helping the audience envision tangible outcomes of AI adoption.
Speaker: Ebba Bush
People fear what they do not understand. But what people understand and see value in, they will defend.
A concise summation that circles back to the opening historical analogy, highlighting the need for education and demonstrable benefits to build public trust.
Served as a concluding pivot, turning the speech from a policy‑heavy exposition to a call for action, encouraging stakeholders to focus on transparency and demonstrable value to secure legitimacy.
Speaker: Ebba Bush
Overall Assessment

The discussion was driven almost entirely by Ebba Bush’s keynote, whose remarks repeatedly introduced new analytical lenses—historical analogy, infrastructure focus, sovereignty framework, and inclusive development. Each of these insights acted as a turning point, shifting the conversation from generic enthusiasm about AI to concrete policy challenges, partnership opportunities, and societal implications. By reframing fears (printing press, data‑center opposition) as opportunities for trust and legitimacy, she guided the audience toward a nuanced understanding of AI as both a geopolitical lever and a tool for inclusive growth. Consequently, the key comments collectively shaped the session from a superficial celebration of AI into a strategic dialogue about how nations, especially Sweden and India, can co‑create a trustworthy, sustainable, and sovereign AI ecosystem.

Follow-up Questions
What mechanisms can translate AI complexity into tangible benefits for citizens to build public legitimacy and trust?
She highlighted the need for policymakers to make AI outcomes understandable to voters, indicating a gap in practical frameworks.
Speaker: Ebba Bush
How can AI sovereignty be measured and operationalized across its three pillars: jurisdictional control, infrastructure capacity, and strategic choice?
She outlined these pillars but did not provide metrics, suggesting further research is needed.
Speaker: Ebba Bush
What are the environmental and socio‑economic impacts of large AI data centers on local communities, and how can they be turned into long‑term job anchors and renewable energy investments?
She discussed public concerns about data centers, indicating a need for detailed impact studies.
Speaker: Ebba Bush
What models of international cooperation can enable democracies to jointly build resilient AI infrastructure without creating dependency?
She emphasized that no nation can build AI infrastructure alone, pointing to a research gap in governance frameworks.
Speaker: Ebba Bush
How can AI models be developed to serve the linguistic and cultural diversity of 1.4 billion people in India, ensuring true inclusion?
She mentioned sovereign AI models that reflect society, implying the need for research on multilingual, culturally aware AI.
Speaker: Ebba Bush
What ethical frameworks ensure AI acts as a ‘power multiplier for human dignity’ rather than a replacement for the human spirit?
She warned against AI outweighing humanity, indicating a need for deeper ethical studies.
Speaker: Ebba Bush
How does the carbon footprint of AI training in Sweden compare to typical US hyperscaler operations, and what strategies can further reduce emissions?
She cited a one‑third carbon footprint claim, suggesting comparative analysis and mitigation research.
Speaker: Ebba Bush
What role do European industrial assets (e.g., ASML, ARM, SAP, Ericsson) play in the global AI stack, and how can dependencies be managed?
She referenced these companies as essential, indicating a need to study supply‑chain resilience.
Speaker: Ebba Bush
What are the best practices for implementing AI workshops that help the public sector adopt AI safely and efficiently?
She mentioned launching an AI workshop, implying further study on effective rollout and governance.
Speaker: Ebba Bush
How can AI gigafactories be built with near‑zero carbon energy, combining clean power, political stability, and technological sophistication?
She described Nordic AI gigafactories, highlighting a research area on sustainable large‑scale AI infrastructure.
Speaker: Ebba Bush

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs

Session at a glanceSummary, keypoints, and speakers overview

Summary

The speaker launched Birla AI Labs, an entity serving the Aditya Birla Group’s AI needs and advancing frontier research to boost India’s AI role [19-27]. He likened this moment to the Industrial Revolution, noting AI amplifies cognitive abilities beyond technological shifts [10-15].


The lab’s first mandate embeds AI across the group’s $120 billion operations, leveraging a 160-year legacy and vast data [24-33]. Examples include cutting Birla Estates’ timelines by 90% to save 2,000 days, halving underwriting time in Aditya Birla Capital, and an system tracking KPIs for Hindalco [34-44]. In Tantra micro-finance, AI targets at least 30% productivity gains, expanding loan access for rural women, while Love Etc uses AI to create marketing assets [46-55]. These deployments shift focus from AI feasibility to speed and depth of adoption in complex sectors [56-58].


The second mandate creates a research lab with talent from Oxford, IIT Madras and ISRO, focusing on foundation models for time-series data-a $600 billion market [60-68]. The ‘Time to Time’ paper questioned model understanding of market crashes, and a beta AI platform was launched at IIT Bombay; a Delhi University study examined student curiosity effects [66-75]. He urged an ecosystem linking academia, industry and policy to help India grow from $4 trillion to $40 trillion [92-96]. Birla AI Labs will act as a responsible technology builder, collaborating with partners to co-author India’s AI chapter [97-100].


He tied the group’s legacy of adapting through independence, liberalisation and globalisation-a ‘muscle memory’ for tectonic shifts-to the claim ‘We are here to build a new world’ [79-86]. Acknowledging that the playbook for the future remains unwritten, he invited stakeholders to write it together [98-100].


Keypoints


AI as a historic, transformative force for India – The speaker frames the current AI wave as surpassing the Industrial Revolution, emphasizing that AI amplifies cognitive abilities and will be pivotal in moving India from a $4 trillion to a $40 trillion economy by 2047 [9-16][17-18].


Birla AI Labs’ dual mandate – The new unit will (1) act as an apex AI body for the Aditya Birla Group, delivering business-focused solutions, and (2) operate as a frontier research lab that creates proprietary AI products for the broader market [24-28][59-62].


Real-world AI deployments across the Group’s businesses – Concrete examples are given: compressing project timelines at Birla Estates, cutting underwriting and credit-assessment times at Aditya Birla Capital, real-time shop-floor intelligence and digital twins at Hindalco, productivity gains for the micro-finance arm Tantra, and hyper-personalized marketing in the consumer brands [33-55].


Frontier research initiatives – The lab is pursuing (a) structured-foundation models for time-series data, probing whether models truly understand market crashes [64-67]; (b) studies on AI’s impact on human cognition and agency, with results to be presented at King’s College [68-71]; and (c) an AI-native research-productivity platform deployed at IIT Bombay [72-76].


Call for a collaborative AI ecosystem – The speaker argues that no single institution can navigate the AI epoch alone; building India’s $40 trillion future will require sustained collaboration among academia, industry, and policy, with Birla AI Labs positioning itself as a responsible leader [92-99].


Overall purpose/goal


The discussion serves to launch and legitimize Birla AI Labs, outlining its strategic vision, showcasing early successes, highlighting cutting-edge research, and rallying stakeholders around a broader, collaborative ecosystem that will enable India to lead the global AI narrative.


Overall tone


The speaker begins with humble reverence and nervous excitement ([1-7]), shifts to confident, visionary enthusiasm when describing AI’s potential and the lab’s ambitions ([9-18][24-28]), adopts a pragmatic, results-driven tone while detailing business deployments ([33-55]), moves to scholarly, responsible seriousness in the research segment ([64-76]), and concludes with a collective, duty-bound call for partnership and stewardship ([92-99]). The tone evolves from personal modesty to bold optimism and finally to a measured, collaborative responsibility.


Speakers

Speaker 1 – Role/Title: (not specified in the transcript); Area of Expertise: (not specified in the transcript)


Additional speakers:


(none)


Full session reportComprehensive analysis and detailed insights

The speaker opened by greeting the audience with a traditional “Namaste”, thanked the host for the introduction and expressed honour at being among distinguished leaders such as the Prime Minister, Sundar Pichai and Rishi Sunak. He admitted feeling nervous and humbled by the high-profile company he kept, noting that the occasion was “clearly not a joke” [1-7].


He then framed the present AI wave as a historic turning point that will eclipse the Industrial Revolution. While the latter amplified physical labour, AI amplifies cognitive capability, creating a “Cambrian explosion of possibilities” that rewrites the relationship between human effort and economic output. In his view, this transformation is essential for moving India from a $4 trillion to a $40 trillion economy by 2047 [9-16][17-18].


Against this backdrop he announced the creation of Birla AI Labs, a unit with a dual mandate. First, it will act as an apex AI body for the Aditya Birla Group, delivering bespoke solutions that unlock value across the conglomerate’s businesses. Second, it will operate as a frontier research laboratory that builds proprietary AI products for the open market. The lab’s positioning-combining real-world data, domain expertise and enterprise-scale deployment with the freedom to innovate-gives it a rare strategic advantage [24-28][59-62].


The speaker highlighted the Group’s scale and heritage: a 160-year-old operation with decades of operational data across manufacturing, financial services, commodities and consumer businesses; a $120 billion market capitalisation; presence in 42 countries; and a workforce of over 250 000 employees. These assets already enable tangible early gains as advanced analytics and AI reshape supply chains, workforce management and other core functions [31-33].


Under the first mandate, the lab is already delivering measurable efficiency gains. At Birla Estates, AI is used to compress project-concept timelines by 90 %, freeing more than 2 000 man-days per year and dramatically improving speed to market [34-36]. Contract-intelligence tools will now give our teams a unified, accurate view of every agreement, flagging potential claims before they escalate[34-37]. In Aditya Birla Capital, AI-driven underwriting has cut turnaround time by half and reduced credit-assessment preparation by 90 %; an AI-enabled sales programme is projected to generate over $100 million in gross sales [37-38]; and the customer-service platform now resolves first-call queries at a rate exceeding 90 % [37-38].


In the heavy-industry arm Hindalco, a proprietary factory-intelligence system integrates 24 operational KPIs in real time, turning static spreadsheets into a living intelligence layer that flags anomalies before they escalate. The team is also building a digital twin of its smelters and furnaces, topped with an AI layer that will orchestrate the entire coal-sourced power ecosystem [42-45].


The micro-finance subsidiary Tantra is embedding AI across sales, audit and quality-control functions, targeting at least a 30 % productivity lift. This translates into loan officers reaching more customers and, crucially, women in villages gaining access to capital that was previously out of reach [46-49].


Within the consumer-brand portfolio, AI is powering hyper-personalised marketing and real-time inventory intelligence. Brands such as Love Etc and Contraband are using AI-creativity tools to move from campaign ideation to final asset delivery at a fraction of traditional costs, underscoring that AI now works reliably in complex, capital-intensive sectors [53-55].


Having proved feasibility, the strategic question shifts from “if” AI can be applied in complex, capital-intensive sectors to “how fast and how deep” we should deploy it amid inherent ambiguity [56-58].


The second mandate focuses on frontier research. The lab’s first major research vertical tackles structured foundation models for time-series and tabular data, a market estimated at $600 billion [64-68]. Their paper “Time to Time”, accepted while the team was in San Diego, asks whether such models truly understand phenomena like market crashes or merely fit curves; a researcher showed that injecting the signature of a historical crash into a model’s hidden states can shift its forecast, suggesting genuine structural learning [64-68].


Ethical and societal impact is also a priority. A study conducted with Delhi University students examined how large-language-model usage influences curiosity and cognitive agency; the findings will be presented at King’s College, reinforcing the lab’s belief that AI developers must embed human-impact research into their core work [68-71].


Complementing research, the lab launched an AI-native research and productivity platform in December 2024 at IIT Bombay. The platform combines genetic search, real-time data processing and multimodal intelligence to deliver contextual insights across the internet, documents and financial data, and is already being used in the speaker’s own office to boost day-to-day efficiency [72-76].


The speaker then reflected on the Aditya Birla Group’s long-standing alignment with India’s national narrative-building through independence, liberalisation and globalisation-and described this legacy as a “muscle memory” for navigating tectonic shifts. He noted that the generations before him, his brother and his sister, learned to read the early tremors, to invest before consensus forms, and to build for decades rather than just quarters[84-86].


Emphasising collaboration, he argued that no single institution, however large, can steer the AI epoch alone. Realising India’s $40 trillion future will require a sustained ecosystem that brings together academia, industry and policy. Birla AI Labs has built, and is building, a credible global research presence, presenting at top venues, partnering with leading institutions and attracting talent that could work anywhere in the world but has chosen to build for India[58-61].


He reiterated that, at the Aditya Birla Group and through Birla AI Labs, the organization is “here to build a new world”, a refrain he repeated to stress the ambition [79-86].


In closing, he acknowledged that the playbook for the next phase of AI-driven growth has not yet been written, and invited stakeholders to co-author it with the Group. He thanked the audience and expressed honour at having shared the vision [98-100][101].


Session transcriptComplete transcript of the session
Speaker 1

Namaste. Thank you so much for that introduction. Good evening everyone. It is truly an honor to be here today. In his May 2011 address to the British Parliament, President Obama said, and I quote, I am told that the last three speakers here have been the Pope, Her Majesty, the Queen, and Nelson Mandela, which is either a very high bar or the beginning of a very funny joke. Unquote. Even though I am no President Obama, I feel something similar standing before you this evening. Being in the company of leaders like our Honorable Prime Minister Modiji, Sundar Pichai, Rishi Sunak, Sam Altman, Mukesh Ambani, Narayan Murthy, and my father, to name a few, this for sure is a very high bar that I will surely not reach.

I am very nervous and this is clearly not a joke. Under the leadership of our Honorable Prime Minister, it has been extraordinary to see to watch India step into a driving role in the global AI discourse. As a young leader, I feel extremely grateful to have this platform and I feel that it is our responsibility to make sure that we deliver. India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today to the Vixit Bharat we aspire to be by 2047, technology will play a decisive role. This moment carries a similar taste to that of the Industrial Revolution. A period where the relationship between human labor and economic output was fundamentally rewritten.

And yet, I would argue, that what we are witnessing today is even more profound. The Industrial Revolution amplified our physical capabilities. AI is amplifying our cognitive ones. What we are living through is nothing less than a Cambrian explosion of possibilities. A phase where entirely new forms of value and new modes of human potential are emerging at a pace that defies our linear thinking. We are standing at a seminal moment in the history of human progress. A moment of extraordinary possibilities. In our humble attempt to translate these possibilities into reality, I am here to introduce Birla AI Labs. When it comes to the Aditya Birla group and AI, I want to be clear. My father has been at this for a while.

Deliberately, quietly, and steadily. Not for the spectacle, but to deliver tangible value to our stakeholders. Birla AI Labs has a dual mandate. The first is to service my father’s direction and act as an apex AI body for the Aditya Birla Group, building solutions alongside our business tech teams to unlock new value across our businesses. The second is to operate as a frontier research lab doing ongoing original research at cutting edge and translating that science into proprietary AI products for the open market. This dual positioning gives Birla AI Labs a rare advantage. Real world data, domain know -how and enterprise scale deployment through the group paired with the freedom to build category defining products for global markets.

Let me share what this looks like in practice. As a part of our first mandate driven by my father, we are executing AI deployment, across the Aditya Birla Group. The group has been operational for the last 160 plus years, and we have decades of operational data across manufacturing, financial services, commodities, consumer businesses, and a growing bench of talent that understands both the science and the business, giving us an undeniable moat. With a $120 billion market cap operating across 42 countries with over 250 ,000 employees, we are witnessing tangible early gains across our diverse portfolio as advanced analytics and AI reshape everything from supply chains to workforce management. Let me start with Birla Estates. We will be using AI to compress project concept timelines by 90%.

Freeing over 2 ,000 man days a year. The immediate impact is efficiency. Architects and developers are no longer constrained by the time it takes to test an idea Contract intelligence tools will now give our teams a unified, accurate view of every agreement Flagging potential claims before they escalate Moving into financial services and the transformation takes a completely different shape Aditya Birla Capital has built one of the most ambitious Gen AI programs in India’s financial sector Not by picking a single use case, but by going after the entire value chain at once Underwriting turnaround time is down 50 % Credit assessment preparation has been cut by 90 % A fully AI -enabled sales program is already targeting more than $100 million in gross sales And it’s not just about the sales, it’s also about the value of the product While the customer service platform is pushing first call resolution beyond 90 % What makes this remarkable is not any individual number.

It is the concurrence. Then there is Hindalco. And here the story shifts register entirely. This is about applying intelligence in one of the most physically demanding energy -intensive industries in the world. On the shop floor, a proprietary factory intelligence integrates 24 operational KPIs in real -time, turning what were once static spreadsheets into a living intelligence layer that surfaces anomalies before they escalate. And what we are building is more ambitious still. A digital twin for our smelters and furnaces, and an AI layer on top that will orchestrate the entire coal -sourced power ecosystem. But if there is one place in our portfolio where AI feels most consequential, most human, it is for Tantra. Tantra our microfinance business.

Here we are embedding AI across sales, audit and quality control and we expect it to unlock at least 30 % in productivity gains. It means a loan officer can reach more people. It means a woman in a village gets access to capital that she would not have had access to otherwise. That kind of efficiency does not just improve margins, it has the potential to improve lives. The consumer businesses brings this completely full circle. I have come to believe through my experience that to build great brands today, product must be backed by content -driven distribution. Our fashion, retail and jewelry businesses are deploying AI for hyper -personalized marketing and real -time inventory intelligence. Within Birla Cosmetics, our brand Love Etc and Contraband are using AI creativity tools to move from campaign ideation to final asset delivery at a fraction of the traditional cost.

What this tells me is that the question is no longer whether AI can work in complex, capital -intensive real -world industries. We have seen it and we know it can. The real question is how fast and how deep should we go, given the ambiguity that surrounds artificial intelligence. Now, on to our second mandate. And this is where Birla AI Labs operates as a frontier research lab. The conviction here is simple. India’s next great institution will emerge at the intersection of deep research, applied engineering and market creation. That is our North Star. To do this, we have assembled a global team of researchers and engineers from Oxford and IIT Madras to BITS Pilani, ISRO Google and Goldman Sachs.

Our first major research vertical is in structured foundation. A field that a recent Forbes article estimates at a $600 billion market opportunity. Often overlooked, a vast majority of the world’s data sits in time series and tabular formats Stock prices, sensor readings, supply chain signals, weather patterns, energy consumptions, patient vitals This data could actually power predictive intelligence in industry, in finance, in infrastructure, in healthcare In December, our team was in Europe in San Diego where our paper, Time to Time, was accepted It asks a very provocative question Do these time series foundation models actually understand what a market crash is? Or are they just fitting curves? A researcher showed that you can reach inside a model’s hidden states Inject the signature of a historical crash and watch the forecast shift accordingly This is not a question of time This is not curve fitting This is a model that has learned something about the structure of the world Our researchers are working at that frontier This thesis for Birla AI Labs has been presented at the Oxford AI Summit and the World Summit AI in the Netherlands in 2025 This lab has built, is building I would say a credible global research presence presenting at top venues partnering with leading institutions and attracting talent that could work anywhere in the world but has chosen to build for India A second research vertical is one that I believe the industry has a moral obligation to pursue AI now mediates the everyday decisions, relationships and information environments of over 1 .7 billion people worldwide Yet the study of what this does to human cognition, agency and daily life remains nascent We at Birla AI Labs want to do something about this We are here to help We conducted a study with Delhi University students to measure how language model usage affects curiosity and cognitive agency among students.

The results of this study will be presented at King’s College this June. This is the kind of research that industry too often leaves to others. But I believe that those of us who are building AI have a responsibility to understand its human consequences, not as an afterthought, but as a core part of the enterprise. Alongside the research, we are also building tools. In December 2024, we launched a beta version of an AI native research and productivity platform at IIT Bombay. Combining a genetic search, real -time data processing and multimodal intelligence to deliver contextual insights across the Internet, documents and financial data. That platform is now being used across my own office. to drive day -to -day efficiency.

It is a tangible example of what happens when frontier research meets applied deployment. And that is exactly the loop that we at Birla AI Labs are designed to close. This approach, building at the frontier while staying rooted in real -world application, is not new to us. The Aditya Birla Group’s history has been very intertwined with the story of our nation. My forefathers have built through every chapter of India’s journey, through independence, through liberalization, through globalization. Ours is a history of reading the moment, adapting with conviction, and building institutions that outlast the disruptions that gave birth to them in the first place. This century of building has given us something very invaluable, a muscle memory for navigating tectonic shifts We are here to build a new world.

We are here to build a new world. We are here to build a new world. We are here to build a new world. We are here to build a new world. The generations before me, my brother and my sister, have learned to read the early tremors, to invest before consensus forms, and to build for decades rather than just quarters. Every generation of our group has faced a moment where the old playbook had to be rewritten. What is different today is the elements of high ambiguity and uncertainty, which, if we look at closely, can give rise to immense opportunity. And that is precisely what makes this moment so thrilling and so consequential. But here is what I have come to believe very, very strongly.

No single institution, no matter how large or how well resourced, can navigate this epoch alone. The journey from $4 trillion to $40 trillion will not be powered by industry acting in isolation. It will require something way more fundamental. It will require us to build an ecosystem that brings academia, industry and policy into genuine, sustained collaboration. As India writes its AI chapter, we intend to be on the front lines, not as observers, not as fast followers, but as honest and true responsible builders of technology, of institutions and of the ecosystem this country needs to lead. And we will do so with utmost responsibility. The playbook for what comes next has not yet been written. And at the Aditi Birla Group and at Birla AI Labs, we look forward to writing it together.

Thank you all so very much. It’s been an honour.

Related ResourcesKnowledge base sources related to the discussion topics (6)
Factual NotesClaims verified against the Diplo knowledge base (4)
Confirmedhigh

“The speaker opened by greeting the audience with a traditional “Namaste”.”

The transcript of the keynote shows the speaker beginning with “Namaste” [S7].

Confirmedhigh

“He admitted feeling nervous and humbled by the high‑profile company he kept, noting that the occasion was “clearly not a joke”.”

The speaker explicitly says “I am very nervous and this is clearly not a joke” in the source material [S4] and [S8].

Additional Contextlow

“He expressed honour at being among distinguished leaders such as the Prime Minister, Sundar Pichai and Rishi Sunak.”

The knowledge base records the speaker thanking “Prime Minister Modi and distinguished leaders” and mentions Sundar Pichai speaking, but does not reference Rishi Sunak, indicating only the Prime Minister and Sundar Pichai are documented as present [S39].

Confirmedmedium

“He framed the present AI wave as a historic turning point that will eclipse the Industrial Revolution.”

Demis Hassabis is quoted as saying AI will be “100 times more impactful than the industrial revolution,” supporting the claim that AI will eclipse it [S45].

External Sources (55)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “AI is amplifying our cognitive ones.”[1]. “The Industrial Revolution amplified our physical capabilities.”[2]. “This mo…
S5
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S6
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “Our program, AlphaFold, that solved the 50‑year grand challenge of protein folding, I think is just the first example o…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — This discussion features a speech by a young leader from the Aditya Birla Group announcing the launch of Birla AI Labs, …
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — Deliberately, quietly, and steadily. Not for the spectacle, but to deliver tangible value to our stakeholders. Birla AI …
S9
https://app.faicon.ai/ai-impact-summit-2026/ai-and-data-driving-indias-energy-transformation-for-climate-solutions — is incredible. So we need to leverage that and that can be leveraged if we have the data. Now in terms of institutional …
S10
Debating Technology / Davos 2025 — Yann LeCun: Well, I think the answer to this is diversity. So, again, if you have two or three AI systems that all com…
S11
AI for Good Technology That Empowers People — Speaker 1 promotes the research being conducted at Indian institutions, encouraging ITU colleagues to visit labs and eng…
S12
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S13
WS #187 Bridging Internet AI Governance From Theory to Practice — Vint Cerf: Well, thank you so much for this opportunity. I want to remind everyone that I am not an expert on artificial…
S14
Opening Ceremony — Nandini Chami: Esteemed delegates and dear friends, at IGF 2025, which coincides with the 20th year review of the World …
S15
Building Trustworthy AI Foundations and Practical Pathways — Throughout the discussion, the speakers emphasise that these are not theoretical concerns but issues causing real harm. …
S16
Inclusive AI Starts with People Not Just Algorithms — Speaker 1 emphasizes that technological change affects individuals personally, and success depends on developing collabo…
S17
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S18
Skilling and Education in AI — In two significant areas, one is in agriculture, which is the highest employer, biggest employer anywhere. It’s also one…
S19
Building Climate-Resilient Systems with AI — Speaker 1 emphasizes the unique role universities play in bringing together diverse stakeholders and disciplines to addr…
S20
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S21
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — Central to the presentation is India’s journey from a $4 trillion to a $40 trillion economy, with technology playing a “…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — This discussion features a speech by a young leader from the Aditya Birla Group announcing the launch of Birla AI Labs, …
S23
Building Sovereign and Responsible AI Beyond Proof of Concepts — AI systems must deliver genuine value that goes beyond technical metrics to create meaningful improvements in people’s l…
S24
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S25
Impact the Future – Compassion AI | IGF 2023 Town Hall #63 — David Hanson:And this includes things like regulations that are protecting animal rights for research purposes and how y…
S26
Assessing the Promise and Efficacy of Digital Health Tool | IGF 2023 WS #83 — However, there are challenges in applying ethical considerations in profit-driven AI innovations. There is often a clash…
S27
Commission on Science and Technology for Development — (United Nations – A. Shaping the enabling environment 39. There are policy implications that arise from the char…
S28
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — It is a tangible example of what happens when frontier research meets applied deployment. And that is exactly the loop t…
S29
Advancing Scientific AI with Safety Ethics and Responsibility — Need for collaborative approaches bringing together different stakeholders from requirements definition through deployme…
S30
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S31
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S32
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society Professor Bullock warns th…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — This discussion features a speech by a young leader from the Aditya Birla Group announcing the launch of Birla AI Labs, …
S34
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — Namaste. Thank you so much for that introduction. Good evening everyone. It is truly an honor to be here today. In his M…
S35
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Marco Zennaro provided concrete examples of TinyML applications that address real-world challenges across diverse sector…
S36
From principles to practice: Governing advanced AI in action — Udbhav Tiwari provided concrete examples of real-world implementation challenges through Signal’s experience. His most s…
S37
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Abraham Zarouk:Thank you. The INCD focuses on three main domains when addressing AI. The first domain is protecting AI. …
S38
AI for Good Technology That Empowers People — Thank you, Bajesh. Thank you for bringing out the Indian research in the topic and bringing out the 8GI framework also. …
S39
Keynote-Sundar Pichai — Namaste. Thank you. Thank you. Prime Minister Modi and distinguished leaders. It’s wonderful to be back in India. Every …
S40
Keynote-Sundar Pichai — Namaste. Thank you. Thank you. Prime Minister Modi and distinguished leaders. It’s wonderful to be back in India. Every …
S41
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-ananya-birla-birla-ai-labs — Namaste. Thank you so much for that introduction. Good evening everyone. It is truly an honor to be here today. In his M…
S42
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ebba Busch Deputy Prime Minister Sweden — Deputy Prime Minister Bush opened her remarks with a greeting in Hindi (“Namaste, ap kärsahein”) and expressed gratitude…
S43
Who needs diplomats when you have Taylor Swift? — There was a moment, in the early part of 2024, that spoke volumes about the peculiar state of modern power. As the natio…
S44
Keynote-Demis Hassabis — This discussion features a keynote address by Sir Demis Hassabis, co-founder and CEO of Google DeepMind and Nobel laurea…
S45
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Hassabis provides a dramatic comparison to illustrate AI’s potential impact, suggesting it will be 100 times more impact…
S46
World Economic Forum Panel: Sovereignty and Interconnectedness in the Modern Economy — Economic | Infrastructure Tooze suggests that the convergence of artificial intelligence advancement with the first yea…
S47
AI, automation, and human dignity: Reimagining work beyond the paycheck — But this moment feels different, and not just because of the pace of change. Studies in technology and social change sug…
S48
Indias Roadmap to an AGI-Enabled Future — Thank you. India under the India mission has 38 ,000 I think scaling to more than 50 ,000 GPUs which is so much more tha…
S49
Opening of the session/OEWG 2025 — China: Thank you, Chair. First of all, China would like to thank the Chairman for his efforts in convening this meetin…
S50
Harnessing Collective AI for India’s Social and Economic Development — The real value unlock, which is sustaining, is actually when you get AI to do something which humans can’t do or are not…
S51
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Thank you so much, Andy. That was fantastic. It set the context very rightly for the next discussion coming up. So a war…
S52
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — Simon Chesterman:Thanks, yeah, on the role of companies, I do think it’s sort of amazing how things have changed. So bac…
S53
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — <strong>Sihao Huang:</strong> of these agents work with each other smoothly. And protocols are so important because that…
S54
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — 4. Exploring public-private collaborations to balance innovation and regulation Ammari highlighted META’s open-source a…
S55
National Strategy for Artificial Intelligence — ## October 28, 2019 President Moon Jae-in Fellow Koreans and the main architects of the Republic of Korea’s artificial …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Speaker 1
10 arguments135 words per minute2035 words899 seconds
Argument 1
AI as a cognitive amplifier surpassing the Industrial Revolution (Speaker 1)
EXPLANATION
The speaker claims that while the Industrial Revolution expanded physical capabilities, the current AI era is amplifying human cognition to an even greater extent, creating a rapid explosion of new possibilities.
EVIDENCE
He contrasts the Industrial Revolution’s amplification of physical labor with AI’s amplification of cognitive abilities, describing the present moment as a “Cambrian explosion of possibilities” that rewrites value creation at a pace beyond linear thinking [13-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote repeatedly describes AI as amplifying human cognition and draws a direct parallel to the Industrial Revolution, noting it is a comparable (or even greater) transformative wave [S4][S6][S7].
MAJOR DISCUSSION POINT
AI as a cognitive amplifier
Argument 2
Leveraging AI to reach a $40 trillion economy by 2047 (Speaker 1)
EXPLANATION
The speaker links India’s ambition to grow its economy from $4 trillion to $40 trillion by 2047 directly to the deployment of AI technologies, positioning AI as a decisive driver of that growth.
EVIDENCE
He states that “India’s journey from a $4 trillion economy to a $40 trillion economy … technology will play a decisive role” indicating AI’s central role in the projected expansion [9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The presentation highlights India’s target of moving from a $4 trillion to a $40 trillion economy with technology, especially AI, playing a decisive role [S4][S7].
MAJOR DISCUSSION POINT
AI-driven economic transformation
Argument 3
Dual mandate – serve the Group’s business units and conduct frontier research (Speaker 1)
EXPLANATION
Birla AI Labs is organized with two core purposes: to act as an internal AI hub for the Aditya Birla Group’s businesses and to operate as a frontier research laboratory developing cutting‑edge AI products for the broader market.
EVIDENCE
The speaker outlines the dual mandate, noting that the lab will both service his father’s direction as an apex AI body for the Group and pursue original research to create proprietary AI products for the open market [24-27].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lab’s dual mandate-to act as an apex AI body for the Aditya Birla Group while pursuing frontier research for the broader market-is explicitly outlined in the source describing its organizational purpose [S8].
MAJOR DISCUSSION POINT
Dual internal and research mandate
Argument 4
Early wins: 90 % project‑timeline reduction at Birla Estates, 50 % underwriting speed‑up in financial services, real‑time factory intelligence at Hindalco, 30 % productivity lift in Tantra micro‑finance, hyper‑personalised marketing in consumer brands (Speaker 1)
EXPLANATION
The speaker provides concrete examples of AI deployments across several Birla Group businesses, highlighting substantial efficiency gains and productivity improvements achieved so far.
EVIDENCE
He cites a 90 % reduction in project concept timelines at Birla Estates freeing 2,000 man-days per year [34-36]; a 50 % cut in underwriting turnaround and 90 % reduction in credit-assessment preparation in financial services [37]; a real-time factory intelligence layer integrating 24 KPIs at Hindalco [42-43]; a projected 30 % productivity lift in the Tantra micro-finance business [47-49]; and hyper-personalised marketing and real-time inventory intelligence in consumer brands such as Love Etc and Contraband [53-55].
MAJOR DISCUSSION POINT
Early AI-driven operational gains
Argument 5
Structured foundation‑model research on time‑series data, demonstrating models can “understand” market crashes (Speaker 1)
EXPLANATION
Birla AI Labs is pursuing frontier research on foundation models for time‑series data, arguing that these models can capture structural market dynamics rather than merely fitting curves.
EVIDENCE
The speaker describes the research vertical focused on structured foundation models for time-series data, noting a recent paper “Time to Time” that probes whether models truly understand market crashes, and cites experiments where injecting a crash signature shifts forecasts, indicating learned world structure [64-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research on time-series foundation models that embed market-crash signatures and shift forecasts when those signatures are injected is described in the source material [S4][S7].
MAJOR DISCUSSION POINT
Time‑series foundation‑model research
Argument 6
Launch of an AI‑native research and productivity platform (used at IIT Bombay and the speaker’s office) (Speaker 1)
EXPLANATION
The lab released a beta AI‑native platform that combines genetic search, real‑time data processing, and multimodal intelligence to provide contextual insights, and it is already being used internally for productivity.
EVIDENCE
He reports that in December 2024 a beta version of the platform was launched at IIT Bombay, integrating genetic search and multimodal intelligence, and that the platform is now used across his own office to drive day-to-day efficiency [73-76].
MAJOR DISCUSSION POINT
AI‑native productivity platform
Argument 7
Study on how large language model usage influences curiosity and cognitive agency among students (Speaker 1)
EXPLANATION
Birla AI Labs conducted empirical research with Delhi University students to assess the impact of LLM usage on their curiosity and sense of agency, with findings slated for presentation at an academic conference.
EVIDENCE
The speaker mentions a study carried out with Delhi University students measuring how language-model usage affects curiosity and cognitive agency, with results to be presented at King’s College in June [68-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A study conducted with Delhi University students measuring the impact of LLM usage on curiosity and cognitive agency is mentioned in the keynote summary [S4].
MAJOR DISCUSSION POINT
LLM impact on student cognition
Argument 8
Assertion that AI builders must proactively investigate human consequences, not treat them as an afterthought (Speaker 1)
EXPLANATION
The speaker argues that developers of AI have an ethical responsibility to understand and address the societal and human effects of their technologies from the outset, rather than as a secondary concern.
EVIDENCE
He states that AI builders must understand human consequences “not as an afterthought, but as a core part of the enterprise” and emphasizes this responsibility as a guiding principle [71-72].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker’s call for ethical responsibility and the moral dimension of AI development is highlighted in the presentation’s ethical discussion [S4] and reinforced by inclusive-AI perspectives [S16].
MAJOR DISCUSSION POINT
Ethical responsibility of AI developers
Argument 9
Claim that no single institution can navigate the AI epoch alone; need for sustained academia‑industry‑policy partnership (Speaker 1)
EXPLANATION
The speaker contends that addressing the challenges and opportunities of AI requires a collaborative ecosystem that brings together academia, industry, and government in a long‑term partnership.
EVIDENCE
He asserts that “no single institution… can navigate this epoch alone” and calls for an ecosystem that unites academia, industry, and policy for sustained collaboration [92-96].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for a collaborative ecosystem spanning academia, industry, and policy is emphasized repeatedly in the keynote and related commentary [S4][S7][S16].
MAJOR DISCUSSION POINT
Need for collaborative AI ecosystem
Argument 10
Commitment to act as a responsible, front‑line builder of AI technology, institutions, and ecosystem for the nation (Speaker 1)
EXPLANATION
The speaker pledges that Birla AI Labs will take an active, responsible role in shaping India’s AI future, not merely observing but building technology, institutions, and the broader ecosystem with accountability.
EVIDENCE
He emphasizes the group’s intention to be “honest and true responsible builders of technology, of institutions and of the ecosystem this country needs to lead” and affirms this commitment will be pursued with utmost responsibility [96-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker pledges responsible nation-building through AI technology, institutions, and ecosystem development, a stance echoed in the moral-responsibility segment of the talk [S4].
MAJOR DISCUSSION POINT
Responsible AI nation‑building
Agreements
Agreement Points
AI as a transformative catalyst and driver of India’s $40 trillion economic goal
Speakers: Speaker 1
AI as a cognitive amplifier surpassing the Industrial Revolution (Speaker 1) Leveraging AI to reach a $40 trillion economy by 2047 (Speaker 1)
Speaker 1 argues that AI amplifies human cognition far beyond the physical amplification of the Industrial Revolution and positions AI as a decisive engine for India’s ambition to grow from a $4 trillion to a $40 trillion economy by 2047 [13-15][9].
POLICY CONTEXT (KNOWLEDGE BASE)
The ambition to lift India’s economy from $4 trillion to $40 trillion hinges on technology, with AI positioned as a decisive growth driver in the Birla AI Labs keynote, reflecting national policy emphasis under Prime Minister Modi’s leadership [S21][S22].
Dual mandate linking internal business deployment with frontier research and tangible early wins
Speakers: Speaker 1
Dual mandate – serve the Group’s business units and conduct frontier research (Speaker 1) Early wins: 90 % project‑timeline reduction at Birla Estates, 50 % underwriting speed‑up in financial services, real‑time factory intelligence at Hindalco, 30 % productivity lift in Tantra micro‑finance, hyper‑personalised marketing in consumer brands (Speaker 1) Structured foundation‑model research on time‑series data, demonstrating models can “understand” market crashes (Speaker 1) Launch of an AI‑native research and productivity platform (used at IIT Bombay and the speaker’s office) (Speaker 1)
Speaker 1 outlines a dual mandate to act as an apex AI body for the Aditya Birla Group while pursuing frontier research, backs it with concrete efficiency gains across multiple business units, showcases structured time-series foundation-model research, and demonstrates a beta AI-native productivity platform, illustrating a coherent blend of real-world deployment and cutting-edge innovation [24-27][34-36][37][42-43][47-49][53-55][64-68][73-76].
POLICY CONTEXT (KNOWLEDGE BASE)
Birla AI Labs explicitly aims to close the loop between frontier research and real-world applications, embodying a dual mandate of business deployment and early wins [S28]; this aligns with UN Commission guidance on linking emerging frontier technologies to business models for innovation [S27].
Ethical responsibility and the need for a collaborative AI ecosystem
Speakers: Speaker 1
Assertion that AI builders must proactively investigate human consequences, not treat them as an afterthought (Speaker 1) Claim that no single institution can navigate the AI epoch alone; need for sustained academia‑industry‑policy partnership (Speaker 1) Commitment to act as a responsible, front‑line builder of AI technology, institutions, and ecosystem for the nation (Speaker 1)
Speaker 1 stresses that AI developers must embed human-impact considerations from the start, argues that only a sustained partnership among academia, industry and policy can meet AI challenges, and pledges that Birla AI Labs will act as a responsible nation-building AI actor [71-72][92-96][96-98].
POLICY CONTEXT (KNOWLEDGE BASE)
Responsible AI frameworks stress delivering societal value beyond technical metrics and call for multi-stakeholder collaboration, as outlined in discussions on sovereign and responsible AI [S23] and ethical AI governance tools [S24]; broader calls for a robust, collaborative AI ecosystem are echoed in policy dialogues [S29][S31].
Similar Viewpoints
Both arguments highlight the translation of frontier research into usable tools, showing that cutting‑edge time‑series foundation‑model work is being operationalised through an AI‑native productivity platform [64-68][73-76].
Speakers: Speaker 1, Speaker 1
Structured foundation‑model research on time‑series data, demonstrating models can “understand” market crashes (Speaker 1) Launch of an AI‑native research and productivity platform (used at IIT Bombay and the speaker’s office) (Speaker 1)
Both points underline the importance of understanding AI’s impact on human cognition and agency, with empirical research on students supporting the broader ethical call for responsible AI development [68-69][71-72].
Speakers: Speaker 1, Speaker 1
Study on how large language model usage influences curiosity and cognitive agency among students (Speaker 1) Assertion that AI builders must proactively investigate human consequences, not treat them as an afterthought (Speaker 1)
Both articulate a vision of collective, responsible nation‑building in AI, stressing partnership and accountability as core to India’s AI future [92-96][96-98].
Speakers: Speaker 1, Speaker 1
Claim that no single institution can navigate the AI epoch alone; need for sustained academia‑industry‑policy partnership (Speaker 1) Commitment to act as a responsible, front‑line builder of AI technology, institutions, and ecosystem for the nation (Speaker 1)
Unexpected Consensus
Alignment of profit‑driven efficiency gains with a strong ethical stance on AI’s societal impact
Speakers: Speaker 1, Speaker 1
Early wins: 90 % project‑timeline reduction at Birla Estates, 50 % underwriting speed‑up in financial services, real‑time factory intelligence at Hindalco, 30 % productivity lift in Tantra micro‑finance, hyper‑personalised marketing in consumer brands (Speaker 1) Assertion that AI builders must proactively investigate human consequences, not treat them as an afterthought (Speaker 1)
It is noteworthy that the same speaker simultaneously emphasizes large-scale efficiency and productivity improvements for commercial advantage while also insisting that AI developers must treat human consequences as a core responsibility, a pairing that is not always present in corporate AI narratives [34-36][37][42-43][47-49][53-55][71-72].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between profit motives and ethical safeguards is highlighted in analyses of profit-driven AI innovations, which note the need for regulatory frameworks to balance efficiency with societal well-being [S26]; similar concerns are raised in responsible AI literature emphasizing long-term societal benefits over pure efficiency gains [S23].
Overall Assessment

Speaker 1 presents a highly coherent set of arguments that interlink AI as a transformative cognitive amplifier, a dual‑mandate organisational model, demonstrable early business wins, frontier research, ethical responsibility, and a call for a collaborative ecosystem. The internal consistency across economic, technical, and ethical dimensions indicates a strong, unified vision.

Very high internal consensus – all arguments reinforce each other, suggesting a unified strategic stance that could shape policy, industry practice, and research priorities in India’s AI trajectory.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only a single speaker (Speaker 1) delivering a continuous keynote. No other participants are recorded, and all listed arguments originate from the same speaker. Consequently, there are no points of contention, no partial agreements, and no surprising divergences within the discussion.

Minimal – the absence of multiple speakers means the discourse is wholly cohesive, implying that the presented ideas face no internal challenge within this session.

Takeaways
Key takeaways
AI is positioned as a cognitive amplifier that can drive India’s transformation to a $40 trillion economy by 2047, surpassing the impact of the Industrial Revolution. Birla AI Labs has a dual mandate: (1) to serve the Aditya Birla Group’s businesses with AI solutions, and (2) to conduct frontier research and create market‑ready AI products. Early internal deployments have delivered measurable gains: 90 % project‑timeline reduction at Birla Estates, 50 % faster underwriting in financial services, real‑time factory intelligence at Hindalco, 30 % productivity lift in Tantra micro‑finance, and hyper‑personalised marketing in consumer brands. Frontier research focuses on structured foundation‑model work for time‑series data, showing models can capture concepts such as market crashes, and on building AI‑native research/productivity platforms now used at IIT Bombay and the speaker’s office. The lab is also addressing ethical and societal impacts, exemplified by a study on large language model usage affecting student curiosity and cognitive agency, and by asserting that AI builders must embed human‑impact research into their core work. A collaborative AI ecosystem involving academia, industry, and policy is deemed essential; no single institution can navigate the AI epoch alone, and Birla AI Labs commits to being a responsible front‑line builder of technology and institutions for India.
Resolutions and action items
Scale AI deployment across the Aditya Birla Group’s verticals (real‑estate, financial services, metals, micro‑finance, consumer brands) to replicate early efficiency gains. Continue and expand frontier research on structured foundation models for time‑series data, with aims to commercialise proprietary AI products. Roll out the AI‑native research and productivity platform beyond the pilot at IIT Bombay to other academic and corporate users. Publish and disseminate findings from the Delhi University study on LLM impact at King’s College and use insights to shape responsible AI practices. Initiate and nurture a sustained collaboration framework linking academia, industry, and government policy to build India’s AI ecosystem. Maintain a governance stance that embeds ethical impact assessment into all AI development cycles within the group.
Unresolved issues
The optimal depth and speed of AI adoption across complex, capital‑intensive industries remain ambiguous. Specific policy and regulatory mechanisms needed to support responsible AI at national scale were not defined. How to systematically measure and mitigate long‑term cognitive and societal effects of pervasive LLM usage remains an open question. Mechanisms for scaling the proposed academia‑industry‑policy partnership, including funding models and governance structures, were not detailed.
Suggested compromises
None identified
Thought Provoking Comments
The Industrial Revolution amplified our physical capabilities. AI is amplifying our cognitive ones.
This analogy reframes AI not as a mere technological upgrade but as a fundamental shift in how humans think and create value, positioning AI as a cognitive counterpart to a historic economic transformation.
It sets the thematic foundation for the rest of the speech, moving the audience from familiar historical narratives to the novel, high‑stakes context of AI, and primes listeners to view subsequent examples as part of a larger civilizational change.
Speaker: Speaker 1
What we are witnessing today is nothing less than a Cambrian explosion of possibilities – a phase where entirely new forms of value and new modes of human potential are emerging at a pace that defies our linear thinking.
The metaphor of a Cambrian explosion conveys the rapid, unprecedented diversification of AI applications, emphasizing both opportunity and the difficulty of predicting outcomes.
This vivid imagery shifts the tone from descriptive to urgent, creating a sense of momentum that leads into the detailed rollout of Birla AI Labs’ initiatives.
Speaker: Speaker 1
Birla AI Labs has a dual mandate: (1) to serve the group’s internal AI needs, and (2) to operate as a frontier research lab creating proprietary AI products for the open market.
Introducing a two‑pronged strategy highlights a novel organizational model that blends commercial deployment with open‑ended research, challenging the conventional separation between corporate labs and academic institutes.
This statement acts as a structural turning point, segmenting the speech into ‘internal impact’ and ‘global research’ sections, and prepares the audience for the concrete case studies that follow.
Speaker: Speaker 1
Our micro‑finance business, Tantra, is embedding AI across sales, audit and quality control, expecting at least a 30 % productivity gain – meaning a loan officer can reach more people and a woman in a village gets access to capital she would not have had otherwise.
By linking AI deployment directly to social inclusion, the comment expands the conversation from profit‑centric metrics to human impact, challenging the audience to consider AI’s role in equitable development.
It deepens the narrative, moving from high‑level economic ambition to tangible societal benefit, and reinforces the speaker’s claim that AI can be both profitable and purposeful.
Speaker: Speaker 1
Do these time‑series foundation models actually understand what a market crash is? Or are they just fitting curves? A researcher showed you can inject the signature of a historical crash into a model’s hidden states and watch the forecast shift accordingly.
This provocative question interrogates the epistemic limits of current AI models, pushing the audience to think beyond performance metrics toward genuine understanding—a rare self‑critical stance in corporate talks.
It creates a pivot from showcasing successes to acknowledging scientific uncertainty, setting the stage for the lab’s research agenda and signaling intellectual humility.
Speaker: Speaker 1
The industry has a moral obligation to study how AI mediates everyday decisions for 1.7 billion people; we conducted a study with Delhi University students on how language‑model usage affects curiosity and cognitive agency.
This claim introduces ethics and human‑centered research as core responsibilities, challenging the prevailing view that AI development is primarily a technical or commercial pursuit.
It broadens the conversation to include societal responsibility, prompting listeners to view Birla AI Labs as a steward of AI’s societal impact rather than just a profit‑driven entity.
Speaker: Speaker 1
No single institution, no matter how large or well‑resourced, can navigate this epoch alone. It will require an ecosystem that brings academia, industry and policy into genuine, sustained collaboration.
This call for a collaborative ecosystem confronts the siloed nature of AI innovation, advocating for a systemic solution that integrates multiple stakeholders.
It serves as a concluding rallying point, shifting the tone from a corporate showcase to a collective invitation, and leaves the audience with a forward‑looking, inclusive vision.
Speaker: Speaker 1
The question is no longer whether AI can work in complex, capital‑intensive real‑world industries. We have seen it and we know it can. The real question is how fast and how deep should we go, given the ambiguity that surrounds artificial intelligence.
By reframing the debate from feasibility to depth and speed of adoption, the comment pushes participants to grapple with strategic pacing and risk management rather than technical capability alone.
It transitions the discussion from case‑study evidence to strategic decision‑making, prompting listeners to consider governance, scaling, and responsible rollout.
Speaker: Speaker 1
Overall Assessment

The speech’s momentum is driven by a series of high‑impact statements that repeatedly reset the conversation’s focus—from historical analogy to organizational strategy, from commercial wins to ethical responsibility, and finally to ecosystem building. Each thought‑provoking comment acts as a micro‑turning point, steering the audience from awe‑inspiring possibilities to concrete examples, then to critical self‑examination, and ultimately to a collaborative call to action. Collectively, these remarks shape the discussion into a layered narrative that balances ambition with humility, profit with purpose, and isolated innovation with collective stewardship.

Follow-up Questions
Do these time series foundation models actually understand what a market crash is? Or are they just fitting curves?
Assessing whether foundation models capture underlying market dynamics rather than merely curve‑fitting is crucial for trustworthy financial forecasting and risk management.
Speaker: Speaker 1
How fast and how deep should we go, given the ambiguity that surrounds artificial intelligence?
Determining the appropriate pace and depth of AI adoption will guide strategic investments and mitigate risks associated with rapid, unchecked deployment.
Speaker: Speaker 1
What research is needed to advance structured foundation models for time‑series and tabular data?
Time‑series and tabular data represent a massive, under‑exploited asset; developing robust foundation models can unlock predictive intelligence across finance, industry, healthcare, and more.
Speaker: Speaker 1
How does extensive language‑model usage affect human cognition, curiosity, and agency, especially among students?
Understanding AI’s impact on cognition and agency is essential for responsible deployment and for shaping educational policies and safeguards.
Speaker: Speaker 1
What are the performance, usability, and scalability implications of AI‑native research and productivity platforms like the one launched at IIT Bombay?
Evaluating such platforms will inform how frontier research can be translated into everyday productivity tools across enterprises.
Speaker: Speaker 1
How can an ecosystem that genuinely integrates academia, industry, and policy be built to steer India’s AI future?
A sustained, collaborative ecosystem is needed to ensure responsible, inclusive, and globally competitive AI development and governance.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session, led by Jeetu Patel, examined the rapid evolution of artificial intelligence, its emerging phases, and the strategic role India can play in the AI era [3][8-12]. Patel noted that AI development is proceeding faster than ever, moving from early chatbot “magic” into a “second phase” where autonomous agents perform tasks, and soon into a “third phase” of physical AI that will reshape work across many dimensions [8-12]. He argued that this acceleration has already flipped the software development model, citing Cisco’s first product built entirely by AI and warning that the innovation curve now looks vertical, demanding AI to be present in every loop rather than humans [14]. Patel identified three fundamental constraints that could impede AI progress: infrastructure, a context gap, and a trust deficit [18-23][26-31]. The infrastructure constraint stems from insufficient global power, compute, network bandwidth, memory and data-center capacity, which he described as “oxygen for AI” [20-24]. The context gap refers to agents lacking the trillions of tokens of situational awareness humans use, leading to poor decisions unless they are enriched with proprietary enterprise data and the growing volume of machine-generated time-series data [27-30][51-66]. He also stressed that organizations must redesign workflows so that agents drive processes rather than being retrofitted to existing ones [68-71]. The trust deficit arises when users cannot rely on AI systems, requiring both protection of agents from attacks such as jailbreaking and the ability to inject runtime guardrails that prevent unintended actions [73-84]. Patel asserted that Cisco is developing solutions across all three areas, building networks for agents, creating richer contextual layers, and delivering security and observability from GPU utilization to agent behavior [84-91]. He highlighted India’s unique advantages for this AI future: a large, youthful talent pool, a strong digital foundation exemplified by Aadhaar and UPI, and massive scale that provides the data volume AI needs [98-105]. Patel expressed optimism that, with coordinated effort and safe, secure deployment, AI can help solve humanity’s hardest problems, from disease to poverty and education [105]. He concluded by thanking the audience and emphasizing the partnership with India as essential to building that future [106-107].


Keypoints


AI is moving through rapid development phases that are reshaping work.


Patel describes a shift from “intelligent chatbots” to autonomous agents and soon to “physical AI,” arguing that this will “fundamentally re-imagine work across a multitude of dimensions” [8-13].


Three fundamental constraints could stall AI progress: infrastructure, context, and trust.


He identifies “an infrastructure constraint” – insufficient power, compute, bandwidth, and data-center capacity [19-24]; a “context gap” where agents lack the rich, real-time information humans use to make decisions [26-30]; and a “trust deficit” that hinders adoption unless safety and security are built in [31-34][35-36].


A major mindset shift is required: from human-in-the-loop to AI-in-every-loop, treating AI as augmented teammates rather than mere tools.


Patel argues that “instead of having a human in every loop… we need to flip that model and make sure that AI is in every loop” and that AI will become “augmented teammates… working on behalf of humans” [14-16].


Cisco is positioning itself to address these constraints through infrastructure, context-enrichment, and security/observability solutions.


He notes Cisco’s work on “networks that agents will run on,” “contexts that makes it richer for these agents,” and “security that governs these agents” with end-to-end observability across the stack [84-91].


India is highlighted as a strategic partner with unique advantages for AI leadership.


Patel points to India’s “huge talent pool,” “strong digital foundation” (Aadhaar, UPI), and “massive scale” that provides the data needed for AI, expressing optimism about a collaborative future [92-105][106].


Overall purpose/goal:


The discussion aims to map the current AI landscape, pinpoint critical barriers, propose a new collaborative paradigm (AI-in-every-loop), showcase Cisco’s role in overcoming those barriers, and rally Indian stakeholders to partner in building a secure, scalable AI ecosystem that drives global competitiveness and societal benefit.


Overall tone:


The talk begins with high-energy optimism about AI’s transformative potential, moves into a sober, analytical assessment of the three major constraints, shifts to a solution-focused and confident tone when describing Cisco’s initiatives, and concludes with a hopeful, partnership-oriented message that balances excitement with caution about risks. The tone remains constructive throughout, with a noticeable transition from visionary enthusiasm to pragmatic problem-solving and finally to collaborative optimism.


Speakers

Jeetu Patel – President and Chief Product Officer, Cisco Inc.; expertise in AI, networking, and trusted AI solutions. [S1]


Speaker 1 – Event host/moderator (role introducing the keynote speaker). [S4]


Additional speakers:


None identified beyond the listed speakers.


Full session reportComprehensive analysis and detailed insights

The session opened with Speaker 1 stressing that work without resilient, secure infrastructure is both timely and essential [1] and then welcomed Mr Jeetu Patel [2].


Patel thanked the organizers, noted that the AI summit attracted about 250 000 attendees [4], and praised Prime Minister Narendra Modi and Minister Vaishnav for convening the dialogue on AI’s future in India [5-6].


He described AI’s rapid evolution as moving through distinct phases: we are now in the “second phase,” where autonomous agents perform tasks with minimal human supervision [8-9], and he previewed an imminent “third phase – physical AI” that will fundamentally reshape work [13].


Patel highlighted a shift in software development: Cisco has delivered its first product that was 100 % generated by AI, with no human-written code [14]. This has made the innovation curve almost vertical, while the absorption rate of technology still lags behind the pace of innovation [14-16]. Consequently, AI must be placed “in every loop” rather than humans remaining the constant loop [14-16].


He identified three core constraints to further AI progress.


Infrastructure constraint – the world lacks sufficient power, compute, network bandwidth, memory and data-centre capacity, which he called “oxygen for AI” [19-24].


Context gap – agents currently miss the trillions of tokens of situational awareness that humans process every second; Patel illustrated this with an ER-doctor analogy, showing that an agent without patient history would be forced to guess like a coin flip [42-49]. Closing the gap requires enriching agents with proprietary enterprise data and the rapidly growing volume of machine-generated time-series data (≈55 % of future data growth) [64-66], and redesigning workflows so that agents sit at the centre rather than being retro-fitted [68-71].


Trust deficit – the risk with AI has shifted from giving the wrong answer to taking the wrong action[73-77]. Mitigation involves (a) protecting agents from jailbreaks, prompt-injection, tool abuse and data-poisoning, and (b) protecting the world from rogue agents via dynamic runtime guardrails and real-time governance [80-84]. Patel also introduced a new metric of global competitiveness: the ability to safely, securely and efficiently generate AI tokens [35-36].


Cisco is positioning itself to address these constraints in the same order. It is building AI-ready networks and “token-generation factories” that can sustain the steady-state compute demand of autonomous agents, moving away from the spiky consumption of earlier chatbots, while providing end-to-end observability-from GPU utilisation to model performance and agent behaviour [38-42][87-91]. To bridge the context gap, Cisco is developing solutions that connect enterprise data and machine-generated data to agents and urging organisations to redesign processes so that agents drive workflows [51-71]. For trust, Cisco is implementing runtime security measures, guardrails and full-stack observability to protect both agents and the external world [80-84][90-91].


Patel then highlighted India’s strategic advantages: a vast, youthful talent pool that fuels the economy [98-100]; a strong digital foundation exemplified by Aadhaar and UPI, providing scalable identity and payment infrastructure [100-102]; and massive scale that supplies the data volume AI systems need to learn and operate effectively [103-105].


He concluded with optimism, stating that AI can help cure diseases, alleviate poverty and expand equitable education [105]. “The future will be built when humans can confidently delegate tasks to AI, rather than AI building the future on its own,” he said [71-73]. “If every organization and nation can generate AI tokens efficiently and securely, measurable progress will follow.” [71-73] Patel thanked the audience, reaffirmed Cisco’s partnership with India, and urged collective effort to build a safe, secure and prosperous AI future [106-108].


Session transcriptComplete transcript of the session
Speaker 1

works without resilient, secure infrastructure is both timely and essential. Ladies and gentlemen, please welcome Mr. Jeetu Patel.

Jeetu Patel

Namaste. I feel very happy to see India’s progress. So firstly, congratulations to all of you for hosting one of the most spectacular AI summits that the world has ever seen with about 250 ,000 attendees. And congratulations to His Honorable Prime Minister, Mr. Narendra Modi, as well as Minister Vaishnav. For actually bringing us all together to talk about what the possibilities of the future are with AI. So what I thought I’d do is I wanted to actually walk you through. where we are today and what the possibilities are and what the constraints are going to be that we need to overcome. But let me just take a step back and say that we are probably moving at a pace that is faster than we’ve ever either expected or seen before with AI.

And we are now squarely in the second phase of AI. So we started with this kind of notion of intelligent chatbots that answered questions for us that felt like magic three years ago. And now we are at this point where agents are conducting tasks and jobs for us almost fully autonomously. And we are actually soon going to go to the third phase, which is physical AI as well. And what this is going to do is fundamentally reimagine work across a multitude of dimensions and vectors that we had never even imagined before. Now, if you think about what AI is doing, it’s basically forcing us to rethink every assumption that we’ve had in society. and I think a lot of these are going to be positive and we also need to be mindful of the downsides that might be there but if you really think deep and long and hard the first thing that I’d say is the modern development process for software development has completely changed and flipped at this point in time where AI is going to in fact at Cisco we have our first product that was 100 % built and coded with AI where there was no human writing a single line of code what that actually has as an implication is that your exponential curve of innovation is almost going to feel like a vertical line and how we need to adjust for that because right now what’s happening is that the rate of change is going to accelerate but as that acceleration is happening what you’ll find is the absorption rate of technology is going to increase and the absorption rate of technology is still not quite at the same level as the innovation rate of the technology itself And so rather than having a human in every loop, which is the way that we’ve thought about it, we need to flip that model and make sure that AI is in every loop rather than thinking about a human in the loop.

And the big mindset shift that’s starting to occur is this notion that, you know, these aren’t just productivity tools. These are going to be augmented teammates into our society where they will be working on behalf of humans for humans to go out and conduct things that we actually need additional capacity for. So then the question to ask is, what could hold progress back for AI? And we think there are three things that could fundamentally be impediments for the progress of AI. The first one is an infrastructure constraint. And what I mean by that is there’s just not enough power, compute, and network bandwidth in the world. Now there’s not enough memory, enough capacity. There’s not enough capacity to build out the data centers.

These are massive constraints and infrastructure is oxygen for AI. So if you don’t have enough amount of infrastructure, you’re not going to be able to make sure that you can fulfill and harness the full potential of AI. So infrastructure is the first big constraint that we see. The second big constraint is this notion of a context gap. If we think of each one of us in our lives, the way that we think about acknowledging, kind of gathering information, is we are taking trillions of tokens of context every second and assessing it in our brains and actually informing ourselves of what we need to do as we move forward. These agents are going to need to have that same level of context enrichment.

And if they don’t, they’ll still make decisions, but they’re not going to be very good decisions. You know? So the second one is this fundamental context gap. And then the third area is a trust deficit. If you don’t trust these systems, you’re never going to be able to use them. And you’ll actually see that there’s an impedance to adoption. as a result of the absence of trust. So you have to make sure that you actually start to think about safety and security at a fundamentally different level than what we’ve seen before. And if you think about what the new metric for success is, the new metric for global competitiveness moving forward is our ability to safely, securely, and efficiently generate tokens for the use of AI.

And every country and every company is going to actually get measured by your ability to safely, securely, and efficiently generate tokens, and that’ll directly impact your economic prosperity as well as national security. So let’s go into each one of these three constraints and talk about what the dynamics are and how do we need to make sure that we overcome them. Because if you look at the pattern on the infrastructure side, specifically with what’s happening with agents, what you’re starting to see is as we move from chatbots to agents, the pattern of inferencing. is going from this very spiky kind of compute, you know, consumption that used to happen to a much more steady state, you know, kind of persistent demand signal that you’re starting to see in the market, right?

And that’ll have a very, very different level of infrastructure requirements than what we might have seen before. And so I think we have to keep that in mind as we’re building out the rest of, you know, these kind of token generation factories that we’re building out. We’ll need to make sure that they can actually accommodate for that second, you know, kind of behavior model rather than the first one. Now, as you go into the context gap, imagine this. Imagine if you’re an ER doctor and imagine that you actually had an unresponsive patient and you had no charts on the patient, you had no history about the patient, you had no symptoms that you knew that the patient was experiencing.

How would you be able to go treat that patient? You might still be able to do it, but you’re going to make a bunch of guesses, right? An agent without context is still going to make decisions, but those decisions might not be the kind of decisions we want that agent to make. And so we have to make sure that we figure out effective and efficient ways to enrich context for that agent, for the AI. And in the absence of that, it’s just going to be forced to guess. And those guesses are as good as you flipping a coin and your head showing up. So how do you close that context gap? And so the way you close the context gap, the first one is these models have been trained on human -generated data that’s publicly available on the Internet.

But we are running out of human -generated data publicly available on the Internet. So now what you’re starting to see is there’s a tremendous amount of enterprise data that’s actually intellectual property of these companies. Can we make sure that we can enrich these models with this proprietary enterprise data for the purposes of that organization so that they can create competitive differentiation? And so the first one is this. No. The notion of connecting enterprise data to AI and agents. The second big area is this notion of enriching agents with machine data. Because right now what you see is most of the data is human -generated data that these AI models have actually been using. We need to make sure that we use machine data.

What does machine data mean? It’s time series data. When something, you and all of us humans, we start our day by actually consuming machine data. We might check the weather. That’s machine data. As you have more and more agents, 55 % of the growth of data in the world is going to be machine data. As these agents work 7 by 24, you’ll have much, much more data and logs, metrics, events, traces that will actually need to be consumed. So that second area of agents being enriched with machine data is going to be critical for these agents actually operating with a sufficient level of context. And third is you have to embed AI in every workflow. You can’t just actually think about that as, I’m just going to use this machine data.

I’m just going to have a tool that augments to my broken process. You have to fundamentally rethink the process that accommodates these agents. agents don’t adjust to us. We have to adjust our process to the agents so that they can actually be effective for us. So that’s the second big area is the context gap. And then the third area that we talked about was this notion of a trust deficit. And in AI, the risk with AI now is no longer that AI is going to give us the wrong answer. The risk with AI is AI is going to take the wrong action. And when you actually start having an AI take the wrong action, the consequences are far more grave than just giving you the wrong answer.

So what do we have to do? So there’s two areas that we think are going to be really important. The first one is we got to make sure that these agents get protected from the world, which means that jailbreaking an agent, having prompt injection attacks, making sure that there’s a level of tool abuse or data poisoning. We have to make sure that we can protect the agents from that happening. And then the second thing we have to do is we have to make sure that we can protect the world from the agents so that the agent starts going rogue. It’s having behavior that’s unintended. that we can actually make sure that we can provide effective guardrails at runtime because no longer is governance a document.

It’s going to be a runtime implementation so that as the agent is working, if you start seeing that the agent’s doing something that’s not going to be in the best interest of humans, we have to be able to inject guardrails into that dynamically at runtime so that we can make sure that that creates a level of trust for the system. Now, it turns out that Cisco is actually building solutions across all of these three areas. And so the way that we think about ourselves is we want to invent and innovate in making sure that the critical infrastructure for the AI era is as simple to deploy, as safe and secure as we want it to be, and as context -enriched as it can be.

So what are we doing? We’re building networks that agents will run on. We’re building contexts that makes it richer for these agents to operate. in a way that allows us to make sure that we can delegate safely and securely to them and feel good about the outcomes that we’re going to get. And we’re going to make sure that we’re going to have security that governs these agents. And all of this is going to be done with a tremendous amount of observability and visibility that says in every layer of the stack, from the way that the GPU is getting utilized, to how the model is performing, to how the apps are being built, to how the agents are performing, we can have observability from bottom to top.

The entire stack. Because if we can do that and figure out that every company, every country is generating tokens in the most effective and secure way, then you’re going to see that there’s progress being made. Now, why is this a tremendous opportunity for India? Because India is not just going to use AI. The way that we’ve seen over the course of the past week is you’re actually helping shape the direction of the entire world with AI. And I think there’s a few… There’s a few key areas… that we should all feel very hopeful for and why India can be a tremendous contributor to AI for the rest of the world. Not just for India, but for the entire world.

The first one is we have a huge talent pool of young, vibrant, intelligent, smart, educated people within India that actually contribute to the workforce. We’ve got one of the largest groups of people under 30 that are actually going out and contributing to the economy. Number two is we have a very strong digital foundation, having common identity with Aadhaar, having UPI. These are things that in India you all might take for granted, but these are very rare to come by in countries, especially at scale. And third, India actually has massive, massive scale. And why is that important? Because AI works best with scale. Because AI works best when you have the most amount of data. right and so the way i think about this is we have a tremendous opportunity ahead of us now the future is not going to be built by ai alone in fact the future gets built when humans can confidently put ai to work and delegate jobs and tasks to ai in a way that we feel safe and secure and so i i’m actually as hopeful as i’ve ever been however i also feel like there’s tremendous amount of risks of these things can go going sideways and so we as a community have to band together and make sure that we actually work as an ecosystem to keep ai safe and secure because if we do that in the right way we’re going to solve the hardest problems that humanity has faced and reduce and hopefully end suffering in so many different areas we might be able to cure the best uh the the hardest diseases that we’ve not been able to overcome we might be able to overcome poverty we might be able to overcome you , know the gaps around education so that can be evenly distributed to people we can make sure that we can improve people’s quality of life So I think there’s a lot to be excited about, but those constraints need to be kept in mind.

And we are so grateful to be partnering with India in this journey ahead. So thank you all. Take care.

Related ResourcesKnowledge base sources related to the discussion topics (23)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“Work without resilient, secure infrastructure is both timely and essential”

The keynote transcript explicitly states that “works without resilient, secure infrastructure is both timely and essential” [S1] and repeats the same wording in a separate entry [S3].

Confirmedhigh

“Cisco has delivered its first product that was 100 % generated by AI, with no human‑written code”

Cisco’s own briefing notes that the company has already produced a product written entirely with AI-generated code and plans more such products soon [S61].

Additional Contextmedium

“The world lacks sufficient power, compute, network bandwidth, memory and data‑centre capacity – the “oxygen for AI””

Analyses of AI infrastructure highlight power, compute and bandwidth as critical physical limits for AI deployment, echoing the “oxygen for AI” metaphor [S70] and broader infrastructure barriers identified in global AI policy discussions [S42].

Confirmedmedium

“The risk with AI has shifted from giving the wrong answer to taking the wrong action”

AI safety experts note a fundamental change in risk profile: autonomous systems can now act beyond immediate supervision, turning the primary danger into incorrect actions rather than merely incorrect answers [S60].

Confirmedmedium

“Mitigation involves protecting agents from jailbreaks, prompt‑injection, tool abuse and data‑poisoning”

Cisco’s security briefing stresses that AI agents need safeguards against jailbreaks, prompt-injection, tool abuse and data-poisoning before they are deployed in workforces [S61].

Confirmedmedium

“Protecting the world from rogue agents via dynamic runtime guardrails and real‑time governance”

The same Cisco statement highlights the need for runtime security measures and real-time governance to prevent malicious or uncontrolled AI agent behavior [S61].

Confirmedmedium

“Cisco is implementing runtime security measures, guardrails and end‑to‑end observability for AI agents”

Cisco’s public comments describe the rollout of runtime security controls and comprehensive observability (GPU utilisation, model performance, agent behaviour) for its AI offerings [S61].

External Sources (71)
S1
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — -Speaker: No specific role, title, or area of expertise mentioned in the transcript For closing the context gap, Patel …
S2
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Note: The transcript appears to conclude mid-sentence with the introduction of Jeetu Patel from Cisco, suggesting additi…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Impact:This analogy deepened the technical discussion by making the context problem relatable and urgent. It shifted the…
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Good afternoon. Let me just start by thanking Shri Prime Minister Modi Ji for getting all the AI thought leaders togethe…
S8
Invest India Fireside Chat — Evidence:Points to existing UPI payment stack and Aadhaar system as infrastructure foundation. Mentions his wife’s work …
S9
Digital policy in 2019: A mid-year review — Technological innovation is creating new possibilities. Artificial intelligence developments are moving at a fast pace, …
S10
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S11
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — Pentland presented a future where AI agents would handle virtually every business and government process, essentially ad…
S12
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Eltjo Poort: thank you Isadora yeah and thanks for giving me the opportunity to say a few things I there’s a little bit …
S13
AI as critical infrastructure for continuity in public services — “If they don’t know if they can work with some solutions… they will step back and they will go to the more trusted loc…
S14
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S15
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Human-in-the-lead approach rather than human-in-the-loop mentality
S16
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Economic | Future of work Palsule argues that traditional HR policies designed for humans alone are inadequate for the …
S17
Comprehensive Summary: The Future of Robotics and Physical AI — Rus argues for a paradigm shift where machines are designed to understand and adapt to human behavior rather than requir…
S18
Cisco to reinvent network security for the AI era — Cisco hasintroduceda major evolution in security policy management, aiming to help enterprises scale securely without in…
S19
The Evolving Dynamics of Cyberspace: Assessing The Landscape Of Changing Strategic Priorities In Cyberspace — A multi-stakeholder approach is deemed necessary for addressing global challenges like public health and cybersecurity. …
S20
Building Indias Digital and Industrial Future with AI — So, in fact, I was part of one of the entity which set up and contributes to the largest DPI infrastructure today. I use…
S21
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S22
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S23
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — India’s Strategic Advantages Rather than viewing India’s complexity as a challenge, Raghavan presented it as the countr…
S24
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S25
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S26
Artificial Intelligence &amp; Emerging Tech — Connectivity issues in developing countries for leveraging AI are also highlighted. This negative sentiment emphasizes t…
S27
AI as critical infrastructure for continuity in public services — Resilience, data control, and secure compute are core prerequisites for trustworthy AI. Systems must stay operational an…
S28
Discussion Report: Sovereign AI in Defence and National Security — Infrastructure involves resilience at a critical level. And just yesterday, after the Munich drone attack that almost st…
S29
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you. Thank you….
S30
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — At Cisco, he’s leading the company’s transformation into an AI -native networking and security powerhouse. In a world ob…
S31
S32
S33
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S34
Creatives warn that AI is reshaping their jobs — AI isacceleratingacross creative fields, raising concerns among workers who say the technology is reshaping livelihoods …
S35
Workers report major gains from AI use — ChatGPT nowreaches more than 800 million userseach week, and this rapid uptake is fuelling a surge in enterprise AI adop…
S36
UNSC meeting: Artificial intelligence, peace and security — Malta:Thank you, President. And I thank the UK Presidency for holding today’s briefing on this highly topical issue. I a…
S37
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Economic | Development Juliet Mann argues that artificial intelligence is advancing at an unprecedented pace compared t…
S38
AI as critical infrastructure for continuity in public services — “If they don’t know if they can work with some solutions… they will step back and they will go to the more trusted loc…
S39
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — This comment elevated the discussion from technical considerations to geopolitical implications, connecting AI infrastru…
S40
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Jeetu Patel President and Chief Product Officer Cisco Inc — Impact:This comment elevated the discussion from technical considerations to geopolitical implications, connecting AI in…
S41
Building Climate-Resilient Systems with AI — This comment grounded the discussion in practical realities and influenced subsequent speakers to address implementation…
S42
Global AI Policy Framework: International Cooperation and Historical Perspectives — Werner identifies three critical barriers that prevent AI for good use cases from scaling globally. He emphasizes that d…
S43
Inclusive AI Starts with People Not Just Algorithms — Combine human intelligence with artificial intelligence in a coexistence model rather than viewing them as competing for…
S44
Comprehensive Summary: The Future of Robotics and Physical AI — Rus argues for a paradigm shift where machines are designed to understand and adapt to human behavior rather than requir…
S45
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Human-in-the-lead approach rather than human-in-the-loop mentality
S46
Panel Discussion Inclusion Innovation & the Future of AI — No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful s…
S47
Panel Discussion Inclusion Innovation &amp; the Future of AI — That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control…
S48
Cisco to reinvent network security for the AI era — Cisco hasintroduceda major evolution in security policy management, aiming to help enterprises scale securely without in…
S49
Building Indias Digital and Industrial Future with AI — So, in fact, I was part of one of the entity which set up and contributes to the largest DPI infrastructure today. I use…
S50
https://app.faicon.ai/ai-impact-summit-2026/building-indias-digital-and-industrial-future-with-ai — So, in fact, I was part of one of the entity which set up and contributes to the largest DPI infrastructure today. I use…
S51
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S52
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — The discussion featured Hemant Taneja, CEO of General Catalyst venture capital firm, speaking at an AI summit about resp…
S53
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — And it’s a core technology that a country like India must understand. from the foundational level. Otherwise, we will be…
S54
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vivek Raghavan Sarvam AI — India’s Strategic Advantages Rather than viewing India’s complexity as a challenge, Raghavan presented it as the countr…
S55
AI Innovation in India — Bagla articulated a compelling vision of India’s unique advantages in the global AI landscape, asserting that India will…
S56
Opening of the session/OEWG 2025 — Bosnia and Herzegovina: Thank you, Mr. Chair. At the outset, I would like to thank you, Mr. Chair, and your team, as w…
S57
Keynote-Mukesh Dhirubhai Ambani — Distinguished guests, my fellow Indians, namaste. The Global AI Impact Summit is a defining moment in India’s tech histo…
S58
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Sovereignty does not mean solitude. We must work together. But it does mean that we have to work with like -minded count…
S59
How AI agents are quietly rebuilding the foundations of the global economy  — AI agents have rapidly moved from niche research concepts to one of the most discussed technology topics of 2025. Search…
S60
AI Safety at the Global Level Insights from Digital Ministers Of — This shift represents a fundamental change in risk profile, as these autonomous systems can operate beyond immediate hum…
S61
Cisco warns AI agents need checks before joining workforces — The US-based conglomerate Ciscois promotinga future in which AI agents work alongside employees rather than operate as m…
S62
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — There’s often a lag between adoption of technologies and regulations governing them
S63
Leading in the Digital Era: How can the Public Sector prepare for the AI age? — Tawfik Jelassi:Thank you very much, Professor Abassi, for your words. You’re right. The rate of technological advances h…
S64
[Tentative Translation] — During the period of the Fifth Basic Plan, efforts were made to improve the research environment. However…
S65
Contents — The exponential acceleration in the development, convergence and adoption of new technologies in the past few decades is…
S66
Ethical AI_ Keeping Humanity in the Loop While Innovating — Well, in my current role in ETIO, which is the think tank for government of India, we’re looking at what are the unlocks…
S67
The fading of human agency in automated systems — In practice, however, being “in the loop” frequently means supervising outputs under conditions that make meaningful jud…
S68
How AI Is Transforming Diplomacy and Conflict Management — Well, thank you. Thank you so much for inviting me. Is it working? For inviting me to this early morning. And I find thi…
S69
AI Meets Agriculture Building Food Security and Climate Resilien — Technology evaluation requires ongoing research, data collection, feedback loops, and keeping humans in the loop rather …
S70
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Subramaniam identifies three fundamental physical limitations that India cannot avoid: land, water, and power. These con…
S71
India’s AI Future Sovereign Infrastructure and Innovation at Scale — Brandon Mello introduced a sobering statistic: 95% of AI pilots never reach production deployment. The primary barriers …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Jeetu Patel
11 arguments180 words per minute2540 words845 seconds
Argument 1
AI is transitioning from chatbots to autonomous agents and soon to physical AI, fundamentally reimagining work (Jeetu Patel)
EXPLANATION
Patel describes a rapid evolution of AI from early chatbots that seemed magical to today’s agents that can perform tasks autonomously, and anticipates a third phase of physical AI. He asserts that this shift will completely reshape how work is performed across many dimensions.
EVIDENCE
He notes that three years ago AI was limited to intelligent chatbots, but now agents are conducting tasks and jobs almost fully autonomously, and that the next phase will involve physical AI that will fundamentally reimagine work across dimensions that were previously unimaginable [10-13]. He also emphasizes that the pace of AI development is faster than ever seen, placing us in the second phase of AI [8-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel describes a rapid progression from early chat-based bots to autonomous agents and foresees a third phase of physical AI that will fundamentally reshape work across industries [S1][S3].
MAJOR DISCUSSION POINT
Evolution of AI and its societal impact
Argument 2
Software development has flipped: AI can now build 100 % of code without human programmers, creating a vertical acceleration of innovation (Jeetu Patel)
EXPLANATION
Patel claims that AI has already produced a product at Cisco that was entirely coded by AI, with no human writing a single line. This breakthrough turns the usual incremental innovation curve into a near‑vertical surge, outpacing human‑in‑the‑loop models.
EVIDENCE
He explains that Cisco’s first product was 100 % built and coded with AI, illustrating a paradigm shift where AI replaces the human-in-the-loop model and creates a vertical acceleration of innovation [14].
MAJOR DISCUSSION POINT
Software development transformation
Argument 3
Infrastructure shortage – insufficient power, compute, network bandwidth, memory, and data‑center capacity limits AI potential (Jeetu Patel)
EXPLANATION
Patel identifies a fundamental bottleneck: the world lacks enough power, compute, bandwidth, memory, and data‑center capacity to sustain AI growth. He likens infrastructure to oxygen for AI, arguing that without it the technology cannot reach its full potential.
EVIDENCE
He lists the specific constraints: not enough power, compute, network bandwidth, memory, and data-center capacity, describing them as massive constraints and calling infrastructure “oxygen for AI” [19-25].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He characterises the global shortage of power, compute, bandwidth, memory and data-centre capacity as “oxygen for AI”, highlighting it as a core bottleneck for AI growth [S3][S1].
MAJOR DISCUSSION POINT
Key constraints hindering AI progress – Infrastructure
Argument 4
Context gap – agents lacking rich, real‑time contextual information will make poor decisions, similar to a doctor without patient history (Jeetu Patel)
EXPLANATION
Patel argues that AI agents need the same level of contextual awareness that humans process every second; without it, their decisions will be akin to guesses. He illustrates this with a medical analogy of an ER doctor lacking patient charts.
EVIDENCE
He defines the “context gap” as agents missing trillions of tokens of context, leading to poor decisions, and then uses the scenario of an ER doctor without patient history to show how decisions would be guesswork [26-30][42-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel defines a “fundamental context gap” where agents miss trillions of tokens of real-time context, leading to guesswork, illustrated with the ER-doctor analogy [S1][S3].
MAJOR DISCUSSION POINT
Key constraints hindering AI progress – Context gap
Argument 5
Trust deficit – without safety, security, and runtime guardrails, users will not adopt AI, and rogue agent behavior poses serious risk (Jeetu Patel)
EXPLANATION
Patel warns that lack of trust—stemming from safety and security concerns—will block AI adoption. He stresses that agents must be protected from attacks and must not act rogue, requiring runtime guardrails.
EVIDENCE
He describes a “trust deficit” where lack of safety and security impedes adoption, noting that AI risks now include taking the wrong action rather than just giving a wrong answer, and calls for protection against jailbreaking, prompt injection, data poisoning, as well as runtime guardrails to prevent rogue behavior [31-35][73-84].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He warns of a “trust deficit” and calls for protection against jailbreaking, prompt-injection, tool abuse and data-poisoning, plus dynamic runtime guardrails to prevent rogue behaviour [S1][S3].
MAJOR DISCUSSION POINT
Key constraints hindering AI progress – Trust deficit
Argument 6
Building AI‑ready networks and “token generation factories” to support the steady‑state compute demand of autonomous agents (Jeetu Patel)
EXPLANATION
Patel outlines Cisco’s strategy to create infrastructure that can handle the persistent, steady‑state compute load of autonomous agents, rather than the spiky demand of earlier chatbots. He refers to these as “token generation factories”.
EVIDENCE
He explains that as AI moves from chatbots to agents, the inferencing pattern shifts to a steady-state demand, requiring different infrastructure, and mentions building token generation factories to accommodate this new behavior model [38-42]. Later he notes Cisco is building networks for agents and ensuring observability across the stack [87-90].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel outlines Cisco’s plan to create “token generation factories” and AI-ready networks that handle the steady-state inferencing pattern of autonomous agents [S3].
MAJOR DISCUSSION POINT
Cisco’s approach – Infrastructure
AGREED WITH
Speaker 1
Argument 7
Enriching agents with proprietary enterprise data and machine (time‑series) data, and redesigning workflows so processes adapt to agents (Jeetu Patel)
EXPLANATION
Patel says agents need richer context, which can be supplied by proprietary enterprise data and machine‑generated time‑series data. He also calls for a redesign of business processes so that they are built around agents rather than the other way around.
EVIDENCE
He describes using enterprise data that is intellectual property to enrich models, adding machine data (time-series such as weather, logs, metrics) which will constitute 55 % of future data growth, and stresses the need to embed AI in every workflow and adjust processes to agents [51-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He proposes connecting proprietary enterprise data and machine-generated time-series data to models, and redesigning workflows around agents rather than the reverse [S3].
MAJOR DISCUSSION POINT
Cisco’s approach – Context enrichment
Argument 8
Deploying runtime security, guardrails, and end‑to‑end observability across the stack to protect both agents and the world from misuse (Jeetu Patel)
EXPLANATION
Patel emphasizes the need for continuous, runtime security measures that both shield agents from attacks and prevent agents from causing harm. He also highlights comprehensive observability from hardware to applications to ensure secure, trustworthy operation.
EVIDENCE
He outlines protecting agents from jailbreaking, prompt injection, tool abuse, and data poisoning, and protecting the world from rogue agents via dynamic runtime guardrails, followed by a description of end-to-end observability across the stack-from GPU utilization to model performance to agent behavior [80-84][90-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel stresses continuous runtime security, guardrails, and end-to-end observability from GPU utilisation to agent behaviour to ensure safe operation [S1].
MAJOR DISCUSSION POINT
Cisco’s approach – Security and observability
Argument 9
A massive, youthful talent pool gives India the human capital to drive AI innovation (Jeetu Patel)
EXPLANATION
Patel points out that India’s large population of young, educated individuals provides a deep reservoir of talent for AI research, development, and deployment.
EVIDENCE
He notes that India has a huge talent pool of young, vibrant, intelligent, educated people, including one of the largest groups of people under 30 contributing to the economy [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He highlights India’s huge pool of young, educated talent, one of the world’s largest under-30 workforces, as a strategic AI advantage [S1][S7].
MAJOR DISCUSSION POINT
India’s strategic advantage – Talent
Argument 10
Strong digital foundations such as Aadhaar and UPI provide a ready‑made infrastructure for AI deployment (Jeetu Patel)
EXPLANATION
Patel highlights India’s existing digital identity (Aadhaar) and digital payments (UPI) systems as foundational infrastructure that can accelerate AI adoption and integration.
EVIDENCE
He cites India’s strong digital foundation, mentioning common identity with Aadhaar and the UPI payment system as rare, scalable assets [100-102].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Patel points to India’s Aadhaar identity system and UPI payments network as rare, scalable digital foundations that accelerate AI adoption [S3][S8].
MAJOR DISCUSSION POINT
India’s strategic advantage – Digital infrastructure
Argument 11
India’s enormous scale of data generation offers the volume AI needs, positioning the country as a global AI contributor (Jeetu Patel)
EXPLANATION
Patel argues that India’s massive population creates a huge amount of data, which is essential for training and operating AI systems, thereby giving India a strategic role in the global AI ecosystem.
EVIDENCE
He states that India has massive scale, which is important because AI works best with the most amount of data, positioning India as a tremendous contributor to AI for the world [102-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He notes India’s massive population generates vast data volumes, giving the country a pivotal role in the global AI ecosystem [S1][S3].
MAJOR DISCUSSION POINT
India’s strategic advantage – Scale of data
Agreements
Agreement Points
Both speakers stress that resilient and secure infrastructure is essential for AI progress.
Speakers: Speaker 1, Jeetu Patel
Infrastructure shortage — insufficient power, compute, network bandwidth, memory, and data‑center capacity limits AI potential (Jeetu Patel) Building AI‑ready networks and “token generation factories” to support the steady‑state compute demand of autonomous agents (Jeetu Patel)
Speaker 1 opens by saying work cannot proceed without resilient, secure infrastructure [1], and Patel later describes infrastructure as “oxygen for AI” and outlines the shortage of power, compute, bandwidth, memory and data-center capacity [19-25] as a core constraint, and explains Cisco’s effort to build AI-ready networks and token-generation factories to meet steady-state demand [38-42][87-90].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors policy framing that treats AI as critical infrastructure, where resilience and security are seen as prerequisites for trustworthy and secure AI deployment. Such framing appears in analyses of sovereign AI and critical-infrastructure protection [S25][S27][S28] and is reinforced by industry leaders who stress that AI models cannot operate without resilient, secure infrastructure [S30][S31].
Similar Viewpoints
Patel repeatedly emphasizes that a robust, secure, and observable infrastructure is a prerequisite for safe AI deployment. He first frames infrastructure as a bottleneck [19-25], then describes Cisco’s construction of AI‑ready networks and token‑generation factories to handle continuous inferencing [38-42][87-90], and finally calls for runtime security and full‑stack observability to protect both the agents and external systems [80-84][90-91].
Speakers: Jeetu Patel
Infrastructure shortage — insufficient power, compute, network bandwidth, memory, and data‑center capacity limits AI potential (Jeetu Patel) Building AI‑ready networks and “token generation factories” to support the steady‑state compute demand of autonomous agents (Jeetu Patel) Deploying runtime security, guardrails, and end‑to‑end observability across the stack to protect both agents and the world from misuse (Jeetu Patel)
Patel argues that closing the “context gap” requires feeding agents with richer data sources—both proprietary enterprise data and machine‑generated time‑series data—and re‑engineering business processes to place agents at the centre of workflows. He defines the context gap and uses the ER‑doctor analogy [26-30][42-49], then details the need for enterprise data, machine data, and workflow redesign [51-71].
Speakers: Jeetu Patel
Context gap — agents lacking rich, real‑time contextual information will make poor decisions, similar to a doctor without patient history (Jeetu Patel) Enriching agents with proprietary enterprise data and machine (time‑series) data, and redesigning workflows so processes adapt to agents (Jeetu Patel)
Patel links the trust deficit directly to the need for technical safeguards. He describes how lack of safety and security blocks adoption [31-35][73-84] and then outlines concrete measures—protection from jailbreaking, prompt‑injection, data‑poisoning, and dynamic runtime guardrails together with full‑stack observability—to build that trust [80-84][90-91].
Speakers: Jeetu Patel
Trust deficit — without safety, security, and runtime guardrails, users will not adopt AI, and rogue agent behavior poses serious risk (Jeetu Patel) Deploying runtime security, guardrails, and end‑to‑end observability across the stack to protect both agents and the world from misuse (Jeetu Patel)
Unexpected Consensus
The critical role of resilient, secure infrastructure for AI initiatives
Speakers: Speaker 1, Jeetu Patel
Infrastructure shortage — insufficient power, compute, network bandwidth, memory, and data‑center capacity limits AI potential (Jeetu Patel) Building AI‑ready networks and “token generation factories” to support the steady‑state compute demand of autonomous agents (Jeetu Patel)
Speaker 1’s brief opening remark about work being impossible without resilient, secure infrastructure [1] aligns closely with Patel’s detailed discussion of infrastructure as the “oxygen for AI” and his plans for AI-ready networks [19-25][38-42][87-90]. The convergence of a short introductory comment with a comprehensive technical exposition was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis on resilient, secure infrastructure aligns with emerging guidelines that position AI as essential to national-level critical infrastructure, highlighting the need for robust connectivity, data sovereignty, and secure compute to enable AI initiatives [S26][S27][S28]. It is further supported by authoritative statements from technology executives underscoring that AI cannot function without such infrastructure [S30][S32].
Overall Assessment

The discussion shows strong internal coherence in Patel’s presentation, with multiple arguments converging on three pillars: (1) infrastructure capacity and security, (2) contextual richness for agents, and (3) trust through runtime safeguards. The only external agreement comes from Speaker 1, whose opening statement mirrors Patel’s infrastructure emphasis, creating a limited but clear cross‑speaker consensus.

High consensus within the primary speaker’s framework (multiple arguments reinforce each other), but modest overall consensus across speakers, limited to the shared view on infrastructure resilience. This suggests that while the speaker’s agenda is well‑aligned, broader stakeholder alignment would require additional voices to echo the same points.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion shows virtually no substantive disagreement. The only point of convergence is the shared emphasis on the necessity of robust infrastructure for AI, with Speaker 1 offering a high‑level endorsement and Patel providing detailed constraints. No speakers present opposing views on how to achieve AI goals, and there are no surprising conflicts.

Minimal – the participants are aligned on the core issue of infrastructure, implying a cooperative stance that facilitates consensus‑building around AI development priorities.

Partial Agreements
Both speakers stress that resilient and secure infrastructure is a prerequisite for AI progress. Speaker 1 frames it as a timely and essential condition, while Patel elaborates on the specific shortages (power, compute, bandwidth, memory, data‑center capacity) that constitute a critical bottleneck for AI development [1][19-25].
Speakers: Speaker 1, Jeetu Patel
Speaker 1: works without resilient, secure infrastructure is both timely and essential. Jeetu Patel: Infrastructure shortage – insufficient power, compute, network bandwidth, memory, and data‑center capacity limits AI potential (Jeetu Patel)
Takeaways
Key takeaways
AI is rapidly moving from chatbots to autonomous agents and soon to physical AI, fundamentally reshaping work and society. Software development has been transformed: AI can now generate 100 % of code without human programmers, creating a near‑vertical acceleration of innovation. Three major constraints limit AI progress: (1) insufficient infrastructure (power, compute, bandwidth, memory, data‑center capacity), (2) a context gap where agents lack rich, real‑time information, and (3) a trust deficit caused by safety, security, and governance concerns. Cisco’s strategy to address these constraints includes building AI‑ready networks and steady‑state “token generation factories,” enriching agents with proprietary enterprise and machine (time‑series) data, redesigning workflows for AI‑centric processes, and implementing runtime security guardrails with end‑to‑end observability. India possesses strategic advantages for the AI era: a large, youthful talent pool, strong digital foundations (Aadhaar, UPI), and massive data scale, positioning it as a global AI contributor.
Resolutions and action items
Cisco will continue developing AI‑ready network infrastructure and token‑generation factories to meet the steady compute demand of autonomous agents. Cisco will work on solutions to enrich AI agents with enterprise data and machine‑generated time‑series data, and promote workflow redesign to accommodate AI‑in‑the‑loop models. Cisco will implement runtime security, guardrails, and comprehensive observability across the AI stack to mitigate trust and safety risks. India and Cisco will collaborate to leverage India’s talent, digital identity, and data scale for advancing AI capabilities globally.
Unresolved issues
How to rapidly expand global infrastructure (power, compute, bandwidth, memory, data‑center capacity) to meet AI’s growing demands. Effective methods for closing the context gap, especially for real‑time enrichment of agents with proprietary and machine data. Establishing universally accepted metrics and standards for AI trust, safety, and runtime governance. Specific governance frameworks and policies needed to prevent rogue AI behavior and ensure widespread adoption.
Suggested compromises
None identified
Thought Provoking Comments
We are now squarely in the second phase of AI… and we are actually soon going to go to the third phase, which is physical AI… this will fundamentally re‑imagine work across dimensions we never imagined before.
Frames AI evolution as a series of distinct phases, giving the audience a macro‑level roadmap and highlighting that we are on the cusp of a transformative ‘physical AI’ era.
Sets the stage for the rest of the talk, prompting listeners to think beyond chatbots and consider broader societal impact; it leads directly into the discussion of new constraints (infrastructure, context, trust) that must be solved for the next phase.
Speaker: Jeetu Patel
At Cisco we have our first product that was 100 % built and coded with AI where there was no human writing a single line of code.
Provides a concrete, unprecedented example of AI‑generated software, illustrating the speed of innovation and the shift from human‑centric to AI‑centric development.
Triggers the “AI in every loop” idea, shifting the conversation from tools that assist humans to AI becoming the primary creator; it underpins later points about accelerating innovation curves and the need to flip the human‑in‑the‑loop model.
Speaker: Jeetu Patel
Infrastructure is oxygen for AI. If you don’t have enough power, compute, network bandwidth, or memory, you won’t be able to harness the full potential of AI.
Identifies a foundational, often overlooked bottleneck and uses a vivid metaphor that makes the constraint tangible.
Introduces the first of three major constraints, steering the discussion toward concrete resource challenges and prompting the audience to consider investment in data‑center capacity and steady‑state compute for agents.
Speaker: Jeetu Patel
The second big constraint is a context gap… Imagine an ER doctor with no patient history – the agent would be forced to guess, and those guesses are as good as a coin flip.
Uses a relatable medical analogy to explain why lack of rich context leads to poor decisions, making an abstract technical issue emotionally resonant.
Shifts the conversation from hardware limits to data quality and relevance, leading to a deeper dive into enterprise data, machine data, and the need to redesign workflows around agents.
Speaker: Jeetu Patel
The risk with AI now is no longer that it gives the wrong answer, but that it takes the wrong action. We must protect agents from jailbreaks and also protect the world from rogue agents with runtime guardrails.
Reframes the trust issue from static correctness to dynamic safety, emphasizing real‑world consequences of autonomous actions.
Creates a turning point toward governance and security, prompting discussion of runtime guardrails, observability, and the shift from policy documents to active protection mechanisms.
Speaker: Jeetu Patel
The new metric for global competitiveness moving forward is our ability to safely, securely, and efficiently generate tokens for the use of AI.
Proposes a novel, quantifiable benchmark that ties AI capability directly to economic and national security outcomes.
Elevates the conversation to a geopolitical level, encouraging participants to think about national strategies, measurement, and how infrastructure, context, and trust feed into this metric.
Speaker: Jeetu Patel
India has a huge talent pool, a strong digital foundation (Aadhaar, UPI), and massive scale – AI works best with scale and data, so India can be a tremendous contributor to the world’s AI future.
Highlights specific national strengths, turning a global discussion into a localized call to action and optimism for India’s role.
Shifts tone from cautionary to hopeful, rallying the audience around a shared mission and setting up a concluding emphasis on partnership and responsibility.
Speaker: Jeetu Patel
Overall Assessment

Jeetu Patel’s remarks acted as the engine of the discussion, each serving as a pivot that redirected focus and deepened analysis. By first framing AI’s evolutionary phases, he prepared the audience for a forward‑looking mindset. Subsequent comments introduced concrete constraints—infra­structure, context, and trust—each illustrated with vivid analogies that transformed abstract challenges into relatable narratives. The introduction of a new competitiveness metric and the emphasis on India’s unique assets reframed the conversation from technical hurdles to strategic, geopolitical opportunities. Collectively, these thought‑provoking statements guided the flow from a broad vision of AI’s potential to a nuanced roadmap of the resources, data practices, and governance needed to realize that vision, while also galvanizing the audience around a shared national purpose.

Follow-up Questions
What could hold progress back for AI?
Identifies the need to investigate the three major constraints—infrastructure, context gap, and trust deficit—that could impede AI advancement.
Speaker: Jeetu Patel
How do we close the context gap?
Seeks solutions for providing AI agents with sufficient contextual information, a critical step for reliable decision‑making.
Speaker: Jeetu Patel
How can we protect AI agents from jailbreaks, prompt‑injection attacks, tool abuse, and data poisoning?
Calls for research into robust security mechanisms to safeguard agents from malicious manipulation.
Speaker: Jeetu Patel
How can we protect the world from rogue AI agents and implement effective runtime guardrails?
Highlights the need for dynamic, real‑time governance to prevent unintended harmful actions by AI.
Speaker: Jeetu Patel

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Collaborative AI Network – Strengthening Skills Research and Innovation

Collaborative AI Network – Strengthening Skills Research and Innovation

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel opened by framing the session around AI diffusion and the need to treat AI as a form of digital public infrastructure (DPI) that can be trusted, interoperable and shareable [10-14]. Saurabh Garg emphasized that AI must first find concrete use cases before delivering value, likening it to a “solution in search of a problem” and outlining four foundational resources-compute, data sets, talent and models-required for a DPI [12-13][14-15]. He detailed the criteria for “AI-ready” data-discoverability, trustworthiness, interoperability and usability-while stressing privacy safeguards and the importance of locally relevant datasets to capture linguistic and cultural contexts [15-18][16]. To operationalize these ideas, Garg introduced the proposed METRI platform, a voluntary, multi-stakeholder framework aimed at democratizing the four resources and accelerating AI diffusion [23-29].


Speaker 2 built on this by noting that invention of AI has largely occurred in the West, but impact must be generated in the Global South, proposing “100 diffusion pathways by 2030” and highlighting a tripartite collaboration among Kenya, Italy and India with Kizom as a partner [40-54][55-58]. The G7 AI hub representative described how limited access to compute, data and talent hampers adoption, and argued that the hub can unlock resources, create business cases for data-centers, and foster co-architected solutions for smallholder farmers and women entrepreneurs across borders [61-68]. Janet Zhou warned against “pilotitis,” citing historical successes in health and poverty reduction that were achieved when governments were involved from the design phase, and she advocated for institutional capacity, inclusive governance and shared infrastructure to scale AI responsibly [76-86]. She illustrated these principles with the MOSIP open-source digital ID platform, explaining that technical standards, operational support, training and financing are essential even after the “rails” are built [178-193].


Brazil’s Beatriz Vasconcellos shared a concrete example of building a national data ecosystem for early-childhood and environmental policies, centralising chatbot services, and creating a Secretariat for Shared Services to simplify procurement and avoid vendor lock-in [94-102][211-216]. She also warned that outsourcing AI capabilities to large vendors undermines national capacity, likening it to outsourcing a country’s army, and called for building domestic expertise through experimentation and incremental “muscle” development [222-230]. Across speakers, a consensus emerged that interoperable digital rails-such as UPI, digital IDs, and emerging multilingual voice stacks-must be invisible yet trustworthy, enabling AI applications to scale without creating new silos [140-155][168-170].


The discussion concluded that achieving the 100 pathways will require coordinated playbooks, standards, and public-private partnerships that embed AI within existing DPI, ensure inclusive governance, and translate pilots into production-scale services [240-255][178-190]. In sum, the panel agreed that democratizing foundational AI resources, strengthening institutional trust, and building modular, open infrastructure are critical to moving AI from isolated pilots to widespread, equitable impact worldwide [14-15][76-86][240-255].


Keypoints


Democratizing AI’s foundational resources and treating AI as a Digital Public Infrastructure (DPI).


Saurabh Garg highlighted the four core inputs-compute, data sets, talent, and models-and stressed that data must be “AI-ready” (discoverable, trustworthy, interoperable, and usable) to enable trustworthy, shareable AI services [14-15][16-18]. He described the early-stage work on a multi-stakeholder platform called METRI aimed at modular, voluntary development of these resources, positioning AI alongside established DPIs such as Aadhaar and UPI [23-29].


The “100 AI diffusion pathways by 2030” agenda and the need for cross-border collaboration.


Participants noted that AI, like electricity, was invented in the West but must be diffused through use-cases in the Global South [40-44][50-53]. The panel agreed on co-architecting pathways that combine language, voice, and sector-specific solutions for smallholders, women entrepreneurs, and other vulnerable groups, leveraging public-private partnerships and shared infrastructure [67-68][140-148].


From pilot projects to production-scale impact: institutional trust and shared infrastructure.


Both Janet Zhou and Beatriz Vasconcellos emphasized that “pilotitis” is a common barrier and that scaling requires governments to be at the design table, building inclusive, trustworthy institutions and interoperable data ecosystems [71-82][85-86]. Brazil’s approach of creating shared data platforms, thematic data ecosystems, and centralized chatbot services illustrates how breaking data silos and standardizing APIs can move AI from isolated pilots to nationwide services [94-102][120-127].


Digital public infrastructure as the “invisible rail” that makes AI diffusion seamless.


The discussion linked existing DPI such as UPI, DigiLocker, and national digital IDs to emerging AI rails (e.g., multilingual language stacks, voice AI) that should operate silently within everyday workflows [145-152][153-160][165-167]. The panel argued that when AI sits on these trusted rails, it becomes a low-friction, scalable public good rather than a noisy, proprietary product.


Addressing frictions: open standards, vendor lock-in, capacity building, and financing.


Beatriz described Brazil’s “Secretariat for Shared Services” that centralizes procurement to avoid duplicated vendor solutions, while also stressing the need to develop domestic AI talent rather than outsourcing critical capabilities [212-219][222-230]. Janet added that beyond building rails, programmatic support-training, operational assistance, and financing (e.g., World Bank backing for MOSIP) -is essential to lubricate adoption [178-186][191-193].


Overall purpose/goal:


The panel aimed to map a collective roadmap for scaling AI in emerging economies by treating AI as a public utility, sharing best-practice frameworks (e.g., METRI, the use-case adoption framework), and coordinating cross-national pathways that turn pilot projects into sustainable, inclusive services.


Overall tone:


The conversation began with an upbeat, forward-looking tone, celebrating the vision of AI as a democratized public good. As the dialogue progressed, it shifted to a more pragmatic, problem-solving tone, acknowledging concrete challenges such as data readiness, institutional trust, and vendor lock-in. The final minutes retained a courteous, appreciative tone but grew hurried as the session was cut short, ending with thanks and a brief farewell.


Speakers

Janet Zhou – Global development lead for AI across multiple geographies (moderator/participant) [S1][S2]


Speaker 1 – Event moderator/host introducing speakers and guiding the discussion [S3][S5]


Speaker 2 – Event moderator/host facilitating Q&A and panel flow [S9][S11]


Speaker 3 – Panel participant, referenced as “Shalini” in the dialogue; role not specified [S6][S7][S8]


Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India [S12][S13]


Beatriz Vasconcellos – Representative of the Brazilian government discussing AI adoption and digital public infrastructure [S15][S16]


Additional speakers:


Kizom – Representative from Kizom, answering questions about the “100 pathways to 2030” initiative (no external source)


Shalini – Participant addressed by the panel, likely a moderator or co-host (no external source)


Selena – Founder/lead of Zindi, overseeing a network of 100,000 data scientists across Africa (no external source)


Full session reportComprehensive analysis and detailed insights

Opening – The moderator asked all panelists and the two senior officials (Mr Shankar and Mr Saurabh) to pose for a quick photograph before proceeding, and then invited Mr Saurabh Garg, Secretary of MOSPE India, to deliver the keynote address [1-8].


Keynote (Saurabh Garg) – Garg framed the discussion around “AI diffusion”, insisting that artificial intelligence must first demonstrate concrete use-cases (“a solution in search of a problem”) before delivering value [12-13]. He positioned AI as a potential Digital Public Infrastructure (DPI), comparable to Aadhaar or UPI, and argued that for AI to become a trusted, interoperable public good it must satisfy the same criteria that underpin existing DPIs [14-15]. Garg identified four foundational resources – compute, data sets, talent and models – and explained that democratising these inputs is essential for a scalable AI ecosystem [12-15]. To operationalise this vision he introduced METRI (Multi-stakeholder AI for Resilient and Trustworthy Infrastructure), a voluntary, modular platform that enables non-committal collaboration on the four resources [23-28]. He then detailed the four criteria for AI-ready data: (i) discoverable through universal metadata, (ii) trustworthy via quality assessments, (iii) interoperable with unique identifiers, and (iv) usable – standardised and classified according to international norms [15-18]. Garg stressed that access must be balanced with privacy safeguards because data carries linguistic, cultural and local context that determines the relevance of AI-driven inferences [16-18].


Invention vs. impact (moderator) – The moderator noted that AI’s invention has largely occurred in the West, but its impact must be generated in the Global South. Citing a book by Jeffrey Ding, the moderator announced the “100 diffusion pathways by 2030” target, aimed at co-designing sector-specific AI solutions for small-holder farmers, women entrepreneurs and other vulnerable groups [40-44][53-54]. The moderator then asked Kizom to comment on how such pathways could be operationalised.


Kizom (G7 AI Hub representative) – Kizom described the G7 AI Hub’s work to unlock the three scarce resources – compute, data and talent – for Africa and other Global South regions [61-66]. He outlined business cases for on-continent data-centres and GPU clusters and highlighted the importance of diaspora talent returning to “co-architect” AI solutions that combine language, voice and domain knowledge for small-holder farmers and women entrepreneurs [67-68]. Kizom emphasized the need for public-rail infrastructure (identity, payments, data exchange) to scale AI services across borders [55-58].


Janet Zhou (World Bank / MOSIP) – Zhou warned against “pilotitis”, observing that many technologies remain stuck in proof-of-concept phases because governments are brought in only after pilots have concluded [76-77]. She cited historic successes in reducing child mortality and extreme poverty as examples where early government involvement, inclusive institutions and shared, trustworthy DPI enabled population-scale impact [78-82]. Zhou presented the MOSIP open-source digital-ID platform as a template for building trustworthy DPI: open technical standards, reference implementations, operational support, peer-to-peer learning and World Bank financing together “lubricate” adoption [178-186].


Beatriz Vasconcellos (Brazil) – Vasconcellos outlined Brazil’s “one-government-for-each-person” vision, building thematic data ecosystems (early childhood, environment, land, climate) across five ministries with common metadata, identifiers and APIs [94-110]. She described a centralised chatbot procurement process managed by a Secretariat for Shared Services, allowing ministries to obtain a vetted AI solution with a single digital transaction and thereby avoiding duplicated procurement [211-216]. Vasconcellos warned that over-reliance on large external vendors creates strategic lock-in, likening it to outsourcing a nation’s army, and called for domestic capability building through experimentation and “muscle” development [222-230].


AI “rails” discussion (Kizom) – Responding to the moderator’s question about DPI “rails”, Kizom reiterated the “invisible rail” metaphor: users already rely on UPI, DigiLocker and digital IDs, and AI should be embedded invisibly within these rails so that it becomes a seamless part of daily life rather than a noisy, stand-alone product [145-155]. He highlighted emerging public rails such as India’s Bhashini language stack and Africa’s Zindi network of data scientists, which can democratise access for speakers of low-resource languages [158-167].


Safety and guard-rails (moderator & panel) – The moderator asked the panel about safety and guard-rails for AI applications in agriculture and health. Panelists agreed that robust validation, transparent model provenance and domain-specific ethical guidelines are needed to prevent misuse and to protect vulnerable populations [155-160].


Programmatic friction-removal (Janet Zhou) – Zhou returned to the MOSIP example, emphasizing the need for standardised road-rules (technical norms, stop-sign conventions) and operational support (training, delegations, financing) to move from rails to real-world adoption [178-186].


Hardest challenge (Beatriz Vasconcellos) – Vasconcellos identified the broader digital-transformation challenge: changing entrenched processes, centralising procurement, creating a dedicated AI Secretariat, and resisting vendor lock-in. She stressed building national AI talent and capabilities through experimentation rather than continuously buying packaged solutions [222-230].


Use-Case Adoption Framework (Kizom) – Kizom described the framework co-developed with the X-TEP Foundation, Gates Foundation and ~20 countries. The framework differentiates vertical sectoral impact (education, health, climate) from horizontal unlocks (language data, compute, AI-ready data, interoperability) and stresses co-design of pathways, fusion of vertical and horizontal elements, and the role of public-good voice-optimisation tools [240-248].


Audience Q&A – Diversity & voice adoption – An audience member asked about diversity and diffusion. Kizom answered that voice adoption is key to inclusion, enabling AI to reach low-resource language speakers and act as an equaliser [264-267].


Moderator’s final reflections – The moderator highlighted multilinguality as a “leveler”, referenced the recent Amul AI launch and the need for multi-model, open-choice AI stacks (VSA Step Foundation) to avoid vendor lock-in and ensure flexibility [260-266]. He noted that the session was abruptly ended when the room was cleared [271-273].


Closing – The moderator thanked everyone, offered a souvenir on behalf of the India AI team, and formally closed the panel [274-275].


Key take-aways


– AI should be treated as DPI that is trusted, interoperable and shareable, mirroring Aadhaar and UPI [14-15][145-155].


– The four foundational resources – compute, data sets, models and talent – must be democratised; METRI was proposed as a voluntary, multi-stakeholder platform to achieve this [23-28].


– The G7 AI Hub aims to unlock compute, data and talent for Africa, addressing the resource gap highlighted by Kizom [61-66].


– “Pilotitis” can be overcome by involving governments at the design stage, building inclusive institutions and shared infrastructure, as exemplified by MOSIP and historic health-poverty successes [76-82][178-186].


– Brazil’s thematic data ecosystems and centralised chatbot procurement demonstrate how standardised APIs, shared services and a Secretariat for Shared Services can break data silos and avoid vendor lock-in [94-110][211-216][222-230].


– Emerging multilingual and voice stacks (Bhashini, Zindi) act as inclusion rails, turning AI into an equaliser across linguistic communities [158-167][264-267].


– The Use-Case Adoption Framework links sectoral impact with horizontal unlocks and provides a playbook for the 100 diffusion pathways [240-248].


The panel moved from an aspirational framing of AI as a public utility to a concrete, policy-oriented roadmap that stresses data readiness, modular infrastructure, institutional trust, safety guard-rails and the avoidance of vendor lock-in as prerequisites for equitable, population-scale AI impact [12-18][23-28][76-82][211-216][240-248].


Session transcriptComplete transcript of the session
Speaker 1

request all the panelists along with Mr. Shankar and Mr. Saurabh for a picture, please, because everyone has different schedules. So we just want to get a quick photo of this moment before we move ahead. Yeah, content first. All right. Thank you so much. Panelists, you can take your seat. To take us forward, I’d like to invite to deliver a keynote Mr. Saurabh Garg, who is the Secretary of MOSPE India. If you can take us forward. Thank you so much.

Saurabh Garg

Thank you. Good afternoon and great to be here on this session. We’re talking of diffusion, AI diffusion. I’ll just speak of one or two aspects of it because I’m sure the panelists would lend a lot more color to this topic. Just to take off where Shankar left, sometimes he’s talking about use cases and that’s very necessary because AI is perhaps something like a solution in search of a problem. So until and until we don’t find use cases for that, it will not be able to give the value that it potentially can and I think that’s really, really important. We’re talking of AI being a possible DPI, a digital public infrastructure. and I suppose for that some steps would be needed to ensure that it becomes trusted, interoperable and shareable.

I think those are aspects which a DPI like Aadhaar or UPI has and I think we are still in early days but the mechanisms for that, how we can ensure that it would be possible and given that we talk of four resources as foundational AI resources, compute, data sets, talent and models apart from obviously the frameworks that would be necessary for this and I mention this because I had the privilege of chairing the democratizing AI resources group, working group of the AI summit and various… various options that we discussed with other countries on how we can ensure democratization of these four foundational resources. obviously each of them would have a different mechanism but one thing I would just go down in slightly greater details is on the data sets back part which is also something that we are doing within the Ministry of Statistics across the two different ministries and and states and why I am saying data sets is also because perhaps data is also the raw material for AI models so it’s a very foundational resource in that sense and compute is perhaps something that we can acquire and therefore we have discussions around models whether they need to be more efficient they are right now extremely power both compute and energy intensive or we can make them lighter going forward that is something which is work in progress I think it will take some time before the small and domain small domain models come in which will perhaps improve diffusion but data is something that would need to be AI ready going forward and in AI ready I would probably make four things that it needs to be one is discoverable how do you ensure that data is easily discoverable and that’s perhaps by ensuring that the metadata is understood by everyone and that makes it easier for any models also to understand second is on the trustworthiness of the data and that’s the quality assessments that we have whether it’s trustworthy and it’s credible and that would determine its use the third is in its interoperability with the two sets of data sets how interoperable they are what are the kind of unique identifiers it has to be able to identify what is it how to link different data sets and the fourth is its usability across systems and that would be dependent on the standardization and the classifications that we use which are internationally similar so that different conclusions do not come from the same data set.

And obviously, the focus would have to be on access and dissemination so that it is available for use while preserving the privacy of the data, the safeguards that would need to be built. And why I am saying about data is because this would be also where a lot of the local contexts, linguistic contexts, cultural contexts will come in and that will come in from the data sets that are. We talk of ensuring that it is locally relevant, the inferences and the solutions and I suppose the data would determine its relevance. So, we have to be very careful about that. So, we have to be very careful about that. So, we have to be very careful about that.

So, we have to be very careful about that. and ensure that it’s useful at different levels. So I’ll stop here, apart from the fact saying that for democratizing AI resources, the working group discussed with the others and a kind of a platform has been suggested going forward, which has been named as METRI. METRI in Hindi means friendship for those who are not aware. And it’s an acronym for multi -stakeholder AI for resilient and I’m forgetting what’s the T for. Now that, sorry, trustworthy. So and infrastructure. So that’s the acronym that we hope to be able to. But what the concept is that on a modular level, on a voluntary basis level, on a non -commitment level.

how we can develop on the foundational AI resources of availability of compute, data sets, models and talent. And I think the way we are able to develop this and move towards a DPI for AI resources, I am sure diffusion would become all the more easier. So thank you for this opportunity and look forward to a great time. Thank you.

Speaker 2

Thank you everyone. And we will carry on. We don’t have enough panels in which all of us are women. So three cheers for that. Don’t look at each other. You guys had a great contribution. So a couple of weeks back, some of us got together and we said, invention has happened in West. Impact has to happen at each one of us. What’s the gap between invention and impact? And that’s where we came out and thought about we thought about adoption and then we said, isn’t it diffusion? And why did we pick diffusion? We actually read a book. We read a book by Jeffrey Ding. He’s a professor in it’s in D .C. Why am I forgetting the name of the institute?

It’s D .C. Georgetown. Sorry. Georgetown in D .C. And we read about AI diffusion and that GPT, the general purpose technology like electricity, it diffused into the society over several decades. It was created in Europe but actually diffused in U .S. quite a lot. U .S. capture and also chemical engineering which Shankar talked about that chemical the chemical engineering creation if you see, if you remember chemistry, Bohr model you know all those were Germans but actually it’s US who capitalized on that. AI is like that. Invention happened in the West. We all know that. But it’s the global south is going to have use cases who are going to diffuse it into sectors into the and the horizontal enablers have to happen across these sectors for us to benefit, for us to have more economic benefit out of AI.

So that’s when all of us said that yes we will do 100 diffusion pathways by 2030. And one of the partner in crime was Kizom. She is here with us and Kizom my first question is to you. Tell us about how you think because Kenya comes in, you are based in Italy and we did a tripartite with Kenya, Italy and India. How do you think 100 pathways to 2030 pan out for you and what does it mean for you? How do you think it will happen?

Speaker 3

absolutely Shalini how long do I have to answer this question short version long version short version

Speaker 2

as long as people are okay with stories you can carry on

Speaker 3

um well I mean as Saurabhji the chair of the working group for democratization of AI spoke about there are some fundamental resources or inputs AI need in order for it to actually work in a way that can help a common citizen or a small business owner and some of those foundations that he spoke about our AI ready data compute and those are the things that I in my role at the United Nations development program working in Africa in parts of Latin America in Asia discovered that they were able to do in the United Nations development program working in Africa in parts of Latin America in Asia discovered that they were able to do in the United Nations development program working in Africa in Asia discovered that there is a constraint on access to some of these there is a constraint on access to some of these foundational resources And so this G7 AI hub was created to address that constraint by, one, unlocking additional resources from, of course, the friendly G7 countries that wanted to focus on parts of Africa.

But also, as we do that, to think about what is the business case for data centers, for GPUs on the continent? How do you break data silos, even though the global south is so rich in data? As well as how do you orchestrate talent, especially since we saw that much of, you know, let’s say Microsoft’s or big tech’s talent pool on the continent of Africa and in other parts of the world, were actually coming from the global south countries. And over the last one year or so, I’ve seen this tremendous momentum of many of the African people who worked in big tech or large companies moving back to the continent because they actually don’t want the continent to be…

left behind. They want to be co -architects of the future, this fundamental shift that humanity is going through. And this is where when we talk about 100 AI diffusion pathways, it is about co -architecting pathways where we look at how do we bring not just language data, but voice adoption into solutions that a smallholder farmer can use, that a woman entrepreneur can use, and not just as pilots, but to think about it from a infrastructure perspective, a digital public infrastructure perspective where we can scale to millions of farmers, go across national boundaries and be able to look across borders either as digital public goods or as expansion of private sector innovations or public private partnerships. So as Shankar said, diffusion pathways could be many, and it’s for

Speaker 1

Thank you, Kizom. I’ll come to you, Janet. You lead global development for AI across multiple geographies. But most of them are stuck in pilots, right? How does AI become production scale? And do you think it’s funding that they lack only? Or are there more diffusion pathways that we can create so that actually AI pilots most of population scale?

Janet Zhou

Hello? Hi. Maybe I would first start by saying the problem of pilotitis is actually one that sort of predates AI. And we have many technologies that are enormously beneficial for pilotizing. humanity that I think are currently still stuck in not having diffused. But when I think about the positive examples, the places where I think as a global community, we’ve had tremendous scaled impact, right? Having or reducing child mortality by half since 2000, 170 million people out of extreme poverty. The common threads are often that we’ve managed to figure out how to get both governments and markets to really focus and work for the most vulnerable populations. And so whether it’s vaccines that we’re talking about or instant payment systems, often it is really just ensuring that government is there at the design phase, at the table, in the driver’s seat, not brought in after the pilot results come in.

It is very much focused on making sure that… We make it easier for local innovators to be able to enter markets. So whether that… That’s you can aggregate low margin demand, you can streamline market entry, but really making it easy to lower the cost to serve for the most kind of vulnerable people at the edge. And then it is very much also building institutional capacity and making sure that, you know, it’s not there’s playbooks and training and all of that, but really shared infrastructure that allow sort of all boats to rise and making sure that that infrastructure is trustworthy, is inclusive, sort of creates, I think, a really positive feedback loop because I loved what Nanda Nilakani expressed, which is that we, you know, really rely on institutions for trust, not on algorithms.

And I think one of the ways that institutions become trustworthy is by being inclusive and making sure that they actually serve the people that otherwise, would be less to benefit.

Speaker 1

Yeah, absolutely. I think that’s a key that how do you trust the institutions and AI output will, you know, suppose it’s coming out of a AI advisory application. Do you trust that or do you trust the institution which is giving in a physical or will the institution adopt this AI advisory so that there’s more trust on the advice itself being given? I mean, that’s a quite hybrid and risky manner and institutions have to understand AI and adopt and first trust the AI output before they say that this is ours. I think that that part is key on AI adoption. Bia, tell us about Brazil that, you know, a very different perspective. Just let us first let us understand that how is the AI adoption in that region?

And are you also stuck in this pilot? pilot to production and is there a gap and how do you see that being bridged?

Beatriz Vasconcellos

Perfect. So I think there are many different ways and perspectives to think about AI. In the Brazilian government we chose to establish a vision for one government for each person. So that means we are going fully on the personalization and even in the agentic state vision, right? So for that we need to be thinking about some shared infrastructure and shared capabilities. So what we did was starting with the data. We have a project now to not just catalog but also prepare the data sets for training. We are also building some shared platforms for personalization and to understand citizens’ characteristics. So it’s… within our state -owned enterprises, we have two large IT state -owned enterprises, and we are making them collaborate on a shared platform in which we have some canonical data sets about citizens, and every ministry contributes with different characteristics, and we are creating different labels for every citizen.

And then one different way in which we are trying to break the data silos, which, of course, is a very big issue, is to think about the data ecosystems. So we came up with this concept, and it doesn’t mean that we’re doing data lakes. It means that we’re thinking about interoperability from a thematic perspective. So one example is the early childhood ecosystem, data ecosystem. So we know that a lot of policies related to early childhood, they have different data requirements, and they need to use similar registries, and we’re going to look at some of these different data systems and we’re going to look at some of these different data systems and we’re going to look at some of these different data systems and we’re going to look at some of these different data systems and we’re going to look at some of these different data systems So we created this ecosystem.

We brought together five ministries, Ministry of Health, Education, Social Development, Management, and Human Rights. And we cataloged the policies and what kind of data would be needed. And then we started creating the standards for that specific ecosystem. So we prioritized early childhood and environmental and land ecosystems. Land, environmental, land, and climate. It’s in the same group. So we are starting with that, and it seems to be an interesting approach. It seems to be working. The other thing is coming back to the GPI discussion. It is very helpful for us to have the digital ID and authentication to implement this vision. So what we’re doing now is we started, well, a lot of people in the government want to do Gen AI, right?

because I think it’s the easiest and maybe most famous type of AI implementation. So a lot of government entities and ministries wanted to do their own chatbots. So it was being spread all over. So what we did was also to try to centralize that capability. And we started with informational chats. So what kind of policies or information would be helpful. Now we are just starting the transactional part of the chatbots. So the idea is that a citizen will be able to actually complete a service request or get their service done through the chat. And that’s only possible because we have the gov .vr authentication. So we know that the person is actually the right person.

And then the third step, which we still haven’t entered, but that’s the vision, is the agentic state. So the agentic state. To build the agent specific for that person. And that will only be able to happen once we have the data platform. infrastructure. So that’s more or less how we’re thinking about it.

Speaker 1

Okay. And thanks for bringing DPI into the picture because my next question is on that. Nandan announced yesterday 100 Pathways to 2030 because it comes back, comes from a lot of experience on DPI. And Kizo, my next question is to you that, you know, you were also in the DPI journey, working with India. Do you think in AI, there are rails like, you know, DPI lays down rails, roads, which then other countries can take. Do you think in AI, how do use cases cross borders? How do use cases, what are the pathways? What are the playbooks that different countries can benefit from? How do you think that can happen?

Speaker 3

Shalini, great question. And I’m assuming this room is fully aware of or is a user of digital public infrastructure. Raise your hands if you’re not. Oh my God. One or two people. It’s probably one of the reasons why I don’t. Okay, we’re not going to get into that right now. You use UPI, right? You use DigiLocker. You don’t use DigiLocker, but you use DigiYatra. No? Okay. I think you should. But you use UPI. Okay, so he’s a DPI user. And that’s the beauty of digital public infrastructure. You actually want to be invisible. And one of the sort of design, one of the ambitions that we have as part of the AI diffusion pathways is that we actually don’t want AI to be this noisy, chaotic technology.

We want it to be so invisible because it’s actually part of your life. Part of not just our life, because obviously for us it’s very convenient. We’re English speakers. and so we are at the summit but for a small holder farmer, for a small micro -entrepreneur business, a woman who is crossing borders between Guinea and Sierra Leone for example so to go back to your question Shalini one as Bia was already starting to say as she is seeing in Brazil and as certainly we are seeing in many parts of the world including in India, when you have data that’s already interoperable and public rails such as identity payments, data exchange then the power of AI is much more easier to bring to that same service that you wanted to reach out that’s now an AI trackbot to a farmer on those rails so that’s fantastic but then we are also seeing an emergence of additional rails and I think that’s a great point For those of you who are from India, you probably have heard of someone using Bhashani, which is built on AI for Bharat and sort of the Indic language stack.

So that is definitely a public rail. And I know that in different parts of the world, there are many such rails being created. And I hope that we see the emergence of rails, but also the convergence of rails. Because as the French president yesterday was saying, along with Honorable Prime Minister Modi, that it’s not that we need to do more, it’s that we need to do better together. So this is where the public rails really need to come together. And then I want to recognize Selena from Zindi here from Africa. She runs a public infrastructure. She runs a network of 100 ,000 data scientists across Africa. And that’s already infrastructure. It’s public interest, public value. And we’re at a place where we’re trying to figure out what is the business case.

how do we still make them sustainable by creating those innovation layers on top of the public rails that are also emerging on AI but it’s not like you have to compete between DPI and AI it’s like the DPI principles of interoperability, modularity, reusability becoming a digital public good those still remain quite intact and this is how we might see population scale the scale towards impact

Speaker 1

Thank you Kizum for explaining it so well and actually that’s happening because not just the language the multilinguality voice AI that is becoming a DPI because you should be able to interact in voice and the voice stack is something which should be available for most of the people to build on top of it safety, the guardrails they are DPI in itself they can become DPI in itself how do you do safety safe conversations in agriculture So how do you do safe conversations if someone is calling up for patient care in health care? And can those conversations become a playbook in itself? So these are the playbooks which can get created. So thank you so much for talking about it.

I’ll come to you, Janet, that, you know, the frictions which are there, right? I mean, do you think there could be certain programs or investments to be done where such frictions? Because everybody is building like the full stack. Hey, I’ll do language translation. Hey, I will do compute I need, data I need. So how do you remove the frictions and do you think some programs and investments can help this?

Janet Zhou

You know, I was thinking about this question and the example that kind of came to mind that maybe kind of illustrates it really well. I was thinking about the MOSIP program, which is really kind of an open source platform. It’s inspired by the Adar program, but it is part of a larger effort with World Bank and many other partners to actually try to take that open source, you know, vendor free lock in national ID system and bring it to many, many countries. And when I thought about the components of that, the programming components, I know a lot of it was around ensuring that there was sort of an open production ready reference implementation frame. And maybe if we’re going to continue on the road analogy, I was trying to think of like, what would that be?

And you still, if you have a road, you still need to pick sides of the road and agree on which sides of the road everyone’s going to drive. And you have to agree that a stop sign means stop and that red means stop. And so there. Are still, I think, a set of programmatic standards and norms. that really makes it easier, not only for the adoption, but for those that have adopted to be then able to benefit from that adoption. And then, you know, a lot of when I think about programmatically what has happened in something like MOSIP is, in addition to the technical implementation, there was a lot of operational support, a lot of just examples, countries visiting each other.

And, you know, I think India has sent many delegations to many countries to help explain their story, share their pathway. There’s training that needs to be had, right? You still have to get your driver’s license and prove that you know how to use it. And so, you know, I think even after building the rails, there’s still plenty of program implementation work to actually really help facilitate and lubricate that adoption. And, of course, financing as well, which kind of came through the World Bank program. So, you know, it’s a bit of… no sort of single bullet, even after, I think, after… having the rails set. There’s still, I think, a lot of work to be done, program implementation and operational support.

Speaker 2

Thank you. Bia, what’s the hardest challenge? I mean, this all sounds very easy. Have diffusion pathways, go and build it. But it has to be operational. It has to be adopted. They’re people, right? The human in the loop is the most important in AI. We can never ignore that. What’s the hardest challenge that you see in this? Just one? Just one? Oh, we’re lucky.

Beatriz Vasconcellos

about those. So obviously, it’s not just creating applications. It’s the same old story of digital transformation, right? It’s just at a different level, but you’ve got to change the processes, the way things work. So what we’re doing, I think maybe three interesting things that we are trying to do. Also, I’m not trying to sell it, right? Everything that we are doing, we’re passing. So let’s see what works and what doesn’t. But one thing that we’re doing now is in the Ministry of Management, we have a Secretariat for Shared Services, and they didn’t used to work with AI. So the idea is that we make it very, very simple for any ministry to use a service that is centralized in the Ministry of Management.

So for example, with these chatbots that I was telling you about, we came up, we’ve centralized the procurement, and we chose one that was going to be a service that was going to be a service that was going to be a service vendor to help us build the solution. and each ministry was doing their own. So we said, hey, if you buy it through the centralized service, you only need, it takes just a few hours, you just need to sign a document and transfer some money digitally to the Ministry of Management and you can use the service. So you don’t have to go through any procurement. So that’s one way that we’re trying to overcome the problem of multiple solutions and difficult implementation.

We also came up with an interesting, I think, institutional arrangement, which is, when we’re talking about AI, we’re talking about innovation and new capabilities. We’re talking about innovation capabilities through the Ministry of Management, which means… that they’re building the whole process of how you can come up with first a policy goal, like what the AI project is going to target, how do you experiment. They build the process for experimenting. They have analysts looking at the data and seeing if things are working. So that’s something that seems to be working well, and we think it’s going to be good. The other real challenge that we have, I think, is with the vendors. And I’m using my development hat here from my previous background.

I think everyone is talking about AI and how every agency and ministry needs to be doing something on AI. And obviously there are some big vendors who are saying, yeah, like you government, you don’t have the capabilities. We have the capabilities. We can do it very fast. We do it at scale. And if you start making these decisions day after day, you’re not going to build any capabilities. You’ll just… You’re not going to build sources. and I use an analogy with for example the army no one thinks that it’s reasonable to outsource your army to a country that has a stronger army or a better army but in terms of digital we’re doing it every day like for every decision oh this country, this company does it better so we’re just going to outsource and there are some essential capabilities it’s not just an AI tool or something like we’re playing with national data, we have some very strategic goals also so I think if we don’t think about building these capabilities even if you start small and it takes a while to build the muscles we’ve got to build the muscles so we’re trying to incentivize also the agencies to test and experiment and don’t buy prepackaged solutions because we’ve got to build our own muscles

Speaker 2

yeah I think you brought in a very valid point which is what a lot of people are scared of is a vendor lock -in oh we’re going to have to do this and we’re going to have to do this and you would have seen Amul AI which got launched by the Prime Minister and VSA Step Foundation made it possible and the one key thing was there that how do you keep it multi -model like multiple models should be able to do it why just one and that’s been a key thing that how do you give choice to people how do you give you know not logged into the system because that’s where the diffusion works.

Diffusion is not about like concentrated western LLMs all together and just deploy it. It’s about actually walking the path give choice, replaceability have domain knowledge, have data with you because data is there with us in our enterprise systems and we don’t want you know learning from them. How can you separate that so it’s about actually and now this know -how that we have got We want to share with everybody. And that playbook is a diffusion pathway. That’s exactly and that gives an example for that. Kizum, you and me co -authored a paper, which is up in Atlantic Council. And we also talk about the use case adoption framework. Would you like to tell people about the use case adoption framework and how that can be a friction remover?

Speaker 3

Oh, absolutely. And I’m looking for the key author that I saw of the use case adoption framework, Tanvi Lal, director at People Plus AI at XTREP Foundation. So, you know, when we were preparing for the AI Impact Summit many months ago, which feels like many years ago, we started with this idea saying adoption. Adoption is proving to be a challenge. what are we learning from our experience? And this is where XCHEP looked at Mahavish Star, its work with AI for Bharat, its ongoing conversations with Entropic and other private sector companies on safety tooling. And I did the same across a number of countries, and I think together we consulted about 20 -plus countries, had convenings in South Africa to New York to, I don’t know, many, many more places along with the Gates Foundation as well.

And what we learned was that the impact of what technology like artificial intelligence can do sits in sectors, so education, health, climate change, but its ability to move from pilot to scale depends on the horizontal unlocks. So this 100 AI diffusion pathways, underpinning that is this framework that we call… the AI adoption framework, the use case adoption framework. where we see impact in sectors where you need contextual data, contextual knowledge, process, workflows, things that have to change in a department of education to a department of health and so on. But then the horizontal unlocks are the language data, compute. Generally, how do you make AI data AI -ready? Or how do you make data interoperable because a farmer is going to be buying things, selling things, getting public services?

So we have to think about it from a user life perspective. So this is really, I think, a bit about the use case adoption framework that we’ve done together with countries, Gates Foundation, with XTEP. And we hope that this helps us ground our 100 AI diffusion pathways because as Shalini was saying, this is not about just going and saying, I have the solution, you adopt it. we’re not going to see that impact with that approach. We’ll have to co -design some pathways. We’ll have to fuse verticals and horizontals. And this is where, at least when I talk to many innovators, private sector companies in the global south, I see them saying, aha, this is how we co -architect the future.

This is where, when we develop a voice optimization solution as a public good, that goes out to the world. We are builders of the future too. So this, you know, it’s just such a powerful kind of learning that we’ve put together into this 100 AI diffusion pathways towards impact.

Speaker 2

Thank you. Thank you, Kizam. I’m looking at the time, and I would like to ask the audience to have like two questions. So please raise your hand if, yeah. I saw yours first, and okay. would anybody like to take it

Speaker 3

yeah yeah I was so distracted by the crowd that’s coming in we’re getting kicked out guys so I think your question was how to address diversity and diffusion but if you can’t read can you hear? because this is where I think voice adoption is something that is key to the inclusion agenda and the impact agenda of AI so I would say to answer your question, voice adoption

Speaker 2

yeah actually that’s why AI becomes more equalizer and it actually bridges the divide right so there are inequalities and how do you bring in new language, today bringing a new language into a model has become fairly easy fairly so bringing a new language which we can talk about yeah that’s available there is data which is logged in PDFs into various regions and people are not knowing that today that’s become easier so that’s how it is a leveler that’s a trusted source that’s a trusted source so I’ll maybe talk to you later about what Mr. Saurabh Garg talked about and how evidence one last question yeah I think you’re talking about a pivotal moment right I think you’re talking about a pivotal moment one like you know I am not a fortune teller right but what I can do is I do understand the ecosystem about AI I think the fact that multilinguality can be one very big change because it draws people in what is change about change is always about people that how people are able to when UPI was initially talked about it was like bank said I have to change my whole system about it the user friendliness of it and the fact that it’s so easy to deploy and by people is what drew to it so any AI moment which draws people in because of the interoperability the usability and the fact that itself will become has it happened no can it happen yes and multilinguality is one of them but we have to see that how it pans out Okay, thank you so much Thank you very much We have been kicked out of the room and a great panel Thank you, bye

Speaker 1

Thank you everyone for joining us and sharing your thoughtful views On behalf of India AI team we would like to offer a souvenir with our sincere thanks Thank you so much Thank you Thank you

Related ResourcesKnowledge base sources related to the discussion topics (15)
Factual NotesClaims verified against the Diplo knowledge base (8)
Confirmedhigh

“The moderator asked all panelists and the two senior officials (Mr Shankar and Mr Saurabh) to pose for a quick photograph before proceeding, and then invited Mr Saurabh Garg, Secretary of MOSPE India, to deliver the keynote address.”

The transcript excerpts confirm that the moderator called for a group photograph before the discussion began [S26] and [S82] (also echoed in [S83]). The source does not mention Mr Saurabh Garg’s title, but the photo request is verified.

Confirmedhigh

“Garg framed the discussion around “AI diffusion”, insisting that artificial intelligence must first demonstrate concrete use‑cases (“a solution in search of a problem”) before delivering value.”

The speaker’s wording matches the description in the knowledge base that AI can be “a solution in search of a problem” until real use-cases are found [S2].

Confirmedhigh

“He positioned AI as a potential Digital Public Infrastructure (DPI), comparable to Aadhaar or UPI.”

The knowledge base explicitly states that AI is being discussed as a possible DPI, likening it to Aadhaar and UPI [S1].

Confirmedhigh

“Garg identified four foundational resources – compute, data sets, talent and models – and explained that democratising these inputs is essential for a scalable AI ecosystem.”

The source refers to “four resources” that underpin a DPI, aligning with the four pillars mentioned by Garg [S1].

Confirmedhigh

“He detailed the four criteria for AI‑ready data: (i) discoverable through universal metadata, (ii) trustworthy via quality assessments, (iii) interoperable with unique identifiers, (iv) usable – standardised and classified according to international norms.”

The knowledge base lists the same four elements for AI-ready data infrastructure – discoverability, trustworthiness, interoperability and usability [S45].

Confirmedhigh

“Garg stressed that access must be balanced with privacy safeguards because data carries linguistic, cultural and local context that determines the relevance of AI‑driven inferences.”

The source notes Garg’s emphasis on protecting individual privacy while making data valuable for AI applications [S85].

Additional Contextmedium

“Garg’s four AI‑ready data criteria imply the need for universal metadata, quality assessment mechanisms, unique identifiers and international standards.”

The detailed description of each criterion (metadata structures, quality metrics, identifiers, and standard classifications) is provided in the knowledge base, adding nuance to the summary’s brief list [S45].

Additional Contextmedium

“The four foundational resources (compute, data, talent, models) are the only pillars required for AI democratization.”

The knowledge base also mentions a broader set of six foundational pillars – compute, capability, collaboration, connectivity, compliance, and data – offering additional context to Garg’s four-resource framing [S18].

External Sources (87)
S1
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Janet Zhou – Speaker 1- Janet Zhou
S2
Collaborative AI Network – Strengthening Skills Research and Innovation — Speakers:Beatriz Vasconcellos, Janet Zhou Speakers:Janet Zhou, Beatriz Vasconcellos Speakers:Janet Zhou, Speaker 3 Sp…
S3
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S4
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S5
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S6
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S7
S9
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S10
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S11
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S12
The Foundation of AI Democratizing Compute Data Infrastructure — 700 words | 130 words per minute | Duration: 321 secondss I’m Saurabh Garg. I’m secretary in the Ministry of Statistics…
S13
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S14
https://dig.watch/event/india-ai-impact-summit-2026/the-foundation-of-ai-democratizing-compute-data-infrastructure — As far as the dependencies, that’s the second part of the question that you asked me. I think one of the areas is that w…
S15
Collaborative AI Network – Strengthening Skills Research and Innovation — – Beatriz Vasconcellos- Speaker 1 – Speaker 3- Beatriz Vasconcellos
S16
Collaborative AI Network – Strengthening Skills Research and Innovation — Speakers:Beatriz Vasconcellos, Janet Zhou Speakers:Beatriz Vasconcellos, Speaker 1 Speakers:Beatriz Vasconcellos, Spea…
S17
S18
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg also referenced observations about the contrast between current AI systems requiring gigawatts of power and hum…
S19
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg advocates for treating compute infrastructure as a public utility that enables innovation and research. Rather …
S20
WS #53 Promoting Children’s Rights and Inclusion in the Digital Age — – Speaker 3 (Janatu): Department of Public Administration, Kumile University Speaker 3: Hello, everyone. My topic is…
S21
Internet Society’s Collaborative Leadership Exchange (CLX) | IGF 2023 Day 0 Event #95 — Speaker 3:I’m Jeremy. I’m from Myanmar. Today I just would like to point out the digital guidelines about the online gov…
S22
AI in Africa: Beyond the algorithm — ### The Systematic Exclusion of the Global South Kallot concluded with a call for South-to-South collaboration and a vi…
S23
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ## Areas of Different Emphasis and Debate ## Conclusion and Next Steps ## Key Participants and Perspectives ## Major …
S24
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Lacina Kone: What you just mentioned, I would say the impact is actually worse than all of this. We talked about the bia…
S25
Building Population-Scale Digital Public Infrastructure for AI — bought which farmers use and millions of farmers today, 2 .5 million farmers have downloaded this app. And this was buil…
S26
Building Population-Scale Digital Public Infrastructure for AI — In other words, by 2030, all of us together across the world will develop these pathways to diffuse the use of AI in a p…
S27
How Multilingual AI Bridges the Gap to Inclusive Access — “AI can only serve the public good if it serves all languages and all cultures.”[1]. “Today, linguistic exclusion remain…
S28
How Multilingual AI Bridges the Gap to Inclusive Access — It’s wonderful. At NTU Singapore, we’re the newest members of ICAIN, but it’s fantastic that the… And I’ve only been a…
S29
WS #119 AI for Multilingual Inclusion — – Encouraging learning and use of multiple languages 5. Promoting Language Equity and Inclusion Athanase Bahizire: Th…
S30
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S31
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Harleen Kaur outlined the policy framework built around four pillars: treating foundational datasets as public goods, in…
S32
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — – Strategies for including marginalized communities in policy-making processes 3. Contextualising Policies and Technolo…
S33
Multi-stakeholder Discussion on issues about Generative AI — Thus, collaboration, dialogue, and capacity-building around AI are encouraged. Collaboration is necessary due to the cro…
S34
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S35
AI Meets Agriculture Building Food Security and Climate Resilien — We must build intelligence into our public systems to help everyone. For India, the change is essential. It is the key t…
S36
Keynote-Rishad Premji — “The conversation has fundamentally shifted from possibility to practicality.”[16]”From experimentation to adoption and …
S37
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary:All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prev…
S38
High-Level sessions: Setting the Scene – Global Supply Chain Challenges and Solutions — Acknowledging the complexities of modern supply chains exacerbated by geopolitical disturbances, the COVID-19 pandemic a…
S39
Regulating Open Data_ Principles Challenges and Opportunities — It is also evident in the market concentration of hyperscale cloud providers whose global dominance shapes where data is…
S40
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building…
S41
Artificial Intelligence &amp; Emerging Tech — Jörn Erbguth:Thank you very much. So I’m EuroDIG subject matter expert for human rights and privacy and also affiliated …
S42
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — – Jimson Olufuye- Jai Ganesh Udayasankaran- Sophie Tomlinson Cross-border and regional cooperation enhances sandbox eff…
S43
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Development | Sociocultural Emphasis on building use cases in key sectors and creating shareable repositories across ge…
S44
Digital Skills : — | EBSIC-EDIC – Observer status for the Ministry of Investment, Regional Development and Informatization of the Slovak Re…
S45
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Saurabh Garg opened the discussion by outlining four essential elements for AI-ready data infrastructure. First, dis…
S46
Collaborative AI Network – Strengthening Skills Research and Innovation — Garg detailed four critical requirements for AI-ready data: discoverable (through proper metadata), trustworthy (through…
S47
Collaborative AI Network – Strengthening Skills Research and Innovation — This comment provides a systematic framework for thinking about data preparation for AI, moving beyond generic discussio…
S48
AI as critical infrastructure for continuity in public services — “I believe that there is perhaps awareness challenge as well as the capacity challenge, because I think that this whole …
S49
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — The discussion revealed a common theme across different contexts: the gap between policy ambition and implementation cap…
S50
WS #150 Language and inclusion – multilingual names — These key comments shaped the discussion by broadening its scope from purely technical considerations to include policy,…
S51
WS #226 Strengthening Multistakeholder Participation — Elizabeth Bacon from Public Internet Registry emphasized that technical internet governance requires diversity of views …
S52
Building Population-Scale Digital Public Infrastructure for AI — The discussion reveals subtle but important disagreements about implementation approaches rather than fundamental goals….
S53
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — I mean, access to compute is what makes or breaks a startup. So the way in India, the way I see it, the way we have star…
S54
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — DPI has a lot of aspects and characteristics this is one of the very productive and efficient way to ensure the interope…
S55
Keynote by Dr. Pramod Varma Co-founder & Chief Architect NFH India AI Impact Summit — Friday evening can be really hard. It’s tiring right after a long week. So thank you for having me here and I don’t want…
S56
Building Public Interest AI Catalytic Funding for Equitable Compute Access — The Maitri framework is built around six foundational pillars: compute, capability, collaboration, connectivity, complia…
S57
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S58
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S59
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — Open architecture principles prove essential for avoiding vendor lock-in whilst maintaining data residency within India….
S60
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S61
How Multilingual AI Bridges the Gap to Inclusive Access — “AI can only serve the public good if it serves all languages and all cultures.”[1]. “Today, linguistic exclusion remain…
S62
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S63
Agents of Change AI for Government Services & Climate Resilience — Summary:There is unanimous agreement that while AI agents offer significant benefits, robust guardrails, transparency, a…
S64
How nonprofits are using AI-based innovations to scale their impact — Responsible AI and Safety Integration: Unlike typical software development where quality comes later, this program embed…
S65
Agents of Change AI for Government Services &amp; Climate Resilience — This identifies a critical gap often overlooked in AI deployment discussions – the human capacity building required for …
S66
How Trust and Safety Drive Innovation and Sustainable Growth — Alexandra Reeve Givens This insight identifies a critical gap in current regulatory approaches – that AI creates an ‘en…
S67
Effective Governance for Open Digital Ecosystems | IGF 2023 Open Forum #65 — Digital Public Infrastructure (DPI) is defined as society-wide digital capabilities that are essential for citizens, ent…
S68
Collaborative AI Network – Strengthening Skills Research and Innovation — “We’re talking of AI being a possible DPI, a digital public infrastructure.”[1]. “I think those are aspects which a DPI …
S69
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Harleen Kaur outlined the policy framework built around four pillars: treating foundational datasets as public goods, in…
S70
Panel Discussion AI in Digital Public Infrastructure (DPI) India AI Impact Summit — So I think, firstly, India’s journey in DPIs has been a fascinating one. It makes me immensely proud that whichever coun…
S71
Multi-stakeholder Discussion on issues about Generative AI — Thus, collaboration, dialogue, and capacity-building around AI are encouraged. Collaboration is necessary due to the cro…
S72
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S73
Building Population-Scale Digital Public Infrastructure for AI — The 100 diffusion pathways by 2030 initiative represents an ambitious attempt to systematise this transformation. Succes…
S74
Building Scalable AI Through Global South Partnerships — Building Collaborative Pathways: The panel discussed the “100 Pathways to 2030” initiative and the importance of reducin…
S75
AI for agriculture Scaling Intelegence for food and climate resiliance — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S76
Collaborative AI Network – Strengthening Skills Research and Innovation — Zhou argues that developing strong institutional capabilities and creating shared infrastructure leads to a virtuous cyc…
S77
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Gradual transition from proof-of-concepts to production to scale, allowing trust to be built incrementally
S78
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Summary:All speakers strongly advocate for open, interoperable standards that enable cross-vendor compatibility and prev…
S79
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — The panelists drew parallels between current AI standardization efforts and the historical success of internet protocols…
S80
Evolving AI, evolving governance: from principles to action | IGF 2023 WS #196 — In conclusion, the discussions shed light on the challenges posed by time and financial commitments in capacity building…
S81
Digital Public Goods and the Challenges with Discoverability | IGF 2023 — Furthermore, there are considerable challenges surrounding procurement processes and system lock-overs that impede the a…
S82
https://app.faicon.ai/ai-impact-summit-2026/building-population-scale-digital-public-infrastructure-for-ai — Thank you so much, Mr. Nandan. At this point, I would love to invite our panelists up to the stage. We’ll start by takin…
S83
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thank you so much. Thank you to all our esteemed panelists and senior officers who are here. May I request all our panel…
S84
Scaling AI for Billions_ Building Digital Public Infrastructure — “Because trust is starting to become measurable, right, through provenance, through authenticity, as well as verificatio…
S85
Regional Leaders Discuss AI-Ready Digital Infrastructure — Dr. Garg emphasizes the need to create mechanisms that allow data to be valuable beyond AI applications while ensuring i…
S86
IGF 2023 WS #313 Generative AI systems facing UNESCO AI Ethics Recommendation — The analysis explores different perspectives on the impact of Artificial Intelligence (AI) on society. One viewpoint hig…
S87
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — We make systems and making decisions about who receives public services, who qualifies for a loan, or who is flagged for…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
S
Saurabh Garg
5 arguments130 words per minute866 words397 seconds
Argument 1
AI must be trusted, interoperable, and shareable like Aadhaar/UPI (Saurabh Garg)
EXPLANATION
Saurabh argues that for AI to function as a Digital Public Infrastructure it must embody the same qualities of trust, interoperability and shareability that underpin successful Indian DPI examples such as Aadhaar and UPI. Without these attributes AI cannot deliver the public value it promises.
EVIDENCE
He explicitly states that AI should become a possible DPI and that it needs to be trusted, interoperable and shareable, citing Aadhaar and UPI as existing DPI models that exhibit these characteristics [14-15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for systems that are trusted, interoperable, shareable and reusable is highlighted in [S12], and the focus on building trustworthy AI systems is discussed in [S17].
MAJOR DISCUSSION POINT
Trust and interoperability as DPI fundamentals
AGREED WITH
Speaker 3, Janet Zhou
Argument 2
Four foundational resources (compute, data sets, models, talent) and the METRI multi‑stakeholder platform for voluntary, modular development (Saurabh Garg)
EXPLANATION
Saurabh outlines the four core AI resources—compute, data sets, models and talent—and proposes the METRI platform as a voluntary, modular mechanism for stakeholders to collaboratively develop and democratize these resources. METRI is presented as a multi‑stakeholder infrastructure to accelerate AI diffusion.
EVIDENCE
He lists the four foundational resources and describes METRI as a multi-stakeholder AI platform that is voluntary, modular and non-committal, intended to develop compute, data sets, models and talent [15-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four core AI resources and the proposal of the METRI platform are described in [S1]; further mention of METRI as a suggested platform appears in [S2].
MAJOR DISCUSSION POINT
Foundational AI resources and collaborative platform
AGREED WITH
Speaker 3, Janet Zhou
DISAGREED WITH
Beatriz Vasconcellos, Janet Zhou
Argument 3
Emphasis on modular, non‑committal collaboration to build AI resources as a public good (Saurabh Garg)
EXPLANATION
Saurabh stresses that AI resource development should follow a modular, voluntary approach without binding commitments, allowing diverse stakeholders to contribute to a public‑good AI ecosystem. This design mirrors the flexibility needed for a digital public infrastructure.
EVIDENCE
He describes the METRI concept as operating on a modular, voluntary, non-commitment level to develop foundational AI resources [28-29].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A non-binding, voluntary, modular approach for AI collaboration is outlined in the Maitri framework in [S18], echoing the modular, non-committal stance. Similar modular principles are noted in [S12].
MAJOR DISCUSSION POINT
Modular, voluntary collaboration for AI as a public good
Argument 4
Data must be discoverable, trustworthy, interoperable, and usable; standardized metadata and identifiers are key (Saurabh Garg)
EXPLANATION
Saurabh defines AI‑ready data through four criteria: discoverability, trustworthiness, interoperability, and usability, emphasizing the need for common metadata standards and unique identifiers to enable seamless AI model consumption. These standards are essential for building trustworthy AI systems.
EVIDENCE
He details the four requirements for AI-ready data-discoverable, trustworthy, interoperable, usable-and links them to metadata, quality assessments, unique identifiers and standard classifications [15].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The four pillars of AI-ready data-discoverability, trustworthiness, interoperability, and usability-are broken down with metadata standards and unique identifiers in [S2]; the importance of trusted, standardized data is also emphasized in [S12].
MAJOR DISCUSSION POINT
Four pillars of AI‑ready data
AGREED WITH
Speaker 3, Beatriz Vasconcellos
Argument 5
METRI platform serves as a playbook for modular, stakeholder‑driven AI development (Saurabh Garg)
EXPLANATION
Saurabh positions METRI not only as a platform but also as a practical playbook that guides stakeholders through modular, collaborative development of AI resources, ensuring alignment with DPI principles. It is intended to streamline the diffusion process.
EVIDENCE
He notes that the METRI platform has been suggested as a way forward and is envisioned as a modular, voluntary framework for developing compute, data sets, models and talent [23-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
METRI is presented as a practical playbook guiding modular, stakeholder-driven AI development in [S1], with additional framing of its role as a framework in [S2].
MAJOR DISCUSSION POINT
METRI as a practical playbook
S
Speaker 3
3 arguments143 words per minute1512 words631 seconds
Argument 1
Diffusion pathways should be invisible, built on existing public rails such as UPI and DigiLocker (Speaker 3)
EXPLANATION
Speaker 3 argues that AI diffusion should be seamless and invisible to users, leveraging existing digital public infrastructure like UPI and DigiLocker as foundational rails. By embedding AI into these familiar services, adoption becomes effortless across diverse contexts.
EVIDENCE
He references common Indian DPI services (UPI, DigiLocker, DigiYatra) and states that AI should be invisible, built on such public rails to reach farmers, micro-entrepreneurs and cross-border users [145-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Large-scale farmer applications built on existing digital rails illustrate invisible diffusion in [S25]; the broader vision of AI diffusion pathways leveraging existing infrastructure is described in [S26].
MAJOR DISCUSSION POINT
AI invisibility through existing DPI rails
AGREED WITH
Saurabh Garg, Janet Zhou
DISAGREED WITH
Beatriz Vasconcellos
Argument 2
G7 AI Hub created to unlock compute, data, and talent in Africa and other Global South regions (Speaker 3)
EXPLANATION
Speaker 3 describes the G7 AI Hub as an initiative that mobilizes additional compute capacity, data access and talent resources for Africa and other Global South regions, aiming to reduce the resource gap that hampers AI diffusion. The hub is presented as a response to constraints identified by UNDP work.
EVIDENCE
He explains that the G7 AI hub was created to address constraints on foundational resources by unlocking compute, data and talent for Africa and other Global South areas [61-66].
MAJOR DISCUSSION POINT
G7 AI Hub addressing resource constraints
DISAGREED WITH
Beatriz Vasconcellos
Argument 3
Use‑case Adoption Framework links sectoral impact with horizontal unlocks (compute, data, language) to guide diffusion pathways (Speaker 3)
EXPLANATION
Speaker 3 introduces a Use‑case Adoption Framework that connects sector‑specific impact (education, health, climate) with horizontal enablers such as compute, data and language, providing a structured playbook for scaling AI diffusion pathways. The framework is intended to co‑design solutions that are both vertical and horizontal.
EVIDENCE
He outlines the framework, noting that impact in sectors depends on horizontal unlocks like language, data, compute, and that the framework guides the 100 AI diffusion pathways [240-248].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The linkage of sectoral impact with horizontal unlocks (language, data, compute) is explained in [S2]; the framework guiding the 100 AI diffusion pathways is referenced in [S1].
MAJOR DISCUSSION POINT
Framework linking vertical impact with horizontal enablers
AGREED WITH
Janet Zhou, Beatriz Vasconcellos, Speaker 2
S
Speaker 2
2 arguments122 words per minute1028 words504 seconds
Argument 1
Invention occurs in the West; impact requires 100 diffusion pathways by 2030 to bring AI to the Global South (Speaker 2)
EXPLANATION
Speaker 2 observes that AI inventions predominantly originate in the West, but stresses that achieving meaningful impact requires establishing 100 diffusion pathways by 2030, especially targeting the Global South. This approach aims to translate invention into widespread societal benefit.
EVIDENCE
He notes that AI was invented in the West, cites historical diffusion of technologies, and states the commitment to 100 diffusion pathways by 2030 for the Global South [46-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The commitment to 100 diffusion pathways by 2030 for the Global South is documented in both [S1] and [S2].
MAJOR DISCUSSION POINT
Bridging invention‑impact gap with 100 pathways
Argument 2
Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions (Speaker 2)
EXPLANATION
Speaker 2 argues that multilingual and voice AI capabilities serve as inclusion rails that level the playing field, enabling AI to reach linguistically diverse populations and reduce digital inequities. He highlights the ease of adding new languages as a catalyst for broader adoption.
EVIDENCE
He explains that multilinguality can be added easily, making AI a leveler and trusted source, and that voice adoption is key to inclusion, especially for underserved regions [264-267].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of multilingual AI as a democratic imperative and inclusion rail is discussed in [S27], [S28] and [S29].
MAJOR DISCUSSION POINT
Multilingual/voice AI as equalizer
AGREED WITH
Speaker 3
S
Speaker 1
1 argument120 words per minute647 words323 seconds
Argument 1
Question on cross‑border use‑cases and rail‑based playbooks for AI diffusion (Speaker 1)
EXPLANATION
Speaker 1 raises a question about how AI use‑cases can transcend national boundaries and whether established digital rails can provide reusable playbooks for AI diffusion across countries. The inquiry seeks guidance on creating interoperable pathways.
EVIDENCE
He asks whether AI use-cases can cross borders, what the pathways are, and how playbooks can be shared, referencing DPI rails and prior experience with 100 pathways [132-138].
MAJOR DISCUSSION POINT
Cross‑border AI use‑cases and playbooks
J
Janet Zhou
2 arguments154 words per minute715 words277 seconds
Argument 1
Pilotitis is solved by early government involvement, shared infrastructure, and inclusive institutions that build trust (Janet Zhou)
EXPLANATION
Janet explains that the persistent problem of “pilotitis” can be overcome by involving governments from the design phase, creating shared infrastructure, and fostering inclusive institutions that engender trust in AI solutions. These measures ensure that pilots scale to production.
EVIDENCE
She cites examples such as vaccines and instant payment systems, emphasizing early government participation, shared infrastructure and inclusive institutions as keys to scaling impact [76-82].
MAJOR DISCUSSION POINT
Early government role to overcome pilotitis
AGREED WITH
Beatriz Vasconcellos, Speaker 3, Speaker 2
Argument 2
MOSIP open‑source identity platform illustrates how standards, operational support, and financing lubricate adoption of digital infrastructure (Janet Zhou)
EXPLANATION
Janet presents the MOSIP open‑source digital ID platform as a case study showing that technical standards, operational assistance, peer‑to‑peer learning, and financing (e.g., World Bank) are essential to promote widespread adoption of digital public infrastructure. The example demonstrates how playbooks translate into real‑world deployment.
EVIDENCE
She describes MOSIP’s open-source nature, its reliance on reference implementations, the need for agreed-upon standards (stop signs, road sides), operational support such as country delegations, training, and financing through the World Bank [179-192].
MAJOR DISCUSSION POINT
Standards, support and financing enable DPI adoption
AGREED WITH
Saurabh Garg, Speaker 3
DISAGREED WITH
Saurabh Garg, Beatriz Vasconcellos
B
Beatriz Vasconcellos
3 arguments154 words per minute1218 words474 seconds
Argument 1
Centralized procurement and shared services reduce implementation friction across ministries (Beatriz Vasconcellos)
EXPLANATION
Beatriz outlines a strategy where a centralized procurement unit within the Ministry of Management provides AI services to all ministries, simplifying acquisition, cutting procurement time, and avoiding duplicated solutions. This shared‑service model aims to streamline implementation.
EVIDENCE
She explains that ministries can obtain AI services through a single centralized procurement process, requiring only a simple document and digital transfer, eliminating separate procurement procedures [211-216].
MAJOR DISCUSSION POINT
Centralized procurement to lower friction
AGREED WITH
Janet Zhou, Speaker 3, Speaker 2
DISAGREED WITH
Speaker 3
Argument 2
Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity (Beatriz Vasconcellos)
EXPLANATION
Beatriz warns that excessive dependence on foreign AI vendors creates lock‑in, undermining national strategic capacity. She advocates building domestic expertise and encouraging agencies to experiment rather than buying pre‑packaged solutions.
EVIDENCE
She discusses vendor lock-in, compares outsourcing to military reliance, and stresses the need to develop internal capabilities and incentivize agencies to test and experiment [222-230].
MAJOR DISCUSSION POINT
Avoiding vendor lock‑in through capacity building
DISAGREED WITH
Speaker 3
Argument 3
Brazil’s thematic data ecosystems (early childhood, environment, land) create standards and interoperability across ministries (Beatriz Vasconcellos)
EXPLANATION
Beatriz describes Brazil’s approach of building thematic data ecosystems—such as early childhood, environmental and land—where multiple ministries collaborate to define standards, catalog policies and data needs, and ensure interoperability. This ecosystem model facilitates data sharing and AI readiness.
EVIDENCE
She details the creation of data ecosystems, the involvement of five ministries, the cataloguing of policies, and the development of standards for early childhood and environmental data, illustrating how interoperability is achieved [99-110].
MAJOR DISCUSSION POINT
Thematic data ecosystems for interoperability
AGREED WITH
Saurabh Garg, Speaker 3
Agreements
Agreement Points
AI diffusion requires trusted, interoperable, and shareable digital public infrastructure similar to Aadhaar/UPI
Speakers: Saurabh Garg, Speaker 3, Janet Zhou
AI must be trusted, interoperable, and shareable like Aadhaar/UPI (Saurabh Garg) Diffusion pathways should be invisible, built on existing public rails such as UPI and DigiLocker (Speaker 3) Pilotitis is solved by early government involvement, shared infrastructure, and inclusive institutions that build trust (Janet Zhou)
All three speakers stress that for AI to deliver public value it must be embedded in a trusted, interoperable, and shareable digital public infrastructure, leveraging existing DPI services (e.g., Aadhaar, UPI, DigiLocker) and involving government early to build trust. [14-15][145-155][76-82]
POLICY CONTEXT (KNOWLEDGE BASE)
The need for trusted, interoperable digital public infrastructure mirrors India’s Aadhaar/UPI-style DPI model and is echoed in global discussions on population-scale AI infrastructure [S53][S54][S59].
A modular, voluntary, multi‑stakeholder approach (e.g., METRI) is essential for democratizing AI resources
Speakers: Saurabh Garg, Speaker 3, Janet Zhou
Four foundational resources (compute, data sets, models, talent) and the METRI multi‑stakeholder platform for voluntary, modular development (Saurabh Garg) Diffusion pathways should be invisible, built on existing public rails … modular, voluntary collaboration (Speaker 3) MOSIP open‑source identity platform illustrates how standards, operational support, and financing lubricate adoption of digital infrastructure (Janet Zhou)
The speakers converge on the need for a modular, non-binding, multi-stakeholder platform to develop AI foundations, citing METRI, the principle of invisible modular diffusion, and the MOSIP open-source model as concrete examples. [28-30][152-155][179-186]
POLICY CONTEXT (KNOWLEDGE BASE)
A voluntary, modular, multi-stakeholder framework such as METRI aligns with the Maitri non-binding modular approach and multistakeholder participation principles advocated in recent AI policy dialogues [S56][S51][S58].
AI‑ready data must be discoverable, trustworthy, interoperable, and usable, supported by common metadata and identifiers
Speakers: Saurabh Garg, Speaker 3, Beatriz Vasconcellos
Data must be discoverable, trustworthy, interoperable, and usable; standardized metadata and identifiers are key (Saurabh Garg) AI invisibility relies on interoperable public rails and data interoperability (Speaker 3) Brazil’s thematic data ecosystems (early childhood, environment, land) create standards and interoperability across ministries (Beatriz Vasconcellos)
All three emphasize that data readiness hinges on discoverability, trustworthiness, interoperability, and usability, requiring standardized metadata, unique identifiers, and thematic ecosystems to enable AI integration. [15][145-155][99-110]
POLICY CONTEXT (KNOWLEDGE BASE)
Four pillars of AI-ready data-discoverability, trustworthiness, interoperability, and usability-have been codified by Dr. Saurabh Garg and adopted in collaborative AI networks [S45][S46][S47].
Scaling AI from pilots to production needs clear frameworks/playbooks, shared services and financing mechanisms
Speakers: Janet Zhou, Beatriz Vasconcellos, Speaker 3, Speaker 2
Pilotitis is solved by early government involvement, shared infrastructure, and inclusive institutions that build trust (Janet Zhou) Centralized procurement and shared services reduce implementation friction across ministries (Beatriz Vasconcellos) Use‑case Adoption Framework links sectoral impact with horizontal unlocks (compute, data, language) to guide diffusion pathways (Speaker 3) Invention has happened in the West; impact requires 100 diffusion pathways by 2030 (Speaker 2)
The participants agree that moving AI beyond pilots requires institutional frameworks, shared procurement/services, and financing, exemplified by early government engagement, centralized AI service hubs, a use-case adoption framework, and the 100-pathway target. [76-82][211-216][240-248][46-53]
POLICY CONTEXT (KNOWLEDGE BASE)
Scaling from pilots to production is highlighted in policy roadmaps that call for playbooks, shared services and financing mechanisms, as discussed in local AI policy pathways and capacity-building reports [S43][S48][S49][S65].
Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions
Speakers: Speaker 2, Speaker 3
Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions (Speaker 2) Emergence of language stacks such as Bhashani for Bharat illustrates public rails for multilingual AI (Speaker 3)
Both speakers highlight that adding languages and voice capabilities creates inclusive rails that democratize AI access for diverse linguistic communities. [264-267][158-160]
POLICY CONTEXT (KNOWLEDGE BASE)
Multilingual and voice AI are framed as democratic inclusion tools in multiple policy briefs, emphasizing language equity as a core AI objective [S50][S61][S60].
Avoiding vendor lock‑in and building domestic AI capability is essential for strategic autonomy
Speakers: Beatriz Vasconcellos
Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity (Beatriz Vasconcellos)
Beatriz stresses the strategic risk of vendor lock-in and calls for domestic capacity building and experimentation rather than reliance on packaged foreign solutions. [222-230]
POLICY CONTEXT (KNOWLEDGE BASE)
Avoiding vendor lock-in and fostering domestic AI capability is a recurring theme in open-architecture DPI strategies and sovereignty-focused AI recommendations [S59][S58][S53].
Similar Viewpoints
Both argue that AI diffusion should leverage existing trusted DPI services (e.g., UPI) to make AI seamless and trustworthy for users. [14-15][145-155]
Speakers: Saurabh Garg, Speaker 3
AI must be trusted, interoperable, and shareable like Aadhaar/UPI (Saurabh Garg) Diffusion pathways should be invisible, built on existing public rails such as UPI and DigiLocker (Speaker 3)
Both propose institutional mechanisms—early government participation and shared service models—to overcome pilotitis and streamline AI deployment. [76-82][211-216]
Speakers: Janet Zhou, Beatriz Vasconcellos
Pilotitis is solved by early government involvement, shared infrastructure, and inclusive institutions that build trust (Janet Zhou) Centralized procurement and shared services reduce implementation friction across ministries (Beatriz Vasconcellos)
Both see language and voice capabilities as critical inclusion rails that enable AI to reach underserved populations. [264-267][158-160]
Speakers: Speaker 2, Speaker 3
Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions (Speaker 2) Emergence of language stacks such as Bhashani for Bharat illustrates public rails for multilingual AI (Speaker 3)
Both stress the necessity of data standards, metadata, and interoperable ecosystems to make data AI‑ready. [15][99-110]
Speakers: Saurabh Garg, Beatriz Vasconcellos
Data must be discoverable, trustworthy, interoperable, and usable; standardized metadata and identifiers are key (Saurabh Garg) Brazil’s thematic data ecosystems (early childhood, environment, land) create standards and interoperability across ministries (Beatriz Vasconcellos)
Unexpected Consensus
Cross‑border AI use‑cases can be enabled through shared digital rails
Speakers: Speaker 1, Speaker 3
Question on cross‑border use‑cases and rail‑based playbooks for AI diffusion (Speaker 1) AI invisibility built on public rails can reach users across national borders (Speaker 3)
While Speaker 1 only raised the question, Speaker 3 immediately aligned by describing how existing DPI rails (e.g., UPI, Bhashani) already support cross-border AI services, indicating an implicit consensus on the feasibility of transnational diffusion pathways. [132-138][155-160]
POLICY CONTEXT (KNOWLEDGE BASE)
Cross-border AI use-cases are promoted through joint sandboxes and shared repositories, reflecting a consensus on regional cooperation for AI deployment [S42][S43][S52].
Overall Assessment

There is strong convergence among participants on four core pillars: (1) embedding AI within trusted, interoperable digital public infrastructure; (2) adopting modular, voluntary, multi‑stakeholder platforms (e.g., METRI, MOSIP) to democratize foundational AI resources; (3) establishing robust data standards to make data AI‑ready; and (4) creating institutional frameworks, shared services, and financing to move AI from pilots to production, with multilingual/voice capabilities highlighted as key inclusion rails.

High consensus – the majority of speakers echo each other’s positions, indicating a shared understanding that coordinated DPI‑based, standards‑driven, and inclusive approaches are essential for scaling AI impact. This alignment suggests that policy recommendations and collaborative initiatives (e.g., METRI, use‑case adoption framework, MOSIP‑style support) are likely to gain broad support across stakeholders.

Differences
Different Viewpoints
Governance model for democratizing AI resources – voluntary, modular METRI platform versus strong national capacity building and standardized operational support
Speakers: Saurabh Garg, Beatriz Vasconcellos, Janet Zhou
Four foundational resources (compute, data sets, models, talent) and the METRI multi‑stakeholder platform for voluntary, modular development (Saurabh Garg) Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity (Beatriz Vasconcellos) MOSIP open‑source identity platform illustrates how standards, operational support, and financing lubricate adoption of digital infrastructure (Janet Zhou)
Saurabh proposes a voluntary, modular METRI platform as a playbook for collaborative development of AI resources [23-30]. Beatriz warns that dependence on external vendors creates lock-in and stresses building domestic capability through centralized procurement and shared services [222-230]. Janet highlights the need for clear technical standards, operational assistance, peer-to-peer learning and financing (MOSIP) to ensure adoption [179-192]. The three positions differ on the degree of external coordination versus national capacity and standardisation required to democratise AI resources.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between a voluntary METRI-style platform and a strong national capacity model reflects ongoing debates on centralized versus distributed AI governance captured in recent AI infrastructure dialogues [S52][S56][S58].
Approach to user‑facing AI – invisible integration on existing DPI rails versus visible centralized AI service procurement
Speakers: Speaker 3, Beatriz Vasconcellos
Diffusion pathways should be invisible, built on existing public rails such as UPI and DigiLocker (Speaker 3) Centralized procurement and shared services reduce implementation friction across ministries (Beatriz Vasconcellos)
Speaker 3 argues that AI should be embedded invisibly within familiar DPI services like UPI, DigiLocker, making it seamless for users [145-155]. Beatriz proposes a visible, centralized AI procurement unit that ministries can access through a simple process, aiming to lower friction and avoid duplicated solutions [211-216]. The disagreement lies in whether AI should be hidden within existing services or offered as a distinct, centrally managed service.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on invisible AI integration versus visible centralized procurement echo concerns about transparency and safety guardrails raised in AI governance forums [S52][S63][S64].
Reliance on private‑sector platforms versus public‑led infrastructure for AI diffusion
Speakers: Speaker 3, Beatriz Vasconcellos
G7 AI Hub created to unlock compute, data, and talent in Africa and other Global South regions (Speaker 3) Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity (Beatriz Vasconcellos)
Speaker 3 highlights the G7 AI Hub and networks like Zindi as public-interest infrastructure that can accelerate AI diffusion by providing compute, data and talent resources [61-66][162-166]. Beatriz cautions that heavy dependence on external vendors can cause lock-in and erode strategic capacity, advocating for domestic capability building [222-230]. The tension is between leveraging external multi-stakeholder platforms and preserving national autonomy.
POLICY CONTEXT (KNOWLEDGE BASE)
The choice between private-sector platforms and public-led infrastructure is debated in the context of community-driven AI ecosystems and public-interest AI frameworks [S52][S58][S59][S41].
Unexpected Differences
Perceived ease of adding multilingual/voice capabilities versus the need for substantial capacity and governance
Speakers: Speaker 2, Beatriz Vasconcellos
Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions (Speaker 2) Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity (Beatriz Vasconcellos)
Speaker 2 presents multilingual addition as a relatively simple technical step that can quickly level the playing field [264-267]. Beatriz, however, stresses that even such additions require domestic expertise and warns against depending on external vendors, implying that language support is not merely a plug-and-play feature but tied to broader capacity building [222-230]. The contrast between a “simple” solution and the need for deep national capability was not anticipated.
POLICY CONTEXT (KNOWLEDGE BASE)
While multilingual AI is touted as easy to add, policy analyses stress the substantial capacity and governance needed to deliver it responsibly [S61][S65][S50].
Invisible AI integration versus the necessity of visible safety and trust guardrails
Speakers: Speaker 3, Speaker 2
Diffusion pathways should be invisible, built on existing public rails such as UPI and DigiLocker (Speaker 3) AI becomes an equalizer; multilingual/voice AI must be trusted, safe, and include guardrails for sectors like health (Speaker 2)
Speaker 3 advocates for AI to operate silently within existing DPI services, making it unnoticed by end-users [145-155]. Conversely, Speaker 2 stresses that AI’s equalizing power depends on trust, safety, and guardrails-especially in sensitive domains like healthcare-suggesting that visibility and explicit safeguards are essential [264-267]. The tension between a completely invisible AI layer and the demand for overt safety mechanisms was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
The need for visible safety and trust guardrails against invisible AI integration is highlighted in responsible AI guidelines and enforcement-invisibility studies [S63][S64][S66][S52].
Overall Assessment

The panel largely shares a common vision of scaling AI beyond pilots through trustworthy, interoperable data and public‑infrastructure‑based diffusion pathways. However, substantive disagreements emerge around the governance model (voluntary METRI platform vs. strong national standards and capacity building), the user‑experience design (invisible AI embedded in existing DPI vs. visible centralized services), and the reliance on external versus domestic resources for multilingual and voice capabilities. These divergences reflect differing priorities between multistakeholder, modular collaboration and nation‑centric capacity development.

Moderate – while there is consensus on the overarching goal of AI diffusion, the varied approaches to governance, implementation, and capacity raise notable friction. If unresolved, these disagreements could lead to fragmented diffusion strategies, with some regions pursuing open modular platforms and others reinforcing national, standards‑driven pathways, potentially limiting the coherence and speed of AI adoption across the Global South.

Partial Agreements
All three agree that AI must move beyond isolated pilots to production‑scale impact. Janet stresses early government participation and shared infrastructure to avoid pilotitis [76-82]. Beatriz proposes a centralized procurement model to streamline scaling [211-216]. Saurabh suggests a multi‑stakeholder METRI platform to democratise the core resources needed for scaling [23-30]. While the end goal is shared, the pathways—government‑led design, centralized procurement, or voluntary multi‑stakeholder platform—differ.
Speakers: Janet Zhou, Beatriz Vasconcellos, Saurabh Garg
Pilotitis is solved by early government involvement, shared infrastructure, and inclusive institutions that build trust (Janet Zhou) Centralized procurement and shared services reduce implementation friction across ministries (Beatriz Vasconcellos) Four foundational resources (compute, data sets, models, talent) and the METRI multi‑stakeholder platform for voluntary, modular development (Saurabh Garg)
Both emphasize the centrality of standards and trustworthy data for AI diffusion. Saurabh outlines four pillars of AI‑ready data—discoverability, trustworthiness, interoperability, usability—linked to metadata and identifiers [15]. Janet shows how MOSIP’s technical standards, reference implementations, and operational support enable adoption of digital ID infrastructure [179-192]. They concur on the importance of standards but differ on the concrete institutional mechanisms (AI‑specific data standards vs. identity‑focused DPI standards).
Speakers: Saurabh Garg, Janet Zhou
Data must be discoverable, trustworthy, interoperable, and usable; standardized metadata and identifiers are key (Saurabh Garg) MOSIP open‑source identity platform illustrates how standards, operational support, and financing lubricate adoption of digital infrastructure (Janet Zhou)
Takeaways
Key takeaways
AI diffusion should be built as Digital Public Infrastructure (DPI) that is trusted, interoperable, and shareable, similar to Aadhaar and UPI. Four foundational AI resources – compute, data sets, models, and talent – must be democratized; the METRI multi‑stakeholder platform was proposed to enable modular, voluntary collaboration on these resources. The G7 AI Hub aims to unlock compute, data, and talent for Africa and other Global South regions, addressing resource constraints. ‘Pilotitis’ can be overcome by early government involvement, shared infrastructure, inclusive institutions, and clear standards that build trust. Centralized procurement and shared services (e.g., Brazil’s chatbot procurement) reduce implementation friction across ministries. Building national AI capability is essential; over‑reliance on external vendors leads to lock‑in and loss of strategic capacity. Data readiness requires discoverability, trustworthiness, interoperability, and usability, supported by standardized metadata, unique identifiers, and international classifications. Multilingual and voice AI act as inclusion rails, turning AI into an equalizer across languages and regions. A Use‑case Adoption Framework links sectoral impact with horizontal unlocks (compute, data, language) to guide the 100 AI diffusion pathways toward 2030. Open‑source platforms like MOSIP illustrate how standards, operational support, and financing lubricate adoption of digital infrastructure.
Resolutions and action items
Launch and develop the METRI platform as a voluntary, modular framework for democratizing AI resources. Commit to defining and implementing 100 AI diffusion pathways by 2030. Operationalize the G7 AI Hub to provide compute, data, and talent resources for Africa and other Global South regions. Adopt Brazil’s centralized procurement model for AI services to streamline cross‑ministerial adoption. Leverage MOSIP‑style open‑source reference implementations for AI identity and trust infrastructure. Continue cross‑border collaboration among India, Kenya, Italy, and other partners to co‑design diffusion pathways.
Unresolved issues
Concrete mechanisms and governance structures for sharing AI use‑cases and playbooks across borders remain undefined. Specific funding models and sustainability plans for METRI and other public AI rails have not been detailed. Detailed step‑by‑step roadmap for moving AI pilots to production at population scale is still lacking. How to systematically ensure multilingual and voice AI coverage for all low‑resource languages was not fully addressed. Data privacy safeguards while making data AI‑ready need clearer operational guidelines.
Suggested compromises
Adopt a voluntary, modular, non‑committal approach (METRI) rather than imposing mandatory standards, allowing stakeholders to participate at their own pace. Use centralized procurement as an optional service for ministries, preserving autonomy while reducing friction. Balance the use of external vendor solutions with investment in building internal national AI capabilities to avoid lock‑in.
Thought Provoking Comments
AI should be treated as a Digital Public Infrastructure (DPI). To make data AI‑ready it must be discoverable, trustworthy, interoperable and usable.
Frames AI adoption in terms of public‑good infrastructure and distills data readiness into four concrete criteria, moving the conversation from abstract hype to actionable standards.
Set the conceptual foundation for the whole panel. Subsequent speakers (e.g., Beatriz on Brazil’s data ecosystems, Janet on MOSIP standards, Kizom on public rails) all referenced these criteria, steering the discussion toward governance, interoperability and trust.
Speaker: Saurabh Garg
We have proposed a modular, voluntary platform called METRI (Multi‑stakeholder AI for Resilient and Trustworthy Infrastructure) to democratise compute, data, models and talent.
Introduces a concrete institutional mechanism for collaborative AI resource sharing, highlighting a practical pathway to democratise foundational AI assets.
Prompted other panelists to discuss collaborative models (e.g., G7 AI hub, shared services in Brazil) and reinforced the need for multi‑stakeholder platforms, influencing the later talk on co‑architecting pathways.
Speaker: Saurabh Garg
AI is a general‑purpose technology like electricity; invention happened in the West, but impact depends on diffusion pathways in the Global South. We aim for 100 diffusion pathways by 2030.
Uses a historical analogy to re‑frame AI diffusion as a development challenge, introducing the ambitious 100‑pathway target that becomes a recurring theme.
Shifted the tone from technical description to a development agenda, prompting speakers to align their national experiences (India, Kenya, Brazil) with the 100‑pathway goal.
Speaker: Speaker 2 (Shalini)
The G7 AI hub is designed to unlock foundational resources—data, compute, talent—by creating business cases for data centres in Africa and encouraging talent to return and co‑architect the future.
Highlights systemic constraints in the Global South and proposes a concrete international collaboration to address them, emphasizing talent retention and infrastructure investment.
Expanded the conversation beyond national policies to global cooperation, leading to deeper discussion on capacity building, data sovereignty, and the role of international hubs.
Speaker: Speaker 3 (Kizom)
Pilotitis is not new; scaling impact requires governments to be at the design table from the start, building inclusive, trustworthy institutions and shared infrastructure rather than adding AI after pilots.
Identifies governance and institutional design as the critical bottleneck for scaling AI, moving the focus from technology to policy and institutional frameworks.
Redirected the dialogue toward early government involvement and standards, influencing later remarks on MOSIP, operational support, and the need for normative frameworks.
Speaker: Janet Zhou
Brazil is building a ‘one government for each person’ vision with shared data platforms, thematic data ecosystems (e.g., early childhood), and centralized chatbot services linked to digital ID.
Provides a concrete, multi‑layered example of DPI in action, illustrating how data interoperability and shared services can operationalise AI at scale.
Served as a real‑world case study that validated earlier theoretical points, prompting other speakers to discuss similar ecosystem approaches and the importance of standardisation.
Speaker: Beatriz Vasconcellos
AI should be an invisible layer on top of existing DPI rails—like UPI or DigiLocker—so that users don’t notice the AI; multilingual voice stacks and public rail convergence are essential for inclusive diffusion.
Reframes AI as seamless infrastructure rather than a standalone product, emphasizing usability, multilingualism, and the convergence of public rails across borders.
Steered the conversation toward user‑centric design and the technical notion of “rails,” leading to discussions on multilingual models, Zindi’s data‑science network, and the need for interoperable standards.
Speaker: Speaker 3 (Kizom)
The hardest challenge is avoiding vendor lock‑in and building internal capabilities; we must centralise procurement, create institutional arrangements for experimentation, and resist the temptation to outsource critical AI functions.
Points out a systemic risk—dependency on external vendors—that can undermine sovereign AI capability and sustainability.
Triggered a deeper examination of procurement reforms and capacity‑building strategies, reinforcing earlier calls for institutional ownership and the development of domestic AI talent.
Speaker: Beatriz Vasconcellos
Our Use‑Case Adoption Framework separates vertical sector needs from horizontal unlocks (data, compute, language). Successful diffusion requires co‑designing pathways that fuse these layers.
Introduces a structured methodology to move from pilot to scale, linking sector‑specific outcomes with cross‑cutting resources.
Provided a concrete roadmap that tied together earlier themes (data readiness, infrastructure, governance) and gave the panel a shared language for future collaboration.
Speaker: Speaker 3 (Kizom)
Overall Assessment

The discussion evolved from a high‑level introduction of AI as a digital public good to a nuanced, action‑oriented dialogue about how to operationalise that vision. Saurabh Garg’s framing of AI‑ready data and the METRI platform laid the conceptual groundwork, which was then expanded by Shalini’s historical diffusion analogy and the 100‑pathway ambition. Kizom’s emphasis on global‑south constraints and the ‘invisible AI rail’ metaphor, combined with Janet’s governance‑centric critique of pilotitis and Beatriz’s concrete Brazilian DPI example, shifted the conversation toward institutional design, standards, and capacity building. The recurring focus on interoperability, multilingualism, and avoiding vendor lock‑in crystallised into a shared adoption framework, giving the panel a clear, collaborative roadmap. These pivotal comments collectively redirected the panel from abstract enthusiasm to concrete, policy‑driven strategies for scaling AI responsibly across borders.

Follow-up Questions
How does AI move from pilot projects to production scale? Is funding the only barrier or are there additional diffusion pathways needed?
Understanding the factors that enable scaling is essential for turning AI pilots into widespread impact.
Speaker: Speaker 1 (Shalini)
How can trust be established between AI advisory outputs and the institutions delivering them? Should institutions adopt AI first to build trust in the advice?
Trust in AI recommendations is a prerequisite for institutional adoption and user acceptance.
Speaker: Speaker 1 (Shalini)
What is the current state of AI adoption in Brazil? Are pilots stuck in the pilot phase, what gaps exist, and how can they be bridged?
A country‑specific case study can reveal common obstacles and solutions for scaling AI.
Speaker: Speaker 1 (Shalini)
How can safe AI‑driven conversations be ensured in sectors such as agriculture and healthcare, and can these safety protocols become reusable playbooks?
Safety guardrails are critical for user confidence and for creating transferable best‑practice templates.
Speaker: Speaker 1 (Shalini)
What programs or investment mechanisms can remove frictions in the AI stack (language, compute, data) to enable smoother diffusion?
Identifying systemic supports can accelerate the deployment of end‑to‑end AI solutions.
Speaker: Speaker 1 (Shalini)
What is the hardest challenge in moving AI diffusion pathways from concept to operational adoption?
Pinpointing the primary bottleneck helps prioritize interventions for effective diffusion.
Speaker: Speaker 2 (Bia)
Can you describe the use‑case adoption framework and how it can act as a friction remover for AI diffusion?
A clear framework could guide stakeholders in scaling AI solutions across sectors and regions.
Speaker: Speaker 2 (Bia)
What mechanisms are needed to make datasets AI‑ready (discoverable, trustworthy, interoperable, usable) for a digital public infrastructure?
AI‑ready data is a foundational resource; research is needed on standards, metadata, and privacy‑preserving access.
Speaker: Saurabh Garg
What is the business case for building data‑center and GPU infrastructure in Africa to unlock AI resources?
Assessing economic and social returns will inform investment decisions for continental AI capacity.
Speaker: Kizom (Speaker 3)
How can existing digital public infrastructure (e.g., UPI, DigiLocker) serve as rails for cross‑border AI use‑cases and diffusion?
Leveraging DPI can reduce friction and enable scalable AI services across jurisdictions.
Speaker: Speaker 3 (Kizom)
How can multilingual voice AI act as an equalizer to bridge language‑based digital divides?
Research on language inclusion can expand AI benefits to underserved linguistic communities.
Speaker: Speaker 2 (Bia)
What strategies can mitigate vendor lock‑in and promote the development of national AI capabilities instead of outsourcing?
Building domestic expertise ensures sustainability, security, and alignment with public goals.
Speaker: Beatriz Vasconcellos
How effective is the METRI platform in democratizing AI resources (compute, data, models, talent) across stakeholders?
Evaluating METRI’s impact will inform future governance models for shared AI infrastructure.
Speaker: Saurabh Garg
How do institutional capacity‑building programs and shared standards (e.g., MOSIP) facilitate AI adoption at scale?
Understanding the role of operational support and standards can guide replication of successful models.
Speaker: Janet Zhou

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Democratizing AI Building Trustworthy Systems for Everyone

Democratizing AI Building Trustworthy Systems for Everyone

Session at a glanceSummary, keypoints, and speakers overview

Summary

The panel examined how to coordinate a global AI ecosystem while ensuring trustworthiness and equitable diffusion, noting that the biggest challenge is managing international collaboration [5][6-9]. Dr. Garg identified governance of sharing mechanisms, the interdependence of hardware-software-protocol layers, and a shortage of talent as the principal obstacles [6-9]. Microsoft’s response, presented by Natasha Crampton, is organized around five pillars: building data-center and connectivity infrastructure with sovereign controls; scaling AI skilling, especially for teachers in India; expanding multilingual and multicultural AI through initiatives such as Lingua Africa; supporting local innovation by partnering with Indian and other frontier AI firms; and contributing data on AI adoption to policy-making projects like the World Bank study [33-38][46-53][54-61][62-64][65-69]. Crampton stressed that none of these pillars can succeed without deep, long-term partnerships across governments, NGOs and the private sector [72-74]. When asked how Microsoft can reconcile diverse national regulations-such as the “Brussels effect” of GDPR-she explained that Microsoft builds configurable controls into its models, offers open-weight versions, and relies on a partner ecosystem to adapt technology to local legal and cultural contexts [75-78][80-86][92-93]. Peter Mattson argued that the biggest barrier to AI adoption is reliability, which can only be addressed through common, industrial-scale benchmarks that measure safety, security and multilingual performance [108-119][120-124]. He illustrated this with the MedPerf project, which uses federated evaluation and confidential compute to test models on diverse health data worldwide, demonstrating how benchmark infrastructure can enable trustworthy AI in critical domains [135-137]. The discussion also highlighted the role of philanthropic organisations; a Gates Foundation representative noted that in regions lacking foundational infrastructure, funding and expertise from foundations are essential to provide trustworthy AI capabilities [183-190]. Wendy Hall added that open data is a prerequisite for trustworthy AI, but emphasized that not all data can be fully open and that global data-governance frameworks are needed to enable cross-border sharing while respecting privacy [305-307]. Natasha concluded that achieving trustworthy AI diffusion requires deliberate measurement of both technical systems and economic impacts to verify that interventions are effective [320-324]. Mattson echoed this, stating that comprehensive, cost-effective measurement across many dimensions is crucial for the future of AI reliability [328-330]. The panelists collectively agreed that coordinated investment, skill development, adaptable governance, robust benchmarking, and transparent data practices are essential to democratize AI worldwide [33-38][46-53][108-119][320-324]. The discussion underscored that without such multi-stakeholder collaboration and rigorous measurement, the promise of AI for the global south and beyond will remain unrealized [5][72-74].


Keypoints


Major discussion points


Global AI collaboration faces governance, talent, and inter-dependence challenges.


Justin asks what the biggest hurdle is for the international working group, and Dr Saurabh Garg outlines the need for governance of sharing mechanisms, the scarcity of skilled talent, and the complexity of an ecosystem that spans hardware, software and ethics [5-12].


Microsoft’s five-pillar strategy to accelerate AI diffusion in the Global South.


Natasha Crampton describes a coordinated effort built on (1) infrastructure (data-centres, connectivity, sovereignty controls) [28-46], (2) large-scale skilling programmes (e.g., 2 million Indian teachers) [46-53], (3) multilingual and multicultural AI development [54-61], (4) support for local innovation and data contributions to policy-making [62-69], and (5) partnerships with governments and foundations [70-74].


Trustworthiness hinges on reliable benchmarks and measurable performance.


Peter Mattson stresses that AI adoption depends on trust, which requires common, industrial-grade yardsticks for reliability, safety and security [106-124]. He gives concrete examples such as federated evaluation for healthcare models and the need for scalable, high-quality multilingual benchmarks [130-138].


Open data and robust data-governance are essential for trustworthy AI.


Wendy Hall points out that while open data fuels innovation, not all data can be fully open; instead, exchangeable, share-able datasets and clear governance frameworks (including UN-level recommendations) are needed to enable cross-border data flows and trustworthy AI systems [258-305].


Inclusive, culturally-sensitive AI is required for societal impact.


Both Natasha and Wendy highlight that AI must adapt to local languages, cultures and legal contexts to be trusted and useful [80-89]; Wendy adds that gender and age inclusion, as well as transparent measurement of social effects, are critical for an “all-inclusive” AI future [258-270].


Overall purpose / goal of the discussion


The panel was convened to explore how the global AI community-governments, private sector, academia and philanthropic organisations-can jointly democratise and make AI trustworthy, especially for underserved regions. Participants examined practical levers (infrastructure, talent, standards, open data, measurement) and policy-level considerations (governance, sovereignty, inclusivity) needed to ensure that AI benefits are widely and equitably realised.


Overall tone and its evolution


The conversation begins with a courteous, appreciative tone (thanks, acknowledgments) and quickly moves into a constructive, problem-solving mode as speakers identify challenges and propose concrete frameworks. As the dialogue progresses, the tone becomes more optimistic and visionary-highlighting ambitious investments, partnerships, and the potential of reliable AI-while still retaining a pragmatic focus on standards and measurement. Intermittent informal remarks and light humor (e.g., references to “alpha-male” attitudes) add a collegial feel, and the session closes on a collaborative, forward-looking note, urging continued measurement and open collaboration.


Speakers

Justin Carsten – Moderator and facilitator of the panel discussion.


Dr. Saurabh Garg – Secretary, Ministry of Statistics and Programme Implementation, Government of India; expert in AI governance, infrastructure, and the AI ecosystem. [S2][S3][S4]


Peter Mattson – President of ML Commons and CEO; Senior Staff Engineer at Google; specialist in AI benchmarking, reliability, and open-benchmark development. [S1]


Natasha Crampton – Microsoft’s first Chief Responsible AI Officer, leads the Office of Responsible AI; focuses on responsible AI, AI diffusion, infrastructure, skilling, and multilingual AI.


Wendy Hall – Dame Professor, Regius Professor of Computer Science and Associate Vice President International Engagement at the University of Southampton; Director of Web Science, former co-chair of the UK AI Review and member of the AI Council; expertise in AI policy, trustworthiness, and AI measurement/metrology. [S13]


Participant (Harish) – Representative from the Gates Foundation; works on AI applications for health and agriculture in the Global South, emphasizing trustworthiness, sustainability, edge computing, and open-source solutions.


Additional speakers


Dr. Clark – Panelist who discussed model efficiency, domain-specific AI, and diffusion across regions.


Vint Cerf – Mentioned as a potential panelist (did not speak).


Brad – Referenced in the discussion (did not speak).


Nigel Shadbow – Referenced in the discussion (did not speak).


Dr. Aya – Mentioned as a Gates Foundation representative (did not speak).


Full session reportComprehensive analysis and detailed insights

The session opened with a series of thanks from the moderator, Justin Carsten, who highlighted the unprecedented scale of the summit and the spirit of collaboration that had emerged among the panelists and the wider community [1-5]. He then asked the working-group chair, Dr Saurabh Garg, to identify the single most significant obstacle to coordinating an international AI effort [5].


Dr Garg responded that the difficulty lay less in physical resources than in the governance of those resources. He noted that while foundational computing infrastructure must be shared, the real challenge is managing the inter-dependence of hardware, software and the ethical protocols that bind them [6-7]. He added that establishing robust sharing mechanisms and governance frameworks is essential [7-8], and that the shortage of skilled talent and institutional capability compounds the problem – infrastructure can be bought, but expertise must be cultivated [8-10]. Finally, he argued that the focus should be on ensuring that each nation can trust the systems that manage AI priorities and values, rather than insisting that every country own every layer of the stack [11-12].


Justin affirmed Dr Garg’s assessment and then turned to Microsoft’s first Chief Responsible AI Officer, Natasha Crampton, introducing her role in leading the Office of Responsible AI and its mandate to translate Microsoft’s AI principles into practice through governance, stakeholder collaboration and the shaping of new laws and standards [18-24]. Justin also noted that Brad [Last Name] had delivered a keynote earlier in the week based on a joint blog post with Natasha, which set the stage for today’s discussion [??]. Crampton announced that Microsoft would commit US $50 billion by the end of the decade to accelerate AI diffusion in the Global South [29-31]. She explained that diffusion data show the AI uptake in the Global North is roughly twice that of the Global South, creating an urgent need for a coordinated response [32-33].


Microsoft’s strategy is organised around five inter-linked pillars. The first pillar concerns the build-out of physical infrastructure – investment in data-centres, connectivity and electricity – with a strong emphasis on national sovereignty, achieved through configurable controls in both public and private cloud offerings [33-40][41-44]. The second pillar focuses on large-scale skilling; a flagship programme will train two million Indian teachers in AI-specific skills, thereby reaching the future workforce through the education system [46-53]. The third pillar targets multilingual and multicultural AI, exemplified by collaborations with ML Commons (a nonprofit that develops open AI benchmarks) to extend safety benchmarks to languages such as Hindi, Tamil, Malay, Japanese and Korean, and by the Lingua Africa initiative that gathers rich, locally-sourced spoken-language data for under-represented communities [54-61][62-63]. The fourth pillar supports local innovation, with Microsoft partnering with Indian frontier AI firms and contributing data on AI adoption to policy-making projects, including a World Bank-led study that will monitor where AI diffusion is faster or slower than expected [64-69]. The final pillar underscores the necessity of deep, long-term partnerships with governments, NGOs and other private-sector actors to sustain all five pillars [70-74].


When Justin raised the “Brussels effect” of GDPR and asked how Microsoft could reconcile broad, global AI standards with the diverse legal regimes of individual nations, Crampton explained that Microsoft designs its models with built-in configurable controls, allowing downstream users to adapt the technology to local laws and values while retaining a default configuration that reflects Microsoft’s own responsible-AI stance [75-78][80-86]. She also highlighted Microsoft’s practice of releasing open-weight models and the importance of a robust partner ecosystem to enable localisation and compliance [92-93].


Peter Mattson of ML Commons shifted the discussion to the technical foundations of trustworthiness. He argued that the principal barrier to AI adoption is not capability but reliability, which can only be assured through common, industrial-scale benchmarks that measure safety, security and multilingual performance [108-119][120-124]. To illustrate, he described the MedPerf project, which uses federated evaluation and confidential compute to test healthcare models on diverse, geographically-distributed datasets, thereby providing a reproducible, high-quality benchmark for critical domains [135-137]. Mattson warned that current research-level benchmarks are insufficient; they must be transformed into dependable, scalable frameworks that can keep pace with the emergence of multi-turn, agentic AI systems [128-133][168-170].


Justin echoed the need for reliable large language models and for the benchmarks that validate them, noting that the collaborative nature of ML Commons-bringing together industry, academia and practitioners-produces extensive author lists and expertise that must be leveraged beyond single-paper publications [138-144][145-146].


A representative from the Gates Foundation, Harish, highlighted the role of philanthropy in regions that lack foundational infrastructure. He stressed that trustworthy AI in low-connectivity settings must work on the edge, support local languages, and be energy-efficient; otherwise frontline health and agriculture workers will lose trust in the technology [183-190][191-200]. He also argued that open-source, lower-parameter models are crucial for affordability, and that sustainable funding models are needed to avoid creating new digital divides within countries [215-220][221-228]. He further emphasized the need for energy-efficient, lower-parameter models to keep AI sustainable [190-195], highlighted emerging multi-parameter, multi-state compute hardware as a future avenue for edge inference [215-220], called for real-world evidence to demonstrate societal benefits such as job creation and improved health outcomes [225-230], and warned that without such evidence trustworthiness cannot be established [235-240].


Professor Dame Wendy Hall offered a complementary perspective on openness and governance. While affirming that open data accelerates innovation, she cautioned that not all data can be fully public; instead, exchangeable, shareable datasets combined with robust cross-border data-governance frameworks and global data registries are required to protect privacy and sovereignty [305-311]. Hall announced the UK’s establishment of the Centre for AI Measurement and the AI Security Institute-founded by Rishi Sunak at Bletchley Park-which now operates as the Network for AI Measurement and Evaluation, a component of a broader network of safety institutes [290-300][310-315]. She referenced India’s Aadhaar system and the MOSIP framework as models of large-scale digital public infrastructure [310-315]. She also reflected on cultural differences in AI perception, noting that in India AI is seen as an inclusive opportunity, whereas in the UK public discourse is dominated by fear and scepticism [258-267][268-277][278-284].


Across the discussion, several points of agreement emerged. All speakers concurred that effective governance of shared AI resources is indispensable [6-7][80-86][305-311]; that technology must be adaptable to local legal and cultural contexts [75-78][80-86][37-39]; that long-term, multi-stakeholder partnerships are the engine for delivering the five-pillar strategy and for open-source benchmark ecosystems [70-74][92-93][8-9]; and that systematic measurement-whether through industrial benchmarks, AI metrology or multi-dimensional impact metrics-is essential for building trustworthy AI [106-124][290-300][320-324][328-330].


The panel highlighted complementary emphases rather than outright disagreements. Dr Garg stressed governance of sharing mechanisms as the primary bottleneck, while Crampton emphasized the massive infrastructure investment needed to close the North-South gap [6-9][33-45]. Mattson and Crampton both championed open-weight models and open benchmarks to foster trust, whereas Hall cautioned that unrestricted data release can jeopardise privacy and sovereignty [92-94][305-311]. Crampton presented a $50 billion private-sector commitment, which was complemented by Harish’s call for low-parameter, open-source solutions and philanthropic support for low-resource settings [29-33][215-220]. Finally, measurement approaches differed: Mattson advocated for industrial-scale benchmarks and federated evaluation, while Hall proposed a national AI metrology programme with a specific “trust factor” metric [120-124][290-300].


In concluding remarks, the panelists reaffirmed a shared vision: coordinated investment, capacity-building, culturally-sensitive AI design, robust governance and transparent, cost-effective measurement are all required to democratise trustworthy AI worldwide. Natasha Crampton reiterated Microsoft’s commitment to the five-pillar plan and to ongoing data sharing with policy bodies [320-324]; Peter Mattson stressed that reliable benchmarks must evolve to cover multi-turn, agentic systems while remaining affordable [328-330]; and Wendy Hall called for a new science of AI metrology to provide the “trust factor” that will guide future regulation [290-300]. The session concluded with Justin thanking the panelists, a round of applause, and a reminder that continued collaboration among governments, industry, academia and philanthropic organisations is essential to resolve the outstanding challenges of governance, financing, openness and measurement, thereby ensuring that AI’s benefits reach the bottom-50 % of the global population without deepening existing divides [70-74][305-311][320-324][350-352].


Session transcriptComplete transcript of the session
Justin Carsten

Thank you. Thank you. you you Thank you. Thank you. Thank you. Thank you. you you you you Thank you. Thank you. Thank you so much, Dr. Garg. It really highlights one of the things about collaboration, and I’ll be talking to… a number of the panelists about… about that and that i’ve been so impressed this week at how much people are really coming together for the community you know this is a much bigger summit than we’ve had previously many more people really opening it up to everyone but if i can just ask you one thing on because the working group that you’re doing i think is is excellent it’s going to be really important um what do you see is the biggest challenge around that what do you think you know your vast experience that you’ve got of coming together do you think um there’s any particular challenges in coordinating that international effort

Dr. Saurabh Garg

of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control every layer of the resources that is there and while foundational resources the foundational computer resources sharing would be a major challenge but i think a bigger challenge might be to manage the interdependence of the AI ecosystem because it spans hardware, software, and the protocols, so to say, or the ethics around that. So I think one of the biggest challenges would be the governance around this sharing mechanisms, sharing protocols, and managing the framework. And the other would be what would be the talent and the institutional capability, which is in a way required. Well, the infrastructure can be acquired, but expertise has to be developed.

And I think that’s critical to ensure that if you want to democratize and ensure that GlobalSoft is integral to that, and that’s where it would be. And I think, you know, we don’t need to focus so much on whether each country is owning each layer of the AI, but how one can do that. What is the capability and confidence in the systems that manage that we have the required methods to ensure that it takes care of the priorities and the values that each country wants to push forward?

Justin Carsten

Thank you so much. And I agree with you. It’s a big challenge, but I’m glad that you’re there to take that forward. And this week, you may have seen the photograph of Modi here with many of the leaders in tech. And it’s a great pleasure that one of the large organizations in the private sector, Microsoft, has got representation here. So I come to you, Natasha. So Natasha Crampton is Microsoft’s first chief responsible AI officer and leads the Office of Responsible AI. And it was interesting how long that’s been going. I heard earlier this week. But she’s putting Microsoft’s AI principles into practice by defining, enabling, and governing the company’s approach. to responsible AI. The office also collaborates with internal and external stakeholders to shape new laws, norms, and standards to help ensure that the promise of AI technologies is realized for the benefit of all.

As I said, that’s been a key theme. So I saw Brad speak yesterday. It was a fantastic speech, and that was based upon a recent blog post that you and Brad put out just a couple of days ago. So can you tell us a little bit about that for some people who haven’t had the chance to absorb in this session, please?

Natasha Crampton

Sure. Thank you, Justin, and it’s a pleasure to be here with the panel and the audience today. So I think our announcement earlier in the week was about how Microsoft is contributing to bringing AI to the global south, and the headline that you might have seen is that we’re on. Hi. to spend 50 billion US dollars in order to do that by the end of the decade. What we’re seeing from the diffusion data that we have access to and that we’ve publicly published already is that there is an urgent need to focus on the diffusion and what it’s going to take to do that broadly and beneficially of AI to the global south because we are already seeing that diffusion in the global north is roughly double what we see in the global south.

And so for Microsoft, as a private sector player here, we think we have a role to play in helping to close that gap and we see it as being centred on five different components. First, as Dr. Garb mentioned initially, we need to help build out the infrastructure that is needed for broad AI diffusion. So this is both… Investments in data centres to power AI applications, but it’s also investments in connectivity as well. There are real electricity needs that need to be met. We’re trying to do that with an eye towards the sovereignty of countries around the world. We realise that the world is a fragmented place, and so we design our data centres and also the services that run on top of them with a recognition that there needs to be real agency for the countries hosting those data centres.

And so we have a range of different controls that we put in to our data centres, which include sovereignty controls and public clouds. Sometimes we build private clouds. But most importantly, it’s all built on a foundation of collaborating with our government partners around the world. The scale of the infrastructure… The infrastructure investment that’s needed is just so great. It’s really hard to see how we’ll achieve what we need to without significant private sector investment as well as funding from a range of different sources as well, governments, venture capitalists and others. So the first limb is all about infrastructure. The second limb is all about skilling. What we’ve learnt from the history of diffusion of other general purpose technologies like electricity, for example, is that the countries that succeed in these really transformative economic moments are not actually the countries that necessarily invent the new general purpose technology.

They’re the countries that diffuse and adopt that technology fastest. And if you look back at history, skilling turns out to be one of the major unlocks to that adoption and broad diffusion. So, as I said, We’ve made a range of skilling announcements. One that I’m particularly energised by myself is a very specific one focused on educating educators to help them with an AI -driven educational future. And of course, when you teach teachers, you’re teaching students, and therefore the workforce of the future as well. So we committed to teach AI -specific skills to 2 million Indian teachers in partnership, of course, with Indian national standards and training institutions, which is an exciting thing to me to support the future.

Third, the third limb is all about investments in multilingual and multicultural AI. You know, AI is… It’s no good to you if it does not work in the language… that you speak and the culture in which you use the system. So we’ve been pleased to collaborate with Peter Mattson from ML Commons on an expansion to represent Hindi, Tamil, Malay, Japanese, Korean, of some safety benchmarks that ML Commons has played a key role in standing up. But we’re working upstream of testing and evaluation as well. So we’re pleased to announce a Lingua Africa initiative where we are working with local communities in partnership with the Gates Foundation and others to really make sure that we’re collecting lots of that really rich local data with and for communities.

All of that data is not well represented on the internet and spoken languages. And spoken languages in particular require that careful collection. is all about supporting local innovation. I think it’s critically important that as the private sector we really deeply understand that AI will only be meaningful in people’s lives if it’s actually solving the local problems that matter to them. So we announced some initiatives here in India and further afield that are designed to really support that local innovation. Last, we announced also as part of the new Delhi Frontier AI commitments that several leading Indian AI companies and Frontier AI companies from around the world signed on to yesterday that we’re going to be contributing our data as to what we can see about adoption and usage of AI in the economy into some central projects.

Including one led by the World Bank. So that policy makers are in a good… position to understand how is AI being adopted in the economy? Where are the places where it’s going faster than expected? Where are the places where it’s going slower? Because I think that kind of data is incredibly useful for policy making because it allows you to spot those places where you might need a skilling intervention or an infrastructure intervention.

Justin Carsten

That was fantastic. And if you ever want to know about really believing in something, having such a complex blog and then just reeling off the five pillars, and that really just shows that commitment, I think, that we’re seeing from Microsoft taking that leading role. And actually, collaboration has been, since Brad’s presidency really, has been one of the things that he really encouraged about saying, look, we’ve got to work together.

Natasha Crampton

Absolutely. I mean, not one of those five limbs is possible without deep partnership. And that coordination of those five pillars is really important. Thank you. Thank you. of those partnerships and deeply investing in them over time is really what’s going to give us the outsized impact here.

Justin Carsten

And if we think about this, because Microsoft is a global corporation, you’ve got lots of countries, each with, just as Dr Garg said, they’ve got their own customisations, they think. They’ve got their own local laws and regulations. And some things, you know, there’s something called the Brussels effect around GDPR, for example, which went pretty global, but it’s not the case for AI, for example. How do you think you manage that challenge of trying to make sure that it’s broad enough but focuses for the individual needs of nations? Have you come across that challenge?

Natasha Crampton

Yes, that is part of what I work on day in, day out at Microsoft, because part of my role is working very closely with our product teams to make sure that we are building our product. our models in a way that’s trusted and trustworthy by design. And so we are building products and technologies that we aim to share with the world. And it is absolutely true that not every part of the world has the same rules or expectations. And part of what we need to do is to make sure that we’re building technology in a way that has enough sort of controls and choices that people can make downstream of what we choose to do at Microsoft to apply that technology in their own context.

So we ourselves do have a point of view about how we want our technology to show up in the world. So, you know, we do think carefully about if we’re making available a service that’s got some configurable controls, we do think carefully about what we think the default should be. But we also really do recognize… the need for that agency, and we do deeply understand that not every part of the world is homogenous. I think it’s, you know, here in India, it’s just a beautiful place to recognize the sort of linguistic and cultural diversity of the world. Quite honestly, if we don’t build technology that can be easily adapted and applied in people’s local contexts with their values, with their laws, we’re just missing the opportunity to, you know, have our technology reach the world.

So there are complex challenges. Sometimes there are direct conflicts between what one jurisdiction wants and what another jurisdiction has sort of declared as a matter of law. They can be worked through, and this is partly why you also need a great partner ecosystem, right? Being able to make available models open source or in an open -weight space. which Microsoft has long done, for example, with our five family of models. This is another way of empowering the ecosystem to adapt and build based on that.

Justin Carsten

Thank you so much. And you just touched on, you mentioned ML Commons and you touched about culturally sensitive. And it’s interesting, there is a report that’s been released by ML Commons this week on robust and defensible benchmarks. And part of that was some great work from the Singaporean agency IMDA, which the response from an AI, it has to be culturally sensitive. And that’s the point that you made. I think culture is important because what is seen as acceptable in one culture may not be in another. So that brings me nicely to Dr. Peter Mattson, who is the president of ML Commons and also a CEO. He’s a senior staff engineer at Google. So he founded ML Commons himself and was previously the head of the programming systems and applications group at NVIDIA.

So on that ML Commons, I think it’s done some great work, as we’ve heard. It’s played a major role in benchmarking performance and efficiency of AI. How do you see that open benchmarks can contribute to building sovereign capabilities, Peter?

Peter Mattson

I think that’s a fantastic question. I’m going to start with a very broad context and then narrow it down to that specific. And the broad context I want to start in is why is trust and reliability so vital for AI? AI has tremendous potential to change everything we do. But in order for it to do that, people need to feel comfortable adopting it. And we’re all… smart, we don’t adopt things we don’t trust. You don’t give them your banking information. You don’t give them your business information. You don’t give them your medical information or trust what they say or do about it if they’re not reliable. And so the question becomes, how do we make AI reliable?

Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it correct? Is it secure? Is it safe all the time? And if we can make AI truly reliable, the potential for benefits to everyone around the world, and frankly, the potential for businesses and markets is fantastic. But the way that we drive that is with metrics, is with evaluations. AI is an incredibly complex black box system. So to make it better, you need to have common yardsticks that you use to measure progress. And we need those common yardsticks back. widely for all aspects of reliability. So you alluded to the work on security with IMDA. Natasha alluded to some of the work around multilingual safety that we’re collaborating with Microsoft on and with folks at Google as well.

These are examples of what’s necessary to drive that push towards reliability. But they’re very technically hard. This is something that I don’t think people appreciate enough. They see someone publish a paper. We made a benchmark for something, right? And they made a data set and they did it once. But there’s a tremendous amount of technology to go to industrial quality benchmarking, which is what we need for industrial level reliability. There’s one. We need to work to take the experiments we’re doing in multilingual benchmarking and turn those into a dependable framework that empowers people around the world to produce very high quality. quality, multilingual safety and security benchmarks, and then to maintain and evolve them over time, right?

If ML Commons can help lift the resources there so that people can make the choices about language and culture where they have expertise without having to grapple with the really hard technical questions of how you do AI benchmarking, we hope that could be very empowering. An example from the healthcare space, we have a MedPerf project that uses what we call federated evaluation, where it sends models out to different facilities and then tests them on a small bit of data and accumulates the results. This is how you do healthcare benchmarking for reliability, for correctness, against very, very diverse data sets, potentially around the world. It’s technology like that, like dependable industrial scale multilingual safety and security, or medical benchmarking, or medical benchmarking, made possible by the with data sets across disparate legal systems through technology like Federated Eval and Confidential Compute that we believe really unlocks that future of high reliability systems.

Justin Carsten

That’s excellent. Thank you. And the repeated use of that term reliable. So what we need is reliable LLMs, but we need the reliable benchmarks, as you said. Yes, yes. And I think this point about healthcare is really interesting because what we need to do is, you mentioned industrial scale as well, we need this process that can be trusted. And that’s one thing that I found working with ML Commons, how we all come together, the people from industry, many academics around the world. You just look at any of the papers released, so you can go to the website, and how many authors and how many years of expertise is donated to that effort. Yes, yes. Where do you see, Peter, the next sort of big movements for ML commons?

Because these yardsticks will change. You’ve done healthcare. Where do you think is the important area for you in benchmarking in the near future?

Peter Mattson

I think thanks to the contributions from all of those experts. I truly think it is a testament to the industry that we are getting very in -demand experts from some of the leading companies to contribute to this work. Like, people really care about doing AI right. That is unarguable if you look at, as you say, the author list. What we need to do is leverage that expertise to scale. It’s not enough to do a benchmark and publish a paper. We need to make that benchmark available to the industry. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper.

It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. It’s not enough to do a benchmark and publish a paper. prompt response. You ask a question, you look at the answer, you see whether it’s safe or secure or correct.

But the future, as everyone knows, is multi -turn and agentic. And so we need to drive, you know, wider and deeper at the same time. There is tremendous demand for what we do. It is tremendously resource -intensive, and

Justin Carsten

You mentioned the work of Google, so I’m going to come to Dr. Aya from the Gates Foundation in a moment, just talking about some of the conversations. So we were hoping to have Vint Cerf, who some of you may know. I know, Wendy, you know him very well. But he doesn’t travel so much, does he? No, yeah, that’s the thing. He couldn’t travel. He’s got some issue that he couldn’t. improve public health and economic development. He’s a strategic partner between Indian researchers, you’re based over here in India, global partners and Gates Foundation teams in areas including vaccine preventable diseases, disease surveillance and modelling. So thank you for joining us today. We’ve heard a little bit, of course, India has really pushed forward with its digital public infrastructure.

And we’ve heard in the last session, Dr. Gog was in from Sanjay Jain, your colleague, about Mosef, which is modelled on Adha in some ways and is an open source initiative. So what I’d like to ask you is, where countries lack foundational infrastructure, what role do philanthropic organisations like the Gates Foundation play in enabling access to… to trustworthy AI capabilities?

Participant

Thank you so much for inviting me. I think this is obviously a very complex question, not fully settled, I will say for sure. So I mean, most of my experience in this field is in India. So I think, first off, I’d like to start by saying it’s great that India is hosting this summit. It’s fantastic. And showcasing a lot of the work that the country has done, the capability and the use cases that we are very closely supporting. I think the trustworthy question is very much, and I would say sustainability as well is another question that we have to think about, is about what sort of models do we need to have? Are they large centralized models?

Or are they dispersed decentralized models on the edge? do we need in countries with poor connectivity so trustworthiness has got many aspects to it is it going to be ready to work when you want it to work suppose again my work a lot of it is in health and agriculture and things like that so if you are a front line worker how do you make sure that they can if they have to make inferences and primary care can they make inferences if needed on the edge if you are a health system person and you want to improve the working of a health system making sure the right experts are in the right facility the right medicines are there patients are taken care of there is a great opportunity to make this very high quality but again the question becomes how do you access the compute how quickly can inferences come how easy it is to prompt there is all this which is very, because if it doesn’t work well, then you lose trust.

That’s the, it just doesn’t work. The next level question is language. I think Dr. Garth talked about it, the whole Bhashani project in India and there are similar projects that we’ve been involved in and there’s been a lot of debate even within the foundation as to which models can perform on language well. Which systems can interpret super complex, I think we heard from the other speakers about how complex this is, what works well. So trustworthiness will partly come from how systems respond and the lived experience in terms of simple things like, is it accessible? Is it the right language? Is it relevant? I mean, India is a continent on its own between different states, the health system and approaches are often different based on local policies.

how does it work in terms of policy in a particular state? One thing I’m particularly familiar with is pregnancy risk stratification. We talk a lot about how to reduce maternal mortality, infant mortality, stillbirth. The rules in Uttar Pradesh, for example, may be different from the rules in Telangana. How do you make sure that if you have a tool that supports frontline workers in understanding and improving identification of risk of pregnant mothers, how do you make sure that it works in that context? So this context is important. I think trust has all of these things built into it. I’ll also talk a little bit about sustainability questions. Sustainability also requires these kinds of questions to be answered well.

What’s the energy consumption? Are there simpler, lower parameter, lower energy consuming models rather than the giant models? To me, it’s a core question. And I think… it’s nice to know that there are researchers in the country who are thinking about that. Beyond that, can compute hardware itself look different? You know, beyond digital, let’s say, I saw these researchers recently looking at, you know, multi -parameter, multi -state compute capabilities and that was really fascinating. I just saw it two weeks ago because I was prepping for a bunch of meetings. Can those be great opportunities? Maybe they are further in the future to improve the likelihood of edge computing and edge inferences. So there’s a lot of, and then I think finally, open source.

I think open source is going to be in my mind a critical aspect of it. You’ll have to see how far open source movement takes track here. I believe because many governments in the global south may not be able to afford the large amounts of money that may be needed for a long period of time. How do you do these use cases well? So that I think is going to be another aspect of it that allows for adoption, trust at the highest levels. Again, I’m talking about the bottom 50 % of the pyramid. Top 10 % of the pyramid, they’ll do what they have to do. But ultimately to build trust, you need to get to the bottom 50 % of the pyramid.

And so there are different in quotes, markets here at UL. People who can pay at different levels. Even within a country like India, obviously there’s multiple different levels. How can you make sure that this thing can reach everybody and don’t create a divide, not just between global north and global south, but even within countries, you want to make sure that this doesn’t create a divide. And that’s, I think, another important part of building societal trust. The last point, which I think is also important is, what is the impact on society of this technology? I think this is going to be an important one as well. Are you able to create jobs, employment, and there’s a meta question about how does

Justin Carsten

Thank you so much. And we’ll come back to some of those points in a minute if I may, Harish. Because, as you may have seen, we’ve just been joined by Dame Professor Wendy Hall, someone I’ve…

Wendy Hall

Professor Dame, but don’t mind. Carsten, you should know that. You’re a Brit.

Justin Carsten

I’m not a Dame. But if you were a Sir. It’s always Professor Sir. But if I keep being nice to you, maybe you’ll put a word in for me. So I’ve known Wendy for a long time. She’s a Regis Professor of Computer Science and Associate Vice President, International Engagement at the University of Southampton, where she’s also Director of Web Science. There are so many accolades. She’s been a Dame Commander since 2009 and is a Fellow of the Royal Society and the Royal Academy of Engineering and the ACM and was President of lots of those organisations, including the British Computer Society, BCS, sorry. and most notably she was the co -chair of the UK government’s AI review and a member of the AI council.

We’ve talked also about skills actually, Wendy. We were both on the, I think you were probably leading it, but I was just a member of it, the review with Nigel Shadbow into computer science, if you remember.

Wendy Hall

No, he did that one. That was Professor Sir Nigel. No, I didn’t.

Justin Carsten

Okay, okay. Anyway, you’ve been involved in advising many governments around the world and could you tell us a little bit about the UK’s approach to developing sovereign AI capabilities?

Wendy Hall

No, I’m not going to answer that question because this is a trustworthy panel, right? And I want to talk about trustworthiness. Okay. And that’s why I was asking what the panel was about because I’m doing three panels this morning and I’ve got a lunch date to go to, so an important one. So I was asking Peter what the panel was about and he said, because it’s about trustworthy AI, right? Yeah. so I want to say if you don’t mind Carsten I could tell you what the UK is doing it’s very parochial I’m very excited that this conference has been in India but I have a love hate relationship with it it’s been a really difficult conference to navigate 250 ,000 people here but you end up talking to rooms of tens of people ok it’s out on YouTube does AI need this sort of jamboree I don’t know for the future but it is fabulous to have the spotlight on India I’m a member of the MOSIP

Justin Carsten

of course you are

Wendy Hall

I’ve been involved I’m in awe of what India has done with the Aadhaar and built the digital public infrastructure and I want to see how that works I would love to see how that works in the UK but it doesn’t translate it works in developing countries it’s much harder to translate it to an old world that has long established rules and regs and ways of working and anyway so that’s I’m really excited it’s here and it was fabulous also to see the young people here because in the UK and I think it’s probably true in most of Europe and the US people are really worried about AI they’re scared because that’s what they get, they get scaremongering they’re scared it’s going to attack them they’re scared it’s going to wipe the world out they’re scared they’re going to lose their job here the kids are going wow what an opportunity right and for India I mean that’s been an eye opener for me I mean I know I’ve been working in India long enough to know I mean I helped introduce the web into India right web and internet and the website and stuff work I’ve done here and I know what you can do with the power of that technology for people that can’t read and write and live in the rural areas I mean it’s just amazing what it does, add AI on top of that, they’re not worried about the deep fakes yet what they want is to get the information to their people in the fields the farmers in the fields in rural India I suppose deep fakes, I mean I don’t know but that’s not what they’re worried about at the moment so it has been fabulous and I love the slogan here, in India AI is all inclusive but it isn’t AI is missing out 50 % of the population right this technology and I’ve been fighting this sort of thing all my career totally male dominated totally male dominated and I love, I’m very sorry but the way we talk about women’s safety women aren’t involved in these discussions right?

children aren’t involved in these discussions 50 % of us are women and we’re not involved in the discussions about keeping us safe actually we need to keep men safe too right? men suffer from deep fakes as much as women do so you know well maybe someone’s not agreeing with that but you know it could be disproportionately hitting women and children but I don’t want to exclude the men here so I have become I have become even more passionate I talked about it in my keynote on Wednesday not in the talk itself but in the conversation that it’s so important that this is really all inclusive and that women are involved at the top level in the decision making about what we do and I think take for example the Australian experiment to stop the kids under 16 using social media.

Now that is an experiment. Everything about this world is a global experiment and people are doing different bits of it. The web was like that. The web itself from the genius that is Tim Berners -Lee was a worldwide experiment. There are many different ways that you could have built a hypermedia network on top of the internet. Boy, I tried to do one myself. And it was better than the web. But what Tim did was give it away, make it fantastic, make it open. And actually that led to the rise of the use of it. But it’s also left us with the stuff we’ve got today. Because anyone can do anything on the web. So bad people can do bad things.

And bad things happen unintentionally. The unintended consequences is what I call my talk on Wednesday. So this ban on social media, we need to we’ve got to be able to study the effects. Now, I know the Australians are. We heard Macron say here in France it’s going to be under 15. Keir Starmer’s saying under 16. But he changed his mind on a penny, so it’ll probably change. But that’s a joke for the Brits. But I think Spain has said under 16. In the US, of course, Trump says, no, we won’t need to worry about safety. But I made this joke in the other panel. And he’s the man that drank bleach during COVID. But the point is we have to study.

And people say, oh, it’s all moving so fast. The alpha males say that, right? The alpha males say, it’s all moving so fast. And I’m bigger, better, faster, and cheaper than you are, right? All that sort of alpha male stuff. We have to think about how we actually measure the effects of what we’re doing. So… two good things that have come out of the UK this is my last point just this last month the National Physical Laboratory I’m their AI advisor but that’s beside the point it’s like the UK equivalent of NIST they do our metrology it’s a word I’ve learnt to say very well weather forecasting is metrology studying the weather if we can do that we can do flipping AI because that’s complicated the thing about AI is of course it’s got people in it not just physical objects doing things systems so it’s harder in that sense but the National Physical Laboratory announced two weeks ago backed by the UK government the Centre for AI Measurement and the UK AI Security Institute which was founded by Rishi Sunak at Bletchley Park from Bletchley Park is part of the network of security institutes.

And the US, this is the man again who drank bleach during COVID, says no regulation. So we can’t talk about the network being a network of safety institutes. Why would we want to be safe? Sorry, joke. But they’ve renamed it the Network for AI Measurement and Evaluation. Now, this is brilliant. Brilliant. So with my ACM hat on and everything else I can do in the dying embers of my career. No, it’s not dying yet. But the, is to start a science of AI that’s about AI metrology. But what we’re doing, of course, is we are measuring the effects of social machines, which is difficult. You have to like, so, you know, the social scientists have taught me how you have to gather the data.

How do you gather the evidence? and we can do it we don’t have there is time to do this the world is not going to end at the end of this year because of AI other things yes but not because of AI so that’s where I want to leave you the thought I think if we can develop this new science put all our the compute power the best brains from social science and computer science and psychology and all the other disciplines we need, the law, everything we can really start to think about how we measure trust one of the metrics in AI metrology will be the trust factor I leave it there thank you very much a round of applause please thank you and I’m ever so sorry you can ask me what I’ve got to go in two minutes

Justin Carsten

I’ll ask you one thing very briefly then open data you’ve been a proponent of right Tim and Nigel

Wendy Hall

yeah yeah yeah yeah yeah

Justin Carsten

so I just wanted openness collaboration is important we’ve talked about open source what role do you think open data has in trustworthiness

Wendy Hall

well there’s two things about that, the open data movement has been really important but not all data can be open it can’t be and I mean you can have data that is exchangeable shareable that won’t necessarily be open so another thing I’m on is the UN, it’s the CSTD Commission for Science and Technology in the UN data governance working group and I could tell you in much more detail about that for me data governance we ignore that when we talk about AI governance we ignore data governance at our peril and we’ve really got to build on that from the UN report we did the General Assembly accepted all the points we recommended they’re being implemented that’s the other panel I should have been on today there’s a UN panel and they accepted everything that we recommended the global scientific panel the global dialogue the global fund and the Secretary General yesterday asked for three billion that’s not very much you know for a global fund to develop AI in the global south but our recommendations on data governance were not accepted because people would not the countries would not vote for them because it’s so difficult it’s so complicated and so there’s another thing I’m working hard on is how can we actually get some you know how do we do cross -border data sharing how do we get the data flows so we can actually share data sets and another thing we need to do which is something I want to do is build data tell people where the data is we need data repositories or at least registries that’s around the world so researchers know where the data is so they can do this study I’ll leave you with that that’s something else I was on my agenda

Justin Carsten

thank you so much Wendy I’m going to Yes, thank you. Thanks so much. I’m going to go to each of the panelists for just 30 seconds. I’ll start with Dr. Clark, then Harish, then Natasha, and then Peter, just to make us busy. Just one comment for the audience about how we really push this democratizing AI and trustworthiness.

Dr. Saurabh Garg

Yes. I think one issue which I mentioned in the earlier panel is that we perhaps need to give a lot more attention to the models because that will also help more efficient models will help reduce the requirement for compute and energy, which is among the biggest costs presently. And having models which are more domain specific would also enable better usage of those models and widen diffusion across. Thank you so much.

Justin Carsten

Harish.

Participant

Just very quickly, I think real world evidence is going to be very important in terms of, is it actually useful? I think we all assume it’s useful, but I’m talking about social and the development sector. I can imagine so many ways it’s useful, but it would be good to make sure we build evidence on how it can be trusted and, of course, be useful, metricize this a bit more. Thank you.

Justin Carsten

Thank you. Ms. Asha? Well, I

Natasha Crampton

think one of the points that has come out clearly in this discussion is that trustworthy AI diffusion is not going to just happen by itself. We have to make choices that lead to that outcome. And so for that reason, I am excited about these attempts at measurement in multiple dimensions, measurement of the systems, but also measurement in the changes of our economy so that we can then start to see whether the interventions that we’re putting in place are actually having the desired effect. Because we get to write this future, but we have to actively guide it. And I think data in multiple dimensions is really important. keys are there. Thank you. And the

Justin Carsten

final word on measurement should go to Peter. So Peter. I’m going

Peter Mattson

to echo the obvious point, which is that measurement is tremendously important. And then the hidden point, which is the scope of measurement is vast. And so we need to get really good at it, both in terms of quality and the efficiency, the cost efficiency with which we can implement it and with which we can evolve it. Thank you. Could you

Justin Carsten

please give a round of applause to an excellent panel. Thank you so much. Thank you. Hello, hello, hello, hello, hello. Hello. Hello. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (21)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“Moderator Justin Carsten opened the session by noting the significance of the summit and introducing each panelist.”

The knowledge base identifies Justin Carsten as the moderator who opened by noting the significance of the summit and introducing the panelists [S1] and [S9].

Confirmedhigh

“Dr Saurabh Garg was the working‑group chair who discussed the main obstacle to coordinating an international AI effort.”

Saurabh Garg is listed as a speaker and identified with a governmental role, confirming his participation in the discussion [S2].

Confirmedhigh

“Garg said the difficulty lies less in physical resources than in the governance of those resources, emphasizing inter‑dependence of hardware, software and ethical protocols.”

The knowledge base notes that Garg argues the problem is weak institutions and governance rather than scarcity of resources, aligning with his statement [S84] and his advocacy for public-utility-style compute governance [S85].

Additional Contextmedium

“Establishing robust sharing mechanisms and governance frameworks is essential for AI coordination.”

Garg’s broader commentary stresses the need for intelligent prioritisation and governance frameworks to allocate compute for public-interest uses [S85].

Additional Contextmedium

“A shortage of skilled talent and institutional capability compounds the problem; infrastructure can be bought but expertise must be cultivated.”

The knowledge base highlights Garg’s focus on talent development and institutional capacity as critical complements to infrastructure investment [S85].

Confirmedmedium

“Brad delivered a keynote earlier in the week based on a joint blog post with Natasha Crampton.”

The transcript notes that Brad’s speech was based on a recent blog post co-authored with the moderator, matching the report’s description [S21].

Confirmedmedium

“Microsoft’s third pillar involves collaborations with ML Commons to extend safety benchmarks to languages such as Hindi, Tamil, Malay, Japanese and Korean, and the Lingua Africa initiative for under‑represented language data.”

Peter Mattson from ML Commons is listed as a participant in the session, confirming the involvement of ML Commons in multilingual AI efforts [S9].

External Sources (95)
S1
Democratizing AI Building Trustworthy Systems for Everyone — – Peter Mattson- Natasha Crampton
S2
The Foundation of AI Democratizing Compute Data Infrastructure — 700 words | 130 words per minute | Duration: 321 secondss I’m Saurabh Garg. I’m secretary in the Ministry of Statistics…
S3
The Foundation of AI Democratizing Compute Data Infrastructure — -Saurabh Garg: Secretary in the Ministry of Statistics and Program Implementation in the Government of India
S4
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten
S5
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Participant** – (Role/title not specified – appears to be Dr. Esther Yarmitsky based on context)
S6
Keynote Address_Revanth Reddy_Chief Minister Telangana — -Participant: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or organizer…
S7
Leaders TalkX: Moral pixels: painting an ethical landscape in the information society — – **Participant**: Role/Title: Not specified, Area of expertise: Not specified
S8
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten – Natasha Crampton- Participant- Justin Carsten
S9
Democratizing AI Building Trustworthy Systems for Everyone — The session was moderated by Justin Carsten, who opened by noting the significance of the summit and introducing each pa…
S10
Multi-stakeholder Discussion on issues about Generative AI — Natasha Crampton:So, I’m Natasha Crankjian from Microsoft. I’m incredibly optimistic about AI’s potential to help us hav…
S11
Towards a Safer South Launching the Global South AI Safety Research Network — – Mr. Abhishek Singh- Ms. Natasha Crampton- Ms. Chenai Chair – Ms. Natasha Crampton- Dr. Rachel Sibande
S12
Democratizing AI Building Trustworthy Systems for Everyone — – Dr. Saurabh Garg- Natasha Crampton – Dr. Saurabh Garg- Natasha Crampton- Justin Carsten – Natasha Crampton- Particip…
S13
From Technical Safety to Societal Impact Rethinking AI Governanc — -Dame Wendy Hall- Regius Professor of Computer Science, Associate Vice President and Director of the Web Science Institu…
S14
Beyond North: Effects of weakening encryption policies | IGF 2023 WS #516 — Prateek Waghre:Thank you very much for having me. I was told I have about 10 minutes, so I’ve just started my timer to m…
S15
Session — Technology choices should be adapted to local contexts and implemented gradually
S16
How can trade rules shape the future of the digital economy? (Third World Network) — The analysis emphasises the need for regulatory space for governments in the face of rapidly evolving technology. One ke…
S17
Towards a Safer South Launching the Global South AI Safety Research Network — Ms. Crampton commits Microsoft to fulfilling the New Delhi Frontier AI commitments regarding multilingual and multicultu…
S18
Closing Session  — Sustained collaboration between governments, industry, and other stakeholders is essential for translating recommendatio…
S19
Responsible AI for Shared Prosperity — Impact:This comment provided empirical validation for the urgency of the initiatives being discussed and introduced the …
S20
AI for Social Empowerment_ Driving Change and Inclusion — What is that? What is that regulation? And I think one of the – we have this AI – the Global Index on Responsible AI tha…
S21
https://app.faicon.ai/ai-impact-summit-2026/democratizing-ai-building-trustworthy-systems-for-everyone — Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right? Is it …
S22
From Technical Safety to Societal Impact Rethinking AI Governanc — Dame Wendy Hall introduced the concept of systematic measurement for what she termed “social machines” – socio-technical…
S23
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S24
WS #100 Integrating the Global South in Global AI Governance — Salma Alkhoudi: So this slide is probably well before I get to the slide, just really loud. Is this good? Okay. I’m Se…
S25
Smart Regulation Rightsizing Governance for the AI Revolution — However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond …
S26
Why science metters in global AI governance — Low to moderate disagreement level with high consensus on core principles but divergent views on implementation strategi…
S27
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Since its adoption in May 2019, 48 countries and the European Union have adhered to the OECD Principles on Artificial In…
S28
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Evidence:Microsoft’s five-area investment plan includes infrastructure (connectivity, energy access), working with local…
S29
The perils of forcing encryption to say “AI, AI captain” | IGF 2023 Town Hall #28 — It asserts that fundamental rights, including privacy and child safety, should be considered co-equal, with no single ri…
S30
Opening and Sustaining Government Data | IGF 2023 Networking Session #86 — By making data open and transparent, governments can build public trust and foster civil society initiatives. Overall, t…
S31
AI regulation offers development opportunity for Latin America — Latin America isuniquelypositioned to lead on AI governance by leveraging its social rights-focused policy tradition, em…
S32
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — AI policies in Africa should ideally espouse a context-specific and culturally sensitive orientation. The prevailing ten…
S33
Safeguarding Children with Responsible AI — Cultural, contextual, and inclusion considerations
S34
Ministerial Roundtable — Careful understanding of opportunities for cultural and language aspects is important, requiring upskilling and knowledg…
S35
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Owen Lauder- Wifredo Fernandez- Austin Marin Artificial intelligence | Building confidence and security in the use of…
S36
Democratizing AI Building Trustworthy Systems for Everyone — A consensus emerged around the importance of measurement for building trust. Mattson advocated for “common yardsticks th…
S37
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Just as cars have standardized fuel economy ratings and crash test results that help consumers make informed decisions, …
S38
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Standardized measurement approaches are needed to inform evidence-based policy decisions
S39
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Alan Paic:Yes, it was not about further countries joining. Well, I can also mention that. So we do have a membership pro…
S40
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — She explains that private sector will invest in expensive compute facilities, but government and donor organizations mus…
S41
UN Secretary-General report outlines voluntary financing options for AI capacity building — The UN Secretary-General has issued areport onInnovative Voluntary Financing Options for Artificial Intelligence Capacit…
S42
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — “Trustability because we need to trace the systems, the models, the data that we use for AI.”[49]. “Verifiability is the…
S43
WS #31 Cybersecurity in AI: balancing innovation and risks — Trust is subjective and can’t be measured with traditional statistical methods. A conceptual framework is needed to defi…
S44
Democratizing AI Building Trustworthy Systems for Everyone — “Because if I had to point to anything that’s holding back AI today, it’s not capability, it’s reliability, right?”[62]….
S45
How AI Drives Innovation and Economic Growth — Summary:The speakers show broad agreement on AI’s transformative potential for development but significant disagreements…
S46
How AI Drives Innovation and Economic Growth — The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on impl…
S47
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Moderate disagreement with significant implications. While speakers agreed on broad goals, their different assessments o…
S48
AI Innovation in India — No meaningful disagreements were present. This was a celebratory and supportive environment where speakers complemented …
S49
Building Population-Scale Digital Public Infrastructure for AI — Thanking participants openly signals respect and appreciation, reinforcing the relational capital that underpins the spr…
S50
Multistakeholder Partnerships for Thriving AI Ecosystems — Evidence:He provides a specific example of the challenge: ‘If I am a startup in India, I have built a good tool, how do …
S51
The Foundation of AI Democratizing Compute Data Infrastructure — High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international …
S52
Shaping AI’s Story Trust Responsibility &amp; Real-World Outcomes — High level of consensus with strong alignment on fundamental principles and practical approaches. This suggests the AI g…
S53
The Foundation of AI Democratizing Compute Data Infrastructure — Consensus level:High level of consensus across diverse stakeholders (academic, government, civil society, private sector…
S54
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Approach to balancing data protection with data sharing Balancing Data Protection with Data Sharing and Innovation Leg…
S55
WS #208 Democratising Access to AI with Open Source LLMs — Participants debated the role of regulation versus open-source approaches in addressing monopolies and ensuring equitabl…
S56
Building Population-Scale Digital Public Infrastructure for AI — One of the big barriers that we are currently seeing is the fragmentation that is occurring out there… thousands of [p…
S57
Leveraging AI4All_ Pathways to Inclusion — The discussion revealed that many AI products remain stuck in pilot stage due to surrounding system challenges rather th…
S58
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — Summary:Sharma identifies compute resources and research talent as the main barriers, suggesting regulatory issues are l…
S59
WS #100 Integrating the Global South in Global AI Governance — Salma Alkhoudi: So this slide is probably well before I get to the slide, just really loud. Is this good? Okay. I’m Se…
S60
Smart Regulation Rightsizing Governance for the AI Revolution — However, significant implementation challenges remain, particularly around scaling coalition-building approaches beyond …
S61
Global AI Governance: Reimagining IGF’s Role &amp; Impact — ## Key Governance Challenges Ivana Bartoletti: Thank you very much and so sorry for not being able to be physically wit…
S62
Global dialogue on AI governance highlights the need for an inclusive, coordinated international approach — Global AI governance was the focus of a high-levelforumat the IGF 2024 in Riyadhthat brought together leaders from gover…
S63
Why science metters in global AI governance — Low to moderate disagreement level with high consensus on core principles but divergent views on implementation strategi…
S64
Democratizing AI Building Trustworthy Systems for Everyone — A key component addresses multilingual and multicultural AI development, as “AI is no good to you if it does not work in…
S65
Welfare for All Ensuring Equitable AI in the Worlds Democracies — Evidence:Microsoft’s five-area investment plan includes infrastructure (connectivity, energy access), working with local…
S66
Developing capacities for bottom-up AI in the Global South: What role for the international community? — ## Conclusion and Next Steps ## Major Discussion Points ## Practical Applications and Examples ## Unresolved Question…
S67
Towards a Safer South Launching the Global South AI Safety Research Network — Ms. Crampton commits Microsoft to fulfilling the New Delhi Frontier AI commitments regarding multilingual and multicultu…
S68
Day 0 Event #251 Large Models and Small Player Leveraging AI in Small States and Startups — Jeff Bullwinkel: Well, thank you very much, Natalie. It’s great to be here in Oslo. Welcome to everybody here in the roo…
S69
Scaling Trusted AI_ How France and India Are Building Industrial &amp; Innovation Bridges — “Trustability because we need to trace the systems, the models, the data that we use for AI.”[49]. “Verifiability is the…
S70
Town Hall: How to Trust Technology — Models’ factuality can be formally measured.
S71
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — very much. Yeah, so maybe you know, I will just introduce a little bit Candela. It’s a startup coming from the CNRS lab….
S72
AI regulation offers development opportunity for Latin America — Latin America isuniquelypositioned to lead on AI governance by leveraging its social rights-focused policy tradition, em…
S73
Global Perspectives on Openness and Trust in AI — And then exclusive partnerships and the systems being opaque. So those were the things identified in the market study. A…
S74
Open Forum #58 Collaborating for Trustworthy AI an Oecd Toolkit and Spotlight on AI in Government — Jibu Elias: Thank you very much, Yoichi-san. It’s an honor to be here to share my experience building a responsible AI e…
S75
Artificial intelligence (AI) – UN Security Council — The discussions across various sessions highlighted several risks associated with the over-reliance on AI-powered conten…
S76
Artificial Intelligence &amp; Emerging Tech — Umut Pajaro Velasquez:Hello everyone, well as Jennifer will say I will be presenting mainly the outputs from the youth l…
S77
Ministerial Roundtable — Sociocultural | Development The discussion highlighted the importance of carefully understanding the opportunities pres…
S78
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S79
AI for Good Impact Awards — D’hondt emphasizes that artificial intelligence systems should be developed with principles of inclusivity and openness,…
S80
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S81
Dynamic Coalition Collaborative Session — The discussion began with an optimistic, collaborative tone as panelists shared their expertise and perspectives. Howeve…
S82
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the inte…
S83
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration G…
S84
ACKNOWLEDGEMENT — Apparently, the problem is not scarcity of resources but that of weak institutions that can ensure transparency and good…
S85
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Dr. Garg advocates for treating compute infrastructure as a public utility that enables innovation and research. Rather …
S86
Consular diplomacy and e-diplomacy — In thistalk at ted.comby Parag Khanna, he illustrates how the world is in fact built around physical resources rather th…
S87
Defence against the DarkWeb Arts: Youth Perspective | IGF 2023 WS #72 — Technological sovereignty involves hardware, software and protocols
S88
Critical Infrastructure in the Digital Age: From Deep Sea Cables to Orbital Satellites — Cybersecurity | Infrastructure Some of the most interesting and complex challenges are not just software challenges, bu…
S89
Media Briefing: Unlocking ASEAN’s Digital Future – Driving Inclusive Growth and Global Competitiveness / DAVOS 2025 — De Vusser emphasizes the need for strategic investments in AI talent development and building trust in AI systems. This …
S90
The Digital Town Square Problem: public interest info online | IGF 2023 Open Forum #132 — The analysis highlights the lack of infrastructure and human capacity as major challenges that need to be addressed. The…
S91
AI: Lifting All Boats / DAVOS 2025 — Bill Thomas: Well, the nice thing about going last is you get to tie a bunch of these things together. And I will say …
S92
Resilient and Responsible AI | IGF 2023 Town Hall #105 — The need for international development partners to customize their support based on the priorities of each country was e…
S93
Day 0 Event #174 Human Rights Impacts of AI on Marginalized Populations — Alisson Peters: Thank you very much to all of our special representatives and envoys. I think as you heard here at the…
S94
Closure of the session — Chair:Thank you very much, Japan, for your statement. Thank you very much, Japan. I was just consulting with the Secreta…
S95
Digital divides &amp; Inclusion — Bhanu Nipayan:Yes, thank you, Chair, for again, giving me an opportunity to speak a few things. In fact, in a very inter…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Dr. Saurabh Garg
2 arguments132 words per minute297 words134 seconds
Argument 1
Governance challenges of sharing mechanisms and ecosystem interdependence (Dr. Saurabh Garg)
EXPLANATION
Dr. Garg highlights that the biggest obstacles to international AI collaboration lie in governing how resources and protocols are shared. He stresses that the AI ecosystem’s interdependence across hardware, software, and ethical standards makes coordination complex.
EVIDENCE
He notes that while foundational computer-resource sharing is a major challenge, a larger issue is managing the interdependence of the AI ecosystem, which spans hardware, software, protocols and ethics [6]. He adds that governance of sharing mechanisms, protocols and the overall framework is one of the biggest challenges [7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The difficulty of governing shared resources and the interdependence of hardware, software, and ethical protocols is highlighted in [S9].
MAJOR DISCUSSION POINT
Governance challenges
AGREED WITH
Natasha Crampton, Wendy Hall
DISAGREED WITH
Natasha Crampton
Argument 2
Developing domain‑specific, efficient models lowers compute and energy demands, supporting broader diffusion (Dr. Saurabh Garg)
EXPLANATION
Dr. Garg argues that creating more efficient, domain‑specific AI models can reduce the need for massive compute and energy, making AI more affordable for low‑resource settings. This efficiency is essential for widening AI diffusion globally.
EVIDENCE
He states that focusing on more efficient models will help reduce compute and energy costs, which are among the biggest current expenses, and that domain-specific models enable better usage and wider diffusion [310-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Garg’s call for smaller, domain-specific models that consume less power is corroborated by [S2] and [S3].
MAJOR DISCUSSION POINT
Efficient model development
AGREED WITH
Peter Mattson, Participant
N
Natasha Crampton
4 arguments140 words per minute1432 words611 seconds
Argument 1
Need for adaptable technology that respects local laws, values, and provides configurable controls (Natasha Crampton)
EXPLANATION
Natasha explains that Microsoft designs AI products to be trustworthy by design, recognizing that legal and cultural expectations differ across jurisdictions. The technology therefore includes configurable controls and defaults that allow local adaptation.
EVIDENCE
She describes her role in ensuring products are built trustworthy by design, noting that not every part of the world has the same rules or expectations, so Microsoft builds in controls and choices for downstream users and carefully considers default settings while preserving agency for local contexts [80-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The importance of technology that can be adapted to local legal and cultural contexts is discussed in [S15] and [S16]; Natasha’s emphasis on trustworthy-by-design aligns with statements in [S10].
MAJOR DISCUSSION POINT
Adaptable, locally‑respectful technology
AGREED WITH
Justin Carsten, Wendy Hall
Argument 2
$50 billion investment and a five‑pillar framework (infrastructure, skilling, multilingual AI, local innovation, data sharing) to close the North‑South AI gap (Natasha Crampton)
EXPLANATION
Natasha announces Microsoft’s commitment of $50 billion by the end of the decade to accelerate AI diffusion to the Global South. The strategy is organized around five pillars: infrastructure, skilling, multilingual AI, local innovation, and data sharing.
EVIDENCE
She states that Microsoft will spend $50 billion by decade’s end and outlines the five-pillar approach, beginning with infrastructure investments in data centres and connectivity, followed by skilling programmes, multilingual AI work, local innovation initiatives, and data-sharing projects with partners such as the World Bank [29-33][45-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Microsoft’s $50 billion commitment and the five-pillar approach are detailed in [S17] and [S1].
MAJOR DISCUSSION POINT
Five‑pillar investment plan
DISAGREED WITH
Dr. Saurabh Garg
Argument 3
Deep, long‑term partnerships with governments, NGOs, and industry are required to deliver each pillar (Natasha Crampton)
EXPLANATION
Natasha stresses that the success of each pillar depends on sustained collaboration with a range of stakeholders, including governments, NGOs, venture capitalists and other private‑sector partners. These partnerships provide the financing, policy alignment and local expertise needed for impact.
EVIDENCE
She notes that the scale of infrastructure investment requires significant private-sector funding together with government and venture-capital sources [41-44], and later emphasizes that none of the five limbs can succeed without deep, long-term partnerships [72-74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The necessity of deep, sustained partnerships for the five pillars is emphasized in [S9], [S18] and reinforced in [S1].
MAJOR DISCUSSION POINT
Partnerships for pillar delivery
AGREED WITH
Peter Mattson, Dr. Saurabh Garg, Justin Carsten
Argument 4
Sharing AI adoption and usage data with policy‑making bodies helps track impact and guide interventions (Natasha Crampton)
EXPLANATION
Natasha describes Microsoft’s commitment to share data on AI adoption and usage with organisations such as the World Bank, enabling policymakers to monitor where AI is spreading faster or slower and to target interventions accordingly.
EVIDENCE
She explains that Microsoft will contribute data on AI adoption to central projects, including one led by the World Bank, so policymakers can see where AI is adopted quickly, where it lags, and where skilling or infrastructure interventions are needed [65-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Microsoft’s plan to share AI adoption data with bodies such as the World Bank is mentioned in [S17] and [S9].
MAJOR DISCUSSION POINT
Data sharing for policy insight
P
Peter Mattson
2 arguments172 words per minute901 words314 seconds
Argument 1
Reliable AI depends on common, industrial‑grade benchmarks; multilingual safety and federated evaluation are key (Peter Mattson)
EXPLANATION
Peter argues that trustworthy AI requires standardized, industrial‑grade benchmarks that provide common yardsticks for reliability. He highlights multilingual safety benchmarks and federated evaluation as essential components for achieving this reliability.
EVIDENCE
He states that reliable AI needs common industrial-grade benchmarks and that metrics and evaluations are the yardsticks for progress [120-124]. He cites work on multilingual safety benchmarks and the Lingua Africa initiative as examples, and describes the MedPerf federated evaluation project that tests models across diverse, distributed datasets for healthcare reliability [124-128][135-137].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for industrial-grade benchmarks and multilingual safety benchmarks is supported by [S1] and [S9].
MAJOR DISCUSSION POINT
Benchmarks for reliable AI
AGREED WITH
Justin Carsten, Wendy Hall, Natasha Crampton, Dr. Saurabh Garg
DISAGREED WITH
Wendy Hall, Justin Carsten
Argument 2
Future benchmarks must support multi‑turn, agentic systems and be cost‑efficient at scale (Peter Mattson)
EXPLANATION
Peter points out that upcoming AI systems will be multi‑turn and agentic, so benchmarks must evolve to assess such interactions. He also stresses that building and maintaining these benchmarks is resource‑intensive, requiring cost‑effective solutions.
EVIDENCE
He observes that future AI will be multi-turn and agentic, necessitating broader and deeper benchmark coverage, and notes the high resource intensity of creating and sustaining such benchmarks [168-170].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Future benchmark needs for multi-turn, agentic systems and the requirement for cost-effective scaling are discussed in [S1]; the broader concern about reliability as a barrier is noted in [S21].
MAJOR DISCUSSION POINT
Next‑generation benchmark needs
W
Wendy Hall
2 arguments156 words per minute1740 words667 seconds
Argument 1
Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall)
EXPLANATION
Wendy outlines the UK’s new AI metrology initiative, which aims to create systematic metrics—including a “trust factor”—to evaluate AI safety and reliability. She argues that such measurement is essential for building confidence in AI systems.
EVIDENCE
She describes the establishment of the UK’s AI Measurement and Security Institute, the creation of a “trust factor” metric, and the broader effort to develop AI metrology as a science for measuring trust and safety [290-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The UK AI metrology initiative and the proposed “trust factor” metric are described in [S22] and referenced again in [S9].
MAJOR DISCUSSION POINT
AI metrology and trust metric
AGREED WITH
Peter Mattson, Justin Carsten, Natasha Crampton, Dr. Saurabh Garg
DISAGREED WITH
Peter Mattson, Justin Carsten
Argument 2
Not all data can be openly released; robust data‑governance frameworks, cross‑border sharing mechanisms, and global data registries are needed (Wendy Hall)
EXPLANATION
Wendy emphasizes that while open data is valuable, many datasets cannot be fully open due to privacy, security, or sovereignty concerns. She calls for strong data‑governance frameworks, mechanisms for cross‑border data flows, and worldwide data registries to manage data responsibly.
EVIDENCE
She notes that the open-data movement is important but not all data can be open, stresses the need for exchangeable yet non-open data, and outlines work on UN data-governance, cross-border sharing, and the creation of global data repositories or registries [305-311].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hall’s clarification that open data must be balanced with privacy and the need for exchangeable frameworks is provided in [S9].
MAJOR DISCUSSION POINT
Balanced data openness and governance
AGREED WITH
Dr. Saurabh Garg, Natasha Crampton
DISAGREED WITH
Peter Mattson, Natasha Crampton
J
Justin Carsten
2 arguments81 words per minute1457 words1070 seconds
Argument 1
Collaboration and systematic measurement are essential to steer trustworthy AI diffusion (Justin Carsten)
EXPLANATION
Justin underscores that collaborative efforts and systematic measurement are key to guiding the diffusion of trustworthy AI. He prompts panelists to consider how open benchmarks can support sovereign AI capabilities.
EVIDENCE
He asks Peter how open benchmarks can contribute to building sovereign capabilities, linking collaboration with measurement as a driver for trustworthy AI diffusion [94-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The emphasis on collaboration coupled with systematic measurement for trustworthy AI appears in [S9] and is reinforced by the trust-centric discussion in [S23].
MAJOR DISCUSSION POINT
Collaboration + measurement
AGREED WITH
Peter Mattson, Wendy Hall, Natasha Crampton, Dr. Saurabh Garg
DISAGREED WITH
Peter Mattson, Wendy Hall
Argument 2
Ongoing measurement and evaluation guide the development of trustworthy AI systems (Justin Carsten)
EXPLANATION
Justin reiterates that continuous measurement and evaluation are central to advancing trustworthy AI, emphasizing that democratizing AI depends on robust metrics and assessment frameworks.
EVIDENCE
He references his earlier question about open benchmarks and adds a comment about pushing democratizing AI and trustworthiness, highlighting the role of measurement in the process [94-100][306-308].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of continuous measurement in advancing trustworthy AI is highlighted in [S9] and the reliability focus in [S21].
MAJOR DISCUSSION POINT
Measurement drives trustworthy AI
P
Participant
2 arguments158 words per minute1007 words381 seconds
Argument 1
Philanthropic organisations should address sustainability, edge computing, language diversity, and affordability to enable trustworthy AI in health, agriculture, and other sectors (Participant)
EXPLANATION
The participant argues that philanthropic bodies must focus on sustainable, low‑resource AI solutions, including edge computing, multilingual support, and affordable models, to ensure trustworthiness in sectors like health and agriculture.
EVIDENCE
She discusses the need for sustainable models, edge-computing capabilities for low-connectivity settings, language diversity, and affordability, especially for frontline health workers and agricultural applications, emphasizing that trust hinges on reliability, accessibility, and relevance [184-210].
MAJOR DISCUSSION POINT
Philanthropy for sustainable, inclusive AI
Argument 2
Open‑source models and lower‑parameter alternatives can reduce cost barriers for the Global South (Participant)
EXPLANATION
The participant highlights that open‑source AI models and smaller, lower‑parameter versions can lower entry costs, making AI more accessible to low‑resource regions.
EVIDENCE
She notes that open-source approaches are critical because many governments in the Global South cannot afford large, expensive models, and that open-source can help bridge that gap [215-220].
MAJOR DISCUSSION POINT
Open‑source as cost reducer
AGREED WITH
Dr. Saurabh Garg, Peter Mattson
DISAGREED WITH
Natasha Crampton, Participant (Harish)
Agreements
Agreement Points
Governance of shared AI resources and data is a critical challenge for international collaboration
Speakers: Dr. Saurabh Garg, Natasha Crampton, Wendy Hall
Governance challenges of sharing mechanisms and ecosystem interdependence (Dr. Saurabh Garg) Need for adaptable technology that respects local laws, and provides configurable controls (Natasha Crampton) Not all data can be openly released; robust data‑governance frameworks, cross‑border sharing mechanisms, and global data registries are needed (Wendy Hall)
All three speakers stress that effective AI diffusion requires strong governance structures-whether for sharing computational resources, embedding configurable controls for local legal contexts, or establishing data-governance frameworks and registries-to manage interdependence and sovereignty concerns [6-7][80-86][305-311].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the UN Secretary-General’s report on voluntary financing options for AI capacity building that stresses shared governance frameworks for data and models [S41] and reflects ongoing discussions on balancing data protection with sharing in public-sector initiatives [S54]; multistakeholder governance models are highlighted as essential in AI diffusion roadmaps [S50].
Technology must be adaptable to local laws, values and cultural contexts
Speakers: Natasha Crampton, Justin Carsten, Wendy Hall
Need for adaptable technology that respects local laws, values, and provides configurable controls (Natasha Crampton) Question about managing the challenge of broad AI standards versus individual nation needs (Justin Carsten) Designing data centres with sovereignty controls and recognising fragmented world (Wendy Hall)
Natasha explains Microsoft builds in configurable controls for downstream adaptation; Justin highlights the tension between global standards and national regulations; Wendy notes Microsoft’s sovereignty controls for data centres, all underscoring the need for locally-respectful AI designs [75-78][80-86][37-39].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open AI sharing and national data-sovereignty requirements is evident in debates on regulation versus open-source LLMs [S55] and in analyses of data residency constraints that affect deployment in the Global South [S58].
Long‑term, multi‑stakeholder partnerships are essential to deliver AI diffusion pillars
Speakers: Natasha Crampton, Peter Mattson, Dr. Saurabh Garg, Justin Carsten
Deep, long‑term partnerships with governments, NGOs, and industry are required to deliver each pillar (Natasha Crampton) Partner ecosystem is needed to make models open‑source and adaptable (Peter Mattson) Talent and institutional capability are required for coordination (Dr. Saurabh Garg) Collaboration has been a key theme throughout the panel (Justin Carsten)
All speakers point to the necessity of sustained collaboration across public, private and civil-society actors to build infrastructure, skilling, multilingual AI and data-sharing initiatives [72-74][92-93][8-9][70-71].
Systematic measurement, benchmarking and metrics are vital for trustworthy AI
Speakers: Peter Mattson, Justin Carsten, Wendy Hall, Natasha Crampton, Dr. Saurabh Garg
Reliable AI depends on common, industrial‑grade benchmarks; multilingual safety and federated evaluation are key (Peter Mattson) Collaboration and systematic measurement are essential to steer trustworthy AI diffusion (Justin Carsten) Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall) Measurement of systems and economic impact is needed to gauge interventions (Natasha Crampton) Efficient models reduce compute/energy, aiding diffusion (Dr. Saurabh Garg) – implicitly a measurement of resource use
Multiple speakers converge on the idea that reliable, trustworthy AI requires common benchmarks, continuous evaluation, and new metrics such as a trust factor to guide policy and investment decisions [120-124][94-100][290-300][320-324][310-312].
POLICY CONTEXT (KNOWLEDGE BASE)
A broad consensus on AI metrology appears in standards work that likens AI metrics to automotive fuel-economy and crash-test ratings [S35][S37] and in calls for common yardsticks to build trust across sectors [S36][S38][S44].
Developing efficient, domain‑specific or lower‑parameter models lowers compute and energy costs, supporting broader diffusion
Speakers: Dr. Saurabh Garg, Peter Mattson, Participant
Developing domain‑specific, efficient models lowers compute and energy demands, supporting broader diffusion (Dr. Saurabh Garg) Future benchmarks are resource‑intensive; need cost‑efficient solutions (Peter Mattson) Open‑source models and lower‑parameter alternatives can reduce cost barriers for the Global South (Participant)
All three highlight that smaller, more efficient or open-source models are essential to make AI affordable and scalable, especially for low-resource settings [310-312][168-170][215-220].
POLICY CONTEXT (KNOWLEDGE BASE)
Compute scarcity is identified as a primary barrier for AI adoption in the Global South, with lower-parameter open-source models proposed as a mitigation strategy [S58] and open-source LLM debates highlighting cost-reduction benefits [S55].
Similar Viewpoints
Both stress that AI systems and data must be designed with sovereignty and local regulatory considerations in mind, requiring flexible controls rather than blanket openness [80-86][305-311].
Speakers: Natasha Crampton, Wendy Hall
Need for adaptable technology that respects local laws, values, and provides configurable controls (Natasha Crampton) Not all data can be openly released; robust data‑governance frameworks, cross‑border sharing mechanisms, and global data registries are needed (Wendy Hall)
Both advocate for a formal measurement science (benchmarks, metrology, trust metrics) to assess AI reliability and safety [120-124][290-300].
Speakers: Peter Mattson, Wendy Hall
Reliable AI depends on common, industrial‑grade benchmarks; multilingual safety and federated evaluation are key (Peter Mattson) Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall)
Both identify governance and partnership structures as the linchpin for successful AI diffusion across borders [6-7][72-74].
Speakers: Dr. Saurabh Garg, Natasha Crampton
Governance challenges of sharing mechanisms and ecosystem interdependence (Dr. Saurabh Garg) Deep, long‑term partnerships with governments, NGOs, and industry are required to deliver each pillar (Natasha Crampton)
Unexpected Consensus
Open‑source, lower‑parameter models as a cost‑reduction strategy for the Global South
Speakers: Natasha Crampton, Participant
Deep, long-term partnerships … (Natasha Crampton) – includes mention of open-weight models and open-source availability [92-93] Open-source models and lower-parameter alternatives can reduce cost barriers for the Global South (Participant)
While Natasha’s primary focus is on a $50 billion investment and five-pillar framework, she also notes Microsoft’s practice of releasing open-weight models, aligning directly with the participant’s call for open-source, low-parameter solutions-a convergence not explicitly anticipated given the corporate-philanthropic divide [92-93][215-220].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on open-source LLMs emphasize their role in reducing entry costs and addressing compute inequities, especially for low-resource regions [S55]; these ideas are linked to financing models that combine private investment with donor support for early-stage research [S40].
Academic and private‑sector alignment on AI metrology and trust metrics
Speakers: Wendy Hall, Peter Mattson
Establishing AI metrology and a “trust factor” metric … (Wendy Hall) Reliable AI depends on common, industrial‑grade benchmarks … (Peter Mattson)
Wendy, a UK academic, and Peter, a leader of an open-source benchmarking consortium, both champion a formal metrology/benchmarking regime, showing unexpected harmony between academic policy-making and industry-driven benchmark development [290-300][120-124].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for joint standards development is reflected in AI metrology proposals that integrate interdisciplinary measurement of societal impact [S36] and in U.S. AI standards initiatives that call for industry-academic collaboration on trust metrics [S35][S38].
Overall Assessment

The panel exhibits strong convergence on four core themes: (1) the necessity of robust governance and data‑sharing frameworks; (2) the importance of locally‑adaptable, culturally‑sensitive AI designs; (3) the centrality of long‑term, multi‑stakeholder partnerships; and (4) the critical role of systematic measurement, benchmarking and new trust metrics. Additional consensus appears around efficient, lower‑parameter models as a means to lower barriers for the Global South.

High consensus – most speakers, across academia, industry, and civil society, articulate overlapping solutions, indicating a shared understanding that trustworthy AI diffusion hinges on governance, adaptability, partnership, and measurement. This alignment suggests that future policy and investment initiatives are likely to be coordinated around these pillars, enhancing the prospects for equitable AI deployment.

Differences
Different Viewpoints
What constitutes the primary barrier to AI diffusion – governance of sharing mechanisms versus massive infrastructure investment
Speakers: Dr. Saurabh Garg, Natasha Crampton
Governance challenges of sharing mechanisms and ecosystem interdependence (Dr. Saurabh Garg) $50 billion investment and a five‑pillar framework (infrastructure, skilling, multilingual AI, local innovation, data sharing) to close the North‑South AI gap (Natasha Crampton)
Dr. Garg argues that the biggest obstacle is governance of sharing mechanisms and managing the interdependent AI ecosystem [6-9], while Natasha stresses that the scale of required infrastructure investment (data centres, connectivity, electricity) is the critical hurdle that must be addressed first [33-45]. Both agree AI diffusion is needed, but they disagree on which challenge is most urgent.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders are split: some prioritize governance frameworks for data and model sharing [S54][S55], while others point to the lack of compute infrastructure and financing as the dominant obstacle [S40][S58].
Extent to which data and AI models should be openly shared versus protected by governance frameworks
Speakers: Wendy Hall, Peter Mattson, Natasha Crampton
Not all data can be openly released; robust data‑governance frameworks, cross‑border sharing mechanisms, and global data registries are needed (Wendy Hall) Reliable AI depends on common, industrial‑grade benchmarks; open‑weight models and open benchmarks are essential for trustworthiness (Peter Mattson) Microsoft builds products with configurable controls to allow local adaptation, but also contributes data to policy‑making bodies (Natasha Crampton)
Wendy stresses limits to openness and calls for strong data-governance and controlled sharing [305-311]. Peter and Natasha promote openness – Peter through open-weight models and industrial benchmarks [92-94][120-124], Natasha by providing configurable, adaptable technology and sharing adoption data [80-86][65-69]. The speakers thus diverge on how much openness is feasible versus how much control is required.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors ongoing policy discussions on balancing data protection with openness, as seen in public-sector data governance workshops [S54] and in forums contrasting hard regulation with open-source collaboration [S55].
Preferred mechanism for financing and delivering AI capabilities – large private‑sector investment versus open‑source and philanthropic models
Speakers: Natasha Crampton, Participant (Harish)
$50 billion investment and a five‑pillar framework to close the North‑South AI gap (Natasha Crampton) Open‑source models and lower‑parameter alternatives can reduce cost barriers for the Global South (Participant)
Natasha outlines a $50 billion private-sector commitment with partnerships to build infrastructure, skills, and data sharing [29-33][45-49]. The participant argues that open-source, low-parameter models are crucial for affordability and sustainability, especially for health and agriculture sectors, suggesting philanthropy and open-source as primary levers [215-220]. Both aim for broader AI diffusion but disagree on the dominant financing and delivery model.
POLICY CONTEXT (KNOWLEDGE BASE)
A blended financing approach combining private capital, donor funding, and voluntary contributions has been advocated in UN reports and multistakeholder forums, highlighting alternatives to sole private-sector dominance [S40][S41][S55].
Approach to measuring AI trustworthiness – industrial‑grade benchmarks versus a broader AI metrology and “trust factor” metric
Speakers: Peter Mattson, Wendy Hall, Justin Carsten
Reliable AI depends on common, industrial‑grade benchmarks; multilingual safety and federated evaluation are key (Peter Mattson) Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall) Collaboration and systematic measurement are essential to steer trustworthy AI diffusion (Justin Carsten)
Peter emphasizes the need for industrial-scale benchmarks and federated evaluation to ensure reliability [120-124][135-137]. Wendy proposes a national AI metrology programme with a specific “trust factor” metric to assess safety [290-300]. Justin underscores the importance of ongoing measurement but does not specify the method, aligning with both but highlighting the need for systematic metrics [94-100][306-308]. The disagreement lies in the preferred measurement framework and metrics.
POLICY CONTEXT (KNOWLEDGE BASE)
Industrial benchmarks are promoted in standards bodies [S35][S37], while scholars such as Wendy Hall argue for a broader metrology science that captures societal impacts beyond performance metrics [S36][S43][S44].
Unexpected Differences
Wendy Hall’s critical view of the UK’s AI approach and broader AI hype versus other speakers’ optimistic framing of AI diffusion
Speakers: Wendy Hall, Other panelists (e.g., Justin, Natasha, Peter)
Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall) The panelists repeatedly emphasize optimism about AI diffusion, large investments, and collaborative solutions (e.g., Natasha’s $50 billion plan, Peter’s benchmarks, Justin’s collaboration focus)
Wendy adopts a skeptical tone, describing the UK’s AI work as “parochial” and highlighting fears around AI, whereas the rest of the panel consistently projects confidence in large-scale investment and technical solutions. This contrast in tone and assessment of AI’s current state was not anticipated given the overall collaborative optimism of the session [258-267][29-33][120-124].
POLICY CONTEXT (KNOWLEDGE BASE)
The contrast reflects a pattern identified in conference summaries where optimistic narratives about AI’s transformative potential coexist with cautionary critiques of hype and governance gaps [S45][S46][S36].
Overall Assessment

The discussion revealed several points of tension: (1) whether governance or infrastructure is the primary bottleneck; (2) the degree of openness permissible for data and models; (3) the preferred financing model—private‑sector investment versus open‑source/philanthropy; (4) the optimal measurement framework for trustworthy AI. While participants shared a common goal of expanding trustworthy AI globally, they diverged on the strategic levers needed to achieve it.

Moderate to high. The disagreements are substantive, touching on policy, financing, and technical standards, and they could shape divergent pathways for AI diffusion. If not reconciled, they may lead to fragmented approaches—some regions prioritising heavy infrastructure investment and private funding, others emphasizing open‑source, governance, and metrology—potentially slowing coordinated global progress.

Partial Agreements
All three agree that measurement is central to trustworthy AI, but they differ on the concrete approach: Justin calls for broad collaboration and systematic measurement, Peter stresses industrial benchmarks and federated evaluation, while Wendy advocates a national metrology institute and a specific trust metric [94-100][120-124][290-300].
Speakers: Justin Carsten, Peter Mattson, Wendy Hall
Collaboration and systematic measurement are essential to steer trustworthy AI diffusion (Justin Carsten) Reliable AI depends on common, industrial‑grade benchmarks; multilingual safety and federated evaluation are key (Peter Mattson) Establishing AI metrology and a “trust factor” metric is crucial for measuring AI safety and reliability (Wendy Hall)
All three concur that trustworthy AI requires mechanisms that can be adapted to diverse contexts, but Natasha focuses on product configurability, Garg on governance of shared resources, and Peter on benchmark standards to ensure reliability [80-86][6-9][120-124].
Speakers: Natasha Crampton, Dr. Saurabh Garg, Peter Mattson
Need for adaptable technology that respects local laws, values, and provides configurable controls (Natasha Crampton) Governance challenges of sharing mechanisms and ecosystem interdependence (Dr. Saurabh Garg) Reliable AI depends on common, industrial‑grade benchmarks (Peter Mattson)
Takeaways
Key takeaways
International AI collaboration faces major governance challenges, especially around sharing mechanisms, ecosystem interdependence, and aligning diverse legal and cultural norms. Microsoft announced a $50 billion, five‑pillar strategy to accelerate AI diffusion to the Global South, focusing on infrastructure, skilling, multilingual/cultural AI, local innovation, and data sharing. Trustworthiness and reliability of AI depend on industrial‑grade, multilingual, and federated benchmarks; measurement (AI metrology) and a “trust factor” metric are essential for guiding development. Philanthropic organisations can bridge gaps in low‑resource settings by supporting sustainable, edge‑focused, lower‑parameter models, language diversity, and open‑source solutions. Open data is valuable but must be balanced with privacy and sovereignty concerns; robust data‑governance frameworks and global data registries are needed.
Resolutions and action items
Microsoft will invest $50 billion by 2030 to build AI infrastructure (data centres, connectivity, sovereign cloud options) in the Global South. Microsoft commits to up‑skill 2 million Indian teachers on AI fundamentals and to expand multilingual AI initiatives (e.g., Lingua Africa, safety benchmarks for Hindi, Tamil, Malay, Japanese, Korean). Microsoft and partner organisations will contribute AI adoption and usage data to a central repository managed in part by the World Bank to inform policy decisions. ML Commons will work toward industrial‑scale, multilingual safety and security benchmarks, including federated evaluation frameworks for sectors such as healthcare. The UK’s National Physical Laboratory will launch the Centre for AI Measurement and the AI Security Institute to develop AI metrology and a “trust factor” metric. Wendy Hall will continue work on UN‑led data‑governance initiatives, including the creation of cross‑border data‑sharing mechanisms and global data registries.
Unresolved issues
How to create a globally acceptable governance model for AI resource sharing that respects each nation’s values, laws, and sovereignty while remaining operationally efficient. Sustainable financing models for low‑resource regions beyond philanthropic grants and private‑sector investment. Technical and policy solutions for edge‑computing AI in areas with poor connectivity and limited power supply. Standardisation of open‑source licensing and model distribution to ensure affordability without compromising security or intellectual‑property rights. Concrete methods for measuring and reporting the societal impact of AI (e.g., employment effects, gender equity) across diverse contexts.
Suggested compromises
Design AI products with configurable controls and default settings that can be adapted by downstream users to meet local regulatory and cultural requirements. Combine sovereign cloud deployments with shared infrastructure to balance national data‑sovereignty concerns and economies of scale. Leverage open‑source models and weight‑spaces to empower local ecosystems while providing optional proprietary services for higher‑performance needs. Adopt a partnership‑centric approach where private‑sector investment, government funding, and philanthropic support are coordinated rather than competing. Use open‑data registries for discoverability while keeping sensitive datasets under controlled, shareable‑but‑not‑public licenses.
Thought Provoking Comments
One of the biggest challenges would be the governance around sharing mechanisms, sharing protocols, and managing the framework, as well as the talent and institutional capability required to democratize AI globally.
Highlights that technical resources are not the only bottleneck; governance structures and skilled human capital are critical for equitable AI diffusion, shifting focus from infrastructure to systemic issues.
Prompted subsequent speakers to discuss governance, measurement, and partnership models, leading to deeper conversation about how to operationalize responsible AI across nations.
Speaker: Dr. Saurabh Garg
Microsoft is committing $50 billion by 2030 to close the AI diffusion gap between the Global North and South, organized around five pillars: infrastructure, skilling, multilingual/multicultural AI, local innovation, and data sharing for policy insight.
Introduces a concrete, multi‑dimensional strategy from a major private sector player, framing the discussion around tangible actions rather than abstract challenges.
Set the agenda for the rest of the panel, leading others (e.g., Peter Mattson, Wendy Hall) to reference these pillars when discussing benchmarks, measurement, and inclusive development.
Speaker: Natasha Crampton
Reliability is the real blocker for AI adoption; we need industrial‑scale, trustworthy benchmarks—especially multilingual safety and federated evaluation—to make AI systems dependable across diverse contexts.
Identifies reliability, not capability, as the core obstacle and proposes concrete technical solutions (federated evaluation, confidential compute) that bridge research and real‑world deployment.
Shifted the conversation toward the necessity of rigorous evaluation frameworks, influencing Wendy Hall’s later emphasis on AI metrology and prompting discussion on measurement.
Speaker: Peter Mattson
We need a new science of AI metrology—systematic measurement of AI’s effects, including a ‘trust factor’—backed by interdisciplinary collaboration and institutions like the UK’s Centre for AI Measurement.
Frames trustworthiness as a measurable scientific discipline, moving the dialogue from policy statements to actionable, quantifiable metrics.
Created a turning point where the panel moved from describing challenges to proposing a structured approach for assessing trust, influencing the final remarks of both Natasha and Peter on measurement.
Speaker: Prof. Dame Wendy Hall
In low‑connectivity settings, trustworthiness must consider edge inference, language relevance, energy consumption, and open‑source models to ensure AI reaches the bottom 50 % of the population without creating new divides.
Brings a ground‑level perspective on practical constraints in the Global South, emphasizing sustainability, accessibility, and inclusivity beyond high‑level policy.
Added nuance to earlier high‑level commitments, prompting the panel to acknowledge the need for lightweight, localized solutions and reinforcing the importance of the skilling and multilingual pillars.
Speaker: Harish (Participant, Gates Foundation)
We must give more attention to developing more efficient, domain‑specific models to reduce compute and energy costs, which will broaden diffusion and make AI more sustainable.
Connects model architecture choices directly to sustainability and diffusion, linking technical design decisions with the broader equity goals discussed earlier.
Reinforced the earlier points about infrastructure and talent, and steered the conversation toward concrete research directions for the community.
Speaker: Dr. Saurabh Garg (later comment)
Open data is essential but not all data can be open; we need exchangeable, shareable datasets, cross‑border data flows, and global registries to enable trustworthy AI development.
Balances the ideal of openness with practical privacy and sovereignty concerns, highlighting data governance as a missing piece in AI governance discussions.
Extended the dialogue on governance to include data stewardship, influencing the panel’s concluding emphasis on measurement and the need for robust data infrastructures.
Speaker: Prof. Dame Wendy Hall
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad acknowledgment of AI’s global challenges to a concrete, multi‑layered roadmap. Dr. Garg’s focus on governance and talent set the stage for Natasha Crampton’s detailed five‑pillar investment plan, which anchored the conversation in actionable commitments. Peter Mattson’s emphasis on reliability and benchmarking introduced the technical foundation needed to realize those commitments, while Wendy Hall’s call for AI metrology transformed abstract trust concerns into measurable objectives. Contributions from the Gates Foundation highlighted on‑the‑ground constraints, ensuring the dialogue remained grounded in real‑world applicability. Together, these comments redirected the panel toward a synthesis of policy, technical standards, measurement, and inclusive implementation, culminating in a consensus that democratizing trustworthy AI requires coordinated governance, robust metrics, and sustained investment across infrastructure, talent, and data ecosystems.

Follow-up Questions
What governance models and sharing protocols are needed to coordinate international AI resource sharing?
He identified governance of sharing mechanisms as a major challenge, indicating a need for research into effective international frameworks.
Speaker: Dr. Saurabh Garg
How can talent and institutional capability be developed in regions lacking AI expertise?
He highlighted that infrastructure can be acquired but expertise must be built, pointing to a gap in capacity‑building research.
Speaker: Dr. Saurabh Garg
How can the Lingua Africa initiative ensure high‑quality, privacy‑preserving local language data collection?
She described the initiative to gather rich local data, raising questions about data quality, consent, and privacy in multilingual contexts.
Speaker: Natasha Crampton
What mechanisms will be used to share AI adoption and usage data with policymakers (e.g., World Bank project) and how will it inform policy?
She mentioned contributing adoption data to central projects, suggesting a need to study effective data‑to‑policy pipelines.
Speaker: Natasha Crampton
How can industrial‑scale, reliable benchmarking be achieved for multilingual safety and security?
He noted the gap between research benchmarks and industrial‑grade reliability, calling for robust benchmarking infrastructure.
Speaker: Peter Mattson
What are the technical and governance challenges of federated evaluation for healthcare AI across disparate legal systems?
He described federated evaluation as a promising approach but complex, indicating a research need on cross‑jurisdictional implementation.
Speaker: Peter Mattson
How should benchmarks evolve to assess multi‑turn, agentic AI systems?
He warned that future AI will be multi‑turn and agentic, requiring new benchmark designs.
Speaker: Peter Mattson
What low‑parameter, energy‑efficient models are suitable for edge inference in low‑connectivity settings?
He raised concerns about compute availability and edge deployment, highlighting a research gap in lightweight, localized models.
Speaker: Harish (Participant)
How can open‑source models be sustained financially for long‑term use in the Global South?
He pointed out affordability issues for governments in the Global South, suggesting a need for sustainable funding models for open‑source AI.
Speaker: Harish (Participant)
What standards and frameworks are needed for cross‑border data sharing and data registries to support trustworthy AI?
She emphasized the lack of global data governance and the need for registries, indicating a research agenda on international data sharing protocols.
Speaker: Wendy Hall
How can AI metrology be standardized globally, and what metrics should be used to quantify trustworthiness?
She discussed the emerging AI measurement institutes and the concept of a ‘trust factor’, calling for standardized metrology.
Speaker: Wendy Hall
What impact will the UK’s Centre for AI Measurement and AI Security Institute have on international AI governance?
She introduced new UK institutions, prompting inquiry into their effectiveness and influence on global standards.
Speaker: Wendy Hall
How can real‑world evidence be systematically collected and metricized to assess AI’s usefulness in development sectors?
He stressed the importance of evidence and metrics for trust and utility, indicating a need for robust evaluation frameworks.
Speaker: Harish (Participant)
What research is needed to create domain‑specific, efficient AI models that reduce compute and energy costs?
He suggested focusing on efficient, domain‑specific models to lower barriers to diffusion, highlighting a research direction.
Speaker: Dr. Saurabh Garg
How can multi‑dimensional measurement of AI’s economic and societal impact be operationalized?
She called for measurement across multiple dimensions to gauge intervention effectiveness, suggesting methodological development.
Speaker: Natasha Crampton
What cost‑effective methods can be developed for large‑scale AI measurement and evaluation?
He noted the resource intensity of measurement, indicating a need for scalable, affordable evaluation techniques.
Speaker: Peter Mattson
How can AI products be designed with configurable controls that satisfy diverse legal and cultural requirements across jurisdictions?
She described the need for adaptable technology to meet varying national laws and values, pointing to design research on configurable AI.
Speaker: Natasha Crampton
What criteria should determine when data can be openly shared versus when it must remain restricted, balancing openness and privacy?
She highlighted that not all data can be open, raising the need for principled guidelines on data openness versus restriction.
Speaker: Wendy Hall

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Designing Indias Digital Future AI at the Core 6G at the Edge

Designing Indias Digital Future AI at the Core 6G at the Edge

Session at a glanceSummary, keypoints, and speakers overview

Summary

The session focused on “AI at the Core, 6G at the Edge” and examined how India can move from a consumer of global technology to a leader in the next intelligence-driven communications frontier [1][10]. Ashok Kumar explained that, unlike earlier generations, the ITU’s 6G framework already embeds artificial intelligence as one of six usage scenarios and defines “ubiquitous intelligence” so that every element of the end-to-end system will have native AI [27-30]. He noted that this shift marks a historic opportunity for India’s ecosystem of MSMEs, startups, academia and research institutions to influence standards and build a complete 6G stack [33-35].


The government is facilitating participation by subsidising TSDSI and 3GPP membership to enable startups to join standards bodies at a reduced fee of ₹10,000 [42]. It launched a 6G Accelerated Research Program that has funded more than 100 projects covering terahertz, AI, machine learning and semantic communications, and is supporting testbeds such as terahertz and AOC facilities [45-53]. Additional collaborations include a joint roadmap with ANRF to evolve a release-18 system through releases 19-21, partnership with the Bharat 6G Alliance to shape policy, and coordination with DST’s RDI scheme to bring telecom into national research funding [56-68]. The Ministry of Telecom has also established 100 operational 5G labs across institutes, which are being leveraged as a foundation for early 6G research and are being offered to industry for joint experimentation [69-73].


In the panel, Radhakant Das highlighted that 6G will become a distributed computer fabric where AI is the basic infrastructure across radio, core, satellite and sensor layers [88-92]. Surojeet Roy described the proliferation of AI-enabled devices such as smart glasses and wearables, and projected that AI-driven traffic could rise to 30 % of total data by 2033, shifting the traditional downlink-dominant pattern to a much higher uplink demand [115-119][125-130][185-190]. He further argued that AI-based signal processing (e.g., DeepRx/DeepTx) can improve spectral efficiency by 25-30 % and that 6G will require up to 400 MHz bandwidth to deliver five-fold capacity gains over 5G [195-205]. Rajiv Saluja emphasized the need to “democratize intelligence,” moving inference to the edge for most workloads while reserving complex multi-agent tasks for central clouds, and stressed that a sovereign, end-to-end AI ecosystem is essential for national self-reliance [149-160][278-282]. Sandeep Sharma added that AI traffic growth demands attention to latency, uplink coverage and a token-economy model, and called for national frameworks to coordinate data exchanges, safety guardrails and open, API-driven architectures similar to the UPI model [162-173][237-244][260-264][337-340][341-343].


Audience members raised concerns about interoperability of AI-powered devices and the need for open AI APIs and centralized data marketplaces, to which Rajiv replied that Jio is building an open, multilingual AI stack and that open APIs are critical for scaling to India’s billion-user base [330-336]. The participants agreed that achieving an AI-native 6G future will require coordinated government policies, industry standards participation, sovereign data and model development, and an open ecosystem that can be scaled cost-effectively across the country [45-48][278-282][337-344].


Keypoints


Major discussion points


Government-driven 6G research, standards and ecosystem building – The Department of Telecom (DoT) is funding low-cost 3GPP/TSDSI membership for startups, running a “6G Accelerated Research Program” that has selected 100+ projects, establishing terahertz and AOC testbeds, collaborating with the Bharat 6G Alliance, and extending 5G labs across 100 institutes to seed 6G work [35-44][55-63][69-73].


AI as a native, “ubiquitous intelligence” element of 6G – The ITU 6G framework (released two years ago) already embeds AI in all six usage scenarios and defines “ubiquitous intelligence” as a core design principle, meaning every element-from user equipment to core and applications-will have AI built-in from the outset [26-31][28-30].


Technical shift toward AI-driven traffic and uplink-heavy loads – Nokia forecasts AI-related traffic rising to ~30 % of total WAN traffic by 2033, and predicts the downlink-to-uplink ratio will move from ~10:1 to about 4:1, demanding higher uplink capacity, wider (≈400 MHz) bandwidth, and AI-enhanced RAN functions such as DeepRx/DeepTx that can boost capacity by 25-30 % [125-132][185-205].


Business and societal value of “democratizing intelligence” – Industry leaders stress moving from connectivity to intelligence, delivering edge inference for most workloads, and creating new enterprise value pools (demand analytics, workflow automation, security) while keeping costs low through a token-economy model [149-158][267-276].


Sovereignty versus openness of the AI-6G ecosystem – There is a strong call for a sovereign, end-to-end AI stack (device-to-cloud-edge) built on Indian data, while also insisting on open, API-driven interfaces, national data-exchange frameworks, and safety guardrails to avoid vendor lock-in and ensure interoperability [277-286][330-337][341-344].


Overall purpose / goal


The session was convened to map India’s strategic roadmap for “AI at the Core, 6G at the Edge,” aligning government policy, industry capabilities, and academic research. Participants aimed to showcase ongoing public-sector initiatives, discuss technical and economic implications of AI-native 6G, and chart collaborative actions that will enable India to move from a technology consumer to a global standards contributor and creator of a sovereign, inclusive digital future.


Overall tone


The discussion began with a formal, optimistic opening by the moderator and the government speaker, emphasizing opportunity and national ambition. As the panel progressed, the tone shifted to a more technical and analytical one-detailing traffic forecasts, bandwidth needs, and AI-enhanced RAN concepts-while also acknowledging practical challenges (power consumption, data silos, safety). Throughout, the conversation remained constructive and forward-looking, ending on a collaborative note that called for open standards, sovereign data frameworks, and joint industry-government effort.


Speakers

Moderator


– Role/Title: Session moderator


– Expertise/Area: Moderating technology and AI/6G discussions


Ashok Kumar


– Role/Title: Director General, Department of Daily Communication, Government of India (Department of Telecom)


– Expertise/Area: Government policy and ecosystem development for 5G/6G and AI integration


Rajeev Saluja


– Role/Title: Vice President, 5G Radio, Reliance Jio


– Expertise/Area: 5G/6G radio technology, network deployment, intelligence democratization [S6]


Surojeet Roy


– Role/Title: Senior Telecommunications Leader, Head of Technology, Technology and Solutions, COE, Nokia India


– Expertise/Area: Telecommunications standards, AI-native RAN, 6G research and development [S2]


Sandeep Sharma


– Role/Title: Vice President & Global Head of Emerging Technologies, Network Services, Tech Mahindra


– Expertise/Area: AI innovation, network services, AI-driven use cases and ROI for industry sectors


Radhakant Das


– Role/Title: Head, Technology Engineering and Innovation Function for Network Solutions and Services (NSS), Tata Consultancy Services (TCS); Panel moderator


– Expertise/Area: Network solutions, AI-enabled 6G architecture, industry-academia-government collaboration [S3][S5]


Audience


– Role/Title: Various participants and questioners from the live and virtual audience


– Expertise/Area: Diverse (questions on interoperability, AI APIs, data sharing, etc.)


Additional speakers:


Radhika – participated in the closing ceremony by handing over the memento (role not otherwise specified).


Full session reportComprehensive analysis and detailed insights

The session opened with the moderator emphasizing India’s ambition to shift from a consumer of global technology cycles to a leader in the next intelligence-driven communications frontier, and invited Mr Ashok Kumar, Director-General of the Department of Daily Communication, to give the keynote address [1-2][10]. He framed the discussion around the theme “AI at the Core, 6G at the Edge”, signalling a strategic focus on embedding artificial intelligence into the very fabric of future networks [10-11] and also highlighted the importance of semantic communications and of designing AI that is power-efficient, avoiding excessive data-centre energy consumption [11-12].


Mr Kumar traced the evolution of mobile generations, noting that 2G, 3G and 4G were primarily aimed at connecting people, while later extensions such as NB-IoT added machine connectivity as an afterthought [12-14]. He explained that the ITU’s 5G framework (IMT-2020) was the first to include massive machine connectivity and ultra-low latency as core usage scenarios, but AI was still added only to solve specific network-function problems rather than being built-in [15-19][22-25]. In contrast, the ITU 6G framework released two years ago already lists integrated AI as one of six usage scenarios and enshrines “ubiquitous intelligence” as a key design principle, meaning that every element of the end-to-end 6G system-from user equipment to radio, core and applications-will have AI embedded natively [26-31][27-30].


The Department of Telecom (DoT) is translating this vision into concrete programmes. It subsidises 3GPP/TSDSI membership for startups at a token fee of ₹10 000, dramatically lowering the cost of standards participation [42]. In parallel, the DoT launched a “6G Accelerated Research Program” that has funded more than 100 projects covering terahertz communications, AI/ML, semantic communications and sensing [45-48][49-52]. Supporting infrastructure includes terahertz and AOC testbeds, a joint roadmap with ANRF to evolve a Release-18 system through Releases 19-21 (with Release 21 expected to be the first 6G-era specification) [55-63][56-58][60-62], and the Bharat 6G Alliance, which runs working groups on technology, spectrum and devices to feed policy recommendations to the government [55-63]. Additional backing comes from the Department of Science & Technology’s (DST) RDI scheme, now extended to telecom, and from the Ministry of Electronics & Information Technology (MeitY); 100 operational 5G labs across research institutes are being repurposed as seedbeds for early 6G experimentation [64-73].


After the keynote, the moderator introduced the panel-Rajiv Saluja (Reliance Jio), Surojeet Roy (Nokia India), Sandeep Sharma (Tech Mahindra) and himself (TCS)-and reiterated the session’s subtitle: “Designing India’s Next Resilient, Innovative and Efficient Digital Frontier” [75-80][86-94]. He described 6G as a distributed computer fabric where AI will serve as the basic infrastructure across radio, core, satellite (non-terrestrial) and sensor layers, enabling intelligence “everywhere” [88-92].


Device-level AI & traffic shift (Surojeet Roy). Roy outlined the proliferation of AI-enabled devices such as smart glasses, wearables and body patches, which will rely on edge inference because their form factor cannot host full-scale AI processing [115-124]. He warned that this will generate a substantial uplink traffic surge, reversing the historic downlink-dominant pattern; Nokia’s forecasts suggest AI-driven traffic could reach roughly 30 % of total WAN traffic by 2033, and the downlink-to-uplink ratio may shift from about 10:1 to 4:1 [125-132][185-190].


AI in the RAN (Surojeet Roy). Roy explained that deep-learning-based receivers and transmitters (DeepRx/DeepTx) are already proving capable of extracting useful signals in very low signal-to-noise conditions, potentially boosting spectral efficiency by 25-30 % and enabling higher-order modulation [195-202]. To accommodate the expected traffic, 6G will need up to 400 MHz of contiguous bandwidth-four times the typical 100 MHz used in 5G-combined with a five-fold increase in spectral efficiency, which together could deliver about twenty times the capacity of current 5G networks [203-206].


Business perspective (Rajiv Saluja). Saluja stressed that the next decade must focus on “democratising intelligence” rather than merely expanding connectivity. He argued that most simple AI inference workloads should be pushed to the edge, while complex multi-agent workflows remain in central clouds, thereby creating three emerging enterprise value pools: demand analytics from new data streams, workflow automation that lifts humans up the value chain, and stronger end-to-end security frameworks [149-160]. He also introduced the notion of “token sovereignty”, noting that control over latency, coverage and token consumption will become productivity-oriented KPIs [162-165].


Token-economy & KPI discussion (Sandeep Sharma). Sharma described a token-economy model in which latency, coverage and token consumption are treated as explicit KPIs; even a modest 10-20 % latency reduction can yield large efficiency gains [140-155]. He highlighted the need for national sandboxes with safety guardrails and auditability of AI models that can modify live network parameters [237-244][260-264].


Sovereignty vs openness. Saluja argued that India must build a sovereign, end-to-end AI stack-device, edge, cloud and intelligence layers-all owned and operated domestically, to keep costs low, ensure security and embed Indian cultural values [278-282]. Sharma advocated an open, API-driven and loosely-coupled ecosystem to avoid vendor lock-in and to enable monetisation of network APIs, citing the UPI model as a successful precedent [330-334][260-264]. Moderator Das prompted a discussion on a hybrid approach that balances openness with sovereign control [295-302].


Data & model training. Saluja noted that AI models must be trained on Indian data to support multilingual intelligence across all regional languages [333-336]. Sharma expanded on this by proposing a national data-exchange platform where anonymised enterprise data can be pooled for large-scale model training while safeguarding confidentiality [337-344]. Both agreed that such a framework would accelerate AI development and reduce dependence on foreign large-language models [321-328][337-344].


Audience Q&A.


– A participant asked whether AI-enabled devices such as Meta glasses should conform to a common AI-API architecture so that applications work across different operators and platforms; Saluja responded that Jio is building an open, multilingual AI stack with API-driven interfaces to scale to India’s billion-user base [312-320][330-336].


– Another question raised the monetisation of network APIs (OneEdge); Rajiv pledged to discuss the commercial model offline [340-345].


– Additional queries touched on leveraging idle GPU resources at cell towers for training and on the national data-exchange platform; Sharma and Roy offered brief comments [237-244][260-264][337-344].


The panel concluded by reaffirming the need for coordinated action across government, industry and academia. DoT will continue subsidising standards participation, expanding the 6G Accelerated Research Program, and leveraging the 100 5G labs for joint experimentation [42][45-53][69-73]. The Bharat 6G Alliance will keep feeding policy recommendations, while a national data-exchange and sandbox framework will be pursued to align pilots with forthcoming 6G standards and safety guidelines [237-244][341-344]. The moderator thanked all speakers, invited them to receive their mementos, and closed the session [73-82][363-371].


Overall, the session underscored India’s coordinated push-through policy, funding, testbeds and open standards-to embed AI at the core of 6G and to build a sovereign yet interoperable digital ecosystem.


Session transcriptComplete transcript of the session
Moderator

opportunity, ensuring that India moves from being a consumer of global technology cycles to becoming a sharper of the world’s next intelligence and connectivity frontier. To kick off the discussion, I would like to invite Mr. Ashok Kumar, Deepthi Director General, Department of Daily Communication, Government of India, to deliver a keynote address. Thank you.

Ashok Kumar

So my colleague panelist, the expert panelist here, the distinguished dignitaries in the hall, and other participants gathered here, Thank you,

Moderator

Mr. Ashok. Thank you, Mr. Ashok. Thank you, Mr. Ashok. So it’s

Ashok Kumar

my privilege to deliver the keynote address before such a gathering. So although the hall is like empty, but I suppose many of our participants are online. The theme of this session, AI at the Core, 6G at the Edge, captures the transformative journey which we have started now. So let me go back slightly back in the history. When we rolled out 2G, 3G and 4G, the vision was to connect V, human beings, and as technology progressed, we started connecting machines and objects through innovations like NB -IoT, as all of us know. Although they were not part of the original vision and we can say that those were evolutions, extensions and maybe we can also call that as afterthought.

When the work on 5G started at ITU way back in 2012, if you recall, after three years of deliberations with all the state 190 plus countries and also the sector members like industry, academia. So ITU released a 5G framework, they call it IMT 2020. And for the first time. The usage scenario, the three usage scenario envisioned by ITU included support for massive connectivity of objects and machines and also the applications which required very, very low latency. So, what we should say that for the first time technology was designed, it was not an afterthought even for machines, not only for we humans but also for the machines. As we know that the 5G journey started with 3G BP release 15 and that was also delivered in three parts, right?

Just to start early, so they had three part of releases of release 15 and then every one and half years or two years we have the next evolution of the 5G technology. And when we reached to release 18 and that is also called 5G advanced. So, basically AI, artificial intelligence began to be integrated at part of 3G. To solve the network functions or to solve some of the network functions requirement. So again, this was some sort of an afterthought, right? Because we started, our vision was not the native integration of AI into the 5G system, but as technology evolved, we started doing that, perhaps the precursor of the 6G. The shift now which we are seeing in 6G, the story is different.

If you look at the ITU framework for 6G, which was released two years back, so that has got six usage scenarios, they have envisioned six usage scenarios, and one of the usage scenarios is integrated artificial intelligence and communications. So now, the artificial intelligence is part of the initial thought itself, and more important, along with those six usage scenarios, what ITU conceived is the four overarching principles, and the fifth is the three main principles, and the sixth is the three main principles, and the sixth is the three main principles, The four, the key design principle we can say, and one of the design principle if you read is ubiquitous intelligence. So when we say ubiquitous intelligence, what we mean is that every element of our end -to -end 6G system, be it the user equipment or be it the radio or be it core or be it applications, everyone will use AI embedded natively into the system.

So the earlier generations, if you talk about connected humans and objects or machines, 6G will actually correct the intelligence as it is envisioned in the ITU document. And of course, 3GPP has started working on all those aspects. So this is a kind of… It’s a historic opportunity for me in India, particularly for our… ecosystem, that is our MSME, startup, academia and everyone. So it’s an opportunity not only to like participate in the standard so that our technology, our innovations becomes part of the standard, but also to build our own end -to -end 6G technology stack. So what are the different government efforts since I come from government, Department of Telecom, so I would also like to touch upon what are different efforts the government is trying to do to create a robust ecosystem of 6G research and innovations.

Of course, government alone cannot do everything, but whatever effort we are trying. So one of the important aspects is about whatever technology we are trying to develop, right, whatever IP we are trying to create, if that enters into the standard, 3GPP standard itself, it’s good for us that we are shaping the standards. The India is also. So, I mean, we started doing such activities from 5G onwards. Before that, we were not at all participating in the 6G, I mean, telecom technologies standard making. So, to support our startup, et cetera, onto this, so if a startup company want to, say, participate in 3GPP standards, that company has to be member of first our TSDSI and also individual members of 3GPP and that’s a cost, right?

So, at DOT, we are supporting TSDSI so that our startups can be member of TSDSI and 3GPP at a very, very low cost of 10 ,000, not 5 lakh, 6 lakh, and they can participate. So, that’s, it’s a continuous thing which we try, trying and doing. Interesting. In addition to that, as we know that unless we do our own kind of a research and technology development, even before the standard starts, building up and then take it to the standard. So to support that activity, we had come out with a scheme called 6G Accelerated Research Program. So that was floated, I think, two years back. And we have selected 100 plus 6G related projects in different area. That includes terahertz technology, artificial intelligence, machine learning, semantic communications.

And every aspect of sensing, every aspect of the vision of the 6G. And those projects are progressing. And we are trying to help them also participate into the standard. In addition to that, we have also supported some 6G related testbed like terahertz testbed and one AOC testbed, which is doing very good work as of now. In addition to that, there are many other. Programs which are sort of in progress. For example, recently we worked with. ANRF wherein we are trying to come out with a scheme wherein we are trying to build end -to -end system based on release 18 and evolve it to release 19, 20 and 21. As you know that release 21 would be the first release of 60.

So we are trying to do that and perhaps that will come very soon, maybe in the next two quarters that will be out. In addition to that, I would also like to take name of Bharat 6G Alliance here because we are also closely working with Bharat 6G Alliance as government. So Bharat 6G Alliance has created multiple working group on technology, on spectrum, on devices and some of the members of the alliance have been working on the technology and some of the members of the chair of those working groups are here as part of this session also. So, basically, Bharat 6G alliance is kind of suggesting government that what next to be done to be leader in 6G and based on that, we are trying to, I mean, shape the policies of the government.

In addition to Department of Telecom, our other ministries like Maiti is also supporting various 6G related projects. I would take name of the scheme of DST, which is RDI. So, once you have a technology, perhaps you want to scale RDI will come handy. And we have taken up with DST that telecom sector should be included as part of the sector which will be supported. In the RDI and Secretary DST had agreed to this particular aspect and whenever the schemes are getting floated, our companies, our startup in the field of telecom can actually apply. As part of DST, they also have, they have been running cyber physical programs. So, they are also, they are supporting some of the.

5G and 6G related projects. One most important thing which DOT did previous years, which was actually announced in the budget and inaugurated by our Honorable Prime Minister was 100 5G lab in 100 different institutes across the country. Those labs are actually operational. So those are some of the points where actually 6G research has also started because once you have good knowledge of 5G and if you are able to develop use cases or 5G network elements itself, perhaps you are ready to do something on 6G. And so my request to industry here, those who are online, that please adopt one or two 5G lab and try to work with them that what more can be done in the technology area.

With this, I want to conclude my address by inviting the esteemed panelists to deliberate and provide some answers to your questions. Thank you. Thank you. not only to the government, but also to the industry, MSME and startup and academia on the way forward on this

Moderator

Thank you, sir. So now we are moving to our very next segment, the panel discussion. Our first speaker is Rajiv Seluja, Vice President, 5G Radio at Reliance Jio. Also joining us is Surojeet Roy, a Senior Telecommunications Leader, Head of Technology, Technology and Solutions, COE, at Nokia India. Sandeep Sharma, a technology leader and AI innovator, Vice President and Global Head of Emerging Technologies, Network Services at Tech Mahindra. The dialogue will be moderated by Radhakant Das, who heads the Technology Engineering and Innovation Function for Network Solutions and Services, NSS, at TCS. Before we start this panel discussion, I would like all the speakers to have group photographs, please. May I also request Ashok Kumar, sir, to be here?

Thank you. Thank you, sir.

Radhakant Das

Okay. Can we start? Great. So, good morning, our distinguished guests. My colleagues from the Government, Industry and Academy. All of you who are online, good morning to you all as well. So, this entire topic, which you can read out, A at the core and 6G at the edge, and Designing India’s Next Resilient, Innovative and Efficient Digital Frontier. We are at a historic inflection point where the intelligence is the basic infra. based on which the next evolution of this planet will actually continue. And we have seen until 5G, but in the 6G, a lot of hope. And we see that 6G not only emerges a faster network as an option, but as a distributed computer fabric.

It’s going to have a platform that enables the intelligence everywhere across radio, core, and age, including the satellite, which is non -terrestrial networks, and the sensor ecosystems. Devices in 6G and AI will take a major role. We’ll talk about how the 6G payloads or the designs will actually be AI -native, how it will drive the overall objective of bringing AI and 6G together. As a success. The professor has already pointed out the standards of already… will take in AI native to the 6G standards which is coming up in Magda’s next two quarters. It’s quite optimistic but yes, we are looking forward for the faster to come. And thanks for the government to give all the support to the industry, academia and the Vara 6G Alliance is also doing a great job and our Honourable Minister and Prime Minister are actually actively supporting and giving directions time to time to get this forward.

So we will focus on the edge interfaces at a scale. We’ll talk about semantic communications where like you would have seen India has really put a very strategic point of view that AI is, we will ensure that AI is kind of energy efficient. It will not be responsible for melting the data centres. It will be power efficient. And we will ensure that every compute capacity is being optimally utilized, not like we have enough compute and we will use it as much as possible. And data is a strategic fuel for this AI. And the networks, telecom networks, it’s not only 6G, but all kinds of connectivity networks, they will drive this data, this strategic fuel to the users, to the sensors, to the cloud, to the computing systems and deliver it.

So here we go. We start, I think, all our panelists are there. I think their names are already there in the backside of the screen. So I’ll just start with some of the questions. So maybe we’ll start with Surajit. Yeah. So Surajit, I have. I have the first question for you. The India in the context of India’s, in the context of 6G vision. where networks are expected to reason, self -organize and optimize across ecosystem, run, core, edge and of course when you say edge, it includes devices and the sensors. How do you see AI is transforming the RAN for the 6G in the day one? You may throw some light on that please.

Surojeet Roy

Yeah, sure. So, I think we can talk about it in few steps. For example, first one is on the devices. So, we have many form factor of devices coming up. We already have smart glasses launched. We had this AR, VR glasses earlier where we could not see outside, but then now we have glasses which look more like the normal glasses we wear. But those are having this AI functionalities, right? you can do lot many you know work in the background and nobody would know that you are actually looking at something else while you are talking to a person so I think from the device perspective the intelligence is being built up in the devices handsets there was a talk that maybe all these smart glasses and wearables will take away the handset but then I guess these handsets are going to stay for a while I don’t know till when but at least for the next 4 -5 years those are going to stay and we will also have lots of wearables right and maybe some you know body patches as well which can sense your heart rate so as a person as a user I see that we will be having multiple devices going forward not only one device we will have handset, we will have smart glasses, we will have wearables right and this all will be having AI enabled devices capabilities, but because of the form factor, these devices might not be able to do all the inferencing tasks on their own, which means that there will be some inferencing help needed from the data centers, whether it is centralized or edge data centers, which means there will be lots of traffic requirements towards the network, especially in the uplink.

Radhakant Das

Okay. So, Rojit, if you would like to expand it a little further, the inferencing is now tiered or distributed, as what I am mentioning. So, what percent is, maybe you can take an average application, will be there residing in the devices or sensor side? What percent is in the RAM? What percent is in the core of the network premises? And what will go to the cloud?

Surojeet Roy

Yeah, I think we do not have those exact numbers, but I think if I look at the data traffic as such, the WAN traffic. So, it is going to grow maybe six to nine times from now. till 2033 there is a position we have from Nokia Bell Labs and out of the total traffic in 2033 almost 30 % will be AI driven right. So 30 % traffic will be AI driven. It can be direct AI which can be slightly lesser but the indirect AI where you know once you use any application it drives you towards some other application and that increases your data. Maybe right now we have 5 % of the AI traffic it will go to 30 % in next 3 years. Not 3 years I think the projection is by 2033 around 2033.

So it might get to 30%. It is getting embedded to all our life faster than we have thought about.

Radhakant Das

So any of you would like to address this thing like how much of influencing would you like to see from the agency? Any of you would like to address this thing like how much of influencing would you like Like for example, of course, cloud has to do large part of it.

Surojeet Roy

Yeah, on that, what I can comment, maybe Sandeep and Rajiv can also add. So first thing is, you know, it really depends on the use case. Physical AI use cases like autonomous vehicles, robots, I think autonomous vehicles are definitely picking up in US and China. But if I look at India, I think it’s going to take some time because we don’t follow rules. You know, we have a bad habit of driving. You know, I think the AI models have to tune to understand how the drivers drive in India. Right. So I think autonomous vehicles will take some time. But those are the use cases, autonomous vehicles, industrial robots, maybe robotic surgeries, where you need much lower latency.

Those are the ones where the inferencing might be needed at the edge. But I think for normal consumers and normal use cases, we can still manage with the inferencing at the central location. Right. But the main problem is. having a centralized data center establishing that is a problem because I think the power consumption and the power requirements site infrastructure, those are a major challenge and that’s why we see a trend that the data centers are gradually moving towards the edge. Maybe not driven by only the use cases but maybe driven by the infrastructure.

Radhakant Das

Yeah, a lot of, a heavy dose of this data center related concerns were there for last four days in the summit. So Rajiv if I can come to you question for you is are you witnessing a shift from telcos as a connectivity providers to intelligence utilities and how does your organizations plan to deliver intelligence at the lowest cost?

Rajeev Saluja

Right, you know, so in the past decade like Ashok sir also mentioned was about democratizing the connectivity, right? Today more than 99 % of India’s population is connected by high speed broadband, right? The next decade is going to be about how can we democratize intelligence. So how can the last citizen of India have the strongest intelligence ecosystem built? That is the whole objective towards which we are working. And like our chairman said yesterday, you cannot rent intelligence. We cannot, as India, we cannot afford to rent intelligence. We need to build it. We need to scale it. And the complete infrastructure that we are building, we are building up from connectivity to the cloud, to the edge, and then the intelligence ecosystem on top.

So just to add to your previous question, we believe that most of the simple agentic and inferencing workloads will get handled at the edge. And only the multi -step, multi -agent, complex workflows. those are the ones which will get handled at the central location but our whole focus is how can we create an ecosystem an end -to -end ecosystem which can ensure all pervasive and an affordable intelligence to every citizen of this country that’s the whole focus

Radhakant Das

Thank you So Rajiv, what you are actually referring to is if we distribute the inferences and the processing across so the power requirement will get distributed and also we will not have a lot of concentration of power consumption and the data centers itself it’s a good thought so maybe Sandeep, we’ll just come to you how do you see AI and 6G anchor use cases can deliver the ROI within next, let’s say one and a half year from the India’s priority sectors such as BFSI, manufacturing, healthcare, mobility and how do you see that and how do you put the metrics as a success?

Sandeep Sharma

I think fairly good question honestly speaking and if you look at AI and 6G are two parallel things. They are going to merge but as on today we see lot of AI traffic is getting generated. Maybe it’s 6G or 5G or maybe on wireline. And the pattern is also evolving drastically the type of AI traffic that is running. So till now, till 2G, 3G, 4G we thought of only voice and data is the actual traffic for which network should be defined. But going further, depending on different type of use cases, different latencies use case need, network has to be defined for three parallel dimensions which is latency. Latency is there today in the network but we don’t take much of attention because most of the use cases are not latency sensitive.

The other thing is coverage. Coverage is equally important. reason being the uplink sensitivity of the traffic is getting more and more relevant in the AI type of traffic. And finally, one thing that we all should be aware of, the token economy is something which drives all the use cases. How much token you are going to consume, at what pace, at what latency, drives many of the use cases, efficiency or not. So if we bifurcate it from the industry to industry perspective, if you look at the industries which are more sensitive to delay, or maybe the robotic surgery, the hospital industry, and maybe the floor machines where robots are taking all the production control, their latency plays an important role.

So we should be using 6G -centric or the 5G -centric networks to realize as good as low latency, so that the tokens which are exchanged should be acknowledged well in time, and we have a faster time to resolve. And even we have observed that even if you reduce 10 to 20 % of latency, the efficiency improves drastically so it’s no more a network KPI it’s a productivity KPI for those use cases if we talk about the coverage perspective AI is going to be more uplink heavy more bursty and we need persistent traffic around it requests will keep on coming and that persistence in uplink will only be achieved if you have a good reliable coverage and the scenarios like when you have to do a lot of tracking of the assets lot of monitoring of the assets you need to have certain use cases realized on that those are the immediate use cases that industry will look at and finally all these AI specific things will only scale when you have some national framework around it you have certain national sandboxes around it so whatever is coming into the ecosystem it’s well tested across a diverse set of vendors diverse set of customers diverse set of ecosystem players because use cases for AI may not be related to the one use cases which we have seen so far so these three dimensions we should look at and once we look at the economics of the token then coming back to the question that you asked Rishabh where the influence should happen I think it’s not about only the where but at what cost so that defines how the influencing traffic will shape up

Radhakant Das

has also urged some of the industries to take over or adopt a couple of these labs. I think what you are suggesting as a part of sandbox on the applications, they are already happening. I think more the Department of Telecom and GOI should be working on that part. Okay, Surajit, we’ll come back to you again. Again, let’s say for the next four years, until 2030, how do you see the evolution, Surajit? And starting from the devices, use cases, traffic growth, and how do you see the impact of AI derived from the networks? One is tokens, the number of tokens, we’ll start using the KPI, which Sandeep has already mentioned about. So, what’s your opinion on that?

Yes.

Surojeet Roy

Yeah, I think we touched upon it, I think, but just to be more specific about it, So the uplink traffic is going to see a significant increase. So currently we see a downlink to uplink ratio of maybe 10s to 1 or 12s to 1. I don’t know the exact number, but that’s the range we’re talking about. But with this AI applications, we are predicting that this pattern will change to maybe 4s to 1, 4 in the downlink and 1 in the uplink. So what it means is that you need much higher data rates in the uplink. Today the networks are sort of not built for that, which means there will be lots of enhancements required in the network. This can come a bit from the 5G advanced, and then more enhancement will come when we go to 6G.

There will be lots of improvement on the spectral efficiency in the uplink, and then using AI in the RAN. We can improve the coverage. I’ll give you some example how that can be done. So, for example, you know, the communication between the transmitter and receiver, it involves the signal received, the interference, the noise floor, right, and the scheduling. And there is lots of data, huge amount of data which is involved there. So, I think with AI using the deep learning algorithms, we can create, you know, some logics which can help optimize this entire communication. So, and then with AI, this communication can be adaptive as well. So, we are talking about something called DeepRx, DeepTx, where Nokia is very much, you know, engaged.

And we have done some initial proof of concepts. And using that, what we have seen is that even in an environment where you have the signal to noise ratio, which is much worse than what, for example, 5G can decipher. using AI you can actually decipher those signals and that can give a capacity increase maybe 25 -30 % and what you can also do is you can have higher order modulation supported. So this is going to increase the capacity of the network and then as I mentioned the multitude of devices which will require lots of low latency use cases, much higher capacity, we are talking about minimum 400 MHz of bandwidth when we are talking about 6G. So today 5G networks are primarily running with say 100 MHz typical bandwidth.

We are talking about 400 MHz of bandwidth which might be required and we are talking about 5 times spectral efficiency. So which means 5 into 4, you are talking about 20 times more capacity coming out from 6G networks. But I think this is an evolution, right? So we are doing the standalone networks right now and you know this voice over NR, slicing, this will… you know, get, I would say, advanced and will have the entire network having slicing capabilities, voice will transform to voice for NR and then gradually you go towards 6G where you will be building the networks which are more AI native.

Radhakant Das

Very interesting point you brought in, Surajit. So what you mentioned is, it’s very interesting, I didn’t think about it earlier. You were saying even a single digit designer, I can extract more information. Actually, we are going to improve Sagan’s principle. That’s a good aspect, right? Exactly. And also, one thing if you can just throw on, the tokens are smaller packets. You just have instructions and some questions. Why it should increase the opening bust? Ideally, it should not. There are a lot of popular talks like it is going to the 6Gs or the AI is going to reverse the traffic pattern. But why? Just tokens.

Surojeet Roy

So I think it depends on the, because you have to send the contextual information, right? For example, you are standing somewhere and you want to send a 360 degree view of where you are and you want to send it to the inferencing application so that it can help you, you know, understand whatever question you have. So I think that contextual information, sending it upwards, right, it will take lots of data requirement and primarily we are not doing it today. That’s the main reason because and this type of tasks will increase and that’s going to increase the uplink requirement.

Radhakant Das

Rajiv, do you want to add something to this?

Rajeev Saluja

No, the only thing which I wanted to add on the uplink side was that, you know, there are going to be multi -modal agents. So right now the traffic that we see is which consumer initiates, right, but when the agents and then on top multi -modal agents who are orchestrating end -to -end workflows, then they start initiating the traffic, that’s when the uplink also starts. So you will have multiple agents.

Radhakant Das

A2A traffic.

Rajeev Saluja

Yep.

Radhakant Das

Good. So Sandeep, the next question for you, there are a lot of AI6Z pilots are happening. I think whatever the organizations we have seen in last four days of AI Impact Summit, a lot of them are there. And what specific coordination mechanisms or co -creation models do you think we all should work together as industry, academia, government to ensure that these pilots, they align to the standards, they just don’t build on the silos while 6Z standard is maybe two quarters or maybe couple of more quarters away. So how we put the standardization as a perspective which can be adopted later stage. Also safety guidelines. A lot of safety issues will come. The more we are excited about how great things can happen, also the more of the things are exposed.

And we have seen the goals. We have seen the work, what it’s doing and how things are really getting into out of control. and so any AI maybe outcome based AI native deployment if you just can throw some light on that

Sandeep Sharma

Frankly speaking your question is so long I am not sure how long should be the answer I got it I got it just kidding so I think if you look at the perspective lot many good things are being done in the country there are lot many good organizations as we heard in the keynote that there is a Biosense there is a DSDA as well so lot of coordination is already in place the problem is with the pilot and the scale gap is not a technology gap is basically a gap of how we put things together in the frameworks which are scalable and referenceable as well so as I mentioned in my previous response that we need to have some national frameworks around AI native architectures once that is in place I think the quicker thing can be done is that let’s align the fundamental what type of use case are being driven and how the data needs to flow around it.

Other part is that India, as a consumer, we have a huge amount of data across the industries. And data is like bread and butter for AI. There’s no AI if there’s no data. But the data today is either siloed within the industries. If you look at the sector, they don’t combine the data together. So a national framework of putting the data together creating national exchanges where data can come in and people or organizations are allowed to train the data, train the models with the data. And we can have certain models which are more industry specific. And that plethora of variation of data, putting it together gives a very useful reference of creating frameworks which can be referenced or replicated, not only in India, globally as well.

And certain organizations are already in place to take care of that. I think more and more efforts, more and more programs are needed. thirdly if you look at more and more the safety guardrails I think we need to have certain framework in place how the AI is audited monitored within the telecom network as well we can’t allow some model to take a change of any parameter in live traffic if we can’t audit it maybe policy frameworks for intervention if certain models are changing in network parameter how and why they are changing certain explanation needs to be brought in and it can’t be done in isolation reason being if you do it in isolation again there is no clarity will come in to have a national policy around it will improve the reliability or explainability of the models hence people will come together rather than creating a differentiation of another layer of security that may encompass certain things that should be known to the larger audience and certain things that we all should do as an industry that let’s contribute more and more in these forums which government has started like Bharat 6G Alliance.

I am part of the 6G use case group, work very closely with Shokji and I think many things are already in place. We drafted certain white papers which could be referenced around AI, what type of implications it will bring into the network and we collaborate well with the 5G 100 labs as well. The things that we have done already, we should encourage them to take these things to the next level. Certain things could be referenced, certain things could be evolved. Not everything may be done right, but there is an opportunity to do everything right in 6G in terms of coordination, in terms of national referenceable frameworks. Good. Have I missed anything? It was a long question.

Radhakant Das

No, no, no. Thanks for that answer. So what actually we are seeing is in the security perspective, we are seeing we have… already a work is happening on interest in terms of policy perspective. These are in place and typically what if I have to bring in the DPI stuff or public and digital public infrastructure. So you have a lot of learning from all these sectors and how to deal with it. Even in the telcos, even in all the industry sectors, how to keep that as a creative, how that particular data which is not been seen in the telco ecosystems but still we are all responsible to deal with that. I think with the AI coming in, maybe some way we need to understand the DPI of DPI.

Sandeep Sharma

And just to add here, you brought a very good point. I think whenever I give a reference, the importance of open ecosystem, interoperable ecosystem, I always give an example of UPI. UPI wouldn’t have been a success if we had not promoted the open ecosystem around it. I think same mindset is needed in the AI era.

Radhakant Das

Good, good. Thanks. So Rajiv, I have the next question for you. how does the AI native telco change the way enterprises consume technology and what are the new value pools that will emerge out of it?

Rajeev Saluja

I think it’s a very good question see Sandeep brought out a very important aspect of latency and the second was on the uplink so first of all 6G is the solution to both the problems right enterprises in particular will benefit a lot from the advent of and the confluence of 6G and AI so there are three major drivers of value pools which enterprises can derive from 6G and AI the first would be demand analysis so you know they will be able to analyze what kind of demand is coming today their entire data is limited to the research that they do right But with the new data streams flowing in, they will be able to understand what new services can they provide to their customers and how can they embed intelligence into those new services that they are able to deliver.

The second important value pool which enterprise would be able to deliver is the workflow automation. So today, a lot of work which is manual will get automated. They will be able to orchestrate end -to -end workflows and humans will go up the value chain. That is the second important value which enterprises can derive from the confluence of 6G and AI. The third most important part which enterprises can derive out of 6G and AI is how can they make their end -to -end processes, how can they make their end -to -end security framework more robust. And see, till now, whenever we used to speak about digitization of enterprises, it used to stop at ERP implementation. Or basic business process automation.

Now, AI and 6G are going to take it to a completely different level. India is a wireless economy we don’t have fiber penetration in this country so the only way enterprises can go and reach to the last mile in this country is through the confluence of 6G and AI

Radhakant Das

so another extension to this question so how do you see sovereignty of the entire ecosystem we should deploy when you say sovereignty it is a very complex question one side that we think that ok if we make it open only it will grow at the same time sovereignty is also asked for every country maybe you can actually make entire European continent will only have one sovereign stuff what do you see on that and how much we should make it as a sovereign how much we should make it open what is your viewpoint on this

Rajeev Saluja

see this token economy in which we are going to go in the next 5 to 7 years so sovereignty is going to be a token sovereignty right in my point of view it will be very important for us to build our own intelligence and then deploy and scale it. We cannot be dependent on the world to deliver the intelligence to us because that will simply be too expensive for us to handle. So in order to make sure that intelligence reaches the last person in the most remote area and the last remote enterprise in India, the most important thing we need to have is have a token sovereign. We need to have a sovereign AI ecosystem, an end -to -end ecosystem starting from device to the cloud to the edge to the intelligence layers on top.

This end -to -end ecosystem has to be sovereign and we don’t have an option in this.

Radhakant Das

So you are saying end -to -end ecosystem platforms or stacks are sovereign?

Rajeev Saluja

Yes.

Radhakant Das

Token may or may not be sovereign or you can classify as a sovereign or as a general public one?

Rajeev Saluja

Correct, but we are basically calling it as a sovereign token. What I basically mean is that right from the time the request gets initiated by an agent or by a human. to the time an inference happens and the value gets delivered to the human or to the enterprise, this entire value chain has to be made in India, has to be sovereign.

Radhakant Das

So you have some view on it?

Sandeep Sharma

I think I was just supporting him with the gesture, but honestly speaking, the level of intelligence that country needs may not be a priority for the intelligence for other regions who are creating their own intelligence. So having a sovereign AI has an economy sense as well and has an importance for our own social values that we build in the system. AI is not only about telecom, AI is a bigger base. What we are getting as a query output as a new generation, they should be very well aware of what is right and that can be ascertained only if we have certain sovereignty developed in the ecosystem for our own nation.

Radhakant Das

So while we are talking about sovereignty, we should be very specific about it. so there is something which we need to keep as a there are certain things which you need to keep it as open stuff for learning from each other community learning across the country across the planet and all so that’s something would you like to comment and maybe I would request Surajit to also comment on that as well

Sandeep Sharma

I think just to start and Surajit will elaborate more the era of going further will not be a abstract one or abstract zero we need to look over the hybrid ecosystem which works best as a mix of which type of AIs as a mix of which type of compute and mix of which type of influencing and industry or the economy is going to be more use case and I would say efficiency driven so we should be leveraging which is best for to satisfy that particular use case

Radhakant Das

Surajit thank you

Surojeet Roy

Yeah, I think just to add, I was reading one Niti Aayog report, you know, where we are aiming for a 30 trillion economy by 2047 as part of the Vixit Bharat initiative. So out there it was very much mentioned that there are approximately 490 million informal users, you know, workers. So for example, all these carpenters, drivers, so they are the informal users and they are not yet equipped with all the applications which might enhance their productivity. So I think from that perspective, AI use cases can be significantly helpful out here. We can have, you know, smart robots working in the, you know, fields for helping on the agriculture. then there may be use cases where maybe an electrician or carpenter, you send a video of your work and they can, that AI can generate a list of the tools they need, what all steps they have to come prepared with, right?

But for all this, I think the most important part is the model needs to be trained based on data which is coming from India. Because if you train the models based on data which is coming outside India, then maybe it is not tuned for, you know, the India specific use cases. There will be a bias there. So I think that’s why it is important.

Radhakant Das

So the cultural perspective you are hitting upon, the cultural perspective has to be understood by us. So there is a bell. So are we going to have some questionnaire sessions, Q &A sessions? Any questions? Maybe we have just two minutes left.

Audience

Can you hear? Yeah, we can. In fact, I had a lot of questions. Okay, so let me first, my question would be around interoperability. So in mobile world, we see that whatever user equipment you buy from the market, it works on all the operators, right? When we are moving towards having AI -related applications, we see there is some problem. So I was looking at meta glasses they were exhibiting. So basically, the meta glass being, say, coming out in the market will only work with the meta. So should we not think of creating some AI? API sort of architecture wherein a product created by one. user side should work in different applications. It should work with Google.

It should work with geo -applications sort of. That’s the first question which I have. The second question is about the model training which Surjeet was trying to address. So I mean the advantage of India is like any applications we can scale to a billion users. That is one. And the second advantage is that we have a huge data set on many aspects. So how to leverage these two for AI because although we may not be good at LLMs, various LLMs which we have today. Of course new companies have started. Servum and all have started working on. But we are good at having a data and the market. So how to leverage that so that. I mean models are trained here models are utilized here so these are the two questions which I have in mind thank you.

Rajeev Saluja

Sir I will try to attempt to answer your question the first part I think Sandeep also mentioned see this entire ecosystem end -to -end ecosystem has to be open has to be API driven and loosely coupled so that you know there is no proprietary interface from one point to another so the whole work which is going on right now at least in our organization is to make sure that how this end -to -end ecosystem can become open can become efficient and can scale you brought out a second very important point about India’s scale right and this scale is going to reduce the cost of intelligence and that affordability is a also a very important factor for us to deliver value to our you know 140 million sorry 140 crore people that reduction in cost is a very very important factor The third important point which I want to make here is that when you talk of LLMs, they are important.

But delivering intelligence is not about LLMs or training LLMs. It is about delivering this entire ecosystem to the last mile. When it comes to LLMs, the way we are building intelligence in India is in every language. These models have to be trained. So every person, whether it is from the south, from Kerala, or from Assam, or from any state in the northeast, they should be able to get this intelligence in their local language made. That is the whole work we are doing right now as part of Jio.

Sandeep Sharma

Thank you, Rajiv. I think just to answer the second part of the question, there is a lot of data, how we can ensure. I think a framework of having is centralized data exchanges and centralized processes. Training exchanges. where enterprise can port their data with a certain anonymization so that no confidential data passed out but industries can come and train the models specific with the data which is available from the enterprises or from the end users within India. But I think central exchange mechanism is need to be placed.

Surojeet Roy

So just to add, I think democratizing the AI is also very important. It should be accessible to everybody at much lower cost and I think in that direction putting GPUs at cell towers can be one way of doing it because what you can do is when the network is not very busy and the resources are free, those resources can be given to the users to train their models or do some inferencing functions because those resources are there at every site. So that can be one way of helping on this direction.

Radhakant Das

Thank you. So you have another question? All right.

Audience

Morning. My name is Sidhu. I’m from AT &T. One quick question, now that Rajiv is also here. See, across the world, telecom companies are realizing that not having a network API exchange and then monetizing that is becoming a problem for many of the large enterprise customer use cases. For example, if a bank wants to understand their customer behavior, customers have got multiple networks, so they don’t get the visibility, right? So with OneEdge, I think Jio and Airtel have also joined hands last year. On the U .S. side, some work is happening, but I wanted to understand how much of this monetization of the network API -centric economy is materializing from India’s standpoint. I know Jio covers almost, I don’t know, 40%, 50 % of the overall population in India, so you might throw some.

I don’t know if you’re going to throw some. I don’t know if you’re going to throw some. I don’t know if you’re going to throw some. I don’t know if you’re going to throw some.

Rajeev Saluja

I will try to answer this quickly because the time is up and we can take this discussion offline. But see, we are committed to an open AI ecosystem to drive value. And like I said, enterprise value cannot be delivered unless the end -to -end ecosystem is open and connected. I think we are ringing the bell, but I will take this discussion offline. Thank you, sir. Thank you.

Radhakant Das

So we

Moderator

have time to stop. Any other questions that we can remind of now? Thank you. Thank you, everyone. May I request Radhika and Das to hand over the memento to all our speakers? May I also request Ashok, sir, to please come on stage and kindly collect your memento? Thank you. Thank you so much. Thank you. Thank you. Thank you. Thank you. Thank you.

Related ResourcesKnowledge base sources related to the discussion topics (7)
Factual NotesClaims verified against the Diplo knowledge base (7)
Confirmedhigh

“The session opened with the moderator emphasizing India’s ambition to shift from a technology consumer to a global leader in the next intelligence‑driven communications frontier.”

The knowledge base describes the discussion as focusing on transforming India from a technology consumer to a global leader in next-generation connectivity and intelligence [S3].

Confirmedhigh

“The theme of the discussion was “AI at the Core, 6G at the Edge”.”

The session title and theme are recorded in the knowledge base as “AI at the Core 6G at the Edge” [S1].

!
Correctionhigh

“Ashok Kumar, Director‑General of the Department of Daily Communication, gave the keynote address.”

The knowledge base identifies Ashok Kumar as being from the Department of Telecom, not a Department of Daily Communication [S3].

Confirmedmedium

“The moderator introduced the panelists (Rajiv Saluja, Surojeet Roy, Sandeep Sharma and himself) and managed the discussion format.”

The moderator’s role in introducing the panelists and managing the discussion is noted in the knowledge base [S52].

Additional Contextmedium

“The ITU’s 5G framework (IMT‑2020) was the first to include massive machine connectivity and ultra‑low latency as core usage scenarios, but AI was only added later to solve specific network‑function problems.”

ITU’s 5G performance requirements were documented in a 2017 draft report, confirming the existence of a formal 5G framework, though the knowledge base does not mention AI’s role, providing background on the standard’s scope [S55].

Additional Contextlow

“The ITU 6G framework released two years ago already lists integrated AI as one of six usage scenarios and enshrines “ubiquitous intelligence” as a key design principle.”

While the knowledge base does not directly reference the 6G framework, recent research on high-frequency, low-power switches for 6G communications illustrates ongoing technical work that aligns with AI-enabled 6G research [S60].

Additional Contextlow

“Earlier mobile generations (2G, 3G, 4G) were primarily aimed at connecting people, with later extensions such as NB‑IoT adding machine connectivity as an afterthought.”

A related discussion notes the evolution from 3G (first internet on mobile) to 4G (widespread mobile internet) and then 5G driving industrialisation, providing context for the claim about the progression of connectivity focus [S47].

External Sources (60)
S1
Designing Indias Digital Future AI at the Core 6G at the Edge — – Rajeev Saluja- Sandeep Sharma – Surojeet Roy- Sandeep Sharma
S2
Designing Indias Digital Future AI at the Core 6G at the Edge — -Surojeet Roy: Senior Telecommunications Leader, Head of Technology, Technology and Solutions, COE, at Nokia India – exp…
S3
Designing Indias Digital Future AI at the Core 6G at the Edge — Thank you, sir. So now we are moving to our very next segment, the panel discussion. Our first speaker is Rajiv Seluja, …
S4
https://app.faicon.ai/ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — Thank you, sir. So now we are moving to our very next segment, the panel discussion. Our first speaker is Rajiv Seluja, …
S5
Designing Indias Digital Future AI at the Core 6G at the Edge — -Radhakant Das: Heads the Technology Engineering and Innovation Function for Network Solutions and Services (NSS) at TCS…
S6
Designing Indias Digital Future AI at the Core 6G at the Edge — -Rajeev Saluja: Vice President, 5G Radio at Reliance Jio – expertise in telecommunications and 5G/6G technology developm…
S7
Designing Indias Digital Future AI at the Core 6G at the Edge — Ashok Kumar from the Department of Telecom established the foundational premise that 6G represents a fundamental departu…
S8
Designing Indias Digital Future AI at the Core 6G at the Edge — These key comments fundamentally shaped the discussion by elevating it from a technical conversation about 6G and AI int…
S9
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos -Shri Atul Kumar Singh: Title -…
S10
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S11
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S12
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S13
WS #280 the DNS Trust Horizon Safeguarding Digital Identity — – **Audience** – Individual from Senegal named Yuv (role/title not specified)
S14
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S15
Nri Collaborative Session Navigating Global Cyber Threats Via Local Practices — – **Audience** – Dr. Nazar (specific role/title not clearly mentioned)
S16
AI for Good Technology That Empowers People — “So, you know, AI being available at the edge, not from, you know, the very basic thing that we all use every day is you…
S17
5G traffic surges under growing AI usage — AI-driven applications are reshaping mobile data norms, and5G networks are feeling the pressure. Analysts warn that upli…
S18
Operationalizing data free flow with trust | IGF 2023 WS #197 — Current innovations are affecting data traffic patterns
S19
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S20
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S21
Democratizing AI Building Trustworthy Systems for Everyone — “I think open source is going to be in my mind a critical aspect of it”[32]. “Sustainability also requires these kinds o…
S22
The Future of Public Safety AI-Powered Citizen-Centric Policing in India — Open architecture and data sovereignty are critical for long-term sustainability and avoiding vendor lock-in while maint…
S23
Designing Indias Digital Future AI at the Core 6G at the Edge — Other part is that India, as a consumer, we have a huge amount of data across the industries. And data is like bread and…
S24
Designing Indias Digital Future AI at the Core 6G at the Edge — Frankly speaking your question is so long I am not sure how long should be the answer I got it I got it just kidding so …
S25
Driving Indias AI Future Growth Innovation and Impact — Professor Bhaskar Chakravarti emphasized the critical importance of trust infrastructure beyond technical capabilities, …
S26
Secure Finance Risk-Based AI Policy for the Banking Sector — Sanyal emphasizes that while India has vast data from its large population covering health, consumer behavior, and other…
S27
Open Internet Inclusive AI Unlocking Innovation for All — For India specifically, the strategy of building specialised, efficient models for local needs whilst developing soverei…
S28
Indias Roadmap to an AGI-Enabled Future — This opening statement reframes the entire AI development paradigm from a sovereignty perspective, challenging the commo…
S29
Global AI Policy Framework: International Cooperation and Historical Perspectives — -Sovereignty vs. Openness in AI Development: The concept of “open sovereignty” emerged as a key theme – the idea that co…
S30
Is the AI bubble about to burst? Five causes and five scenarios — Centralised, closed platforms vs. decentralised, open ecosystems. Historically,open systems often win in the long run– …
S31
Discussion Report: Sovereign AI in Defence and National Security — Faisal advocates for a strategic approach where countries focus their limited sovereign resources on the most critical c…
S32
U.S. AI Standards_ Shaping the Future of Trustworthy Artificial Intelligence — Open standards address sovereignty concerns by allowing users to maintain control over their systems and data while usin…
S33
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — – Owen Lauder- Michael Brown- Austin Marin Industry-led, consensus-based approach to standards development is preferred…
S34
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S35
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Dr. Singh explains that unlike 5G where AI is an add-on, 6G will have AI integrated into every component. The networks w…
S36
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Evidence:I don’t believe that any player, even though they would be the smartest people on the planet, are able to fully…
S37
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — Consensus level:High level of consensus with significant implications for AI governance policy. The agreement among indu…
S38
Multi-stakeholder Discussion on issues about Generative AI — Moderator – Yoichi Iida:Okay, thank you very much, the panelists. And as you see, we have a very excellent set of paneli…
S39
Government notices · GoewermentskennisGewinGs — It is understood that this policy reform will fundamentally change the market structure of the sector in that it will…
S40
Next-Gen Industrial Infrastructure / Davos 2025 — Huang Shan: Thank you. So Christophe, as a senior consultant, I think that you spend more time on the road than at y…
S41
Comprehensive Report: China’s AI Plus Economy Initiative – A Strategic Discussion on Artificial Intelligence Development and Implementation — Zhang and Professor Gong Ke agreed on the fundamental importance of infrastructure development for AI advancement. Their…
S42
Comprehensive Discussion Report: Governance Frameworks for Reducing Digital Divides in African and Francophone Contexts — N’diaye emphasizes that public policies should consider all aspects of digital infrastructure deployment, including the …
S43
Setting the Rules_ Global AI Standards for Growth and Governance — Standards ecosystem must be interoperable and modular to avoid reinventing approaches for each use case
S44
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S45
Designing Indias Digital Future AI at the Core 6G at the Edge — This discussion focused on India’s strategic approach to integrating artificial intelligence with 6G technology to trans…
S46
Designing Indias Digital Future AI at the Core 6G at the Edge — Kumar explains that the Department of Telecom supports startups wanting to participate in 3GPP standards by enabling TSD…
S47
Open/secure 5G and supplier diversification — OTICs are meant to bring together an ecosystem for the solutions beyond 5G and 6G The development of such a lab undersc…
S48
Trusted Connections_ Ethical AI in Telecom & 6G Networks — Evidence:In the upcoming 6G technology, AI will no longer be an application layer. It will be intrinsic. The telecom net…
S49
Workshop 6: Perception of AI Tools in Business Operations: Building Trustworthy and Rights-Respecting Technologies — Angela Coriz: Thank you. I will try to be quick. So I work at Connect Europe. This is a trade association that represent…
S50
https://app.faicon.ai/ai-impact-summit-2026/designing-indias-digital-future-ai-at-the-core-6g-at-the-edge — So I think it depends on the, because you have to send the contextual information, right? For example, you are standing …
S51
Creating New Value across Industries and Society — – -Establishing commercial and societal linkages at the usecase level – -Analysing how macroeconomic value could be real…
S52
The Global Power Shift India’s Rise in AI &amp; Semiconductors — -Moderator: Role not specified in detail, appears to be the session moderator who introduced the panelists and managed t…
S53
360° on AI Regulations — Much of the discussion around artificial intelligence involves potential dual use capabilities
S54
5G Transformation: The power of good policy  — The global rollout of5G networkshas been met with considerable excitement, and rightly so. While the promise of faster d…
S55
ITU agrees on key 5G performance requirements — ITU drafteda reporton minimum network requirements of 5G networks. The proposed framing standard is to be approved by th…
S56
DoT and TRAI to enhance telecom services with new measures — The Department of Telecommunications (DoT) and the Telecom Regulatory Authority of India (TRAI) are taking significantst…
S57
India’s telecommunications transformation: New right-of-way rules unveiled — The Indian Department of Telecommunications (DoT) has made a significantadvancementby notifying new right-of-way rules u…
S58
https://app.faicon.ai/ai-impact-summit-2026/panel-discussion-01 — So government try to be the accelerator to build this meaningful connectivity
S59
High-level dialogue on Shaping the future of the digital economy (UNCTAD) — Decisions at government level translating into concrete actions
S60
Researchers develop high-frequency, low-power switch to revolutionise 6G communications — Researchers atUAB, theUniversity of Texas at Austinand theUniversity of Lilledeveloped atelecommunications switchthat op…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Ashok Kumar
4 arguments129 words per minute1524 words706 seconds
Argument 1
Funding and low‑cost membership for startups to join TSDSI and 3GPP, enabling Indian firms to influence standards (Ashok Kumar)
EXPLANATION
Ashok Kumar explains that the Department of Telecom subsidises the cost for startups to become members of TSDSI and 3GPP, lowering the fee from several lakh rupees to just 10,000 rupees. This financial support allows Indian firms to take part in global standard‑setting activities.
EVIDENCE
He notes that a startup normally needs to join TSDSI and become an individual member of 3GPP, which is expensive, but the DOT is providing membership at a very low cost of 10,000 rupees instead of 5-6 lakh rupees, enabling participation in standards work [41-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Department of Telecom subsidises TSDSI membership to ₹10,000 instead of ₹5-6 lakh, removing financial barriers for startups to participate in global standards work [S3] [S1].
MAJOR DISCUSSION POINT
Reducing financial barriers for startups to engage in standards
Argument 2
Launch of the 6G Accelerated Research Program, testbeds (terahertz, AOC) and 100 5G labs to create an indigenous end‑to‑end 6G stack (Ashok Kumar)
EXPLANATION
Ashok Kumar describes a government‑led 6G Accelerated Research Program launched two years ago that has selected over 100 projects covering terahertz technology, AI, machine learning, semantic communications and more. The programme also backs testbeds and the existing network of 100 5G labs to foster an indigenous 6G ecosystem.
EVIDENCE
He states that the scheme selected 100 plus 6G-related projects in areas such as terahertz, AI, ML and semantic communications, and that testbeds like a terahertz testbed and an AOC testbed are being supported [45-52]; he also mentions the 100 5G labs inaugurated across institutes that are now operational and serve as a foundation for 6G research [69-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The 6G Accelerated Research Program has selected over 100 projects across terahertz, AI, ML, and supports testbeds and the network of 100 5G labs as a foundation for 6G research [S3] [S1].
MAJOR DISCUSSION POINT
Building a national research and test‑bed ecosystem for 6G
Argument 3
Formation of the Bharat 6G Alliance with working groups on technology, spectrum and devices to shape policy and guide industry (Ashok Kumar)
EXPLANATION
Ashok Kumar says the government works closely with the Bharat 6G Alliance, which has set up multiple working groups covering technology, spectrum and devices. The alliance advises the government on policies needed for India to become a 6G leader.
EVIDENCE
He notes that Bharat 6G Alliance has created multiple working groups on technology, spectrum and devices, and that members of these groups are present in the session, providing policy suggestions to the government [59-62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bharat 6G Alliance has set up multiple working groups on technology, spectrum and devices that provide policy recommendations to the government [S3].
MAJOR DISCUSSION POINT
Collaborative policy formulation through industry alliance
Argument 4
ITU’s “ubiquitous intelligence” principle makes AI a native element of every 6G component, not an afterthought (Ashok Kumar)
EXPLANATION
Ashok Kumar explains that the ITU 6G framework includes a design principle called ‘ubiquitous intelligence’, which mandates that AI be embedded natively in all parts of the 6G system – from user equipment to radio, core and applications – unlike earlier generations where AI was added later.
EVIDENCE
He describes the principle, stating that every element of the end-to-end 6G system will use AI embedded natively, correcting the earlier approach where AI was an afterthought [28-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
ITU’s 6G framework includes the design principle “ubiquitous intelligence”, mandating AI be embedded natively in all parts of the end-to-end 6G system [S1].
MAJOR DISCUSSION POINT
AI as integral design principle for 6G
S
Surojeet Roy
5 arguments151 words per minute1614 words639 seconds
Argument 1
AI‑enabled wearables and smart glasses will rely on edge inference because of limited on‑device compute (Surojeet Roy)
EXPLANATION
Roy points out that emerging form‑factor devices such as smart glasses, wearables and body‑patch sensors embed AI functions but cannot perform all inference locally, so they will depend on edge or centralized data‑centers for processing, creating significant uplink traffic.
EVIDENCE
He mentions the availability of smart glasses and wearables with AI capabilities, notes that because of form-factor constraints they cannot do all inferencing on-device and will need help from edge or centralized data centres, leading to increased uplink traffic [115-124].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Wearables and smart glasses cannot perform full inference locally and will depend on edge or centralized data centres for processing, creating significant uplink traffic [S3] [S16].
MAJOR DISCUSSION POINT
Edge computing necessity for AI‑enabled wearables
AGREED WITH
Sandeep Sharma, Rajeev Saluja
Argument 2
AI workloads will reverse the traditional downlink‑uplink ratio, driving a need for far higher uplink capacity and bandwidth (Surojeet Roy)
EXPLANATION
Roy predicts that AI‑driven applications will cause uplink traffic to surge, shifting the current downlink‑to‑uplink ratio of roughly 10:1 to about 4:1, thereby demanding much higher uplink data rates and spectrum.
EVIDENCE
He states that the present downlink-uplink ratio is in the tens to one, but with AI applications it will change to roughly 4:1, meaning a need for much higher uplink capacity and bandwidth [185-190].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analysts predict the downlink-uplink ratio will shift from roughly 10:1 to about 4:1 due to AI-driven traffic, requiring much higher uplink capacity [S17].
MAJOR DISCUSSION POINT
Changing traffic patterns due to AI
Argument 3
AI‑driven signal processing (DeepRx/DeepTx) can increase capacity by 25‑30 % and enable higher‑order modulation in poor‑SNR conditions (Surojeet Roy)
EXPLANATION
Roy describes AI‑based deep‑learning algorithms (DeepRx, DeepTx) that can improve signal decoding even when the signal‑to‑noise ratio is low, delivering a 25‑30 % capacity boost and allowing higher‑order modulation schemes.
EVIDENCE
He reports proof-of-concept results where AI enabled decoding of signals with much worse SNR than 5G can handle, yielding a 25-30 % capacity increase and supporting higher-order modulation [199-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Proof-of-concept results for DeepRx/DeepTx show a 25-30 % capacity boost and support for higher-order modulation under low-SNR scenarios [S3].
MAJOR DISCUSSION POINT
AI enhancing physical‑layer performance
Argument 4
Shifting inference to the edge reduces latency and power consumption for latency‑critical use cases such as autonomous vehicles and robotic surgery (Surojeet Roy)
EXPLANATION
Roy notes that use cases like autonomous vehicles, industrial robots and robotic surgery require ultra‑low latency; performing inference at the edge rather than in centralized data centres cuts both latency and the power demands of large data‑centre infrastructure.
EVIDENCE
He cites autonomous vehicles, industrial robots and robotic surgery as examples needing low latency, and explains that centralized data centres pose power-consumption challenges, prompting a shift of inference to the edge [137-146].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Edge inference cuts latency and power demands for latency-sensitive applications like autonomous vehicles, industrial robots and robotic surgery [S16].
MAJOR DISCUSSION POINT
Edge inference for latency‑sensitive applications
Argument 5
Projections that AI‑related traffic will grow to roughly 30 % of total network traffic by 2033, reshaping traffic patterns (Surojeet Roy)
EXPLANATION
Roy references a Nokia Bell Labs forecast that AI‑driven traffic, currently about 5 %, will rise to around 30 % of total traffic by 2033, fundamentally altering network traffic composition.
EVIDENCE
He cites the projection that AI traffic will be about 30 % of total network traffic by 2033, up from roughly 5 % today, based on Nokia Bell Labs data [126-131].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nokia Bell Labs forecasts AI-driven traffic will rise from ~5 % today to ~30 % of total traffic by 2033, altering network traffic composition [S1] [S18].
MAJOR DISCUSSION POINT
Future AI traffic share
R
Radhakant Das
3 arguments150 words per minute1679 words670 seconds
Argument 1
AI is positioned as the foundational infrastructure—a distributed computer fabric that permeates radio, core, edge and satellite layers (Radhakant Das)
EXPLANATION
Das frames AI as the basic infrastructure that will underpin the next evolution of networks, creating a distributed computing fabric that spans radio, core, edge, non‑terrestrial networks and sensor ecosystems.
EVIDENCE
He states that intelligence is the basic infra, a distributed computer fabric that will be present across radio, core, edge, satellite and sensor ecosystems, and that 6G will embed AI everywhere [88-94].
MAJOR DISCUSSION POINT
AI as core infrastructure
Argument 2
The moderator highlighted AI as the core infrastructure that will shape India’s next resilient digital frontier (Moderator)
EXPLANATION
As the session moderator, Das emphasizes that AI, placed at the core of the discussion, will drive India’s resilient, innovative and efficient digital future, positioning it as a strategic national priority.
EVIDENCE
He remarks that AI is the basic infra shaping the next digital frontier, calling the moment a historic inflection point where intelligence underpins future evolution [88-94].
MAJOR DISCUSSION POINT
Strategic framing of AI for national digital future
Argument 3
A hybrid approach is advocated: maintain openness for learning and innovation while preserving sovereign control over critical layers (Radhakant Das)
EXPLANATION
Das argues that while openness is essential for cross‑learning and innovation, certain components of the AI/6G stack should remain sovereign to safeguard security, cultural relevance and economic control.
EVIDENCE
He says that some aspects must stay open for learning across communities, while others need to be sovereign, and invites Surajit to comment, indicating a mixed-approach stance [295-302].
MAJOR DISCUSSION POINT
Balancing openness and sovereignty
DISAGREED WITH
Rajeev Saluja, Sandeep Sharma
M
Moderator
1 argument42 words per minute258 words364 seconds
Argument 1
The moderator highlighted AI as the core infrastructure that will shape India’s next resilient digital frontier (Moderator)
EXPLANATION
The moderator stresses that AI is the foundational element of the upcoming digital frontier, positioning it as the basic infrastructure that will drive India’s next wave of resilient, innovative, and efficient digital development.
EVIDENCE
During the opening of the panel, the moderator notes that AI is the basic infra and a distributed computer fabric that will permeate radio, core, edge and satellite layers, marking a historic inflection point for India’s digital future [88-94].
MAJOR DISCUSSION POINT
Strategic emphasis on AI as core infrastructure
R
Rajeev Saluja
4 arguments163 words per minute1153 words423 seconds
Argument 1
Vision of “democratizing intelligence” – building an affordable, end‑to‑end AI ecosystem that reaches every Indian citizen (Rajeev Saluja)
EXPLANATION
Saluja states that after achieving near‑universal broadband connectivity, the next goal is to democratize intelligence, creating an affordable AI ecosystem that reaches even the most remote citizen, emphasizing that intelligence cannot be rented but must be built and scaled domestically.
EVIDENCE
He mentions that more than 99 % of the population is connected, and the objective is to democratize intelligence so that every citizen has access to a strong AI ecosystem, insisting that India must build and scale intelligence rather than rent it [149-158].
MAJOR DISCUSSION POINT
Inclusive AI access for all citizens
Argument 2
Three emerging enterprise value pools: (1) demand analysis through new data streams, (2) workflow automation that lifts humans up the value chain, (3) stronger end‑to‑end security frameworks (Rajeev Saluja)
EXPLANATION
Saluja outlines three key value pools for enterprises enabled by AI and 6G: using new data streams for demand analysis, automating workflows to move humans to higher‑value tasks, and enhancing end‑to‑end security.
EVIDENCE
He lists demand analysis, workflow automation, and stronger security frameworks as the three major enterprise value pools derived from the convergence of 6G and AI [267-272].
MAJOR DISCUSSION POINT
Enterprise benefits from AI‑6G convergence
Argument 3
A sovereign end‑to‑end AI stack, built and operated in India, is necessary to control costs, ensure security and reflect local cultural contexts (Rajeev Saluja)
EXPLANATION
Saluja argues for a ‘token sovereign’ AI ecosystem where the entire chain—from request to inference—is Indian, reducing dependence on foreign intelligence, lowering costs, and ensuring cultural relevance.
EVIDENCE
He describes the need for a token-sovereign AI ecosystem, stating that the whole value chain must be built in India to control costs, ensure security and reflect local contexts [278-282].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Emphasis on a token-sovereign AI ecosystem built in India to manage costs, security and cultural relevance, supported by the need for Indian-sourced training data [S3].
MAJOR DISCUSSION POINT
Sovereign AI ecosystem for national self‑reliance
DISAGREED WITH
Sandeep Sharma, Radhakant Das
Argument 4
An open, API‑driven and loosely‑coupled ecosystem is essential to avoid siloed solutions and to enable monetization of network APIs (Rajeev Saluja)
EXPLANATION
Saluja emphasizes that the AI/6G ecosystem must be open, API‑based and loosely coupled to prevent proprietary lock‑in and to allow enterprises to monetize network APIs effectively.
EVIDENCE
He states that the ecosystem should be open, API-driven and loosely coupled, and that this openness is crucial for monetizing network APIs and delivering value at scale [330-334].
MAJOR DISCUSSION POINT
Openness for ecosystem interoperability and monetization
S
Sandeep Sharma
4 arguments165 words per minute1520 words551 seconds
Argument 1
AI traffic’s latency, coverage and “token economy” dimensions become productivity KPIs, with even a 10‑20 % latency reduction yielding large efficiency gains (Sandeep Sharma)
EXPLANATION
Sharma points out that AI traffic introduces new performance dimensions—latency, coverage and a token‑economy model—that become key productivity indicators, and that modest latency improvements (10‑20 %) can dramatically boost efficiency.
EVIDENCE
He discusses latency, coverage, and the token economy as new KPIs, noting that a 10-20 % latency reduction can lead to substantial efficiency improvements [162-174].
MAJOR DISCUSSION POINT
New KPIs for AI‑driven traffic
Argument 2
National coordination mechanisms (industry, academia, government) and sandbox frameworks are needed to align pilots with upcoming standards and ensure safety guardrails (Sandeep Sharma)
EXPLANATION
Sharma calls for a national framework that brings together industry, academia and government, along with sandbox environments, to ensure AI/6G pilots are aligned with forthcoming standards and incorporate safety safeguards.
EVIDENCE
He mentions the need for national frameworks, sandbox mechanisms, safety guardrails, and coordination among stakeholders to align pilots with standards and ensure safety [237-254].
MAJOR DISCUSSION POINT
Coordination and safety governance for AI/6G pilots
Argument 3
Safety policies must require auditability and explainability of AI models that control live network parameters (Sandeep Sharma)
EXPLANATION
Sharma stresses that AI models influencing live network parameters must be auditable and explainable, requiring policy frameworks that monitor and intervene when models change network behavior.
EVIDENCE
He outlines the need for auditability, explainability, and policy frameworks to monitor AI model changes in live networks, emphasizing that models must be transparent and safe [247-252].
MAJOR DISCUSSION POINT
Governance of AI models for network safety
Argument 4
Centralized data exchanges and training platforms are needed to leverage India’s large data sets for AI model development while preserving confidentiality (Sandeep Sharma)
EXPLANATION
Sharma proposes establishing national data exchanges where anonymized enterprise data can be shared for AI model training, enabling scale while protecting confidential information.
EVIDENCE
He describes a framework of centralized data exchanges and training platforms where enterprises can contribute anonymized data for model training, facilitating large-scale AI development while safeguarding privacy [337-344].
MAJOR DISCUSSION POINT
Data sharing infrastructure for AI development
A
Audience
1 argument152 words per minute438 words172 seconds
Argument 1
Call for interoperable AI APIs so that a device or application from one vendor works seamlessly across other platforms (Audience)
EXPLANATION
An audience member raises the issue that AI‑enabled devices such as Meta glasses may become locked to a single ecosystem, and asks whether a common AI API architecture can be created to ensure cross‑platform interoperability with other services like Google or other applications.
EVIDENCE
The participant notes that Meta glasses currently work only with Meta, and suggests the need for an AI API framework that would allow products from one vendor to operate across different platforms and applications [312-320].
MAJOR DISCUSSION POINT
Interoperability of AI applications across vendors
Agreements
Agreement Points
AI will be a native, ubiquitous component of 6G architecture rather than an after‑thought
Speakers: Ashok Kumar, Surojeet Roy, Radhakant Das
ITU’s “ubiquitous intelligence” principle makes AI a native element of every 6G component (Ashok Kumar) AI‑integrated usage scenario is part of the ITU 6G framework (Surojeet Roy) AI is positioned as the basic infrastructure that will permeate radio, core, edge and satellite layers (Radhakant Das)
All three speakers stress that AI is being embedded from the outset of 6G design, with the ITU’s ubiquitous-intelligence principle, inclusion in 6G usage scenarios, and AI described as the foundational infrastructure across the whole network stack [28-30][27-28][88-94].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with emerging 6G research that positions AI as integral to network functions, emphasizing self-learning, predictive management and edge decision-making [S35].
The AI/6G ecosystem must be open, API‑driven and interoperable to avoid silos and enable monetisation
Speakers: Rajeev Saluja, Audience, Sandeep Sharma
An open, API‑driven and loosely‑coupled ecosystem is essential to avoid proprietary lock‑in and to monetise network APIs (Rajeev Saluja) Call for interoperable AI APIs so that a device or application from one vendor works across other platforms (Audience) National coordination mechanisms and sandbox frameworks are needed to align pilots with standards and keep the ecosystem open (Sandeep Sharma)
The panel agrees that a common, open API layer is crucial for cross-vendor interoperability and for creating market opportunities, with industry and audience participants explicitly requesting such standards and the government stressing coordination and openness [330-334][312-320][237-244].
POLICY CONTEXT (KNOWLEDGE BASE)
Open, API-driven architectures are advocated in international AI standards discussions, which argue that interoperable open standards preserve sovereignty while fostering ecosystem growth [S32][S33][S43].
Edge inference and higher uplink capacity are required to support the surge of AI‑driven traffic
Speakers: Surojeet Roy, Sandeep Sharma, Rajeev Saluja
AI‑enabled wearables and smart glasses will rely on edge inference because of limited on‑device compute (Surojeet Roy) AI traffic introduces new latency, coverage and token‑economy dimensions; even a 10‑20 % latency reduction yields large efficiency gains (Sandeep Sharma) Simple AI workloads will be handled at the edge, while complex multi‑step workflows stay in the core (Rajeev Saluja)
All three highlight that AI applications will shift traffic patterns, demanding more uplink bandwidth and pushing inference toward the edge to meet latency and power constraints [115-124][170-174][159-160].
POLICY CONTEXT (KNOWLEDGE BASE)
Experts highlight that 6G will rely on edge inference and expanded uplink bandwidth to handle AI workloads, a premise reflected in technical roadmaps for AI-enabled 6G [S35].
India should develop a sovereign, end‑to‑end AI stack built domestically to control costs, security and cultural relevance
Speakers: Rajeev Saluja, Sandeep Sharma, Ashok Kumar
A sovereign end‑to‑end AI stack is necessary to control costs, ensure security and reflect local cultural contexts (Rajeev Saluja) AI sovereignty is important for economic sense and to embed social values in the system (Sandeep Sharma) Building an indigenous end‑to‑end 6G technology stack is a historic opportunity for India (Ashok Kumar)
Government and industry converge on the need for a domestically built AI/6G ecosystem that remains under Indian control while delivering affordable intelligence [278-282][289-292][34-35].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers stress the need for end-to-end domestic AI stacks to ensure data sovereignty, security and culturally appropriate models, echoing India’s push for sovereign capabilities [S27][S28][S29][S26].
A national data‑exchange and coordinated AI‑model training framework is needed to leverage India’s massive data while preserving confidentiality
Speakers: Sandeep Sharma, Audience, Rajeev Saluja
Centralised data exchanges and training platforms enable large‑scale AI model development while protecting confidential data (Sandeep Sharma) Leverage India’s huge data sets to train models locally and at scale (Audience) AI must be built in every Indian language, requiring large domestic data for training (Rajeev Saluja)
Consensus emerges on establishing a national data-exchange infrastructure to harness India’s data wealth for AI model training, with attention to privacy and multilingual needs [337-344][321-328][333-336].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports on India’s digital future note that data is fragmented across sectors and call for a national data-exchange platform that safeguards confidentiality while enabling large-scale model training [S23][S26].
Similar Viewpoints
Both the government representative and the industry leader stress the importance of building a home‑grown, inclusive 6G/AI ecosystem that serves all of India’s population, rather than relying on imported solutions [34-35][149-158].
Speakers: Ashok Kumar, Rajeev Saluja
Launch of the 6G Accelerated Research Program, testbeds and 100 5G labs to create an indigenous end‑to‑end 6G stack (Ashok Kumar) Vision of “democratizing intelligence” – building an affordable AI ecosystem that reaches every citizen (Rajeev Saluja)
Both highlight that AI techniques can materially improve network performance – either by enhancing physical‑layer capacity or by redefining KPIs such as latency and coverage [199-202][170-174].
Speakers: Surojeet Roy, Sandeep Sharma
AI‑driven signal processing (DeepRx/DeepTx) can increase capacity by 25‑30 % and enable higher‑order modulation (Surojeet Roy) AI traffic’s latency, coverage and token‑economy dimensions become productivity KPIs; modest latency reductions yield large efficiency gains (Sandeep Sharma)
Unexpected Consensus
Agreement on an open, interoperable AI API layer despite strong calls for a sovereign AI stack
Speakers: Rajeev Saluja, Audience, Sandeep Sharma, Rajeev Saluja (sovereignty argument)
Open, API‑driven ecosystem to avoid lock‑in (Rajeev Saluja) Call for interoperable AI APIs across vendor ecosystems (Audience) National coordination mechanisms to keep the ecosystem open (Sandeep Sharma) Need for a sovereign end‑to‑end AI stack built in India (Rajeev Saluja – sovereignty argument)
It is unexpected that the same participants simultaneously champion full openness and interoperability while also insisting on a completely sovereign AI stack, revealing a nuanced balance between openness for innovation and sovereignty for security and cultural relevance [330-334][312-320][237-244][278-282].
POLICY CONTEXT (KNOWLEDGE BASE)
Consensus on an open API layer is consistent with global AI standards work that promotes modular, interoperable interfaces as a way to balance openness with sovereign control [S32][S33][S43][S36].
Overall Assessment

The discussion shows a strong convergence among government, industry and academia on four core themes: AI as a native element of 6G, the necessity of an open API‑driven ecosystem, the shift of AI workloads to the edge with higher uplink demands, and the pursuit of a sovereign yet inclusive AI/6G stack supported by national data‑exchange frameworks. While participants differ on the exact balance between openness and sovereignty, the overall alignment is high.

High consensus – the shared positions provide a clear, coordinated roadmap for India’s 6G and AI strategy, suggesting that policy, research funding and industry initiatives are likely to move forward in a mutually reinforcing manner.

Differences
Different Viewpoints
Sovereign AI stack versus an open, interoperable AI ecosystem
Speakers: Rajeev Saluja, Sandeep Sharma, Radhakant Das
A sovereign end‑to‑end AI stack, built and operated in India, is necessary to control costs, ensure security and reflect local cultural contexts (Rajeev Saluja) An open AI ecosystem that is API‑driven and loosely coupled is essential to avoid siloed solutions and to enable monetisation of network APIs (Sandeep Sharma) A hybrid approach is advocated: maintain openness for learning and innovation while preserving sovereign control over critical layers (Radhakant Das)
Rajeev stresses that the entire AI value chain must be Indian-owned and sovereign to keep costs low and protect cultural and security interests [278-282]. Sandeep argues that openness, API-driven design and loose coupling are crucial for ecosystem health and commercialisation [330-334]. Radhakant adds that while openness is needed for cross-learning, some components should remain sovereign, highlighting a tension between full sovereignty and openness [295-302].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension mirrors the “open sovereignty” debate in international AI policy, where stakeholders weigh fully closed stacks against interoperable ecosystems [S29][S30][S31][S32].
Approach to data sharing and model training for AI
Speakers: Sandeep Sharma, Rajeev Saluja
Centralised data exchanges and training platforms are needed to leverage India’s large data sets while preserving confidentiality (Sandeep Sharma) AI models must be trained on Indian data to ensure relevance and avoid bias; intelligence should be delivered in every local language (Rajeev Saluja)
Sandeep proposes a national, centralised data-exchange framework where anonymised enterprise data can be pooled for model training, emphasizing scalability and privacy safeguards [337-344]. Rajeev focuses on the necessity of training models on India-specific data to capture local linguistic and cultural nuances, implying a more distributed or locally-controlled data approach [333-336]. The two positions differ on the degree of centralisation versus localisation of data assets.
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on Indian data governance highlight the challenge of sharing siloed data while retaining ownership, underscoring divergent views on centralized versus federated model training [S23][S26][S29].
Unexpected Differences
Renting versus building intelligence and the role of shared infrastructure
Speakers: Rajeev Saluja, Sandeep Sharma
We cannot rent intelligence; it must be built and scaled domestically (Rajeev Saluja) Suggests using shared compute resources such as GPUs at cell towers for training and inference, implying a shared (rented) model (Sandeep Sharma)
Rajeev’s strong stance that intelligence must be built in-house and not outsourced [154-156] contrasts with Sandeep’s proposal to utilise shared infrastructure (e.g., cell-tower GPUs) for AI workloads, which could be interpreted as a form of ‘renting’ or leveraging common resources [341-343]. This tension between self-reliance and shared resource utilisation was not anticipated given the overall consensus on national development.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses suggest a strategic mix of shared infrastructure and selective in-house development, arguing that limited sovereign resources should focus on critical control points rather than full stack replication [S31][S39][S30].
Degree of openness in AI standards versus sovereign control
Speakers: Rajeev Saluja, Sandeep Sharma
Calls for a sovereign AI ecosystem where the entire value chain is Indian (Rajeev Saluja) Advocates an open, API‑driven, loosely‑coupled ecosystem to avoid vendor lock‑in (Sandeep Sharma)
While both speakers support a strong national AI capability, Rajeev insists on full sovereignty of the stack, whereas Sandeep stresses openness and interoperability, revealing an unexpected divergence on how much control versus openness the ecosystem should embody [278-282][330-334].
POLICY CONTEXT (KNOWLEDGE BASE)
International standards bodies advocate open, modular standards as a means to preserve national control while enabling cross-border collaboration, reflecting the ongoing debate over openness versus sovereignty [S32][S33][S43][S29][S30].
Overall Assessment

The discussion shows broad consensus on the importance of AI‑enabled 6G for India’s digital future, but notable disagreements arise around the balance between national sovereignty and openness, and the architecture for data sharing and model training. These disagreements centre on whether the AI stack should be fully sovereign or open, and how centralized data exchanges should be versus locally‑controlled data ecosystems.

Moderate – while participants share common goals (democratising intelligence, edge computing, national research programmes), they diverge on strategic implementation (sovereign vs open models, centralised vs distributed data governance). The implications are significant: policy must reconcile the push for self‑reliance with the need for interoperable standards and collaborative innovation, and must define clear frameworks for data sharing that respect both scalability and national sovereignty.

Partial Agreements
All three speakers share the goal of creating a robust, inclusive AI/6G ecosystem for India. Ashok focuses on government funding and test‑beds, Rajeev on the broader societal goal of democratising intelligence, and Sandeep on the technical architecture (open APIs). Their methods differ – subsidy programmes vs. societal outreach vs. open‑API design – but they converge on the same overarching objective [45-52][149-158][330-334].
Speakers: Ashok Kumar, Rajeev Saluja, Sandeep Sharma
Government‑led schemes (6G Accelerated Research Program, 5G labs, subsidies) to build an indigenous 6G stack (Ashok Kumar) Vision of ‘democratizing intelligence’ – building an affordable end‑to‑end AI ecosystem for every citizen (Rajeev Saluja) Need for an open, API‑driven and loosely‑coupled ecosystem to avoid silos and enable monetisation (Sandeep Sharma)
Both agree that edge computing is essential for latency‑sensitive applications. Surojeet emphasises the power‑consumption advantage of edge inference, while Rajeev highlights that routine AI tasks should be processed at the edge, reserving the core for more complex pipelines [137-146][159-160].
Speakers: Surojeet Roy, Rajeev Saluja
Shifting inference to the edge reduces latency and power consumption for latency‑critical use cases such as autonomous vehicles and robotic surgery (Surojeet Roy) Most simple inferencing workloads will be handled at the edge, with complex workflows at the core (Rajeev Saluja)
Takeaways
Key takeaways
India is positioning AI as the core infrastructure for the upcoming 6G era, with AI being a native element of every network component (ITU’s ‘ubiquitous intelligence’ principle). The government is actively building a 6G ecosystem through low‑cost 3GPP/TSDSI membership for startups, the 6G Accelerated Research Program, testbeds (terahertz, AOC), and the 100 5G labs initiative. Bharat 6G Alliance and multiple ministries (DOT, DST, etc.) are coordinating policy, spectrum, and device work‑streams to shape standards and drive indigenous technology development. Technical expectations include a reversal of the traditional downlink‑uplink traffic ratio, higher uplink capacity, 400 MHz of bandwidth, and AI‑enhanced signal processing (DeepRx/DeepTx) that could boost capacity by 25‑30 %. Edge inference will be critical for latency‑sensitive use cases (autonomous vehicles, robotic surgery) while central data‑centers will handle more complex, multi‑agent workloads. Business vision focuses on ‘democratizing intelligence’ – delivering affordable, end‑to‑end AI services to every citizen and creating new enterprise value pools: demand analytics, workflow automation, and enhanced security. An open, API‑driven, loosely‑coupled ecosystem is deemed essential for interoperability, monetisation of network APIs, and avoiding siloed solutions. Sovereignty is emphasized: the AI stack—from device to cloud to edge—should be built and operated in India to control costs, ensure security, and reflect local cultural contexts, while still leveraging openness where appropriate.
Resolutions and action items
DOT to continue subsidising TSDSI/3GPP membership for Indian startups (cost ~₹10,000). Maintain and expand the 6G Accelerated Research Program (100+ projects) and associated testbeds (terahertz, AOC). Leverage the existing 100 5G labs across institutes for 6G research; industry participants are urged to adopt one or two labs for collaborative work. Strengthen coordination with Bharat 6G Alliance working groups to translate their recommendations into government policy. Develop a national data‑exchange framework and sandbox environment to enable safe, scalable AI model training and testing across sectors. Promote open, API‑centric architecture for AI services to enable cross‑vendor interoperability and future monetisation of network APIs. Encourage deployment of edge compute resources (e.g., GPUs at cell towers) to democratise AI inference and training for end‑users.
Unresolved issues
Exact mechanisms and standards for interoperable AI APIs that would allow devices/applications from different vendors to work seamlessly together. Detailed roadmap, timelines, and governance for the upcoming 6G standard releases (e.g., release 21) and how current pilots will align with them. Specific safety, auditability, and explainability policies for AI models that control live network parameters. Quantitative breakdown of AI workload distribution (device vs edge vs core vs cloud) for typical use cases. Business model and pricing strategy for monetising network‑API services, especially for large enterprises such as banks. Implementation details for the proposed “token economy” and how token sovereignty will be enforced technically.
Suggested compromises
Adopt a hybrid ecosystem: keep core AI/6G layers sovereign and Indian‑controlled while allowing open, API‑driven interfaces for broader innovation and learning. Combine open‑source, interoperable AI APIs with national data‑exchange sandboxes to balance openness with security and cultural relevance. Utilise under‑used edge compute (e.g., idle GPU capacity at cell sites) to provide low‑cost AI services, thereby sharing resources without compromising sovereignty.
Thought Provoking Comments
The ITU’s 6G design principle of “ubiquitous intelligence” means every element of the end‑to‑end 6G system – user equipment, radio, core and applications – will have AI embedded natively, not as an after‑thought.
It reframes 6G from being just a faster network to being an AI‑first communications fabric, highlighting a fundamental shift in how standards are being conceived.
Set the technical baseline for the whole panel, prompting others to discuss how AI will be woven into RAN, edge and core. It led to deeper conversations about AI‑native features (e.g., DeepRx/DeepTx) and the need for new ecosystem policies.
Speaker: Ashok Kumar
We cannot rent intelligence. We have to build and scale it ourselves so that every citizen gets affordable, pervasive intelligence.
It challenges the prevailing model of consuming third‑party AI services and positions intelligence as a public utility, echoing the earlier “democratizing connectivity” narrative.
Shifted the discussion from pure technology to economic and policy dimensions, prompting follow‑up remarks on sovereign AI ecosystems, cost‑effective edge deployment, and the role of government schemes.
Speaker: Rajeev Saluja
By 2033, about 30 % of total traffic will be AI‑driven, and the uplink/downlink ratio will move from roughly 10:1 to about 4:1, demanding far higher uplink capacity.
Provides a concrete, data‑backed forecast that reframes AI as a traffic‑shaping force, not just a service layer, and quantifies the network redesign needed.
Triggered a cascade of technical discussions about spectrum, spectral efficiency, AI‑assisted signal processing (DeepRx/DeepTx), and the necessity of moving compute to the edge.
Speaker: Surojeet Roy
AI can improve spectral efficiency by 25‑30 % and enable higher‑order modulation, effectively delivering 20× more capacity for 6G through AI‑driven RAN optimization.
Links AI directly to measurable performance gains, moving the conversation from abstract concepts to tangible engineering benefits.
Prompted other panelists to explore practical implementation paths, such as 5G‑Advanced evolution, AI‑native slicing, and the role of test‑beds and labs mentioned earlier.
Speaker: Surojeet Roy
We need a national framework for AI‑native architectures, centralized data exchanges, and audit‑able models to ensure safety, explainability and interoperability across vendors.
Highlights governance, security, and standardisation challenges that are often overlooked in hype‑driven AI discussions.
Steered the dialogue toward policy, sandbox creation, and cross‑industry collaboration, influencing later audience questions about interoperability and open APIs.
Speaker: Sandeep Sharma
India must develop a sovereign AI ecosystem – from device to cloud to edge – so that the entire inference/value chain is owned and operated domestically.
Introduces the concept of AI sovereignty, tying national security, economic independence, and cultural relevance together.
Created a turning point where the conversation moved from technical possibilities to strategic national priorities, leading to comments on cultural bias in models and the need for India‑specific training data.
Speaker: Rajeev Saluja
Putting GPUs at cell towers can democratise AI by allowing idle compute at the edge to be used for training or inference, lowering costs for developers and users.
Proposes an innovative, infrastructure‑level solution to make AI resources widely accessible, bridging the gap between cloud‑centric and edge‑centric models.
Expanded the discussion on edge compute, influencing later remarks about open API‑driven ecosystems and the analogy to India’s UPI model for openness.
Speaker: Surojeet Roy
Overall Assessment

The discussion was shaped by a series of pivotal remarks that moved the conversation from a high‑level vision of 6G to concrete technical, economic, and policy dimensions. Ashok Kumar’s framing of AI as a native design principle set the stage, while Rajeev Saluja’s call to ‘build, not rent, intelligence’ and his sovereignty argument reframed the debate around national self‑reliance. Surojeet Roy’s traffic‑forecast and AI‑driven capacity gains grounded the vision in measurable network requirements, prompting deeper technical dialogue. Sandeep Sharma’s emphasis on governance, safety, and data‑exchange frameworks introduced the necessary regulatory perspective. Together, these comments redirected the panel from abstract hype to actionable pathways, influencing each other’s responses and steering the audience’s questions toward interoperability, open APIs, and practical deployment models.

Follow-up Questions
What is the expected distribution of AI inference workloads across devices, edge, core network, and cloud (percentage split) for typical 6G applications?
Understanding the split is crucial for resource planning, network architecture, and cost allocation in the rollout of AI‑native 6G.
Speaker: Radhakant Das (to Surojeet Roy)
What level of regulatory or agency influence should be applied to AI‑driven traffic, especially regarding cloud versus edge processing?
Clarifying regulatory influence will shape policies on data sovereignty, latency requirements, and compliance for AI services.
Speaker: Radhakant Das (to panel)
What specific coordination mechanisms or co‑creation models should industry, academia, and government adopt to align AI‑6G pilots with emerging standards and safety guidelines?
A structured framework is needed to avoid siloed development, ensure standard compliance, and address safety concerns.
Speaker: Radhakant Das (to Sandeep Sharma)
How can ROI for AI‑6G use cases in priority sectors (BFSI, manufacturing, healthcare, mobility) be measured within the next 1.5 years, and what metrics should be used?
Defining clear ROI metrics will help stakeholders justify investments and track the economic impact of early 6G deployments.
Speaker: Radhakant Das (to Sandeep Sharma)
What framework should be established to balance openness (interoperability) with sovereignty in AI‑native telecom ecosystems?
Balancing open standards with national sovereignty is essential for security, innovation, and global competitiveness.
Speaker: Radhakant Das (to Rajeev Saluja)
How can an AI‑API architecture be standardized to ensure interoperability of AI‑enabled devices (e.g., Meta glasses) across different platforms and operators?
Standardized AI APIs would prevent vendor lock‑in, foster a vibrant ecosystem, and simplify developer integration.
Speaker: Audience member (unnamed)
What mechanisms can be created to leverage India’s massive domestic data for training large language models while preserving privacy and encouraging industry participation?
Utilizing domestic data responsibly is key to building sovereign AI capabilities and reducing dependence on external models.
Speaker: Audience member (unnamed)
What is the current state and future potential of monetizing network API‑centric services in India, and how can this be quantified?
Quantifying API‑based revenue streams will guide telecom operators in developing new business models and partnerships.
Speaker: Sidhu (AT&T audience)
What national sandbox or test‑bed frameworks are needed to safely trial AI‑native telecom services before standardization?
A sandbox environment would allow controlled experimentation, validation of safety measures, and smoother transition to standards.
Speaker: Sandeep Sharma
What safety guardrails and audit mechanisms are required for AI models that can modify live telecom network parameters?
Ensuring explainability and auditability of AI‑driven network changes is vital to maintain reliability and trust.
Speaker: Sandeep Sharma
What research is needed on uplink traffic growth due to AI applications and the required enhancements in 6G network architecture?
AI‑driven uplink surge will impact spectrum planning, hardware design, and overall network capacity.
Speaker: Surojeet Roy
What performance gains can be achieved with AI‑enhanced PHY (DeepRx/DeepTx) in real‑world deployments, and how should they be validated?
Empirical validation of AI‑based physical‑layer improvements is necessary to justify standard inclusion and investment.
Speaker: Surojeet Roy
How will the token economy affect network KPIs and business models for AI‑driven services?
Understanding token‑based economics will influence pricing, resource allocation, and sustainability of AI services.
Speaker: Sandeep Sharma
How can AI models be trained to reflect Indian driving behavior for autonomous vehicle use cases?
Localized model training is essential for safety and effectiveness of autonomous systems in India’s unique traffic environment.
Speaker: Surojeet Roy
What is the optimal split of inference between edge and central cloud for different AI use cases in India?
Determining the edge‑cloud split per use case will guide architecture design, latency management, and cost efficiency.
Speaker: Surojeet Roy and Rajeev Saluja
What are the power consumption implications of distributing AI inference to edge nodes versus centralized data centers?
Assessing power distribution is critical for sustainable network operation and infrastructure planning.
Speaker: Surojeet Roy
What are the spectrum requirements (e.g., 400 MHz) and spectral efficiency targets for 6G, and how realistic are they?
Clarifying spectrum needs and efficiency goals will inform policy, licensing, and technology development strategies.
Speaker: Surojeet Roy

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.