How AI Drives Innovation and Economic Growth


Session at a glance

Summary

This discussion, moderated by Jeanette Rodrigues at the Bharat Mandapam, focused on how artificial intelligence can either narrow or widen development gaps between countries, particularly examining opportunities and challenges for emerging economies like India. Johannes Zutt from the World Bank opened by highlighting AI’s potential as a game-changer for developing nations, noting that 15-16% of jobs in South Asia show strong complementarity with AI, enabling workers to enhance their skills and effectiveness across sectors like agriculture, healthcare, and finance.


The panelists explored the concept of “small AI” – practical, affordable, locally relevant applications that work with limited infrastructure – as opposed to large foundational models concentrated in the US and China. Michael Kremer emphasized AI’s potential to provide public goods like weather forecasting and digital identity systems, citing India’s success in distributing AI weather forecasts to 38 million farmers. Anu Bradford discussed regulatory approaches, comparing the EU’s rights-driven framework with other models, while debunking the myth that regulation necessarily stifles innovation.


Ufuk Akcigit raised concerns about market concentration in AI’s foundational layer, noting worrying trends of talent migration from academia to large tech companies and the shift from open to protected science. Iqbal Dhaliwal stressed the importance of evidence-based evaluation of AI interventions, highlighting examples where promising technologies failed due to trust issues or inadequate adaptation of existing systems.


The discussion revealed both optimism about AI’s transformative potential in healthcare, education, and government services, and significant concerns about labor market disruption, market concentration, and the risk of humans becoming overly dependent on AI systems. The panelists concluded that realizing AI’s benefits while mitigating risks requires careful policy design, robust governance frameworks, and continued investment in human capabilities alongside technological advancement.


Key points

Major Discussion Points:

AI’s Dual Potential for Development: The discussion centered on how AI could either narrow or widen development gaps, with particular focus on “small AI” – practical, affordable, locally relevant applications that work in environments with limited connectivity and infrastructure, versus large foundational models that require significant resources.


Market Concentration vs. Democratization: A key tension emerged between AI’s democratizing potential at the application layer (where small businesses can access previously unavailable tools) and concerning concentration trends at the foundational layer, where high barriers to entry in compute, data, and talent are creating oligopolistic conditions.


Real-World Implementation Challenges: Panelists emphasized that successful AI deployment requires addressing fundamental systemic issues – from basic infrastructure (electricity, internet) to business environments, regulatory frameworks, and human adaptation. Technology alone cannot solve problems without proper institutional support.


Regulatory Sovereignty and Global Power Dynamics: The discussion explored how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the US and China, examining different regulatory approaches (US innovation-focused vs. EU rights-driven) and their implications for emerging economies.


Evidence-Based Evaluation and Scaling: Strong emphasis on rigorous testing of AI interventions, moving beyond technological capability to measure actual user impact, scalability, and continuous improvement, with multiple examples of promising pilots that failed to scale due to political economy factors.


Overall Purpose:

The discussion aimed to provide policymakers in developing countries with practical guidance on harnessing AI’s benefits while mitigating risks, moving beyond both utopian and dystopian narratives to focus on real-world implementation challenges and opportunities.


Overall Tone:

The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterized earlier AI summits. While panelists acknowledged significant risks around market concentration, job displacement, and governance challenges, they maintained a constructive focus on actionable solutions. The conversation remained consistently grounded in empirical evidence and real-world examples, avoiding both technological determinism and excessive pessimism.


Speakers

Speakers from the provided list:


Jeanette Rodrigues: Moderator/Host of the panel discussion


Johannes Zutt: World Bank representative (referred to as “John” in the discussion)


Ufuk Akcigit: Macroeconomist, working on World Development Report 2026 on AI and development with the World Bank


Michael Kremer: Nobel Prize winner, involved with Development Innovation Ventures and various AI development initiatives


Anu Bradford: Legal scholar/academic based in the U.S., originally from Europe, specializing in AI regulation and policy


Iqbal Dhaliwal: Works at J-PAL (Abdul Latif Jameel Poverty Action Lab), former civil services exam topper in India, focuses on evidence-based policy interventions


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speaker list.


Full session report

This comprehensive discussion at the Bharat Mandapam, moderated by Jeanette Rodrigues, brought together leading experts to examine one of the most pressing questions in international development: whether artificial intelligence will narrow or widen the development gap between nations. The panel featured Johannes Zutt from the World Bank, Nobel laureate economist Michael Kremer, macroeconomist Ufuk Akcigit, legal scholar Anu Bradford (originally from Europe but based in the US), and development practitioner Iqbal Dhaliwal (a civil services exam topper turned researcher), each offering distinct perspectives on AI’s transformative potential and inherent risks.


Rodrigues noted this represented the fourth AI summit, following previous gatherings including the first in the UK, and observed a notable shift from fear-based discussions in earlier summits to the hope-focused approach evident in India’s “AI for all” objective.


AI’s Transformative Potential for Development

Johannes Zutt opened the discussion by positioning AI as a potential game-changer for emerging markets and developing economies, presenting evidence from the World Bank’s recent research in South Asia. The findings revealed that approximately 15-16% of jobs in the region demonstrate strong complementarity with AI, enabling workers to expand their skills and effectiveness rather than being displaced. This statistic challenges the common narrative of AI as primarily a job destroyer, instead highlighting its potential as a productivity enhancer.


Zutt described practical applications that illustrate AI’s democratising potential: farmers using AI to identify crop diseases and pests, nurses leveraging AI for diagnostic support in unfamiliar cases, and financial institutions employing AI to better assess borrower creditworthiness. These examples demonstrate how AI can fill critical skill gaps in healthcare, education, and financial services.


However, Zutt acknowledged significant challenges facing developing countries in harnessing AI’s potential. Basic infrastructure deficits—unreliable electricity, weak internet connectivity, limited digital literacy—create fundamental barriers to AI adoption. Many users may need to rely on voice-based interactions with basic devices rather than sophisticated smartphones.


The Small AI Revolution

Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that address specific problems whilst working within constraints of limited connectivity, data availability, skills, and infrastructure. This approach contrasts with large foundational models that require massive computational resources.


Zutt emphasised that small AI represents the most promising pathway for developing countries, requiring bespoke solutions that help users conduct basic investigations using their phones, identify problems, find solutions, and connect with local resources. India emerged as a compelling example, with the world’s third-largest digital universe after the United States and China, built on strong foundations through digital identity programmes and payment platforms.


Market Concentration and Creative Destruction

Ufuk Akcigit introduced a crucial analytical framework distinguishing between AI’s foundational layer and application layer. At the application layer, AI democratises capabilities previously available only to large businesses, enabling small enterprises to access sophisticated tools. However, the foundational layer presents extraordinarily high entry barriers due to compute-intensive requirements and massive data needs, creating conditions prone to market concentration.


Akcigit presented empirical evidence of troubling trends: market concentration in the United States has been increasing since 1980, accelerating after 2000, with innovative resources increasingly shifting towards large incumbent firms. He highlighted a significant brain drain from academia to industry, with dramatic salary increases in industry accelerating after breakthrough moments in 2012 (image processing) and 2017 (foundational models). When researchers move to industry, their publication output drops significantly whilst patenting increases dramatically, representing a shift from open science to protected intellectual property.


Akcigit’s most provocative insight challenged AI’s development premise, questioning why entrepreneurship and dynamism were absent in emerging economies before AI’s arrival. He noted that firm size in developing countries was often best predicted by family size rather than competitive performance, suggesting AI alone cannot overcome deep-seated institutional barriers.


Public Goods and Government Investment

Michael Kremer provided analysis of market failures and government roles, arguing that whilst private firms develop profitable AI applications, critical public goods applications require government and multilateral support. He cited AI-powered weather forecasting as an exemplar: India’s distribution of AI weather forecasts to 38 million farmers demonstrated both the scale of impact and the public-good nature of such services.


During an unpredictable monsoon season, AI forecasts accurately predicted early arrival in Kerala and southern India followed by unexpected delays—information that reached farmers when other sources failed. Survey evidence showed farmers responding by adjusting transplanting schedules and seed varieties.


Kremer also highlighted India’s digital identity system as a powerful example of government investment in AI-enabled public goods creating platforms for broader innovation. He referenced Microsoft Research India’s HAMS program for driver’s licenses as another example of AI applications in traffic safety.


However, Kremer expressed concern about public sector adoption challenges, noting that government systems may resist AI technologies, potentially excluding the poor from benefits in public services.


Regulatory Sovereignty and Global Power Dynamics

Anu Bradford addressed how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the United States and China, with DeepSeek representing China’s position in large language models. She argued that the Global South has the same incentives for regulatory sovereignty as developed nations but faces extraordinary implementation challenges.


Bradford’s analysis of the European Union’s rights-driven regulatory framework offers lessons for countries seeking to balance innovation with protection. Crucially, she challenged the conventional wisdom that regulation stifles innovation, calling this a “false choice.” Her analysis of Europe’s innovation gap identified four structural factors: lack of a digital single market across 27 jurisdictions, absence of robust capital markets (with only 5% of global venture capital compared to over 50% in the United States), legal frameworks discouraging risk-taking, and failure to harness global talent effectively.


This reframing suggests developing countries can pursue protective regulation without sacrificing innovation, provided they address underlying structural factors that drive technological development.


Implementation Challenges and Real-World Constraints

Iqbal Dhaliwal brought crucial field experience, emphasising that successful AI applications must be demand-driven and free up time for frontline workers rather than adding burden. His example of AI-powered essay feedback in public schools illustrated this principle—the technology eliminated routine tasks like correcting spelling errors, freeing teachers for higher-value activities like analytical thinking instruction.


However, Dhaliwal’s research reveals systematic implementation failures even when AI demonstrates superior laboratory performance. His most revealing example involved machine learning for tax collection in India: despite successfully increasing identification of fraudulent firms from 38% to 55% at low cost, officials refused to scale the programme because it threatened existing power structures by removing human discretion in enforcement decisions.


Evidence-Based Evaluation and Scaling

Both Kremer and Dhaliwal emphasised rigorous evaluation methodologies. Kremer outlined a four-stage framework: model evaluation (technical performance), user impact assessment (efficacy trials), scalability testing (effectiveness at scale), and continuous improvement systems. He referenced Development Innovation Ventures as an example of tiered funding approaches—small grants for pilots, larger grants for rigorous testing, and substantial funding for successful scale-up.


Future Risks and Opportunities: 2035 Predictions

In rapid-fire predictions for 2035, panellists identified both opportunities and risks:


Ufuk Akcigit expressed optimism about government productivity improvements but concern about labour market disruption, particularly for entry-level jobs that represent aspirational opportunities in developing countries. He highlighted a policy contradiction where governments incentivise AI adoption whilst taxing human employment through provident fund contributions and labour regulations.


Anu Bradford showed excitement about education and health improvements but worried about humans “getting dumber” by outsourcing thinking to AI systems. As an educator, she emphasised using AI to enhance rather than substitute human capabilities.


Michael Kremer was optimistic about health and education advances but concerned about public sector adoption failures that could exclude the poor from AI benefits in public services.


Iqbal Dhaliwal shared optimism about healthcare and education whilst worrying about market concentration preventing broad benefit distribution.


Johannes Zutt expressed excitement about targeted poverty reduction through AI-enabled individual-level interventions but warned that inadequate governance frameworks could enable serious abuses.


Balancing Hope and Pragmatism

The discussion successfully balanced optimistic potential with realistic assessment of implementation challenges. Unlike earlier AI summits dominated by fear about job displacement, this conversation maintained constructive focus on actionable solutions whilst acknowledging genuine risks.


The panellists’ diverse backgrounds provided complementary perspectives, with convergence on key issues like evidence-based evaluation, locally relevant solutions, and market concentration concerns suggesting robust foundations for policy development.


Conclusion and Policy Implications

The discussion revealed that AI’s impact on development gaps depends critically on policy choices made today. The technology offers genuine opportunities to leapfrog development challenges, particularly through small AI applications working within existing constraints rather than requiring wholesale infrastructure transformation.


However, realising benefits requires addressing structural issues predating AI: market concentration in foundational development, inadequate governance frameworks, institutional resistance to change, and policy contradictions favouring capital over labour.


The conversation suggests developing countries need not choose between innovation and regulation but must address structural factors driving technological development: market access, capital availability, talent retention, and risk-taking culture. Success requires coordinated action across infrastructure investment, regulatory frameworks, education systems, and labour market policies.


As Rodrigues observed in closing, noting the “messy human notes” visible on panellists’ screens, the experts weren’t outsourcing their thinking to AI—embodying the principle that AI should enhance rather than replace human capabilities. The choice between AI narrowing or widening development gaps remains open, contingent on the wisdom and effectiveness of policy responses implemented today.


Session transcript

Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them.

It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge- or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the number – the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that in that layer, sort of at the bottom of the professional classes inside the bank group, there are just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher-end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies, developing markets, harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind in their local language, tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today and across the globe, actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements, interoperability, making sure that the accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves. That’s not really in our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation in an AI sandbox, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie in AI, innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with who has attended the AI summits, from the first one to today – this is, I think, the fourth AI summit being held, and the first one was held in the UK – made it a point to tell me how the first session was full of fear. It was, oh, my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how we can balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk, how do you think about AI? And especially, where do you see areas of creative destruction to foster the innovation that we need?

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT, a general-purpose technology. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? What does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers. One is the foundational layer, and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application layer. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.

When we look at the foundation layer, the entry barrier is really, really high, and, you know, it’s very compute-heavy. It’s very data-heavy. It’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because this is the upstream to the application layer, which is downstream to the foundation layer. So that’s why whatever happens at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was, you know, when we looked at the firm’s life cycle, for instance, why was it not up or out?

Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging economies or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI will not, you know, bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting jumping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the… which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI for public goods, for example, that will not attract commercial investment commensurate with their needs.

And that’s an area where I think governments and multilateral development banks can play an important role. Some of this very much echoes what you were saying about small models, and I’ll mention the link between the two. An obvious example where I think India has been a leader for the world is the development of digital identity. As Ufuk was saying, this enables a lot of work by individual entrepreneurs, and a lot of other applications. So that’s a huge success, and I think multilateral development banks, together with India, can help bring that to many other countries. Let me take another example, one that’s not as well-known, but that picks up on your comment about farmers.

One thing that’s critical for farmers is that they have to make a bunch of decisions that are weather-dependent. When do you plant, for example? What varieties do you use, a drought-resistant variety or another variety? Most farmers don’t have access to state-of-the-art weather forecasts, and I’m not talking about one country: across low- and middle-income countries, they don’t have access to that. Now, there’s been a huge advance. We tend to think of large language models, but AI is also pushing science forward, and that includes weather forecasting; there’s really a revolution driven by AI. But weather forecasts are non-rival and largely non-excludable. They’re the classic definition of a public good.

So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to invest in producing and disseminating AI weather forecasts. Again, India is a leader here: the Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence suggests that farmers respond. Take this particular case: last year’s monsoon came early in Kerala and southern India, but then there was an unexpected delay in its progression. The AI forecasts got that right, and they were the only source of information that reached farmers with that. We did a survey in the areas above that line, and farmers are responding: they transplant more, they use hybrid seeds more.

Evidence from around the world is consistent with this: farmers respond to these AI weather forecasts. So that’s one example, but there are many others, and I’m happy to discuss them, in education, traffic enforcement, and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision; it’s something India could do unilaterally. When it comes to the large language models, that’s not the reality. The large language models are concentrated in the US, and now in China with DeepSeek. Anu, in a world where you largely have the rules being set by the two large powers, the US and China, arguably, and of course the EU as well, and you’ve done a lot of work on that: who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

So I think the Global South has the same kind of incentive for its own AI sovereignty, including regulatory sovereignty: to design the rules that work better for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to make sure that it is innovation-friendly, and yet at the same time you need to be careful in managing the risks for individuals and societies. Even very established regulators like the European Union found coming up with the AI Act one of their most challenging tasks. So there’s probably something to be learned from the jurisdictions that have gone ahead and done the kind of thinking that has resulted in some of the regulatory frameworks we now have in place.

So if you think about the choices India has when it looks around, one of them is to ask: how does the EU go about this? The EU follows what I would call a rights-driven approach to regulation. What really characterizes the AI Act, the first horizontal, binding, economy-wide regulation that the Europeans enacted, is that it seeks to protect the fundamental rights of individuals and the democratic structures of society, and that it also seeks to ensure a greater distribution of the benefits of the AI revolution. The European approach is very conscious that it wants to share the benefits so they don’t all go to the large developers of these models, but so that individual users, society at large, and

smaller companies benefit from AI as well. So there’s something the Europeans can teach in terms of that regulatory approach, in addition to some of the details of how that regulation was in the end constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into its economy as such. I think India is in a very good position to take the lessons that serve its needs, yet make the kind of local modifications and variations that better reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow-up question for you. As India makes its own rules, where does the trade-off lie between regulation and innovation?

Anu Bradford

So this is very interesting. I am based in the U.S., but I’m originally from Europe, and these two jurisdictions are often described like this: the U.S. develops technologies and the Europeans regulate those technologies. In many ways, does India want the innovation path or the regulation path? I think there are many votes that would go for innovation. But I really would like to debunk a myth, because to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not because there’s the GDPR, the General Data Protection Regulation. It’s not because there is the AI Act. The perceived innovation gap between the United States and Europe comes, I think, from four things.

So first, there is no digital single market in Europe. It’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union: 5% of global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking. I wouldn’t encourage you to replicate Europe’s, because it’s very hard to innovate on the frontier of technology, because sometimes you fail. But you need to then be given a second chance.

And the fourth, I think, foundational pillar of the robust U.S. tech ecosystem is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists and engineers who think the U.S. is the place where they can start their companies, scale their companies, fund their companies, and where U.S. universities can attract them. So as for the idea that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation: we need to understand better what actually drives technological innovation, and whether regulation should really be blamed.

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You’re working in a part of the world, South Asia, where, well, what is regulation? What is enforcement? At the risk of sounding like a provocateur, it’s a little bit the Wild West. And therefore we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and targeted use cases?

Iqbal Dhaliwal

vulnerable public schools went all the way from 11th to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying: let’s unpack this. What was this thing doing? The first thing they find out, because a lot of people ask, oh, does this mean I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that. And now the teacher can sit with you in the freed-up time and say: how did you set up the structure of this essay?

Did you think about this analytically or not? And that’s the first insight that comes from evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few technologies do that: free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner. The second thing that was really important here is that this was demand-driven, right? There was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.

So I think this is a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal more seriously than you would a normal panelist.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where it’s not going to move quickly, and it’s going to be very important for governments, for multilateral development banks, and for philanthropy to move. I think there are a number of approaches to this. One way is to encourage innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. I’ll give you one example of something that I’m involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding, so there are initially very small grants to pilot new ideas.

Then there are somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, there are funds to help transition them to scale. Why is that important? Well, if we’re thinking about public services, and there are other sectors where this is needed, there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but if they have to sell them to the government, they’re facing a monopsonistic buyer. They’re probably not going to get rich doing that. So some support to generate more entrants in that market is, I think, very important.

It’ll also mean that prices will go down and quality will go up when the government does buy. Let me give another example of the potential, because we tend to focus on the same examples time after time. Here’s one that I doubt many people here are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. Traffic is a real problem interfering with urbanization, which may drive growth; there are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Automated traffic cameras have the opportunity to improve traffic outcomes but also to improve people’s perception of fairness in government, and India is moving on this. Let me mention another thing within traffic safety. Microsoft Research India developed a program called HAMS for driver’s licenses, which uses AI to automatically test whether drivers can actually pass their exams. It’s been introduced, I believe, at 56 sites across India, and hundreds of thousands of people have taken tests this way. Taking a leaf from Iqbal’s book, we followed up: we got information from Ola on ratings, and the number of drivers who were rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that’s something that was developed not by Microsoft’s main business but by Microsoft Research. If we can just create some support for more ideas like that to be developed and rigorously tested, it can benefit India and benefit the whole world.

Jeanette Rodrigues

We are running out of time. This is probably the one place in India where time is really respected, and we have to end on time.

So I had a list of wonderful questions, but if I could now move us to a space where we are really giving shorter, quicker answers to the deeply, deeply interesting ones about who’s winning and who’s losing. Michael, if I could start with you, actually. We’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use?

Michael Kremer

Okay. First, model evaluation. AI companies typically do that part: how good is the model output for specific tasks? Forecasting the weather, does it do a good job? Does it match your local language well?

Second, user impact. Here, I think there’s a role for initial pilots akin to a medical efficacy trial: if you put the work into trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale. That’s more like an effectiveness trial in medicine: it’s important to think not just about the tech but also about the human systems. Are the teachers actually going to use the product, to take Iqbal’s example? How can you get the teachers to use the product? And then the fourth area is continuous improvement. You want a system that improves the underlying models. So in procurement, we might want to think about requiring continuous A/B tests, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?

Iqbal Dhaliwal

I think hype is natural because the technology is exciting. It’s a general-purpose technology. It’s evolving so quickly. The marginal cost of deployment for the next user is very low. It’s multimodal: today you are doing it in text, tomorrow you’re doing it in video, the day after tomorrow you’re doing it in audio. Everybody who has a smartphone has it. So I can understand the hype and where it is coming from. But I think what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for. My job at J-PAL, sitting at the top, is not to worry about one professor’s evaluation or one researcher’s evaluation, but to ask: when I connect all these dots, what am I seeing?

And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. And when you try to unpack that: this technology worked even better than the human intervention in the lab, right? Some of these diagnostic tools have better predictive power in the lab, but in the field, not only is their efficiency lower, they lower the efficiency of the doctors, because we have not trained the doctors enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No, you have to adapt the rest of the system to it. This example comes from India, where, with one particular state government, we tried to improve the collection of value-added taxes, called GST in India. There is a whole worry about bogus firms that are created to game the GST, the value-added tax. A machine learning algorithm was able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused to scale it up. Think about it: you have taken away the discretion of the human to decide whether they should raid Michael’s firm or Iqbal’s firm. That is power. And if you haven’t thought through that point, what is the point of the technology?

Jeanette Rodrigues

I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out the bad actors, talking about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm. As with the earlier model that I highlighted, I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First of all, when we look at, for instance, dynamism or market concentration in the U.S., market concentration has been increasing since 1980, but at an accelerating pace after 2000. That’s the first set of evidence. The second set of evidence comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.

And when I say incumbents, I mean firms with more than 1,000 employees. Around 2000, about 50% of inventors worked for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence, from a study we are going to release next week: we looked at how AI is impacting universities, and we looked at AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; that went up to $2 million.

And there have been two breakpoints: one in 2012, the other in 2017. Of course, image processing, and then the foundational-model revolution in 2017. The more worrying part, which brings me back to the foundational-model side of things, is that this created a massive out-migration from academia to industry.

And after 2017 especially, when compute and infrastructure became so important and we saw the rise of AI, the target or destination became large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part is also that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion of this, and I think we need it before it gets too late. Because once you do the first button up wrong, the rest will follow wrong as well. That’s why I think we have to have this frank conversation early in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if I could ask you: if the finance minister of a developing country, let’s say India, comes to you and asks, Anu, how should I think about this, what would you tell her?

Anu Bradford

So if you think today about how much political power, but also geopolitical power, is shaping our conversations around AI, it is something where I think each country is now pushed towards greater techno-nationalism, techno-protectionism; AI sovereignty has become almost a universal goal for everyone. But I would remind you, even when encountering players like the United States and China, that nobody in today’s world will be completely sovereign in the AI space. Let me take just one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, we want to have these powerful foundation models.

That means you need a lot of compute. You can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA, and it leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that is dependent on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not ultimately a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies that ultimately make us all poorer. So as the finance minister of India, when approaching other middle powers and the great powers, I would keep those interdependencies in mind.

Jeanette Rodrigues

Easier said than done. Our final, final section is, of course, the rapid-fire round. We all love this in this room. In one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. We’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now? I guess you already know my second answer.

Iqbal Dhaliwal

I think concentration, the future of market concentration, is something that we should be concerned about and that we might regret not having discussed sufficiently in 10 years. On what will change in a positive direction: clearly health care and education, I think. It’s a no-brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

So first of all, it’s so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited about what happens in the education space, but also in the health space. In terms of the risks, I think one thing we are not paying attention to, and what I would even call a systemic risk, is this: many worry about AI getting almost too smart. But I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, when I think about how I will teach my students to use generative AI to enhance but not substitute their capabilities: we will make a tremendous mistake if we forgo that hard work, that beautiful moment of thinking through hard problems and creating and investing in our own capabilities. All that just cannot be outsourced, because otherwise we won’t even know what kind of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education. I think we’ll see big improvements there, but the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them. That’s because, as Iqbal indicated, the government systems and the government workers may not adapt to use them. There are also risks of copycat regulation, over-focused on problems that other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and we just don’t wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer has gone off, but I’ll take a risk and quickly run through the others.

Ufuk Akcigit

Yes. I think I am much more optimistic about the government actually adopting this. When you call 100, your call is going to get answered very quickly. The PCR van is going to be at your house much faster. Hospitals are going to be able to link your health records. So I think government-sector productivity is going to improve by leaps and bounds. The biggest risk, I think, is definitely the labor market. If there were a dial where I could slow down the adoption and give the labor market time to catch up, I would turn it; that’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.

Here it’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to be running out of jobs very, very quickly. And in the labor market, whether it is ESI, Provident Fund, or gratuity, we are piling on and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor market regulations, for hiring labor. And that, for me, is the biggest risk, actually.

Johannes Zutt

So I think that for the first time in human history, we may actually have the tools available to enable us to target poverty reduction, poverty elimination initiatives on individuals. And that could be tremendously transforming. But at the same time, I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

Johannes Zutt

Speech speed

141 words per minute

Speech length

1450 words

Speech time

612 seconds

AI as a leap‑frog tool for productivity and growth

Explanation

Johannes argues that AI offers a unique chance for emerging economies to bypass long‑standing development hurdles and boost growth and productivity. He sees AI as a game‑changer that can accelerate progress across sectors.


Evidence

“So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.” [5]. “It offers clear opportunities to enhance growth and productivity.” [1].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | The enabling environment for digital development


Infrastructure and skill gaps limit AI uptake

Explanation

He warns that many developing countries lack basic foundations such as reliable internet, electricity, and basic literacy, which hampers effective AI adoption. These constraints must be addressed before AI benefits can be realized.


Evidence

“At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use.” [19]. “They may not have an internet backbone that’s sufficiently strong.” [23]. “People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices.” [24].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Capacity development | Closing all digital divides | The enabling environment for digital development


AI can fill skill gaps in agriculture, health, finance

Explanation

Johannes highlights AI’s potential to address shortages of skilled personnel by providing pattern detection, forecasting, and resource allocation tools in sectors like education, health care, and agriculture.


Evidence

“So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.” [14].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Capacity development


Risk of job losses in entry‑level, knowledge‑based roles

Explanation

He notes that AI automation may displace entry‑level, routine knowledge jobs, creating labor‑market challenges that require policy attention.


Evidence

“One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based, performing relatively rote work that can be taken over by automation.” [32].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

The digital economy | Human rights and the ethical dimensions of the information society | Capacity development


Small AI: affordable, offline, locally relevant solutions

Explanation

Johannes defines “small AI” as practical, low‑cost applications that work with limited connectivity, data, and infrastructure, making them suitable for low‑resource settings.


Evidence

“Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited.” [82].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | Closing all digital divides | The enabling environment for digital development


Governments should create AI sandboxes and standards for safe experimentation

Explanation

He stresses the need for both public‑facing standards and private‑sector engagement, including sandbox environments, to enable safe and innovative AI deployment.


Evidence

“I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public‑facing effort to address the standards and the other issues, the interoperability and so that I mentioned before, but also a private‑sector‑facing effort because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.” [59]. “We are helping governments to create the space that enables experimentation in AI sandbox to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive.” [169].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | The enabling environment for digital development | Monitoring and measurement


Governance failures could enable abuses and power concentration

Explanation

Johannes warns that without robust governance, AI could be misused, leading to concentration of power and potential abuses.


Evidence

“I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses.” [163].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development



Ufuk Akcigit

Speech speed

163 words per minute

Speech length

1041 words

Speech time

382 seconds

Creative destruction differs between advanced and emerging markets

Explanation

Ufuk points out that the dynamics of creative destruction will vary, with emerging economies facing distinct challenges compared to advanced economies.


Evidence

“I would like to, you know, separate advanced economies from emerging or developing economies.” [51]. “Now, spillover is extremely important for creative destruction, for the future of innovation.” [44].


Major discussion point

AI as a development catalyst for emerging economies


Topics

The digital economy | Artificial intelligence | Social and economic development


Need for a business‑friendly environment to realise AI benefits

Explanation

He emphasizes that a supportive business climate is essential for entrepreneurs to develop and deploy AI solutions.


Evidence

“But at the end of the day, we need to make sure that the business friendly environment is there for entrepreneurs to come and exercise their ideas” [58].


Major discussion point

AI as a development catalyst for emerging economies


Topics

The enabling environment for digital development | The digital economy


Foundational layer is compute‑, data‑, talent‑intensive, leading to concentration

Explanation

He describes the foundational AI layer as having high entry barriers due to heavy compute, data, and talent requirements, which fosters market concentration.


Evidence

“When we look at the foundation layer, the entry barrier is really, really high, and, you know, the compute is very compute‑heavy.” [46]. “It’s very talent‑heavy.” [92]. “It’s very data‑heavy.” [93].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy


Concentration risk: incumbents dominate foundational AI market

Explanation

He notes that the foundational AI market is prone to concentration, with large incumbent information firms likely to capture most of the value.


Evidence

“So as a result, you know, this market, at least this layer, is very concentration‑prone.” [91]. “The target or the destination is large incumbent information companies, which again highlights where things are going in terms of the concentration.” [99].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy


Keeping the foundational layer contestable; universities as key players

Explanation

He argues that to prevent excessive concentration, the foundational layer should remain contestable, with universities playing a central role.


Evidence

“So that’s why if we will keep the foundational layer contestable, I think that the fundamental players there will be universities.” [95].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | Capacity development


Labor‑market risk: rapid loss of entry‑level jobs

Explanation

Ufuk identifies the biggest risk as labor‑market disruption, especially the swift disappearance of entry‑level positions without adequate safety nets.


Evidence

“The biggest risk, I think, is definitely the labor market.” [35]. “If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry.” [41].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

The digital economy | Human rights and the ethical dimensions of the information society



Michael Kremer

Speech speed

160 words per minute

Speech length

1592 words

Speech time

593 seconds

Multilateral policy actions can narrow development gaps

Explanation

Michael contends that coordinated actions by national governments and multilateral development banks can harness AI to reduce existing development disparities.


Evidence

“I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps.” [12].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Financial mechanisms | The enabling environment for digital development


AI‑driven weather forecasts improve farmer decisions

Explanation

He provides evidence that AI weather forecasts are being used by millions of farmers, enhancing agricultural decision‑making.


Evidence

“Farmers respond to these AI weather forecasts.” [30]. “So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts.” [68]. “The AI forecasts got that right, that was the only source of information that reached farmers with that.” [73]. “But weather forecasts are non‑rival.” [75].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Agricultural development


Multilateral institutions risk moving too slowly; need faster action

Explanation

He acknowledges concerns that multilateral bodies may lag behind rapid AI advances, urging more agile responses.


Evidence

“Is there a risk that multilaterals are moving too slowly relative to the technology?” [66]. “There are certain areas where the private sector is going to move, but there are other areas where they’re not going to move quickly, and it’s going to be very important for governments and for multilateral development banks and for philanthropy to move.” [71].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Financial mechanisms | The enabling environment for digital development


Evidence‑based innovation funds and staged financing to scale AI solutions

Explanation

He proposes the creation of evidence‑based innovation funds that provide tiered grants, supporting pilots and scaling successful AI applications.


Evidence

“One way is by encouraging innovation by setting up institutions like innovation funds, particularly evidence‑based, to echo Iqbal, I think evidence‑based innovation funds.” [116]. “It has tiered funding, so there’s initially very small… grants to pilot new ideas.” [168]. “Then there’s somewhat larger grants to rigorously test them as Iqbal emphasized and then for those that are most successful there’s funds to help transition them to scale up.” [167].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Financial mechanisms | Monitoring and measurement


Rigorous evaluation framework: model performance, user impact, scalability, continuous improvement

Explanation

Michael outlines a multi‑dimensional evaluation approach covering technical performance, user outcomes, scalability, and ongoing model refinement.


Evidence

“First, model evaluation.” [124]. “Second, user impact.” [134]. “Second… scalability and usage at scale that’s more like an effectiveness trial in medicine… and the fourth area is continuous improvement you want a system that improves the underlying models…” [189].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Monitoring and measurement | Artificial intelligence


Public sector may lag in adopting AI, leaving the poor without access

Explanation

He warns that if governments do not adopt AI tools, the benefits will not reach low‑income populations who rely on public services.


Evidence

“the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them.” [185].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Social and economic development | Human rights and the ethical dimensions of the information society



Anu Bradford

Speech speed

199 words per minute

Speech length

1374 words

Speech time

412 seconds

EU rights‑driven, innovation‑friendly regulation as a reference

Explanation

Anu points to the European Union’s rights‑based regulatory approach as a model that balances protection of fundamental rights with innovation.


Evidence

“The EU follows what I would call a rights‑driven approach to regulation.” [122]. “So the idea that choosing to follow… Or imitate aspects of the European rights protective regulation would come at the cost of innovation, we need to understand better what drives the technological innovation and whether regulation should…” [123].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


India can adapt global lessons while crafting sovereign AI rules

Explanation

She argues that India is well‑positioned to incorporate international best practices into locally‑tailored AI regulations, preserving sovereignty.


Evidence

“I think India is in a very good position to take the lessons that serves its needs yet make the kind of local modification and variations that are more reflecting the distinct priorities of this country.” [135].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | The enabling environment for digital development


Myth that regulation necessarily stifles innovation; need to understand drivers

Explanation

Anu seeks to debunk the belief that regulation hampers AI progress, emphasizing that well‑designed rules can coexist with innovation.


Evidence

“But I really would like to debunk this myth that to me it’s a false choice to say that the reason we don’t see these large language models being developed in Europe is not because there’s a GDPR…” [144].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Global South must assert regulatory sovereignty amid US/China dominance

Explanation

She stresses that countries of the Global South should develop their own AI regulatory frameworks to avoid dependence on the major powers.


Evidence

“the Global South has the same kind of incentive for their own AI sovereignty, including then regulatory sovereignty, to design the rules that better work for their economies, for their societies…” [137]. “But I would remind even when encountering players like the United States and China that nobody in today’s world will be completely sovereign when it comes to AI space.” [140].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development



Iqbal Dhaliwal

Speech speed

183 words per minute

Speech length

1151 words

Speech time

375 seconds

Small AI frees teachers’ and health workers’ time, improving outcomes

Explanation

Iqbal notes that AI applications that automate routine tasks free up frontline workers' time, leading to better service delivery in health and education.


Evidence

“So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner.” [28]. “It frees up the teacher time.” [81]. “There was a demand by the teachers to free up their time.” [83].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Capacity development


Laboratory‑proven AI may fail in the field without proper training and system adaptation

Explanation

He cautions that AI tools that perform well in controlled settings can underperform in real‑world deployments if users are not adequately trained or systems are not adapted.


Evidence

“So some of these diagnostic things can work, have better predictability in the lab, but in the field, they end up decreasing, not only is their efficiency lower, but it lowers the efficiency of the doctors, because we have not trained them enough.” [172]. “We just assume that just because the technology works, even if it works in the field, the rest of the system will adapt to it.” [173].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Capacity development | Monitoring and measurement


GST fraud‑detection algorithm not scaled due to concerns over human discretion and power

Explanation

He describes a case where a successful AI model for detecting bogus GST firms was not rolled out because authorities feared loss of human decision‑making power.


Evidence

“When it came time to scale up this program by the government, they refused to scale it up because you think about it, you have taken away the discretion of the human to decide whether they should raid Michael’s firm or they should raid Iqbal’s firm.” [178]. “The machine learning algorithm is able to increase the probability of predicting a bogus firm from 38% to 55% in one shot at a very, very low cost.” [181].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Governance | The digital economy


Demand‑driven design that frees frontline workers’ time is crucial for adoption

Explanation

He argues that AI solutions should be built around clear demand signals and should free up staff time to ensure uptake and impact.


Evidence

“The second thing that is really important here was that this is a demand‑driven thing, right?” [186]. “But most importantly, there was a demand by the school districts to show progress.” [187]. “Free up time.” [188].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Capacity development | Social and economic development


Shift of innovative resources to large incumbents and industry migration from academia

Explanation

He highlights a massive reallocation of talent and innovation from academic settings to large incumbent firms, raising concerns about concentration.


Evidence

“The more worrying part about this, which brings me back to the foundational model side of things, is that this created a massive out‑migration from academia to industry.” [110]. “A massive reallocation of innovative resources.” [109].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy | Capacity development


Concentration could limit diffusion of AI gains to poorer populations

Explanation

He points out that increasing market concentration may prevent AI benefits from reaching low‑income groups and regions.


Evidence

“In low‑ and middle‑income countries, they don’t have access to that.” [196]. “The poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with…” [195].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Artificial intelligence | Social and economic development | The digital economy



Jeanette Rodrigues

Speech speed

174 words per minute

Speech length

1039 words

Speech time

356 seconds

Policymakers need to keep AI‑enabled interventions in mind

Explanation

Jeanette asks what policymakers should prioritize when designing AI‑enabled programs, emphasizing the need for clear guidance and focus on impact.


Evidence

“My question to you is that what should policymakers keep in mind when designing AI‑enabled interventions, especially when it comes to small AI and the targeted use cases?” [61]. “What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI?” [170].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

The enabling environment for digital development | Artificial intelligence | Policy design


Agreements

Agreement points

AI has transformative potential for healthcare and education sectors

Speakers

– Johannes Zutt
– Ufuk Akcigit
– Anu Bradford

Arguments

AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy


Healthcare and education will see dramatic improvements through AI applications


I’m personally very excited, especially what happens in the education space but also in the health space


Summary

All speakers agree that healthcare and education represent the most promising sectors for AI transformation, with potential for significant positive outcomes


Topics

Artificial intelligence | Social and economic development


Market concentration in AI is a significant concern requiring attention

Speakers

– Ufuk Akcigit
– Johannes Zutt

Arguments

Market concentration has been increasing since 1980, accelerating after 2000, with innovative resources shifting to large incumbent firms


I think the concentration, the future of market concentration is something that we should be concerned about and we might regret not having discussed this sufficiently in 10 years


Summary

Both speakers express concern about increasing market concentration in AI development and its potential negative implications for competition and innovation


Topics

Artificial intelligence | The digital economy | The enabling environment for digital development


Public sector adoption challenges pose risks to equitable AI access

Speakers

– Michael Kremer
– Iqbal Dhaliwal

Arguments

Government systems and workers may not adapt to use AI technologies, limiting access for the poor


Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems


Summary

Both speakers identify significant challenges in public sector AI adoption, including resistance to change and failure to adapt systems, which could prevent the poor from accessing AI benefits


Topics

Artificial intelligence | Capacity development | Social and economic development


Small AI and locally relevant applications are crucial for developing countries

Speakers

– Johannes Zutt
– Michael Kremer

Arguments

Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure


Private firms develop profitable applications, but public goods applications need government and multilateral support


Summary

Both speakers emphasize the importance of practical, locally-relevant AI solutions that can work within the constraints of developing country infrastructure and address specific local needs


Topics

Artificial intelligence | Information and communication technologies for development | Closing all digital divides


Similar viewpoints

AI presents significant opportunities for development if implemented thoughtfully with appropriate support systems and policy frameworks

Speakers

– Johannes Zutt
– Michael Kremer
– Iqbal Dhaliwal

Arguments

AI offers opportunities to leapfrog development challenges, with 15-16% of jobs in South Asia showing strong complementarity with AI


AI has potential to substantially narrow development gaps if appropriate policy actions are taken


AI applications should free up time for frontline workers rather than adding to their burden


Topics

Artificial intelligence | Social and economic development | Information and communication technologies for development


Structural factors like market access, capital availability, and talent retention are more important for innovation than regulatory constraints

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Successful AI implementation requires addressing institutional and governance challenges, not just technical capabilities

Speakers

– Iqbal Dhaliwal
– Johannes Zutt

Arguments

Technology deployment requires addressing power dynamics and institutional resistance to change


Governance and regulatory safeguards are critical challenges, especially for developing countries


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Unexpected consensus

Labor market disruption as primary AI risk

Speakers

– Ufuk Akcigit
– Johannes Zutt

Arguments

Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development


AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based


Explanation

Despite their different backgrounds (academic economist vs. World Bank practitioner), both speakers converge on labor displacement as the most significant risk, particularly for developing countries where entry-level jobs represent crucial economic opportunities


Topics

Artificial intelligence | The digital economy | Social and economic development


Need for evidence-based AI evaluation

Speakers

– Michael Kremer
– Iqbal Dhaliwal

Arguments

First, model evaluation. So AI companies typically do that part. How good is the model output for specific tasks? You know, forecasting the weather. Does it do a good job? Does it match your local language well? Second, user impact


I think what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are having


Explanation

Both the Nobel laureate economist and the development practitioner strongly emphasize rigorous evaluation methodologies, showing unexpected alignment between academic and field perspectives on the importance of evidence-based assessment


Topics

Artificial intelligence | Monitoring and measurement | Social and economic development


Human capability preservation in AI era

Speakers

– Anu Bradford
– Iqbal Dhaliwal

Arguments

Risk of humans becoming overly dependent on AI and losing critical thinking capabilities


Demand-driven AI solutions that address real needs of users, teachers, and institutions are most successful


Explanation

The legal scholar and development practitioner unexpectedly converge on concerns about maintaining human agency and capabilities, emphasizing that AI should enhance rather than replace human thinking and decision-making


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Overall assessment

Summary

The speakers demonstrate strong consensus on AI’s transformative potential for healthcare and education, the importance of addressing market concentration concerns, challenges in public sector adoption, and the need for locally-relevant AI solutions. There is also significant agreement on the importance of evidence-based evaluation and addressing institutional barriers to implementation.


Consensus level

High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) suggests robust foundation for policy development. The alignment between theoretical concerns and practical implementation challenges indicates that policy frameworks addressing these shared concerns could gain broad support across different stakeholder communities.


Differences

Different viewpoints

Speed of AI adoption and labor market adaptation

Speakers

– Ufuk Akcigit
– Iqbal Dhaliwal

Arguments

Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development


AI applications should free up time for frontline workers rather than adding to their burden


Summary

Akcigit is deeply concerned about rapid AI adoption displacing workers faster than the labor market can adapt, particularly entry-level coding jobs that built India’s tech hubs. Dhaliwal focuses on designing AI to complement rather than replace workers, emphasizing applications that free up time for higher-value tasks.


Topics

Artificial intelligence | The digital economy | Social and economic development


Public sector AI adoption capability

Speakers

– Michael Kremer
– Ufuk Akcigit

Arguments

Government systems and workers may not adapt to use AI technologies, limiting access for the poor


Public sector productivity will improve significantly through AI adoption in government services


Summary

Kremer is pessimistic about public sector adaptation to AI, viewing it as a major risk that could exclude the poor from AI benefits. Akcigit is optimistic about government AI adoption, predicting dramatic improvements in service delivery and response times.


Topics

Artificial intelligence | Social and economic development | Capacity development


Primary AI risks for humanity

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Risk of humans becoming overly dependent on AI and losing critical thinking capabilities


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Summary

Bradford focuses on the risk of human intellectual degradation from AI dependency, while Akcigit is concerned about structural changes in the innovation ecosystem, particularly the brain drain from academia to industry affecting open science.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Unexpected differences

Regulation versus innovation trade-off

Speakers

– Anu Bradford
– Implicit assumption by others

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


Explanation

Bradford’s strong rejection of the regulation-innovation trade-off is unexpected given the common assumption that regulation stifles innovation. Her detailed evidence about Europe’s structural issues rather than regulatory burden challenges conventional wisdom about AI governance.


Topics

Artificial intelligence | The enabling environment for digital development | The digital economy


Trust in AI technology implementation

Speakers

– Iqbal Dhaliwal
– Other speakers

Arguments

Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems


Explanation

While other speakers focus on technical capabilities and policy frameworks, Dhaliwal's emphasis on trust and human factors as primary barriers to AI success is unexpected. His examples of doctors not using superior AI diagnostic tools due to trust issues reveal a different dimension of implementation challenges.


Topics

Artificial intelligence | Capacity development | Social and economic development


Overall assessment

Summary

The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on implementation approaches, risk priorities, and institutional capabilities. Key tensions exist between optimistic and cautious views of public sector adaptation, different prioritization of risks (labor displacement vs. human dependency vs. market concentration), and varying emphasis on technical solutions versus institutional reform.


Disagreement level

Moderate disagreement with high implications – while speakers share common goals of harnessing AI for development, their different approaches to risk management, implementation strategies, and institutional capabilities could lead to very different policy recommendations and outcomes for developing countries.


Partial agreements


All speakers agree AI has tremendous potential for developing countries, but disagree on implementation approaches. Zutt emphasizes small AI solutions, Kremer focuses on public goods applications needing government support, while Akcigit stresses the need to fix underlying business environments first.

Speakers

– Johannes Zutt
– Michael Kremer
– Ufuk Akcigit

Arguments

Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure


Private firms develop profitable applications, but public goods applications need government and multilateral support


AI creates fantastic opportunities for developing countries but requires fixing underlying business environment issues


Topics

Artificial intelligence | Social and economic development | The enabling environment for digital development


Both speakers recognize the importance of governance and institutional factors in AI implementation, but Bradford focuses on regulatory sovereignty challenges while Dhaliwal emphasizes power dynamics and resistance within existing institutions.

Speakers

– Anu Bradford
– Iqbal Dhaliwal

Arguments

Global South has incentive for AI sovereignty but regulating AI is difficult even for established bureaucracies


Technology deployment requires addressing power dynamics and institutional resistance to change


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Similar viewpoints

AI presents significant opportunities for development if implemented thoughtfully with appropriate support systems and policy frameworks

Speakers

– Johannes Zutt
– Michael Kremer
– Iqbal Dhaliwal

Arguments

AI offers opportunities to leapfrog development challenges, with 15-16% of jobs in South Asia showing strong complementarity with AI


AI has potential to substantially narrow development gaps if appropriate policy actions are taken


AI applications should free up time for frontline workers rather than adding to their burden


Topics

Artificial intelligence | Social and economic development | Information and communication technologies for development


Structural factors like market access, capital availability, and talent retention are more important for innovation than regulatory constraints

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Successful AI implementation requires addressing institutional and governance challenges, not just technical capabilities

Speakers

– Iqbal Dhaliwal
– Johannes Zutt

Arguments

Technology deployment requires addressing power dynamics and institutional resistance to change


Governance and regulatory safeguards are critical challenges, especially for developing countries


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Takeaways

Key takeaways

AI offers significant potential for developing countries to leapfrog development challenges, particularly through ‘small AI’ applications that are practical, affordable, and locally relevant


Success requires addressing foundational issues like infrastructure, digital literacy, and business environment rather than just deploying technology


Market concentration in AI’s foundational layer poses risks, with innovative resources increasingly shifting to large incumbent firms and away from open science


Effective AI implementation depends on demand-driven solutions that free up time for frontline workers and integrate well with existing systems


The choice between innovation and regulation is false – successful AI adoption requires both appropriate governance frameworks and supportive business environments


Public sector applications of AI (weather forecasting, digital identity, traffic management) require government and multilateral support as they won’t attract sufficient private investment


Trust in technology and adaptation of surrounding systems are critical factors that often cause promising AI applications to fail in real-world deployment


Resolutions and action items

World Bank Group to continue focus on ‘small AI’ applications working with governments across Indian states (Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana)


Need for evidence-based innovation funds with tiered funding structure for piloting, testing, and scaling AI applications


Governments and multilateral development banks should invest in AI applications for public goods like weather forecasting and digital identity systems


Requirement for continuous A/B testing and impact evaluation in AI procurement processes


Development of AI regulatory frameworks that balance innovation with rights protection, adapted to local contexts rather than copying templates


Unresolved issues

How to address the fundamental tension between AI-driven job displacement and the need for economic development in emerging markets


Who will ultimately set AI rules for the Global South given concentration of power in US and China


How to prevent the migration of AI talent from academia to industry and maintain open science


How to ensure public sector adoption of AI technologies when government systems resist change


How to balance AI sovereignty aspirations with the reality of global interdependence in AI supply chains


How to address labor market regulations that incentivize AI adoption over human employment


How to maintain human cognitive capabilities while leveraging AI tools in education and decision-making


Suggested compromises

Focus on AI applications that complement rather than replace human workers, particularly in freeing up time for higher-value tasks


Develop regulatory approaches that learn from established frameworks (like EU’s rights-driven approach) while adapting to local priorities and contexts


Balance between foundational AI development and application-layer innovation, recognizing different entry barriers and concentration risks


Create procurement systems that encourage competition while ensuring quality and avoiding vendor lock-in


Pursue AI sovereignty goals while acknowledging interdependence and avoiding counterproductive techno-nationalism


Implement AI solutions gradually with proper training and system adaptation to address trust and adoption challenges


Thought provoking comments

Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was it not up or out? Why was it not very competition friendly? Why did the best predictor of firm size in emerging economies or developing economies was the size of the family and or the number of male children? These are still lingering issues and AI is not, you know, will not bring magic unless we understand and fix the business environment in these economies.

Speaker

Ufuk Akcigit


Reason

This comment cuts through the AI hype to address fundamental structural issues. It challenges the assumption that AI will automatically solve development problems and forces the discussion to confront deeper institutional and cultural barriers to economic growth.


Impact

This shifted the conversation from optimistic AI use cases to a more sobering examination of underlying constraints. It prompted Jeanette to pivot toward ‘real world’ considerations and influenced subsequent speakers to address implementation challenges rather than just technological possibilities.


I really would like to debunk this myth that to me it’s a false choice to say that the reason we don’t see these large language models being developed in Europe is not because there’s a GDPR… It’s not because there is AI Act. So the reason there is a perceived innovation gap between the United States and Europe is… four things: no digital single market, no deep robust capital markets union (5% of global venture capital vs 50% in US), legal frameworks and cultural attitudes to risk-taking, and success in harnessing global talent.

Speaker

Anu Bradford


Reason

This systematically dismantles a widely held belief about the regulation-innovation tradeoff, providing concrete evidence that structural economic factors, not regulation, drive innovation gaps. It reframes the entire debate about how developing countries should approach AI governance.


Impact

This fundamentally changed the framing of the regulation vs innovation debate. It gave policymakers permission to think about protective regulation without fearing innovation loss, and shifted focus to the real drivers of technological development – capital markets, talent, and risk culture.


Everything that we do in the field ends up adding to teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few teachers do that. Free up time. So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner.

Speaker

Iqbal Dhaliwal


Reason

This provides a practical, field-tested criterion for evaluating AI interventions that cuts through technological complexity to focus on human impact. It offers a simple but powerful framework for policymakers to assess AI projects.


Impact

This introduced a concrete evaluation framework that other panelists could build upon. It grounded the abstract discussion in practical implementation reality and provided a memorable heuristic for the audience to apply in their own contexts.


When people are moving to industry from academia, their publication record goes down by 50%. They start patenting by 600% more after they move, which means that we are moving from open science to more protected science. Now, spillover is extremely important for creative destruction, for the future of innovation.

Speaker

Ufuk Akcigit


Reason

This reveals a hidden but critical consequence of AI development – the shift from open knowledge sharing to proprietary research. It connects talent migration to long-term innovation capacity in a way that’s not immediately obvious but has profound implications.


Impact

This introduced a completely new dimension to the discussion about AI’s impact on innovation ecosystems. It elevated the conversation from immediate applications to systemic effects on knowledge production and sharing, influencing how other panelists thought about long-term consequences.


I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models… we will just make a tremendous mistake if we just forewent that hard work, that beautiful moment of thinking hard problems and creating and investing in our own capabilities.

Speaker

Anu Bradford


Reason

This shifts focus from AI becoming too powerful to humans becoming too dependent, introducing a philosophical dimension about human agency and capability development that’s often overlooked in technical discussions.


Impact

This comment introduced a deeply humanistic perspective that balanced the technical and economic focus of the discussion. It prompted reflection on education and human development strategies, adding emotional resonance to the policy considerations.


An entry-level coding job might be an entry-level job in the United States. It’s the aspirational job that created Gurgaon’s and Noida’s and Mohali’s of this country. And those people are going to be running out of jobs very quickly… we are giving incentives to people to use AI, and we are taxing them through provident fund and labor market regulations to hire labor.

Speaker

Ufuk Akcigit


Reason

This powerfully illustrates how AI’s impact varies dramatically by economic context – what’s entry-level displacement in one country represents the destruction of an entire economic development model in another. It also connects AI adoption to specific policy contradictions.


Impact

This comment brought urgent specificity to abstract discussions about job displacement, making the stakes tangible for the Indian audience. It connected AI policy to broader economic development strategy and highlighted policy inconsistencies that needed immediate attention.


Overall assessment

These key comments fundamentally shaped the discussion by consistently challenging surface-level optimism about AI and forcing deeper examination of structural, institutional, and human factors. Ufuk Akcigit’s interventions were particularly influential in grounding the conversation in economic realities and long-term systemic effects. Anu Bradford’s contributions reframed conventional wisdom about regulation and introduced philosophical dimensions about human agency. Iqbal Dhaliwal provided practical frameworks that made abstract concepts actionable. Together, these comments transformed what could have been a typical ‘AI will solve everything’ discussion into a nuanced examination of how technology interacts with existing power structures, institutions, and human capabilities. The conversation evolved from optimistic use cases to structural constraints, from technical possibilities to implementation realities, and from immediate benefits to long-term systemic risks – creating a much more sophisticated and policy-relevant dialogue.


Follow-up questions

What will happen to creative destruction in the future with AI, particularly in the foundational layer versus application layer?

Speaker

Ufuk Akcigit


Explanation

This is critical for understanding long-term economic impacts and market concentration risks in AI development


Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies?

Speaker

Ufuk Akcigit


Explanation

Understanding underlying structural issues is essential before AI can effectively transform business environments in developing countries


How can we ensure the foundational layer of AI remains contestable and doesn’t become overly concentrated?

Speaker

Ufuk Akcigit


Explanation

Market concentration in foundational AI could limit innovation and competition, affecting downstream applications


How can we keep universities healthy in the AI ecosystem to maintain open science and spillovers?

Speaker

Ufuk Akcigit


Explanation

The migration of AI talent from academia to industry is reducing open science and could harm future innovation


How can we design procurement systems to ensure sufficient competition and avoid lock-in with AI technologies?

Speaker

Michael Kremer


Explanation

Poor procurement could lead to monopolistic situations and reduced quality in AI services for governments


How can we adapt government systems and workers to effectively use AI technologies?

Speaker

Michael Kremer


Explanation

Government adoption challenges could prevent the poor from accessing AI benefits in public services


How can we train healthcare workers and other professionals to effectively use AI diagnostic tools?

Speaker

Iqbal Dhaliwal


Explanation

Studies show that even superior AI tools can reduce efficiency if users aren’t properly trained to trust and use them


How can we adapt existing power structures and systems to accommodate AI-driven decision making?

Speaker

Iqbal Dhaliwal


Explanation

Resistance to scaling AI solutions often stems from concerns about losing human discretion and power


How can we reform labor market regulations to balance AI adoption with employment protection?

Speaker

Ufuk Akcigit


Explanation

Current regulations may incentivize AI adoption over human hiring, potentially accelerating job displacement


How can we develop robust governance frameworks to prevent abuses in AI-powered poverty targeting?

Speaker

Johannes Zutt


Explanation

While AI could enable precise poverty interventions, inadequate governance could lead to misuse or discrimination


How can we ensure humans don’t become overly dependent on AI and lose critical thinking capabilities?

Speaker

Anu Bradford


Explanation

There’s a systemic risk that outsourcing thinking to AI could diminish human cognitive abilities and creativity


What are the early indicators of market concentration in AI and how should we monitor them?

Speaker

Ufuk Akcigit


Explanation

Understanding concentration trends is crucial for policy interventions before market dominance becomes entrenched


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Harnessing Collective AI for India’s Social and Economic Development

Session at a glance

Summary, keypoints, and speakers overview

Summary

The panel opened by likening the debate on AI for the collective good to an “Avengers” narrative, assigning each speaker a superhero persona to highlight diverse viewpoints on technology’s societal role and asking whether AI will become an ally or a destructive “snap.” [1][13-15]


Professor Seth argued that AI should shift from answering isolated queries to coordinating whole populations during events such as floods or tax filing, turning coordination itself into a form of intelligence; he emphasized that this requires new technologies, cross-sector partnerships, and proactive policy guidance rather than leaving development to market forces. [25-31][32-33]


Professor Nirav described many societal challenges as socio-technical multi-agent problems, noting that individual optimization often yields local maxima that fail to maximize social welfare; he cited ride-sharing and epidemic prevention as domains where a global optimum would better serve collective needs. [38-55][47-56]


Professor Manjunath explained that recommendation systems act as learning agents that continuously nudge users by shaping the utility functions they optimize, thereby altering preferences at scale; he pointed to the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact and argued that these systems function as powerful advertisements that make repeated exposure highly persuasive. [77-84][90-99][94-99]


Antaraa illustrated AI’s governance role through a Maharashtra project that gathered 380 000 citizen inputs via a chatbot and made this feedback mandatory for future law-making, showing how AI can amplify citizen voices while requiring transparent design to ensure equity; Kushe added that the greatest sustainable value of AI lies in personalized services that generate new revenue rather than simple cost-saving replacements, and the panel agreed that public education about AI is more effective than trying to block malicious use. [121-130][288-290][185-204][248-251] They concluded that if AI is built to enhance connectivity, give citizens a genuine voice, and be governed with transparency, tangible everyday improvements could be felt within five years. [301-311][312-321]


Keypoints

Major discussion points


AI as a coordination tool for whole populations, not just individual assistants – Seth argues that future AI should help coordinate large groups (e.g., flood victims, tax payers) and that this requires new technologies, partnerships, and a shift away from “AI-for-profit” pathways [24-32]. He later stresses that the biggest risk is widespread use without public understanding, not malicious intent [248-251].


Multi-agent and socio-technical systems as a framework for solving collective problems – Nirav explains that many social challenges (ride-sharing, pandemics, etc.) are inherently socio-technical and can be modeled as interacting agents, allowing a move from local to global optima and better social welfare [36-55].


Recommendation systems and algorithmic nudging shape preferences and can amplify bias – Manjunath describes how learning agents infer user utility functions, subtly steer choices, and can dramatically alter preferences over time, effectively acting as powerful advertisements [77-96].


AI in governance can both empower citizens and reinforce institutional power – Antaraa details a large-scale citizen-feedback chatbot used by Maharashtra, showing how AI can amplify voices when designed transparently [121-130]; she later argues that AI should shift power toward citizens by reducing information asymmetry [237]. Manjunath counters that institutions, with their resources, are more likely to capture AI benefits [294-297].


Impact of AI on work: replacement vs. reshaping and value creation – Kushe highlights that simple task automation often fails to sustain cost savings, whereas AI that enables uniquely human-scale personalization unlocks far greater value (e.g., revenue uplift) [185-202]. In the rapid-fire segment he predicts AI will primarily reshape jobs rather than merely replace them [257].


Overall purpose / goal of the discussion


The panel, framed through an “Avengers” metaphor, aimed to explore how AI can be harnessed for the collective good, by improving coordination, fairness, and citizen participation, while identifying technical, ethical, and governance challenges that must be addressed to prevent harm and ensure equitable outcomes [13-15][20-22].


Overall tone and its evolution


Opening (0:00-4:00): Playful, optimistic, and metaphor-rich, setting a collaborative mood.


Middle (4:00-22:00): Shifts to analytical and cautionary as experts present technical concepts (population-level AI, multi-agent models) and raise concerns about algorithmic nudging, government over-reach, and unintended resource consumption [58-68][133-160].


Rapid-fire & audience Q&A (22:00-45:00): Becomes pragmatic and solution-focused, with concise answers, concrete examples (Maharashtra chatbot, job-impact figures), and a mix of optimism about new value creation and realism about regulatory gaps [121-130][185-202][237][294-297].


Closing (45:00-53:00): Returns to a grateful, hopeful tone, thanking participants and emphasizing the need for continued collaboration [501-506].


Thus, the conversation moves from an enthusiastic framing to a nuanced, sometimes uneasy examination of AI’s societal role, ending on a constructive, forward-looking note.


Speakers

Moderator (Janhavi) – Moderator of the panel; serves as the voice asking questions.


Professor Seth Bullock – Professor; expertise in collective AI, coordination, societal systems, and shared values [S3][S4].


Professor Manjunath – Professor; focuses on recommendation systems, algorithmic bias, and AI ethics [S5][S6].


Professor Nirav Ajmeri – Professor at the University of Bristol; specializes in multi-agent systems and socio-technical networks [S21].


Antaraa Vasudev – Founder/Leader at Civis (NGO); works on civic technology, AI for citizen engagement and governance [S13][S14].


Kushe Bahl – Senior leader (Partner) at McKinsey; leads the McKinsey Digital and McKinsey Analytics practices in India; expertise in AI implementation, consulting, and scaling AI for business [S28][S29].


Audience Member 1 – Founder of Corral Inc. [S10].


Audience Member 2 – Participant from Germany (group affiliation). [S25].


Audience Member 3 – Audience participant (no specific role mentioned). [S1].


Audience Member 4 – Intellectual property and business lawyer. [S23].


Audience Member 5 – Audience participant (no specific role mentioned). [S7].


Speaker 3 – Unspecified speaker (role/title not provided). [S15].


Additional speakers:


(None)


Full session report

Comprehensive analysis and detailed insights

Opening & framing – The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good, and the moderator asked whether AI would become an ally or the “great snap” that could threaten society [1][13-15].


Population-scale AI – Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population-scale coordination (i.e., coordinating whole groups of people rather than individual queries). He described intelligence as the ability to orchestrate whole communities (flood victims, patients with a common disease, or taxpayers) through shared knowledge and coordinated action [24-33]. To realise this, he called for new technologies, delivery models, and cross-sector partnerships among researchers, private firms, non-profits and governments, warning that reliance on “path of least resistance” commercial tools would be insufficient [31-33].


Multi-agent socio-technical systems – Professor Nirav Ajmeri framed many societal challenges as socio-technical multi-agent systems, explaining that intelligence emerges from the interaction of human and technical agents and that optimisation for individual users often yields local maxima that do not serve overall social welfare [36-55]. Using ride-sharing and pandemic-prevention examples, he showed how a global optimum derived from multi-agent modelling could improve collective outcomes and fairness [47-56].
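Ajmeri’s local-versus-global point can be made concrete with a toy ride-sharing example (an illustration of the general idea, not code or numbers from the session): with hypothetical travel costs, riders who each greedily grab the cheapest available driver can collectively end up worse off than under a centrally coordinated matching that minimises total cost.

```python
# Toy sketch of local vs. global optima in a two-rider, two-driver
# ride-sharing market. All names and costs are hypothetical.
from itertools import permutations

# Travel cost for each (driver, rider) pairing.
cost = {
    ("d1", "r1"): 2, ("d1", "r2"): 3,
    ("d2", "r1"): 4, ("d2", "r2"): 9,
}

def greedy_assignment(riders, drivers):
    """Each rider, in turn, takes the cheapest driver still available."""
    free = set(drivers)
    plan = {}
    for r in riders:
        d = min(free, key=lambda d: cost[(d, r)])
        plan[r] = d
        free.remove(d)
    return plan

def global_assignment(riders, drivers):
    """Brute-force the matching that minimises total (social) cost."""
    best = min(
        permutations(drivers),
        key=lambda p: sum(cost[(d, r)] for d, r in zip(p, riders)),
    )
    return dict(zip(riders, best))

riders, drivers = ["r1", "r2"], ["d1", "d2"]
g = greedy_assignment(riders, drivers)  # r1 grabs d1 (2), forcing r2 onto d2 (9): total 11
o = global_assignment(riders, drivers)  # coordinated: d2->r1 (4), d1->r2 (3): total 7
```

Here each rider’s individually rational choice yields a total cost of 11, while the coordinated matching achieves 7, which is the kind of gap between local maxima and social welfare the panel discussed.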


Recommendation systems & nudging – Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions and continuously nudge preferences. He noted that the utility functions are set by platform owners, not users, allowing platforms to reshape tastes over time and act as powerful, personalised advertisements [77-84][94-99]. He cited the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact when recommendation engines “go berserk” [90-92].


AI-enabled governance example – Antaraa Vasudev presented a concrete example from Maharashtra, where a simple chatbot collected 380 000 citizen inputs (voice notes, text, drawings) and fed them into the policy-making pipeline, making citizen feedback a mandatory consideration for future laws [121-130]. She stressed that such systems must be transparent, accessible and equitable, and argued that AI can reduce information asymmetry to close power gaps [109-115][237]. Later she expanded the vision, noting that disaggregation of civic-tech platforms can enable decentralized control and broader citizen participation [380-386].


Rapid-fire exchange – In a brief rapid-fire segment, Antaraa asserted that AI shifts power toward citizens by amplifying their voices [260-262], while Professor Manjunath countered that institutions with greater resources are more likely to capture AI benefits, making it difficult for citizens to compete [280-283]. He also warned that algorithms can hide bias, a point raised during the same exchange [280-283]. Professor Bullock warned about the next wave of agentic AI, describing purposive agents that communicate with each other and could generate cascades of resource consumption from trivial requests (e.g., a picture of a dog on a skateboard), disadvantaging other users unless social responsibility is embedded in their design [58-68].


Role of government – Professor Manjunath critiqued heavy-handed state direction, citing India’s CDOT project and Japan’s Fifth-Generation computing initiative as examples where governments, as generalists, failed to keep pace with rapid technological change [139-152][155-162]. He advocated an enabling and monitoring stance rather than micromanagement, a view echoed by Antaraa’s call for transparent frameworks and by an audience member who cited recent bans on social-media use for minors in Spain and Australia as useful early guardrails [109-115][409-416].


Employment impact – Kushe Bahl distinguished between simple task replacement and value-creating personalisation. He argued that replacing routine tasks rarely yields sustainable savings, whereas AI-driven personalised services, such as recommendation engines that boost revenue by up to ten percent, unlock far greater economic value and reshape rather than merely replace jobs [185-202][257].


Education concerns – The audience’s rapid-fire reactions (excitement, anxiety, FOMO, etc.) were tallied by the moderator, highlighting mixed emotions about AI’s role in learning [320-327]. Bahl warned that AI-generated content, while correct, often lacks “soul” and inspiration, making it unsuitable for deep learning [375-378]. Manjunath shared a classroom example where a student used ChatGPT to fabricate data, illustrating how instant AI feedback can bypass step-by-step learning and undermine understanding [490-496].


Intellectual-property & consent – Professor Bullock noted that generative models are already trained on copyrighted material without consent and that legal battles over musicians’ and artists’ rights are just beginning [470-478]. He proposed the development of consent-based data ecosystems in which participants voluntarily share information for collective benefit [476-478].


Regulatory experiments – An unnamed speaker highlighted early regulatory experiments restricting AI-enabled platforms for children, arguing that such steps, though imperfect, signal accountability and may influence industry behaviour [409-416]. Manjunath reinforced the need for agile, enabling regulation rather than rigid micromanagement, noting the difficulty of imposing guardrails on fast-moving technology [139-152][155-162].


Audience questions – When asked about AI’s impact on young minds, Professor Seth Bullock responded that education systems must adapt to foster critical thinking alongside AI tools [340-345]; Kushe Bahl added that over-reliance on AI can erode foundational skills [350-354]. A question on regulation of AI in education elicited Manjunath’s answer that standards should be flexible, outcome-oriented, and regularly updated [470-476].


Closing visions – Professor Bullock envisioned AI delivering a greater sense of connection by breaking down language, expertise and distance barriers, enabling richer citizen-government interactions that go beyond simple voting [301-311]. Kushe Bahl offered a concrete ‘unicorn-scale’ impact scenario: if AI could raise the earnings of India’s 150 million self-employed workers by just ₹600 each, the aggregate effect would be transformative [312-321]. Antaraa reiterated that disaggregated, transparent AI systems can broaden access to governance, while Professor Ajmeri highlighted the potential for collective decision-making at scale, and Professor Manjunath warned that the quality of AI-generated output must be critically assessed [380-386][470-476].


Session transcript

Complete transcript of the session
Moderator

sci-fi movies that we grew up watching, and what it primarily also reminds me of, in specific terms, is the Avengers. Right? The Avengers are the superheroes, and they’re trying to, you know, save the world and decide how one can do that, and they all have very different strengths. So I was wondering: if all our panelists were superheroes, who would they be? Introducing our panelists, I have our first Avenger, Captain America: principled, steady under pressure, obsessed with doing the right thing even when it’s unpopular. Professor Seth is exactly that, and it reminds me of the lens that he brings in. He studies how societies hold together, how coordination succeeds or fails, and why systems need shared values as much as intelligence. Next we have Spider-Man. Spider-Man’s strength isn’t brute force; it’s his ability to navigate through complex webs, adapt quickly, and see connections that others miss.

Professor Nirav thinks the same way. At the University of Bristol, his work focuses on multi-agent systems, because societies, like Spider-Man, are all about networks. Antaraa Vasudev reminds me of Captain Marvel: operating at scale, moving across institutions, pushing boundaries. Through her NGO, Civis, she uses AI to amplify citizen voices and reshape how power flows between governments and people. And of course we have Iron Man, who is obsessed with execution, iteration, and making ideas work in the real world. Mr. Bahl is our Iron Man, focused on execution, scale, and impact in the real economy. He leads the McKinsey Digital and McKinsey Analytics practices in India. Last but not the least, no team is complete without Bruce Banner.

Deeply aware of the challenges that we face, of AI’s raw power, and focused on how to control it before it controls us, Professor Manjunath’s work reminds us that intelligence at scale can cause damage if we don’t fully understand its consequences. My name is Janhavi, and today I’m embodying Jarvis, except instead of being the one answering the questions, I’m the voice asking them. Every Avengers story has a Thanos. The real question is whether AI becomes our ally or the great snap that we didn’t see coming. So when we talk about AI for collective good, we’re not just talking about smarter apps; we’re talking about systems that influence how people live, work and participate in society. Before we start, I would request all my panelists to just stand up for a quick photo op.

So, a quick show of hands from the audience. How many of you feel that technology today is only with those who have power or resources or information, that technology has been reserved for the elite few? Do we have a show of hands in the house, by any chance? Okay, clearly we don't really have an opinion as such over here. But moving on: Professor Seth, when we look at society, you know, governments, markets, or online platforms, we often assume that problems exist because we don't have enough intelligence or data. Your work suggests something a little deeper: that perhaps failures come from how decisions interact at scale. From a systems perspective, do you think our biggest societal problems are intelligence problems or coordination problems?

Professor Seth Bullock

Thanks a lot. So it's great to be here in India. I think this topic is extremely relevant to both the UK, where I'm working, and India. And I think the answer is that coordination is intelligence in the situations we're interested in. We're used to situations now where we interact with an AI as an individual: one person asks the AI a question and gets one answer. But really there's the potential for us to develop AI systems that are designed to support a whole population at once. A population of people affected by a flood, a population of people all coping with the same disease or medical condition, a population of people all trying to get taxis to and from a summit.

So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that's quite a different way of framing AI from many of the systems we're hearing about, and it requires different technologies, different ways of delivering it to people, different ways of engaging with populations. So I think that's something that can only really be achieved by partnerships between researchers, companies, not-for-profit organizations, and governments, and it probably requires interventions in the way we promote AI, rather than letting the path of least resistance develop commercial AI tools. I think there are real opportunities to engage with the idea of making AI for populations.

Moderator

Wonderful. Professor Nirav, you’re also from the University of Bristol and your work focuses on multi -agent systems where basically intelligence emerges from all these entities interacting with one another. What kind of social problems are best suited for these multi -agent approaches?

Professor Nirav Ajmeri

Thanks, Chandni. Good question. I think Seth has partly already answered what multi-agent systems could do. All the problems we're thinking about here are socio-technical in nature: there are social entities, people and organizations, which interact, and all of us also use technical tools. These could be intelligent agents; these could be applications, software that we use. And all of these combined together help us. So all problems, all domains, are socio-technical in nature, and multi-agent systems can inherently encapsulate socio-technical systems. That is how I would look at it. If you're talking about, say, ride sharing, for instance, or hailing a ride, the current system could be optimizing only for me, right?

And then what we end up with could be a local maximum. If we are optimizing for each one of us individually, we are finding a local optimum for each of us, but we may not be finding the global optimum. And the global optimum would map to social welfare. What does social welfare mean? Does it mean just maximizing the experience for everybody, or do we mean a satisfactory experience? So I think any problem we think about, say epidemic or pandemic prevention, or making sure resources are allocated properly, all of that would be multi-agent in nature.
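
The local-versus-global point can be made concrete with a toy sketch (my construction; the riders, drivers, and distances are invented for illustration): two riders each greedily grab their nearest driver, versus one assignment chosen for total welfare.

```python
# Toy illustration of local vs. global optima in ride hailing.
# The scenario and all numbers are made up for this sketch.
from itertools import permutations

# pickup distance[rider][driver], in minutes
dist = [[2, 3],   # rider 0 is slightly closer to driver 0
        [3, 9]]   # rider 1 is much closer to driver 0

# Greedy: riders choose in order; each takes their nearest free driver.
free = {0, 1}
greedy_total = 0
for r in range(2):
    d = min(free, key=lambda d: dist[r][d])
    greedy_total += dist[r][d]
    free.remove(d)

# Global: try every assignment and keep the least total pickup time.
global_total = min(sum(dist[r][p[r]] for r in range(2))
                   for p in permutations(range(2)))

print(greedy_total, global_total)  # 11 6
```

Each rider's locally optimal choice yields a total wait of 11 minutes, while the welfare-maximizing assignment gets everyone served in 6, which is the gap a coordinating multi-agent system is meant to close.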

Moderator

Interesting. Professor Seth, do you have anything that you’d like to add on to that?

Professor Seth Bullock

So, yeah, I think we've heard a little from some AI leaders about a next wave of AI that will be agentic, where we won't just be interacting with ChatGPT as a monolith; we will be interacting with an agent that has purposive aims and is helping us to achieve tasks. And it might do that by communicating with other agents. Whenever we interact with AI, we would in fact be interacting with a population of AIs that are sending each other information and tasking each other with different jobs to do. And it might not be clear whether one of those agents is artificial or a person. So if we enter into that sort of world, I think we have to really understand whether those agents are interacting with each other in a way that is likely to advantage the community of users, because the amount of resources that will be consumed by these populations of agents, and the potential for them to interact in ways that have unforeseen consequences for other people, are going to ramify.

When we do that manually, we can really only hold so many interactions with other people at once, and so we're limited in scale: one request does not create a cascade of other requests in the system. But as we move to artificial systems, that scaling will rapidly increase, and potentially one trivial request by me, asking a computer to make a picture of a dog riding a skateboard, could create a whole wave of different agentic interactions that consume loads of resources and, depending on what I've asked for, disadvantage other people.

So embedding some kind of social responsibility into those agents, some appreciation for how their behavior impacts other agents in the system, I think is going to be imperative. Otherwise, we end up with systems that create conflict and contestation for resources.
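
The cascade described here can be sketched with a simple fan-out model (my construction, purely illustrative): if each agent request triggers k sub-requests across d layers of agents, the downstream cost of one request grows geometrically.

```python
# Toy fan-out model of agentic request cascades (illustrative only):
# one request at depth 0 spawns k sub-requests per agent, d layers deep,
# so the total number of calls is 1 + k + k^2 + ... + k^d.
def total_requests(k, d):
    return sum(k**i for i in range(d + 1))

print(total_requests(1, 3))  # 4: a human-style chain, one call per step
print(total_requests(4, 3))  # 85: even modest agentic fan-out ramifies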

Moderator

Interesting. Whenever I'm on Instagram or Facebook, let's say I'm talking to my friends and I'm really thinking about buying this Dyson or some particular product, it's always weird to me how, the next time I open the app, it's almost as if the app has heard me, and I start seeing ads for those exact things, even if I've not searched for them; I've just talked about them to someone. Has anybody here experienced the same thing, a quick show of hands, where you feel that maybe the choices we make, are they really our choices, or are we being nudged by an algorithm somewhere? So, Professor Manjunath, your work focuses so much on recommendation systems, and we often hear that these algorithms are just tools.

But your research suggests that they actively shape what people see, buy, and believe. How much of human behavior today is genuinely chosen by us, and how much is subtly nudged by these algorithms?

Professor Manjunath

Yeah, recommendation systems, and the way they shape many of our feelings, our attitudes, and our habits, have been a significant concern for me for a while. One of the things you have to think about when you look at recommendation systems is that they're essentially learning agents. They want to learn your preferences, your likes, your dislikes, etc. And to do that learning, they give you options, different kinds of options, and then see how you react. So the first way in which the interaction between you and the learning system happens is this: they show you a variety of things and watch the way you react.

Your reaction is usually captured in some kind of utility function, something that the algorithm believes is positive for whoever is designing that algorithm. And what exactly that utility function is essentially determines what gets recommended to you in the future and what the system learns about you. Now, there is no such thing as the right utility function, and every organization will figure out what they want for themselves.

We have actually built several mathematical models of this, and they show that, even assuming a benign recommendation system, depending on the kind of learning algorithm it uses, where I start off with one set of preferences, by the end of the day, or over a certain time horizon, my preferences can be dramatically different. So there is a certain nudge steadily applied by these algorithms, and the direction of that nudge depends on the kind of algorithms they use and the kind of what we call utility functions they use: what exactly they are trying to optimize for themselves. And if you look at various analyses of many of these, especially Facebook's algorithms, there is a very famous recent book by Sarah Wynn-Williams, who was an insider. You can see the impact that had on some sections of society elsewhere when the whole recommendation system went berserk.

So there is definitely a huge impact on the population's preferences from recommendation systems. And if you want a quick way to understand it: recommendation systems are essentially advertisements, and advertisements definitely shape our preferences; if you see something more often, you will start thinking about it, and so on. The difference, at least in my opinion, between the advertisements you see on the street and the advertisements coming from a recommendation engine is that with the latter you are significantly more receptive. You are looking to do something, and when you are looking for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there.

So the impact of recommendation systems on the population’s preference, in my opinion, is spectacularly large.
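
That preference drift can be sketched with a deliberately simplified model (my construction, not Prof. Manjunath's actual models; the update rules and numbers are illustrative assumptions): a greedy recommender learns an estimate of my taste, and each exposure nudges my true preference toward whatever it shows.

```python
# Toy preference-drift simulation (illustrative assumptions throughout).
pref = [0.55, 0.45]        # my true preference over two topics: a mild lean
est = [0.5, 0.5]           # the recommender's estimate of my engagement
alpha, drift = 0.1, 0.02   # recommender learning rate; exposure effect

for _ in range(200):
    shown = max(range(2), key=lambda i: est[i])       # greedy recommendation
    est[shown] += alpha * (pref[shown] - est[shown])  # it learns my taste
    pref[shown] = min(1.0, pref[shown] + drift)       # exposure reinforces it
    other = 1 - shown
    pref[other] = max(0.0, pref[other] - drift)       # the rest fades

print([round(p, 2) for p in pref])  # [1.0, 0.0]
```

A mild 55/45 lean becomes absolute: the recommender's greedy feedback loop and the exposure effect together push the simulated user to one extreme, which is the "nudge" the panelist describes.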

Moderator

Wow. That's quite a lot to digest. I really wonder how much my personality is my own at this point. Antaraa, from your work in civic engagement: when AI enters governance, is it primarily helping citizens be heard, or is it helping governments manage complexity? And where do citizens struggle the most when technology becomes the interface between them and the government?

Antaraa Vasudev

Thank you for that question. Just want to make sure that everyone can hear me. Thank you. Some problems, like on-stage mics, AI cannot solve. Thank you for that lovely question, and it's lovely being here with all of you today. Janhavi, to your point, I think AI currently is being used in both ways. It's allowing us to engage with citizens who perhaps have little or limited knowledge of law and policy, to help them clarify doubts, to let them air their grievances, and to let them actually understand the frameworks of policy and law that govern their lives.

But in addition to that, it is also being used in a very large way for optimization. In a country of India's size and diversity, optimization at that scale is perhaps the only way to tackle the complexity of what governance does. Better still is to build strong and robust frameworks for how governance can utilize AI, frameworks that are transparent, accessible, and have a certain equity built in, which is really what this panel is also discussing today. And once you have that, these optimization solutions can perhaps be built with AI rather than being citizen-led. So at Civis, we've actually been working on gathering a lot more public feedback on draft laws and policies using AI.

And again, we see optimization at both ends, but we are very, very mindful that the frameworks governing that level of optimization are what need to be designed before we race to the next model.

Moderator

Got it. Can you share some examples of the kinds of laws that have been impacted, or the kind of work that you've done? Have you worked with different state governments, where citizens of that particular state have been able to engage with the government about a certain law or practice? Thank you.

Antaraa Vasudev

Absolutely. So I'll share one example from recent work with the government of Maharashtra that Civis led. The government of Maharashtra actually undertook a very ambitious mission of trying to understand how the next 22 years of the state can be governed by citizens' voice. Now, this is something which is honestly quite remarkable on their part. What Civis was able to do is build a very easy-to-use chatbot: you could send in a voice note, you could send in text messages, and we even had people send in drawings and letters that they had personally written to the Chief Minister. Civis aggregated all of that feedback. That was almost 3.8 lakh citizen responses from 37 districts across Maharashtra.

And that was aggregated, sorted through, and then shared with the government as well. The Viksit Maharashtra report, as it's called, is now publicly available; the government of Maharashtra has put it out on their own website. But in addition to that, what's been really interesting is that they have said that every law that comes out in the state in the coming years has to, in some way, factor in what citizens are saying about that problem area or the district for which the law is being made. And you can only do that if you're able to actually engage at scale. And I think that's the beauty of what that entire project showed.

Moderator

Absolutely. Professor, how do you feel about the government, in terms of what approach it should be taking when it comes to AI and technology?

Professor Manjunath

Yeah. One of the fears that I have when the government gets involved in technology development is that they want to start controlling the direction; they want to say what should be done at a very micromanaging kind of level. I recently had an op-ed, on Tuesday I think, in the Financial Express, that a colleague of mine and I wrote. We looked at history, at the spectacularly successful and spectacularly unsuccessful involvements of government when it wanted to direct technology. So I'll just give you two quick examples. In India, about 40 years ago, there was something called C-DOT. It developed some spectacular technology when it was left alone.

Then the government started to direct it and micromanage the flow of technology. Many of you probably don't even know C-DOT; they don't even come to the IIT Bombay campus, for example, for recruitment. That's just one example. If you look at Japan, to give you another spectacularly unsuccessful story: many of you are too young to know about something called the fifth-generation computer systems project. The AI boom that we see today was originally planned to be launched in Japan in the 1980s. There was a huge project that the government wanted to micromanage, developing native hardware for AI, and everybody thought they would be successful. It was a spectacular failure. The failure essentially stemmed from the fact that the government was directing everything.

Governments are generalists. People who run governments are generalists. They are brilliant people; they know society; they understand administration. But they don't understand technology, especially a technology that is moving this fast and has a very large surface area. They cannot control it. So it is best that they just enable, and let the people on the ground, people with a track record and people who want to take risks, manage it. They should be enablers. They should also be monitors: nudging it in a certain direction, making sure bad things don't happen. But that's a very hard task. So the biggest role the government should have is just to enable, and step away.

Just to give you one positive example: NPCI in India is a spectacular example of the government starting something and then letting the private sector and the technologists handle it. In the US, many of you may be familiar with the internet; it was exactly that. It was just a vision that somebody had, who said, let's build this, and the technology got built. That's the way I would think the government should handle it, but we'll have to see how that goes.

Moderator

So just a quick question for the audience. You guys can shout the answers out loud. What emotions come to your mind when we think about AI? Are we feeling excitement? Are we feeling anxiety? Are we feeling FOMO? What are we feeling, guys? Curiosity. Dangerous, somebody said. What else? Definitely opportunity. Opportunity. The man over there? Confusion. Confusion. Anything else? Responsibility. Responsibility, fantastic. Great. So Mr. Bahl, this question is for you. There's a lot of anxiety, and a little bit of excitement as well, about AI maybe replacing jobs, especially in India's tech and services sector. From your experience working with different companies, where is AI genuinely replacing humans, and where is it actually creating new forms of value and roles?

Kushe Bahl

Yeah, that's a great question. Thank you. Let me try and give you the very brief answer, because I could talk about this for a long time. There is a lot of focus on AI being used to replace humans in particular operations. So, you know, when you have an AI taking a call center call, that's the simplest example of that. And the way the math works is that if you're spending 100 rupees on something, you can save roughly 40% of that by replacing it with AI, with the current economics of the way it works. And obviously, if you're in a high-cost geography, you can save more.

Even in a country like India, you can save that much. What we have found, though, is that in most of the cases where you do this simple replacement of a human with AI, the cost reduction doesn't really sustain. There's a famous example of Klarna in Europe, where they brought back a lot of the call center costs: they had to bring back some of the senior customer support people, because a lot of the conversations were not going well and they were losing customer satisfaction. The same thing with IT: you can replace a lot of developers, but then people will come back with more projects and there'll be more things to be done.

The real value unlock, which is sustaining, is actually when you get AI to do something which humans can't do, or are not able to do because it's so time-consuming and so difficult. For instance, a genuinely personalized customer engagement engine, using the kind of recommendation system that he was talking about, which actually engages in a personalized way with every customer that I have as a company, or every entity that any organization is dealing with. That genuinely has value. It creates a huge value unlock. So, for instance, if I spend 2-3% of my revenue on, say, customer support, and even if I save 40% of that, I'm saving like 0.8% or 1%. But if I can generate even just 10% more revenue from existing customers with hardly any marketing cost, and I make a 30-40% margin on that, I'm getting 3-4% more to the bottom line.

So that is huge; it's like almost 5x of what you can save. The value unlock is very large, and that's sustainable, because you're really getting AI to do something no human being could: no one is going to sit and figure out, for millions of customers, exactly what kind of personal message to send, because the amount of experimentation you have to do, and the kinds of connections you have to draw between individuals and similarities and so on, which the recommendation engines are based on, are impossible to do humanly. And that's where the biggest value unlocks are, at least that I'm seeing, and those are sustainable, and they're actually applicable even in high-cost geographies. It's just that, unfortunately, a lot of the initial focus of the innovation has been on just the savings, you know, the easy stuff, right?

Have an AI agent replace a human agent. But that's not the real power of what AI can bring. So hopefully we'll see a lot more of that type of innovation going forward as well.
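
A back-of-the-envelope check of the arithmetic in that answer, treating the speaker's figures (2-3% of revenue on support, roughly 40% savings, a 10% revenue uplift at a 30-40% margin) as assumptions and taking midpoints:

```python
# Back-of-the-envelope comparison of the two AI plays described above.
# All percentages are the speaker's; the midpoints are my assumption.
revenue = 100.0                  # normalize annual revenue to 100
support_cost = 0.025 * revenue   # 2-3% of revenue on support -> take 2.5%
savings = 0.40 * support_cost    # AI trims ~40% of that cost
uplift = 0.10 * revenue          # 10% more revenue from existing customers
margin = 0.35                    # at a 30-40% margin -> take 35%
extra_profit = uplift * margin

print(round(savings, 2))         # 1.0: the ~1% bottom-line saving he cites
print(round(extra_profit, 2))    # 3.5: the ~3-4% bottom-line gain
```

With the midpoints the gain is about 3.5x the saving; at the favorable ends of his ranges (0.8% saving versus a 4% gain) it reaches the "almost 5x" he mentions.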

Moderator

Right. I think I see a lot of students here today. What kind of backgrounds do you all come from? Hands up if you're from STEM at all? STEM backgrounds? Okay. Anybody from business, humanities, arts? Okay. So I read this LinkedIn post; I'm not sure whether it's a great post or not. Apparently it's going to be a little tough for STEM students to get into this world of AI, because they could be replaced a lot more easily. What kind of measures, businesses, or degrees should one essentially pursue to sustain in this world of AI, do you think? What should the next five years look like?

Kushe Bahl

Yeah, I think there is some near-term potential impact on jobs, particularly on entry-level coding jobs and so on. But honestly, firstly, nobody knows exactly how the math is going to work. Between the new work that people do for AI enablement and the old work that may get more efficient because of AI-enabled coding and so on, will we see a net increase or decrease in employment? Nobody actually knows. There are many, many forecasts done by economists much more qualified than me. But what one can see is certainly that enterprise adoption of AI has not really happened yet. So right now, the impact of all this has not really arrived.

So you're seeing some initial hit, maybe: okay, this year I have promised I'm going to use AI and reduce my budget by a certain amount, so I'll stop hiring. That's the kind of, I would say, almost knee-jerk impact you're seeing right now. What eventually plays out will be a mix of: okay, I will do the work more efficiently and use a lot more automation, but now I have a lot more things to do as well. So I would say that students in general, forget just STEM, need to focus a lot on how they can use AI to do the best possible work in their field, and in every possible field.

So whether I'm studying marketing, or a science degree, or any form of the humanities, journalism, whoever I am, there are so many things I can actually be doing with AI, things which were not humanly possible earlier. And that's really what students should be equipping themselves with: potentially innovating and creating things around AI, but also personally equipping themselves to leverage it. And I think there are lots of examples of how that can play out, and it will serve people really well.

Moderator

Absolutely. We are now going to get into a quick rapid-fire round, and then I want to open up the floor for audience questions. The only rule here is that I want short answers only, no explanations; we only have 10 seconds per answer. So I am going to start off by putting Antaraa on the spot. Does AI in governance shift power towards citizens or towards institutions today?

Antaraa Vasudev

I want to say citizens because it allows for a lot more information asymmetry to be addressed which is where a lot of the power gaps come up today.

Moderator

Professor Manjunath, are algorithms today more likely to reduce bias or hide bias better?

Professor Manjunath

Hide bias. No, the options don't look right to me.

Moderator

What would you put as the options, then?

Professor Manjunath

The bias will start increasing. I don't expect training to get better in the immediate future; maybe much later. But I also want to disagree with what Antaraa said.

Moderator

I’ll come back to you for that one. Professor Seth, what worries you more, AI being used with bad intent or AI being used widely without anyone fully understanding its consequences?

Professor Seth Bullock

Well, they’re both terrible, aren’t they? I think people will always use technologies with bad intent and it can only really be addressed if a large number of people understand that technology and can then resist it. So I think the second is more important. Uplifting the public’s understanding of AI and kind of engagement with AI properly will protect us against malign uses of AI because we will be able to spot them.

Moderator

Got it. Professor Nirav, what’s harder to design, ethical individuals or ethical systems?

Professor Nirav Ajmeri

I think that becomes tricky: what do we mean by ethical, right? But if you combine ethical individuals, and we say that individuals combined together make a system, then ethical individuals.

Moderator

Mr. Bal, in India, will AI mostly replace jobs, reshape jobs or polarize jobs?

Kushe Bahl

Reshape.

Moderator

That's a very quick answer. You win the rapid-fire round. Right. Professor Seth, where does AI struggle more today, with people or with systems?

Professor Seth Bullock

I mean, I think it struggles with people, but we don't notice, because it produces such natural language. When I say AI, I'm talking about something like ChatGPT. So I think there's a disguised problem with people there, because those AIs don't really mean what they say, they don't really understand what they say, but it seems very strongly that they do. So I think that's the problem. But what's coming is AI embedded in all of our systems, and then that will create its own set of problems as well.

Moderator

Mr. Bahl, who benefits more from AI today, companies or employees?

Kushe Bahl

I would say that right now, no one is benefiting from AI. But if I were to bet, it will be companies who will benefit first. And then employees will benefit. And the whole idea of having sessions like this is that we can get the employees to learn what we talked about, right? Students equipping themselves right from college. Absolutely.

Moderator

Antaraa, for AI used in public systems, what matters more, transparency or effectiveness?

Antaraa Vasudev

Transparency, off the bat. It’s the only way that we can actually design AI for public systems. It has to be at the front and center of all of our efforts.

Moderator

Got it. Before we get into the last question for the entire panel, I do want to get your answer to Antaraa's statement, if that's fine, to the question that I had asked: does AI in governance shift power to citizens or to institutions?

Professor Manjunath

Absolutely to the institutions. They have the money to invest and to discover what's going on. There is no way citizens can beat that so easily. It requires a different... whatever. I'm not allowed to say anything.

Moderator

My last question for all the panelists before we open the floor for audience questions. If we get AI right, what is one everyday improvement people in this room would actually feel within the next five years?

Professor Seth Bullock

So I think there's a thread that runs through this, or there's supposed to be, and one thing that AI could give us is a greater sense that we are properly connected with each other and learning from each other. The possibility for AI to break down barriers between people, barriers of language, expertise, and distance, I think is huge. The kind of traditional collective intelligence that we're used to, where we put an X in a box when we vote for someone, is very, very simple, right? We can't all write an essay, like the users of Antaraa's system, and send it to the government about what we want, because there are so many people that no one can read all of those essays.

But AI can enable that kind of rich interaction. It's an example of one of the things Kush was talking about: AI delivering something that is impossible for humans to do, not just replacing something humans are already doing. So a future in which we all feel we have a voice, with AI helping us mediate between each other, I think is technically possible. There are a whole bunch of political and social barriers that could prevent it from happening, but I think five years is a timeline during which we could see the start of those sorts of systems.

Kushe Bahl

I can talk about what I'd like to see if we get AI right. We talk a lot about institutions, we talk about companies, we talk about individuals. But not enough talk happens specifically about small businesses. India is a country of self-employed people and small enterprises; I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI, and I'll talk about how, that's a unicorn. 600 rupees more for each of these 150 million people: I mean, there are a lot of large numbers in India, but it's true, right? It's a unicorn. So when we think of the next 50 unicorns, we may not think of 50 companies worth a billion dollars; we may think of 50 innovations that put 600 rupees more in the pockets of 150 million people.

And how does one do that? I mean, if you look at all the important things we use today, ride hailing, e-commerce, restaurant and food ordering, all of these were created by an institution: they make an app, and then they spend money on marketing and so on. Today, you have AI systems that are incredibly low cost. Fifty cab drivers can get organized; an AI agent can do the scheduling and so on. You have a WhatsApp chat with them, and you can just find the driver, right? There's no reason why we can't have innovation like this. Very low cost: the cost of the tokens can be funded within that ride.

It can be, right? That's all there is to running it. It's an autonomous system which just runs off publicly available infrastructure. That, to me, is the real unlock that we can see. And those same systems can then serve anyone in the world. You can do this for taxi drivers; you can do this for lawyers, and those lawyers can then serve anyone anywhere in the world. So I think that's the real unlock we are waiting for. These systems are very low cost to build. They can be built by anybody; they can be self-built by people. It just takes a group of a few of these self-employed people to get together.

And then, you know, suddenly this can go viral. So I would love to see that type of innovation coming, rather than just the stuff we know the companies will do, or the things we'll all play around with on our own LLMs.
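
A quick check of the "unicorn" arithmetic above; the 150 million and 600-rupee figures are the panelist's, and the exchange rate is my assumption:

```python
# Checking the "unicorn" arithmetic (exchange rate is an assumption).
self_employed = 150_000_000      # ~150 million self-employed people
extra_per_person = 600           # 600 rupees more each, from AI
inr_per_usd = 85                 # assumed exchange rate

total_inr = self_employed * extra_per_person
total_usd = total_inr / inr_per_usd
print(total_inr)                 # 90000000000 rupees (9,000 crore)
print(round(total_usd / 1e9, 2)) # ~1.06 billion dollars: unicorn scale
```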

Moderator

Great. Antara?

Antaraa Vasudev

Thank you. I think, building on what Seth and Mr. Baral just said, there are two things that I see happening. One is the disaggregation of systems and a lot of decentralized control mechanisms, right? When that happens, you have very fragmented channels to actually engage with institutions, to Seth’s point about building new ways of collective intelligence. What I want to see happening for all of us in the room is greater access and connectivity to public institutions, which actually fuels us to get easier access to the entitlements and benefits that the state is supposed to provide to us. If AI can get that right, if we can solve for that, I think there is a big argument to be made about that being the sort of rising tide that lifts all boats.

Professor Nirav Ajmeri

Building on what people have been saying, and lastly on Antara’s point, thinking about collectives, right? So we can build systems which work for individuals, but how do we make sure they work for collectives? Each individual has different preferences. How do we take into account different people’s preferences? How do we aggregate people’s preferences and then come up with a collective decision? If you are coming up with a collective decision, how does that decision affect various other people? How do we explain that decision to other people: hey, we have taken into account your preferences in this particular way? So we need to get that part of AI right to make sure that people have a buy-in, that people trust the system that we are designing.

So that is what I would want to see, and I think that we are moving forward with that. We are thinking about fairness, we are thinking about transparency, we are thinking about accountability, and so on and so forth.
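One classic way to aggregate individual rankings into a single collective decision, of the kind Professor Ajmeri describes, is a positional voting rule such as the Borda count. A minimal sketch (the options and ballots here are hypothetical):

```python
from collections import defaultdict

def borda(ballots):
    """Aggregate ranked preferences: each ballot awards an option
    (n - 1 - position) points, where n is the number of options."""
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position
    return max(scores, key=scores.get), dict(scores)

# Hypothetical ballots: three people rank three policy options.
ballots = [
    ["A", "B", "C"],
    ["B", "A", "C"],
    ["B", "C", "A"],
]
winner, scores = borda(ballots)
print(winner, scores)  # B wins with 5 points (A: 3, C: 1)
```

Because the rule is just point-counting, the per-option scores double as an explanation: each person can be shown exactly how their ranking contributed to the collective outcome, which speaks to the buy-in and transparency concerns raised above.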

Professor Manjunath

Yeah, I can probably say what I already see. The homework that my students submit is perfect. The essays are spectacularly written, the presentations are beautiful. The only hope that I have is that they actually understand what they say. If that happens, I will be very happy. I think the output is perfect; the understanding behind that output, I hope, will get better and better. That’s my wish.

Moderator

I’m going to open up the floor for audience questions.

Audience Member 1

My question is, sir: I want to understand what kind of impact AI will have on management consultants and the business.

Kushe Bahl

I have no idea. I have no idea, really. It’s very hard to say; every industry is going to evolve. Obviously, management consultants, like everybody else, are using AI for every possible thing that they can do with it. So they’re also trying to become more efficient, more productive with it. We don’t know what that means in terms of the reshaping of the business. If you look at past tech innovations, which have also had a very big impact on productivity in many sectors, it’s not that entire sectors have disappeared, but things have got reshaped significantly. That has happened a lot. So think of the job that consultants do: today when we do research, you don’t wait one week for somebody to go and find things from everywhere.

It comes in a few minutes. Unfortunately, I have also seen a lot of the output, like Professor Manjunath said. I find two issues right now with the current versions of AI. When it writes, it has no soul. So it’s correct, but it has no soul. And when it prepares a presentation or a piece of communication, it’s not inspiring. So it is correct, but it’s not inspiring. So the consultants will spend more time on actually communicating in a way that’s inspiring, while the basic desk work will be done for you. You spend time doing more, I would say, human tasks.

And that’s going to happen in a lot of other service jobs, right? You’re going to spend time doing what humans are truly supposed to do and are really good at, which the AI models are not able to do.

Audience Member 2

Okay, thanks. So my question is for everyone. I have a younger cousin who is in high school, and her entire life is on ChatGPT at this point. She shares everything, relationship issues, family issues, and it knows more about her than I do. And I kind of worry when I see the younger generation getting on these AI platforms. So what is your take on the impact of this technology on young minds?

Professor Seth Bullock

So, I share your concern. I have slightly older kids. I think we have to trust that we’ve been through these technological shifts before. My parents, when they looked at me watching television, had similar worries; they told me that my eyes would become square because I watched too much television. But actually, my generation became much more sophisticated consumers of television and was much more savvy about TV ads than my parents’ generation. So I think we have to listen to our children about the way that they’re using these technologies. They’re natives in this new world. I’m calibrated for a world where AI doesn’t work, where AI is not rolled out across the whole world, so I’m the wrong person, really, to ask about how AI is going to change people. We should ask young people how they’re using it, and engage with them before they start to use their AI in secret, in ways that we don’t understand.

Kushe Bahl

I have a funny answer and a short answer. I think the real danger actually is not with the ChatGPTs of the world, but with the earlier addictive systems like the Instagrams of the world, right? Because they are genuinely playing on our brain’s dopamine circuits and are genuinely addictive, and can therefore be harmful. With ChatGPT, the only thing I would say is that it makes one actually question where we are as individuals, as parents, as families, that our children prefer to communicate with a relatively soulless device which answers everything like an American therapist textbook would, right? That they prefer to talk to that than to us.

It shows what a distance we have created with each other, right? And that may be a good reminder to us as individuals of the work we have to do to rebuild bonds with each other.

Antaraa Vasudev

I think, on a very similar note to what Kushe just said, there have been studies from Youth Ki Awaaz and a number of other global youth-based organizations which have been looking at why exactly we turn to AI. And the phraseology is very interesting there, because it indicates that turning to AI is something that you can also turn away from. I think the questions really come up around exactly what was just mentioned: understanding what kinds of tactile family bonds, what kinds of lived-experience-based interactions we can keep having with the younger generation to show that AI is a part of their life, but it’s not the only part of their life.

And I think that’s maybe my hypothesis on where we’re headed there.

Audience Member 3

I have a quick follow-up, and you can connect it with the previous question also. Many countries right now are trying to ban the new AI; clearly there is evidence that it is harmful on the course it’s taking. You mentioned Instagram, or any other platform. AI is an amplifier. So unless we design something, whether it’s regulation or guardrails or whatever, what is the hope for a society not to suffer amplified harm beyond what it has already experienced, especially for the younger generation? Shall we start with you, sir?

Speaker 3

Well, I think that’s basically what I wanted to say: the countries of Spain and Australia are two examples where severe restrictions have been put on social media companies, at least with respect to access for children. And that’s an interesting experiment. One has to see what will happen, because it’s not an easy thing to do. I mean, technologically it’s not easy, and legally I’m sure there are a lot of loopholes in all of this. We have to see how that evolves, and potentially apply similar kinds of guardrails with respect to AI. That’s the view, at least the view that I have, on that matter.

Professor Manjunath

No, it has to start somewhere. I mean, this goes exactly to the point that I made earlier. Generalists in government cannot handle the pace at which technology can move. You cannot put guardrails on at the beginning; the moment you know something is happening, you have to get into the act as quickly as possible. Somebody is making an attempt, so let’s understand what’s going on. What will happen is something that we have to see. What was interesting, at least in that attempt, was the way in which the social media companies reacted to both the Australian and the Spanish bans. To me, the most interesting part was that they all said it was too fast, that they had not thought things through. And then I remembered what Facebook’s slogan was: move fast and break things. They are allowed to move fast, but the legal system is not allowed to experiment. That seemed like an interesting contradiction for me to study.

Professor Seth Bullock

Relatedly, the first AI summit in London was very closed, right? Politicians and the leaders of big tech firms. And the idea that a couple of years later governments would actually be legislating in ways that limited, in this case, social media companies is very good news. After London, you could imagine that regulatory capture had happened, right? That governments were not going to be able to resist these big companies and their multinational power. So those first couple of steps of regulating social media for under-16s, even if it doesn’t quite work, even if it’s not exactly right, are at least a step towards introducing regulation, and they will make AI companies aware that that is a possibility.

Because they have to take that responsibility, I think.

Moderator

Professor Nirav, do you have any other input on that as well?

Professor Nirav Ajmeri

I think I agree with the points that have been made. There could be different ways to think about a blanket ban, for instance. If you try to restrict something, people may become more curious about why it is getting banned, so we have to be thinking about that as well. But it is a step; there will have to be some regulations that come into place. What those regulations should be, we need to be thinking about. A lot of times the worry is that people keep scrolling, and then, the way the algorithms work, Professor Manjunath knows better, recommender systems put you in a rabbit hole.

And you keep going in one direction. There could be echo chambers that get formed. So the younger population is more vulnerable there, and that is where a ban or restricted access possibly helps. We have to be thinking about how we can handle this: say on YouTube, there is YouTube Kids, and children only see kids’ content, but then there are malicious actors who post content which is targeted towards kids but is not actually kids’ content. Somebody could come up with a new social media platform for kids; I am not very sure what it would look like, but there will be new technology that comes, and that needs some guardrails to be put into place.

What kind of guardrails? Researchers and legislators will have to be thinking about that.

Moderator

Sure. I think we have time for one last question. Can we give it to somebody at the back? Yeah. The jean jacket. Yeah. Go for it. Can we pass the mic at the back, please?

Audience Member 4

So, definitely AI has been an enabler in the education and medical domains. But do we think that it has also reached into, or violated, the consent of the creators? There are singers who are no longer alive, and the new generation is getting to hear their songs. The ones who are alive definitely have a way to respond. But for those who are no longer with us, it’s a breach of consent. Of course, this falls under the domain of ethical AI, but I just wanted to know your thoughts.

Moderator

Is this question directed at someone, or is it open for all? Ethical AI. Okay, whoever would like to take that.

Professor Seth Bullock

So, I think it’s a completely legitimate concern. Okay. And it’s difficult to understand where we go from here, because the cat is already out of the bag, right? The models are already trained on everyone’s data without our consent. And how do we put that back in the box? I’m not sure that we can. There are currently legal cases going through the courts about the IP claims of musicians and artists, and it will be very interesting to see what the law courts decide about that. I do think the kind of systems I’m interested in are systems that are built on consent. So a population of people who all have diabetes, who sign up for an app that will track their disease, and who then gain by being part of a community where information is being shared to help people manage their diabetes.

So that’s a much more consenting model. It’s not about stealing people’s writing and art and music from the Internet, but that activity is already underway, and I don’t see a way of really putting it back in the box.

Moderator

Let’s do one last question.

Audience Member 5

Yeah, I guess it’s not the… The topic of education and AI is a strong one, and all of those things. One thing that we have observed is that with instant feedback, even from AI tools, in education especially, students do not go through the whole step-by-step process of building foundations. So let’s say the courses, or the tools, worked in a way where they tried to teach the student step by step, instead of giving instant gratification with the output. The question is this: has any of the professors on the panel been approached for this kind of modeling of the education process, of the process of getting educated, or of learning especially?

And the other thing: could we see a collaboration in that regard, where we try to create a regulatory framework, or guidelines, for how AI tools should be constructed for imparting education step by step, so that gratification is structured? Thank you.

Professor Manjunath

Yeah, the short answer, well, never mind, I didn’t get that right. So, yeah, the short answer is no, nobody is thinking along those lines. And handling AI in a classroom has been quite painful. To give you one example, I asked a student to do something, essentially to write a certain program to perform a certain task, and I gave the data. The student, because she went to ChatGPT to understand what the question was about, created her own data and did not know how to use the data that I was giving. So the point you are making is extremely valid. If you want to think about legislation or any other guardrails or anything like that, I’m happy to discuss those with you offline.

I can only give a very brief answer today. More generally, I think every university is struggling with that question, and I’m hoping that, with lots of bright people on it, we will start to see some answers. But it’s not easy.

Moderator

Well, a big thank you to all the panelists here, and a big thank you to all the audience members as well for being such a great and engaged audience. We have a token of appreciation from the University of Bristol for all the panelists. From all of our sides. Thank you very much.

Related Resources — Knowledge base sources related to the discussion topics (31)
Factual Notes — Claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good.”

The moderator explicitly referenced the Avengers metaphor in the discussion, as recorded in the transcript excerpt [S3].

Confirmed (high confidence)

“Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population‑scale coordination, supporting entire groups rather than individual queries.”

Bullock’s stance on designing AI for whole-population support rather than single-user queries is documented in the knowledge base entry [S4].

Additional Context (medium confidence)

“Bullock called for new technologies, delivery models, and cross‑sector partnerships among researchers, private firms, non‑profits and governments to achieve population‑scale AI.”

The importance of multilateral, multi-stakeholder collaboration for AI deployment is highlighted in several sources, e.g., the call for broad sector participation in AI initiatives [S102] and the emphasis on multi-stakeholder partnerships for effective AI implementation [S103].

Additional Context (medium confidence)

“Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions set by platform owners, allowing platforms to reshape tastes and act as powerful, personalised advertisements.”

The knowledge base notes that platforms control massive information about users and use targeted advertising, which aligns with the description of platforms shaping user preferences [S107] and the critique of invasive targeted ads [S109].

External Sources (109)
S1
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S2
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Audience member 3 Speakers:Anne Bouverot, Alondra Nelson, Audience member 3
S3
Harnessing Collective AI for India’s Social and Economic Development — -Professor Seth Bullock- Professor studying how societies hold together, coordination systems, and shared values; works …
S4
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Professor Seth Bullock Speakers:Professor Manjunath, Professor Seth Bullock Speakers:Professor Se…
S5
Harnessing Collective AI for India’s Social and Economic Development — – Antaraa Vasudev- Professor Manjunath – Professor Manjunath- Professor Seth Bullock
S6
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Professor Manjunath, Antaraa Vasudev Speakers:Professor Manjuna…
S7
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S8
Global Perspectives on Openness and Trust in AI — Speakers:Karen Hao, Audience member 1, Audience member 5
S10
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S14
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Antaraa Vasudev, Professor Nirav Ajmeri Speakers:Kushe Bahl, An…
S15
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S18
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S21
Harnessing Collective AI for India’s Social and Economic Development — -Professor Nirav Ajmeri- Professor at University of Bristol focusing on multi-agent systems and socio-technical networks
S22
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S23
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S24
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Sorry, we have a lot of people who’ve raised their hands. I think we can just probably take a couple of questions. I thi…
S25
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S26
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S27
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S29
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Antaraa Vasudev Speakers:Kushe Bahl, Antaraa Vasudev, Audience Member 2 Speakers:Kushe Bahl, Prof…
S30
From Innovation to Impact_ Bringing AI to the Public — “we are all in committed towards agent -first interfaces.”[91]. “The agent will talk to agent.”[82]. Sharma states that…
S31
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems:Commanders must understand the data sources, training methodologies, and deci…
S33
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Olahaji highlights AI’s potential to improve democratic governance by analyzing citizen feedback, enabling online consul…
S34
Education meets AI — Lastly, the analysis supports teaching critical thinking as a basic skill. It is agreed that students should learn how t…
S35
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — This comment humanized the capacity building challenge and validated the struggles many educators face. It shifted the d…
S36
https://app.faicon.ai/ai-impact-summit-2026/harnessing-collective-ai-for-indias-social-and-economic-development — Absolutely to the institutions. They have the money. to invest and discover what’s going on. There is no way citizens ca…
S37
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S38
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S39
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Thank you. The principle that elected legislatures shape the rules governing society is… the cornerstone of democracy….
S41
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S42
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S44
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S45
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S46
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S48
Shaping the Future AI Strategies for Jobs and Economic Development — This discussion focused on AI-driven strategies for workforce and economic growth, examining how artificial intelligence…
S49
Shaping the Future AI Strategies for Jobs and Economic Development — A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI sho…
S50
Harnessing Collective AI for India’s Social and Economic Development — Professor Bullock argues that AI systems should be designed to support entire populations simultaneously rather than jus…
S51
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S52
How to make AI governance fit for purpose? — Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory co…
S53
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S54
AI governance in India: A call for guardrails, not strict regulations — The TRAI’srecent call to regulateAI comes at a time when policymakers must address rapidly evolving technological innova…
S55
From principles to practice: Governing advanced AI in action — Juha Heikkila: Thank you. Thank you very much. It’s indeed a great pleasure to be here and to be a member of this panel….
S56
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S57
Artificial intelligence — Capacity development Content policy Online education
S58
Why science metters in global AI governance — But now I don’t know what is the causal factor there. I don’t know if the causal factor is whether they are using AI mor…
S59
Empowering India & the Global South Through AI Literacy — Explanation:The unexpected consensus emerges around the government’s commitment to introduce AI education from class thr…
S60
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S61
Safeguarding Children with Responsible AI — High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests…
S62
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S63
Safeguarding Children with Responsible AI — Consensus level:High level of consensus across diverse stakeholders (government, industry, academia, and youth represent…
S64
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S65
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S66
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropicsuggestsfears that AI will replace jobs remain overstated, with current use showing AI supporti…
S67
Harnessing Collective AI for India’s Social and Economic Development — Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK where I’m worki…
S68
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Social and economic development Professor Bullock argues that AI systems should be designed t…
S69
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S70
AI for Good Technology That Empowers People — Low to moderate disagreement level with significant implications for implementation strategies. The differences suggest …
S71
Gathering and Sharing Session: Digital ID and Human Rights C | IGF 2023 Networking Session #166 — Amandeep Singh Gill:Thank you very much. It’s a great pleasure to join you, and such an important topic. So, the interfa…
S72
WS #86 The Role of Citizens: Informing and Maintaining e-Government — PeiChin Tay emphasizes the importance of leveraging technology to reduce barriers and create digital feedback loops in e…
S74
How to make AI governance fit for purpose? — Jennifer Bachus: So, in addition to my very strong concern that essentially A.I. governance is going to strangle A.I. in…
S75
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S76
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S77
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Juliet Mann argues that artificial intelligence is advancing at an unprecedented pace compared to previous technologies….
S78
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S79
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S80
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S81
Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics 2 — Additionally, SDG 17: Partnerships for the Goals accentuates the critical function of worldwide collaborations in realis…
S82
Open Forum #7 Deepen Cooperation on Governance, Bridge the Digital Divide — The overall tone was collaborative, optimistic and forward-looking. Speakers shared positive examples and experiences fr…
S83
Why science metters in global AI governance — Summary:The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising a…
S84
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S85
As AI agents proliferate, human purpose is being reconsidered — As AI agentsrapidly evolvefrom tools to autonomous actors, experts are raising existential questions about human value a…
S86
Strategic prudence in AI: Experts advise incremental approach for meaningful advancements — At TechCrunch Disrupt 2024, data management leadersadvisedAI-driven businesses to focus on incremental, practical applic…
S87
GOVERNING AI FOR HUMANITY — Problems such as bias in AI systems and invidious AI-enabled surveillance are increasingly documented. Other risks …
S88
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S89
Driving India's AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S90
How Trust and Safety Drive Innovation and Sustainable Growth — The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses …
S91
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing strong confid…
S92
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — The tone is pragmatic and solution-oriented throughout, with Pentland presenting a confident, business-like approach to …
S93
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S94
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S95
Closing Ceremony — The overall tone was positive and forward-looking. Speakers expressed gratitude to the hosts and participants, emphasize…
S96
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S97
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S98
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S99
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S100
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S101
AI for agriculture Scaling Intelligence for food and climate resilience — Thank you. Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve bette…
S102
All hands on deck to connect the next billions | IGF 2023 WS #198 — Additionally, Joe Welch affirms the value of a multilateral, multistakeholder approach. He emphasizes the need for colla…
S103
AI/Gen AI for the Global Goals — Speakers consistently emphasized the crucial role of multi-stakeholder collaboration in effectively developing and imple…
S104
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -…
S105
The History of Cyber Diplomacy Future — Pascal Lamy challenged traditional approaches to international cooperation, arguing that “Classical multilateralism… i…
S106
Sangeet Paul Choudary — Another issue that affects drivers arises from the implementation of surge pricing on ride-hailing platforms. Platforms …
S107
© 2019, United Nations — In the digital economy, platforms unilaterally control massive amounts of information about producers and consumer…
S108
7th edition — The net neutrality debate triggers linguistic debates. Proponents of net neutrality focus on Internet ‘users’, while the…
S109
Digital democracy and future realities | IGF 2023 WS #476 — These corporations, with their established platforms and significant influence, can create barriers for competing servic…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Professor Seth Bullock
2 arguments · 165 words per minute · 1587 words · 575 seconds
Argument 1
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
EXPLANATION
Professor Bullock argues that AI should move beyond answering individual questions and be designed to support entire populations facing common challenges, such as floods or disease outbreaks. By coordinating many users simultaneously, AI can share intelligence and improve collective outcomes.
EVIDENCE
He explains that instead of a single person asking an AI a question, AI can be built to help a whole population affected by a flood, a disease, or tax filing, enabling coordination and better outcomes for many people at once [28-30]. He adds that achieving this requires new technologies, partnerships between researchers, companies, and governments, and a shift away from purely commercial AI tools [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bullock’s claim that AI should support entire populations and enable coordinated action is corroborated by S4, which emphasizes designing AI for whole-community challenges rather than individual queries [S4].
MAJOR DISCUSSION POINT
Population‑scale coordination
AGREED WITH
Antaraa Vasudev, Professor Nirav Ajmeri
Argument 2
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
EXPLANATION
Bullock warns that upcoming AI systems will be agentic, pursuing specific goals and interacting with other agents, which could lead to unintended resource consumption and conflicts. Embedding social responsibility into these agents is essential to prevent harmful cascades.
EVIDENCE
He describes a next wave of AI where agents have purposive aims, communicate, and may task each other, creating cascades of requests that consume resources and could disadvantage others, emphasizing the need for social responsibility in their design [58-65]. He illustrates how a trivial request could trigger a large chain of agent interactions, highlighting potential unforeseen consequences [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for socially responsible, purposive AI agents that interact at scale is highlighted in S4 and further detailed in S30, which discusses agent-first interfaces and agents talking to agents [S4][S30].
MAJOR DISCUSSION POINT
Agentic AI and responsibility
Professor Nirav Ajmeri
2 arguments · 148 words per minute · 695 words · 280 seconds
Argument 1
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
EXPLANATION
Ajmeri states that multi‑agent systems can capture the interaction of people, organizations, and technical tools, allowing the design of solutions that aim for global optima rather than local, individual optima. This can improve social welfare in domains such as ride‑sharing and pandemic prevention.
EVIDENCE
He explains that current ride-sharing optimizes for each individual, leading to local maxima, whereas a multi-agent approach can target a global optimum that maps to social welfare, questioning what social welfare means and how to achieve it [47-52]. He also mentions that epidemic and pandemic prevention are inherently multi-agent problems requiring coordinated solutions [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ajmeri’s argument that multi-agent systems can achieve global, socially optimal outcomes is supported by S4, which describes his view on moving from individual to collective optimization [S4].
MAJOR DISCUSSION POINT
Socio‑technical optimization
AGREED WITH
Professor Seth Bullock, Antaraa Vasudev
Argument 2
Intelligence emerges from interacting agents; suitable for problems like ride‑sharing, pandemics, and social welfare
EXPLANATION
Ajmeri emphasizes that intelligence is not isolated but arises from the interaction of many agents, making multi‑agent frameworks appropriate for complex societal challenges. By modeling these interactions, AI can help design fair and effective collective decisions.
EVIDENCE
He notes that intelligence emerges when social entities (people, organizations) and technical tools (intelligent agents, applications) interact, and that this structure fits problems such as ride-sharing, where individual optimization leads to sub-optimal global outcomes, and public health crises that require coordinated action [47-52][55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 provides additional context for Ajmeri’s point that intelligence emerges from the interaction of many agents and is apt for complex societal problems such as ride-sharing and pandemic response [S4].
MAJOR DISCUSSION POINT
Emergent intelligence in multi‑agent systems
Antaraa Vasudev
2 arguments · 170 words per minute · 883 words · 310 seconds
Argument 1
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
EXPLANATION
Vasudev explains that AI is currently used to enable citizens with limited legal knowledge to ask questions, air grievances, and understand policies, while also being employed for large‑scale optimization of government functions. She stresses that transparent, accessible, and equitable frameworks are needed to ensure AI benefits are fairly distributed.
EVIDENCE
She describes AI-driven citizen engagement tools that clarify doubts, collect grievances, and explain policy frameworks, alongside AI-based optimization for a country as large and diverse as India, calling for transparent and equitable frameworks before scaling AI solutions [109-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vasudev’s emphasis on AI-enabled citizen engagement and the need for transparent, equitable frameworks is echoed in S4, which discusses AI tools for large-scale government optimization and calls for transparency [S4].
MAJOR DISCUSSION POINT
Civic engagement and transparent AI governance
AGREED WITH
Professor Manjunath, Speaker 3
Argument 2
AI can empower citizens by aggregating massive feedback and informing policy decisions
EXPLANATION
Vasudev highlights a project with the Maharashtra government where AI collected hundreds of thousands of citizen inputs via a chatbot, aggregated them, and fed the results back into policy making, ensuring future laws consider citizen perspectives.
EVIDENCE
She details how Civis built an easy-to-use chatbot that gathered 3.8 lakh responses from 37 districts, aggregated the feedback, and produced the publicly available Viksit Maharashtra report, after which the state mandated that upcoming laws factor in citizen input [121-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maharashtra chatbot project described in S4, which gathered 3.8 lakh responses and fed them into policy making, directly supports this argument [S4].
MAJOR DISCUSSION POINT
Citizen‑centric policy design
AGREED WITH
Professor Seth Bullock, Professor Nirav Ajmeri
Professor Manjunath
4 arguments · 169 words per minute · 1529 words · 540 seconds
Argument 1
Recommendation engines act as powerful nudges that reshape user preferences and can hide bias
EXPLANATION
Manjunath argues that recommendation systems learn users’ likes and dislikes through utility functions defined by platform owners, subtly steering preferences over time. This nudging effect can be large and may conceal underlying biases.
EVIDENCE
He explains that recommendation systems act as learning agents that present options, capture reactions via utility functions, and over time can dramatically change user preferences, acting as advertisements that heavily influence population tastes [77-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s claim about recommendation systems learning utility functions and subtly shifting preferences is substantiated by S4, which outlines how such systems act as nudges and can conceal bias [S4].
MAJOR DISCUSSION POINT
Algorithmic nudging and hidden bias
Argument 2
Governments should act as enablers and monitors, not micromanage technology development
EXPLANATION
Manjunath cautions that when governments overly direct technology projects, such as India’s CDOT or Japan’s Fifth Generation computing, they often fail. He recommends that governments enable private innovation, monitor outcomes, and intervene only to prevent harms.
EVIDENCE
He cites the failure of India’s CDOT after government micromanagement and Japan’s Fifth Generation AI project as examples of over-directed initiatives, then argues that governments should enable, monitor, and nudge rather than control technology development [139-152][155-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 cites Manjunath’s examples of CDOT and Japan’s Fifth Generation project to illustrate the pitfalls of government micromanagement and his recommendation for an enabling role [S4].
MAJOR DISCUSSION POINT
Government role as enabler vs. director
AGREED WITH
Antaraa Vasudev, Speaker 3
Argument 3
Educators face challenges with AI‑generated work lacking depth and inspiration
EXPLANATION
Manjunath observes that AI‑produced content, while correct, often lacks the ‘soul’ and inspirational quality of human‑crafted material, making it insufficient for educational purposes.
EVIDENCE
He notes that AI-generated presentations and essays are accurate but have no soul and are not inspiring, highlighting a limitation for teaching and learning [375-378].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s observation that AI-generated content is accurate but lacks ‘soul’ and inspiration is documented in S4, providing direct support for this concern [S4].
MAJOR DISCUSSION POINT
Quality of AI‑generated educational content
Argument 4
Over‑directed government projects often fail; better to enable private innovation while monitoring risks
EXPLANATION
Reiterating his earlier point, Manjunath emphasizes that government‑driven tech projects frequently underperform, and a more effective approach is to let the private sector lead while the state ensures safety and fairness.
EVIDENCE
He repeats the CDOT and Fifth Generation examples to illustrate failure of government-led tech, and advocates for an enabling role with monitoring and risk mitigation [139-152][155-162].
MAJOR DISCUSSION POINT
Policy approach to AI development
Speaker 3
2 arguments · 179 words per minute · 117 words · 39 seconds
Argument 1
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
EXPLANATION
Speaker 3 points out that countries like Spain and Australia have imposed strict restrictions on social‑media platforms for children, serving as experimental guardrails that could inform similar measures for AI.
EVIDENCE
He mentions that Spain and Australia have placed severe restrictions on social-media companies to protect children, describing these as interesting experiments whose outcomes need to be observed [409-416].
MAJOR DISCUSSION POINT
Early regulatory safeguards for vulnerable users
AGREED WITH
Antaraa Vasudev, Professor Manjunath
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
Speaker 3 argues that imposing early limits on technology use by minors sends a clear signal to industry that regulation is possible, encouraging responsible behavior even if the measures are imperfect.
EVIDENCE
He explains that the bans in Spain and Australia, though not easy to implement, represent a step toward accountability that may influence how AI companies operate [409-416].
MAJOR DISCUSSION POINT
Regulation as a catalyst for industry responsibility
Audience Member 1
1 argument · 100 words per minute · 22 words · 13 seconds
Argument 1
Concern about AI’s effect on management consulting and the need to focus on human‑centric tasks
EXPLANATION
The audience member asks how AI will impact management consultants, expressing worry that AI might replace human roles and emphasizing the importance of retaining tasks that require human creativity and inspiration.
EVIDENCE
He poses the question about AI’s impact on management consultants and the business, seeking insight into replacement versus value creation [365].
MAJOR DISCUSSION POINT
AI impact on consulting profession
Audience Member 3
2 arguments · 142 words per minute · 94 words · 39 seconds
Argument 1
AI in governance currently shifts power toward institutions rather than citizens
EXPLANATION
The audience member asserts that, despite AI’s potential to empower citizens, current implementations tend to concentrate power with institutions that have the resources to leverage AI.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S36 offers a counterpoint, noting that institutions possess the resources to dominate AI deployment, suggesting a power shift toward institutions rather than citizens [S36].
MAJOR DISCUSSION POINT
Power dynamics in AI‑enabled governance
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
The audience member highlights that imposing restrictions on technology for minors can act as a precedent for AI regulation, encouraging responsible industry practices.
MAJOR DISCUSSION POINT
Regulatory precedents for AI
Audience Member 4
2 arguments · 127 words per minute · 91 words · 42 seconds
Argument 1
Algorithms learn utility functions set by owners, leading to drift in user preferences over time
EXPLANATION
The audience member notes that recommendation algorithms are programmed with utility functions defined by platform owners, which can gradually shift users’ preferences in directions aligned with those owners’ goals.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion in S4 about recommendation systems using owner-defined utility functions and causing preference drift aligns with this audience observation [S4].
MAJOR DISCUSSION POINT
Algorithmic ownership and preference drift
Argument 2
AI models trained on copyrighted material raise consent and IP issues; legal resolution is pending
EXPLANATION
The audience member raises concerns that AI systems are trained on artists’ and creators’ works without consent, creating intellectual‑property disputes that are currently being litigated.
EVIDENCE
He asks whether AI-generated content violates developers’ rights, citing examples of singers whose voices are reproduced and questioning the ethical implications [462-467].
MAJOR DISCUSSION POINT
IP and consent in AI training data
Audience Member 5
1 argument · 186 words per minute · 196 words · 62 seconds
Argument 1
Instant AI feedback can bypass step‑by‑step learning, risking shallow understanding; calls for structured guidelines
EXPLANATION
The audience member worries that AI tools providing immediate answers prevent students from engaging in the gradual learning process, and suggests the need for regulatory or guideline frameworks to ensure educational AI supports deep learning.
EVIDENCE
He describes how instant AI feedback leads students to skip foundational steps, and asks whether professors have been approached to develop structured, step-by-step AI tools for education [481-486].
MAJOR DISCUSSION POINT
AI in education and learning depth
Kushe Bahl
2 arguments · 188 words per minute · 1945 words · 620 seconds
Argument 1
AI will reshape rather than simply replace jobs, creating new value through personalization
EXPLANATION
Bahl explains that while AI can automate routine tasks, its greatest economic impact comes from enabling personalized services that generate new revenue streams, thereby reshaping job roles rather than merely eliminating them.
EVIDENCE
He cites examples such as AI replacing call-center work but notes limited cost savings, and emphasizes that personalized customer engagement engines can increase revenue by 10% with high margins, delivering far greater value than simple cost cuts [186-199].
MAJOR DISCUSSION POINT
Job transformation and value creation
AGREED WITH
Professor Seth Bullock
Argument 2
Reshape (as answer to rapid‑fire question about AI’s impact on jobs)
EXPLANATION
In the rapid‑fire segment, Bahl succinctly states that AI will reshape jobs rather than merely replace or polarize them.
EVIDENCE
He answers “Reshape” to the moderator’s rapid-fire question about AI’s impact on jobs in India [257].
MAJOR DISCUSSION POINT
Rapid‑fire view on job impact
Moderator
1 argument · 147 words per minute · 1619 words · 659 seconds
Argument 1
Rapid‑fire insights highlight differing views on bias, power shift, and who benefits from AI
EXPLANATION
The moderator summarizes a rapid‑fire round where panelists offered brief, contrasting perspectives on algorithmic bias, the direction of power in AI‑governance, and whether companies or employees stand to gain most from AI.
EVIDENCE
During the rapid-fire, Antaraa said AI shifts power to citizens, Manjunath argued it shifts to institutions, and Bahl answered that AI will reshape jobs, illustrating varied viewpoints on bias, power, and benefit distribution [237-245][248-251][257][281-284].
MAJOR DISCUSSION POINT
Diverse panel perspectives in rapid fire
Agreements
Agreement Points
AI should be designed for population‑scale coordination rather than isolated individual queries
Speakers: Professor Seth Bullock, Antaraa Vasudev, Professor Nirav Ajmeri
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
AI can empower citizens by aggregating massive feedback and informing policy decisions
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
All three speakers stress that AI systems need to operate at the scale of whole populations or societies, coordinating many users (e.g., flood victims, citizens providing feedback) and moving beyond single-question interactions to achieve collective benefits [28-30][31-33][121-130][47-52][55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects an emerging consensus that AI systems should function as shared digital public infrastructure, enabling coordinated outcomes across whole societies rather than siloed personal assistants. The need for systematic, population-scale approaches is highlighted in discussions on building digital public infrastructure for AI [S51] and in Professor Bullock’s argument that coordination itself is a form of intelligence supporting entire populations [S50].
Transparency and accountable governance are essential for AI deployment in the public sector
Speakers: Antaraa Vasudev, Professor Manjunath, Speaker 3
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
Governments should act as enablers and monitors, not micromanage technology development
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Vasudev calls for transparent, accessible, and equitable AI frameworks for citizen engagement, Manjunath warns against government micromanagement and advocates an enabling, monitoring role, while Speaker 3 points to early regulatory experiments as necessary safeguards [109-115][288-290][139-152][155-162][409-416].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council emphasized that AI systems must be transparent, explainable and accountable to maintain public trust and ensure ethical outcomes, framing these principles as core to AI governance in the public sector [S44].
AI will reshape jobs and create new value rather than simply replace workers
Speakers: Kushe Bahl, Professor Seth Bullock
AI will reshape rather than simply replace jobs, creating new value through personalization
AI will break down barriers between people, enabling richer interactions that were previously impossible
Bahl emphasizes that AI’s biggest economic impact comes from personalized services that generate new revenue, reshaping roles, while Bullock highlights AI’s potential to connect people and enable capabilities beyond human limits, both indicating a transformation rather than outright replacement [257][186-199][301-309].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple expert panels have argued that AI will transform work by augmenting human capabilities and generating new employment opportunities, rather than causing wholesale displacement. This perspective appears in discussions on AI’s impact on jobs in India [S46], policy-focused forums stressing complementary design choices [S47][S48][S49], and ILO research showing generative AI can enhance employment prospects [S65][S66].
Building public understanding and capacity is crucial for responsible AI adoption
Speakers: Kushe Bahl, Professor Seth Bullock
Students need to focus on how to use AI across fields and equip themselves with relevant skills
Uplifting public understanding of AI will protect against malicious uses
Both Bahl and Bullock argue that widespread AI literacy, with students learning to apply AI in their domains and the general public grasping AI’s implications, is essential to mitigate risks and harness benefits [226-230][250-251].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Policy Research Roadmap calls for capacity-building initiatives to raise awareness and enable effective navigation of AI systems in the public sector [S43]. Complementary efforts on AI literacy, such as introducing AI education from primary school onward, reinforce the policy priority of public understanding [S59][S57].
Similar Viewpoints
Both speakers stress that as AI becomes more autonomous and agentic, its design must incorporate social responsibility and governance mechanisms to prevent unintended harms, calling for collaborative oversight rather than unchecked deployment [58-65][139-152][155-162].
Speakers: Professor Seth Bullock, Professor Manjunath
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
Governments should act as enablers and monitors, not micromanage technology development
Both see the state’s role as facilitating transparent, citizen‑centric AI tools while avoiding heavy‑handed control, emphasizing enabling frameworks that protect public interest [109-115][288-290][139-152][155-162].
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions
Governments should act as enablers and monitors, not micromanage technology development
Unexpected Consensus
Agreement across diverse participants that early regulatory interventions (e.g., bans for minors) are a useful experiment for AI governance
Speakers: Speaker 3, Professor Manjunath, Audience Member 3
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Governments should enable and monitor rather than micromanage, implying a need for early safeguards
AI in governance currently shifts power toward institutions, suggesting regulation is required
While speakers came from different domains (policy, academia, and the audience), their statements converge on the idea that early, targeted regulatory steps are valuable for managing AI’s societal impact, a consensus not explicitly anticipated at the start of the panel [409-416][139-152][155-162].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level consensus on safeguarding children through targeted AI restrictions has been documented in UN-backed child-focused AI governance forums, which view early bans for minors as a pragmatic experiment [S61]. Similar multi-stakeholder dialogues favor targeted, harm-focused interventions over sweeping legislation [S60][S52].
Overall Assessment

The panel largely converged on four core themes: (1) AI must be built for collective, population‑scale coordination; (2) transparent, accountable governance and early regulatory guardrails are essential; (3) AI will reshape rather than merely replace jobs, creating new value; and (4) capacity building and public understanding are critical for responsible adoption.

High consensus across speakers on these themes, indicating a shared belief that AI’s future benefits hinge on coordinated design, transparent governance, and widespread capacity development. This alignment suggests strong support for policies that promote collective AI solutions, enforce transparency, and invest in education and public awareness.

Differences
Different Viewpoints
Who gains power from AI in governance – citizens or institutions
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions (Antaraa)
AI in governance shifts power toward institutions that have the resources to invest and control AI (Manjunath)
Antaraa asserts that AI shifts power to citizens by enabling their voices to be heard (e.g., the Maharashtra chatbot project) [237][121-130]. Manjunath counters that, in practice, AI gives institutions the advantage because they control the data, funding and deployment, making it hard for citizens to compete [294-296].
Approach to government involvement in AI – enable‑and‑monitor vs regulatory guardrails
Speakers: Professor Manjunath, Speaker 3, Antaraa Vasudev
Governments should act as enablers and monitors, avoiding micromanagement of technology projects (Manjunath)
Early regulatory steps such as bans for minors are needed to limit amplified harms and signal accountability (Speaker 3)
AI governance requires transparent, equitable frameworks before scaling, implying some level of oversight (Antaraa)
Manjunath warns that government micromanagement leads to failure (e.g., CDOT, Japan’s Fifth Generation) and recommends an enabling role with monitoring [139-152][155-162]. Speaker 3 argues that strict bans for children (Spain, Australia) are useful guardrails, suggesting a more proactive regulatory stance [409-416]. Antaraa calls for transparent, accessible frameworks, indicating a need for structured oversight rather than pure hands-off enabling [109-115]. The three positions diverge on how much direct regulation is appropriate.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent policy debates highlight a split between action-oriented, enable-and-monitor models and calls for explicit guardrails. Some reports advocate focusing on practical outcomes and innovation-friendly approaches rather than heavy regulation [S52], while others stress the necessity of guardrails to balance trust and risk [S53][S54][S60].
How bias in recommendation systems should be addressed – hide bias vs reduce bias
Speakers: Professor Manjunath, Moderator (implicit)
Algorithms tend to hide bias and may increase it over time (Manjunath)
The rapid‑fire question asked whether algorithms today are more likely to reduce bias or hide bias, implying an expectation of reduction (Moderator)
When asked about bias, Manjunath responded that algorithms are more likely to hide bias and may even increase it, showing skepticism about current mitigation efforts [239-245]. The moderator’s framing of the question suggested a hope that bias could be reduced, revealing a tension between expectations of bias reduction and Manjunath’s assessment that bias is being concealed.
Unexpected Differences
Perceived impact of AI on jobs – replacement vs value creation
Speakers: Audience Member 1, Kushe Bahl
Concern that AI will replace management consultants and reduce human‑centric tasks (Audience Member 1)
AI will reshape jobs by creating new value through personalization rather than simply replacing roles (Bahl)
The audience member expressed anxiety that AI might replace consultants, whereas Bahl argued that the real economic benefit comes from AI-enabled personalization that creates new revenue streams and reshapes work, not wholesale replacement [365][186-199][257]. This contrast between fear of job loss and optimism about job transformation was not anticipated given the broader discussion on AI for collective good.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert analyses consistently argue that AI is more likely to create value and new roles than to replace workers outright, countering alarmist narratives about job loss. This view is supported by discussions on AI reshaping work in India and global forums, as well as ILO and Anthropic reports highlighting augmentation over replacement [S46][S47][S48][S49][S64][S65][S66].
Effectiveness of AI‑generated educational content
Speakers: Professor Manjunath, Professor Seth Bullock
AI‑generated essays and presentations lack ‘soul’ and are not inspiring for learning (Manjunath)
AI can break down barriers between people and enable richer, meaningful interactions (Bullock)
Manjunath criticizes AI output for being correct but soulless, limiting its educational value [375-378]. Bullock, while not directly addressing education, promotes AI as a means to connect people and facilitate deep collective interaction, implying a more positive view of AI’s educational potential [301-307]. The tension between AI’s perceived superficiality and its potential to enhance learning was not a primary focus of the panel, making it an unexpected point of disagreement.
Overall Assessment

The panel displayed several substantive disagreements, chiefly around who benefits from AI in governance (citizens vs institutions), the appropriate level of government intervention (enabling vs regulatory guardrails), and how bias in algorithmic systems should be handled. While there was broad consensus that AI should serve collective good and that system‑level coordination is essential, the pathways to achieve these goals diverged sharply.

Moderate to high – the core philosophical split on power dynamics and regulatory philosophy could shape policy outcomes significantly. The disagreements suggest that without a shared framework for governance, AI initiatives may oscillate between citizen‑centric empowerment and institutional control, potentially limiting the realization of inclusive, equitable AI benefits.

Partial Agreements
Both agree that AI must move beyond isolated individual interactions toward coordinated, system‑level solutions. Bullock emphasizes population‑scale coordination for floods, disease, taxes [28-30][31-33]; Ajmeri stresses that intelligence emerges from interacting agents and that multi‑agent approaches are suited for collective problems like ride‑sharing and pandemics [36-46][47-56]. They differ in terminology (population‑scale AI vs multi‑agent modeling) but share the same overarching goal.
Speakers: Professor Seth Bullock, Professor Nirav Ajmeri
Population‑scale AI should coordinate whole communities rather than answer single‑user queries (Bullock)
Multi‑agent systems can model socio‑technical interactions to achieve socially optimal outcomes (Ajmeri)
Both see AI as a tool for enhancing citizen participation and collective intelligence. Antaraa describes AI‑driven citizen engagement platforms and the need for transparency [109-115][121-130]; Bullock envisions AI breaking barriers between people and enabling richer interaction with governments [301-307]. Their convergence is on the desired outcome (empowered citizens), while their focus differs (transparent platforms vs agentic coordination).
Speakers: Antaraa Vasudev, Professor Seth Bullock
AI can empower citizens by providing access to information and enabling collective decision‑making (Antaraa)
AI agents that communicate and coordinate can give people a greater sense of connection and a voice in collective decisions (Bullock)
Takeaways
Key takeaways
AI should move from individual‑query tools to population‑scale coordination systems that can help whole communities manage floods, disease outbreaks, tax collection, etc. (Prof. Seth Bullock)
Multi‑agent and socio‑technical approaches are essential for problems where many human and technical agents interact; they can improve social welfare in domains such as ride‑sharing, pandemic response, and public policy (Prof. Nirav Ajmeri)
Recommendation and advertising algorithms act as powerful nudges that can reshape user preferences and often hide bias; the utility functions they optimise are set by owners, not users (Prof. Manjunath)
AI in governance can both amplify citizen voice and optimise government processes, but transparency, accessibility, and equity must be built into frameworks (Antaraa Vasudev)
Governments are better positioned as enablers and monitors rather than micromanagers of technology development; over‑directed projects tend to fail (Prof. Manjunath)
AI will more likely reshape jobs than simply replace them, creating new value through personalization and automation of tasks that are infeasible for humans (Kushe Bahl)
In education, unchecked AI feedback can bypass step‑by‑step learning, leading to shallow understanding; structured guidelines are needed (Audience Q5, Prof. Manjunath)
Ethical concerns around AI‑generated content and IP arise because models are trained on copyrighted material without consent; consent‑based data collection is advocated (Prof. Seth Bullock)
Early regulatory steps (e.g., age‑based bans on social media) signal accountability and can influence industry behaviour, though they are imperfect (Audience Q3, Prof. Seth Bullock)
Resolutions and action items
Develop transparent, citizen‑centric AI frameworks for public services, emphasizing consent‑based data collection (Antaraa Vasudev).
Encourage partnerships between researchers, private firms, and governments to build AI systems that serve whole populations rather than individual queries (Prof. Seth Bullock).
Create guidelines for AI use in education that enforce step‑by‑step learning and prevent over‑reliance on instant AI answers (Audience Q5, Prof. Manjunath).
Promote the design of AI agents with embedded social responsibility to mitigate unintended resource consumption and conflicts (Prof. Seth Bullock).
Monitor and evaluate early regulatory experiments (e.g., youth‑focused bans) to inform future AI governance policies (Audience Q3, Prof. Seth Bullock).
Unresolved issues
How to concretely shift AI‑enabled governance power toward citizens rather than institutions; current perception is that power still leans toward institutions.
Effective methods for reducing hidden bias in recommendation systems and ensuring algorithms are accountable to public values.
Specific regulatory mechanisms that balance transparency with effectiveness of AI in public systems; no consensus reached.
Legal and practical solutions for intellectual‑property rights of creators whose works are used to train generative models.
Detailed strategies for up‑skilling the workforce and integrating AI into job roles without causing large‑scale displacement.
Implementation pathways for consent‑based data ecosystems at scale, especially in health or civic domains.
Standardised, enforceable guidelines for AI use in classrooms and assessment of learning outcomes.
Suggested compromises
Adopt a transparency‑first approach for AI in public systems while still pursuing effectiveness, acknowledging that transparency is a prerequisite for trust (Antaraa Vasudev).
Governments act as enablers and monitors rather than direct developers, allowing private innovation to flourish while providing oversight (Prof. Manjunath).
Introduce targeted, age‑based restrictions on AI‑enabled platforms as an interim safeguard while broader regulatory frameworks are developed (Audience Q3).
Balance AI‑driven job automation with a focus on augmenting human‑centric tasks, reshaping roles instead of pure replacement (Kushe Bahl).
Combine multi‑agent system design with ethical guidelines to ensure that emergent behaviours align with societal welfare (Prof. Nirav Ajmeri & Prof. Seth Bullock).
Thought Provoking Comments
Coordination is intelligence. Instead of AI answering individual questions, we can design AI systems that support whole populations—e.g., coordinating flood response, disease management, or tax collection.
Reframes AI from a personal tool to a societal coordination mechanism, highlighting a shift in purpose and scale.
Opened a new line of discussion about population‑level AI, prompting follow‑up questions on multi‑agent systems and leading the panel to explore how AI can be structured for collective coordination rather than isolated queries.
Speaker: Professor Seth Bullock
When AI becomes agentic, even a trivial request (like a picture of a dog on a skateboard) can trigger cascades of interactions that consume resources and potentially disadvantage others; we need to embed social responsibility into these agents.
Identifies a hidden risk of emergent, large‑scale AI interactions and calls for proactive ethical design.
Shifted the tone from optimism to caution, steering the conversation toward the unintended consequences of AI ecosystems and influencing later remarks about regulation and public understanding.
Speaker: Professor Seth Bullock
Recommendation systems are learning agents that actively shape preferences; depending on the utility function they optimize, they can dramatically alter users’ tastes over time, essentially acting as powerful advertisements.
Highlights how algorithmic design directly influences human behavior, moving beyond the notion of neutral tools.
Deepened the analysis of algorithmic nudging, leading to audience concerns about autonomy and prompting further discussion on bias, transparency, and the need for oversight.
Speaker: Professor Manjunath
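Manjunath's point that the optimised utility function is chosen by the platform owner, not the user, can be illustrated with a toy simulation (the one-dimensional "taste" model, function names, and all parameters below are invented for illustration, not a description of any real recommender):

```python
def recommend(prefs, catalog, owner_utility):
    # The platform, not the user, picks the objective to maximise.
    return max(catalog, key=lambda item: owner_utility(item, prefs))

def simulate_drift(steps=200, alpha=0.05):
    # One-dimensional "taste": 0.0 = niche, 1.0 = mainstream/high-engagement.
    prefs = 0.2
    catalog = [i / 10 for i in range(11)]
    # Owner utility rewards engagement (higher items) while staying close
    # enough to the user's current taste to be clicked.
    owner_utility = lambda item, p: item - 0.5 * abs(item - p)
    for _ in range(steps):
        shown = recommend(prefs, catalog, owner_utility)
        # Repeated exposure nudges the user's taste toward what was shown.
        prefs += alpha * (shown - prefs)
    return prefs
```

Starting from a niche taste of 0.2, repeated exposure drags the simulated taste toward 1.0; a user-aligned utility (for example, one that penalised distance from the user's own taste more heavily) would not produce this drift, which is the sense in which such systems act as "powerful advertisements".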
In Maharashtra, we built a simple chatbot that collected 380,000 citizen inputs (voice notes, texts, drawings) and fed them into the policy‑making process; now every law must consider this citizen feedback.
Provides a concrete, scalable example of AI empowering citizens in governance, illustrating practical impact.
Grounded the abstract debate in a real‑world case, encouraging other panelists to discuss how AI can be used for civic engagement and influencing the later focus on transparency and decentralization.
Speaker: Antaraa Vasudev
Government micromanagement of technology (e.g., India’s CDOT and Japan’s Fifth Generation computing) often leads to failure; governments should act as enablers and monitors, not directors of tech development.
Offers historical evidence that challenges the assumption that state control ensures beneficial AI outcomes.
Prompted a re‑evaluation of the appropriate role of policy, influencing subsequent remarks about regulatory guardrails, rapid‑fire answers, and the need for agile, not heavy‑handed, governance.
Speaker: Professor Manjunath
AI should not just replace humans to cut costs; the real value lies in unlocking capabilities humans can’t achieve, like personalized customer engagement engines that can increase revenue far beyond the savings from automation.
Distinguishes between superficial cost‑cutting and transformative value creation, reframing the job‑impact narrative.
Redirected the conversation from fear of job loss to opportunities for new value, influencing later discussion on reshaping jobs and supporting small businesses.
Speaker: Kushe Bahl
My generation worried about TV; we adapted and became savvy consumers. Today’s youth will similarly adapt to AI, and we should listen to them rather than impose adult fears.
Provides a historical analogy that normalizes technological anxiety and emphasizes intergenerational dialogue.
Eased audience concerns, shifted the discussion toward empowerment and education, and set the stage for audience questions about youth and AI.
Speaker: Professor Seth Bullock
The biggest danger is not chat‑GPT but platforms that exploit dopamine circuits (e.g., Instagram). AI amplifies existing harms; we need consent‑based models where users opt‑in to data sharing, not covert data harvesting.
Prioritizes consent and data ethics, pointing out that AI’s risks are extensions of existing platform issues.
Reinforced calls for transparent, consent‑driven AI systems, influencing the rapid‑fire debate on bias, transparency, and the role of government in setting guardrails.
Speaker: Professor Seth Bullock
Overall Assessment

These pivotal comments collectively steered the panel from a broad, metaphor‑driven introduction toward concrete, systemic considerations of AI. Professor Seth’s framing of coordination and agentic cascades introduced the need for societal‑scale design and ethical safeguards, while Professor Manjunath’s insights on recommendation systems and governmental overreach highlighted hidden influences and policy pitfalls. Antaraa’s Maharashtra case grounded the discussion in real‑world civic empowerment, and Kushe Bahl’s distinction between cost‑cutting and value creation reshaped the narrative around job impact. Together, these remarks deepened the conversation, prompted new topics (population AI, consent, governance models), and shifted the tone from speculative optimism to a nuanced, solution‑oriented dialogue.

Follow-up Questions
How can AI systems be designed to support whole populations (e.g., disaster response, tax collection) rather than individual queries?
Identifies a need to shift AI from individual assistance to coordinated population‑level services, requiring new technologies and delivery models.
Speaker: Professor Seth Bullock
What partnership models between researchers, companies, non‑profits, and governments are needed to develop AI for populations?
Highlights the importance of cross‑sector collaboration to create and deploy population‑scale AI solutions.
Speaker: Professor Seth Bullock
What interventions in AI promotion are required to avoid the ‘path of least resistance’ commercial tools and ensure socially beneficial outcomes?
Calls for policy or strategic guidance to steer AI development toward public‑good applications rather than purely profit‑driven tools.
Speaker: Professor Seth Bullock
How can social responsibility be embedded into agentic AI to prevent resource contention and unintended societal consequences?
Points to the need for research on designing AI agents that consider the impact of their actions on other agents and on society.
Speaker: Professor Seth Bullock
What are the emergent behaviors and cascading resource consumption effects when AI agents interact at scale (e.g., trivial requests causing large cascades)?
Raises concerns about scalability and externalities of interconnected AI agents, requiring study of systemic impacts.
Speaker: Professor Seth Bullock
What governance frameworks are needed to ensure transparency, accessibility, and equity when AI is used in public systems?
Emphasizes the necessity of designing transparent and equitable AI frameworks before large‑scale deployment in governance.
Speaker: Antaraa Vasudev
How should regulatory and policy frameworks be designed to prevent premature racing to the next AI model without adequate safeguards?
Calls for research on creating timely regulations that balance innovation with safety and public interest.
Speaker: Antaraa Vasudev
How do recommendation systems shape human preferences and potentially amplify bias, and how can this impact be measured?
Identifies a gap in understanding the magnitude of preference manipulation and bias amplification by recommendation algorithms.
Speaker: Professor Manjunath
What methods can be developed to detect and mitigate hidden bias in recommendation algorithms?
Points to the need for technical solutions and standards to address bias that is not immediately visible.
Speaker: Professor Manjunath
What is the effectiveness of AI and social‑media restrictions for minors (e.g., bans in Spain and Australia) and what guardrails are appropriate?
Seeks empirical evaluation of regulatory experiments aimed at protecting children from AI‑driven harms.
Speaker: Speaker 3 (unnamed) and Professor Manjunath
How can appropriate guardrails be established for AI deployment in the public sector without stifling innovation?
Calls for research on balancing rapid AI adoption with necessary oversight in government contexts.
Speaker: Professor Manjunath
How can AI be leveraged to create value for small businesses and self‑employed workers rather than merely replacing jobs?
Suggests investigation into low‑cost AI solutions that augment income for millions of micro‑entrepreneurs.
Speaker: Kushe Bahl
What design principles and regulatory guidelines are needed for AI tools in education to promote step‑by‑step learning rather than instant gratification?
Highlights a gap in current AI‑enabled educational tools and the need for structured, pedagogically sound frameworks.
Speaker: Audience Member 5; Professor Manjunath
What are the legal and ethical implications of AI‑generated content that uses works of deceased artists, and how should consent and IP be managed?
Raises concerns about copyright, consent, and the need for new legal frameworks for AI‑generated creative works.
Speaker: Professor Seth Bullock
How can AI systems aggregate individual preferences into collective decisions while ensuring fairness, transparency, and accountability?
Identifies a research challenge in designing AI‑mediated collective decision‑making mechanisms that maintain trust.
Speaker: Professor Nirav Ajmeri
How can AI increase citizens’ access to government entitlements and benefits through decentralized, disaggregated control mechanisms?
Calls for exploration of AI‑driven platforms that reduce information asymmetry and improve service delivery.
Speaker: Antaraa Vasudev
What mechanisms can enable AI to break down barriers between people (language, expertise, distance) to give citizens a real voice in governance?
Suggests research into AI‑facilitated rich, large‑scale citizen‑government interactions beyond simple voting.
Speaker: Professor Seth Bullock
What are the psychological and social impacts of young people relying heavily on conversational AI (e.g., ChatGPT) for personal issues, and how should society respond?
Points to a need for interdisciplinary study on AI’s influence on youth mental health and family dynamics.
Speaker: Professor Seth Bullock; Kushe Bahl; Antaraa Vasudev
How can AI regulation be structured to shift power towards citizens rather than institutions in governance contexts?
Indicates ongoing debate and need for research on power dynamics shaped by AI‑enabled governance tools.
Speaker: Rapid‑fire (Antaraa Vasudev) and subsequent discussion

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Naveen Tewari, Founder & CEO, InMobi (India AI Impact Summit)


Session at a glance: summary, keypoints, and speakers overview

Summary

Naveen Tewari opened by stating the talk would examine how AI is reshaping commerce [1-4]. He claimed AI will extend human lifespan to about 120 years and democratize skills, creating unprecedented productivity growth [6-13]. Tewari introduced “agentic commerce,” where AI agents act for consumers, and presented Glance as the platform implementing this model [22-26]. Glance shifts from generic personalized feeds to real-time personal feeds generated for each user, aiming to train a model for a billion people soon [29-35]. The system combines a commerce intelligence graph, a generative AI experience model that creates visual product feeds, and a user-level model that tailors outputs individually [37-50]. A “living commerce context graph” continuously captures a shopper’s context, price and brand sensitivities, and computes optimal purchase paths across billions of options [55-57]. Transparency is built into the reasoning engine so users can see why items are recommended, fostering accountability and trust [58-66]. Tewari said that agentic commerce will cut consumer waste, generate efficiency savings, and create an economic flywheel as billions make intelligent choices [67-69]. The ripple effect will make supply chains agentic, weaken traditional marketplaces, and empower local and specialized brands through direct AI-driven connections [70-78]. He estimated commerce represents 25 % of global GDP and that agentic commerce could add roughly $3 trillion to India’s economy by 2047 [88-91]. Linking to Indian values of truth, he argued that transparent agents will embody authenticity as the platform is built in Bangalore for the world [93-105]. Describing the plan as the company’s most audacious in 18 years, he urged the audience to embrace bold AI-driven ideas [107-112]. The session concluded with thanks and a handover to Fujitsu CTO Vivek Mahajan [123].


Keypoints


Major discussion points


AI will democratize intelligence and reshape society – Naveen argues that AI will extend human lifespan, erase skill-based inequality, and drive unprecedented productivity growth, fundamentally altering how we live and work [6-13].


“Agentic commerce” is the new commercial architecture – Glance’s platform will move from generic personalized feeds to truly personal, AI-generated commerce experiences for each individual, built on a commerce intelligence graph, a generative AI experience model, and user-level models [22-36][37-50].


A living commerce context graph will deliver transparent, accountable shopping – By continuously modelling a user’s context, price- and brand-sensitivity, the system can optimise purchase paths, expose the reasoning behind recommendations, and foster trust through openness [55-66].


Economic scale and ecosystem transformation – Agentic commerce is projected to add roughly $3 trillion to India’s GDP by 2047, reshape supply chains, diminish traditional marketplaces, and enable hyper-localized manufacturing, creating a massive new market [70-88][89-91].


Overall purpose / goal


The keynote is intended to persuade the audience that AI-driven “agentic commerce” is both imminent and revolutionary, showcase Glance’s technical roadmap, and rally stakeholders, especially Indian entrepreneurs and technologists, to adopt this platform and capture the massive economic opportunity it promises.


Overall tone


The talk begins with an optimistic, visionary tone, emphasizing AI’s societal benefits. It then shifts to a technical, explanatory tone as the architecture of agentic commerce is detailed. Toward the end, the tone becomes inspirational and rallying, stressing audacity, national pride, and a call to action for the audience to seize the emerging opportunity. No major negative or skeptical tone appears; the energy stays consistently upbeat, growing more urgent in the closing remarks.


Speakers

Naveen Tewari


– Role/Title: Founder & CEO, InMobi (also leading the Glance platform) [S4]


– Areas of Expertise: Artificial Intelligence, commerce innovation, agentic commerce, entrepreneurship


Speaker 2


– Role/Title: Event moderator / host [S1]


– Areas of Expertise: Event facilitation, AI summit moderation


Additional speakers:


Vivek Mahajan


– Role/Title: CTO, Fujitsu


– Areas of Expertise: Technology leadership, AI and enterprise solutions


Full session report: comprehensive analysis and detailed insights

Naveen Tewari opened his keynote by stating that the purpose of his talk was to examine how artificial intelligence (AI) will fundamentally reshape global commerce [1-4]. He argued that AI will extend human longevity, potentially to 120 years, by eradicating diseases and enabling organ-creation technologies [6-8], and that it will democratise intelligence, turning virtually everyone into high-quality coders and thereby eliminating skill-based inequality [10-13]. This convergence of longevity and skill equality, he suggested, will trigger a disproportionate surge in economic productivity worldwide [13].


He then introduced “agentic commerce”, a new architecture in which AI agents act on behalf of individual consumers [22-24]. Glance, InMobi’s platform, was presented as the vehicle for this shift, moving from the current model of generic personalised feeds to truly personal feeds that are generated in real time for each user [26-30][32-35]. The ambition, he said, is to train a commerce model for a billion people over the coming years, embedding intelligence into every purchase journey [34-36].


The technical backbone of agentic commerce consists of three interlocking models. First, the Commerce Intelligence Graph functions as a universal knowledge graph that captures every element of commerce across the globe [37-43]. Second, a Generative AI Experience Model produces visual, personalised product feeds rather than simple text answers, creating a bespoke “pamphlet” for each shopper [44-48]. Third, an individual User Model is trained on each consumer’s data, ensuring that the generative output is uniquely tailored to that person [49-50]. Complementing these is the “living commerce context graph”, which continuously monitors a shopper’s real-time context, price sensitivity and brand preferences [55-57]; this enables purchase-path optimisation, which computes the most efficient route for a user by evaluating billions of potential paths with a single click [55-57].
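Glance has not published how its optimiser works; as a rough sketch, "purchase-path optimisation" can be read as a cheapest-path search over a graph of purchase options. The retailers, prices, and shipping costs below are hypothetical, and Dijkstra's algorithm stands in for whatever the real system uses:

```python
import heapq

def cheapest_purchase_path(graph, start, goal):
    """Dijkstra over a graph of purchase steps; edge weights are costs."""
    frontier = [(0.0, start, [start])]
    seen = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, step_cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(frontier, (cost + step_cost, nxt, path + [nxt]))
    return float("inf"), []

# Hypothetical options for one shopper: nodes are purchase stages,
# edges carry item price or shipping cost for each retailer/route.
graph = {
    "need": [("retailer_a", 52.0), ("retailer_b", 49.5), ("marketplace", 50.0)],
    "retailer_a": [("delivered", 0.0)],   # free shipping
    "retailer_b": [("delivered", 4.0)],   # cheaper item, paid shipping
    "marketplace": [("delivered", 2.5)],
}

cost, path = cheapest_purchase_path(graph, "need", "delivered")
```

Even in this three-option toy, the cheapest listed price (retailer_b) is not the cheapest total path once shipping is added; scaling the same idea to "billions of potential paths" is the claimed value of the context graph.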


Transparency was positioned as a core ethical pillar. Tewari explained that the reasoning engine behind recommendations will be openly exposed, allowing users to see exactly why a particular product was shown to them [58-66]. By making the “why” visible, the platform aims to build accountability and trust, countering the typical opacity of AI-driven recommendation systems [62-66].
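The "open reasoning engine" Tewari describes is not publicly specified; one minimal way to make each recommendation auditable is to attach the signals that produced it to the result itself. The signal names and weights below are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Recommendation:
    """A recommendation that carries its own reasoning trace."""
    item: str
    score: float
    reasons: list = field(default_factory=list)

def recommend_with_reasons(item, signals):
    # signals: hypothetical (feature, weight) pairs the engine used.
    rec = Recommendation(item=item, score=sum(w for _, w in signals))
    for feature, weight in signals:
        rec.reasons.append(f"{feature} contributed {weight:+.2f}")
    return rec

rec = recommend_with_reasons(
    "white shirt",
    [("matches recent searches", 0.6),
     ("within price band", 0.3),
     ("brand affinity", 0.1)],
)
```

Exposing the `reasons` list alongside each result is one concrete reading of "making the why visible": the user sees not just what was recommended but which signals, at what weight, drove the ranking.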


From an economic standpoint, he highlighted massive efficiency gains driven by consumer-intelligence-driven savings. Today, shoppers waste large sums of money throughout their journeys; intelligent, AI-guided decisions will dramatically reduce this waste, feeding savings back into the economy and creating a virtuous “flywheel” effect [67-71]. He noted that commerce accounts for roughly 25 % of global GDP [88-90] and projected that applying agentic AI to India alone could add about $3 trillion to the nation’s economy by 2047 [90-91].


The ripple effects extend to supply chains and market structures. Agentic commerce will render traditional marketplaces, currently aggregators of convenience, less relevant, while empowering individual and local brands that can be discovered directly by AI agents [73-80]. This decentralisation will also drive “agentic manufacturing”, where precise demand signals from consumers improve production efficiency and productivity [81-86]. The overall transformation, he suggested, will reshape the entire value chain from the consumer back to the factory [86-87].


Tewari invoked Indian cultural values, citing an Upanishadic principle that calls for truthfulness and authenticity [93-95]. He warned that recent digital trends have distorted truth, but argued that transparent AI agents can restore authenticity [96-100]. He recalled a recent comment by the Prime Minister describing a “man of vision”, which he linked to the need for transparency, accountability and a human-centred approach [31]. Building Glance in Bangalore for a global audience, he framed the endeavour as a distinctly Indian contribution to the world’s digital economy [101-105].


Concluding on a motivational note, Tewari described the plan as the most audacious initiative in the company’s 18-year history and urged the audience to embrace bold, AI-driven ideas [107-112]. He expressed excitement about returning to the energy of his twenties and called on listeners to seize the moment [113-119]. The session ended with a brief procedural hand-over, as Speaker 2 thanked Tewari for his keynote and invited Fujitsu CTO Vivek Mahajan to speak next [123].


Session transcript: complete transcript of the session
Naveen Tewari

Truly speaking, what I will talk about today is how commerce is going to change in the world. Look, internet is, you know, AI is changing many things. It is redefining paradigms. What is so exciting about AI? Think about it, right? Think about, you know, how AI is going to expand lifespan. We all understand that, like, there is a very high probability that every one of us in this room would probably extend ourselves to 120 years because diseases would get, you know, eradicated very differently. Organs will get created differently, right? So there is a lifespan argument to be made. The second big argument to be made is, you know, we will live very differently because in the world, in the future, it is very hard to see inequality anymore.

You know, today there is an engineer who is very good at coding and then there is one who is not. That’s going to disappear. You know, by the time you get to the end of the day, you are going to be in a box. five years from now you might actually see every one of us across our country become super high quality coders and that’s just one example of a thing so you’re actually going to have you’re going to live very democratically very differently where you’re going to essentially see the skill equal the skill equality which would lead to a very different way of living for all of us and the third is is a very disproportionate rate of growth of economic prosperity because of all the factors that the level of productivity that gets added into the whole world you are going to see a very different level of productivity so yes AI is exciting and that’s why I’m pretty I presume all of you are here to to listen and to learn and to imbibe it what I would really talk about is how this how the world is truly shifting when it comes to commerce look the in in the world of commerce there is a completely new architecture also being written.

You know, when intelligence becomes democratic, it changes ecosystems. What does intelligence getting democratically involved in commerce mean? It means that it is going to impact how we shop, how supply chains work, how manufacturing works, how we think about every aspect of it. And so that is completely getting rewritten as we look at the world in the future. At InMobi, we were one of the first companies, we were actually the first company that became a unicorn. We take a lot of pride in that because we built a product company from India to the globe. We take a pride in it because we worked in, we work in deep tech. We now take pride in actually taking on a global problem.

We now take pride in actually looking at the world from how we can bring agentic commerce to the world. Now agentic commerce and our platform is called Glance. Glance is all about bringing agentic commerce in the world in a way that’s never been done before. We’re very proud of how rapidly bringing intelligence in that world is changing everything. You know if you think about the product that we have truly built, we are moving from a world of personalized you know feeds to personal feeds. What does a personal feed mean? Think about commerce. Commerce in the world has always been driven across you know what I may like. But today if you think about agentic commerce it is actually centered around you.

We built a platform that’s launched globally. Our model of agentic commerce is launched globally. And what you would see here is personal feeds of consumers getting created in real time with products on them. What you’re seeing is a single model gets trained on single consumers. We plan to train a commerce model for a billion people over the next several years. What it would do is it would bring intelligence into the journey of commerce for every one of us. That is a superlatively advanced way of thinking about commerce, and it is a superlatively advanced way of thinking about how every element of it would actually change. So if you think about this in a slightly more architectural manner, we have created multiple models that actually come together to essentially create this agentic experience.

We have what’s called the commerce intelligence graph. Think of it the knowledge graph. The fact that there has to be a model that needs to know everything about every commerce element in the world. The fact that this is a white shirt is a world knowledge. You have to understand it. Then you have what we have built is a generative AI experience model. In that model, if you see the example, we are effectively creating an output. If you look at all the answer engines today, the output is effectively a text output. But when you think about commerce, you have to think about a visual output. How do you create a personalized pamphlet or a feed that is just for you using intelligence?

That is the generative experience. Unlike an answer engine's output, it is specific to you, and that is why the model has to be created at the user level. That is the third model: a user model that gets trained on you as an individual. Training the user model at the individual level is what differentiates this way of thinking about shopping from everything else out there. And so I feel very excited about what agentic commerce can do; let me tell you why.

All right. One of the most important elements of the agentic commerce era is going to be a living context graph. A living commerce context graph understands your context: which situation you are in, what you are looking for, what you are seeking in that moment. It also understands different levels of price intelligence. Today, when each of us goes out to buy something, we search for ways to buy it at the most efficient price. You can put that into the model, and it will find the most efficient buying pathway for you. That is purchase path optimization, based on your context, your price sensitivity and your brand sensitivity, and it can do this across millions and billions of potential pathways at the click of a button. That living context graph is very, very powerful, and our ability to navigate through it is what we are really trying to build. The other thing about contextual agentic commerce: the Prime Minister spoke about a vision. What is it?
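The purchase path optimization described above, scoring candidate buying pathways against a shopper's price sensitivity and brand sensitivity, can be pictured as a simple weighted-scoring search. The sketch below is purely illustrative: the `PurchasePath` structure, the weights and the scoring rule are assumptions for this write-up, not Glance's actual model.

```python
from dataclasses import dataclass

@dataclass
class PurchasePath:
    """One hypothetical way to buy an item: a seller, its price, brand fit."""
    seller: str
    price: float           # total cost of this pathway
    brand_affinity: float  # 0..1, how well the brand fits this user

def score(path: PurchasePath, price_sensitivity: float, brand_sensitivity: float) -> float:
    # Lower price is better, so the price term is negated; in the keynote's
    # framing the weights would come from the per-user model.
    return -price_sensitivity * path.price + brand_sensitivity * path.brand_affinity

def best_path(paths, price_sensitivity=1.0, brand_sensitivity=50.0):
    # The talk imagines this search running over millions of pathways in
    # real time; here it is a plain max over a small list.
    return max(paths, key=lambda p: score(p, price_sensitivity, brand_sensitivity))

paths = [
    PurchasePath("marketplace", price=100.0, brand_affinity=0.5),
    PurchasePath("local_brand", price=110.0, brand_affinity=0.9),
]
print(best_path(paths).seller)  # prints "local_brand"
```

Note how, for a brand-sensitive user, the toy search surfaces the smaller brand over the cheaper marketplace listing, which is the dynamic the keynote predicts for specialized producers.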

It is about being transparent, accountable and human-centric. Shopping has not been considered accountable; it is seen as some people selling you things. But with agentic commerce coming in, we have an opportunity to make the whole experience of commerce very transparent and very accountable. What does that mean? It means we will open up our model and make it transparent. In the world of AI, the reasoning engine will become transparent, so that everybody can understand why they were shown or recommended a certain product. That understanding of why is what creates transparency. One of our big principles is to make this very, very transparent and to lead with a very different perspective on building trust in the era of agentic commerce.

We also think about the fact that as consumer intelligence rises, it brings enormous efficiency, because billions of people will be making intelligent decisions. Think about the amount of money wasted today at the consumer level as you go down your commerce journey. If you use agentic commerce going forward, and Glance does that for you, the intelligence brought into decision-making leads to significant savings, which flow back into the economy. That creates a very powerful flywheel. If this happens at the level of billions of people, it creates a very different size of market, and that is a very powerful thing to create.

Similarly, the supply chains will become agentic. When the consumer experience becomes agentic, that transcends into the supply chain. What is going to happen? It is going to bring about the demise of the marketplaces as we know them. Marketplaces today are effectively an aggregation that gives you comfort, and they will become weaker. What will rise in their place? Individual brands, local brands, very specialized producers. They will come up because the agent will be able to go and find them. And that is great for our country, where you have entrepreneurs sitting in every nook and corner. Not just this.

Think about manufacturing. It is going to evolve into agentic manufacturing. Because you have an agentic experience at the consumer level, very different, very precise signals will be sent to the manufacturers, and therefore the productivity of the manufacturer changes drastically. Again, think of the chain from the consumer into the supply chain into manufacturing; that is a phenomenal change that is going to happen. Let me explain the scale of this. Commerce is about 25% of the world's GDP, and the same is true for India. If you think about the impact of agentic commerce at the India level alone, it is going to be of the order of $3 trillion over the next 20 years, by 2047.
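The $3 trillion figure can be sanity-checked with rough arithmetic. The inputs below are assumptions for illustration only: an India GDP of roughly $30 trillion by 2047 is an often-cited national aspiration, not a number from the talk.

```python
# Illustrative back-of-envelope check; all inputs are assumptions.
india_gdp_2047 = 30.0  # trillion USD, an often-cited 2047 aspiration
commerce_share = 0.25  # keynote: commerce is ~25% of GDP

commerce_2047 = india_gdp_2047 * commerce_share  # 7.5 trillion USD
claimed_impact = 3.0                             # trillion USD, from the keynote

# The claimed impact as a fraction of projected commerce.
print(f"{claimed_impact / commerce_2047:.0%} of projected commerce")  # prints "40% of projected commerce"
```

Under these assumed inputs, the claim amounts to agentic AI touching roughly 40% of India's projected commerce base, which gives a sense of how ambitious the projection is.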

That is what we are really seeking if you bring AI intelligence into the world of commerce, and that is what we at Glance are truly attempting to go after. Given that this is happening in India, we have a saying from our Upanishads in Sanskrit. What does it mean? In a very simple way, it says: be truthful, bring truth out. The digital economy of the last several years has led to distortion. Social media has led us into a world that is not very good; it has played with our minds and pushed us to think about things in the wrong way.

But we have an opportunity to make agents authentic. Once we make an agent transparent, authenticity becomes part of it, and I think that is how we need to lead the world differently. As a company coming out of India, we take that very, very seriously. So, in short, with Glance, as with InMobi, we are bringing AI into commerce. We are very proud that we are building this from Bangalore, from India, for the world. We truly want to impact every consumer on the planet: bring agentic commerce, bring intelligence into commerce, and impact the world's supply chains.

This event is all about audacity. We have not had a more audacious plan in the 18 years since I founded this company. But that is what this event does to you, and that is what technologies like AI do to you. It is time for us to rise and think about every possible idea in an audacious manner. I hope every one of you rises to that occasion and thinks about it that way. We are very excited about what we are trying to do. I am back to my ways of working in my twenties.

The energy is very different, the excitement is very different, and certainly the world is right now back in our palms. If we had all been very active in the mid-90s, we would have thought about the internet very differently. We were laggards on the internet; we came into the internet era about 10 years late, and the big things were already built by then. That is not the case with AI. We have an opportunity, not just in commerce but in every possible sector, to build global platforms. That is what we are going to aim for, and that is what we are going to try for. Thank you so much for being here.

Speaker 2

Thank you, Mr. Tewari, for the keynote address. For the next keynote, may I now invite Mr. Vivek Mahajan, CTO of Fujitsu. May I also request everybody to please settle down. Thank you.

Related Resources: knowledge base sources related to the discussion topics (5)
Factual Notes: claims verified against the Diplo knowledge base (4)
Confirmed (high confidence)

“Naveen Tewari opened his keynote by stating that the purpose of his talk was to examine how artificial intelligence (AI) will fundamentally reshape global commerce.”

The knowledge base records that Naveen Tewari’s keynote was focused on how AI will fundamentally transform global commerce [S4] and [S5].

Confirmed (high confidence)

“He argued that AI will extend human longevity—potentially to 120 years—by eradicating diseases and enabling organ‑creation technologies.”

The source notes that Tewari outlined AI-driven extensions of human lifespan to potentially 120 years through medical advances [S4] and [S5].

Confirmed (high confidence)

“He introduced “agentic commerce”, a new architecture in which AI agents act on behalf of individual consumers.”

Other references describe future AI agents acting on behalf of individuals and an Agentic Commerce Protocol enabling agents to conduct transactions for users [S20] and [S22].

Additional Context (medium confidence)

“A “living commerce context graph” continuously monitors a shopper’s real‑time context, price sensitivity and brand preferences.”

The knowledge base explains that a living commerce context graph understands a user’s current context, what they are looking for, and related preferences, providing the underlying concept for such monitoring [S24] and [S15].

External Sources (25)
S1
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Speaker 2: Role appears to be event moderator or host. Area of expertise and specific title not mentioned.
S2
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S3
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S4
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — -Speaker 1: Role appears to be event moderator/host (introducing speakers and managing the event flow) All right. One o…
S5
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Impact:This comment elevates the discussion from product features to national economic strategy. It positions agentic co…
S6
Comprehensive Report: Preventing Jobless Growth in the Age of AI — AI democratizes access to expertise and disproportionately benefits lower-skilled workers by providing them with capabil…
S7
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S8
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 2 formally welcomes the next presenter, thanks the current speaker for his remarks, and introduces Mr. Naveen Ti…
S9
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 2 serves as a moderator, providing a brief transition between presentations by thanking the previous speaker and…
S10
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — -Speaker 1: Role appears to be event moderator/host (introducing speakers and managing the event flow) -Vivek Mahajan: …
S11
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Naveen Tewari, founder of InMobi, delivered a keynote address focused on how artificial intelligence will fundamentally …
S12
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — Truly speaking, what I will talk about today is how commerce is going to change in the world. Look, internet is, you kno…
S13
AI: The Great Equaliser? — Artificial intelligence (AI) has the potential to revolutionise various aspects of global society. It can democratise he…
S14
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S15
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-naveen-tewari-founder-ceo-inmobi-india-ai-impact-summit — All right. One of the most important elements of the agentic commerce era is going to be a living context graph. A livin…
S16
9821st meeting — Yann Lecun argues that AI will enhance human intelligence and speed up scientific advancements. This could lead to signi…
S17
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — This discussion featured presentations on using artificial intelligence to combat aging and develop life-extending techn…
S18
The Global Economic Outlook — Shanmugaratnam points to the historical advantage of having China and other developing countries enter the global labor …
S19
!” — To summarize, one would normally expect technological change to increase youth wage inequality – and to a lesser extent …
S20
From Innovation to Impact_ Bringing AI to the Public — Sharma predicts a future where AI agents will act on behalf of individuals, communicating with other AI agents to accomp…
S21
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — First of all, just from a back -end usage, like my colleague spoke about, I think finance typically deals with large vol…
S22
U.S. AI Standards Shaping the Future of Trustworthy Artificial Intelligence — Michael Brown from OpenAI, substituting for George Osborne and noting his relative newness to the company, discussed the…
S23
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Owen Larter from Google DeepMind provided an industry perspective on the technical requirements for robust AI assurance,…
S24
https://app.faicon.ai/ai-impact-summit-2026/keynote-by-naveen-tewari-founder-ceo-inmobi-india-ai-impact-summit — All right. One of the most important elements of the agentic commerce era is going to be a living context graph. A livin…
S25
Postal network as enabler for e-commerce and trade facilitation (UPU) -UPU TradePost Forum — Furthermore, a neutral stance is taken regarding Africa’s representation in the statistics, with a request for clarifica…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
N
Naveen Tewari
16 arguments · 152 words per minute · 2226 words · 877 seconds
Argument 1
Lifespan extension to ~120 years via disease eradication (Naveen Tewari)
EXPLANATION
Tewari claims that AI will dramatically increase human lifespan, potentially reaching 120 years, by enabling new ways to eradicate diseases and create organs. This longer life expectancy is presented as one of the major societal benefits of AI.
EVIDENCE
He describes AI’s ability to expand lifespan, noting a high probability that everyone could live to 120 years because diseases would be eradicated differently and organs would be created in new ways [6-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari claims AI will enable humans to live up to 120 years by eradicating diseases, a point reiterated in the keynote where he discusses extending lifespan to potentially 120 years through medical advances [S4][S5].
MAJOR DISCUSSION POINT
Lifespan extension
Argument 2
Democratization of coding skills, reducing inequality (Naveen Tewari)
EXPLANATION
Tewari argues that AI will level the playing field by making advanced coding skills accessible to everyone, thereby eliminating a major source of inequality. He envisions a future where all individuals become high‑quality coders.
EVIDENCE
He explains that today there is a divide between engineers who can code and those who cannot, but this gap will disappear as AI enables everyone to become super high-quality coders within a few years [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote highlights AI’s role in leveling the playing field by making advanced coding skills accessible to everyone, supporting the claim of skill democratization [S5].
MAJOR DISCUSSION POINT
Skill democratization
Argument 3
Disproportionate boost to global economic productivity (Naveen Tewari)
EXPLANATION
Tewari states that AI will drive a disproportionate increase in economic prosperity by dramatically raising global productivity. The added productivity from AI is positioned as a catalyst for faster economic growth.
EVIDENCE
He mentions a “very disproportionate rate of growth of economic prosperity because of all the factors that the level of productivity that gets added into the whole world” [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari describes a “very disproportionate rate of growth of economic prosperity” driven by AI-added productivity, which is echoed in the summit transcript [S4][S5].
MAJOR DISCUSSION POINT
Economic productivity surge
Argument 4
Shift from personalized feeds to personal feeds centered on each user (Naveen Tewari)
EXPLANATION
Tewari describes a transition from generic personalized feeds to truly personal feeds that are built around the individual user. This shift is framed as a core feature of agentic commerce.
EVIDENCE
He says the product moves “from a world of personalized feeds to personal feeds” and that agentic commerce is centered around you, illustrating the change with examples of commerce driven by personal relevance [26-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He explains the move from generic personalized feeds to truly personal feeds built around the individual, a transition detailed in the keynote description [S5][S4].
MAJOR DISCUSSION POINT
Personal feed transformation
Argument 5
Goal to train a commerce model for a billion users, embedding intelligence in every purchase journey (Naveen Tewari)
EXPLANATION
Tewari outlines an ambitious plan to develop a commerce AI model that will serve a billion people, integrating intelligence into each consumer’s buying process. This scale is presented as a cornerstone of the Glance platform.
EVIDENCE
He notes that a single model is trained on single consumers and that they plan to train a commerce model for a billion people over the next several years [33-35].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The address outlines an ambition to train a commerce AI model for a billion people, a concrete target mentioned in both source excerpts [S5][S4].
MAJOR DISCUSSION POINT
Scaling commerce AI
Argument 6
Commerce Intelligence Graph as a universal knowledge graph for all commerce elements (Naveen Tewari)
EXPLANATION
Tewari introduces the Commerce Intelligence Graph, likening it to a knowledge graph that contains information about every element of commerce worldwide. This graph underpins the agentic commerce architecture.
EVIDENCE
He describes the commerce intelligence graph as “the knowledge graph” that must know everything about every commerce element in the world, emphasizing its role as a world-wide knowledge base [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari introduces the “commerce intelligence graph,” likening it to a knowledge graph that must know everything about every commerce element worldwide, as described in the keynote [S4].
MAJOR DISCUSSION POINT
Universal commerce knowledge base
Argument 7
Generative AI Experience Model delivering visual, personalized product feeds (Naveen Tewari)
EXPLANATION
Tewari explains that the generative AI experience model creates visual outputs—personalized product pamphlets or feeds—rather than just text. This model generates a unique shopping experience for each user.
EVIDENCE
He contrasts traditional answer engines that output text with a generative experience that produces visual, personalized product feeds, describing how the model creates a personalized pamphlet for each user [43-48].
MAJOR DISCUSSION POINT
Visual generative commerce
Argument 8
Individual User Model trained per consumer to personalize shopping experience (Naveen Tewari)
EXPLANATION
Tewari highlights that a separate user model is trained for each individual, allowing the system to tailor shopping experiences uniquely. This per‑user training differentiates agentic commerce from other platforms.
EVIDENCE
He states that the model must be created at a user level and that training the user model at an individual level is what differentiates the way one thinks about shopping [49-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He notes that a dedicated user-level model is trained on each consumer’s data to deliver uniquely tailored experiences, a detail provided in the summit transcript [S4].
MAJOR DISCUSSION POINT
Per‑user model personalization
Argument 9
Real‑time context graph captures user intent, price and brand sensitivity to optimize purchase paths (Naveen Tewari)
EXPLANATION
Tewari describes a living commerce context graph that continuously understands a user’s current context, price sensitivity, and brand preferences, enabling optimal purchase‑path recommendations. This graph operates in real time across billions of potential pathways.
EVIDENCE
He explains that the living context graph understands your context, price intelligence, brand sensitivity, and can find the most efficient buying pathway for you at the click of a button, processing millions of pathways instantly [55-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “living context graph” that understands user context, price intelligence and brand sensitivity in real time is explained in the keynote [S4].
MAJOR DISCUSSION POINT
Dynamic context‑aware purchasing
Argument 10
Transparent reasoning engine explains why specific products are recommended, building trust (Naveen Tewari)
EXPLANATION
Tewari asserts that making the AI’s reasoning transparent will allow users to understand why a product is shown, thereby fostering trust and accountability in commerce. Transparency is positioned as a core ethical principle.
EVIDENCE
He says the model will be opened up, the reasoning engine will become transparent so everyone can understand why a product was recommended, and that this understanding creates transparency and trust [62-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari emphasizes a transparent reasoning engine that reveals why products are recommended, a point highlighted in the presentation [S4].
MAJOR DISCUSSION POINT
Transparency and trust
Argument 11
Projected $3 trillion impact on India’s economy by 2047 from agentic commerce (Naveen Tewari)
EXPLANATION
Tewari quantifies the economic potential of agentic commerce in India, estimating a $3 trillion contribution to the Indian economy by 2047. This figure underscores the macro‑economic significance of the technology.
EVIDENCE
He notes that commerce accounts for 25% of world GDP, and that the impact of agentic commerce in India could be on the order of $3 trillion over the next 20 years, by 2047 [88-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He quantifies the economic contribution of agentic commerce in India at roughly $3 trillion by 2047, as stated in the keynote summary [S4].
MAJOR DISCUSSION POINT
Economic impact projection
Argument 12
Consumer‑level intelligence creates massive savings, generating a virtuous economic flywheel (Naveen Tewari)
EXPLANATION
Tewari argues that AI‑driven consumer decisions will reduce wasteful spending, leading to significant savings that flow back into the economy and create a self‑reinforcing growth cycle. This efficiency is presented as a key benefit of agentic commerce.
EVIDENCE
He points out the amount of money wasted at the consumer level, explains that agentic commerce brings intelligence into decision-making leading to significant savings, which then generate a powerful economic flywheel [68-70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The talk describes how consumer-level AI intelligence reduces wasteful spending, generating savings that fuel a self-reinforcing economic flywheel [S4].
MAJOR DISCUSSION POINT
Savings‑driven economic flywheel
Argument 13
Traditional marketplaces weaken; individual and local brands flourish as agents locate them (Naveen Tewari)
EXPLANATION
Tewari predicts that the rise of agentic commerce will diminish the role of large marketplaces and instead empower individual and local brands, as AI agents can directly connect consumers with niche producers. This shift is framed as a positive outcome for entrepreneurship.
EVIDENCE
He states that marketplaces will become weaker and that there will be a rise of individual, local, and specialized producers because the agent will be able to find them, benefiting entrepreneurs across the country [73-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari predicts that large marketplaces will weaken while local and niche brands thrive because agents can directly connect consumers to them, as outlined in the keynote [S4].
MAJOR DISCUSSION POINT
Marketplace disruption and brand decentralization
Argument 14
Precise demand signals to manufacturers boost productivity through agentic manufacturing (Naveen Tewari)
EXPLANATION
Tewari describes how agentic commerce will feed precise, real‑time demand signals to manufacturers, enabling them to adjust production efficiently and dramatically increase productivity. This creates a seamless link from consumer intent to manufacturing.
EVIDENCE
He explains that agentic experience at the consumer level provides precise signals to manufacturers, which in turn changes manufacturer productivity drastically, illustrating a chain from consumer to supply chain to manufacturing [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He explains that agentic commerce provides precise, real-time demand signals to manufacturers, dramatically improving productivity, a claim made in the summit transcript [S4].
MAJOR DISCUSSION POINT
Agentic manufacturing productivity
Argument 15
Commitment to truthfulness and authentic agents through transparency, reflecting Indian values (Naveen Tewari)
EXPLANATION
Tewari ties the ethical stance of the platform to Indian cultural values, emphasizing truthfulness and authenticity. He argues that transparent agents will embody these values, countering the distortions of the recent digital economy.
EVIDENCE
He references an Upanishadic saying about truthfulness, critiques the distortion caused by social media, and claims that making agents transparent will make them authentic, aligning with Indian values [93-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari links transparency and authenticity of AI agents to Indian cultural values of truthfulness, referencing an Upanishadic saying, as noted in the keynote [S4].
MAJOR DISCUSSION POINT
Ethical authenticity rooted in Indian values
Argument 16
Audacious vision to build a global AI‑driven commerce platform from India (Naveen Tewari)
EXPLANATION
Tewari expresses a bold ambition to create a worldwide AI‑powered commerce platform originating from India, highlighting national pride and the scale of the undertaking. He frames this as the most audacious plan in the company’s 18‑year history.
EVIDENCE
He states that they are building AI in commerce from Bangalore, for the world, and describes the plan as the most audacious in the company’s history, emphasizing the goal of a global platform [102-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He declares the ambition to build a worldwide AI-powered commerce platform from Bangalore, emphasizing its audacity, which is documented in the address [S4][S5].
MAJOR DISCUSSION POINT
Ambitious global platform from India
S
Speaker 2
1 argument · 63 words per minute · 33 words · 31 seconds
Argument 1
Acknowledgement of Naveen Tewari’s keynote and invitation to Vivek Mahajan for the next address (Speaker 2)
EXPLANATION
Speaker 2 thanks Mr. Tewari for his keynote, acknowledges his contribution, and formally invites Mr. Vivek Mahajan, the CTO of Fujitsu, to deliver the next keynote address. The speaker also asks the audience to settle down.
EVIDENCE
The speaker says, “thank you Mr. Tiwari for the keynote address… may I now invite Mr. Vivek Mahajan CTO of Fujitsu… please settle down” [123].
MAJOR DISCUSSION POINT
Transition to next speaker
AGREED WITH
Naveen Tewari
Agreements
Agreement Points
Speaker 2 thanked Naveen Tewari for his keynote and invited the next speaker, indicating shared appreciation of the address
Speakers: Naveen Tewari, Speaker 2
Acknowledgement of Naveen Tewari’s keynote and invitation to Vivek Mahajan for the next address (Speaker 2)
Speaker 2 expressed gratitude for Tewari’s keynote and formally transitioned to the next presenter, showing agreement with the value of the presented content [123]
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects the standard role of a conference moderator who provides transitions by thanking the previous speaker and introducing the next keynote, as documented in the summit’s protocol descriptions [S8][S9].
Similar Viewpoints
None identified
Unexpected Consensus
None identified
Overall Assessment

The only point of consensus between the two speakers is procedural – Speaker 2’s acknowledgment and hand‑over to the next speaker. No substantive thematic agreement is evident because Speaker 2 does not articulate any of the detailed arguments presented by Tewari.

Minimal consensus limited to ceremony; this suggests that the discussion did not generate shared positions on the substantive AI‑driven commerce topics.

Differences
Different Viewpoints
None identified
Unexpected Differences
None identified
Overall Assessment

The exchange consists of a single, extensive presentation by Naveen Tewari followed by a brief procedural acknowledgment from Speaker 2. No substantive counter‑arguments or conflicting viewpoints are presented; the second speaker merely thanks the presenter and moves the agenda forward.

Minimal to none – the interaction is collaborative and procedural, suggesting a consensus or at least no overt conflict, which implies smooth thematic continuity for the session.

Partial Agreements
Speaker 2 explicitly thanks Naveen Tewari for his keynote and signals support for the continuation of the program, indicating agreement with the value of the presented ideas, but does not elaborate on any substantive content or propose an alternative approach [123]
Speakers: Naveen Tewari, Speaker 2
Acknowledgement of Naveen Tewari’s keynote and invitation to Vivek Mahajan for the next address (Speaker 2)
Takeaways
Key takeaways
AI is expected to dramatically extend human lifespan, democratize coding skills, and boost global economic productivity.
Introduction of "agentic commerce" via the Glance platform, shifting from generic personalized feeds to truly personal, AI-driven shopping experiences for each individual.
The agentic commerce architecture comprises three core models: a Commerce Intelligence Graph (a universal knowledge graph of commerce elements), a Generative AI Experience Model (producing visual, personalized product feeds), and an Individual User Model (trained per consumer).
A Living Context Graph captures real-time user intent, price and brand sensitivity, enabling optimal purchase-path recommendations and transparent reasoning for each recommendation.
Projected economic impact in India of roughly $3 trillion by 2047, driven by consumer-level intelligence that creates savings and a virtuous economic flywheel.
Agentic commerce is expected to weaken traditional marketplaces while empowering individual and local brands, and to provide precise demand signals that drive "agentic manufacturing" and higher producer productivity.
The initiative emphasizes ethical principles of truthfulness, authenticity, and transparency, rooted in Indian values, and aims to build a global AI-driven commerce platform from India.
The keynote concluded with a call for audacious thinking and a handoff to the next speaker, Vivek Mahajan.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
AI will democratize skills, turning everyone into high‑quality coders and creating skill equality that will fundamentally change how we live.
It challenges the prevailing view that technical expertise will remain a scarce resource and suggests a future where AI levels the playing field, reshaping societal structures and economic opportunities.
This idea set the stage for the rest of the keynote, moving the conversation from generic AI hype to a concrete societal transformation, and introduced the premise that commerce will be reshaped because the consumer base itself will become uniformly empowered.
Speaker: Naveen Tewari
We are moving from ‘personalized feeds’ to ‘personal feeds’ – agentic commerce that is centered around the individual rather than generic personalization.
It reframes personalization as a deeper, user‑level agency, implying that AI will not just recommend but actively act on behalf of each consumer, a shift from passive to proactive commerce.
This pivot introduced the core concept of ‘agentic commerce’, steering the discussion toward the technical architecture (commerce intelligence graph, generative experience model) that would enable such a paradigm.
Speaker: Naveen Tewari
One of the most important elements of the agentic commerce era is a ‘living commerce context graph’ that understands a user’s context, price sensitivity, brand sensitivity and can optimise purchase paths across billions of possibilities in real time.
It adds concrete technical depth to the earlier abstract vision, showing how AI can operationalise context‑aware decision‑making at massive scale.
This comment deepened the conversation by moving from high‑level vision to a specific, implementable component, prompting listeners to consider feasibility, data requirements, and the scale of computation involved.
Speaker: Naveen Tewari
We will make the reasoning engine transparent so everyone can understand why a particular product was shown or recommended – transparency and accountability become the foundation of trust in agentic commerce.
It directly addresses a major criticism of AI systems—opacity—by proposing openness as a design principle, linking technical architecture to ethical considerations.
This shifted the tone from purely commercial optimism to a discussion of responsibility, signaling that the company’s strategy includes governance and could influence how regulators and partners view the technology.
Speaker: Naveen Tewari
Commerce accounts for about 25% of global GDP; applying agentic AI to India alone could generate a $3 trillion impact by 2047.
It quantifies the economic stakes, turning abstract ideas into a measurable opportunity, and underscores the strategic importance of the initiative for investors and policymakers.
By providing a concrete financial projection, the comment anchored the earlier visionary statements, likely prompting the audience to evaluate the business case and scale of the proposed platform.
Speaker: Naveen Tewari
The rise of agentic commerce will weaken traditional marketplaces and empower individual and local brands, giving entrepreneurs across the country a direct channel to consumers.
It challenges the entrenched belief that large marketplaces will dominate the future, suggesting a disruptive shift toward decentralised brand‑to‑consumer relationships.
This observation introduced a potential market‑structure disruption, prompting listeners to rethink competitive dynamics and consider new opportunities for small‑scale producers.
Speaker: Naveen Tewari
Overall Assessment

The keynote’s momentum was driven by a series of pivotal statements that moved the audience from a broad, optimistic view of AI to a concrete, actionable vision of ‘agentic commerce.’ Each thought‑provoking comment introduced a new layer—social democratization of skills, a redefinition of personalization, a technical architecture for context‑aware decision making, a commitment to transparency, a massive economic forecast, and a disruptive market‑structure hypothesis. Together, they transformed the discussion from speculative hype into a multi‑dimensional roadmap, shaping the audience’s perception of both the opportunities and responsibilities inherent in deploying AI at commerce scale.

Follow-up Questions
How will AI contribute to extending human lifespan to 120 years?
Tewari mentions AI’s potential to eradicate diseases and create organs, raising the need to explore concrete pathways and timelines for lifespan extension.
Speaker: Naveen Tewari
How will democratization of intelligence eliminate skill inequality, especially in coding?
He predicts everyone becoming high‑quality coders, prompting investigation into the mechanisms, education models, and societal impacts of such skill equalization.
Speaker: Naveen Tewari
What are the technical and ethical challenges of training a personalized commerce model for a billion users?
Tewari states the goal of training a commerce model for a billion people, which raises questions about scalability, data quality, bias, and privacy.
Speaker: Naveen Tewari
How will the Commerce Intelligence Graph be constructed to capture all commerce elements globally?
He references a graph that ‘needs to know everything about every commerce element,’ requiring research into data sources, ontology design, and maintenance.
Speaker: Naveen Tewari
How will generative AI experience models produce visual personalized feeds, and what data is required?
The shift from text answers to visual outputs for commerce needs clarification on model architecture, training data, and rendering pipelines.
Speaker: Naveen Tewari
How will the Living Commerce Context Graph understand real‑time user context, price sensitivity, and brand sensitivity to generate optimal purchase paths?
Tewari describes a graph that optimizes purchase pathways across millions of options, necessitating research into context detection, pricing intelligence, and optimization algorithms.
Speaker: Naveen Tewari
What mechanisms will ensure transparency and accountability in agentic commerce recommendations (explainability of the reasoning engine)?
He promises a transparent reasoning engine, prompting inquiry into explainable AI techniques, user‑facing explanations, and auditability.
Speaker: Naveen Tewari
How will authenticity of AI agents be measured and maintained?
Tewari links transparency to authenticity, raising the need for metrics, validation processes, and governance to keep agents trustworthy.
Speaker: Naveen Tewari
What is the projected economic impact of agentic commerce in India (approximately $3 trillion by 2047) and how will it be quantified?
He cites a $3 trillion figure, which requires rigorous economic modelling, baseline assumptions, and impact attribution studies.
Speaker: Naveen Tewari
How will agentic commerce lead to the decline of traditional marketplaces and the rise of individual/local brands?
The claim that marketplaces will become weaker and local brands will flourish calls for market‑structure analysis and case‑study research.
Speaker: Naveen Tewari
How will agentic manufacturing receive precision signals from consumer agents and improve productivity?
He suggests consumer‑level intelligence will feed manufacturers, necessitating investigation into signal design, integration with production systems, and ROI.
Speaker: Naveen Tewari
What data infrastructure is needed to support real‑time personalization at scale for billions of users?
Scaling personalized feeds to a billion users implies massive data pipelines, storage, and compute resources that must be studied.
Speaker: Naveen Tewari
How will privacy and data security be handled when training user‑level models?
Training models on individual consumer data raises concerns about consent, anonymization, and protection against breaches.
Speaker: Naveen Tewari
What regulatory frameworks are required for transparent, AI‑driven commerce?
Ensuring accountability and transparency will likely need new policies; research is needed on appropriate regulations and compliance mechanisms.
Speaker: Naveen Tewari

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the Future STPI Global Partnerships & Startup Felicitation 2026


Session at a glance

Summary

The AI Impact Summit, hosted by the Software Technology Parks of India (STPI), brought together government officials, industry leaders, and startup founders to discuss building a robust AI startup ecosystem [1-2][3-9]. STPI Director Rakesh Dubey introduced a unique online portal that aggregates government policies, hosts incubator and accelerator contests, and offers a product marketplace and hiring hub to support startups throughout their lifecycle [11-19][20-21]. Bala of Strat Infinity cited the projected $15.7 trillion global AI contribution by 2030 and forecast that India’s GCC sector would employ 3.5 million people and generate $150 billion in software exports, emphasizing that integration into global organizations, not just model development, determines scale [36-43]. He argued that the current gap lies in institutionalizing AI through co-creation models that provide startups with real data, infrastructure, and enterprise validation, positioning GCCs as the bridge between innovation velocity and enterprise scale [50-57][60-63]. Geetika Dayal of Thai Delhi NCR described a collaborative ecosystem built on five structural pillars (knowledge building, resource access, market validation, funding, and ethical AI) and announced an MOU with STPI to expand joint accelerators and benchmarking initiatives [102-103][104-107]. NPC Director Neerja Sekhar presented a three-part framework of trust, testbeds, and traction for AI startups, stressing that productivity gains require responsible, secure, and transparent AI solutions and that partnership with STPI will accelerate this agenda [140-148][149-162]. STPI Director General Arvind Kumar outlined the organization’s 70 centers across tier-2 and tier-3 cities, its incubation and seed-funding services, and reiterated that safe, trusted, responsible, and ethical AI are essential for scaling innovations [175-183][188-196].
The ceremony featured the exchange of MOUs between STPI, NPC, and Thai Delhi NCR, followed by the felicitation of startups recognized for revenue growth, funding, employment generation, women participation, and AI-driven impact [207-212][214-227][228-260]. Founders such as Devika Chandrasekaran of Fuselage Innovations and the team behind EZO5 Solutions recounted how STPI’s early programs validated their technologies, enabled market access, and helped them achieve national and international milestones [279-287][332-338]. The event concluded with a vote of thanks that highlighted the collective role of STPI, NPC, GCCs, and startups in creating a coordinated, collaborative AI ecosystem capable of delivering scalable, responsible innovation for India’s economy [353-360][366-369]. The co-creation model, as described by Bala, is being operationalized through partnerships such as the FinBlue COE in Chennai, demonstrating how startups can move from pilot to production within a controlled sandbox [61-63]. Participants agreed that collaboration over competition, reinforced by policy alignment and ecosystem partnerships, is the key driver for scaling AI innovation across India [103-106][124-126].


Keypoints


Major discussion points


STPI’s digital platform as a one-stop ecosystem – The portal aggregates incubators, accelerators, contests, a product marketplace and a hiring hub, enabling startups to post products, job openings and receive end-to-end support online [11-19].


Global Capability Centers (GCCs) as the bridge for AI startups – GCCs provide real data, infrastructure, and enterprise validation; the “co-creation” model lets startups move from pilot to production within a sandbox, turning innovation velocity into scalable enterprise impact [36-63].


Collaborative ecosystem pillars needed to scale AI innovation – Five structural pillars are highlighted: knowledge & capability building, resource access, market validation, funding, and ethical/responsible AI; coordinated action among STPI, TI, GCCs, government and corporates is deemed essential [102-108].


Framework for trustworthy AI deployment – Neerja Sekhar proposes a three-part approach: trust (privacy, security, accountability), testbeds (real-world sandboxes, reference architectures) and traction (moving pilots to full-scale implementation) [140-148].


Celebration of successful AI startups and their impact – The ceremony recognises startups across revenue, funding, employment and AI-driven outcomes; founders share concrete results such as Fuselage Innovations’ drone deployment for 10,000+ farmers and EZO5’s AI-powered oncology platform that processed over one million scans [214-277][332-338].


Overall purpose / goal


The session was convened to scale AI innovation by building a robust, collaborative AI startup ecosystem. It brought together government officials, industry leaders, and founders to (i) outline policy and infrastructure support (STPI platform, GCC integration), (ii) articulate strategic frameworks (trust-testbed-traction, ecosystem pillars), (iii) formalise partnerships through MOUs, and (iv) showcase and reward startups that have already demonstrated measurable impact.


Overall tone and its evolution


– The discussion began with a formal, welcoming tone from the hosts [1-7].


– It shifted to an informative and visionary tone as speakers described platforms, market potential and strategic models [11-19][36-63].


– The tone then became strategic and collaborative, emphasizing policy pillars and partnership imperatives [90-108][140-148].


– Finally, it moved to a celebratory and inspirational tone, highlighting startup achievements, personal testimonies, and concluding with gratitude and calls for continued cooperation [214-277][332-338][353-363].


Throughout, the tone remained optimistic and constructive, consistently reinforcing the message that collective effort and structured support are key to scaling AI innovation in India.


Speakers

Speakers from the provided list


Meenal Gupta – Founder, EZO5 Solutions; expertise in AI-powered oncology treatment planning platforms (Imagix AI)[S4].


Noor Fatma – Co-founder, EZO5 Solutions; focuses on AI-driven precision oncology and imaging solutions[S4].


Devika Chandrasekaran – Co-founder, Fuselage Innovations (formerly “Useless Innovations”); specializes in drone technology for agriculture, defence and disaster-management applications[S7].


Dr. Soumya – Representative of TectoCell; builds AI-powered diagnostic solutions at the intersection of radiology and artificial intelligence[transcript].


Vaani Kapoor – Manager, Software Technology Parks of India (STPI); co-host/manager of the session[S12].


Shelly Sharma – Deputy Director, STPI; host of the event[S15].


Ms. Neerja Sekhar – Director General, National Productivity Council (NPC); chief guest delivering the special address[S17].


Praveen Kumar – Joint Director, STPI; presented the vote of thanks and several mementos[S18].


Kirty Datar – Representative, Caneboard Solutions Private Limited; speaker during the startup felicitation segment[S21].


Milind Datar – Representative, Caneboard Solutions Private Limited; listed alongside Kirty Datar in the ceremony[S24].


Ms. Geetika Dayal – Director General, Thai Delhi NCR (STPI region); addressed the entrepreneurial and startup ecosystem[transcript].


Sh. Bala MS – CEO, Strat Infinity; delivered industry perspective on Global Capability Centers (GCCs) and AI innovation[transcript].


Sh. Rakesh Dubey – Director, Startups and Innovation, STPI; gave the opening address and highlighted STPI’s portal features[transcript].


Arvind Kumar – Director General, STPI; provided the keynote address on scaling innovation and responsible AI[transcript].


Arita Dalan – Regional Head (North), SecureTech IT Solutions Private Limited; spoke about cybersecurity solutions and STPI collaboration[S35].


Additional speakers (not in the provided list)


Ashok Gupta – Director, STPI Gurugram; presented a memento to the NPC Director General during the ceremony[transcript].


Nikhil Panchabai – Director, National Productivity Council (NPC); participated in the MOU exchange ceremony[transcript].


Sanjay Gupta – Senior Director, STPI; invited to the dais and involved in multiple MOU exchanges[transcript].


Atul Kumar Singh – Additional Director, STPI; presented mementos to various dignitaries and speakers[transcript].


Shri Rakesh Dubia – (variant spelling of Sh. Rakesh Dubey) – Director, Startups and Innovation, STPI; referenced during the opening remarks[transcript].



Full session report

The AI Impact Summit opened with a formal welcome from Shelly Sharma, Deputy Director, Software Technology Parks of India (STPI), who thanked the dignitaries and audience for joining a session on “Scaling Innovation, Building a Robust AI Startup Ecosystem” [1-2]. Co-host Vaani Kapoor then introduced the chief guest, Ms Neerja Shekhar, IAS, Director-General, National Productivity Council (NPC), and a roster of senior officials from STPI, the Global Capability Centre (GCC) community and the startup ecosystem, emphasizing that the day’s agenda would blend government policy, industry insight and founder experience to shape a future-ready AI landscape [3-10].


Rakesh Dubey, Director of Startup and Innovation, STPI, opened with a showcase of the organisation’s newly-launched digital portal, describing it as a “one-of-its-kind” platform that aggregates incubators, accelerators, state-government initiatives and academic partners [11-13]. The portal also serves as a live repository of evolving government policies [12-13] and hosts contests from any incubator worldwide, managing applications, screening and publishing results while hand-holding startups through their entire lifecycle online [13-15]. Recent enhancements include a product marketplace where startups can exhibit offerings and a hiring hub that matches niche talent with startup needs; both features aim to reduce friction in talent acquisition and market exposure [16-19]. Dubey noted that the portal will continue to evolve with additional capabilities, positioning it as a critical national and global resource for innovation [20-21][14-16].


Bala MS, Strat Infinity, presented a macro-economic forecast that AI could contribute US $15.7 trillion to the global economy by 2030, with roughly $5 trillion stemming from productivity gains [36-40]. He projected India’s GCC ecosystem to expand from 1,900 to over 3,500 centres by 2030, supporting 3.5 million employees and generating about $150 billion in software exports [41-43]. While acknowledging the usual focus on model development, compute power and funding, Bala argued that true scale is achieved when AI is integrated into global organisations, a gap he identified as “institutionalisation” rather than technology [44-48][50-53]. He positioned GCCs as the bridge that supplies real data, infrastructure and enterprise validation, enabling a co-creation model that shortens the pilot-to-production cycle through sandbox environments, domain expertise and production-grade gateways [54-63]. Bala further highlighted India’s unique talent density and cost advantage, which together amplify the economic multiplier effect of GCCs and support a strategic partnership with STPI to institutionalise AI at national scale [64-71][72-78].


Geetika Dayal, Director-General, Thai Delhi NCR, outlined a collaborative framework built on five structural pillars (knowledge and capability building, resource access, market validation, funding, and ethical/responsible AI) required to realise the vision [102-108]. She announced an MOU with STPI that will expand joint accelerators, scale the Samarth programme, launch more corporate-challenge initiatives and produce AI-benchmarking reports, thereby moving from isolated programmes to a unified strategy [104-107][94-100]. Dayal stressed that when STPI, TI, GCCs, government and corporates align around these pillars, “scale becomes inevitable” [104-106].


Ms Neerja Shekhar, IAS, Director-General, NPC, offered a concise three-part framework for startups and ecosystem builders: trust, testbeds, and traction [140-148]. Trust, she argued, is the “entry ticket” and must embed privacy, cyber-security, transparency and accountability [141-145]; testbeds provide real-world sandboxes, labs and reference architectures that bridge promise and proof [145-147]; traction converts pilots into full-scale deployments [147-149]. She linked this framework to the broader national agenda, noting that NPC’s role is to extend productivity metrics to include reliability, safety and responsible performance, and that a partnership with STPI will accelerate responsible digital transformation for MSMEs, clusters and AI-enabled startups [149-162][130-138][139-148].


Arvind Kumar, Director-General, STPI, outlined the organisation’s extensive physical infrastructure: 70 STPI centres across the country, predominantly in Tier-2 and Tier-3 cities, supplemented by 24 domain-specific entrepreneurship centres that provide incubation, seed funding, market access and global reach [175-180]. He highlighted additional services such as BAPT, network-security, data-centre and cloud-PPP initiatives [181-183]. Kumar argued that the shift from the traditional MSME model to a startup-centric approach has transformed the ecosystem, but that scaling still hinges on delivering “safe and trusted” AI solutions. He distinguished ethical AI (concerned with environmental impact and job creation) from responsible AI (focused on fairness, bias-free outcomes and accountability), illustrating the need for accountability in scenarios such as driverless-car accidents [188-199][194-199][200-206].


MOU exchange ceremony – The first memorandum of understanding was exchanged between Sri Ashok Gupta, Director, STPI Gurugram, and Sri Nikhil Panchabai, Director, NPC. The second MOU was exchanged between Shri Sanjay Gupta, Senior Director, STPI, and Ms Geetika Dayal (DG, Thai Delhi NCR) [123-124].


The session then moved to the startup felicitation segment, where the DGs of STPI and NPC presented certificates and trophies to companies recognised for revenue growth, funding achievements, employment generation, women participation and AI-driven impact [214-227][228-260][261-273]. Notable awardees included Phoenix Marine Exports (highest revenue up to ₹25 cr), Vimeo Consulting (highest funding up to ₹25 cr), Swada Agri (employment generation), Strangify Technologies (women employment) and Suhora Technologies (revenue up to ₹50 cr). The ceremony also recognised Connector Foods Private Limited (most innovative startup – second position) and Fuse Ledge Innovations Private Limited (most promising innovation – second position) [261-273].


Founders then shared their journeys. Devika Chandrasekaran, co-founder of Fuselage Innovations (the name appearing on the award certificate), clarified that the correct company name is “Fuselage Innovations” despite an earlier reference to “Useless Innovations” [279-287]. She recounted how participation in STPI’s Scout 2021 programme validated their drone-technology prototype, leading to deployment for over 10,000 farmers and recognition with the National Startup Award [279-287]. Noor Fatma and Meenal Gupta, co-founders of EZO5 Solutions, described their AI-powered oncology platform, Imagix AI, which processed around one million scans, flagged thousands of TB and lung-cancer cases and reduced radiotherapy planning from a month to a week; they credited STPI’s early support for securing funding and enabling rapid scaling, and noted recent interest from the Prime Minister and Bill Gates [329-338][332-338]. Arita Dalal, founder of SecureTech, highlighted STPI’s role in facilitating industry connections, investor outreach and cybersecurity collaborations [304-312]. Kirty Datar added that STPI’s recognition enhanced his startup’s credibility with customers and investors [323-325]. Milind Datar was called on stage but did not deliver a statement [322-324].


The formal vote of thanks was delivered by Praveen Kumar, Joint Director, STPI, who thanked all dignitaries, speakers and founders, reaffirmed the collective commitment to a coordinated AI ecosystem and invited the honoured startups to a group photograph [353-369]. Following his remarks, Shelly Sharma asked Kavita ma’am and Kishori ma’am to join the group photograph [366-369].


Key take-aways


1. The STPI portal’s marketplace, hiring hub and policy repository are vital enablers for validation, funding and regulatory support [11-19][283-287].


2. GCCs, through co-creation sandboxes, supply the data, infrastructure and enterprise validation needed to move AI from pilot to production [60-63][94-100].


3. Trust, safety, and ethical and responsible AI, embodied in privacy, security, fairness and accountability, are prerequisites for large-scale adoption [140-148][194-199].


4. A five-pillar collaborative framework (knowledge, resources, market validation, funding, ethical AI) must be operationalised across STPI, TI, NPC, GCCs and investors [102-108].


5. Recognition programmes and founder testimonies demonstrate tangible impact across agriculture, health, cybersecurity and gender-inclusive employment [279-287][332-338][304-312].


Resolutions and action items announced at the summit were: signing of MOUs between STPI-NPC and STPI-TI to formalise partnership for scaling AI innovation [123-124]; commitment to expand joint accelerators, scale the Samarth programme, launch more corporate-challenge initiatives and produce AI-benchmarking reports [104-107]; agreement to develop co-creation platforms and enterprise sandboxes within GCCs, leveraging the STPI portal’s capabilities [60-63][11-19]; and a pledge to strengthen the five structural pillars through coordinated policy and programme implementation [102-108].


Unresolved issues highlighted include: ensuring consistent market access for startups beyond GCC sandboxes; defining mechanisms for providing large-scale, high-quality data sets and compute resources; operationalising the co-creation model across diverse sectors; clarifying funding pipelines for later-stage scaling; and finalising metrics for AI-benchmarking and productivity assessment [61-63][94-100][140-148][149-162].


Thought-provoking comments that shaped the dialogue were: Bala’s assertion that “the scale of AI is not determined by the model you build, but by how it is integrated into the global organisation” [50-57]; Sekhar’s three-part trust-testbed-traction framework [140-148]; Dayal’s articulation of collaboration over competition and the five structural pillars [102-108]; Kumar’s clarification of the distinction between ethical (environment, jobs) and responsible (fairness, accountability) AI [194-199]; Dubey’s description of the STPI portal as a one-of-its-kind marketplace and sandbox [11-19]; and the founders’ evidence that STPI support can translate into global impact, as seen in EZO5’s engagement with the Prime Minister and Bill Gates [329-338].


Follow-up questions raised for future work include:


– Who will supply the real data sets, compute infrastructure and enterprise validation required for AI startups? [36-57] (Bala)


– How can co-creation platforms and enterprise sandboxes be built to link startups with GCCs? [60-63] (Bala)


– What should a joint IP framework between startups and GCCs look like? [75-78] (Bala)


– How can joint accelerators, the Samarth programme, corporate-challenge initiatives and AI-benchmarking reports be expanded and coordinated? [94-100] (Dayal)


– What mechanisms are needed to move startups reliably from ideas to measurable societal impact? [139-148] (Sekhar)


– What specific safeguards are required to build trust in AI products (privacy, cybersecurity by design, transparency, accountability, fairness, operational reliability)? [140-145] (Sekhar)


– Which testbeds (real-world sandboxes, labs, reference architectures) are essential to bridge the promise-proof gap? [145-147] (Sekhar)


– How can traction be achieved to turn AI pilots into scaled implementations? [147-149] (Sekhar)


– How should “responsible” and “ethical” AI be differentiated and operationalised, especially regarding accountability? [194-199] (Kumar)


– What metrics and frameworks should assess productivity, quality, capability and industry alignment in the AI era? [149-162] (Sekhar)


– How should GCCs be structured as bridges for AI startups: what governance, collaboration models and scaling mechanisms are optimal? [60-63][94-100] (Bala, Dayal)


– What are the five structural pillars that need coordinated implementation to scale innovation? [102-108] (Dayal)


– What operational-readiness gaps prevent AI solutions from scaling across business units? [50-57] (Bala)


– How should productivity be re-defined in the AI era to include reliability, repeatability, safety and responsible performance? [149-162] (Sekhar).


Session transcript
Shelly Sharma

Good afternoon, everyone. On behalf of Software Technology Parks of India, I extend a very warm welcome to all the dignitaries on Dias and the entire audience to today’s session on Scaling Innovation, Building a Robust AI Startup Ecosystem. I am Shelly Sharma, Deputy Director, STPI, and it is my privilege to host this session.

Vaani Kapoor

Good afternoon, everyone. I am Vani Kapoor, Manager, STPI, your co-host for the session. May I now begin by respectfully welcoming our guests. Our distinguished dignitaries on the Dias. Our chief guest for today, Ms. Neerja Shekhar, IAS, Director General, National Productivity Council; Sri Arvind Kumar sir, Director General, STPI; Sri Rakesh Dubey sir, Director, Startup and Innovation, STPI; Sri Bala MS, CEO, Strat Infinity; and Ms. Geetika Dayal, Director General, Thai Delhi NCR; and all other senior officials, ecosystem partners, startup founders and delegates present here today. We are truly honored by your presence. Today’s session brings together government, industry and the startup ecosystem to deliberate on building a future-ready AI innovation landscape while also celebrating startups that have demonstrated measurable impact across revenue, employment and business.

Government, innovation and inclusion. Without further ado, may I now invite Sri Rakesh Dubia sir, Director, Startup and Innovation, to kindly deliver the opening address. Sir, please.

Sh. Rakesh Dubey

incubators, accelerators, even state governments, academia, everyone can come to this platform and find the resources that they need here. This platform also serves as a repository of various government policies that come from time to time. It also serves as a platform where contests of not just STPI, but any incubator anywhere in India or even the world can host their contest, get their applications invited, get the results published after screening and evaluation, and further handhold that startup’s entire life cycle online. This portal is, I think, one of its kind, not just in India, but across the world. It is a very valuable thing, and we are adding more and more features to it as time goes by.

For example, we have added features like a product marketplace as well as a hiring hub. Using the hiring hub, a startup looking for niche talent can post its requirement and individuals can apply against it; likewise, an individual looking for a niche job can post their resume, which a startup can pick up. In the product marketplace, any startup can post its product for anyone to see, and if a viewer finds it interesting, the two can interact via the platform. That being said, STPI is always looking to do more and more to support innovation and startups across India as well as the world.

And we will be happy to hear any thoughts from you. There are many experts lined up, and I am sure you will gain many more learnings from them as well. With that, I thank you everyone and hope to see you again. Thank you very much.

Vaani Kapoor

Thank you so much, sir, for setting the context so beautifully and highlighting STPI’s growing national impact. May I now request the technical team to play the short audio-visual presentation titled “STPI Startup Ecosystem: Driving Impact”. Thank you, team. That gives us a powerful snapshot of how innovation is translating into real outcomes across the country. Now, to share insights from the industry and global capability center perspective, may I invite Sri Bala MS, CEO, Strat Infinity. Please come.

Sh. Bala MS

Very good afternoon. Namaste. Respected DG STPI, DG NPC, my good friend Rakesh, and everyone: a very good afternoon, and thank you for the opportunity. Scaling innovation and building a robust AI ecosystem, seen from the GCC perspective, is going to be phenomenal. We are not just living through the AI wave; we are living in the AI restructuring of the global economy. AI’s contribution is projected at about $15.7 trillion globally by 2030, of which close to $5 trillion will come from productivity. That is an enormous opportunity. For India, by 2030 there are going to be 3,500-plus GCCs, up from about 1,900 today, contributing close to $150 billion of software exports, with 3.5 million employees dedicatedly working for global capability centers.

Those are the very high-level statistics for 2030. These are not just employment statistics, my dear friends; this is enterprise-grade innovation infrastructure at a national scale. That said, if you look at AI leadership globally, most of the conversation centers on three things. First, the models we are building; second, the compute power; third, the funding. Nothing wrong with that: all three are important. But in my experience working with global organizations, scale is not determined by the AI you build. Scale is determined by how your AI gets integrated into the global organization. That is where the fundamental gap is today.

And if you look at the real competitive advantage: experimentation is abundant today, but institutionalization is limited. That is where the real challenge and gap come from, and that is where the GCC steps in. This is an inflection point, in my view. Look at the transformation: there are 1,900-plus GCCs in the country today. In earlier days a GCC was looked at as a cost center or a labor arbitrage center, but today they are engineering and R&D centers; around 40% of India’s GCCs are focused on R&D today. Emerging technologies like AI, cybersecurity and product development are now being built out of these global organizations. India is considered a digital talent center for global organizations, which is not tactical but truly strategic, my dear friends.

If you look at the startup ecosystem, that is where the main support comes from. Venture capital, investors, STPI and a lot of government organizations have played a phenomenal role in making sure grants are given, which is very important and accelerates AI innovation. A lot of funding has come in, but capital alone cannot solve the friction; that is very important. In fact, recent global AI study reports show that a majority of enterprises are piloting AI, but only a minority have scaled it across business units. And the gap is not a technology gap; it is a gap of operational readiness, of organizational readiness. That is the biggest gap, not technological capability. Having said that, when any AI tool from a startup enters an organization, it has to pass through risk, compliance, security, fitment and global workflow design, and a lot of challenges come up. Again, that is where the GCC comes in, because working with a global enterprise organization is totally different. We say India has a lot of startups and we are doing phenomenally well, but at the same time market access to global organizations remains a big question mark, and that is where the GCC comes in.

Why do GCCs matter for startups? They matter a lot. Here is the thing: what does an AI startup need today? You need real data sets, real infrastructure capability, and enterprise validation. Who is going to give you that? Who is going to trust your model and put it in their system? That is the biggest question mark. Again, that is where the GCC comes in as the bridge between the startup ecosystem and the enterprise organization, because you work within the ecosystem and infrastructure capabilities of the GCC, which gives enterprises the confidence to try you, test you and work with you.

And this is something very important: the co-creation model. There were days when startups were looked at as vendors or suppliers, but today the co-creation model is very powerful. In fact, from my personal experience at Strat Infinity: as you see, there are 24-plus CoEs under STPI, and we have been working with FinBlue and with the ICoE, where the global capability centers work with us through STPI.

They are able to identify phenomenal startups and scale them for the global organization. So the co-creation model is, in our context, the model through which AI startups can get into global capability centers: the GCCs provide a controlled sandbox, domain expertise, and production-grade environments and pathways, which help reduce the pilot-to-production cycle. Pilot to production remains the basic bottleneck of any AI startup globally, and that can be solved by the co-creation model, my dear friends. Coming back to why India is unique today: the economic multiplier effect. Anything that comes into a GCC makes the ecosystem grow, the value chain grow, and skill development happen.

And it generates a lot of revenue; of course, software exports increase. So there are a lot of possibilities: one job indirectly helps many people grow. That is where India is unique, and the GCC ecosystem is going to make a phenomenal impact. Then there is institutionalizing the model. On the global comparison, India again plays a phenomenal role in terms of GCC density, talent and local ecosystem connect, with initiatives like STPI working on the GCC policy. A lot of such ecosystem connects truly help global capability centers adopt the AI ecosystem. That is why strategic partnership is very important.

What must happen next is something very important: co-creation platforms have to be formed. Organizations like STPI are exactly the right organizations to build these co-creation platforms and enterprise sandboxes, which already exist in the CoEs but have to be nurtured from the GCC perspective. One of the large US multinational banks has worked with the FinBlue CoE in Chennai, which has seen phenomenal success: the FinTech CoE under STPI has plugged into the global ecosystem, and the large multinational bank has benefited from it. A joint IP framework is another important thing; it is still under discussion, but it is definitely moving into a better space. With that said, I just want to submit one broader reflection. Today, startups create innovation velocity, whereas enterprises create scale, and between the two, global capability centers are the pathway connecting innovation velocity and enterprise scale. They help you navigate the challenges of the global enterprise ecosystem. Working within a GCC lets you take your product or service, use it locally in the environment of the global ecosystem, and gain acceptance; and even if it doesn’t work, it fails fast, and nothing harms the GCC or the global organization.

That’s where the opportunity through the GCCs are truly evolving to work for your AI products to the larger ecosystem. Thank you so much. Jai Hind, Jai Bharat.

Vaani Kapoor

Thank you, sir, for your valuable industry perspective and for highlighting the role of GCCs in nurturing startups. Next, may I invite Ms. Geetika Dayal, DG, TiE Delhi NCR, to address the audience on the entrepreneurial and startup ecosystem.

Ms. Geetika Dayal

My warm greetings to the dignitaries on the dais. Thank you so much, Arvindji, for this opportunity, and to you and your entire team for the countless hours of effort that have gone into the massive exercise of putting this program together, from which all of us are benefiting. A very good afternoon, friends. We are gathered at probably one of the most important AI policy and innovation platforms that our country has seen, and this summit truly represents the national ambition at its strongest and very best. But it is the dialogue in sessions like this one that will help realize and execute this ambition. All the discussions that have gone on for the last few days around AI policy, national strategy, global competitiveness, etc.

They must translate into real support for startup founders who are building AI products. And that translation only happens through the kind of ecosystems that we build together. India’s landscape is expanding very rapidly, making it among the top three global AI ecosystems. But it is not numbers that build scale; it is programs and innovation ecosystems like these, coming together, that make it happen. At TiE Delhi NCR, we have been at the forefront of mentoring and accelerating, working closely with startup founders, and over the last few years we have worked with many of them to bridge the gap from innovation to market readiness. What we learned is that startups struggle most in areas like business capability, market access, and access to patient capital.

And therefore, our approach focuses primarily on these levers: deep mentorship from entrepreneurs who have already scaled globally, market access with enterprises and GCCs, as you just heard, investor access through various funding stages, and structured capability building for our founders. STPI, of course, over more than two decades (25 years and more) has done such remarkable work, creating infrastructure and incubation support, strong policy alignment, a massive pan-India presence, and regulatory and institutional strength. Together, we create the complete support stack for founders. This has been demonstrated by the success of some of our key initiatives, such as Deep Ahead and Samarth, which really helped us create strong policy alignment.

And it really shows, it proves, that collaboration multiplies outcomes when we work together. I also think India has certain strengths that are raw ingredients for what we are all working towards: world-class technical talent from our premier institutions; cost-effective innovation, with operational costs probably 30-40% lower than in Silicon Valley; strong public digital infrastructure; and policy momentum through the India AI Mission and what we are seeing now. We must use all of that and bridge the gaps around access to data, compute, infrastructure, etc. But the collaboration we are really excited about is the one with STPI, which demonstrates how complementary skills, when they come together, can create real impact.

I think there are five structural pillars needed for scaling innovation: knowledge and capability building, resource access, market validation, funding access, and of course ethical and responsible AI, which our Prime Minister has been talking about. These kinds of ecosystem collaborations and organizations act as trust bridges, reducing the friction between government, startups, corporates and investors. As we move ahead, we are very keen to see how we can move out of the format of isolated programs and come together to create a coordinated strategy. There are certain immediate priorities we can definitely work on: expanding joint accelerators, scaling up Samarth, which has been going on so beautifully, many more corporate challenge programs, export readiness, and perhaps AI benchmarking reports, etc.

What we would love to see is AI startup ecosystems thriving not by competition but by collaboration. And as you have seen here, when STPI, TiE, GCCs, government, corporates, etc., come together with a shared vision to build a robust ecosystem, scale becomes inevitable. And that is what we are all here for: scaling innovation. Today, when we sign an MOU with STPI, it amplifies the impact and relevance, and it is a great pleasure and a matter of privilege and pride for TiE to work with STPI as a key enabler and partner. So our very best wishes to all of you. As Rakeshji mentioned, these are times of great change, probably something our generation has been very fortunate to witness, from where we were to what we are heading towards.

And for all of us, to play a small role in what the years ahead will bring is really a humbling experience. It’s a great opportunity to be here. And my congratulations to all of you and my thank you for having all of us together here. Thank you so much.

Vaani Kapoor

Thank you, ma’am, for sharing TiE’s remarkable journey and continued commitment to entrepreneurs. May I now invite Ms. Neerja Shekhar, Director General, National Productivity Council, for her special address. Ma’am, please.

Ms. Neerja Sekhar

Good afternoon to you all. It is a delight to be here at the AI Impact Summit, and specifically in this session hosted by the Software Technology Parks of India, where they have invited the National Productivity Council, whom I represent, TiE, and the GCC partners together. We are all talking about scaling AI innovation through the startup ecosystem. My warm greetings to everyone: to our ecosystem partners, industry leaders, GCCs, mentors, investors and the startup founders who are here as we work together on our next growth journey of innovation and AI impact. This event is anchored in the seven chakras of human capital: inclusion for social empowerment; safe and trusted AI; science, resilience, innovation and efficiency; democratizing AI resources; and AI for economic development and social growth.

It is also anchored in the Three Sutras of People, Planet and Progress. This summit is focusing very effectively on a development-oriented framework for artificial intelligence. In today’s special session, where we are discussing the national imperative of scaling AI innovation, we will exchange a memorandum of understanding: NPC and STPI have planned and pledged to work together to scale AI innovation, support the AI startup ecosystem in the country, and bring together innovation and collaboration. Because we know this is not the era of competition; it is an era of collaboration, where we have to put our energies together and focus on areas that impact the population for good. Diffusion at scale across sectors, value chains, MSMEs, clusters, and public services.

This is what we are looking at. NPC, the National Productivity Council, works on productivity across the entire economy. We work on total factor productivity: labor, land, infrastructure, capital. We support these areas and make every player a part of the larger growth of the Indian economy. Manufacturing is a major focus area; services too, of course, but especially manufacturing, because we know that is where the employment is, that is where the maximum exports are going to grow in the future, and of course it is also going to maximize the country’s GDP. In the Expo area, we have seen many small AI applications, many of them from startups, working in agriculture, health, with some very interesting innovations, and education, which is something very dear to all of us.

In areas like textiles, pharmaceuticals, etc. The question now is: how do we reliably move from ideas to impact and be meaningful to society, under the overall theme of welfare of all and happiness of all? Let me offer a crisp three-part framework for startups and ecosystem builders: trust, testbeds, and traction. Trust is the entry ticket. If customers cannot trust our AI, they will not adopt it, at least not at scale. Trust means privacy and cybersecurity by design, transparency, and accountability. It also means operational reliability and responsible governance. Testbeds bridge promise and proof: startups need real-world sandboxes, labs, testing environments, applications, reference architectures, etc. And traction is what turns pilots into scale.

Not just a demo, but actual implementation. We feel that STPI will bring the ecosystem together and play a pivotal national role by giving industry a connected innovation landscape through its setup of innovation hubs, platforms, structured programs, centers of excellence across the country, and digital enablement frameworks. Over a period of time, they have very successfully connected mentors, labs and startups, and resolved the challenges facing startups, leading them to larger markets. Their ecosystem has shifted from incubation to scalable infrastructure and has seen very good days, and we are going to see much more success in the future. NPC’s role is to strengthen the adoption spine of this ecosystem: productivity, quality, capability and industry alignment.

In the AI era, productivity is not just efficiency in land, labour and capital, but also reliability, repeatability, safety, security and responsible performance, everything at scale. Startups scale faster when they can demonstrate measurable outcomes: better output, better quality, fewer defects, faster service delivery, better end-to-end customer experience and success. NPC supports this outcomes-driven pathway through benchmarking. We are very good at creating models, frameworks, assessments and evaluations, and at providing platforms for industries; our MSME and sector-wise platforms are very well developed. Even if you are not associated with NPC, you would know many people who worked with NPC and moved out into the economy, into the consultancy and evaluation sectors, and worked through benchmarking, capacity building and spreading the productivity culture.

That is why the partnership we are looking at between STPI and NPC is very timely and very strategic, and we feel it will accelerate responsible digital transformation and AI adoption, especially for MSMEs, clusters, industry ecosystems and AI startups. We are really looking forward to a partnership where we can bring more productivity into the ecosystem. Today’s summit provides a context that asks us and exhorts us to reorient our energies towards a more productive AI system, one that is scalable, that supports AI startups, and that is also very productive…

Vaani Kapoor

Thank you, ma ‘am, for your inspiring words and for reinforcing the importance of productivity and capability development. Now, I would sincerely request our DG sir, Sri Arvind Kumarji, to kindly enlighten the audience with the keynote address.

Arvind Kumar

Hello, namaskar. Good afternoon. When a session happens alongside the Expo like this, it is very difficult to get attendance. Since morning I have been fighting for this only: whoever is speaking, kindly ensure attendance is there. But here, there is no problem with attendance. So, organizers, thank you very much; I think you did a wonderful job. The Expo is going on, and still we have full attendance; people are even standing there. Neerja ma’am, other dignitaries on the dais: I think there is a lot of other business pending, some felicitations and so on. So, for those who are not familiar with STPI, two minutes about STPI, two minutes about the subject, and then I will end.

STPI has 70 centers across the country where we provide incubation to small IT companies. These centers are generally in tier 2 and tier 3 cities; 62 of them are in tier 2 and tier 3 cities. Apart from the 70 centers, we have 24 centers of entrepreneurship which are domain-specific, where we provide 360-degree support to startups. We nurture them, provide some seed fund, global reach, market access, and of course incubation. So this is what STPI is doing when it comes to startups. We are also doing other things like VAPT, network security, data centers, and cloud services in PPP partnership, a lot of things within the STPI domain.

Now, as far as the topic of scaling innovation is concerned, there has been a big change. There was a time when there was no concept of a startup in the country. We used to call them MSMEs, and those MSMEs were generally meant to support the big companies, especially PSUs: they would create something and then merge with a PSU, or provide some product as an input to PSUs or other big organizations. Now this shift to startups, with government support for startups, has changed the whole landscape. Now startups can themselves scale their product. This is the change you can see in the last 5 to 6 years. And if you really want to scale up your innovation, what is required of the startup is that the product or innovation should be safe and trusted.

Unless it is trusted, nobody is going to use it and it is not going to scale. Now, how do you make it trusted and safe, especially in the AI era? You have to make your product responsible and also ethical. You have to make sure the product is safe and trusted; only then will people have trust in it, and only then can it be scaled. Now, people are generally confused between the two words responsible and ethical. These two words are interconnected but different. Both are part of the five big parameters we talk about: it should be accountable, it should be secure, there has to be privacy, and there has to be fairness.

These are the words we use. But the difference, just by example: when you say something is ethical, it means that, as the CEO and owner of the startup, you ask whether you are taking care of the environment when producing your product; that is part of being ethical. Or whether, in building the product, you are taking care of job creation or not; that is the larger part, your ethical attitude towards what you are going to do with the product. When it comes to responsible, responsible means fairness, which means the product is not biased towards anything: not biased towards a country, towards male or female gender, caste or religion. Then it is a fair product. Responsibility also means somebody should be accountable; accountability is a very important part of responsibility. Suppose a driverless car hits somebody on the road. Who is accountable for that accident? The person who purchased the car, the company that created the car, somebody who developed the algorithm, or even the large language model that has been used by the wrappers?

So this is accountability. Unless you are able to make something that is responsible, ethical, and therefore safe and trusted, it cannot be scaled. All startups must ensure this in whatever they are going to create. Today everybody is using UPI because it has been able to create trust among us. A lot of things have come to this country, but biometric attendance and biometric identity have become scalable today because they were able to establish that the product is trusted and safe. So whatever product you are going to create, whatever it relates to, if you really want to innovate, which is a very good thing, this country gives you the opportunity to scale up anything; we have a population of 1.4 billion, and here scalability is very important.

And therefore, if you really want to scale your product, if you really want to scale your innovation, it must be safe and trusted. Thank you. Thank you very much.

Vaani Kapoor

Thank you so much, sir, for always encouraging, enlightening and guiding us throughout the journey. Now we begin with the MOU exchange ceremony. The first MOU is between STPI and the National Productivity Council. May I request Sri Ashok Gupta sir, Director, STPI Gurugram, and Sri Nikhil Panchabai, Director, NPC, to please come on the dais and exchange the MOU. I would also request DG sir and DG ma’am to grace the dais. Sanjay sir, please come and grace the dais. Can we have a round of applause for Sri Sanjay Gupta sir, our Senior Director, STPI. Thank you so much, sir; please be on the stage. The next MOU exchange is between STPI and TiE Delhi NCR. For that, may I request Sri Sanjay Gupta sir, Senior Director, STPI, and Ms.

Geetika Dayal to please come forward and exchange the MOUs. Thank you.

Shelly Sharma

So we now come to one of the most awaited segments: the startup felicitation ceremony. Today, we recognize startups supported under the STPI ecosystem for excellence across revenue, funding, employment, women participation, innovation, and AI-led impact. I would like to request our honored dignitaries, DG sir, STPI, and Neerja Shekhar ma’am, Director General, National Productivity Council, to kindly come forward to present the certificates and trophies to our startups, and I request the startups to kindly come on stage as their names are announced. First, may I invite Phoenix Marine Exports and Solutions Private Limited to come on the stage. They are being recognized under the categories highest revenue (up to 25 CR revenue) and highest impact based on revenue, tier 2 and tier 3 region.

May I request DG STPI and DG NPC to please present the certificate and trophy. Once again, a big round of applause for their outstanding contribution. Now, may I invite Vimeo Consulting Private Limited to please come on the stage. They are being recognized for highest funding raised, up to 25 CR revenue category. Heartiest congratulations on your fundraising success; a big round of applause. A louder round of applause, please. Now, may I invite Swada Agri Private Limited to the stage. They are being felicitated for highest employment generation, up to 25 CR revenue category. Congratulations for generating valuable employment; a big round of applause. Now may I invite Strangify Technologies Pvt. Ltd. to please come on the stage.

They are being recognized for highest number of women employed, up to 25 CR revenue category. Well done for empowering women in the workforce; a big round of applause, a louder round of applause for women participation. Now, our next startup is Suhora Technologies Pvt. Ltd. May I invite Suhora Technologies Pvt. Ltd. to the stage. They are being recognized for highest revenue, up to 50 CR revenue category. Congratulations on your outstanding business performance; a big round of applause for Suhora Technologies Pvt. Ltd. Now I invite Puvation Technology Solutions Private Limited. They are being felicitated for highest funding raised, up to 50 CR revenue category. Applause for your impressive funding milestone; a big round of applause. Now I invite our next startup, Sequera Tech IT Solutions Private Limited, to come on the stage.

They are being recognized under multiple categories: highest employment (up to 50 CR revenue category), highest women employment (up to 50 CR revenue category), and highest AI-based impact based on revenue; a special recognition for excellence across multiple dimensions. A big round of applause. Now I invite our next startup, Atmik Bharat Industries Pvt. Ltd., to the stage. They are being recognized for highest impact based on beneficiaries. Congratulations for touching countless lives; a big round of applause. May I invite Mobile Pay E-Commerce Private Limited. They are being felicitated for highest impact based on beneficiaries, second position. Well done for your meaningful outreach; a big round of applause. Now I invite another startup, Devnagri AI Private Limited, to please come on the stage.

They are being recognized for highest AI-based impact based on revenue, second position. Congratulations on leveraging AI for impact; a big round of applause. Thank you so much, DG sir, for joining us. Now I invite our next startup, Dactrosel Healthcare and Research Private Limited. They are being recognized as most innovative startup. Applause for breakthrough healthcare innovation; a big round of applause. Now I invite our next startup, EZO5 Solutions Private Limited, to please come on the stage. They are being felicitated as most promising innovation; a big round of applause. Thank you. Now I invite our next startup, Connector Foods Private Limited, to please come on the stage.

They are being recognized as most innovative startup, second position. Well done for creative excellence; a big round of applause. Finally, for our last startup, may I invite Fuselage Innovations Private Limited. They are being recognized as most promising innovation, second position. Congratulations on your forward-looking journey; a big round of applause. A big round of applause for all our felicitated startups: your innovation, resilience and contribution to India’s digital economy truly inspire us all. May I request our dignitaries to kindly resume their seats on the dais. We will now invite selected startups to briefly share their journeys. So may I invite Fuselage Innovations Private Limited to kindly come on the stage.

Devika Chandrasekaran

Hi everyone, my name is Devika Chandrasekaran. I'm the co-founder of Fuselage Innovations. It's truly an honor to stand on stage today being felicitated by STPI. This moment feels very special because we started our journey with STPI in our early days. Back in 2021, we participated in a program called Scout 2021. At that time, we were building our prototype. The support we received through the program was not just funding, it was validation. That recognition gave us the confidence to push forward. Today, Fuselage Innovations manufactures drones for agriculture, defence and disaster management applications. We are working with more than 10,000 farmers across India, helping them improve productivity and efficiency through drone technology. We are also contributing to defence, disaster management and maritime operations, serving critical national needs. Last month, we were deeply honoured to receive the National Startup Award, and we got the opportunity to present our journey in front of our Honourable Prime Minister, Narendra Modi sir. I would like to sincerely thank STPI and everyone involved in the journey for believing in a startup like us. The ecosystem, the encouragement and the early trust made a huge difference in our journey. Thank you so much.

Shelly Sharma

Thank you for sharing your inspiring story. Now may I invite Dactrosel Healthcare and Research Private Limited to kindly come on the stage and share their startup journey with us.

Dr. Soumya

Good evening, everyone. My name is Dr. Soumya, and I'm really glad to be a part of this prolific platform today. Very quickly, I'd like to walk you through what we build. At TectoCell, we build AI-powered diagnostic solutions at the intersection of radiology, artificial intelligence and DNA sequencing, addressing the huge havoc of drug resistance, with robust clinical trials spanning across India facilitated by the Software Technology Parks of India. We have been able to exceptionally benchmark our clinical accuracy, which amplifies the reliability of our products. The continued commitment of Software Technology Parks of India to help us navigate our regulatory compliances, secure global collaborations and acquire machine-readable data is extremely noteworthy.

This unique foundation puts us in a very strong position to now scale globally, building from India for the world. I'm very grateful for this. Thank you.

Shelly Sharma

Thank you. Lots of applause. Thank you so much for sharing your story and journey with us. Now I invite SecureTech IT Solutions Private Limited to come on the stage and share their startup journey with us.

Arita Dalan

Hi everyone, good evening. My name is Arita Dalal. I head the North region for SecureTech, and I have been with this organization for the last 11 years. During this journey, we have had a lot of interaction with STPI. It is one of the nurturing bodies that has driven a great deal of collaboration in the industry, given us opportunities to talk to investors, and established various industry connects. We are very sincerely thankful to the entire organization and the team of STPI. Just to give you a brief:

SecureTech is a cybersecurity organization. Our mantra is to simplify security. We secure large enterprise organizations and mid-size organizations across industries, whether pharma or banking and finance, as well as the small organizations currently establishing their digital landscape in the country while being regulated by large regulators such as the RBI and SEBI. In a nutshell, we provide them the frameworks, security parameters and solutions so that they can be empowered and enabled to secure their infrastructure, platforms and the data they process for the country and for the users they serve. So whether it is a startup or a large infrastructure organization, we are securing them.

We provide them end-to-end support. Thank you. Thanks, everyone.

Shelly Sharma

Thank you. Now I invite Caneboard Solutions Private Limited to come on the stage and share your journey with us.

Kirty Datar

…helping us sharpen our positioning as a deep-tech company. Most importantly, STPI's recognition has strengthened our credibility with customers, investors and government stakeholders. We are very happy and honored to be here today, and we thank STPI and everybody who is present here so much.

Shelly Sharma

Thank you so very much. May I now invite EZO5 to kindly come on the stage and share your startup journey with us.

Noor Fatma

Hi, everyone. Good afternoon. I’m Noor Fatma, co -founder of EZO5 Solutions.

Meenal Gupta

Hi, I’m Meenal Gupta, founder of EZO5 Solutions.

Noor Fatma

At EZO5 we have built an AI-powered platform, Imagix AI, that does precision treatment planning for oncology cases. In our startup journey there was a time, about one and a half years back, when we had just two months of cash flow left. We were thinking hard about what to do, and that is when STPI came to our rescue: it helped us raise money, and there has been no looking back since. In the three years since we were incorporated, we have processed around one million scans. In the last three months alone, we have scanned around 50,000 chest X-rays, flagging around 4,000 cases of TB and cutting transmission short, and flagging six cases of lung cancer where intervention was still possible. We have prepared 1,000 radiotherapy plans in the last three months, and we have cut the time from treatment planning to treatment start from around one month to a week.

That is the impact we are making with the support of the whole ecosystem and STPI.

Meenal Gupta

And I proudly say that even our Prime Minister, Mr. Narendra Modi, was interested in the impact we have brought, and he invited us to discuss our solution at IMC. And just the day before yesterday, we went global: Bill Gates showed interest in our solution, invited us to Microsoft to present it, and discussed how he could help us. Thank you.

Noor Fatma

So now we are going from local to global, serving the whole world. Thank you.

Shelly Sharma

Yeah, thank you. Thank you to all the founders for sharing such inspiring stories. We now proceed with the presentation of mementos to our esteemed dignitaries. To begin, may I request Shri Ashok Gupta sir, Director, STPI Gurugram, to kindly come on the stage. Sir will present the memento to Neerja Sekhar ma'am, Director General, NPC. A big round of applause. Thank you so much, sir, and thank you so much, ma'am. Next, may I request Shri Atul Kumar Singh sir, Additional Director, STPI, to kindly come on the stage and present the memento to Shri Bala M.S. A big round of applause. May I now request Shri Praveen Kumar sir, Joint Director, STPI, to kindly come on the stage and present the memento to Geetika Dayal ma'am.

A big round of applause. May I now request Shri Praveen Kumar sir, Joint Director, STPI, to kindly present the memento to Shri Rakesh Dubey sir, Director, Startups and Innovation, STPI. Thank you, sir. A big round of applause. Now, I would like to request Shri Praveen Kumar sir, Joint Director, STPI, to present

Praveen Kumar

the formal vote of thanks. Respected dignitaries, speakers, startup founders, innovators, ladies and gentlemen. On behalf of Software Technology Parks of India, it is our true privilege to thank each one of you for making this session focused, meaningful and definitely forward-looking. Neerja Sekhar ma'am, thank you for your thoughtful reflections on productivity and growth. Your perspective adds depth and direction to our collective mission, ma'am. We are truly encouraged by your presence; we are grateful for it. Thank you so much. Shri Rakesh Dubey sir, thank you for your profound support, which has been both guiding and grounding, sir. Your constant encouragement and hands-on involvement in shaping the entire session has helped us immensely, sir.

My sincere appreciation to Geetika Dayal ma'am from TiE Delhi-NCR for your continued partnership and for reinforcing the importance of collaborative startup ecosystem building, ma'am. Thank you. Thank you, Mr. Bala, for bringing a sharp industry lens and a pragmatic approach that startups can directly relate to as they scale; your thoughts on GCCs will definitely help them all. To all the startups felicitated today: congratulations. Your achievements demonstrate that innovation from India, including Tier 1 and Tier 2 cities, is both scalable and globally relevant. To all the founders who shared their journeys, thank you for your candor and inspiration. Your stories remind us why platforms like STPI matter. Before I conclude, I sincerely appreciate

my organizing team and every colleague who worked diligently behind the scenes to ensure the session ran seamlessly. With that, I once again thank all of you, and I request the dignitaries and startups to come forward for a group photograph. Thank you. Thank you again.

Shelly Sharma

I request all the felicitated startups to kindly come on the stage for the group photograph with all the dignitaries on the dais. Thank you. I also request the other directors to please come on stage and join us for the group photographs. Yes, Kavita ma'am, please come on the stage. I also request Kishori ma'am to please join us for the group photograph. Thank you. Once again, thank you.

Related Resources — Knowledge base sources related to the discussion topics (22)
Factual Notes — Claims verified against the Diplo knowledge base (1)
Confirmed (high confidence)

“Shelly Sharma, Deputy Director, Software Technology Parks of India (STPI), gave a formal welcome to the session on “Scaling Innovation, Building a Robust AI Startup Ecosystem”.”

The transcript excerpts S5 and S4 record Shelly Sharma, Deputy Director of STPI, delivering the welcome for the “Scaling Innovation, Building a Robust AI Startup Ecosystem” session, confirming the report’s description.

External Sources (89)
S1
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Ashok Gupta: Title – Director STPI Gurugram; Role – Dignitary presenting mementos Hi, I’m Meenal Gupta, founder o…
S2
Founders Adda Raw Conversations with India’s Top AI Pioneers — 1230 words | 154 words per minute | Duration: 478 secondss Hello everyone, I am Meenal Gupta from EasyOPI Solutions and…
S3
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — Accuracy is around 92%. So it is around 92 % to 99 % depending upon the data. complexity you can see this data we are wo…
S4
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Hi, everyone. Good afternoon. I’m Noor Fatma, co -founder of EZO5 Solutions. at EZO5 we have built an AI powered platfo…
S5
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Hi, everyone. Good afternoon. I’m Noor Fatma, co -founder of EZO5 Solutions.
S6
https://app.faicon.ai/ai-impact-summit-2026/scaling-innovation-building-a-robust-ai-startup-ecosystem — Hi, everyone. Good afternoon. I’m Noor Fatma, co -founder of EZO5 Solutions. We have flagged six cases of lung cancer w…
S7
Scaling Innovation Building a Robust AI Startup Ecosystem — -Devika Chandrasekaran: Role – Co-founder of Fuselage Innovations; Area of expertise – Drone technology for agriculture,…
S8
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Devika Chandrasekaran- Co-founder, Fuselage Innovations (drone technology for agriculture, defense, disaster management…
S9
https://app.faicon.ai/ai-impact-summit-2026/ai-meets-agriculture-building-food-security-and-climate-resilien — Dr. Chaturvedi leads our national effort in agriculture and farmer’s welfare. Mr. Johannes Jett, he is the Regional Vice…
S10
AI Meets Agriculture Building Food Security and Climate Resilien — -Dr. Soumya Swaminathan- Chairperson of Dr. M.S. Swaminathan Research Foundation; global leader in science, champion for…
S11
AI for agriculture Scaling Intelegence for food and climate resiliance — -Dr. Soumya Swaminathan: Chairperson of Dr. M.S. Swaminathan Research Foundation – global leader in science, champion fo…
S12
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Good afternoon, everyone. I am Vani Kapoor, Manager, STPI, your co -host for the session. May I now begin by respectfull…
S13
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — Good afternoon, everyone. I am Vani Kapoor, Manager, STPI, your co -host for the session. May I now begin by respectfull…
S14
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Vaani Kapoor- Manager, STPI (co-host for the session)
S15
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — 1208 words | 29 words per minute | Duration: 2418 secondss Good afternoon, everyone. On behalf of Software Technology P…
S16
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Good afternoon, everyone. On behalf of Software Technology Parks of India, I extend a very warm welcome to all the digni…
S17
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — – Arvind Kumar- Ms. Neerja Sekhar – Sh. Bala MS- Ms. Neerja Sekhar Bala MS identifies institutionalization as the key …
S18
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Atul Kumar Singh: Title – Additional Director, STPI; Role – Dignitary presenting mementos -Shri Praveen Kumar: Ti…
S20
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Yeah, thank you. Thank you to all the founders for sharing such inspiring stories. So, we now proceed with presentations…
S21
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Kirty Datar- Representative, Caneboard Solutions Private Limited -Milind Datar- Representative, Caneboard Solutions Pr…
S22
Scaling Innovation Building a Robust AI Startup Ecosystem — Agreed with:Devika Chandrasekaran, Arita Dalan, Kirty Datar, Noor Fatima — STPI’s critical role in providing comprehensi…
S23
Scaling Innovation Building a Robust AI Startup Ecosystem — – Dr. Saumya Shukla- Kirty Datar
S24
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Kirty Datar- Representative, Caneboard Solutions Private Limited -Milind Datar- Representative, Caneboard Solutions Pr…
S25
Scaling Innovation Building a Robust AI Startup Ecosystem — Agreed with:Devika Chandrasekaran, Arita Dalan, Kirty Datar, Noor Fatima — STPI’s critical role in providing comprehensi…
S26
Scaling Innovation Building a Robust AI Startup Ecosystem — – Devika Chandrasekaran- Milind Datar – Dr. Saumya Shukla- Kirty Datar- Noor Fatima
S27
Scaling Innovation Building a Robust AI Startup Ecosystem — -Geetika Dayal: Role – Representative from an organization (specific title not clearly mentioned); Role – Partnership an…
S28
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Hi everyone. Good evening to everyone. So my name is Arita Dalal. I’m heading this region for North with SecureTech. I h…
S29
https://dig.watch/event/india-ai-impact-summit-2026/building-the-future-stpi-global-partnerships-startup-felicitation-2026 — Thank you, sir, for your valuable industry perspective and for highlighting the role of GCCs in nurturing startups. Next…
S30
Scaling Innovation Building a Robust AI Startup Ecosystem — -Bala MS: Role – Industry representative; Area of expertise – GCC (Global Capability Centers) and industry perspective f…
S31
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — – Sh. Bala MS- Ms. Neerja Sekhar Bala MS identifies institutionalization as the key challenge, emphasizing the role of …
S32
Scaling Innovation Building a Robust AI Startup Ecosystem — -Shri Rakesh Dubey: Title – Director, Startups and Innovation, STPI; Role – Dignitary and supporter of the event
S33
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Sh. Rakesh Dubey- Director, Startup and Innovation, STPI
S35
Scaling Innovation Building a Robust AI Startup Ecosystem — -Arita Dalan: Role – Representative of SecurTech IT Solutions Private Limited; Area of expertise – Cybersecurity solutio…
S36
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — -Arita Dalan- Regional Head North, SecureTech IT Solutions Private Limited (cybersecurity)
S37
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Hi everyone. Good evening to everyone. So my name is Arita Dalal. I’m heading this region for North with SecureTech. I h…
S38
Shaping Investment: Spurring Investment in Cyber Sector Start-Ups — A lot of cybersecurity startups appear to be doing similar things so they need to differentiate themselves AI presents …
S39
Empowering Women Entrepreneurs through Digital Trade and Training ( Global Innovation Forum) — In conclusion, the analysis provides valuable insights into various aspects of entrepreneurship, gender equality, and di…
S40
Boosting women digital entrepreneurship: Bridging the gender financing gap (UNCTAD) — Additionally, the analysis emphasizes the importance of gender balance in decision-making bodies within the private equi…
S41
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — STPI has developed a comprehensive framework for evaluating and recognizing startup success that goes beyond traditional…
S42
Building a Digital Society, from Vision to Implementation — Stacey Hines, joining from Vancouver at 4 AM Kingston time, cited research from Web Summit where AI expert Gary Marcus p…
S43
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — This challenges the prevalent startup narrative that funding is the primary barrier to success. By identifying organizat…
S44
WS #83 the Relevance of Dpgs for Advancing Regional DPI Approaches — There’s growing consensus that funding isn’t the primary constraint for DPI development because people recognize the mas…
S45
Successes & challenges: cyber capacity building coordination | IGF 2023 — However, building trust is challenging due to the presence of different policy fields and institutions. Luxembourg, perc…
S46
Scaling Innovation Building a Robust AI Startup Ecosystem — STPI has successfully created a comprehensive startup ecosystem that supports companies from early prototype stage to gl…
S47
Scaling Innovation Building a Robust AI Startup Ecosystem — He notes that STPI provided direct access to global investors and that its recognition boosted credibility with customer…
S48
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S49
AI as critical infrastructure for continuity in public services — Lidia states that trust is a prerequisite for widespread technology adoption and diffusion. She argues that without esta…
S50
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S51
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — Rakesh Dubey described specific functionalities of the STPI portal, such as a product marketplace where startups can sho…
S52
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — STPI created a comprehensive digital platform that serves multiple functions for the startup ecosystem. The platform act…
S53
Scaling Innovation Building a Robust AI Startup Ecosystem — STPI ecosystem as catalyst for startup growth
S54
How Trust and Safety Drive Innovation and Sustainable Growth — Summary:All speakers agreed that trust is the foundational requirement for AI adoption. Without trust, people simply won…
S55
Scaling Innovation Building a Robust AI Startup Ecosystem — EZO5 Solutionswas represented by co-founders Noor Fatima and Meenal Gupta, who described their Imagix AI platform for pr…
S56
Launch / Award Event #52 Intelligent Society Development & Governance Research — The discussion maintained a consistently optimistic and collaborative tone throughout. Speakers expressed enthusiasm abo…
S57
[Opening] IGF Parliamentary Track: Welcome and Introduction — The tone is consistently formal, welcoming, and optimistic throughout. It maintains a diplomatic and collaborative atmos…
S58
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S59
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S60
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S61
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S62
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S63
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S64
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S65
WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital — The tone of the discussion was largely informative and analytical, with speakers providing overviews of different aspect…
S66
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S67
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S68
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — The tone was professional and forward-looking, with a sense of urgency about making AI work effectively in healthcare. W…
S69
Safe Smart Cities and Climate Frustration — The discussion maintained a collaborative and solution-oriented tone throughout. Speakers were optimistic about the pote…
S70
WSIS Action Line C7 E-learning — The discussion maintained a professional and collaborative tone throughout, with speakers demonstrating cautious optimis…
S71
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S72
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S73
Keynote Adresses at India AI Impact Summit 2026 — And we’re doing it in a partnership with the world’s largest democracy, a nation of 1 .4 billion people that share our v…
S74
I NTRODUCTION — – A Unified Digital Factory Platform – An advanced, centralized, digital platform providing government entities with the…
S75
The Power of the Commons: Digital Public Goods for a More Secure, Inclusive and Resilient World — Krzysztof Szczerski: Thank you very much, Excellencies, Mr. Special Envoy, ladies and gentlemen, I’m so excited to be …
S76
DIGITAL GOVERNMENT PLAN — – data generation, collection, processing, storage, use and re-use; – sharing across the government, and between governm…
S77
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — Chante Maurio:Maybe to supplement it, you’re talking about the technology aspect of the challenges. And when we think ab…
S78
Indeed expands AI tools to reshape hiring — Indeed isexpanding its use of AIto improve hiring efficiency, enhance candidate matching, and support recruiters, while …
S79
Dedicated stakeholder session (in accordance with agreed modalities for the participation of stakeholders of 22 April 2022)/OEWG 2025 — European Union: Thank you, Chair. I’m honored to speak on behalf of the UN’s 27 member states. The candidate countrie…
S80
Agenda item 6: other matters — France: Thank you, Mr. Chairman. I will be delivering a shortened version of my statement. France supports the idea o…
S81
Agenda item 5: discussions on substantive issues contained in paragraph 1 of General Assembly resolution 75/240 (continued)/5/OEWG 2025 — Burkina Faso: Mr. Chairman, since this is the first time that Burkina Faso is taking the floor, our delegation wishes t…
S82
Agenda item 5 : Day 4 Afternoon session — Philippines:This may not be as elegant as the proposal of India, I am not as creative or maybe as visual as the earlier …
S83
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Jonathan Mendoza Iserte:Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this t…
S84
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — Cathy Li: Thanks for having me. So first of all, just a very quick overview. The work is done not by one organisation…
S85
#205 L&A Launch of the Global CyberPeace index — Suresh Yadav: Thank you, Vinit. I hope you can hear me, Vinit, if you can. Loud and clear, we can hear you. Thank you ve…
S86
IndoGerman AI Collaboration Driving Economic Development and Soc — Thank you so much, Anandi. Thank you, Anandi. Quite pervasive, it is being applied to almost all the sectors. And where …
S87
From KW to GW Scaling the Infrastructure of the Global AI Economy — He points out his involvement in designing large‑scale, gigawatt‑level data centers, underscoring India’s growing capaci…
S88
Indias Roadmap to an AGI-Enabled Future — This discussion focused on India’s path to building an AGI-enabling ecosystem, examining the critical pillars of energy,…
S89
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part II — Anil Kumar Lahoti:Thank you, Dana. First of all, I thank ITU for inviting me to this plus 20, and I consider this as my …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sh. Rakesh Dubey
1 argument · 158 words per minute · 287 words · 108 seconds
Argument 1
Integrated portal offering marketplace, hiring hub and policy repository (Sh. Rakesh Dubey)
EXPLANATION
Rakesh Dubey described the STPI portal as a one‑of‑its‑kind digital platform that aggregates government policies, contests, and services for startups. He highlighted added features such as a product marketplace and a hiring hub that enable startups to post job openings and showcase products.
EVIDENCE
He explained that the portal serves as a repository of various government policies and can host contests from any incubator worldwide, publishing applications and results online [11-13]. He noted that the portal includes a product marketplace and a hiring hub where startups can post product listings and job requirements, allowing individuals to apply and interact directly with startups [16-19].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Rakesh Dubey’s description of the STPI portal’s product marketplace, hiring hub and policy repository is confirmed in the event transcript where he outlines these functionalities and the platform’s role as a policy repository [S5][S4].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Devika Chandrasekaran, Dr. Soumya, Arita Dalan, Kirty Datar, Noor Fatma, Shelly Sharma
DISAGREED WITH
Sh. Bala MS, Ms. Neerja Sekhar, Arvind Kumar
Devika Chandrasekaran
2 arguments · 122 words per minute · 207 words · 101 seconds
Argument 1
Early STPI program (Scout 2021) provided validation and confidence to a drone‑tech start‑up (Devika Chandrasekaran)
EXPLANATION
Devika recounted how her startup participated in the STPI Scout 2021 program, receiving validation that boosted the team’s confidence. The early support helped them progress from prototype to market‑ready solutions.
EVIDENCE
She stated that in 2021 they joined the Scout 2021 program, were building a prototype, and the support received was not just funding but validation that gave them confidence to move forward [283-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Devika’s account of participating in the Scout 2021 program and receiving validation is corroborated by the transcript that records her joining the program in 2021 and the confidence-boosting support she received [S4][S1].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Sh. Rakesh Dubey, Dr. Soumya, Arita Dalan, Kirty Datar, Noor Fatma, Shelly Sharma
Argument 2
Drone solutions for agriculture, defence and disaster management, serving 10,000+ farmers (Devika Chandrasekaran)
EXPLANATION
Devika highlighted her startup’s impact, noting that its drones are used in agriculture, defence and disaster management, reaching over ten thousand farmers. The technology improves productivity and supports critical national needs.
EVIDENCE
She explained that Fuselage Innovations manufactures drones for agriculture, defence and disaster management, working with more than 10,000 farmers across India to improve productivity and efficiency [280-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim of serving over 10,000 farmers with drone technology is supported by the scaling discussion that highlights the company’s reach to more than 10,000 farmers across India [S22].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Dr. Soumya
1 argument · 126 words per minute · 175 words · 82 seconds
Argument 1
Assistance with regulatory compliance, data acquisition and global collaborations for AI diagnostics (Dr. Soumya)
EXPLANATION
Dr. Soumya explained that STPI helped her AI‑diagnostic startup navigate regulatory compliance, acquire data, and establish global collaborations. This support enhanced the reliability and scalability of their solutions.
EVIDENCE
She noted that STPI’s continued commitment helped them navigate regulatory compliances, obtain global collaborations, and acquire machine-readable data, which boosted clinical accuracy and reliability of their AI-powered diagnostic products [294-298].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Dr. Soumya’s presentation of AI-powered diagnostic solutions aligns with the transcript where she describes TectoCell’s AI diagnostic platform, confirming STPI’s support for such ventures [S5][S4].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Sh. Rakesh Dubey, Devika Chandrasekaran, Arita Dalan, Kirty Datar, Noor Fatma, Shelly Sharma
Arita Dalan
2 arguments · 139 words per minute · 268 words · 114 seconds
Argument 1
Industry connections, investor access and collaboration opportunities for a cybersecurity start‑up (Arita Dalan)
EXPLANATION
Arita described how STPI facilitated industry linkages, investor introductions, and collaborative projects for her cybersecurity firm. These connections helped the startup expand its market reach.
EVIDENCE
She mentioned that STPI provided opportunities to talk to investors, established various industry connections, and collaborated with the organization, which she credited for the startup’s growth [308-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Arita’s remarks about STPI facilitating industry linkages, investor introductions and collaborative projects are reflected in the transcript that details her long-term engagement with STPI and the networking opportunities provided [S4][S1].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Sh. Rakesh Dubey, Devika Chandrasekaran, Dr. Soumya, Kirty Datar, Noor Fatma, Shelly Sharma
Argument 2
Cybersecurity solutions simplifying security for enterprises across sectors (Arita Dalan)
EXPLANATION
Arita outlined her company’s mission to simplify security for large enterprises across sectors such as pharma, banking, and emerging digital firms. The startup delivers end‑to‑end security frameworks and solutions.
EVIDENCE
She explained that SecureTech provides cybersecurity frameworks, security parameters and solutions to large enterprises across pharma, banking, finance and other sectors, securing infrastructure and data for both startups and large organizations [314-320].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The description of SecureTech’s comprehensive security frameworks for pharma, banking and other sectors matches the external summary of the company’s cybersecurity offerings [S22].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Kirty Datar
2 arguments, 147 words per minute, 50 words, 20 seconds
Argument 1
STPI recognition strengthens credibility with customers, investors and government (Kirty Datar)
EXPLANATION
Kirty stated that being recognized by STPI enhanced his startup’s credibility among customers, investors, and government bodies, facilitating market acceptance and growth.
EVIDENCE
He said that STPI’s recognition has strengthened their credibility with customers, investors, and government stakeholders, helping them sharpen their positioning as a deep-tech company [323-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kirty’s statement that STPI recognition boosts credibility is supported by the event commentary that highlights STPI’s validation as a trust signal opening doors with customers, investors and government bodies [S5][S4].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Shelly Sharma, Noor Fatma
DISAGREED WITH
Ms. Neerja Sekhar
Argument 2
STPI recognition enhances start‑up credibility with stakeholders (Kirty Datar)
EXPLANATION
Reiterating the earlier point, Kirty emphasized that STPI’s acknowledgment serves as a trust signal, opening doors to further partnerships and funding.
EVIDENCE
He again noted that STPI’s recognition strengthens credibility with customers, investors and government, which is crucial for deep-tech startups [323-325].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same transcript notes that STPI’s acknowledgment serves as a credibility enhancer for deep-tech startups, confirming Kirty’s point about stakeholder trust [S5][S4].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Noor Fatma
2 arguments, 169 words per minute, 219 words, 77 seconds
Argument 1
STPI helped secure funding and scale an AI‑powered oncology platform (Noor Fatma)
EXPLANATION
Noor described how STPI intervened when her startup faced cash‑flow constraints, helping them raise funds and scale their AI oncology solution. The platform now processes large volumes of scans and improves treatment planning.
EVIDENCE
She recounted that when they had only two months of cash flow, STPI rescued them by helping raise money, after which they processed around one million scans, flagged thousands of TB and lung cancer cases, and reduced radiotherapy planning time from a month to a week [329-333].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Noor’s account of STPI intervening to raise funds and enabling the AI oncology platform to scale to process a million scans is documented in the transcript describing the cash-flow rescue and subsequent growth [S4][S22].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Shelly Sharma, Kirty Datar
Argument 2
AI‑powered diagnostic platform improving oncology treatment planning and achieving global interest (Noor Fatma)
EXPLANATION
Noor highlighted the impact of their AI platform, which has attracted attention from national leaders and global tech figures, demonstrating its scalability and relevance.
EVIDENCE
She noted that the platform processed 1 million scans, identified 4,000 TB cases and 6 lung cancer cases, and that both Prime Minister Narendra Modi and Bill Gates showed interest, with Gates inviting them to Microsoft for further discussion [332-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim of attracting attention from Prime Minister Narendra Modi and Bill Gates is confirmed by the external source that cites both leaders expressing interest in the AI oncology solution [S22].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Shelly Sharma
2 arguments, 29 words per minute, 1208 words, 2418 seconds
Argument 1
Formal recognition and awards reinforce the impact of the ecosystem (Shelly Sharma)
EXPLANATION
Shelly, as the session host, announced the felicitation ceremony, presenting certificates and trophies to startups for achievements in revenue, funding, employment, and women participation, thereby underscoring the ecosystem’s impact.
EVIDENCE
She introduced the startup felicitation segment, listed each award category (revenue, funding, employment, women participation, AI impact, innovation) and invited dignitaries to present certificates and trophies to the recognized startups [214-276].
MAJOR DISCUSSION POINT
Major discussion point 1 – STPI Platform as an Enabler for Start‑ups
AGREED WITH
Kirty Datar, Noor Fatma
Argument 2
Celebration of start‑ups’ achievements in revenue, funding, employment and women participation (Shelly Sharma)
EXPLANATION
Shelly reiterated the celebration of startup successes, emphasizing the diverse metrics—revenue, funding, job creation, and gender inclusion—used to recognize excellence.
EVIDENCE
She repeatedly announced each startup’s award (e.g., highest revenue, highest funding, highest women employment) and called for applause, highlighting the breadth of achievements across the ecosystem [214-276].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Sh. Bala MS
3 arguments, 160 words per minute, 1424 words, 531 seconds
Argument 1
GCCs supply real data, infrastructure and enterprise validation, acting as a bridge for AI start‑ups (Sh. Bala MS)
EXPLANATION
Bala explained that Global Capability Centers (GCCs) provide the data sets, infrastructure, and enterprise validation that AI startups need to move from prototype to production, acting as a critical bridge.
EVIDENCE
He stated that GCCs give startups real data, infrastructure capability, and enterprise validation, answering the question of who will trust and integrate the model into global systems [40-42][50-57].
MAJOR DISCUSSION POINT
Major discussion point 2 – Role of Global Capability Centers (GCC) and Co‑creation Model
AGREED WITH
Ms. Geetika Dayal, Ms. Neerja Sekhar, Arvind Kumar
DISAGREED WITH
Sh. Rakesh Dubey, Ms. Neerja Sekhar, Arvind Kumar
Argument 2
Co‑creation model reduces pilot‑to‑production cycle by providing sandbox, domain expertise and production‑grade environment (Sh. Bala MS)
EXPLANATION
Bala described a co‑creation model where GCCs offer sandbox environments, domain expertise, and production‑grade infrastructure, shortening the time from pilot to full deployment for AI startups.
EVIDENCE
He detailed that the co-creation model provides a controlled sandbox, domain expertise, and production-grade pathways that reduce the pilot-to-production cycle, which is a major bottleneck for AI startups [60-63].
MAJOR DISCUSSION POINT
Major discussion point 2 – Role of Global Capability Centers (GCC) and Co‑creation Model
Argument 3
India’s high GCC density offers strategic talent and ecosystem advantage (Sh. Bala MS)
EXPLANATION
Bala highlighted India’s large number of GCCs, noting that this density creates strategic talent pools and ecosystem benefits, positioning India as a digital talent hub for global organizations.
EVIDENCE
He noted that India has about 1,900 GCCs today, projected to reach 3,500 by 2030, employing 3.5 million people and contributing $150 billion in software exports, making it a strategic talent and ecosystem advantage [40-42][58-60].
MAJOR DISCUSSION POINT
Major discussion point 2 – Role of Global Capability Centers (GCC) and Co‑creation Model
Ms. Geetika Dayal
3 arguments, 143 words per minute, 872 words, 364 seconds
Argument 1
Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
EXPLANATION
Geetika emphasized that partnerships between STPI and GCCs are crucial for providing startups with market access, especially to enterprise customers, thereby enabling scaling.
EVIDENCE
She said that the ecosystem’s collaboration with GCCs provides market access for startups, noting that GCCs act as a bridge between startups and enterprises, which is essential for scaling [94-100].
MAJOR DISCUSSION POINT
Major discussion point 2 – Role of Global Capability Centers (GCC) and Co‑creation Model
AGREED WITH
Ms. Neerja Sekhar, Sh. Bala MS, Vaani Kapoor, Praveen Kumar
Argument 2
Five structural pillars (knowledge, resources, market validation, funding, ethical AI) and collaboration reduce ecosystem friction (Ms. Geetika Dayal)
EXPLANATION
Geetika outlined five pillars—knowledge & capability building, resource access, market validation, funding, and ethical AI—that together reduce friction among government, startups, corporates, and investors.
EVIDENCE
She listed the five pillars and explained that ecosystem collaborations act as trust bridges, reducing friction between government, startups, corporates and investors [102-108].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
Argument 3
Immediate priorities: joint accelerators, scaling programs, corporate challenge initiatives and AI benchmarking reports (Ms. Geetika Dayal)
EXPLANATION
Geetika identified concrete next steps, including expanding joint accelerators, scaling the Samarth program, launching more corporate challenge initiatives, and producing AI benchmarking reports.
EVIDENCE
She mentioned priorities such as expanding joint accelerators, scaling Samarth, more corporate challenge programs, export readiness, and AI benchmarking reports as immediate actions [103-107].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
Ms. Neerja Sekhar
5 arguments, 97 words per minute, 921 words, 565 seconds
Argument 1
Partnership with NPC and STPI will accelerate responsible digital transformation via GCCs (Ms. Neerja Sekhar)
EXPLANATION
Neerja announced that the MoU between NPC, STPI, and other partners will speed up responsible digital transformation, leveraging GCCs to improve productivity and quality across sectors.
EVIDENCE
She stated that the partnership will accelerate responsible digital transformation via GCCs, supporting productivity, quality, capability, and industry alignment, and that NPC’s role is to strengthen infrastructure in the adoption spine of the ecosystem [123-131].
MAJOR DISCUSSION POINT
Major discussion point 2 – Role of Global Capability Centers (GCC) and Co‑creation Model
AGREED WITH
Sh. Bala MS, Ms. Geetika Dayal, Arvind Kumar
Argument 2
MOUs between STPI, NPC and TI formalize partnership for ecosystem scaling (Ms. Neerja Sekhar)
EXPLANATION
Neerja highlighted the signing of MoUs as a formal step to cement collaboration among STPI, NPC, and other stakeholders, aiming to scale AI innovation nationwide.
EVIDENCE
She noted that during the session a memorandum of understanding would be exchanged between STPI, NPC and TI to work together for scaling AI innovation and supporting the AI startup ecosystem [123-124].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
AGREED WITH
Ms. Geetika Dayal, Sh. Bala MS, Vaani Kapoor, Praveen Kumar
Argument 3
Trust is the entry ticket; requires privacy, security, transparency, accountability (Ms. Neerja Sekhar)
EXPLANATION
Neerja argued that trust is fundamental for AI adoption, requiring robust privacy, security, transparency, and accountability measures.
EVIDENCE
She defined trust as the entry ticket, stating it requires privacy, security, transparency, and accountability, as well as operational reliability and responsible governance [140-145].
MAJOR DISCUSSION POINT
Major discussion point 3 – Trust, Safety, Ethical and Responsible AI as Prerequisite for Scale
AGREED WITH
Arvind Kumar, Ms. Geetika Dayal
Argument 4
Testbeds bridge promise and proof, providing real‑world sandboxes (Ms. Neerja Sekhar)
EXPLANATION
Neerja emphasized that testbeds act as real‑world sandboxes, allowing startups to move from promise to proven performance.
EVIDENCE
She described testbeds as bridging promise and proof, providing real-world sandboxes, labs, testing environments, reference architectures, etc. [145-147].
MAJOR DISCUSSION POINT
Major discussion point 3 – Trust, Safety, Ethical and Responsible AI as Prerequisite for Scale
Argument 5
Traction turns pilots into scalable deployments (Ms. Neerja Sekhar)
EXPLANATION
Neerja noted that achieving traction—moving beyond demos to actual implementations—is essential for scaling AI solutions.
EVIDENCE
She explained that traction turns pilots into scale, meaning not just a demo but actual implementation [147-149].
MAJOR DISCUSSION POINT
Major discussion point 3 – Trust, Safety, Ethical and Responsible AI as Prerequisite for Scale
Arvind Kumar
3 arguments, 134 words per minute, 923 words, 413 seconds
Argument 1
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability (Arvind Kumar)
EXPLANATION
Arvind distinguished ethical AI (concerned with environmental impact and job creation) from responsible AI, which emphasizes fairness, lack of bias, and accountability in AI systems.
EVIDENCE
He explained that ethical AI relates to environmental and job-creation concerns, while responsible AI focuses on fairness, bias-free outcomes, and accountability, giving examples such as driverless car accidents and who should be held responsible [194-199].
MAJOR DISCUSSION POINT
Major discussion point 3 – Trust, Safety, Ethical and Responsible AI as Prerequisite for Scale
AGREED WITH
Ms. Neerja Sekhar, Ms. Geetika Dayal
DISAGREED WITH
Ms. Neerja Sekhar
Argument 2
Safe, trusted AI is essential for adoption, illustrated by UPI and biometric systems (Arvind Kumar)
EXPLANATION
Arvind argued that safety and trust are prerequisites for AI adoption, citing UPI and biometric attendance systems as examples of trusted technologies that achieved scale.
EVIDENCE
He cited UPI’s widespread trust and the scalability of biometric attendance/identity systems as proof that safe, trusted AI enables large-scale adoption [202-205].
MAJOR DISCUSSION POINT
Major discussion point 3 – Trust, Safety, Ethical and Responsible AI as Prerequisite for Scale
Argument 3
STPI’s nationwide centers and PPP initiatives provide infrastructure, policy alignment and market reach (Arvind Kumar)
EXPLANATION
Arvind listed STPI’s extensive network of 70 centers, including domain‑specific entrepreneurship centers, and its PPP initiatives such as data centers and cloud services, which together deliver infrastructure and market access across India.
EVIDENCE
He detailed that STPI has 70 centers across the country, 24 domain-specific entrepreneurship centers, provides seed funding, global reach, market access, incubation, and also runs PPP projects like BAPT, network security, data centers and cloud services [175-183].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
Vaani Kapoor
1 argument, 69 words per minute, 520 words, 451 seconds
Argument 1
Moderator emphasized the need for industry insight and cross‑sector dialogue (Vaani Kapoor)
EXPLANATION
Vaani, acting as moderator, invited industry experts to share insights, underscoring the importance of bringing together government, industry, and startups for a holistic AI innovation dialogue.
EVIDENCE
She thanked Rakesh Dubey for setting the context, played the STPI impact video, and then invited Shree Bala from Strat Infinity to share industry perspective, highlighting the need for cross-sector insight [26-30].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
AGREED WITH
Ms. Geetika Dayal, Ms. Neerja Sekhar, Sh. Bala MS, Praveen Kumar
Praveen Kumar
1 argument, 108 words per minute, 299 words, 165 seconds
Argument 1
Vote of thanks highlighted collective contributions, reinforcing collaborative spirit (Praveen Kumar)
EXPLANATION
Praveen delivered the formal vote of thanks, expressing gratitude to all dignitaries, speakers, and startups, and emphasizing the collaborative effort that made the session successful.
EVIDENCE
He thanked the dignitaries, speakers, and startups, highlighted contributions of Neerja Sekhar, Rakesh Dubey, Geetika Dayal, Bala MS, and the organizing team, and invited everyone for a group photograph [353-371].
MAJOR DISCUSSION POINT
Major discussion point 4 – Multi‑Stakeholder Collaboration and Policy Framework
AGREED WITH
Ms. Geetika Dayal, Ms. Neerja Sekhar, Sh. Bala MS, Vaani Kapoor
Meenal Gupta
1 argument, 133 words per minute, 76 words, 34 seconds
Argument 1
Global interest and scaling of the AI oncology platform, underscoring STPI’s role (Meenal Gupta)
EXPLANATION
Meenal highlighted that the AI oncology platform attracted attention from the Prime Minister and Bill Gates, indicating global interest and scaling potential, and credited STPI’s support for this trajectory.
EVIDENCE
She mentioned that Prime Minister Narendra Modi invited them to discuss their solution and that Bill Gates showed interest, inviting them to Microsoft to explore further collaboration [334-336].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Meenal’s highlight of global interest and scaling aligns with the same source noting invitations from the Prime Minister and Bill Gates to discuss the platform further, underscoring STPI’s contribution [S22].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Milind Datar
1 argument, 0 words per minute, 0 words, 1 second
Argument 1
Participation in group photograph symbolizing collective success of the ecosystem (Milind Datar)
EXPLANATION
Milind’s presence was noted during the group photograph, symbolizing the collective achievement of the ecosystem’s stakeholders.
EVIDENCE
The transcript records a brief placeholder for Milind Datar [326] and later mentions a group photograph with all dignitaries and startups, indicating his participation in the collective photo session [374-380].
MAJOR DISCUSSION POINT
Major discussion point 5 – Startup Success Stories Demonstrating Innovation and Impact
Agreements
Agreement Points
STPI provides a comprehensive enabling platform and ecosystem that validates, mentors, and connects startups, boosting their confidence, credibility and growth.
Speakers: Sh. Rakesh Dubey, Devika Chandrasekaran, Dr. Soumya, Arita Dalan, Kirty Datar, Noor Fatma, Shelly Sharma
Integrated portal offering marketplace, hiring hub and policy repository (Sh. Rakesh Dubey)
Early STPI program (Scout 2021) provided validation and confidence to a drone‑tech start‑up (Devika Chandrasekaran)
Assistance with regulatory compliance, data acquisition and global collaborations for AI diagnostics (Dr. Soumya)
Industry connections, investor access and collaboration opportunities for a cybersecurity start‑up (Arita Dalan)
STPI recognition strengthens credibility with customers, investors and government (Kirty Datar)
STPI helped secure funding and scale an AI‑powered oncology platform (Noor Fatma)
Formal recognition and awards reinforce the impact of the ecosystem (Shelly Sharma)
All these speakers highlighted how STPI’s digital portal, incubation programs, mentorship, regulatory assistance and public recognition act as critical enablers for startups, providing validation, market access and credibility that accelerate scaling [11-19][283-287][294-298][308-312][323-325][329-333][214-276].
POLICY CONTEXT (KNOWLEDGE BASE)
STPI’s role as a catalyst is documented in recent assessments that highlight its comprehensive ecosystem supporting startups from prototype to global scaling, and note that its recognition enhances credibility with investors and government stakeholders [S46][S47].
Global Capability Centers (GCCs) and co‑creation models are essential bridges that supply real data, infrastructure and enterprise validation, reducing the pilot‑to‑production gap for AI startups.
Speakers: Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar, Arvind Kumar
GCCs supply real data, infrastructure and enterprise validation, acting as a bridge for AI start‑ups (Sh. Bala MS)
Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
Partnership with NPC and STPI will accelerate responsible digital transformation via GCCs (Ms. Neerja Sekhar)
STPI’s nationwide centres and PPP initiatives provide infrastructure, policy alignment and market reach (Arvind Kumar)
Bala described the GCC co-creation model that offers sandboxes and domain expertise to cut the pilot-to-production cycle [60-63]; Geetika stressed that STPI-GCC partnerships give startups market access [94-100]; Neerja announced MoUs to leverage GCCs for responsible transformation [123-124]; Arvind highlighted STPI’s extensive centre network and PPP projects that underpin such ecosystem support [175-183].
Trust, safety, ethical and responsible AI are prerequisite conditions for large‑scale adoption of AI solutions.
Speakers: Ms. Neerja Sekhar, Arvind Kumar, Ms. Geetika Dayal
Trust is the entry ticket; requires privacy, security, transparency, accountability (Ms. Neerja Sekhar)
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability (Arvind Kumar)
Five structural pillars include ethical AI; collaboration reduces friction (Ms. Geetika Dayal)
Neerja defined trust as needing privacy, security and accountability [140-145]; Arvind distinguished ethical AI (environment, jobs) from responsible AI (fairness, accountability) [194-199]; Geetika listed ethical AI as one of five pillars for scaling innovation [102-108]. All converge on the need for trustworthy, responsible AI before scaling.
POLICY CONTEXT (KNOWLEDGE BASE)
The necessity of trust, safety and ethical AI aligns with emerging governance frameworks that place trust as a prerequisite for adoption, as reflected in multi-stakeholder AI principles and recent policy discussions on responsible AI [S48][S49][S50].
Multi‑stakeholder collaboration (government, STPI, GCCs, NPC, industry, investors) is vital to reduce ecosystem friction and accelerate AI startup scaling.
Speakers: Ms. Geetika Dayal, Ms. Neerja Sekhar, Sh. Bala MS, Vaani Kapoor, Praveen Kumar
Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
MOUs between STPI, NPC and TI formalize partnership for ecosystem scaling (Ms. Neerja Sekhar)
GCC component steps in to bridge startup‑enterprise gap (Sh. Bala MS)
Moderator emphasized the need for industry insight and cross‑sector dialogue (Vaani Kapoor)
Vote of thanks highlighted collective contributions, reinforcing collaborative spirit (Praveen Kumar)
Geetika, Neerja and Bala all stressed formal partnerships and co-creation to bridge gaps [94-100][123-124][71-73]; Vaani introduced industry experts to the forum [26-30]; Praveen’s vote of thanks thanked all stakeholders, underscoring the collaborative nature of the event [353-361].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder collaboration is a core tenet of international AI governance recommendations, which call for coordinated action among governments, industry and civil society to reduce friction and enable scaling [S48][S50].
Recognition, awards and public visibility amplify startup credibility and attract further investment and market opportunities.
Speakers: Shelly Sharma, Kirty Datar, Noor Fatma
Formal recognition and awards reinforce the impact of the ecosystem (Shelly Sharma)
STPI recognition strengthens credibility with customers, investors and government (Kirty Datar)
STPI helped secure funding and scale an AI‑powered oncology platform (Noor Fatma)
Shelly’s felicitation ceremony highlighted awards across revenue, funding, employment and women participation [214-276]; Kirty noted that STPI’s acknowledgment boosted credibility [323-325]; Noor recounted how STPI’s intervention enabled funding and rapid scaling [329-333].
POLICY CONTEXT (KNOWLEDGE BASE)
Studies of the STPI ecosystem show that formal recognition and awards directly boost startup credibility and attract further investment, confirming the impact of public visibility on growth trajectories [S46][S47].
Similar Viewpoints
All three emphasize that STPI’s digital infrastructure and early‑stage programs deliver validation, data access and regulatory support that are crucial for startups to move from prototype to market‑ready products [11-19][283-287][294-298].
Speakers: Sh. Rakesh Dubey, Devika Chandrasekaran, Dr. Soumya
Integrated portal offering marketplace, hiring hub and policy repository (Sh. Rakesh Dubey)
Early STPI program (Scout 2021) provided validation and confidence to a drone‑tech start‑up (Devika Chandrasekaran)
Assistance with regulatory compliance, data acquisition and global collaborations for AI diagnostics (Dr. Soumya)
Both stress that formal partnership agreements (MoUs) among STPI, NPC and other ecosystem actors are the mechanism to operationalise collaboration and accelerate AI innovation at national scale [94-100][123-124].
Speakers: Ms. Geetika Dayal, Ms. Neerja Sekhar
Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
MOUs between STPI, NPC and TI formalize partnership for ecosystem scaling (Ms. Neerja Sekhar)
Both agree that beyond technical capability, AI systems must be trustworthy, ethical and responsible—incorporating privacy, security, fairness and accountability—to achieve scale [140-145][194-199].
Speakers: Ms. Neerja Sekhar, Arvind Kumar
Trust is the entry ticket; requires privacy, security, transparency, accountability (Ms. Neerja Sekhar)
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability (Arvind Kumar)
Unexpected Consensus
Women participation and gender inclusion as a metric of startup success.
Speakers: Shelly Sharma, Ms. Geetika Dayal
Celebration of start‑ups’ achievements in revenue, funding, employment and women participation (Shelly Sharma)
Five structural pillars include ethical AI and emphasize gender inclusion (Ms. Geetika Dayal)
While Shelly highlighted women employment as a specific award category during the felicitation ceremony [231-236], Geetika, in her policy-oriented remarks, incorporated gender considerations within the broader ethical AI pillar, showing an unexpected alignment on gender inclusion across both ceremonial and strategic policy discussions [102-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Gender-inclusive entrepreneurship is emphasized in UNCTAD and Global Innovation Forum reports, which stress the need for gender balance in funding and accelerator programs as a metric of success [S39][S40][S41].
Recognition of the same AI startup (EZO5) by both government officials and private investors as a model of rapid scaling.
Speakers: Noor Fatma, Arvind Kumar
STPI helped secure funding and scale an AI‑powered oncology platform (Noor Fatma)
STPI’s nationwide centres and PPP initiatives provide infrastructure, policy alignment and market reach (Arvind Kumar)
Noor described how STPI’s intervention directly enabled funding and scaling of an AI oncology solution, while Arvind, a government official, emphasized the same STPI infrastructure as the backbone for such scaling, indicating an unexpected convergence of perspectives on the same startup’s growth pathway [329-333][175-183].
Overall Assessment

The speakers displayed strong consensus that a robust, collaborative ecosystem—anchored by STPI’s digital platform, nationwide centres, and formal partnerships with GCCs, NPC and industry—provides the necessary validation, market access, data, and infrastructure for AI startups to scale responsibly. Trust, ethical AI and multi‑stakeholder collaboration were repeatedly highlighted as pre‑conditions for large‑scale adoption.

High consensus: The convergence across government officials, industry leaders, and startup founders on the importance of ecosystem support, co‑creation with GCCs, and trustworthy AI indicates a unified strategic direction, suggesting that policy and programmatic efforts are likely to be coordinated and mutually reinforcing.

Differences
Different Viewpoints
How to achieve scaling of AI startups – GCC co‑creation model versus STPI digital portal versus trust‑based testbeds versus ethical‑responsible AI frameworks
Speakers: Sh. Bala MS, Sh. Rakesh Dubey, Ms. Neerja Sekhar, Arvind Kumar
GCCs supply real data, infrastructure and enterprise validation, acting as a bridge for AI start‑ups (Sh. Bala MS)
Integrated portal offering marketplace, hiring hub and policy repository (Sh. Rakesh Dubey)
Trust is the entry ticket; requires privacy, security, transparency, accountability; Testbeds bridge promise and proof; Traction turns pilots into scale (Ms. Neerja Sekhar)
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability; Safe, trusted AI is essential for adoption (Arvind Kumar)
Bala argues that Global Capability Centers provide the data, infrastructure and enterprise validation needed for AI startups to move from prototype to production [40-42][50-57]. Dubey promotes the STPI portal as a one-of-its-kind digital marketplace and policy repository that can connect startups with resources [11-19]. Sekhar stresses that trust, testbeds and traction are the prerequisite for scaling AI, emphasizing privacy, security and real-world sandboxes [140-145][145-149]. Kumar focuses on ethical and responsible AI, linking trust to fairness, accountability and societal impacts such as environment and jobs [194-199][202-205]. These perspectives diverge on the primary mechanism to achieve scaling, ranging from infrastructure provision, digital platform services, trust-building test environments to ethical governance.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy debates have highlighted multiple pathways for AI startup scaling, including co-creation with GCCs, digital portals managed by agencies like STPI, and trust-based testbeds, each featured in recent analyses of ecosystem design and governance models [S45][S46][S48].
Funding and credibility versus operational readiness and trust as the main barrier to scaling
Speakers: Kirty Datar, Ms. Neerja Sekhar
STPI recognition strengthens credibility with customers, investors and government (Kirty Datar)
Capital alone cannot solve the friction; operational readiness and trust are the biggest gaps (Ms. Neerja Sekhar)
Kirty Datar claims that recognition by STPI boosts a startup’s credibility with stakeholders, implying that funding and reputation are key to growth [323-325]. Sekhar counters that merely providing capital does not address the core friction; instead, startups need operational readiness, trust mechanisms, testbeds and traction to scale effectively [140-145][145-149]. This reflects a disagreement on whether financial credibility or operational trust is the primary lever for scaling.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent research argues that funding is no longer the primary obstacle; instead, organizational readiness and trust are identified as the critical constraints for scaling AI ventures, echoing findings from innovation ecosystem studies [S43][S46][S48].
Definition and emphasis of ethical versus responsible AI
Speakers: Arvind Kumar, Ms. Neerja Sekhar
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability (Arvind Kumar) Trust requires privacy, security, transparency, accountability and operational reliability (Ms. Neerja Sekhar)
Arvind Kumar distinguishes ethical AI (environmental impact, job creation) from responsible AI (fairness, bias-free outcomes, accountability) [194-199]. Sekhar’s notion of trust also includes privacy, security, transparency and accountability, overlapping with responsible AI but not explicitly addressing environmental or employment concerns [140-145]. The differing emphases reveal a subtle disagreement on what aspects should be prioritized within AI governance.
POLICY CONTEXT (KNOWLEDGE BASE)
The distinction between ethical AI and responsible AI is reflected in emerging governance frameworks that separate normative principles (ethical) from implementation mechanisms (responsible), as outlined in multi-stakeholder AI policy discussions [S48][S50].
Unexpected Differences
Government official prioritising ethical and environmental dimensions of AI while other speakers focus on economic scaling mechanisms
Speakers: Arvind Kumar, Sh. Bala MS, Ms. Geetika Dayal
Ethical AI concerns environment and job creation; responsible AI focuses on fairness and accountability (Arvind Kumar) Co‑creation model reduces pilot‑to‑production cycle by providing sandbox, domain expertise and production‑grade environment (Sh. Bala MS) Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
Arvind Kumar’s emphasis on environmental impact and job creation as core ethical concerns is unexpected in a session largely dominated by discussions of infrastructure, market access and scaling through GCCs and digital platforms. This creates a divergence between a sustainability-focused ethical lens and the predominantly economic scaling narratives of Bala and Dayal [194-199][202-205][60-63][94-100].
Startup credibility via STPI recognition versus trust‑building testbeds as the primary growth driver
Speakers: Kirty Datar, Ms. Neerja Sekhar
STPI recognition strengthens credibility with customers, investors and government (Kirty Datar) Trust is the entry ticket; requires privacy, security, transparency, accountability; Testbeds bridge promise and proof (Ms. Neerja Sekhar)
While Kirty Datar highlights external credibility conferred by STPI as the main lever for startup growth, Sekhar argues that without trust mechanisms and testbeds, credibility alone cannot translate into scalable impact. This contrast between reputation-based versus trust-based growth pathways was not anticipated given the overall focus on ecosystem support [323-325][140-145][145-149].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of the STPI ecosystem note that formal recognition enhances market credibility, while parallel literature on trust-by-design emphasizes testbeds as essential for user adoption, illustrating the tension between these growth levers [S46][S47][S48].
Overall Assessment

The discussion revealed several points of contention: (1) divergent views on the primary mechanism to scale AI startups—GCC co‑creation, a digital STPI portal, trust‑based testbeds, or ethical‑responsible AI frameworks; (2) disagreement on whether financial credibility or operational readiness/trust is the main barrier; (3) differing emphases on ethical versus responsible AI dimensions. While participants shared a common goal of scaling AI innovation, they proposed contrasting pathways and priorities.

Moderate to high. The disagreements are substantive, reflecting different institutional perspectives (government, industry, startups) on what levers are most critical for scaling. This may lead to fragmented policy and program design unless a coordinated strategy that integrates infrastructure, platform services, trust mechanisms and ethical considerations is adopted.

Partial Agreements
Both agree that collaboration between STPI and Global Capability Centers is crucial for scaling AI startups, but Dayal emphasizes broader partnership pillars and joint accelerators, while Bala focuses specifically on the co‑creation sandbox model to shorten the pilot‑to‑production gap [94-100][102-108][60-63].
Speakers: Ms. Geetika Dayal, Sh. Bala MS
Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal) Co‑creation model reduces pilot‑to‑production cycle by providing sandbox, domain expertise and production‑grade environment (Sh. Bala MS)
Both view STPI as a key enabler for startups; Dubey highlights the digital portal’s functional features, while Dayal stresses STPI’s role within a wider collaborative ecosystem that includes GCCs and other partners [11-19][94-100][102-108].
Speakers: Sh. Rakesh Dubey, Ms. Geetika Dayal
Integrated portal offering marketplace, hiring hub and policy repository (Sh. Rakesh Dubey) Collaboration between STPI and GCCs is essential for market access and scaling (Ms. Geetika Dayal)
Takeaways
Key takeaways
The STPI integrated portal (marketplace, hiring hub, policy repository) is a critical enabler for startups, providing validation, funding assistance, regulatory support and credibility. Global Capability Centers (GCCs) act as a bridge for AI startups by supplying real data, infrastructure, and enterprise validation; the co‑creation model with GCCs shortens the pilot‑to‑production cycle. Trust, safety, ethical and responsible AI are prerequisites for scaling; privacy, security, transparency, fairness and accountability must be built into solutions. Multi‑stakeholder collaboration (government, STPI, NPC, TiE, GCCs, investors, corporates) and a structured policy framework (knowledge building, resource access, market validation, funding, ethical AI) are essential to reduce ecosystem friction. Startup success stories (drone‑tech for agriculture, AI diagnostics for oncology, cybersecurity, women‑focused employment) illustrate the tangible impact of the ecosystem and the importance of recognition and awards.
Resolutions and action items
Signing of MoUs between STPI and National Productivity Council (NPC) and between STPI and TiE Delhi NCR (TiE) to formalize partnership for scaling AI innovation. Commitment to develop joint accelerators, expand the Samarth program, launch more corporate challenge initiatives and produce AI benchmarking reports (as proposed by Ms. Geetika Dayal). Agreement to build co‑creation platforms and enterprise sandboxes within GCCs, leveraging STPI’s portal capabilities. Plan to strengthen the five structural pillars (knowledge, resources, market validation, funding, ethical AI) through coordinated strategies among stakeholders.
Unresolved issues
How to ensure consistent and scalable market access for startups to global enterprises beyond the GCC sandbox. Specific mechanisms for providing startups with large‑scale, high‑quality data sets and compute resources. Detailed roadmap for operationalizing the co‑creation model across diverse industry sectors. Clarification of funding pipelines beyond initial seed/angel support, especially for later‑stage scaling. Implementation details of the proposed AI benchmarking reports and metrics for measuring ecosystem productivity.
Suggested compromises
Emphasis on collaboration over competition as a common ground for all ecosystem players. Adopting a co‑creation model that positions startups as partners rather than mere vendors, balancing control and innovation. Balancing ethical considerations (environment, job creation) with responsible AI requirements (fairness, accountability) to satisfy both regulatory and business objectives.
Thought Provoking Comments
The scale of AI is not determined by the model you build, but by the way your AI gets integrated into the global organization. The real gap is operational/organizational readiness, not a technology gap.
This reframes the common assumption that technical superiority alone drives AI success. It shifts focus to integration, governance, and the ‘co‑creation’ model that links startups with Global Capability Centers (GCCs).
Bala’s point redirected the conversation from pure technology talk to the practical challenges of scaling AI in enterprises. It prompted the subsequent speakers (Geetika Dayal and Neerja Sekhar) to propose concrete mechanisms—co‑creation platforms, testbeds, and trust frameworks—to bridge that integration gap.
Speaker: Sh. Bala MS, Strat Infinity
We need a three‑part framework for startups and ecosystem builders: Trust (privacy, security, accountability), Testbeds (real‑world sandboxes, labs), and Traction (moving pilots to real implementation).
She distilled the complex scaling problem into a clear, actionable framework that aligns policy, industry, and startup needs, emphasizing responsible AI and measurable outcomes.
Her framework became a reference point for the rest of the session. It was echoed by Arvind Kumar’s discussion on responsible vs ethical AI and reinforced the later emphasis on collaboration over competition by Geetika Dayal.
Speaker: Ms. Neerja Sekhar, Director General, National Productivity Council
Collaboration, not competition, is the engine for scaling AI innovation. Five structural pillars—knowledge & capability building, resource access, market validation, funding, and ethical/responsible AI—must be coordinated across STPI, TiE, GCCs, government, and corporates.
She synthesised policy, ecosystem, and startup perspectives into a strategic roadmap, highlighting the need for coordinated action rather than siloed programs.
Her articulation prompted a shift from individual success stories to a systemic view, leading to the MOU exchange ceremony and reinforcing the session’s theme of building a unified AI ecosystem.
Speaker: Ms. Geetika Dayal, DG, TiE Delhi NCR
Responsible and ethical are related but distinct: ethical concerns the CEO’s attitude toward environment and job creation, while responsible focuses on fairness, bias‑free outcomes, and accountability (e.g., who is liable for a driverless car accident).
He clarified commonly conflated concepts, providing a nuanced understanding essential for building trust in AI products.
This clarification deepened the discussion on trust introduced by Neerja Sekhar and set the stage for later mentions of accountability and safety in AI deployments.
Speaker: Arvind Kumar, Director General, STPI
STPI’s portal now includes a product marketplace, hiring hub, and a sandbox where startups can post products and get enterprise validation—making it a one‑of‑its‑kind global platform.
He introduced a concrete digital infrastructure that operationalises the ecosystem vision, moving from abstract policy to a tangible tool for startups and incumbents.
This announcement provided the practical foundation that the later speakers (Bala, Geetika, Neerja) referenced when discussing integration, testbeds, and market access.
Speaker: Sh. Rakesh Dubey, Director, Startup and Innovation, STPI
Our AI‑powered platform Imagix AI has processed over one million scans, flagged thousands of TB and lung cancer cases, and reduced radiotherapy planning from a month to a week—demonstrating real‑world impact and attracting global attention (Bill Gates, Microsoft).
Their data‑driven success story exemplifies how the trust‑testbed‑traction framework can translate into measurable health outcomes and international interest.
The founders’ testimony validated the earlier theoretical frameworks, reinforcing the session’s message that ecosystem support leads to tangible, scalable impact.
Speaker: Noor Fatma & Meenal Gupta, Co‑founders, EZO5 Solutions
Overall Assessment

The discussion evolved from an introductory overview of STPI’s digital platform to a deep dive into the systemic challenges of scaling AI. Key turning points were triggered by Bala’s insight on integration over pure technology, Geetika’s collaborative roadmap, and Neerja’s concise trust‑testbed‑traction framework. These comments reframed the conversation, moving it from celebratory announcements to strategic problem‑solving, and prompted participants to align on concrete mechanisms—co‑creation models, sandbox environments, and accountability standards. The cumulative effect was a coherent narrative that linked policy, infrastructure, industry, and startup experiences, culminating in actionable commitments (MOUs) and real‑world success stories that illustrated the ecosystem’s potential.

Follow-up Questions
Who will provide real data sets, compute infrastructure, and enterprise validation for AI startups to scale?
Identified as the biggest question mark hindering AI startup integration with global enterprises, requiring trusted data and validation to move from prototype to production.
Speaker: Bala MS (Strat Infinity)
How can co‑creation platforms and enterprise sandboxes be developed to link startups with Global Capability Centers (GCCs)?
Co‑creation platforms are seen as essential to reduce the pilot‑to‑production cycle and institutionalize AI solutions within large enterprises.
Speaker: Bala MS (Strat Infinity)
What should a joint IP framework between startups and GCCs look like?
A joint intellectual‑property framework is under discussion and is crucial for protecting startup innovations while enabling collaborative development.
Speaker: Bala MS (Strat Infinity)
How can joint accelerators, programs like Samarth, corporate challenge initiatives, export‑readiness schemes, and AI benchmarking reports be expanded and coordinated?
These initiatives are immediate priorities to create a coordinated strategy, measure ecosystem performance, and enhance market access for AI startups.
Speaker: Geetika Dayal (TiE Delhi NCR)
How can startups reliably move from ideas to measurable societal impact?
Addressing the translation of innovative concepts into real‑world benefits is essential for scaling AI innovation and achieving the summit’s welfare goals.
Speaker: Neerja Sekhar (National Productivity Council)
What mechanisms are needed to build trust in AI products (privacy, cybersecurity by design, transparency, accountability, fairness, operational reliability)?
Trust is the entry ticket for AI adoption; establishing robust safeguards ensures user confidence and regulatory compliance.
Speaker: Neerja Sekhar (National Productivity Council)
What testbeds (real‑world sandboxes, labs, reference architectures) are required to bridge the promise‑proof gap for AI pilots?
Testbeds enable startups to validate solutions in realistic environments, facilitating transition from pilot to production.
Speaker: Neerja Sekhar (National Productivity Council)
What pathways are needed to achieve traction—turning AI pilots into scaled implementations?
Beyond demos, sustained traction requires systematic scaling strategies, market integration, and measurable outcomes.
Speaker: Neerja Sekhar (National Productivity Council)
How can ‘responsible’ and ‘ethical’ AI be clearly differentiated and operationalized, especially regarding accountability?
Startups are confused between these concepts; clear guidelines are needed to ensure AI systems are safe, fair, and accountable.
Speaker: Arvind Kumar (Director General, STPI)
What metrics and frameworks should be used to assess productivity, quality, capability, and industry alignment in the AI era?
Expanding productivity definitions to include reliability, safety, and responsible performance will help benchmark AI impact across sectors.
Speaker: Neerja Sekhar (National Productivity Council)
How should GCCs be structured as bridges for AI startups—what governance, collaboration models, and scaling mechanisms are optimal?
Both speakers highlighted GCCs as critical pathways; defining effective partnership models is vital for market access and co‑creation.
Speaker: Bala MS (Strat Infinity); Geetika Dayal (TiE Delhi NCR)
What are the five structural pillars (knowledge & capability building, resource access, market validation, funding, ethical AI) that need coordinated implementation to scale innovation?
Identifying and aligning these pillars is necessary to create a robust, collaborative ecosystem that can scale AI startups.
Speaker: Geetika Dayal (TiE Delhi NCR)
What are the operational readiness gaps that prevent AI solutions from scaling across business units?
Surveys show most enterprises are piloting AI but few have scaled; the gap lies in organizational readiness rather than technology.
Speaker: Bala MS (Strat Infinity)
How should productivity be re‑defined in the AI era to include reliability, repeatability, safety, and responsible performance?
Broadening productivity metrics will capture the true value AI brings beyond traditional efficiency measures.
Speaker: Neerja Sekhar (National Productivity Council)

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trustworthy AI Foundations and Practical Pathways

Building Trustworthy AI Foundations and Practical Pathways

Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel discussed how artificial intelligence is moving from the era of general-purpose hardware to a new era of “general software” that can perform many tasks traditionally handled by separate applications [40-44]. Alok explained that this shift mirrors the historic transition when a single computer could run both Excel and PowerPoint through different software, and that AI aims to replace such distinct programs with a single, instruction-driven system [42-43]. He argued that because software has a high upfront development cost but negligible marginal cost, the rise of general AI threatens the business models of many software-dependent industries [55-60]. Specific sectors he cited include the thousands of Indian web-design firms and the emerging markets for novel-writing and movie production, which could become obsolete as AI generates content on demand [63-66][73-80]. Moreover, the traditional ad-supported web economy is collapsing because users can obtain answers directly from models like ChatGPT, reducing click-through rates from one-in-six to one-in-seven and jeopardizing revenue for content sites [82-89][90-99].


Devayan then shifted the focus to the problem of alignment, defining risk as the probability and severity of an undesirable outcome and noting that these dimensions vary across contexts such as education or healthcare [149-152][170-177][186-188]. He illustrated the stakes with real-world incidents, for example the Air Canada case, to show that AI safety failures can cause loss of life, liberty, or property [190-193].


Anirban introduced ASTRA, a risk-assessment database co-created with the AICSTEP Foundation, designed specifically for the Indian context [211-218]. The database categorises risks into a contextualised taxonomy, distinguishing “social” risks like linguistic bias and infrastructure exclusion from “frontier” risks such as power-seeking or rogue AI behavior [224-230][250-259][260-268]. He gave concrete examples, such as AI systems failing in regions with poor connectivity and a trading-firm incident where an unchecked model caused massive losses [267-272][256-259]. Anirban emphasized that mitigation is especially difficult because measures are often context-specific and can trade off utility against safety [282-289].


The speakers agreed that building a nuanced, Indian-focused risk taxonomy and continuously expanding it to sectors like agriculture is essential for responsible AI deployment [290-294][231-236]. They concluded that while general AI promises powerful new capabilities, its safe integration requires careful alignment, contextual risk assessment, and balanced mitigation strategies [110-119].


Keypoints

From general-purpose hardware to “general-purpose software” (AI) and its revolutionary promise – Alok explains how early computers were single-purpose machines, how the invention of a universal hardware platform enabled the software boom, and how today AI is becoming a single software layer that can replace many applications, heralding a massive societal shift. [41-48][49-60]


Immediate economic disruption caused by AI-generated content – The rise of conversational agents is eroding ad-driven web traffic, threatening web-design agencies, content-creation businesses, and even open-source tooling such as Tailwind, while simultaneously enabling non-technical users to build apps instantly. [82-103][104-106]


Fundamental safety, alignment, and correctness challenges – The ease of natural-language interaction hides the danger of ambiguous instructions; AI may fulfill literal requests in harmful ways, underscoring the need for trustworthy, aligned behavior. [110-119][121-148]


A need for a contextual Indian risk framework – India’s scale, linguistic diversity, and infrastructure constraints (e.g., connectivity) create unique AI safety concerns that global frameworks miss; the team built the ASTRA database to capture Indian-specific social and frontier risks. [165-208][211-279]


Mitigation is hard and must be nuanced – Risks are split into observable “social” risks and hard-to-measure “frontier” risks; mitigation measures often trade off safety against utility, requiring empirical grounding and sector-specific solutions. [250-289][290-294]


Overall purpose / goal


The discussion aims to map the transformative impact of AI, from its historical evolution to its present-day economic upheaval, and to highlight the urgent need for systematic risk identification, contextual assessment (especially for India), and carefully balanced mitigation strategies to ensure safe, trustworthy deployment.


Overall tone


The conversation begins with an enthusiastic, almost speculative tone about AI’s potential (Alok’s historical narrative). It then shifts to a cautionary, concerned tone as the speakers describe economic displacement and safety hazards. By the latter part, the tone becomes collaborative and solution-oriented, focusing on concrete risk taxonomy (ASTRA) and mitigation challenges. This progression moves from optimism → warning → constructive problem-solving.


Speakers

Alok


Area of expertise / topics: AI general software, technology economics, policy implications


Role / Title: Senior official, Ministry of Panchayati Raj (MOPR), Government of India [S7]


Devayan


Area of expertise / topics: AI alignment, risk definition (as discussed in the transcript)


Role / Title: (not specified in external sources)


Anirban


Area of expertise / topics: AI safety, risk taxonomy, risk categorization (social vs. frontier risks), mitigation strategies in the Indian context


Role / Title: Researcher at Ashoka University; contributor to the ASTRA risk database project [S5]


Additional speakers:


Ananya


Area of expertise / topics: Contributed to the ASTRA risk database (social risk analysis)


Role / Title: Contributor to the ASTRA risk database project [S5]


Full session report: Comprehensive analysis and detailed insights

The panel began with Alok describing a historic shift in computing: after decades of relying on general‑purpose hardware that could run any software, we are now entering an era of general‑purpose software, a single AI system that can perform the roles of many specialised applications such as PowerPoint and Excel simply by following natural‑language instructions. He argued that the shift to general‑purpose AI software will be as transformative as the advent of universal hardware.


Alok traced the evolution from early single‑purpose machines—a hammer, a car, a door—through the first computers that each performed a single task, and explained why two separate computers are not needed for Excel and PowerPoint once universal hardware exists. He highlighted pioneers such as Babbage, Vannevar Bush and Alan Turing, whose work showed that a single machine could be programmed for many tasks, creating the hardware platform that powered the information revolution and a subsequent software boom.


He then argued that the emergence of general‑purpose AI software will upend the economics of the software industry. Because the marginal cost of distributing software is near‑zero, the traditional “burn‑rate” model of large upfront investment followed by low ongoing costs will disappear. He gave concrete examples: thousands of Indian web‑design agencies that built modest‑size businesses will see their market evaporate; nascent markets for novel writing and film production are already threatened by AI‑generated content that can produce a full‑length movie on demand; and the ad‑driven web economy is collapsing as users obtain answers directly from models like ChatGPT, bypassing the websites that previously earned revenue from clicks. The decline in click‑through rates was illustrated by a drop from a one‑in‑six chance of being visited after a search to a one‑in‑seven chance, which the speaker described as “multiple orders of magnitude” loss of income for content providers. Open‑source tooling such as Tailwind is also suffering because developers can simply ask the AI to generate the required code.
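The cost asymmetry described here (a large fixed build cost followed by near‑zero marginal cost) can be made concrete with a toy model. The functions and all numbers below are invented purely for illustration; they are not figures from the session:

```python
def manufacturing_cost(units: int, setup: float = 10_000.0, per_unit: float = 50.0) -> float:
    # Physical goods: total cost grows roughly linearly with output
    # (the periodic R&D "bumps" mentioned in the talk are omitted for simplicity)
    return setup + per_unit * units

def software_cost(users: int, build: float = 1_000_000.0, per_user: float = 0.0) -> float:
    # Software: one large upfront build, effectively zero marginal cost per user
    return build + per_user * users

print(manufacturing_cost(1_000), manufacturing_cost(2_000))   # → 60000.0 110000.0
print(software_cost(1_000_000), software_cost(2_000_000))     # → 1000000.0 1000000.0
```

Doubling output nearly doubles the manufacturer's total cost, while the software vendor's total is unchanged; this is the "burn‑rate" asymmetry the speaker describes, and why an AI system that generates software on demand undercuts businesses built on recovering that upfront cost.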


Alok warned that the ease of natural‑language interaction hides a fundamental safety problem. Programming languages were created to disambiguate human language, yet AI systems are now being asked to act on free‑form instructions, a practice he described as “deadly” because ambiguous prompts can lead to literal, harmful fulfilments. He invoked the classic “genie” metaphor to illustrate the risk of AI granting wishes without regard for consequences.


Devayan followed by introducing the concept of alignment and defining risk as the product of likelihood and severity. He emphasized that these dimensions must be evaluated in the specific deployment context—education, healthcare, etc.—and cited an example from Air Canada as an illustration of real‑world AI safety failures. He also asked how to quantify risk in practice.


Anirban expanded the risk definition by adding two further dimensions: intentionality (intentional vs. unintentional) and the stage of manifestation (development, deployment, usage). He argued that a comprehensive taxonomy should also capture stakeholder attribution—who is responsible for the risk. Rather than a disagreement, his contribution complemented Devayan’s definition, creating a richer risk model.
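The combined definition, Devayan's likelihood‑times‑severity score plus Anirban's intentionality and lifecycle dimensions, can be sketched as a small data model. This is an illustrative sketch only; the field names, enum values, and example numbers are assumptions for exposition, not the actual ASTRA schema:

```python
from dataclasses import dataclass
from enum import Enum

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Stage(Enum):
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    USAGE = "usage"

@dataclass
class RiskEntry:
    name: str
    category: str        # "social" (observable) or "frontier" (hard to observe)
    likelihood: float    # estimated probability in [0, 1] for a given context
    severity: float      # normalized harm in [0, 1] for that context
    intent: Intent       # intentionality dimension
    stage: Stage         # lifecycle stage where the risk manifests
    stakeholder: str     # who is attributed responsibility for the risk

    def score(self) -> float:
        # risk = likelihood x severity, evaluated per deployment context
        return self.likelihood * self.severity

# Hypothetical example: infrastructure exclusion in a low-connectivity region
connectivity_gap = RiskEntry(
    name="infrastructure exclusion",
    category="social",
    likelihood=0.6,
    severity=0.5,
    intent=Intent.UNINTENTIONAL,
    stage=Stage.DEPLOYMENT,
    stakeholder="deployer",
)
print(round(connectivity_gap.score(), 2))  # → 0.3
```

Because likelihood and severity are context-specific, the same risk type would carry different scores in, say, an education deployment versus a healthcare one, which is exactly why a contextualised taxonomy matters.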


Both speakers agreed that existing global AI risk frameworks suffer from “contextual blindness” when applied to India, overlooking challenges such as linguistic diversity, caste considerations, and unreliable connectivity. To address this gap, Anirban presented ASTRA, the AI Safety, Trust and Risk Assessment database launched in partnership with the AICSTEP Foundation. ASTRA follows a seven‑step, bottom‑up process that begins with “resource identification,” a research phase to locate where risks occur in the Indian context. The framework currently identifies 37 risk types, organised into a causal taxonomy that maps each risk to its lifecycle stage and intent. Risks are categorised as “social” (observable, e.g., linguistic bias, infrastructure exclusion) and “frontier” (hard‑to‑observe, e.g., power‑seeking or rogue AI behaviour). Concrete Indian examples include AI systems failing in regions with poor internet connectivity, leaving farmers with unusable applications, and a trading‑firm incident where an unchecked model went rogue and generated massive losses. ASTRA is hosted on an archive repository, and a publicly available paper with a link describes the database; Ananya is noted as a primary contributor to its development.


Both Alok and Anirban stressed that mitigation of AI risks is exceptionally challenging. Alok cautioned that mitigation measures must be applied “exceedingly carefully,” warning that overly strict safeguards can erode the utility of AI systems. Anirban added that mitigation is the hardest task, highly context‑specific, often ineffective, and involves trade‑offs between safety and usefulness.


Looking ahead, the panel outlined several action items. ASTRA will be expanded beyond education and finance to sectors such as agriculture, and the team aims to empirically ground risk probabilities for the identified categories. Continued collaboration among the three speakers is intended to refine the taxonomy, improve mitigation strategies, and monitor emerging frontier risks.


In summary, the discussion moved from an optimistic vision of AI as a universal software platform that could democratise creation, through a cautionary appraisal of the economic and safety disruptions it may cause, to a constructive, solution‑oriented plan centred on the ASTRA risk framework. While there is strong consensus on the need for contextual, India‑focused risk assessment and the difficulty of mitigation, the panel highlighted remaining challenges: developing effective, utility‑preserving mitigation techniques, quantifying frontier‑risk likelihoods, and redesigning business models for an AI‑answer‑centric web. The panel concluded that while AI’s move toward universal software promises unprecedented capabilities, careful, context‑aware risk assessment—exemplified by the ASTRA initiative—is essential to harness its benefits responsibly.


Session transcript: Complete transcript of the session
Alok

I give this example because I’m fairly confident that when you look it up and when you try it yourself it will work. And I know it will work, by the way. That is, it will fail rather. On the current versions of ChatGPT, it will not fail, by the way. In the next generation, I do some stuff with Google for example, it won’t fail in the next generation of Gemini anymore. Because they’re putting a lot of effort into fixing this one error. They haven’t fixed the underlying problem. They saw some presentations of people like me pointing this stuff out so they’ve just put a band-aid on top. Now we can’t run life on band-aids.

Band-aids is what? Band-aids is students mugging up one answer before the exam so they get the marks for it. That’s not real learning, by definition. The problem is that we’ve built this system which is our attempt to have general software. And we don’t quite know how to do it. We don’t quite know how to handle it. So let me clarify what I… I’m going to say something incredibly stupid and then I’ll bring it into place. We were talking about this not too long ago. A long time ago you had machines that could do one thing. A hammer is a hammer, a car is a car, a door is a door. You can’t use one as the other.

I’m saying something that sounds incredibly stupid, but think about it. Why don’t you need two separate computers, one to run Excel and one to run PowerPoint? How come both run on the same machine? This is not obvious at all. We’re just used to it, so it seems obvious, but it wasn’t obvious. In fact, the first few computation machines that were made, if you go back, look at all of this Vannevar Bush and even before that Charles Babbage, all of these names one reads in history books or whatever, you’ll see they had differential analyzers and this, that and the other. Oh, this machine, it can add. That machine, it can solve differential equations. This other machine.

It can fit curves. This other machine. It can do this mapping task. This idea that you could have one machine which could do everything was completely ridiculous because there’s only one thing in the universe that we know of that can do that and it’s the human brain. The human brain is a singular object that can retrain itself to play billiards, to arrange chairs in a room, to present, to drive a car, it can do all of these things. So due to a bunch of very clever people like Alan Turing and co, we figured out that wait a minute, we can have one computer, we can build this one machine. I mean think of it just from a manufacturing point of view, like jackpot.

We can build one machine and it can do all the things. All we need to do is have different software, one for each task. So we’ll have one software for Excel, one software for PowerPoint, and the same physical machine will be able to run both. So we built general hardware. That worked for decades, and the fact that we had general hardware led to the computation and information revolution. Now, for the first time, instead of just having general hardware, that is, one machine that can run all software, we have general software: you don’t need PowerPoint and Excel separately. You can have one software which you tell what to do, and it will do the job of PowerPoint, and then you tell it something else and it will do the job of Excel also.

That’s what we are trying to build with AI at the end of the day: going from general hardware to general software. And as we know, this ability that we got when we built a general-purpose machine changed everything. Before, you needed to spend all this money and build separate machines for every task, and the moment you had a single machine that could do all things, that led to an absolutely massive change. It was a massive revolution. Now that you have general software coming in, once we learn how to do that, think of how the world is going to change. Think about software companies. There’s a very interesting graph that I really should have put here, which is: if you’re manufacturing something, there’s a burn rate.

So you have an increase in the amount of money you have to invest in your company initially. If you manufacture 10 cars, you have a certain amount of money you need to invest. As you increase the number of cars you manufacture, and I’m talking toy cars, I’m not rich enough to manufacture real cars, your costs go up sort of linearly, with bumps every time you do a new round of R&D or something. Software companies don’t do that, right? Software companies have this huge expense at the beginning to build everything up. And then once you have that, your burn rate is relatively low.

Selling 50,000 units of a software at $1,000, versus selling 2,000,000 units of that software, isn’t going to make a material change in the amount of money you’re investing every day. That entire economy is now going to be gone, because you don’t need that kind of investment in software anymore. And this has led to multiple real economies collapsing. I’ll give you two examples just off the top of my head. Web design companies. There were thousands and thousands of them all over India. You know, a group of college students get together and say, look, we’ll build websites for people. These were all micro and medium industries, maybe employing anywhere from 10 to 50 people. That economics is just gone.
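The two cost curves the speaker describes can be sketched numerically. A minimal illustration with made-up numbers, not figures from the talk: manufacturing cost grows roughly linearly with units, while classic software has a huge fixed cost and a near-zero marginal cost per copy.

```python
# Toy cost model; all numbers are invented for illustration.

def manufacturing_cost(units: int, setup: int = 10_000, per_unit: int = 50) -> int:
    """Toy-car factory: every extra unit costs real money."""
    return setup + per_unit * units

def software_cost(units: int, build: int = 1_000_000, per_unit: float = 0.01) -> float:
    """Classic software firm: big upfront build, negligible cost per copy."""
    return build + per_unit * units

for units in (50_000, 2_000_000):
    print(f"{units:>9} units: factory {manufacturing_cost(units):>11,} "
          f"vs software {software_cost(units):>12,.0f}")
```

Scaling sales 40x barely moves the software firm's daily burn, which is exactly the point being made about software economics.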

We all learned when we were small, right, what is the definition of economics? Economics is the study of the allocation of resources under conditions of scarcity. What if it’s not scarce? There’s no economics of air, despite the fact, again, that we’re in Delhi. There’s no econ of air. But similarly now, the economics of, say, writing novels is gone, right? You saw what happened with C dance recently, just 24 hours ago. The movie industry is worried. Why should people invest in making movies if I can write: you know what, I want a movie like Sherlock Holmes, but I want Salman Khan to be the main character, and I want me to be the side character, and I want this to be the story, make it our movie.

I press enter, movie’s done, right? If that comes to pass, then that entire economics is just gone, right? These are me talking about the future. Let’s talk about right now. Right now, at this very moment, a large portion of the internet is collapsing, because a large portion of the internet used to run on ads, right? So if I have a recipe website, what do I do? I put some ads on it. You visit my website to read my recipe for blueberry cupcakes or whatever, and you get that ad displayed to you, and I get some money from the likes of Google. You’ve seen that at the bottom of pages and so on, right?

So I get some money off of that. The problem is: now, who’s going to come to my stupid website? They’ll just ask ChatGPT or Gemini and they’ll get it, and nobody’s going to come to my website. Generally speaking, if you got your search engine optimization correct and you were on the first page of Google, your click rate was one in six, okay? This was the official statistic. That is, let’s say I am a top blueberry cupcake chef. That’s definitely not a thing, but let’s say I am, and I’m very proud of my blueberry cupcakes. I’ve made my website and everyone agrees it’s a great site, so when you search for blueberry cupcake recipes, let’s say I’m one of the top 10. Because people don’t just click one link, they usually go to two or three, I would have a one in six chance of getting clicked, and I would make some money off of it.

That number in the past year has gone from one in six to one in 1,500. That is multiple orders of magnitude. All of these websites are where ChatGPT and Gemini and DeepSeek and all of these people got their data from. But now no one will go to these websites, and they’re all dying. This is even true of open-source tools. Tailwind, which is a major CSS platform, had to let go of a lot of its engineers, because what’s happening is these tools have eaten all the open-source code, and people are no longer going to the open-source libraries to get it. They’re just saying, make me this thing that does that, and it does it.
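The collapse in click-through quoted above, one in six down to roughly one in 1,500, can be checked with quick arithmetic. The search volume below is an arbitrary illustration:

```python
# Click-through before and after, using the figures quoted by the speaker.
searches = 100_000                 # arbitrary number of searches
clicks_before = searches / 6       # first-page-of-Google era
clicks_after = searches / 1500     # AI-answer era
print(round(clicks_before), round(clicks_after))
print(round(clicks_before / clicks_after))  # → 250, i.e. over two orders of magnitude
```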

Of course there are positive sides. There are non-technical people who can now just say things to the system and it will build them a nice little app, which is great. But simultaneously we are destroying much of the infrastructure and much of the information landscape that made this possible in the first place. So we have to be exceedingly careful about that. Let me poke at that last sentence, because I think it leads to a really important point when we talk about correctness, trustworthiness, and all of this. In many ways, you know, we had machine learning before 2020 also, right? We were doing classification. We were doing all sorts of clever things.

What really changed with ChatGPT was that anyone could use it. It was the genius of the interface. You had the simple chatbot. You didn’t need to program anymore. You could just say things and it would do them, right? And it is this ease of that interface which changed everything about how we interact with these powerful AI systems. But there is an inherent danger in that. What is the danger? Well, we didn’t build computer languages, with all their brackets and weird expressions, for fun. Okay? If we could have written computer programs in English and had them run, we would have stuck with that only.

Why create all of these complicated looking languages where if I miss a semicolon, my computer is going to turn into a peacock, right? We did it that way because our normal language is too ambiguous. There are too many ways in which we say things where we assume you already know what I’m talking about. It’s too easy to miscommunicate, right? The teacher told the student that he was going to the fair. Who’s going to the fair, the teacher or the student? This is obviously a very stupid example, but we have thousands and thousands of ambiguities in our language which make it exceedingly difficult to understand what the other person even wants. That’s why we had computer languages in the first place, to disambiguate.
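The teacher-and-student sentence can be made concrete. A small sketch (the names and structure are my own, purely illustrative) of how code, unlike English, forces the pronoun to bind to exactly one referent:

```python
# In English, "The teacher told the student that he was going to the fair"
# has two readings. In code, the referent must be named explicitly.
from dataclasses import dataclass

@dataclass
class Person:
    name: str
    going_to_fair: bool = False

teacher = Person("teacher")
student = Person("student")

def tell(speaker: Person, listener: Person, subject: Person) -> None:
    # The formal language leaves nothing to disambiguate: `subject`
    # is exactly the person who is going to the fair.
    subject.going_to_fair = True

tell(teacher, student, subject=teacher)   # reading 1: the teacher goes
print(teacher.going_to_fair, student.going_to_fair)  # → True False
```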

Now we are saying: no need, I will just give the problem description, and this general-purpose software is going to basically custom-solve it. Think about how ambiguous our instructions are. This is deadly, right? We have literally got stories about this, about how easy or hard it is to give instructions. We have cautionary tales about genies and monkey’s paws for storylines, right? Yeah, yeah, yeah. We’ll switch at 15, don’t worry. So in those storylines, someone says, I want to be the richest person in the world, or I want to be the most beautiful person in the world. And what happens immediately after that is it kills everyone else. And it says: I have technically, correctly satisfied your query.

Everything you said, I have done. And so when we give a query, we want the machine to basically align with my expectations. That’s

Devayan

what alignment is. That’s what that term means, right? We want it to align with my expectations of how this stupid thing is going to act. That leads us to the following conundrum: I have the system, it’s going to do certain things, and I worry that it may do certain bad things. How do I define the risk of it getting into this bad thing and doing this bad thing? Do we have a clear way to define risk in our context? For that, I’ll hand over to Anirban. Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interest of time.

We have looked at different aspects. Three of us are at Ashoka, and we work together on different aspects of risk: safety risks, harm reduction, risk management, and trying to quantify and understand them.

This is just the map-of-India part; I’ll get back to India in a moment. India is a big nation, as we all know, with a lot of technology, and we have a tendency to solve our questions of scale using a lot of technology. That naturally introduces many challenges. You’re on the fifth day of the summit, so I don’t need to tell this to you. All of you have seen different examples of how empowering this technology could be, and why it’s important to be a bit skeptical about its deployment, because it could introduce new kinds of risks. But what is a risk? It’s hard to quantify and define. Risks and harms mean different things in different contexts.

Our goal as a team was to understand and try to make sense of these risks. They are hard to define. One definition that we’ve chosen: a risk is the probability of an undesirable outcome, characterized by two things, its likelihood and its severity. I think the airplane example is a good illustration of that. Okay, the slide is coming back. Basically, airplanes are dangerous, all of you know that, yet most of you also take airplanes. That’s because the probability of something happening is low; that’s where likelihood comes in. But when something does happen it is catastrophic; that’s why we like watching aircraft investigations, and that’s severity.

Those two are just oversimplifications of what I mean here. These definitions also need to be grounded in context: context such as where you’re deploying these systems, say education or healthcare, some of the many areas that have been discussed in panels and discussions across different halls here. I’m going to keep it brief. But these risks go beyond hype. There are real challenges and real costs that everyone has to pay when such systems are deployed at scale without taking risks into account. The slide is a bit cut off, but one example is from Air Canada, and there are many such examples. These are examples of real people suffering loss of life, loss of liberty, loss of money and property because of AI safety risks.
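The likelihood-times-severity framing can be sketched as a toy scoring function. The numbers below are invented for illustration and are not from the panel or the ASTRA work:

```python
# Toy risk scoring: risk of an undesirable outcome characterised by
# its likelihood and its severity. All numbers are made up.

def risk_score(likelihood: float, severity: float) -> float:
    """Expected-harm style score: likelihood (0..1) times severity (0..10)."""
    return likelihood * severity

# The airplane example: a crash is extremely severe but extremely unlikely.
plane_crash = risk_score(likelihood=1e-7, severity=10.0)

# A chatbot misstating a refund policy (the Air Canada kind of case):
# far less severe per incident, but it happens to many users.
chatbot_error = risk_score(likelihood=0.05, severity=2.0)

print(plane_crash, chatbot_error)
```

On this toy scale the frequent low-severity failure dominates the rare catastrophic one, which is why both dimensions have to be assessed together.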

So we have taken a life-cycle view of AI safety risks and tried to create a taxonomy. It’s a comprehensive taxonomy of 37 different kinds of risks. We launched it earlier today and it’s now available online. I’m just going to give you a brief overview of the kind of work we have done towards that. Here are some examples of what is or is not a risk under our definition. What is not a risk is physical destruction of infrastructure. It is an AI-related risk, but we are not talking about that; our scope is deliberately limited. There are many global frameworks that talk about these kinds of things: some coming from Singapore in Asia, some from Europe, some from the US.

But they do not take into account the main challenges that we see in India. India has scale, India has linguistic diversity, but India also has certain problems like low network connectivity. If, for example, you are deploying AI in a space which is safety-critical, and you lose network while someone’s life depends on it, that is a kind of challenge that has to be uniquely defined for India. We see that many of these challenges are not covered in international repositories and risk databases like these. What they have is what we call contextual blindness: they do not recognize the social and socio-technological challenges.

India, again, as you know, deploys large amounts of technology. We have larger technology

Anirban

systems than any country in the world. UPI, EVM, Aadhaar are just simple examples of that. The safety risk database that we have launched is in partnership with the AICSTEP Foundation, and it’s called ASTRA: AI Safety, Trust, and Risk Assessments. We’ve tried to create a fun acronym that is easy to remember. ASTRA is now formally launched. Some of us worked on it; Ananya, who’s in the audience, is also one of the contributors. It is a seven-step process. And maybe, Anirban, you could just quickly walk… through this process and how ASTRA was built. Yeah, hi everyone. So both Devayan and Alok did a good job summarizing the overall work. So these are a bit of technical details.

I’ll probably skip most of it. What there is to understand is that, if you think about it simply, it’s basically a database of risks, right? But they are heavily contextualized in the Indian context. A one-formula-fits-all kind of narrative does not work in AI safety. This is our claim, and it is in line with many prominent researchers. So what we started with was resource identification, and here is where our work differs from many of the global frameworks that people have built. When it comes to resource identification, we had to actually do bottom-up research on how and where exactly these risks occur in the Indian context.

We have primarily education and financial lending as of now, but we started an exhaustive study of how exactly these risks manifest across sectors. And the final step of this is a comprehensive risk taxonomy and ontology. Taxonomy is basically categories and subcategories of risks, which you will find in many global frameworks, but what is in our database is an illustrative set of use cases. You have a risk use case which you can go and click on: if you are in the financial lending sector, you can click and see what kind of risk has happened in the Indian context, exactly related to our language, our caste, whatever kind of variables we care about.

So these are some of the basic steps through which we have worked on building ASTRA. There are two parts to it, very briefly. One is the causal taxonomy: we also tell you, through this database, at which stage the risk occurred. It can occur during development. For example, bias in AI, which we all know about, happens partly because of biased training data; that is one of the sources, so it happens during development. Then deployment: let’s say you take an AI system which was built in the US and you deploy it in an Indian setup where most of the people speak Marathi; this is a deployment problem, it manifests in deployment. And usage: I take an AI system that was never meant to disseminate disinformation, but I do that as the user, I actually manipulate it; that’s usage. Then there are stakeholders: is the AI system primarily responsible for the error or risk, or did it happen because of a deliberate end-user action? The database also tells you whether the risk is intentional or unintentional. Again, in no way do we claim that this database is exhaustive or foolproof; we are currently advancing it and expanding it to other sectors. But the target is to also tell you about these granularities around risk.
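The granularities just described, the stage where a risk manifests, the responsible stakeholder, and the intent behind it, could be represented as a simple record type. This is my own illustrative sketch, not ASTRA's actual schema, and the example entries paraphrase the cases given in the talk:

```python
# Illustrative risk record with the dimensions described in the talk.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    stage: str        # "development" | "deployment" | "usage"
    stakeholder: str  # "ai_system" | "end_user"
    intentional: bool

catalogue = [
    RiskEntry("biased training data", "development", "ai_system", False),
    RiskEntry("US-built model deployed for Marathi speakers",
              "deployment", "ai_system", False),
    RiskEntry("user manipulates model to spread disinformation",
              "usage", "end_user", True),
]

# Browsing the catalogue the way the database's use cases can be filtered:
deployment_risks = [r.name for r in catalogue if r.stage == "deployment"]
print(deployment_risks)
```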

Because risk is not just one term, like Alok and Devayan explained, right? You also have to look at what the intent behind it is. So there are two main categories of risk, and this is the part that we struggled with the most. By the way, ASTRA is currently available on arXiv; you can go and read the paper, and you can also take a look at the database, whose link is in that paper. This work took us almost six months. And again, Ananya, if you could wave: Ananya is a primary contributor to this risk database. So, after looking at the type of risk, we categorized. There are social risks, which are easily quantifiable, which you can easily observe.

For example, linguistic bias: an AI system trained in English does not answer Hindi queries that well. This is a typical risk which comes under social. Frontier risks, by contrast, are risks which are very, very difficult to observe. These are risks that we know could occur. Tomorrow AI could replace jobs, we all know about it, but how do you quantify it? Many of these risks haven’t even occurred in the Indian context; you know about them only from some remote Western example you could translate over. There’s a gut feeling that it might go wrong, but we can’t quantify them very easily. These are the kinds of risks which come under frontier. There are some examples here; I’m not going into the details in the interest of time, but under social there are bias, exclusion and toxicity risk categories, and under frontier you have mostly things like power-seeking, an AI system going rogue. I’ll just quickly cite an example. I’m not naming the firm, but there’s this news about a trading firm which applied an AI system to do quick trading according to market variables. The AI system performed very well initially, and then, without the consent of the firm, and because they were not monitoring it properly, it went rogue: it started doing transactions which were extremely lossy, and not just that, it started doing so in very high volume. This is a typical example of power-seeking. Now, in India, there might be some examples abound, but then do you really know whether this kind of risk can be easily quantified?

We don’t know what will happen; we’ll probably deploy and we’ll have to watch. So those are the kinds of risks that we have listed under frontier risk. One quick example is also human-computer interaction. We all know, and sorry, there’s a student sitting here, but I’m going to say this: in most universities students are using AI, and we know that this leads to cognitive decline and lack of critical thinking. But again, how do you quantify it? It’s very difficult. So these are frontier risks. I’m not going into the details of this. You all know about caste bias and linguistic bias of AI systems; hallucination we all know about.

Incorrect outputs by AI, and then infrastructure exclusion. This is one critical example, and it came up from a discussion with the XTREP team. Let’s say there’s an AI system that you deploy and a farmer is trying to use it. In many regions of India there are connectivity issues, so the entire app just keeps loading and buffering. It doesn’t work. This is a typical example of infrastructure exclusion. So again, remember, the stage of error manifestation is deployment. ChatGPT, or OpenAI for that matter, will not care about this; it’s not their job, it’s our job. When we are deploying it in context, it’s our job to take into consideration that our connectivity might be poor.
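The buffering-app failure mode described here suggests an obvious engineering pattern: time-box the network call and fall back to something local rather than leaving the farmer staring at a spinner. A hedged sketch, with entirely hypothetical functions (`remote_model` stands in for any cloud-hosted model, `local_fallback` for a cached or on-device answer):

```python
# Designing against infrastructure exclusion: degrade gracefully
# instead of buffering forever when connectivity drops.

def remote_model(query: str) -> str:
    # Hypothetical cloud call; here we simulate a dead network.
    raise TimeoutError("no connectivity")

def local_fallback(query: str) -> str:
    # Hypothetical cached or on-device answer.
    return f"[offline] cached advice for: {query}"

def answer(query: str) -> str:
    try:
        return remote_model(query)
    except TimeoutError:
        return local_fallback(query)

print(answer("when should I irrigate?"))
```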

So these are some examples of social risks. One reason why these social risks manifest more and more: as models get bigger and bigger, they have more persuasive power, so they can manipulate you. Frontier risks I already spoke about. I’ll quickly move on to mitigation. The one quick point I want to make about mitigation is that it’s an extremely challenging task. While the database is the first step, as per our AI safety risk framework ASTRA, mitigation, as Devayan adequately pointed out, is the hardest task that we have at hand. These mitigation measures are often not effective, they are very context-specific, and there are certain kinds of mitigation measures that also lead to loss of utility.

So we have to be super careful about that. You put a very strong mitigation measure in place, but then that leads to a lack of utility on the user’s front; that is not a very good mitigation measure, contextually speaking. Going forward, what we want to carry out of this work is to empirically ground these risks: what is the probability of these risks, really? And finally, we are also trying to include more and more domains. Currently it covers education and financial lending; we want to expand it very soon to agriculture and many more.

S38
Are AI safety institutes shaping the future of trustworthy AI? — As AI advances at an extraordinary pace, governments worldwide are implementing measures to manage associated opportunit…
S39
Building the Next Wave of AI_ Responsible Frameworks & Standards — The Moderator argues that India operates in contexts that most of the developing world shares – multilingual populations…
S40
OPENING SESSION | IGF 2023 — Luciano Mazza:Well, thank you. Thank you very much. I think, first of all, I think when… main things we must realize a…
S41
https://app.faicon.ai/ai-impact-summit-2026/building-trustworthy-ai-foundations-and-practical-pathways — Now we are saying, no need. I will just give the problem description. This general purpose. Software is just good. going…
S42
Day 0 Event #183 What Mature Organizations Do Differently for AI Success — Dr. Alomair presented a timeline of AI development from 1950 to the present. She emphasized key milestones such as Alan …
S43
Folding Science / DAVOS 2025 — Demis Hassabis: Well, the reason that we and my co-founder, Shane Legge, our chief scientist, are coining the term art…
S44
Open Internet Inclusive AI Unlocking Innovation for All — Anandan acknowledged the economic reality that makes open-source challenging: “if you invest a trillion dollars, you can…
S45
Comprehensive Summary: World Economic Forum Discussion on Stablecoins — The ‘new physics of money’ with near-zero marginal costs may require different approaches to monetary policy and risk ma…
S46
RESEARCH PAPERS — One result of the software development that accompanied the digitization revolution is the explosive growth…
S47
Embedding Human Rights in AI Standards: From Principles to Practice — Speakers presented several concrete examples of work already underway:
S48
Under the Hood: Approaches to Algorithmic Transparency | IGF 2023 — Consideration of a people’s search history for generating results. Different signals such as the freshness or location …
S49
© 2019, United Nations — In 2018, the landmark of half (51.2 per cent) the global population using Internet was reached, with 3.9 …
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Alok
4 arguments · 207 words per minute · 2475 words · 715 seconds
Argument 1
General‑purpose software will replace many specialised applications, mirroring the historic leap from single‑purpose machines to universal computers (Alok)
EXPLANATION
Alok argues that just as early computers evolved from single‑purpose machines to general‑purpose hardware, AI is now creating general‑purpose software that can perform the functions of many separate applications. This shift means that distinct programs like Excel and PowerPoint could be subsumed by a single intelligent system.
EVIDENCE
He describes the historical progression from machines that performed only one task (e.g., a hammer, a car) to the invention of universal computers, and then explains that today we are moving from general hardware to general software that can replace separate applications such as PowerPoint and Excel [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S5 and S1 describe the shift from separate apps like PowerPoint and Excel to a single AI‑driven general‑purpose software, highlighting the historical parallel with early universal computers.
MAJOR DISCUSSION POINT
Transition from specialized apps to unified AI-driven software
Argument 2
This transition promises a revolutionary change in how tasks are performed, akin to the impact of the original general‑purpose hardware (Alok)
EXPLANATION
Alok claims that the emergence of general‑purpose software will trigger a massive societal and economic revolution, comparable to the transformation caused by the first universal computers. He foresees profound changes in how work is done across all sectors.
EVIDENCE
He points to the historical “massive change” that followed the invention of general-purpose machines and suggests that a similar, even larger, revolution will occur once AI can handle any software task [45-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 (and S5) argue that the move to general‑purpose software represents a revolutionary transformation comparable to the original shift from single‑purpose machines to general‑purpose computers.
MAJOR DISCUSSION POINT
Revolutionary impact of AI‑driven general software
Argument 3
Ad‑supported websites will lose traffic because users obtain answers directly from AI, undermining the current ad‑revenue model (Alok)
EXPLANATION
Alok warns that AI assistants like ChatGPT and Gemini will answer user queries that previously drove traffic to ad‑supported websites, causing a sharp decline in page views and ad revenue. This threatens the business model of many content sites.
EVIDENCE
He explains how a recipe website currently earns money through ads, but users will increasingly ask AI directly for answers, bypassing the site entirely, leading to a drop in click-through rates from one-in-six to one-in-seven and causing many sites to die [82-90][91-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1, S5 and S9 report a dramatic drop in website click‑through rates and traffic as users turn to AI assistants for answers, threatening the advertising‑based revenue model.
MAJOR DISCUSSION POINT
AI eroding ad‑based web revenue
Argument 4
AI can generate creative outputs (e.g., movies, novels) at low cost, threatening traditional creative industries and associated employment (Alok)
EXPLANATION
Alok suggests that generative AI will be able to produce movies, novels, and other creative works cheaply, which could render traditional creative professions obsolete and destabilize related economic sectors. He cites recent industry reactions as evidence of this threat.
EVIDENCE
He mentions the recent controversy around a C-dance incident, the movie industry’s worries, and the possibility of instantly generating a full-length film by prompting an AI, illustrating how AI could replace human creators [73-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 notes the looming collapse of the novel‑writing economy and movie‑industry concerns; S10 discusses Hollywood’s debate over AI‑generated video; S11 cites a Cambridge study showing creative professionals fear job loss due to AI.
MAJOR DISCUSSION POINT
AI disrupting creative industries
D
Devayan
2 arguments · 202 words per minute · 930 words · 276 seconds
Argument 1
Risk is characterised by two dimensions—likelihood of occurrence and severity of impact—and must be evaluated within the specific deployment context (Devayan)
EXPLANATION
Devayan defines AI risk as a combination of how likely an undesirable event is and how severe its consequences would be. He stresses that these dimensions must be assessed relative to the particular context in which the AI system is used.
EVIDENCE
He presents a definition that risk consists of likelihood and severity, illustrated with an airplane safety example that shows how probability and impact together shape risk assessment [175-185].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S12 outlines risk assessment as a combination of likelihood and severity, reinforcing this two‑dimensional definition.
MAJOR DISCUSSION POINT
Two‑dimensional risk definition
AGREED WITH
Anirban
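The two-dimensional risk definition above can be sketched as a simple scoring function. This is an illustrative example only, not a method presented in the session; the ordinal scales and names are hypothetical.

```python
# Illustrative sketch (not from the session): risk as a combination of
# likelihood and severity, evaluated per deployment context.
# The scales and labels below are hypothetical assumptions.

LIKELIHOOD = {"rare": 1, "possible": 2, "likely": 3}
SEVERITY = {"minor": 1, "serious": 2, "critical": 3}

def risk_score(likelihood: str, severity: str) -> int:
    """Two-dimensional risk: higher score = higher priority."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# The same failure mode can rank differently in different contexts:
# a misread query may be "minor" in a recipe app but "critical"
# in a medical triage assistant.
print(risk_score("possible", "minor"))     # recipe app: 2
print(risk_score("possible", "critical"))  # medical triage: 6
```

The point of the sketch is the last two lines: identical likelihood, different severity, so the same underlying error yields a different risk ranking depending on where the system is deployed.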
Argument 2
Global risk frameworks often overlook India‑specific factors such as linguistic diversity and unreliable connectivity, leading to “contextual blindness” (Devayan)
EXPLANATION
Devayan argues that many international AI risk frameworks fail to consider India’s unique challenges, like many languages and frequent network outages, resulting in a blind spot that could cause unaddressed hazards. He calls for risk assessments that are tailored to local conditions.
EVIDENCE
He notes that existing global frameworks from Singapore, Europe, and the US do not capture India’s scale, linguistic diversity, and connectivity problems, describing this omission as “contextual blindness” [201-209].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 highlights India’s linguistic diversity and connectivity challenges and points out that many global frameworks fail to address these issues.
MAJOR DISCUSSION POINT
Need for India‑specific risk frameworks
AGREED WITH
Anirban
DISAGREED WITH
Anirban
A
Anirban
4 arguments · 196 words per minute · 1615 words · 492 seconds
Argument 1
ASTRA offers a bottom‑up, sector‑specific taxonomy that maps risks to development, deployment, and usage stages, and captures intent (intentional vs. unintentional) (Anirban)
EXPLANATION
Anirban explains that the ASTRA database was built through bottom‑up research focused on Indian sectors, categorising risks by the phase in which they appear (development, deployment, usage) and whether they are intentional or accidental. This structure aims to make risk identification more granular and actionable.
EVIDENCE
He describes ASTRA as a risk database created in partnership with AICSTEP, detailing how it records the stage of risk manifestation and the intent behind it, covering development, deployment, and usage phases [213-224][225-240].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 describes the bottom‑up research approach used to build ASTRA and its stage‑wise, intent‑aware categorisation of risks.
MAJOR DISCUSSION POINT
ASTRA’s stage‑wise, intent‑aware taxonomy
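A stage- and intent-aware taxonomy like the one described can be pictured as a small structured record. The schema below is a hypothetical sketch loosely modelled on the description of ASTRA above; the field names and example entries are assumptions, not ASTRA's actual data model.

```python
# Hypothetical sketch of a stage- and intent-aware risk record,
# loosely modelled on the ASTRA taxonomy described in the session.
# Field names and example values are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class RiskEntry:
    name: str
    category: str      # "social" (observable) or "frontier" (hard to observe)
    stage: str         # "development", "deployment", or "usage"
    intentional: bool  # intentional vs. unintentional cause
    sector: str

entries = [
    RiskEntry("Linguistic bias in Hindi queries", "social", "usage", False, "education"),
    RiskEntry("Exclusion due to poor connectivity", "social", "deployment", False, "finance"),
    RiskEntry("Rogue trading agent", "frontier", "usage", False, "finance"),
]

# Recording the stage makes the taxonomy actionable: e.g. list all
# deployment-stage risks for a pre-launch review.
deployment_risks = [e.name for e in entries if e.stage == "deployment"]
print(deployment_risks)
```

Filtering by stage or intent is what makes such a database more granular than a flat risk list: each phase of the lifecycle gets its own review checklist.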
Argument 2
Risks are grouped into “social” (observable, e.g., bias, infrastructure exclusion) and “frontier” (hard‑to‑observe, e.g., power‑seeking, rogue AI) categories, with concrete Indian use‑case examples (Anirban)
EXPLANATION
Anirban outlines two broad risk families within ASTRA: social risks that are readily observable such as linguistic bias or connectivity‑related exclusion, and frontier risks that are speculative or rare, like AI systems seeking power or acting rogue. He provides Indian‑centric examples for each category.
EVIDENCE
He cites linguistic bias in Hindi queries as a social risk, infrastructure exclusion due to poor connectivity, and a trading-firm incident where an AI went rogue and caused massive losses as a frontier risk example [250-267][256-262].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines the social vs. frontier risk classification; S5 provides ASTRA’s implementation of these categories with Indian examples such as Hindi linguistic bias and a rogue‑AI trading incident.
MAJOR DISCUSSION POINT
Social vs. frontier risk classification with Indian examples
Argument 3
Mitigation measures are highly context‑dependent, often difficult to implement effectively, and can diminish system utility if overly restrictive (Anirban)
EXPLANATION
Anirban stresses that while mitigation is essential, it is challenging because solutions must fit specific contexts and may reduce the usefulness of AI systems if they are too stringent. He warns that overly strong safeguards can compromise user experience.
EVIDENCE
He notes that mitigation is “extremely challenging,” context-specific, and can lead to loss of utility when measures are too strong, emphasizing the need for balance [281-289].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 notes that mitigation is extremely challenging, context‑specific, and can reduce system utility when applied too strictly.
MAJOR DISCUSSION POINT
Complexity and trade‑offs of AI risk mitigation
Argument 4
Ongoing work aims to empirically ground risk probabilities and expand the taxonomy to additional domains such as agriculture (Anirban)
EXPLANATION
Anirban mentions that the team is working to collect empirical data to better estimate how likely each risk is, and they plan to broaden ASTRA beyond education and finance to sectors like agriculture. This effort seeks to improve the accuracy and coverage of the risk framework.
EVIDENCE
He states that the project will empirically ground risk probabilities and that the taxonomy will soon include agriculture and other domains, extending beyond the current focus on education and financial lending [290-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S5 mentions the goal of empirically grounding risk probabilities; S1 states plans to extend the ASTRA database to sectors like agriculture.
MAJOR DISCUSSION POINT
Empirical risk quantification and sector expansion
Agreements
Agreement Points
Risk should be understood as a combination of likelihood and severity, and must be evaluated in the specific deployment context.
Speakers: Devayan, Anirban
Risk is characterised by two dimensions—likelihood of occurrence and severity of impact—and must be evaluated within the specific deployment context (Devayan)
ASTRA records the stage of risk manifestation and intent, providing a contextualised taxonomy for Indian sectors (Anirban)
Both speakers stress that AI risk is not abstract; it is defined by how probable an undesirable event is and how severe its consequences are, and that this assessment must be grounded in the concrete context where the system is used [175-185][213-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Risk-assessment frameworks traditionally separate likelihood and severity, as highlighted in risk-management literature and echoed in AI discussions that stress these dimensions [S27]; recent regulator dialogues also stress context-specific evaluation rather than blanket rules [S20].
Existing global AI risk frameworks overlook India‑specific factors such as linguistic diversity and unreliable connectivity, creating a need for a locally‑tailored approach.
Speakers: Devayan, Anirban
Global risk frameworks often overlook India‑specific factors such as linguistic diversity and unreliable connectivity, leading to “contextual blindness” (Devayan)
ASTRA is a bottom‑up, sector‑specific risk database built for the Indian context, addressing those blind spots (Anirban)
Both highlight that international AI risk assessments miss critical Indian realities (e.g., many languages, network outages) and therefore a home-grown, bottom-up taxonomy like ASTRA is required [201-209][213-240].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple reports note that global AI guidelines often miss Indian realities-linguistic plurality and patchy connectivity-calling for locally-crafted taxonomies [S23]; the Global South perspective further argues for context-specific benchmarks and taxonomies [S24][S31].
Mitigation of AI risks is extremely challenging, highly context‑specific, and can reduce system utility if applied too stringently.
Speakers: Alok, Anirban
We have to be exceedingly careful about that… mitigation measures are often not effective and can lead to loss of utility (Alok)
Mitigation is an extremely challenging task, context‑specific, and can diminish utility when overly restrictive (Anirban)
Both agree that while mitigation is essential, designing effective safeguards is difficult because solutions must fit particular contexts and may compromise the usefulness of AI systems [108-110][121-130][281-289].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on trustworthy AI emphasizes that mitigation measures are highly context-dependent and can erode system utility when overly strict, underscoring the trade-off challenge [S21]; regulators similarly favour targeted, proportionate interventions [S20].
Similar Viewpoints
Both speakers emphasize that AI systems must be aligned with user expectations and that their correctness and trustworthiness are central to managing risk [110-148][149-150].
Speakers: Alok, Devayan
Alok discusses the need for correctness, trustworthiness and alignment of AI systems (Alok)
Devayan defines risk and stresses the importance of alignment with expectations (Devayan)
Both recognize that AI introduces profound, potentially disruptive risks that could reshape economies and societies, requiring careful attention [45-48][256-262].
Speakers: Alok, Anirban
Alok warns that AI can cause large‑scale economic disruption (Alok)
Anirban notes that frontier risks such as power‑seeking AI could have severe societal impacts (Anirban)
Unexpected Consensus
Discussion of speculative ‘frontier’ risks like power‑seeking or rogue AI systems.
Speakers: Alok, Anirban
Alok mentions genies and monkeys, and the danger of AI fulfilling harmful wishes (Alok)
Anirban classifies power‑seeking AI that goes rogue as a frontier risk with Indian examples (Anirban)
Although Alok’s remarks are more philosophical and Anirban’s are technical, both converge on the notion that AI could develop autonomous, harmful ambitions—a point that was not obvious given their different focal areas [144-146][256-262].
POLICY CONTEXT (KNOWLEDGE BASE)
While some forums prioritize immediate harms, others highlight speculative frontier risks such as rogue or power-seeking AI, noting a gap between current policy focus and these long-term concerns [S28]; the military AI discourse also points to regulatory voids around high-impact future threats [S29].
Overall Assessment

The speakers show considerable convergence on the nature of AI risk: it must be defined by likelihood and severity, contextualised to Indian realities, and mitigated with great care. They also share concern that AI’s transformative power brings both revolutionary opportunities and serious frontier threats.

High consensus on risk definition, contextual needs, and mitigation challenges, indicating a shared understanding that responsible AI deployment in India requires locally‑tailored frameworks and cautious mitigation strategies.

Differences
Different Viewpoints
Definition and dimensions of AI risk
Speakers: Devayan, Anirban
Risk is characterised by two dimensions—likelihood of occurrence and severity of impact (Devayan)
Risks also need to capture intent (intentional vs unintentional) and the stage of manifestation (development, deployment, usage) (Anirban)
Devayan defines risk narrowly as a combination of likelihood and severity [175-179][184-185], while Anirban expands the definition to include the intent behind the risk and the lifecycle stage at which it appears, arguing for a richer, multi-dimensional taxonomy [239-242]. This reflects a disagreement on what elements are essential for a risk definition.
Adequacy of existing global risk frameworks versus a bottom‑up India‑specific taxonomy
Speakers: Devayan, Anirban
Global risk frameworks often overlook India‑specific factors such as linguistic diversity and unreliable connectivity, leading to “contextual blindness” (Devayan)
ASTRA is a limited, non‑exhaustive database that focuses on Indian sectors but acknowledges its scope is narrow and still being built (Anirban)
Devayan criticises international frameworks for missing key Indian challenges like language diversity and network outages [201-209], whereas Anirban presents ASTRA as a home-grown solution but concedes that its current scope is limited and not exhaustive [198-200]. The tension lies in whether a bespoke taxonomy can fully replace or merely supplement global standards.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses of AI governance argue that existing global frameworks lack granularity for Indian contexts, prompting calls for bottom-up taxonomies that reflect local linguistic and connectivity conditions [S23]; broader Global South scholarship advocates building country-specific benchmarks rather than importing generic standards [S24][S31].
Severity of AI‑driven economic disruption versus focus on mitigation
Speakers: Alok, Anirban
AI will cause massive collapse of ad‑supported websites, web‑design firms, creative industries and the broader digital economy (Alok)
Mitigation of AI risks is extremely challenging, context‑specific, and can reduce system utility if too strong (Anirban)
Alok predicts a rapid erosion of ad-based revenue models and the disappearance of whole sectors such as web design and movie production as users turn to AI for answers [82-99][60-62], while Anirban stresses that while mitigation is necessary, it is fraught with trade-offs and may not prevent the broader economic shifts [281-289]. The disagreement centers on how imminent and irreversible the economic impact will be.
Feasibility of a single general‑purpose software replacing specialised applications
Speakers: Alok, Anirban
General‑purpose software will subsume separate apps like Excel and PowerPoint, mirroring the historic shift from single‑purpose machines to universal computers (Alok)
Risks remain sector‑specific (e.g., linguistic bias, infrastructure exclusion) and require tailored mitigation, suggesting a continued need for specialised solutions (Anirban)
Alok envisions a future where one AI-driven software can perform all tasks previously handled by distinct applications [41-44], whereas Anirban’s taxonomy highlights distinct social risks tied to particular sectors such as language bias in Hindi queries and connectivity-related exclusion [250-254][258-262], implying that specialised contexts will persist. This unexpected clash questions the practicality of a truly universal software layer.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on the promise of general-purpose AI versus domain-specific tools cite the historical shift from dedicated hardware to universal machines, questioning whether a single software layer can match specialised performance [S30]; sector-specific risk studies also warn that one-size-fits-all solutions may miss nuanced hazards [S25][S26].
Unexpected Differences
Universal AI software versus sector‑specific risk realities
Speakers: Alok, Anirban
Alok claims a single AI system will replace many specialised applications (Alok)
Anirban highlights sector‑specific social risks that require distinct handling (Anirban)
Alok’s sweeping vision of one software doing everything [41-44] contrasts with Anirban’s emphasis on concrete, sector-bound risks such as linguistic bias and connectivity exclusion [250-254][258-262], an unexpected tension between a monolithic software future and the practical need for specialised risk mitigation.
POLICY CONTEXT (KNOWLEDGE BASE)
Sector-focused risk assessments in healthcare and retail illustrate that universal AI solutions often overlook domain-specific safety and equity concerns, supporting the argument for sector-tailored governance [S25][S26]; this aligns with calls for differentiated regulatory approaches across industries [S26].
Overall Assessment

The discussion reveals three main fault lines: (1) how to define AI risk (simple likelihood‑severity vs multi‑dimensional taxonomy), (2) the adequacy of global versus India‑specific frameworks, and (3) the scale of AI‑driven economic disruption versus the feasibility of mitigation. While all participants share a common concern for AI safety, they diverge on definitions, methodological approaches, and the expected magnitude of impact.

Moderate to high. The disagreements are substantive—affecting how risk is conceptualised, how policy should be crafted, and how quickly the economy may be reshaped. These divergences could impede coordinated policy action unless a common framework is negotiated, highlighting the need for interdisciplinary dialogue that bridges technical taxonomy work with broader economic and societal forecasts.

Partial Agreements
Devayan outlines a two‑dimensional risk definition and calls for context‑aware assessment [170-176][186-189], while Anirban presents ASTRA as a concrete, India‑focused risk database that maps risks to stages and intent [213-224][225-240]. They share the goal of robust risk management but differ on the methodological path—definition versus taxonomy implementation.
Speakers: Devayan, Anirban
Both aim to improve AI safety by defining, cataloguing and managing risks
Both stress the importance of contextual (Indian) considerations in risk assessment
Devayan points out the ‘contextual blindness’ of international frameworks for India [201-209], while Anirban describes the bottom‑up research that underpins ASTRA’s Indian focus [229-236]. They agree on the necessity of an India‑centric approach but diverge on whether a new taxonomy alone suffices.
Speakers: Devayan, Anirban
Both recognize the need for India‑specific risk frameworks
Devayan critiques existing global frameworks; Anirban builds a bottom‑up Indian taxonomy
Takeaways
Key takeaways
AI is moving from general‑purpose hardware to general‑purpose software, enabling a single system to replace many specialised applications.
This shift is expected to cause massive economic disruption, e.g., ad‑driven websites losing traffic and creative industries (movies, novels) being undercut by AI‑generated content.
Risk must be defined by two dimensions – likelihood and severity – and evaluated in the specific deployment context.
Global AI risk frameworks often miss India‑specific challenges such as linguistic diversity and unreliable connectivity, leading to “contextual blindness.”
ASTRA, an Indian‑centric AI safety risk taxonomy and database, was launched to capture sector‑specific risks, map them to development/deployment/usage stages, and distinguish intentional vs. unintentional causes.
Risks are categorised as “social” (observable, e.g., bias, infrastructure exclusion) and “frontier” (hard‑to‑observe, e.g., power‑seeking, rogue AI), with concrete Indian use‑case examples.
Mitigation is highly context‑dependent; strong safeguards can reduce utility, so trade‑offs must be carefully managed.
Future work includes empirically estimating risk probabilities and expanding the taxonomy beyond education and finance to domains like agriculture.
Resolutions and action items
Launch of the ASTRA AI safety risk database (in partnership with AICSTEP Foundation).
Commitment to expand ASTRA to additional sectors, starting with agriculture and other high‑impact domains.
Plan to empirically ground risk probabilities for the identified risk categories.
Ongoing effort to refine mitigation strategies that balance safety with system utility.
Unresolved issues
Effective mitigation techniques that do not overly diminish utility remain undefined.
Precise quantification of risk likelihood and severity for many frontier risks is still lacking.
How to redesign business models for ad‑supported content platforms in an AI‑answer‑centric web.
Mechanisms to ensure AI alignment with user expectations without causing harmful outcomes.
Strategies to address infrastructure exclusion (e.g., poor connectivity) in AI deployments.
Long‑term impact on employment in creative and technical sectors and possible policy responses.
Suggested compromises
Adopt mitigation measures that are context‑specific and calibrated to avoid excessive loss of utility.
Balance the push for general‑purpose software with safeguards that protect existing economic ecosystems (e.g., a gradual transition for web‑based ad revenue).
Thought Provoking Comments
Why don’t you need two separate computers, one to run Excel and one to run PowerPoint? How come both run on the same machine? This wasn’t obvious; early machines were single‑purpose, and the idea of a general‑purpose computer was revolutionary.
It reframes the historical narrative of computing, highlighting the leap from specialized hardware to a universal platform, which underpins the later argument about “general software” as the next paradigm shift.
Sets up the analogy that just as general hardware enabled a software boom, general AI software could trigger a comparable transformation. It leads Alok to discuss the economic and societal consequences of such a shift.
Speaker: Alok
Now for the first time, instead of just having general hardware, we have general software – a single AI system that can replace both PowerPoint and Excel by understanding a natural‑language instruction.
Introduces the core concept of AI as “general software,” moving the conversation from hardware to software universality, and raises the stakes of what AI could replace.
Triggers a cascade of examples (software companies disappearing, web‑design industry collapse, ad‑driven web model erosion) that broaden the discussion from technical possibility to economic disruption.
Speaker: Alok
The entire economics of software companies will vanish because once a general AI exists, you no longer need to invest heavily in each product; the burn rate after the first build is minimal.
Makes a bold, concrete prediction about the future of an entire industry, challenging the audience to reconsider business models and investment strategies.
Shifts the tone from speculative to urgent, prompting listeners to think about real‑world implications and setting the stage for concerns about job loss and market collapse.
Speaker: Alok
A large portion of the internet is collapsing because ad‑driven sites will be bypassed by AI assistants that answer queries directly, eliminating traffic to those sites.
Provides a vivid, immediate illustration of how AI could disrupt existing content ecosystems, grounding abstract ideas in a tangible scenario.
Leads to a concrete discussion of the ripple effects on open‑source tools (e.g., Tailwind) and reinforces the urgency of addressing AI‑driven economic shifts.
Speaker: Alok
We built programming languages to disambiguate human language because natural language is ambiguous; now we are abandoning that safety net by asking AI to understand free‑form instructions, which is "deadly".
Highlights a fundamental safety and alignment issue: the trade‑off between usability and precision, raising a red flag about potential misinterpretation and harmful outcomes.
Acts as a pivot toward the risk and alignment theme, prompting Devayan to ask about alignment and risk definition, and paving the way for the subsequent risk‑taxonomy discussion.
Speaker: Alok
What is alignment? How do we define the risk of an AI system doing something bad, and can we quantify that risk?
Transitions the conversation from Alok’s broad, visionary narrative to a focused inquiry on safety, framing the problem in terms of measurable risk and alignment.
Serves as a turning point that shifts the dialogue from speculative impact to concrete methodological concerns, leading directly to Anirban’s presentation of the ASTRA framework.
Speaker: Devayan
One formula fits all does not work in AI safety; we need a contextualized risk taxonomy for India that captures linguistic, caste, connectivity, and other local factors.
Introduces the concept of contextual blindness in global AI risk frameworks and proposes a localized, granular approach, expanding the scope of the discussion to socio‑technical specificity.
Broadens the conversation from abstract risk definitions to actionable, region‑specific solutions, and validates Devayan’s earlier concerns about defining and measuring risk.
Speaker: Anirban
Frontier risks—such as power‑seeking AI, cognitive decline from over‑reliance, and infrastructure exclusion—are hard to observe and quantify, yet they may have the highest impact.
Identifies a class of risks that are speculative but potentially catastrophic, pushing the discussion beyond immediate, observable harms to long‑term existential considerations.
Deepens the analysis by adding layers of uncertainty and urgency, prompting participants to think about mitigation strategies that balance safety with utility.
Speaker: Anirban
Overall Assessment

The discussion evolved from Alok’s sweeping historical analogy and dramatic forecasts of AI‑driven economic upheaval to a more grounded examination of safety and risk. Key comments—especially Alok’s framing of “general software,” his concrete examples of industry collapse, and his warning about language ambiguity—set the stage for Devayan’s probing question on alignment, which acted as a turning point toward a systematic treatment of risk. Anirban’s introduction of the ASTRA taxonomy and the emphasis on contextual, frontier risks redirected the conversation toward actionable, region‑specific solutions. Together, these pivotal remarks shifted the dialogue from speculative excitement to a nuanced, risk‑aware perspective, shaping the overall narrative from visionary possibilities to responsible implementation.

Follow-up Questions
How can we clearly define risk in the context of AI systems?
Devayan explicitly asked whether there is a clear way to define risk, highlighting the need for a concrete definition to guide safety work.
Speaker: Devayan
How can we quantify the likelihood and severity of AI risks to make them actionable?
Both speakers discussed the difficulty of measuring risk probability and impact, indicating a need for methods to operationalize the likelihood‑severity framework.
Speaker: Devayan, Anirban
What methods can be used to mitigate AI risks effectively without sacrificing utility?
Anirban emphasized that mitigation is extremely challenging, often context‑specific, and can reduce system utility, calling for research into balanced mitigation strategies.
Speaker: Anirban
How can the ASTRA taxonomy be expanded to cover additional sectors such as agriculture and others?
Anirban mentioned plans to extend the risk database beyond education and finance, indicating a research agenda for sector‑specific risk identification.
Speaker: Anirban
How can we address contextual blindness in global AI risk frameworks to reflect Indian‑specific challenges like linguistic diversity, caste, and connectivity?
Anirban pointed out that existing international frameworks miss Indian context, suggesting a need for localized risk frameworks.
Speaker: Anirban
What strategies can mitigate infrastructure exclusion caused by poor connectivity in AI deployments?
Anirban gave the example of a farmer’s app failing due to network issues, highlighting a research gap in designing resilient AI services for low‑connectivity environments.
Speaker: Anirban
How can frontier risks such as power‑seeking or rogue AI behavior be identified, quantified, and monitored?
Frontier risks are described as hard to observe and quantify, prompting the need for detection and measurement techniques.
Speaker: Anirban
What is the impact of AI usage on cognitive decline and critical thinking among students?
Anirban raised concerns about AI‑assisted learning leading to reduced critical thinking, indicating a research area in educational outcomes.
Speaker: Anirban
How can AI alignment be ensured so that systems satisfy user expectations without causing unintended harmful outcomes?
Both speakers discussed alignment and the danger of systems fulfilling literal queries in harmful ways, calling for research on alignment mechanisms.
Speaker: Alok, Devayan
Why can a single computer run multiple applications like Excel and PowerPoint, and what historical developments enabled this?
Alok posed a rhetorical question about the evolution from specialized hardware to general‑purpose machines, suggesting a historical/technical investigation.
Speaker: Alok
What are the broader economic consequences of general AI software on industries such as web design, novel writing, film production, and ad‑based websites?
Alok described how AI could collapse existing business models, indicating a need for economic impact studies.
Speaker: Alok
How will AI‑generated content affect ad revenue models and the viability of content‑driven websites?
Alok highlighted that users may bypass websites in favor of AI answers, threatening ad‑based revenue streams, a topic for further research.
Speaker: Alok
How can we empirically ground the probability of AI risks in real deployments?
Anirban stated the goal of empirically grounding risk probabilities, pointing to a need for data collection and statistical analysis of AI incidents.
Speaker: Anirban

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw


Session at a glance

Summary

Kiran Mazumdar-Shaw opened the Impact AI Summit by emphasizing that the coming decades will be defined by “biotech sovereignty embedded in AI” rather than digital sovereignty, and that nations mastering the convergence of biological and artificial intelligence will shape future health, food security, sustainability and biosecurity [2-4][5]. For India, this convergence is not merely an opportunity but a strategic and geopolitical imperative, linking scientific leadership to national resilience [6-8].


She defined biological intelligence as the product of 3.8 billion years of evolution, where living cells sense, compute and act through intricate signaling networks and built-in guardrails that maintain homeostasis, and illustrated this with the immune system’s ability to store pathogen information in memory cells and launch rapid, energy-efficient responses without large data centers [9-16][19-25]. The migratory precision of the Arctic tern, driven by DNA-encoded navigation, serves as another example of innate biological intelligence, contrasting with AI that learns from external data [29-33].


Mazumdar-Shaw highlighted that AI can accelerate protein-structure prediction, generative drug design, digital twins of cells, and ultimately enable reprogramming of cells to restore biological balance, while AI-driven mapping of regulatory circuits allows interventions that preserve homeostasis and shift biotech from disease management to system re-engineering [36-43][50-52]. She warned that reliance on offshore AI models for drug discovery and genomics would create strategic dependence, making sovereign control over data, models and infrastructure essential for health security [54-57].


To achieve this, she called for a “triple helix” of government investment in sovereign AI bio-infrastructure, academia development of computational-biology curricula, and industry co-creation of shared platforms and biomanufacturing clusters, noting that regulations must keep pace with rapid AI-driven timelines to avoid missed opportunities [71-77][68-70]. Ethical, transparent, energy-efficient and bias-aware AI systems rooted in public interest are presented as India’s unique model for global interoperability and social purpose [82-86]. Concluding, Mazumdar-Shaw asserted that India possesses the scientific talent, AI expertise and values to lead in biotech sovereignty, provided it builds sovereign platforms today, thereby securing health, strategic autonomy and economic resilience [90-91].


Keypoints

Major discussion points


Biotech sovereignty + AI is a strategic, geopolitical imperative for India.


Mazumdar-Shaw frames the need for “biotech sovereignty that is embedded in AI” as essential to national resilience, health security and economic competitiveness, warning that reliance on offshore AI models creates strategic dependence [3-5][6-8][54-57].


Biological intelligence is a model for AI-driven innovation.


She describes living systems as “the original intelligent machines” that learn, store, retrieve and act on information with extreme energy efficiency, using examples such as immune memory and Arctic-tern navigation to illustrate how biology processes data far beyond today’s data-center capabilities [9-15][19-25][29-34][36-38].


A full-stack AI-enabled biotech ecosystem is required, spanning discovery, development, manufacturing and regulation.


The speaker outlines concrete AI applications – foundation models for proteins/RNA, in-silico trials, digital twins, smart biomanufacturing, AI-validated regulatory pathways – and stresses the need for sovereign data, computing infrastructure and translational platforms [60-68][70-77].


Triple-helix collaboration and ethical, transparent AI are essential for global leadership.


She calls for coordinated action among government, academia and industry (the “triple helix”) together with capital markets, and stresses that India’s AI-bio systems must be energy-efficient, bias-aware and interoperable, embedding equity, affordability and public-interest values [71-76][81-86].


Realised AI-bio sovereignty will deliver health, longevity and economic benefits while mitigating risks.


By re-programming cells, extending health-span, and creating AI-native discovery engines, India can shift from “managing disease” to “re-engineering biological systems,” extending its citizens’ health-span by fifty years or more and positioning the country as a global biotech platform [48-53][89-90].


Overall purpose / goal


The talk is a strategic appeal to policymakers, industry leaders, academia and investors to accelerate the creation of an Indian-owned, AI-driven biotech infrastructure. It seeks to convince the audience that building sovereign AI models, data assets and biomanufacturing capabilities is vital for national health security, economic resilience, and to claim a leadership role in the emerging convergence of biology and artificial intelligence.


Overall tone


The tone begins with enthusiastic optimism (“delighted…heralds a big signal”) and a visionary framing of AI-biotech convergence. It then moves into a more urgent, persuasive register when stressing strategic imperatives and risks of external dependence. Mid-speech the tone becomes technical and explanatory, detailing how biological intelligence works. In the latter part it shifts to a rallying, call-to-action tone, urging coordinated “triple-helix” effort and ethical stewardship, and concludes on an inspirational, confident note about India’s capacity to lead the future of humanity.


Speakers

Speaker 1 – Role/Title: Event moderator or host introducing the main speaker (appears to be an event host)[S1][S3].


Areas of expertise: (not specified)


Kiran Mazumdar-Shaw – Role/Title: Chairperson, Biocon Group[S4].


Areas of expertise: Biotechnology, healthcare innovation, AI-enabled drug discovery, biotech sovereignty, life-sciences entrepreneurship.


Additional speakers:


– None.


Full session report

Ladies and gentlemen were invited to applaud the arrival of Ms Kiran Mazumdar-Shaw, Chairperson of the Biocon Group [1][2]. She opened by expressing delight at taking part in the inaugural Impact AI Summit, signalling India’s entry onto the global AI journey.


Mazumdar-Shaw situated the summit’s theme within a geopolitical narrative. She argued that the 20th century was defined by the Internet, the early 21st century by digital sovereignty, and that the coming decades will be shaped by “biotech sovereignty embedded in AI” [4]. Nations that master the convergence of biological intelligence and artificial intelligence will dictate the future of healthcare, food security, education, biomanufacturing, sustainability, bio-security and many other domains [5]. For India this is not merely a cutting-edge opportunity; it is a strategic and geopolitical imperative that underpins national resilience and health security [6-8].


To explain “biological intelligence”, Mazumdar-Shaw described living systems as the original intelligent machines, honed over 3.8 billion years of evolution [9-11]. Cells sense, compute and respond through intricate signalling networks coupled to gene-regulatory circuits and immune memory, operating within built-in guardrails that maintain homeostasis [12-16]. When these guardrails fail, disease emerges [17-18]. She illustrated this with the immune system: cytokines, antibodies and killer T-cells constitute the body’s immunological ammunition, while memory B- and T-cells store pathogen identities and can launch rapid, energy-efficient responses upon re-exposure [19-22]. This biological information processing occurs at speeds and with energy consumption far below that of modern data-centres, relying on distributed, low-power “data centres” within the body and the brain, the largest known supercomputer [23-27].


A vivid example was the Arctic tern, a bird the size of a tennis ball that migrates ≈ 70 000 km between the poles without prior learning or guidance, relying on DNA-encoded navigational intelligence [29-33]. By contrast, conventional AI learns from external data to optimise decisions at machine scale. The true inflection point, she argued, lies at the intersection of these two forms of intelligence, where AI-powered biology can accelerate protein-structure prediction, generative drug design and the creation of digital twins of cells and organs, thereby compressing discovery timelines and reducing development risk [34-38]. She emphasized that AI by itself will not generate economic growth; the value will arise from applying AI-driven solutions in manufacturing and product delivery to create tangible economic benefits.


Building on this, Mazumdar-Shaw highlighted the next frontier: reprogramming cells to restore biological balance. She invited the audience to imagine converting cancer cells into non-malignant ones and repairing bone tissue that is currently irreparable, noting that such feats require deep understanding of cell signalling, gene regulation and immune memory-the same networks that maintain homeostasis [39-43][44-46]. She linked these ambitions to personalised CAR-T therapies, autoimmune-disease interventions that recalibrate immune tolerance rather than broadly suppress immunity, and longevity research aimed at modulating senescence, metabolic pathways of ageing and cellular repair mechanisms, which could extend human health-span by fifty years or more [47-48]. Crucially, these approaches seek to reinforce, rather than override, the innate guardrails of biology, with AI mapping regulatory circuits at scale to identify interventions that preserve homeostasis [49-52].


Mazumdar-Shaw warned that reliance on offshore foundational AI models for drug discovery, genomics, cellular engineering and clinical decision-making would create strategic dependence in the most critical domain of national resilience – human health. She defined “biotech sovereignty embedded in AI” as sovereign control over trusted biological data, indigenous AI models, computing infrastructure and end-to-end translational platforms spanning discovery, development, manufacturing and delivery [56-57]. Such sovereignty is essential not only for economic competitiveness but also for preparedness against pandemics, antimicrobial resistance and emerging bio-threats [57].


To realise this vision, she outlined a full-stack AI-enabled biotech ecosystem. In discovery, India must develop foundation models for proteins, RNA, cellular circuits and systems biology [62]. In development, opportunities exist for in-silico trials, digital twins and AI-driven trial design to de-risk pipelines and improve probability of success [63]. In manufacturing, smart biomanufacturing that uses AI for yield optimisation and quality-by-design will be a key growth area, and she called for a coordinated system of biomanufacturing matched by an equally coordinated system of biotech regulation [64]. In regulation, a science-first, tech-enabled pathway that integrates real-world evidence through AI validation is required, with regulatory speed matching accelerated innovation timelines [65-70]. Without coordinated regulation, the benefits of faster AI-driven discovery could be lost [68-70].


Recognising that industry alone cannot drive this transformation, Mazumdar-Shaw called for a “triple helix” of government, academia and industry:


* Government: invest in sovereign AI-bio infrastructure, trusted data architectures, regulatory sandboxes and mission-mode programmes in cell and gene therapy, immuno-oncology and longevity science [74];


* Academia: mainstream computational biology, neurosymbolic AI and AI-native life-science curricula to create a new cadre of translational scientists [75];


* Industry: co-create shared platforms, translational pipelines and globally benchmarked biomanufacturing clusters, while capital markets evolve to provide patient, long-cycle funding for high-risk biotech innovation [76-80].


Ethical considerations were woven throughout. Mazumdar-Shaw asserted that sovereignty does not mean isolation; India must build AI systems for biology that are ethical, transparent, energy-efficient and bias-aware, yet globally interoperable and rooted in the public interest [82-84]. By embedding equity, affordability and access into AI-driven biotech, the country can offer a model of innovation that couples technological leadership with social purpose [85-86].


Finally, she concluded that biotech sovereignty embedded in AI is not a sectoral ambition but the foundation of health security, strategic autonomy and economic resilience. Those who master the language of life augmented by the language of machines will shape humanity’s future, and India possesses the science, AI expertise, talent, scale and values to lead, provided it builds sovereign platforms today [87-90][91].


Session transcript
Speaker 1

Ladies and gentlemen, please put your hands together to welcome Ms. Kiran Mazumdar-Shaw, Chairperson, Biocon Group.

Kiran Mazumdar-Shaw

Good afternoon, and let me say how delighted I am to be a part of this wonderful summit, the Impact AI Summit that India… is launching and hosting for the first time, which I think heralds a big signal that we are part of the AI journey that the world is on. I’ve basically taken off from where the last panel talked about sovereignty, and I thought I should talk about why India must build biotech sovereignty that is embedded in AI. And let me start with this first slide that basically says that if the 20th century was defined by the Internet and the early 21st century by digital sovereignty, which was all about data being the new oil and the new fuel, the coming decades, I believe, will be…

…be shaped by… biotech sovereignty that is embedded in AI. I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biological intelligence and artificial intelligence, will define the future of healthcare, food security, education, biomanufacturing, sustainability, biosecurity, and much more. For India, this is not merely a cutting-edge opportunity. It is a strategic and geopolitical imperative. Now, let me really touch upon what I mean by biological intelligence. Living systems are the original intelligent machines. And why do I say this? Because biological intelligence has evolved and has been built over 3.8 billion years. It is different in the way it learns, memorizes, builds and processes information from multimodal signals and circuits.

Cells sense, they compute and they respond through intricate signaling networks. They also then interface with gene regulation and gene regulatory circuits and immune memory. These systems operate within inbuilt biological guardrails, which form a network of cells that are connected to each other. They focus on feedback loops and control mechanisms that maintain what we refer to as homeostasis, or health equilibrium. Disease arises when these guardrails fail. So when we talk about ethics, when we talk about governance, living systems have an inbuilt sense of guardrails and governance, which is about keeping you healthy, which is about homeostasis, which is a wonderful way of making sure that it compensates, it repairs and makes sure that you can still live in as healthy a way as possible.

And to illustrate this, let’s look at the way our immune system responds to pathogens. The immune system responds through immunological ammunition like cytokines, antibodies and killer T cells. It also memorizes the identity of the pathogen in memory T cells and B cells. And years later, when the pathogen reinvades, the memory cells rapidly retrieve this information and translate it into instant action. These are the marvels of biology in the way it receives information, processes information, stores information, retrieves information and acts. And the inference of all this information is done at speed and with energy efficiency that we can’t even imagine. We don’t need those gigawatts of data centers. We have distributed data centers that take sips of energy when it needs to use it.

Our brain, which is the biggest supercomputer known to man, does this so efficiently that we need to understand how biology works. Another great thought-provoking example of biological intelligence is the migration of the Arctic tern. This little bird, that is the size of a tennis ball, undertakes a 70,000-kilometer journey between the Arctic, the Antarctic and back with no prior knowledge, with no older bird to guide it, and yet it does it with astonishing precision and speed. How does it do it? This is about navigational intelligence embedded in its DNA. AI, by contrast, learns from data to optimize decisions at machine scale. So therefore, the true inflection point lies at their intersection. AI-powered biology…

from protein structure prediction and generative drug design to digital twins of cells and organs. AI is compressing discovery timelines and reducing development risk. And therefore, I believe that the next frontier is even more profound. The reprogramming of cells themselves to restore biological balance. But for this, we need to understand how biological intelligence operates. Imagine reprogramming cancer cells into non-malignant cells. Imagine repairing bone tissue that is damaged and irreparable. Biological intelligence is built on an intricate network of cell signaling, gene regulation and immune memory that works symbiotically, as I mentioned, to maintain homeostasis. And so, we need to understand how biological intelligence operates.

Now, if we come to what I’ve just spoken about, which is reprogramming and re-engineering, we are moving from static, one-size-fits-all drugs to programmable biology, which is the new frontier. We need to learn how biology learns, stores, retrieves and processes data in such an agile and energy-efficient way. Once we understand the computational models of living systems, we can use AI to accelerate, with predictive precision, the most advanced present-day therapies. Today we are all excited about personalized CAR-T therapies that eliminate tumors with precision, autoimmune disease interventions that recalibrate immune tolerance rather than broadly suppressing immunity, and then the most exciting part: longevity and health span.

These are areas where we must understand how senescence is modulated, how metabolic pathways of aging are created and how cellular repair mechanisms delay biological aging and restore tissue resilience. If we understand all this, as the last speaker said, we may be able to live for another 50 years and more. Crucially, these approaches seek not to overpower biology but to reinforce its inbuilt guardrails or regulatory circuits which focus on repair, feedback control and immune surveillance. AI can map these regulatory circuits at scale, enabling targeted interventions that preserve homeostasis. That is the excitement of new science led by AI, new biology led by AI. This represents a paradigm shift from managing disease to re-engineering biological systems to sustain equilibrium.

So, India’s future health security will depend on how optimally we combine the code of life and the code of intelligence. If foundational AI models for drug discovery, genomics, cellular engineering and clinical decision making are owned offshore, India risks strategic dependence in the most critical domain of national resilience, which is human health. Biotech sovereignty embedded in AI must therefore mean sovereign control over trusted biological data, indigenous AI models, computing infrastructure, and translational platforms from discovery and development to manufacturing and delivery. This is essential not only for economic competitiveness, but also for preparedness against pandemics, antimicrobial resistance, and emerging new bio-threats. Now, I really believe this is a very important aspect of what AI can do for biotech and the economy.

AI alone will not create economic opportunities, but the delivery of AI in our field through manufacturing and products will do that. India’s global role must evolve from being the pharmacy of the world, to becoming the biotech platform of the world, a nation that offers AI-native discovery engines, programmable therapy platforms and scalable biomanufacturing as global public goods. And this requires embedding AI across the biotech value chain. When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biology. When it comes to development, I think there are huge opportunities to develop in silico trials, digital twins and AI-driven trial design to really de-risk pipelines and improve the probability of success.

When it comes to manufacturing, smart biomanufacturing using AI for yield optimization and, most importantly, quality by design is going to be a great opportunity for all of us. Now, when it comes to the biotech value chain, we need to develop a system of biomanufacturing and also a system of biotech regulation. It has to be a science-first approach, tech-enabled regulatory pathways, integrating real-world evidence through AI validation. I think that’s going to be a huge opportunity which we must do right now. What is important is for regulations to keep up with technology. If we compress timelines of discovery and development to a fraction of what happens today, and if regulatory speed does not keep up with it, then we miss out on a huge opportunity.

So working in tandem, working in synchronization is the need of the hour. This transformation cannot be driven by industry alone. It demands a triple helix of government, academia and industry. Government must invest in sovereign AI bio-infrastructure, trusted data architectures, regulatory sandboxes, and mission mode programs in cell and gene therapy, immuno-oncology, and longevity science. Academia must mainstream computational biology, neurosymbolic AI, and AI-first life sciences education to build a new cadre of translational scientists. Industry must co-create shared platforms, translational pipelines, and globally benchmarked biomanufacturing clusters that convert science into scale. Capital markets must also evolve to support long-cycle, high-risk biotech innovation that is so rampant in startups in our country.

Deep science requires a lot of research and development. It requires patient capital. But the societal and economic returns from reduced disease burden to global platform leadership are exponential. Now coming to ethics, trust and global leadership. Sovereignty is not isolation. India must build ethical, transparent, energy-efficient and bias-aware AI systems for biology that are globally interoperable yet rooted in public interest. And I think this is the unique model India can create. By embedding principles of equity, affordability and access into AI-driven biotech, India can offer the world a new model of innovation combining technological leadership with social purpose. For India, biotech sovereignty embedded in AI is not a sectoral ambition. It is a foundation of health security, strategic autonomy and economic resilience.

Those who master the language of life augmented by the language of machines will shape the future of humanity. India has the science, the AI and life sciences talent, the scale and the values to lead, provided it builds the sovereign platforms of tomorrow today. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (11)
Factual Notes: Claims verified against the Diplo knowledge base (3)
Confirmed (high)

“Ms Kiran Mazumdar‑Shaw is Chairperson of the Biocon Group”

The knowledge base lists Kiran Mazumdar-Shaw as Chairperson of the Biocon Group and a pioneering biotech entrepreneur [S4] and [S33].

Confirmed (high)

“Mazumdar‑Shaw positioned the summit’s theme within a geopolitical narrative, arguing that nations that master the convergence of biological intelligence and artificial intelligence will dictate the future of healthcare, food security, education, biomanufacturing, sustainability, bio‑security and many other domains”

Sources note that Mazumdar-Shaw framed the convergence of biology and AI as a geopolitical priority, stating that countries leading this intersection will shape multiple sectors including health, food, education and bio-manufacturing [S5] and [S6].

Additional Context (medium)

“For India this convergence is a strategic and geopolitical imperative that underpins national resilience and health security”

The knowledge base adds that her remarks emphasized national health security and the strategic importance of biotech-AI convergence for India’s resilience [S5] and [S6].

External Sources (38)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
AI for Social Good Using Technology to Create Real-World Impact — -Kiran Mazumdar-Shaw: Chairperson of Biocon Group; pioneering biotech entrepreneur, healthcare visionary, and philanthro…
S5
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event moderator or host introd…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Impact:This comment elevates the discussion from academic concepts to practical applications with profound implications …
S7
Folding Science / DAVOS 2025 — Demis Hassabis highlights how AI, specifically AlphaFold, has dramatically sped up protein structure prediction. This ac…
S8
Breakthroughs in human-centric bioscience with AI — This landmark achievement shows how powerful, responsible AI research can address urgent human health needs, moving beyo…
S9
AI for Social Good Using Technology to Create Real-World Impact — So I think I have to answer this in two parts. The first part is how do we basically leverage what Nandan refers to as t…
S10
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S11
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I think I just also wanted to speak to this question on the importance of evidence-based policymaking. I m…
S12
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — The discussion suggests several key implications for agricultural development. First, AI tools must be designed with acc…
S13
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S14
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders. Doreen Bogdan-Martin:…
S15
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S16
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Chunggong acknowledges the significant positive potential of AI for social good, including improvements in healthcare de…
S17
Building Sovereign and Responsible AI Beyond Proof of Concepts — Hi there thank you my name is Ami Kotecha I’m co -founder of Amro Partners we are a real estate company and we are now g…
S18
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S19
Keynote Adresses at India AI Impact Summit 2026 — Summary:All speakers acknowledge India’s substantial technological capabilities, particularly in AI, semiconductor desig…
S20
Panel Discussion AI in Healthcare India AI Impact Summit — The tone was consistently optimistic and collaborative throughout, with participants expressing genuine excitement about…
S21
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — Disagreement level:Very low disagreement level. The discussion represents a highly collaborative and aligned conversatio…
S22
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biologi…
S23
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Mazumdar-Shaw positioned this technological convergence within a broader geopolitical context, arguing that nations comm…
S24
Breakthroughs in human-centric bioscience with AI — This breakthrough is not happening in isolation; it forms part of a rapidly expanding constellation of AI-driven advance…
S25
Folding Science / DAVOS 2025 — Hassabis notes that relatively simple algorithmic concepts like backpropagation and reinforcement learning have scaled i…
S26
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The discussion then shifted to the Trump administration’s American AI Export Program, presented by Michael Kratsios and …
S27
How to make AI governance fit for purpose? — Effective light-touch regulation demands extensive effort to build comprehensive ecosystems beyond just legal frameworks…
S28
AI/Gen AI for the Global Goals — The importance of collaboration and partnerships
S29
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — 3. Global collaboration: Li Junhua stressed the importance of cooperation among all stakeholders. Doreen Bogdan-Martin:…
S30
Building Sovereign and Responsible AI Beyond Proof of Concepts — But otherwise, we hope this session was useful. If you want to give us feedback, here’s a bit. Bigger QR code. so if you…
S31
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S32
Practical Toolkits for AI Risk Mitigation for Businesses — In healthcare, risks involve threats to life, privacy, equality, and individual autonomy. Similarly, the retail sector a…
S33
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Our third guest… is Kiran Mamzouma -Shaw. As chairperson of Biocon Group, Kiran is a pioneering biotech… Kiran is a …
S34
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Agar kisi machine ko sir paper clip banane ka alak de diya jaye to wo uska ek kaam ke liye duniya ke saare resources ko …
S35
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Naveen Tiwari begins his presentation by expressing gratitude to the event organizers, specifically mentioning the AI Im…
S36
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — But leadership in AI requires also investment, scale, and deployment. Let me start with investment. Europe has set up no…
S37
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — Sri S. Krishnan, Secretary, Ministry of Electronics and IT, my dear friend, Professor Ravindran, Excellencies, distingui…
S38
Session — Marilia Maciel: Thank you, Jovan. I’ll do that, but I’ll do that by going back to your question about what predominates,…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
K
Kiran Mazumdar-Shaw
17 arguments · 106 words per minute · 1698 words · 955 seconds
Argument 1
AI‑driven biotech sovereignty is essential for India’s health security, economic resilience, and geopolitical standing (Kiran Mazumdar-Shaw)
EXPLANATION
She argues that controlling AI‑enabled biotechnology is crucial for safeguarding public health, maintaining economic competitiveness, and ensuring strategic autonomy on the global stage. Without sovereign capabilities, India could become dependent on external actors for critical health technologies.
EVIDENCE
She stated that if foundational AI models for drug discovery, genomics, cellular engineering and clinical decision making are owned offshore, India faces strategic dependence, emphasizing that biotech sovereignty is critical for health security, economic competitiveness, and preparedness against pandemics and bio-threats [54-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote stresses that biotech sovereignty embedded in AI is a strategic imperative for India’s health security, economic competitiveness and geopolitical positioning [S5][S6].
MAJOR DISCUSSION POINT
Strategic importance of AI‑driven biotech sovereignty
Argument 2
Nations that master the convergence of biology and AI will dominate future sectors such as healthcare, food security, and bio‑security (Kiran Mazumdar-Shaw)
EXPLANATION
She contends that the countries that can integrate biological intelligence with artificial intelligence will set the agenda for key sectors, from health to agriculture and security. This convergence will become the decisive competitive advantage in the coming decades.
EVIDENCE
She claimed that nations that command the convergence of biology and AI will define the future of healthcare, food security, education, biomanufacturing, sustainability, and biosecurity [5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She states that nations mastering the biology-AI convergence will define the future of healthcare, food security, education, biomanufacturing, sustainability and bio-security [S5][S6].
MAJOR DISCUSSION POINT
Geopolitical advantage of biology‑AI convergence
Argument 3
Living systems are “original intelligent machines” that process, store, and retrieve information with extreme energy efficiency (Kiran Mazumdar-Shaw)
EXPLANATION
She describes biological entities as the earliest form of intelligent machines, evolved over billions of years, capable of sensing, computing and responding to complex signals. Their information handling is highly efficient compared with conventional digital systems.
EVIDENCE
She described living systems as the original intelligent machines, noting that biological intelligence has evolved over 3.8 billion years and differs in how it learns, memorizes, builds and processes multimodal information, with cells sensing, computing and responding via signaling networks [9-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She describes living systems as the original intelligent machines, emphasizing their efficient information handling [S5].
MAJOR DISCUSSION POINT
Biological systems as intelligent machines
Argument 4
Examples like immune memory and Arctic tern migration illustrate innate biological computation and guardrails that maintain homeostasis (Kiran Mazumdar-Shaw)
EXPLANATION
She uses the immune system’s ability to remember pathogens and the Arctic tern’s long‑distance navigation as concrete illustrations of biological information processing and built‑in regulatory mechanisms that keep organisms healthy.
EVIDENCE
She illustrated biological computation by explaining how the immune system uses cytokines, antibodies, killer T cells and memory cells to store pathogen information and rapidly act upon re-infection [19-24], and by citing the Arctic tern’s 70,000-km migration guided by DNA-encoded navigation without prior knowledge [29-33].
MAJOR DISCUSSION POINT
Biological computation examples: immune memory and bird migration
Argument 5
AI compresses drug‑discovery timelines, enables generative design, protein‑structure prediction, and digital twins of cells/organs (Kiran Mazumdar-Shaw)
EXPLANATION
She explains that AI tools dramatically shorten the time needed to discover new therapeutics, design molecules, predict protein structures and create virtual replicas of biological systems, thereby lowering risk and cost.
EVIDENCE
She explained that AI-powered biology, including protein-structure prediction, generative drug design and digital twins of cells and organs, is compressing discovery timelines and reducing development risk [36-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI’s impact on accelerating drug discovery, generative design and protein-structure prediction (e.g., AlphaFold) is highlighted as dramatically shortening timelines [S7][S8].
MAJOR DISCUSSION POINT
AI accelerating biotech discovery
Argument 6
Future frontier: reprogramming cells (e.g., converting cancer cells, repairing bone) and engineering programmable biology for personalized therapies and longevity (Kiran Mazumdar-Shaw)
EXPLANATION
She envisions a next‑generation biotech where AI helps re‑engineer cells directly, turning malignant cells benign and repairing damaged tissues, enabling highly personalized treatments and extending healthy lifespan.
EVIDENCE
She highlighted the next frontier of reprogramming cells, such as converting cancer cells into non-malignant cells and repairing bone tissue, and described programmable biology that can deliver personalized therapies and extend health span [39-44] and further elaborated on this vision in a detailed discussion of cellular re-engineering [46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She outlines programmable biology as the next frontier, citing conversion of cancer cells and tissue repair as examples [S6].
MAJOR DISCUSSION POINT
Cell reprogramming and programmable biology as future frontier
Argument 7
Reliance on offshore AI models for drug discovery, genomics, and clinical decision‑making creates strategic dependence (Kiran Mazumdar-Shaw)
EXPLANATION
She warns that depending on foreign AI platforms for critical biotech processes leaves India vulnerable to external control and limits its strategic autonomy.
EVIDENCE
She warned that reliance on offshore AI models for drug discovery, genomics and clinical decision-making creates strategic dependence, threatening national resilience [54-55].
MAJOR DISCUSSION POINT
Strategic risk of offshore AI dependence
Argument 8
India must own trusted biological data, indigenous AI models, computing resources, and end‑to‑end translational platforms (Kiran Mazumdar-Shaw)
EXPLANATION
She calls for the creation of domestic data repositories, home‑grown AI algorithms and national computing infrastructure to ensure end‑to‑end control over biotech pipelines from research to product delivery.
EVIDENCE
She argued that India must secure sovereign control over trusted biological data, develop indigenous AI models, own computing infrastructure and build end-to-end translational platforms from discovery to delivery [56-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for sovereign control over trusted biological data, home-grown AI models and national computing infrastructure are emphasized [S5][S6].
MAJOR DISCUSSION POINT
Need for sovereign AI‑bio data and infrastructure
Argument 9
Discovery: develop foundation models for proteins, RNA, cellular circuits, systems biology (Kiran Mazumdar-Shaw)
EXPLANATION
She proposes building large, general AI models that capture the fundamental patterns of proteins, nucleic acids and cellular networks to accelerate early‑stage biotech research.
EVIDENCE
She called for the development of foundation models for proteins, RNA, cellular circuits and systems biology to power AI-driven biotech discovery [62].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She advocates building foundation models for proteins, RNA and cellular circuits to accelerate early-stage biotech research [S5].
MAJOR DISCUSSION POINT
Foundation models for biotech discovery
Argument 10
Development: use in‑silico trials, digital twins, AI‑driven trial design to de‑risk pipelines (Kiran Mazumdar-Shaw)
EXPLANATION
She highlights that virtual clinical testing, AI‑guided trial designs and digital replicas of patients can lower failure rates and speed up the transition from lab to market.
EVIDENCE
She identified opportunities to use in-silico trials, digital twins and AI-driven trial design to de-risk pipelines and increase probability of success in drug development [63].
MAJOR DISCUSSION POINT
AI‑enabled de‑risking of drug development
Argument 11
Manufacturing: implement smart biomanufacturing, AI‑optimized yields, and quality‑by‑design (Kiran Mazumdar-Shaw)
EXPLANATION
She advocates for AI‑driven optimization of bioprocesses, ensuring higher yields and consistent product quality through data‑centric, adaptive manufacturing systems.
EVIDENCE
She advocated for smart biomanufacturing that leverages AI for yield optimisation and quality-by-design to enhance productivity and reliability [64].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven smart biomanufacturing, yield optimisation and quality-by-design are presented as key economic enablers [S5].
MAJOR DISCUSSION POINT
AI‑powered smart biomanufacturing
Argument 12
Regulation: create science‑first, tech‑enabled pathways with AI‑validated real‑world evidence; regulatory speed must match accelerated innovation (Kiran Mazumdar‑Shaw)
EXPLANATION
She stresses that regulatory frameworks need to be built on scientific evidence, incorporate AI validation, and evolve quickly enough to keep pace with rapid biotech advances.
EVIDENCE
She stressed the need for a science-first, tech-enabled regulatory framework that incorporates AI-validated real-world evidence and must keep pace with accelerated innovation to avoid missed opportunities [66-70].
MAJOR DISCUSSION POINT
AI‑integrated regulatory frameworks
Argument 13
Government: invest in sovereign AI‑bio infrastructure, trusted data architectures, regulatory sandboxes, and mission‑mode programs (Kiran Mazumdar‑Shaw)
EXPLANATION
She calls on the state to fund national AI‑bio platforms, secure data ecosystems, create experimental regulatory environments and launch focused programmes in cutting‑edge therapeutic areas.
EVIDENCE
She urged the government to invest in sovereign AI-bio infrastructure, trusted data architectures, regulatory sandboxes and mission-mode programmes in cell and gene therapy, immuno-oncology and longevity science [74].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She urges government investment in sovereign AI-bio infrastructure, trusted data ecosystems and regulatory sandboxes to accelerate innovation [S5][S6].
MAJOR DISCUSSION POINT
Government investment in sovereign AI‑bio ecosystem
Argument 14
Academia: mainstream computational biology, neurosymbolic AI, AI‑first life‑science curricula to train translational scientists (Kiran Mazumdar‑Shaw)
EXPLANATION
She recommends that universities embed advanced computational methods and AI‑centric courses into life‑science programs to produce a workforce capable of driving AI‑enabled biotech.
EVIDENCE
She recommended that academia mainstream computational biology, neurosymbolic AI and AI-first life-science education to train a new cadre of translational scientists [75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She recommends academia embed computational biology, neurosymbolic AI and AI-first curricula to develop translational scientists [S5].
MAJOR DISCUSSION POINT
Academic capacity building for AI‑bio
Argument 15
Industry: co‑create shared platforms, translational pipelines, globally benchmarked biomanufacturing clusters; capital markets must also evolve to support long‑cycle, high‑risk biotech innovation (Kiran Mazumdar‑Shaw)
EXPLANATION
She urges private sector players to collaborate on common platforms and manufacturing hubs, while calling for investors to provide patient capital for high‑risk, long‑term biotech projects.
EVIDENCE
She called on industry to co-create shared platforms, translational pipelines and globally benchmarked biomanufacturing clusters, and highlighted the need for capital markets to provide patient, long-term funding for high-risk biotech innovation [76-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry collaboration on shared platforms, biomanufacturing clusters and the need for patient capital for high-risk biotech are advocated [S5][S6].
MAJOR DISCUSSION POINT
Industry collaboration and financing for AI‑bio
Argument 16
Sovereignty does not mean isolation; India must build ethical, transparent, energy‑efficient and bias‑aware AI systems that are globally interoperable yet rooted in public interest (Kiran Mazumdar‑Shaw)
EXPLANATION
She clarifies that national AI‑bio sovereignty should coexist with global standards, emphasizing ethical design, transparency, low energy consumption and mitigation of algorithmic bias.
EVIDENCE
She clarified that sovereignty does not mean isolation, and advocated for building ethical, transparent, energy-efficient and bias-aware AI systems for biology that are globally interoperable yet rooted in the public interest [82-86].
MAJOR DISCUSSION POINT
Ethical, transparent AI for biotech sovereignty
Argument 17
Embedding equity, affordability, and access into AI‑driven biotech positions India as a model of innovation with social purpose (Kiran Mazumdar‑Shaw)
EXPLANATION
She argues that integrating principles of fairness, cost‑effectiveness and universal access into AI‑enabled biotech will showcase India as a leader that couples technological excellence with societal benefit.
EVIDENCE
She emphasized embedding principles of equity, affordability and access into AI-driven biotech to offer the world a model of innovation that combines technological leadership with social purpose [85-87].
MAJOR DISCUSSION POINT
Equity and access in AI‑driven biotech innovation
Agreements
Agreement Points
Both speakers acknowledge the significance of the Impact AI Summit as a platform for AI and biotech discussions in India.
Speakers: Speaker 1, Kiran Mazumdar-Shaw
Speaker 1 welcomes the audience and introduces Ms. Kiran Mazumdar-Shaw at the Impact AI Summit [1], and Kiran expresses delight to be part of this “wonderful summit, the Impact AI Summit that India… is launching and hosting for the first time” [2].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the summit’s highlighted role in keynote addresses that stress India’s deep engineering talent and innovation potential, the collaborative optimism expressed in AI-healthcare panels, and the broader US-India alignment on AI export policy discussed at the summit [S19][S20][S21].
Similar Viewpoints
Kiran consistently emphasizes that AI‑enabled biotechnology is a strategic national priority that must be sovereign, ethically grounded, and integrated across discovery, development, manufacturing, regulation, and education to secure health, economic, and geopolitical benefits. She repeatedly calls for domestic control of data and models, government investment, academic capacity building, industry collaboration, and ethical design, linking these to AI’s ability to accelerate science and deliver equitable outcomes [4-8][9-13][36-38][39-44][54-58][74-80][82-86].
Speakers: Kiran Mazumdar-Shaw
AI‑driven biotech sovereignty is essential for India’s health security, economic resilience, and geopolitical standing (Kiran Mazumdar-Shaw)
Nations that master the convergence of biology and AI will dominate future sectors such as healthcare, food security, and bio‑security (Kiran Mazumdar-Shaw)
AI compresses drug‑discovery timelines, enables generative design, protein‑structure prediction, and digital twins of cells/organs (Kiran Mazumdar-Shaw)
Future frontier: reprogramming cells (e.g., converting cancer cells, repairing bone) and engineering programmable biology for personalized therapies and longevity (Kiran Mazumdar-Shaw)
India must own trusted biological data, indigenous AI models, computing resources, and end‑to‑end translational platforms (Kiran Mazumdar-Shaw)
Government: invest in sovereign AI‑bio infrastructure, trusted data architectures, regulatory sandboxes, and mission‑mode programmes (Kiran Mazumdar‑Shaw)
Academia: mainstream computational biology, neurosymbolic AI, AI‑first life‑science curricula to train translational scientists (Kiran Mazumdar‑Shaw)
Industry: co‑create shared platforms, translational pipelines, globally benchmarked biomanufacturing clusters; capital markets must evolve to support long‑cycle, high‑risk biotech innovation (Kiran Mazumdar‑Shaw)
Sovereignty does not mean isolation; India must build ethical, transparent, energy‑efficient and bias‑aware AI systems that are globally interoperable yet rooted in public interest (Kiran Mazumdar‑Shaw)
Embedding equity, affordability, and access into AI‑driven biotech positions India as a model of innovation with social purpose (Kiran Mazumdar‑Shaw)
Unexpected Consensus
Overall Assessment

The discussion shows strong internal coherence in Kiran Mazumdar‑Shaw’s arguments, with multiple points converging on AI‑driven biotech sovereignty as essential for India’s health security, economic resilience, and global standing. The only cross‑speaker agreement is the shared recognition of the Impact AI Summit’s importance. Overall consensus among speakers is limited due to the single substantive voice, but the depth of agreement within that voice signals a clear policy direction for India.

Inter‑speaker consensus is low (there is only one substantive speaker), but intra‑speaker coherence is high, implying that if the agenda moves forward, the outlined strategic pillars are likely to be pursued collectively across government, academia, and industry.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only an introductory welcome by Speaker 1 and a single, uninterrupted keynote by Kiran Mazumdar‑Shaw. No other speakers present opposing viewpoints, and the speaker does not articulate any counter‑arguments to her own positions. Consequently, there are no observable disagreements or partial agreements among participants.

Minimal – the discussion is essentially a monologue, so disagreement does not affect the thematic focus on AI‑driven biotech sovereignty.

Takeaways
Key takeaways
Biotech sovereignty, powered by AI, is a strategic and geopolitical imperative for India’s health security, economic resilience, and global standing.
Living systems constitute a form of ‘biological intelligence’ that processes, stores, and retrieves information with extreme energy efficiency, offering a model for AI integration.
AI can dramatically accelerate biotech breakthroughs, from protein‑structure prediction and generative drug design to digital twins, and ultimately to reprogramming cells for personalized therapies and longevity.
Reliance on offshore AI models creates strategic dependence; India must develop sovereign AI infrastructure, trusted biological data, and indigenous end‑to‑end translational platforms.
Embedding AI across the entire biotech value chain—discovery, development, manufacturing, and regulation—is essential to compress timelines and maintain competitive advantage.
Implementation requires a triple‑helix collaboration: government investment in AI‑bio infrastructure and regulatory sandboxes; academia delivering AI‑first life‑science curricula; industry co‑creating shared platforms and scalable biomanufacturing clusters, supported by patient capital.
Sovereignty should not equate to isolation; ethical, transparent, energy‑efficient, bias‑aware AI systems that are globally interoperable and rooted in public interest will position India as a model of socially responsible innovation.
Resolutions and action items
Government to invest in sovereign AI‑bio infrastructure, trusted data architectures, regulatory sandboxes, and mission‑mode programs in cell & gene therapy, immuno‑oncology, and longevity science.
Academia to mainstream computational biology, neurosymbolic AI, and AI‑first life‑science education to train translational scientists.
Industry to co‑create shared AI platforms, translational pipelines, and globally benchmarked biomanufacturing clusters; adopt smart biomanufacturing and quality‑by‑design practices.
Capital markets to develop financing mechanisms that provide long‑term, patient capital for high‑risk biotech innovation.
Development of foundation AI models for proteins, RNA, cellular circuits, and systems biology; creation of in‑silico trial frameworks, digital twins, and AI‑driven trial design.
Regulatory bodies to establish science‑first, tech‑enabled pathways that integrate real‑world evidence validated by AI, and to accelerate approval timelines to match faster innovation cycles.
Unresolved issues
Specific funding models and budget allocations for the proposed sovereign AI‑bio infrastructure remain undefined.
Concrete timelines and milestones for building indigenous foundation models and regulatory sandboxes were not detailed.
Mechanisms for ensuring data privacy, security, and interoperability with global partners while maintaining sovereignty were not fully addressed.
Strategies for mitigating bias and ensuring energy efficiency in AI systems need further elaboration.
Details on how the triple‑helix collaboration will be coordinated, governed, and held accountable were not specified.
Suggested compromises
Balancing strategic autonomy with global interoperability: India will develop sovereign AI systems that are ethically transparent and compatible with international standards, avoiding isolationist approaches.
Embedding ethical, equity‑focused principles into AI development while pursuing rapid technological advancement, ensuring that speed does not compromise trust and fairness.
Thought Provoking Comments
The coming decades will be shaped by biotech sovereignty that is embedded in AI; nations that command the convergence of biology and AI will define the future of healthcare, food security, education, biomanufacturing, sustainability, biosecurity, and much more.
Frames the entire discussion as a geopolitical imperative, moving the conversation from a technical trend to a strategic national priority.
Sets the overarching theme of the talk, prompting the rest of the speech to explore how India can achieve this sovereignty and influencing the audience to view AI‑biotech convergence as a matter of national security rather than just innovation.
Speaker: Kiran Mazumdar-Shaw
Living systems are the original intelligent machines… they operate within inbuilt biological guardrails that maintain homeostasis; disease arises when these guardrails fail.
Introduces the concept of ‘biological intelligence’ and links it to natural governance mechanisms, providing a novel lens to compare biology with artificial intelligence.
Creates a conceptual bridge that justifies why AI should learn from biology, steering the discussion toward the idea of mimicking biological guardrails in engineered systems.
Speaker: Kiran Mazumdar-Shaw
The Arctic tern undertakes a 70,000‑kilometer journey with no prior knowledge or older bird to guide it—its navigational intelligence is embedded in DNA, whereas AI learns from data to optimize decisions at machine scale.
Uses a vivid biological example to illustrate innate, DNA‑encoded intelligence, contrasting it with data‑driven AI, thereby deepening the audience’s appreciation of the uniqueness of biological computation.
Acts as a turning point that moves the narrative from abstract definitions to concrete biological phenomena, reinforcing the argument that AI can be enhanced by emulating such innate intelligence.
Speaker: Kiran Mazumdar-Shaw
Reprogramming cancer cells into non‑malignant cells and repairing bone tissue that is damaged and irreparable represent the next frontier—moving from static one‑size‑fits‑all drugs to programmable biology.
Projects a bold, future‑oriented vision of therapeutic innovation, shifting the conversation from current AI‑enabled drug discovery to transformative cell‑reprogramming technologies.
Introduces a new topic—programmable biology—that expands the scope of the discussion to include regenerative medicine and longevity, prompting listeners to consider deeper scientific and ethical implications.
Speaker: Kiran Mazumdar-Shaw
If foundational AI models for drug discovery, genomics, cellular engineering and clinical decision‑making are owned offshore, India risks strategic dependence in the most critical domain of national resilience—human health.
Links technological capability directly to national security, challenging any complacent view that AI tools can be imported without consequence.
Creates a pivot toward policy and sovereignty concerns, leading to the subsequent call for indigenous AI infrastructure and influencing the audience to think about ownership, data sovereignty, and geopolitical risk.
Speaker: Kiran Mazumdar-Shaw
India must evolve from being the ‘pharmacy of the world’ to becoming the ‘biotech platform of the world’, offering AI‑native discovery engines, programmable therapy platforms and scalable biomanufacturing as global public goods.
Reframes India’s economic role in a bold, aspirational way, moving beyond traditional strengths to a future‑focused, AI‑driven biotech leadership model.
Serves as a strategic vision statement that unifies the earlier technical points under a national development agenda, encouraging stakeholders to align their efforts toward this higher‑order goal.
Speaker: Kiran Mazumdar-Shaw
The transformation cannot be driven by industry alone; it demands a triple helix of government, academia and industry, with coordinated investment in sovereign AI bio‑infrastructure, trusted data architectures, regulatory sandboxes and mission‑mode programs.
Highlights the necessity of cross‑sector collaboration, shifting the tone from a solo visionary speech to a call for collective action and systemic change.
Marks a turning point toward actionable policy recommendations, prompting the audience to consider concrete steps and partnerships required to realize the earlier vision.
Speaker: Kiran Mazumdar-Shaw
Sovereignty is not isolation; India must build ethical, transparent, energy‑efficient and bias‑aware AI systems for biology that are globally interoperable yet rooted in public interest, embedding equity, affordability and access into AI‑driven biotech.
Integrates ethical considerations with the sovereignty narrative, challenging any purely techno‑centric approach and emphasizing responsible innovation.
Adds a layer of complexity by introducing ethics and global interoperability, steering the discussion toward responsible AI governance and influencing how the audience perceives the balance between national interests and global collaboration.
Speaker: Kiran Mazumdar-Shaw
Overall Assessment

Kiran Mazumdar‑Shaw’s monologue is structured around a series of pivotal insights that progressively broaden the conversation—from defining biotech sovereignty and biological intelligence, to illustrating natural examples, envisioning programmable biology, and finally confronting geopolitical, economic, regulatory and ethical dimensions. Each highlighted comment acts as a turning point, redirecting the audience’s focus and deepening the analysis. Collectively, these remarks shape the discussion into a comprehensive roadmap that links scientific possibility with national strategy, urging coordinated action across government, academia and industry while foregrounding responsible, sovereign AI development.

Follow-up Questions
How can we understand the mechanisms of biological intelligence—how living systems learn, store, retrieve, and process information?
Understanding biological intelligence is foundational for leveraging AI to reprogram cells, develop programmable biology, and create energy‑efficient computational models of life.
Speaker: Kiran Mazumdar-Shaw
What strategies are needed to develop indigenous AI models for drug discovery, genomics, cellular engineering, and clinical decision‑making?
Relying on offshore AI models creates strategic dependence; building sovereign AI models ensures national health security and economic competitiveness.
Speaker: Kiran Mazumdar-Shaw
How should India build sovereign AI bio‑infrastructure, including trusted data architectures and computing resources?
Secure, reliable data and compute platforms are essential for trustworthy AI applications across the biotech value chain and for pandemic preparedness.
Speaker: Kiran Mazumdar-Shaw
What are the priorities for creating foundation models for proteins, RNA, cellular circuits, and systems biology?
Foundation models can accelerate discovery, reduce risk, and enable AI‑native therapies, positioning India as a leader in biotech innovation.
Speaker: Kiran Mazumdar-Shaw
How can in‑silico trials, digital twins, and AI‑driven trial design be developed to de‑risk pipelines and improve probability of success?
These tools can compress development timelines, lower costs, and align regulatory evaluation with rapid scientific advances.
Speaker: Kiran Mazumdar-Shaw
What approaches are needed for smart biomanufacturing using AI for yield optimization and quality‑by‑design?
AI‑enabled manufacturing will enhance scalability, reduce waste, and ensure consistent product quality, supporting India’s vision as a global biotech platform.
Speaker: Kiran Mazumdar-Shaw
How should a science‑first, tech‑enabled regulatory system be designed to integrate real‑world evidence through AI validation?
Regulatory frameworks must evolve in step with accelerated discovery to avoid bottlenecks and to safely bring AI‑derived therapies to market.
Speaker: Kiran Mazumdar-Shaw
What mechanisms can ensure regulatory speed keeps up with compressed discovery and development timelines?
Without parallel regulatory agility, the benefits of faster AI‑driven innovation could be lost, undermining economic and health gains.
Speaker: Kiran Mazumdar-Shaw
How can the triple helix of government, academia, and industry be coordinated to invest in sovereign AI bio‑infrastructure and mission‑mode programs?
Collaboration across sectors is critical to fund and execute large‑scale initiatives in cell and gene therapy, immuno‑oncology, and longevity science.
Speaker: Kiran Mazumdar-Shaw
What educational reforms are needed to mainstream computational biology, neurosymbolic AI, and AI‑first life‑sciences curricula?
Training a new cadre of translational scientists ensures a skilled workforce capable of driving AI‑augmented biotech research.
Speaker: Kiran Mazumdar-Shaw
How can capital markets evolve to provide patient capital for long‑cycle, high‑risk biotech innovation in India?
Sustained financing is essential for deep science projects that have high societal and economic returns but require extended investment horizons.
Speaker: Kiran Mazumdar-Shaw
What principles should guide the development of ethical, transparent, energy‑efficient, and bias‑aware AI systems for biology?
Ensuring AI systems are trustworthy and aligned with public interest is vital for global interoperability and domestic acceptance.
Speaker: Kiran Mazumdar-Shaw
How can equity, affordability, and access be embedded into AI‑driven biotech innovations?
Integrating these values positions India as a model of socially responsible innovation and expands global impact.
Speaker: Kiran Mazumdar-Shaw
What scientific pathways exist for reprogramming cancer cells into non‑malignant cells?
Successfully converting malignant cells could revolutionize oncology and reduce reliance on conventional therapies.
Speaker: Kiran Mazumdar-Shaw
How can AI‑enabled approaches repair damaged bone tissue that is currently irreparable?
Advances in tissue regeneration would address unmet medical needs and demonstrate the power of AI‑augmented biology.
Speaker: Kiran Mazumdar-Shaw
What are the mechanisms to modulate senescence, metabolic pathways of aging, and cellular repair to extend healthspan?
Understanding and intervening in aging processes could dramatically improve longevity and reduce disease burden.
Speaker: Kiran Mazumdar-Shaw
How can AI be used to map regulatory circuits at scale to identify targets that preserve homeostasis?
Large‑scale mapping enables precise interventions that reinforce biological guardrails rather than override them.
Speaker: Kiran Mazumdar-Shaw
What steps are required to develop AI‑native discovery engines, programmable therapy platforms, and scalable biomanufacturing as global public goods?
Creating these shared resources would shift India from a pharmaceutical supplier to a leading biotech platform for the world.
Speaker: Kiran Mazumdar-Shaw
How can India transition from being the ‘pharmacy of the world’ to the ‘biotech platform of the world’?
This strategic shift involves integrating AI across the entire biotech value chain to achieve leadership in discovery, development, and manufacturing.
Speaker: Kiran Mazumdar-Shaw

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Driving India's AI Future: Growth, Innovation and Impact


Session at a glance

Summary

The session focused on India’s strategy to close the global AI divide by unveiling a Dell Technologies blueprint that aims to position AI as a driver of economic growth and sovereign capability [1][6][7].


Dr. Vivek Mohindra presented the blueprint’s three pillars: invest in scalable compute and energy infrastructure, innovate through widespread skilling and collaboration, and evolve with agile, security-first governance, arguing that public-private partnership is essential to unlock sovereign AI potential [17-21][23-30][31-34]. He highlighted that AI workloads in India are expected to grow at over 30% CAGR, requiring more than 10 exaflops of compute and an estimated 200,000 GPUs, far beyond the 40,000-50,000 currently available [15-16][71-73]. Raj Gopal suggested fiscal measures such as waiving GST on imported servers and extending tax holidays to reduce upfront costs for startups and MSMEs [86-92]. He also illustrated the ecosystem’s impact by describing how his company helped the Election Commission of India deduplicate 90 crore photographs in 51 hours [78-83].


Professor Bhaskar Chakravarti warned that beyond hardware, a “trust infrastructure” encompassing data governance, transparency, grievance mechanisms and public confidence is the key non-technical bottleneck for inclusive AI adoption [113-130][131-138]. Manish Gupta reinforced this by calling for a “UPI of AI” that democratizes access through a unified data-set API, while emphasizing that India must shift from “made in India” to “trusted in India” and scale the developer base from a billion users to millions of creators [220-229][232-245]. The panel debated the balance between rapid innovation and regulation, with Raj Gopal urging minimal constraints to avoid stifling growth, whereas Manish argued that agile frameworks and existing privacy laws can reconcile speed with safeguards [256-259][291-300]. Bhaskar used a car-road analogy to stress that even the fastest AI models cannot thrive on a “dirt-road” of weak institutions and job-impact policies [275-280][282-284].


Both the moderator and Minister Jayant Chaudhary emphasized that public-private partnerships have already delivered a low-cost compute facility (≈ ₹65 per hour) and must expand to tier-2/3 regions, academia, and skill hubs to sustain the AI mission [146-148][322-328][345-350]. The minister described a zero-trust AI architecture that verifies every protocol, segments data, and ensures auditability, aligning with Dr. Mohindra’s call for national risk registries and observability [386-394][398-405][412-416]. The discussion concluded that coordinated investment, skill development, trustworthy governance, and robust PPP models are critical to translate India’s AI ambitions into inclusive economic growth [302-304][321-324].


Keypoints


Major discussion points


A three-pillar “Invest-Innovate-Evolve” blueprint for scaling AI in India - the framework calls for massive compute and energy investment, a skills-driven innovation ecosystem, and agile, responsible governance ([18-20][23-26][27-30][31]; [46-48]).


Public-private partnership (PPP) as the engine for sovereign AI infrastructure - the blueprint stresses marrying public resources with private innovation, building distributed, cost-efficient data centres across states, leveraging open-source to lower costs, and mobilising billions of dollars of investment ([32-34][167-176][183-188][158-162]).


Non-technical bottlenecks: trust, institutional capacity and regulatory agility - trust is described as a “trust infrastructure” that includes data-governance, transparency, grievance mechanisms and job-impact safeguards; regulations must be agile and balance innovation with responsibility ([113-122][129-138][275-283][284-288][291-296]).


Policy and fiscal levers to unlock AI for startups and MSMEs - subsidised GPU access, a proposed GST waiver on imported servers, and tax holidays for AI services are highlighted as ways to reduce upfront cost barriers for smaller firms ([68-71][86-92]).


Skill development and a “UPI of AI” – building a massive developer base - the conversation stresses moving from a billion-user base to millions of AI developers, creating tiered skilling pathways (schools, colleges, employment), and establishing a common data-API layer to democratise access ([226-236][240-244][373-379]; [362-365]).


Overall purpose / goal


The session was convened to present and debate a concrete “AI blueprint” that bridges the global AI divide by positioning India as a sovereign AI leader. It aimed to translate high-level aspirations into actionable steps-investment in compute, skilling the workforce, and establishing trustworthy governance-through coordinated action among government, industry, academia and the startup ecosystem.


Overall tone


The discussion began with a formal, optimistic “call to action” tone, emphasizing opportunity and national ambition. As the panel moved into technical and policy details, the tone became more analytical, highlighting concrete challenges (infrastructure gaps, regulatory speed, trust deficits). Toward the end, the tone shifted to collaborative and hopeful, underscoring partnership, shared responsibility, and a collective resolve to build an inclusive, secure AI future for India.


Speakers

Mridu Bhandari – Senior Anchor and Consulting Editor at Network 18 (brands include CNBC and Forbes India); moderator/host of the session. [S12]


Dr. Vivek Mohindra – Special Advisor to the Vice Chairman and COO, Dell Technologies Global. [S8]


Manish Gupta – President and Managing Director, Dell Technologies India. [S11]


A. S. Rajgopal – Managing Director and Chief Executive Officer, NextGen Cloud Technologies. [S3]


Bhaskar Chakravarti – Dean of Global Business, the Fletcher School of Law and Diplomacy, Tufts University. (referred to as Professor Bhaskar)


Shri Jayant Chaudhary Ji – Minister of State for Education and Minister of Skill Development and Entrepreneurship (Independent Charge), Government of India. [S6]


Additional speakers:


(none)


Full session report

The session opened with senior anchor and consulting editor Mridu Bhandari framing the summit’s purpose as “bridging the global AI divide” and positioning artificial intelligence as a catalyst for economic growth, social empowerment and India’s global leadership [1-3]. She stressed that the 55-minute programme was a “call to action” rather than a mere presentation [4] and set the agenda by announcing a focus on the execution pathway for AI adoption from an industry perspective [5-6]. The first speaker, Dr Vivek Mohindra – special advisor to the vice-chairman and COO of Dell Technologies Global – was then introduced [7-8].


Dr Mohindra noted that India stands at a “cusp of very significant changes and progress on the back of AI” with bold domestic and global aspirations [10]. He highlighted Dell’s three-decade presence in India and its role as the world’s leading AI-infrastructure provider, which underpins the “AI blueprint” that Dell has prepared for the country [12-16][15-16]. The blueprint is built around three pillars:


* Invest – massive spending on sovereign, scalable compute and data foundations and on energy infrastructure, because “without energy…there is really no compute infrastructure” [18-21]; projected AI-driven compute growth in India to exceed 10 exaflops and AI workloads to expand at a CAGR of over 30% [15-16].


* Innovate – skilling the nation across schools, colleges and the workforce, delivered through online, in-person and incubation modes, and fostering ecosystem collaboration [23-26].


* Evolve – an agile, security-first governance regime that balances innovation with responsibility and anchors AI in a “trust-first” regulatory principle [27-32][33-34].


After Dr Mohindra’s remarks, Ms Bhandari invited Manish Gupta, President and Managing Director of Dell Technologies India, to the stage for the unveiling of the Dell Technologies Blueprint that aligns with India’s “Viksit Bharat 2047” vision [39-48]. She summarised the blueprint as centred on the three pillars – invest in sovereign compute and data, innovate through collaboration and a future-ready workforce, and evolve into a responsible, agile governance structure [46-48].


A panel was then convened, comprising Raj Gopal (NextGen Cloud Technologies), Professor Bhaskar Chakravarti (The Fletcher School, Tufts) and Manish Gupta [52-56]. The moderator began by asking Raj Gopal to identify policies and market interventions that could unlock reliable, affordable AI compute for startups and MSMEs [61-65].


Policy levers for startups and MSMEs – Raj Gopal pointed out that India already offers subsidised GPU access, with some firms receiving 100% of the GPUs they need, yet the country still possesses only about 40,000-50,000 GPUs against an estimated requirement of 200,000 [71-73]. He advocated additional fiscal measures, notably waiving GST on imported servers (cutting upfront costs by roughly 18%) and extending income-tax holidays for AI service providers [90-98]. He also noted that India’s investment in AI compute is roughly one-hundredth of what the United States is investing [100-104]. He illustrated the ecosystem’s impact by describing his own company’s work for the Election Commission of India, where a massive deduplication of 90 crore photographs was completed in 51 hours [78-83].
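The working-capital argument behind the proposed GST deferral can be sketched numerically. Only the 18% GST rate comes from the session; the server cost, cost of capital, and credit-recovery lag below are assumed purely for illustration:

```python
# Hypothetical illustration of the GST-deferral argument. Only the 18% GST
# rate is from the session; every other figure is an assumption.

server_cost = 10_000_000       # assumed: ₹1 crore of imported servers
gst_rate = 0.18                # 18% GST paid on the import
cost_of_capital = 0.12         # assumed: annual interest on borrowed funds
recovery_years = 1.0           # assumed: lag before input credit is recovered

# Today: the provider finances the GST outlay until it can be offset
# against GST collected on services delivered.
gst_upfront = server_cost * gst_rate
financing_cost = gst_upfront * cost_of_capital * recovery_years

print(f"GST financed up front: ₹{gst_upfront:,.0f}")
print(f"Interest carried on that outlay: ₹{financing_cost:,.0f}")

# Under the proposed waiver, the same GST is still collected by the
# government, but only as services are billed, so the financing cost
# above disappears for the provider.
```

The point is that the exchequer’s total GST take is unchanged; only the timing shifts, which removes an interest burden from the infrastructure provider.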


Non-technical bottlenecks – Professor Chakravarti shifted the focus to the “trust infrastructure” that underpins AI adoption. He argued that the single most important determinant of a country’s AI momentum is the demand side, which in turn depends on transparent data-governance, grievance mechanisms, and public confidence [101-110][113-122]. While India enjoys relatively high public trust in digital systems [123-127], he warned that institutional trust is still nascent, varying across districts and requiring robust policies on transparency, explainability and job-impact mitigation [128-138][260-267].


Industry vision of a “UPI of AI” – Manish Gupta echoed the need for a unified, open API layer that aggregates government data sets and compute capacity, likening it to the successful UPI payments system [220-245]. He stressed that India must move from “made in India” to “trusted in India” by collaborating with the Artificial Intelligence Safety Institute and by building a “security-first” governance framework [229-236]. He also highlighted the importance of sustainable, energy-efficient data centres, noting that NextGen Cloud Technologies is already deploying highly efficient facilities that can democratise compute access [156-162][160-162].


When asked about the practical steps to build sovereign, cost-efficient AI infrastructure, Raj Gopal described a plan to install roughly 100 MW of data-centre capacity across six states, leveraging existing telecom, rail and power networks for inter-connectivity [167-176]. He stressed that billions of dollars of investment are required, that open-source software can dramatically lower compute costs, and that a distributed model would bring compute closer to end-users, especially in education and healthcare [182-188][167-176].


All three speakers converged on the centrality of public-private partnership (PPP) as the engine for scaling AI infrastructure, skilling and inclusive growth. Dr Mohindra described sovereign AI as “really about the public-private partnership” [30-34]; Manish Gupta highlighted PPP-driven democratisation of AI access [146-149][148-155]; and Minister Jayant Chaudhary later reinforced that the low-cost compute facility (≈ ₹65 per hour) is a direct outcome of a people-centric PPP model that embeds resources in academic institutions [333-351][345-351].


A notable disagreement emerged around the regulatory approach. Dr Mohindra advocated an “agile” regulatory framework that balances innovation with responsibility [28-32], whereas Raj Gopal argued for a “minimal” regulatory regime to avoid stifling growth, suggesting that risks can be managed through continuous monitoring [256-259]. A second point of contention concerned the balance between speed and trust. Professor Chakravarti used a car-and-road metaphor to warn that even the fastest AI models cannot perform on a “dirt-road” of weak institutions, emphasizing the need for trust, transparency and job-impact policies before rapid deployment [275-283][284-288][260-267]. Manish Gupta, however, contended that agility and security are not opposing forces and that institutions must evolve faster than the technology to accommodate both [291-295].


The panel also addressed strategic autonomy. The moderator asked about foundational capabilities for a sovereign AI ecosystem [210-215], and Manish Gupta responded by outlining a “people-first” approach, the shift to “trusted in India”, and the vision of a “UPI of AI” platform [220-245].


In concluding remarks, Ms Bhandari summarised the consensus: investment in compute, innovation through skill pipelines, and evolution via trustworthy, agile governance are essential to realise India’s sovereign AI infrastructure [302-304]. She announced the arrival of Hon Jayant Chaudhary, Minister of State for Education and Skill Development, and previewed a forthcoming fireside chat on public-private models for AI [321-328][322-328]. The minister reiterated that the India AI Mission has already surpassed its target of 18,000 GPUs, now providing about 38,000 and aiming for 100,000 by year-end, and that the compute facility is being offered at the world’s lowest price point (≈ ₹65 per hour) to foster inclusive innovation [345-351][346-350]. He also outlined a zero-trust AI architecture, calling for verification of every protocol, data segmentation, audit trails and a national risk registry [386-398][401-409]; Dr Mohindra echoed this, extending zero-trust from data through models to identity-and-access management [412-416].


Overall, the discussion painted a detailed roadmap: massive, geographically distributed compute investment; fiscal incentives such as GST waivers and tax holidays to lower barriers for MSMEs; a unified “UPI of AI” platform; a robust trust infrastructure encompassing transparency, explainability and grievance mechanisms; and an agile, security-first regulatory regime. The participants agreed that only through coordinated public-private partnership, sustained skill development and trustworthy governance can India translate its AI ambitions into inclusive economic growth and strategic autonomy [302-304][345-351].


Session transcript
Mridu Bhandari

So this conversation here and the couple of conversations we are going to have over the next one hour or so are aligned with the summit’s goal of bridging the global AI divide. So AI drives economic growth, social empowerment, and of course, global leadership for India. This is not just a presentation, it is a call to action. I’m your host, Mridu Bhandari, senior anchor and consulting editor at Network 18 with brands like CNBC and Forbes India, and I’ll be guiding you through this next 55-minute journey that we’re on. To set the tone of this morning, we’re going to begin with framing the execution pathway of AI adoption and scaling it up from an industry vantage point. Our leadership keynote theme today is architecting India’s AI leadership, a blueprint for transformation.

To deliver this knowledge, we’re going to be talking about the key points of AI adoption and scaling it up from an Please join me in welcoming on stage Dr. Vivek Mohindra, special advisor to the vice chairman and COO of Dell Technologies Global. Dr. Mohindra, please join us here.

Dr. Vivek Mohindra

Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have heard over most of this week, India is at the cusp of very significant changes and progress on the back of AI, with very bold aspirations, which are not only bold for India, but very bold when you put them in the context of the global aspirations that lots of other countries have. Dell has had a presence in India for over 30 years. We have partnered very closely with several government agencies as well as companies and the broader ecosystem to bring the broader set of capabilities that we have, which are across the board, covering servers, storage, networking, and PCs.

and we are the number one AI infrastructure provider to enterprises globally. So leveraging our global presence and leveraging our deep knowledge of India, we have put all of that thought into putting forth, as Mridu described, an AI blueprint, which is a practical guide for what we think not only the country but also companies need to do to be able to take advantage of this particular opportunity. The growth in terms of compute expected on the back of AI in India is expected to be well greater than 10 exaflops, and that is a significant amount of growth. And the AI workloads in India are growing at over 30% compound annual growth rate over the next few years, which is extremely significant.

So as we step back and look at what are the key elements of what a country and companies need to do, there really are three key elements. The first element is investments. And the investment really goes at the heart of the compute infrastructure that a country needs to put in place to ensure that everybody has access to that infrastructure, including MSMEs who sometimes do not have the capacity to be able to put their infrastructure in place. Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you can put in place which can run on that. So those are some of the key areas of the invest pillar of that. And there are other several other areas that I will encourage you to read through our blueprint that you will see both from a policy perspective and practical perspective that we think needs to get done.

The innovate side really comes down to areas like skilling, which I know when Minister Chaudhary joins us, we will get into that in quite some detail. But innovating around how the skilling occurs, all the way from schools to colleges to workforce entering employment and employers themselves, and what role they play across a whole spectrum of mediums to deliver that skilling, is a key part of the innovate pillar. And then the last one, evolve, revolves all around governance aspects. And governance covers multiple areas. One of the key areas within governance is fundamentally the regulatory framework that needs to exist and that countries need to put in place. The pace of change is so significant with AI, and the technologies are moving so rapidly, that one of the fundamental balances countries need to strike vis-à-vis regulations is the balance between innovation and responsibility while anchoring it to responsibility.

That is one of the key regulatory principles that needs to be in place. And the regulations have to be agile, because the technology is moving at such a fast pace that you cannot anchor the regulatory framework to yesterday’s technologies. And at the heart of it, I hope what you will take away from our blueprint is that realizing sovereign AI potential for any country, including India, is really about the public-private partnership. And it’s really about marrying the public resources with private innovation. And that really is the key to unlocking the full potential of AI and sovereign AI in this country. So, again, I would encourage you to read through the blueprint, and we look forward to your feedback, and we look forward to partnering closely with the Indian ecosystem to help India realize its aspirations with AI.

Thank you very much.

Mridu Bhandari

Thank you so much, Dr. Mohindra. I’m going to request you to please stay back on stage. I’d also like to invite Manish Gupta, President and Managing Director of Dell Technologies India, to join us here. This is the big moment, ladies and gentlemen. We are ready for the unveiling of the Dell Technologies Blueprint to accelerate India’s AI growth. Yes, that’s a photo moment for everyone. Thank you. Thank you so much, gentlemen. Thank you, Dr. Mohindra. Thank you, Mr. Gupta. Well, this blueprint advances India’s vision of Viksit Bharat 2047, positioning AI as a foundational engine for productivity, modernized public services, opportunity expansion and strategic autonomy. It centers on three pillars that we’ve been discussing: invest, invest in sovereign, scalable compute and data foundations; innovate, innovate with collaboration and with a future-ready workforce; and evolve, evolve into a responsible, agile, security-first governance structure.

So our next panel today will go inside this blueprint and India’s AI future to unpack how to convert this ambition into nation-scale execution. And that’s quite a mean feat for a country as diverse and as huge as India. So let’s welcome the panelists for all the tough questions this morning. A. S. Rajgopal, Managing Director and Chief Executive Officer of NextGen Cloud Technologies. Bhaskar Chakravarti, Dean of Global Business, the Fletcher School of Law and Diplomacy at Tufts University. Please have a seat, sir. And once again on stage, Manish Gupta, President and Managing Director of Dell Technologies India. And I will be moderating this session for you. Welcome, gentlemen. We are here this morning to really translate the invest, innovate and evolve pillars into very actionable steps that we all can take together to grow India’s AI ecosystem.

So I’ll begin with targeted questions to each of you. And of course, you are free to jump in to add thoughts to each other. It’s a candid, free-flowing conversation. Mr. Rajgopal, if I can start with you. So startups and MSMEs, they are the engine of our economy. They are also the engine of our innovation, especially as far as AI is concerned. But access to very reliable, affordable AI compute and cloud at scale continues to be a barrier for many of the small and medium enterprises. Now, in your opinion, what are some of the policies, some of the infrastructure, some of the market interventions that we need today to really unlock this access at scale?

A. S. Rajgopal

Yeah, actually, I think across many other countries that we have seen, India has got a much more comprehensive approach to this. I mean, in terms of actually they started this IndiaAI Mission, which is across seven pillars. I actually don’t see many startups actually using the facilities that are there, in the sense that you could, you know, apply for GPU infrastructure and you would get it at a subsidized rate. And some of them even got 100 percent of the GPUs that they need. So I think from India’s side, we’ve got a little less number of GPUs compared to what we really need. Maybe we need about 200,000 GPUs now, and we have about 40,000 to 50,000 now. So we all need to really invest more and then deploy more.

But the most important thing that you should see is that there is a good ecosystem. There is a system policy available for MSMEs and startups to leverage this. So there are a lot of innovative AI solutions being built. Most importantly, I also see the government actually setting the pace in terms of actually leveraging some of these. We ourselves, I mean, we did one job for the government, which is very, very unique. Like, we serve the Election Commission of India. They came to us and said, can you deduplicate and look at all the photographs that we have? This is like, you know, 90 crore pictures, right? So humanly it was not possible to deduplicate. You can’t check one photograph with 90 crore others.

We did that in a matter of 51 hours, and then we responded to them as to whether they had complications and all that.

Mridu Bhandari

Wow, that deserves some applause. That’s a humongous task.

A. S. Rajgopal

So what I see in this country is that I don’t think we will be pure play, this chatbot, I mean, the generative AI the way it has been envisaged, will be the primary use case. I think we are going far beyond, in terms of what AI can be applied to for actually improving the productivity of citizen services, and also giving use cases that these small and medium enterprises can actually use. In terms of actually enabling more GPUs, you see, we need a lot of money. I mean, we are investing about one hundredth of what the U.S. is investing, or even less. So for us to do more, I think we need to remove certain bottlenecks that are there. One of the things I believe can be done is, I’m sure everybody is familiar, we all pay GST. But basically, when we import servers, we pay GST on it, and then when we deliver service, I get that as an input credit, and we only pay the value-added piece back to the government. But the government gets the GST either way, and I get an input credit. So one thing that we could look at is whether we can waive the GST up front and just take the GST when the services are being delivered. What that would do is reduce my upfront infrastructure cost by about, you know, 18 percent. I don’t have to fund that up front and then pay interest on it, you know, or raise equity and pay more expectation on that. I mean, these things could really help; these are some of the things that the government should look at, and I think that’s a good point.

Thank you. The last point is that they’ve given a tax holiday for delivering services to the world, but I think India has got a lot more to do within India than just looking at the world market. And I believe that Indian service providers should get the same benefits as global providers would get when they host services in India. So maybe a GST waiver and some income tax benefits could be good.

Mridu Bhandari

Okay. GST waiver, income tax benefits, demands from the industry coming in. Professor Bhaskar, if I can come to you now. You’ve often argued that nations need to compete not just on technology, but on trust, on institutional strength, and of course, very, very inclusive digital participation. And that’s very critical for a country like India, because we are a country of many countries and the rich-poor divide is quite huge. There are a lot of bridges to gap here. There are a lot of gaps to bridge here. My apologies. Now, as India accelerates AI, what do you think are some of the biggest non-technical bottlenecks that we should be looking at addressing, which you believe could really, you know, limit this momentum that we are on currently?

And if we need larger societal good, what are some of the non -technical barriers that we need to immediately resolve?

Bhaskar Chakravarti

Yeah, so thank you. Thank you for the question, and thank you for the invitation to join this terrific panel. I think the issue of what the non-technical elements are, it’s great that you have included that question in this discussion, because in all the excitement around the technical infrastructure, which is, of course, enormous and is happening right here in India, it’s no secret that, you know, this is one of the biggest talent pools in the world, growing very, very fast. In two years, one in three developers is going to be in India; the largest mobile data pool anywhere in the world, the third largest data pool anywhere in the world once you take mobile and everything else together.

Growth in compute, growth in energy, growth in workloads, all that is happening, you know, which is fantastic. Now, when you think about what the other elements are that drive demand: what we have found, studying 125 countries to understand the role of technology in shaping lives and livelihoods, is that the single most important determinant of what keeps a country on trajectory, in terms of both the momentum of growth and the state of its digital evolution, is the demand side. So when I think about the demand side, obviously the core infrastructure, which has been talked about, is enormously important and is going to continue. So the demand side is going to continue to be a major contributor.

A second part, which has been talked about a lot in the Indian context, is the distribution infrastructure. With DPI and all the different platforms associated with it, we know that there’s a very powerful distribution system. Now, there’s a third infrastructure, which is the non -technical part, and that is what I would call the trust infrastructure. Now, when you think about trust, it’s a bit of a slippery concept. It’s very hard to define. Each one of us in our heads has an idea of what trust is. But if you force somebody to define it, we’ll struggle. The best thing about trust is I know what trust is when it is not there, when it is missing.

And then you have to ask the question from a human perspective, what really is trust? And how do I bake that into the policy systems, into the technical systems, into the marketing systems, into the narrative around the India ecosystem, which will then keep moving the system. And trust ultimately has to do with confidence: do people, the grantors of trust, have confidence that this invisible transaction that I’m engaging in, whether I’m putting my data into a system or whether I have entered my financial information and I’m expecting something on the other side, that this whole thing is going to be completed, and completed in a way that is reliable, that is repeatable, and will not take advantage of me?

So when you think about that whole trust ecosystem, India starts in a great place. Relative to where I come from, the United States, India is a far more trusting country in terms of trust in digital systems overall and certainly in terms of AI. There’s a tremendous level of enthusiasm in terms of embracing all things AI. And we’ve seen this right here, just the sheer numbers of people who’ve attended the conference. It shows a level of trust that is probably unmatched anywhere in the world. Now, this is a tremendous asset to start with. The challenge is that the institutional side of trust is still in the development process in India. So if you think about data governance, privacy, security, we are making progress, but we need to be much further along.

Other aspects of trust have to do with the fact that, say, the India AI mission is developed at the union level, at the center level. But the actual exercising of trust, the granting of trust, happens at the district level. And the district varies depending on whether I’m in Telangana or in Jharkhand. And at the union level, the principles that I’ve got in place need to be sensitive to how it’s being experienced from the ground up. So there are many different facets of trust that we need to work on and put in place, including transparency, which, with AI, is a challenge we are facing across the world. So this is an issue.

And then, you know, an approach to having redress and grievance systems and then literacy. You know, people need to be able to understand how to use this exciting technology and also protect themselves. I’ll pause.

Mridu Bhandari

Absolutely. Thank you for that wonderful perspective, Professor. Coming to you, Manish. Now, the Dell Technologies blueprint really calls for tighter alignment between policymakers, between industry, academia, institutional capacity. How can frameworks like this one really ensure growth that is both globally competitive for India, but also locally inclusive? Because there is a lot of regional growth. There is a lot of, you know, geography that needs to be taken into the fold of AI when we are talking about being globally competitive as well.

Manish Gupta

Thank you for that question. And, you know, just before I go in there, I would just add or maybe, you know, speak on a couple of topics. The professor just spoke about trust; while he spoke more from a non-technical standpoint, there is also a technical aspect to it, you know, around the entire governance. And it’s not just non-technical in today’s world; it’s driven by data privacy and all of the things around it. But equally, on literacy, there’s also explainability. You know, that trust comes really inherently once you have got explainability and people are aware of what outcomes are coming: is that explainable, and do they understand it? Right. So it’s a very, very interesting world that we are in.

Now, back to the blueprint that we were talking about, and Vivek articulated that beautifully well, across the three pillars of invest: invest in data center capacity, invest in energy infrastructure, invest in people, which goes back into the innovate side of it. Because, like we just discussed, we’ve possibly got the largest pool of engineers around AI, and that skill, on the innovate part, is what’s going to differentiate us, what’s going to make it really real and practically doable within the industry, and would differentiate us versus other nations, and equally make it more democratized and ubiquitous across the nation. And lastly, it’s really about how you continue to build in the guardrails.

How do you build the trust, like we just discussed, to ensure that the ecosystem knows that this entire process can be trusted and can be built upon? We’ve also got to remember the sustainability aspect of it, you know, which is where, as you look at the blueprint, you will see us talk about the fact that energy efficiency, sustainability of data centers, and new architectural models are becoming super important. And that’s something that NextGen, under Raj Kapoor’s leadership, has demonstrated in building highly sustainable, more energy-efficient data centers that will allow us to use our energy resources in the best possible manner while democratizing the access to compute capacity and to data center capacity for organizations and verticals of various sizes.

So those are going to be the critical pillars around which we really believe there’s practicality in adopting and differentiating ourselves in the AI arena as we go forward. Right.

Mridu Bhandari

I’m going to pick up on that data center piece and come to you, Rajgopal. Now, given the scale that India will really need for competing globally, what would it take to truly build sovereign, cost -efficient AI infrastructure that’s not just available to large enterprises, but is also very, very affordable for the long tail of innovators that we have in this country?

A. S. Rajgopal

Yeah, if you see the data center industry, it’s been pretty concentrated in Mumbai and a little bit in Chennai. And, you know, the other markets didn’t really take off as much as they should have. So what we are trying to do, I mean, our current plan, is to put about 100 megawatts of data centers across about six states. And what I see is that, going forward, this could be the model that can be built on, where each state has got a capacity. Because these states themselves have so much consumption that can happen, primarily because, if you start looking at applying AI to elevating the quality of education in India, which will be one of the first things that gets rolled out at scale in India, and also the healthcare aspects of it and citizen services, these things will require lots of computing capacity.

So we are working with a few state governments to actually see if we can bring a total transformation: actually consolidate their applications, bring about a data lake, and then apply data to it, sorry, intelligence to it, and take it to the masses. So the way I think it will evolve is that there will be many more regions where data centers will spring up, and when they’re distributed, they need to be interconnected. We have a very good interconnect system, and not just the telcos: we have railway networks, we have power networks which can actually provide good connectivity between them. When these things come into play, we can have a pretty distributed, good amount of compute in India that can actually serve this aspect.

But you must be aware that, you know, in this game, if you see my context, I have four dimensions to work on. One, of course, is geopolitics. There is quite a lot that is happening. We need to ensure that we have access to the technologies that we want to bring to India, so that India actually works on the best available infrastructure. It is slightly better now, but, you know, there were restrictions before. The other aspect is the amount of money. I think it requires multiple billions of dollars of investment, and that should be facilitated; that should really come into the country. When we have the money piece sorted out, then we build the infrastructure, a very good one. But the good thing that we can leverage is open source.

I mean, when we leverage open source, we can actually combine the infrastructure and open source and bring down the cost of compute so much that it is actually palatable for Indian citizens to use, because it’s not about serving the 2 or 3 percent of the population which pays income tax. It’s about serving the 90 percent others. The moment we succeed in doing this, I think the talent follows. We have access to good talent in India, but the quantity is missing. We have good people, but, you know, you need many more good people. They’re going all over the world; we want to bring them back.

These people can come back when this money and infrastructure fall into place. Right. That would ensure that India is playing a role which is actually pretty balanced, leveraging the global technologies and leveraging local talent, and actually setting the standard for the future. It’s a blueprint for all the other countries which don’t have this, which may miss out on this revolution and become digital colonies of the top two countries that are investing heavily in AI. So I personally believe we have a good blueprint, the blueprint can be applied to multiple countries, and we are well on the path. And I would prefer a distributed development of data centers across the country so that we are closer to the users and we

Mridu Bhandari

Absolutely. Absolutely. Well, Professor, coming to you next, studies on digital competitiveness have consistently shown that institutional capacity often determines whether technology adoption really translates into economic value or not. Now, in the Indian context, what do we really need to do to strengthen institutions and build the institutional muscle here that ensures AI drives very, very inclusive growth rather than deepening the tech divide? Because there are already many divides that we are battling with.

Bhaskar Chakravarti

and I can then end up with a solution to the problem. So the same thing for skill building, literacy. If I can see my ability to speak, my ability to read in multiple languages improve, you know, suddenly my trust goes up. So what is the minimum amount of institutional safeguards I need to provide that? Then I come to something like health care. When people have a much bigger chasm to cross, that’s where people have a lot of concerns: you know, should I be putting my information into the system? How is it going to be used? Can I trust the phone when I’ve relied on a doctor or relied on, you know, a wise person in my community?

I’ve relied on my mother, you know, for maternal health care advice. So how do I cross that chasm? You know, being able to provide the foundational trust elements is going to be important. So the answer to your question, the long answer is it depends. It depends on the user. As is the case to a lot of questions about India, it depends, right?

Mridu Bhandari

Well, Manish, you know, globally, we are seeing nations tie AI strategy to strategic autonomy. Now, whether it comes to the semiconductor ecosystem or the supply chain, you know, the strategic autonomy is becoming extremely important for countries. For India, what are the two or three foundational capabilities that you think we need to build domestically in this decade to ensure that we are true creators of AI value and not just consumers?

Manish Gupta

Awesome. Great question, Mridu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technology. You know, and the best example in my mind is UPI, digital payments. Ten years back, 11 years back, we were just not there, and today we are by far the largest in the number of digital payments, and the value of digital payments within India is multiple times that of the second economy that does this, right? So that’s a great example of how we have been able to localize, democratize, and proliferate the use of technology. Within that, I would really put on three hats here. The first, and just, you know, inverting the pyramid: not starting with technology but starting with people.

We have really got to shift our thinking from the users to the developers. You know, it’s got to move from 1 billion users to 1 million or 10 million developers, and that’s the skill set, that’s the IP that we are going to bring in, because we’ve got that largest talent pool residing within the country. The second, and again, I heard you talk about semiconductors and the supply chain: I think we have got to adopt the best that’s available globally. But equally, we’ve got to not just think about made in India but talk about trusted in India, which is where we work with organizations such as AISI, the Artificial Intelligence Safety Institute, to ensure that we are putting in the right guardrails, the right governance policy, and the entire institutional framework to ensure that the AI we are building here is trusted.

And lastly, it goes back to the same thing, and maybe I’ll use the same example. You know, we had the UPI of money. We need to have a UPI of AI, where we are building that at scale using the data sources that we have. We have the largest ones, and some of the initiatives that the government has taken, the India AI mission. But equally think about AI Kosh: there are more than 7,000 data sets that are now available to organizations of all sizes. Use that to ensure that we are developing for the country at population scale, through academia, through the private sector, through startups, through MSMEs, all coming together. And that really represents a consistent API layer that brings, theoretically, maybe even all of the data center and compute capacity we are creating as part of the AI mission, into one single layer that can be consumed by anybody and everybody across the nation, to start to innovate on that, to start to develop on that.

So going back, I know it’s a long answer, but if I were to summarize three things: UPI of money to UPI of AI; made in India transitioning to trusted in India; and, you know, from a billion users to maybe a million or 10 million developers.

Mridu Bhandari

Made in India, but made for the world.

Manish Gupta

Absolutely.

Mridu Bhandari

All right. So I’m going to ask each one of you for a few concise takeaways today. Now, the blueprint that we are talking about, that Dell Technologies has just unveiled, talks about agile, trusted AI governance with sectoral baselines, testbeds, strong institutional coordination. Yet globally, what we’ve seen is that speed often beats caution. And we are seeing, you know, some of the scary stuff coming out with AI as well. A lot of the agentic AI experiments that people are doing across the world, some of them call for caution. Now, in the Indian context, how do we stay globally competitive while also operationalizing very, very stringent safeguards?

And where should we really draw the line between the speed of innovation or innovation velocity and regulatory discipline? So, Rajgopal, if we can start with you,

A. S. Rajgopal

please. If you take the birth of Gen AI, I don’t think any of those rules were actually followed. And, you know, it was built on every piece of data that was available, in whatever form. Personally, in most places I’m actually trying to tell people: it’s not about ignoring the risk factors, but I think the regulation should not curtail the innovation in the thinking that we should be restrictive about whatever we are working with. So one of the most important things is, I think we should have less regulation in this space, because overall, you should look at AI like a utility; you will have more good with AI than bad. Yes, there are things that can be handled as we go along. If you see what we do in cloud today, we haven’t been able to sort out our security and data protection postures even today. It’s an evolving journey, and I think we will continuously catch up with the bad actors around AI adoption. That’s a journey; it cannot start or stop, or be implemented at a point in time. So we should keep looking at those aspects and keep putting in place, whether regulations or technology interventions, the pieces to ensure that we handle the problem, but we should go fast forward with implementation and adoption of AI. And I see a lot of Indian enterprises really being reluctant in terms of adopting it, especially the larger ones.

But if you see in India, I think government will set the pace and the startups and the MSMEs will actually catch on from there. And the large enterprises will actually struggle to catch up with the amount of innovation that’s happening in these

Mridu Bhandari

Where does that reluctance come from? Like, what are the top three reasons large organizations are reluctant? One is, of course, the fact that it’s not easy to adopt and transform a large organization. And perhaps startups and SMEs have the benefit of the agility and the small scale that they’re at. What else?

A. S. Rajgopal

So I think the first issue is not about security and those aspects. Most importantly, I think a lot of people are struggling to imagine where to apply AI. And the moment we understand that, you will start seeing that the benefits far outweigh the negative aspects. So I think that’s the first thing people should look at: not just leveraging Gen AI in its chatbot form, but really looking at where you can deploy it. I talked about that deduplication piece. We are working on more than 150 projects, and not all of them are bot-based. So that imagination is what is important, and once that imagination comes, the benefits will outweigh the negative aspects of whatever we

Mridu Bhandari

All right. Professor, final takeaway from you on speed versus caution.

Bhaskar Chakravarti

Yes, so if you think about speed, I always like to use the analogy of a car and a road. You can think about the speed that you can build into the car, the velocity of the Ferrari. And a lot of the conversations that have happened, not necessarily in this room but in other rooms, are about the Ferraris, whether I’m talking about agentic AI or AI optimized for certain applications, and the technical aspects, you know, are really, really important. Now, if you take the Ferrari and bring it into the Indian context, maybe it’s a Maruti or something else that I need to be talking about. But then the question is, what’s the road on which this Ferrari is going?

If it’s a dirt road full of potholes, even a Ferrari is not going to go very fast. So much of our conversation here is about that dirt road, and what are the things, what are the potholes that we need to fix. There’s one elephant on the table that we did not address, and I’m just going to leave it at that, which is, when we talk about trust, there’s a whole bunch of things you can do from an institutional standpoint to build trust: transparency, explainability, and so on. But there’s a huge issue that we need to think about, which is: what is going to be the impact on jobs?

This is the youngest major country in the world. It’s also one of the least employed countries in the world. And now, with AI coming in, is that going to help boost jobs, or is it going to take jobs away? If we don’t fix that problem, get ahead of it, all the trust we are talking about, all the institutions you build, could come down. So part of the policy infrastructure here is to figure out what the post-AI jobs picture is.

Mridu Bhandari

Absolutely. Manish, final word to you.

Manish Gupta

So, you know, I honestly don’t think that these are opposing forces, agility versus security. And, you know, particularly in this side of technology, you cannot have them act as opposing; it’s really about building the frameworks that are going to take both of them together. This is a fast-evolving technology, but equally, institutions will have to be faster than that in evolving. I think the government has done a phenomenal job in building some of the frameworks and institutions around that, the AISI as one example; on the privacy side, DPDP or DEPA, all of those acts being there are good frameworks to start with. And I’ll just index back on the question that you had asked Rajgopal earlier, on what the hesitation is from enterprises in adopting.

I don’t think it’s necessarily about security. You know, it’s really about asking how many of those have real use cases, and where the real use cases exist, how many of them are able to monetize, or able to scale from experimentation or pilots into production. And I think that’s a job that we as industry folks, who understand the technology and are innovating in this space, really need to bring to the table, so that we can bring this to the fore across the nation, to enterprises and organizations of all sizes, academia, and the public. I think that’s where this will get practical, but equally, these are not opposing

Mridu Bhandari

Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear: investing, innovating, and, of course, expanding skills pipelines and accelerating AI deployment are going to be key to India’s sovereign AI infrastructure. And we appreciate you joining us here and taking the time today. We are also very delighted to now be joined by Honorable Shri Jayant Chaudhary Ji, Minister of State for Education and Minister of Skill Development. Huge round of applause. We are going to have him up here shortly. Thank you, gentlemen. Thank you very much for joining us. So if we can have you up here for a quick photo op, and we will then continue the conversation.

Thank you. Thank you. Thank you, everyone. Thank you. Thank you, gentlemen. Manish, if I can please request you to felicitate our speakers. Mr. Rajgopal. Let’s have a huge round of applause for our panelists here today. Professor Bhaskar Chakravarti, thank you so much for joining us here today. If you all can just get off the stage for two minutes, we are getting it ready for our next conversation. Thank you so much. Well, ladies and gentlemen, time to move on now. If India’s AI ambition is to translate into real economic growth, it’s obviously not going to be any one entity’s job. It is not going to be driven only by the government or by the industry alone; it will be driven by partnership. India has the talent, the digital backbone, and the momentum. The real question, though, is how do we scale AI responsibly, securely, and inclusively? So our next fireside chat conversation will explore the role of AI in the development of the future, and what a powerful public-private model regarding AI could really look like.

And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping this journey, both from policy and industry perspectives. We have, of course, Honorable Shri Jayant Chaudhary Ji, Minister of State for Education and Minister of Skill Development and Entrepreneurship, Independent Charge, Government of India. And we have Dr. Vivek Mohindra, Special Advisor to the Vice Chairman and COO, Dell Technologies Global. If I can please have both of you up here for a quick conversation. Thank you so much. Thank you very much. Thank you, gentlemen. Well, it’s quite clear that public-private partnership is going to be critical to AI scaling and adoption in India. You know, Mr. Chaudhary, if I can start with you, how can PPP models really accelerate

large-scale AI infrastructure? What have been some of the on-ground experiences you’ve seen so far? And of course, the government has been moving at breakneck speed when it comes to deploying more technology, giving a fillip to innovation in India. How are you really ensuring trust, resilience, and long-term national competitiveness as AI becomes very mainstream in India?

Shri Jayant Chaudhary Ji

In the Indian context, as the audience is aware, we had a lot of catching up to do. And it’s fair to say that a lot of what we are seeing around us in AI has been facilitated by creating an ecosystem in a short span of time. Perhaps we may enjoy a second-mover advantage with regard to this technology. And that has come about only because of a strong top-down emphasis and push. The only reason why this event is happening here in India is because the leadership at the top understood very quickly the value and the potential of this new technology: that we should not view it as a disruption but view it as an opportunity to leapfrog legacy problems, deficit problems, and provide access and equity to our citizens and dignity to our workers. And that’s why the Prime Minister, you know, at the last event in France, shared that leadership space. Every opportunity he gets, he talks about skilling, about young people, about the potential for AI, and the enunciation that this technology needs to be human-centric. I think that has given us a real emphasis and a push for academia, for our industry, for our vibrant startup ecosystems, you know, to really think about what they are doing in this space. I think that is the background to the event that we are all witnessing.

Thousands and thousands of people, casual visitors, apart from those who are already entrenched in technology. And the message that goes out is that one billion strong young people in a developing country are already thinking about what AI means to them and what they can do in this space. Not just be consumers, but also be producers and innovators and thinkers and creators. Now, PPP in this domain for me, and when you think about Manav being human centric, citizen centric, the P that really matters is the people. And in that context, it’s important that you have the broad architecture which is open. This is something that India has stood for from the beginning. When there was a lot of debate about what should be the policy that enables AI.

But there was also a lot of fear around AI: about trust factors, about privacy, data, sovereignty, multiple issues about the human interface, the augmented human worker, what this means for education, for the future of jobs. A lot of those issues were being discussed and debated. And India said that, yes, it’s good to have a strategy, and out of that strategy and those experiences will evolve a robust policy. It is essential to have guardrails, but that is a starting point. Currently, we don’t want to infringe upon the possibility of innovation, and India took that approach. And we had open access to whatever compute: you know, the India AI mission was set up with a target of 18,000 GPUs, and in a short span, they’ve surpassed it.

It’s about 38,000. And the roadmap is that by the end of this year, it’s going to cross one lakh, threefold. Now think about it: all of this compute facility that has been created is a model of PPP. It has to be housed in educational institutions so that real research can happen in our premier educational institutions. This is a great time when academia is more important than it perhaps ever was in the Indian context. In the Indian context, academia was partly separated from industry and the real economy in our minds, but now every Indian citizen is realizing the value of research and innovation. Every family is saying that, no, this is important, we must value it. And every educational institution is saying that we are not divorced from the market and the needs of our community and society and nationhood building. So that engagement between nationhood building and the concept of technology is deeply immersed, thanks to the efforts of the India AI mission. And here I’ll just, you know, leave one data point with you: what is the cost of this compute facility? It’s being provided for startups, for researchers, at 65 rupees an hour.

You pay 300 rupees for a couple of hours at a PVR cinema. So it's probably the world's cheapest compute facility that is open. We are celebrating Sarvam. Let's not focus on for-profit versus not-for-profit, because everything has to be for the people. If you look at Sarvam, that's also in my mind a PPP: it has been incubated by IIT Madras and supported by the AI Mission. So that's another example. Because you're right, government alone cannot invest in everything from data to energy to compute to innovation. It really has to come from our citizens, our researchers, our technologists. It's a collective mission.

Mridu Bhandari

Absolutely. Well, Dr. Mohindra, getting your point of view now: if we look at PPP as far as job enablement is concerned, because that's the big concern citizens have, what's going to happen to our jobs? And of course, skilling is part of that journey. How can Dell partner with the Ministry of Skill Development and Entrepreneurship? What are you doing from a Future Skill Labs perspective? How are you accelerating AI apprenticeships so that jobs also move beyond the metros? Because tier-two and tier-three towns are where a lot of talent is sitting, but we are looking at a lack of access of sorts when it comes to skilling.

Dr. Vivek Mohindra

Yeah, I think that's a great question. And, Honorable Minister, good to see you again. It reminds me of a discussion we had the last time we met, in October 2024; I know we have missed each other often. And we covered very similar ground. I think at the heart of it, it does come down to, and I commend India on the progress it has made in making access to all these GPUs available, an industry-academia partnership, working closely with the Minister here. And our view is that when you think about skilling, there are three different levels: you have to think about schooling, you have to think about the college level, and you have to think about employment.

People entering the workforce, or employed today. And then you think about delivery of this through online, in-person, and incubation. So those are the two big dimensions. And from our perspective, we are very excited to partner on extending it to tier-two and tier-three towns, working closely with the Minister and other institutions in India. And at the core of it, having access to these GPUs at such an amazing price point really unlocks the potential.

Mridu Bhandari

Right. Well, finally, to both of you: as we embed AI into all our critical sectors, whether it's BFSI or telecom or agriculture, healthcare, education, governance has to move from intent to very strong operational safeguards. What does a Zero Trust AI architecture then practically look like at a national level? Minister, if we can start with you.

Shri Jayant Chaudhary Ji

Well, Zero Trust is an interesting terminology. For me, the way I look at it, it means that you have to be able to verify each and every protocol in your design. And in India, we generate a lot of data, and Indian citizens are quite open about access. Globally, privacy has been a major concern, and sometimes it also becomes an impediment for governance, because those data points aren't being collected, analyzed, researched. In the Indian context, citizens are okay with sharing their data. And I'm saying this with the knowledge that crores of APAAR IDs have been created using consent, and we have not received any blowback from students' families asking why.

Because they understand that if we are able, with technology, to customize and tailor the experience in the classroom for every student, so that no student gets left behind, what that means for employability, for knowledge acquisition, for the quality of that student's educational experience is immense. But once you're collecting the data, there is a lot of effort that needs to be put in so that trust is maintained, from zero trust to 100% trust in the public mind. So our data sets need to be segmented; there are protocols within the Government of India. In education, we're thinking of creating a complete AI stack, which means anonymized data sets will be made available to researchers for creating value, for the layers of innovation, for enabling startups to engage with the data that the Government of India and citizens have shared.

Similarly, in skilling, we have the Skill India Digital Hub, which is also looking at creating those data sets, which can then really help us unleash the next wave of innovation and meet the requirements we have in skilling. So I think once you have that system design in place, it can be achieved. The Prime Minister spoke about a label for content, that it should be verifiable and legal. With this technology, consumer awareness is a big aspect of it: how we engage with these tools, how we understand the outcome of our engagement, how true it is, how it is verifiable. Where are these AI models trained? Is there any bias in that data set? All that knowledge needs to be out there for consumers.

I also feel that there needs to be an audit trail for our new AI models. Maybe in the future you could have the CAG come out with an audit report on all the AI models. So it's a brave new future, but it's a balance. For partnership at scale, you need architecture with trust.

Mridu Bhandari

Absolutely. Final 30 seconds to you, Dr. Mohindra.

Dr. Vivek Mohindra

I think the Minister said it very eloquently. I would extend the notion of zero trust to start with data and go into AI models, usability, the cybersecurity elements, and identity and access management. Those are the ways I would extend it. And practically, it means, beyond the governance framework, having things like a national risk registry, observability, the ability to report whenever there is an infraction, and auditability. But, Minister, you said it very eloquently, and I think our AI blueprint has more details that would be worth looking at.

Related Resources: Knowledge base sources related to the discussion topics (11)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“Mridu Bhandari framed the summit’s purpose as ‘bridging the global AI divide’ and positioned AI as a catalyst for economic growth, social empowerment and India’s global leadership.”

The opening remarks in the transcript explicitly link the summit to bridging the global AI divide and cite AI’s role in driving economic growth, social empowerment and global leadership for India, matching the description in the knowledge base [S1].

Confirmed (high)

“The summit’s overarching theme is to bridge the global AI divide.”

Multiple sources from the AI Impact Summit 2026 reference the goal of “bridging the global AI divide,” confirming that this is a stated purpose of the event [S63] and [S64].

Additional Context (medium)

“The ‘Invest’ pillar calls for massive spending on sovereign, scalable compute and data foundations and on energy infrastructure to support AI growth in India.”

The knowledge base notes that achieving AI leadership in India will require strategic capital coordination at unprecedented scales, providing additional context for the need of large-scale investment in compute and energy infrastructure [S26].

Additional Context (medium)

“The ‘Evolve’ pillar emphasizes an agile, security‑first governance regime that balances innovation with responsibility and adopts a “trust‑first” regulatory principle.”

Other discussions highlight the global shortage of AI regulations (85% of states lack them) and call for multi-stakeholder, iterative governance, as well as openness and value distribution, which adds nuance to the described trust-first, responsible governance approach [S68] and [S67].

External Sources (68)
S1
Driving Indias AI Future Growth Innovation and Impact — Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear that inv…
S2
https://app.faicon.ai/ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Thank you. Thank you. Thank you, everyone. Thank you. Thank you, gentlemen. Manish, if I can please request you to felic…
S3
Driving Indias AI Future Growth Innovation and Impact — -A. S. Rajgopal- Managing Director and Chief Executive Officer of NextGen Cloud Technologies
S4
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — A. S. Rajgopal: Yeah, actually, I think across many other countries that we have seen, India has got a much comprehensi…
S5
Driving Indias AI Future Growth Innovation and Impact — Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear that inv…
S6
Driving Indias AI Future Growth Innovation and Impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S7
Driving Indias AI Future Growth Innovation and Impact — Right. Well, thank you, gentlemen, for that absolutely incredible conversation. You know, the takeaway is clear that inv…
S8
Driving Indias AI Future Growth Innovation and Impact — -Dr. Vivek Mohindra- Special advisor to the vice chairman and COO of Dell Technologies Global
S9
https://app.faicon.ai/ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S10
Driving Indias AI Future Growth Innovation and Impact — Thank you so much, Dr. Mohindra. I’m going to request you to please stay back on stage. I’d also like to invite Manish G…
S11
Driving Indias AI Future Growth Innovation and Impact — -Manish Gupta- President and Managing Director of Dell Technologies India
S12
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Mridu Bhandari- Moderator from Network18 This comprehensive discussion at the AI Impact Summit brought together leader…
S13
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — This comprehensive discussion at the AI Impact Summit brought together leaders from government, telecommunications, bank…
S14
Driving Indias AI Future Growth Innovation and Impact — Mridu Bhandari explains that the Dell Technologies blueprint is designed to support India’s long-term vision of Vixit Bh…
S15
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biolog…
S16
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S17
Keynote by Vivek Mahajan CTO Fujitsu India AI Impact Summit — AI commerce. What I’m going to talk about is something that was discussed in the plenary session yesterday as well about…
S18
Agenda item 5: Day 1 Afternoon session — Albania:Honorable Chair, dear colleagues and stakeholders, in the light of the evolving landscape of threats arising fro…
S19
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Kavita Bhatia:good morning, and good evening to all of you. I’ll just share my screen. Is the screen visible? Is the scr…
S20
Challenges and Opportunities: Emerging Technologies and Sustainability Impacts  — In addressing the aim for data centres to reach zero emissions, the speaker suggested this ambition should be nested wit…
S21
Atelier #1 : « Infrastructures et services numériques à l’ère de l’IA : quels enjeux de régulation, de sécurité et de souveraineté des données ? » — Drudeisha Madhub Au pas de course et je découvre le concept de la conclusion évolutive. Ça veut dire qu’au départ on ann…
S22
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S23
GermanAsian AI Partnerships Driving Talent Innovation the Future — This panel discussion focused on international cooperation between Germany and India in developing AI partnerships that …
S24
India’s AI Future Sovereign Infrastructure and Innovation at Scale — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S25
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — Public-private partnerships that leverage government policy support with private sector investment and expertise
S26
The Global Power Shift India’s Rise in AI & Semiconductors — Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital …
S27
The Global Power Shift India’s Rise in AI & Semiconductors — -Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital…
S28
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S29
Shaping the Future AI Strategies for Jobs and Economic Development — A fundamental insight emerged that trust must be designed into AI systems from inception rather than retrofitted later. …
S30
Closing remarks – Charting the path forward — Al Mesmar emphasizes the importance of unified policy approaches that can adapt to technological changes while maintaini…
S31
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S32
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S33
Democratizing AI Building Trustworthy Systems for Everyone — Private sector investment is necessary due to the scale of infrastructure needs that cannot be met by governments alone
S34
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S35
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S36
Driving Indias AI Future Growth Innovation and Impact — Summary:Rajgopal advocates for minimal regulation to avoid stifling innovation, arguing that benefits outweigh risks and…
S37
How to make AI governance fit for purpose? — – Jennifer Bachus- Anne Bouverot Effective light-touch regulation demands extensive effort to build comprehensive ecosy…
S38
WS #283 AI Agents: Ensuring Responsible Deployment — Light-touch regulatory approach with options for course correction before moving to fining regimes Soft law approaches …
S39
Driving Indias AI Future Growth Innovation and Impact — The main areas of disagreement center around regulatory approach (light-touch vs. balanced frameworks), implementation s…
S40
Building Population-Scale Digital Public Infrastructure for AI — Mundeli acknowledges the tension between the urgent need to deploy AI solutions to save lives and the critical importanc…
S41
Indias AI Leap Policy to Practice with AIP2 — Explanation:This unexpected disagreement emerges around the pace of AI deployment. Fred emphasizes the dual nature of AI…
S42
AI Meets Cybersecurity Trust Governance & Global Security — “Move fast, break things.”[113]”And the motto there is move deliberately and maintain things.”[114]”How to be able to ge…
S43
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — The collaborative model provides a template for other technology sectors and countries facing similar workforce challeng…
S44
Welcome address — This is critical for creating regulatory frameworks that don’t stifle innovation while still providing necessary oversig…
S45
Keynote Adresses at India AI Impact Summit 2026 — Summary:The speakers demonstrate remarkable consensus across multiple dimensions: the strategic importance of U.S.-India…
S46
Laying the foundations for AI governance — High level of consensus on problem identification and broad solution directions, suggesting significant potential for co…
S47
Cutting through Cyber Complexity / DAVOS 2025 — 3. Zero Trust Architecture: Jay Chaudhry, CEO of Zscaler, argued for a paradigm shift in cybersecurity approaches, advoc…
S48
Keynote-Nikesh Arora — The central thesis of Arora’s presentation revolves around a critical imbalance in AI development priorities. He argues …
S49
Towards a Reskilling Revolution — | Today, 2018 | Increasing, 2022 | Declining, 2022 …
S50
Exploring the need for speed in deploying information and communications technology for international development and bridging the digital divide — – Policy should address the question of who is slowing down development and conduct an accountability witch-hunt if nece…
S51
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S52
Driving Indias AI Future Growth Innovation and Impact — Evidence:The blueprint centers on three pillars: invest in sovereign, scalable compute and data foundations, innovate wi…
S53
Driving Indias AI Future Growth Innovation and Impact — Dr. Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars d…
S54
India’s AI Future Sovereign Infrastructure and Innovation at Scale — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S55
The Global Power Shift India’s Rise in AI & Semiconductors — -Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital…
S56
The Global Power Shift India’s Rise in AI & Semiconductors — Public-Private Partnership Models and Capital Requirements: The discussion highlighted the need for substantial capital …
S57
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — The participant explains that India is following the same successful approach used for DPI development, where basic buil…
S58
How Trust and Safety Drive Innovation and Sustainable Growth — Disagreement level:The level of disagreement is moderate but significant for policy implications. While all speakers agr…
S59
How Trust and Safety Drive Innovation and Sustainable Growth — I just have the image of the U.K. Information Commissioner doom -scrolling TikTok in my head now. Let’s do a quick round…
S60
Designing the AI Factory Scaling Compute to Sovereign AI — Gandotra praises the Indian government’s AI initiatives, stating that no other country has provided such comprehensive s…
S61
Skilling and Education in AI — This discussion focused on leveraging artificial intelligence as a tool for development and equality in India, examining…
S62
Skilling and Education in AI — It’s a very trusting infrastructure. Trust levels in India is in the 70 % range, whereas in the United States is in the …
S63
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Costa Rica has chosen to lead by example. Together with the OECD, we’re leading the development of the OECD AI Policy To…
S64
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 2 — Namaste. We deeply appreciate the kind hospitality we have received this week in India at the India AI Impact Summit. Co…
S65
Open Forum #13 Bridging the Digital Divide Focus on the Global South — Former WIPO Director-General Francis Gurry identified what he termed “a real crisis point,” outlining two converging cha…
S66
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S67
Global Perspectives on Openness and Trust in AI — Kapoor referenced India’s experience with digital public infrastructure over the past 12-15 years, where initial innovat…
S68
Welcome address — Bogdan-Martin also shared that an ITU survey revealed that 85% of member states lack AI regulations or policies, undersc…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Dr. Vivek Mohindra
6 arguments · 160 words per minute · 1078 words · 402 seconds
Argument 1
Blueprint outlines Invest, Innovate, Evolve pillars for sovereign AI (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra presented a three‑pillar framework—investment in compute and energy infrastructure, innovation through skilling and collaboration, and evolution via agile governance—to guide India’s sovereign AI development.
EVIDENCE
He identified three key elements: investment in compute and energy infrastructure, innovation focused on skilling from schools to workforce, and evolution covering regulatory frameworks that balance innovation with responsibility, emphasizing the need for agile regulations [17-30]. He also described the blueprint as a practical guide for the country and companies to leverage AI opportunities [14-16].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-pillar blueprint (invest, innovate, evolve) is described in the external source that outlines India’s sovereign AI strategy [S1].
MAJOR DISCUSSION POINT
Three‑pillar AI blueprint
Argument 2
Sovereign, scalable compute and data foundations are essential for nationwide AI adoption (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra stressed that building a sovereign, scalable compute and data infrastructure is critical for India to adopt AI across the nation, linking compute growth with energy and policy support.
EVIDENCE
He highlighted the expected compute growth of over 10 exa-flops and a 30% CAGR for AI workloads, underscoring the need for compute and energy infrastructure to support nationwide AI adoption [15-21].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 highlights the need for sovereign compute and data infrastructure with projected 10 exaflops growth, and S16 notes India’s shared compute framework of 38,000 GPUs, underscoring the shortage and push for expanded capacity.
MAJOR DISCUSSION POINT
Need for sovereign compute infrastructure
AGREED WITH
A. S. Rajgopal, Manish Gupta, Shri Jayant Chaudhary Ji
Argument 3
Regulations must be agile, balancing innovation with responsibility and accountability (Dr. Vivek Mohindra)
EXPLANATION
He argued that AI regulations should be flexible and keep pace with rapid technological change, ensuring a balance between fostering innovation and maintaining responsibility.
EVIDENCE
Dr. Mohindra noted that regulatory frameworks must be agile to avoid anchoring to outdated technologies and must balance innovation with responsibility and accountability [28-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 emphasizes that AI regulations should be agile and forward‑looking rather than anchored to outdated technologies.
MAJOR DISCUSSION POINT
Agile AI regulation
AGREED WITH
Manish Gupta, Bhaskar Chakravarti, A. S. Rajgopal
DISAGREED WITH
A. S. Rajgopal
Argument 4
Sovereign AI realized through public‑private partnership and domestic investment (Dr. Vivek Mohindra)
EXPLANATION
He emphasized that achieving sovereign AI capabilities requires a strong public‑private partnership that combines public resources with private sector innovation and investment.
EVIDENCE
He stated that sovereign AI potential depends on marrying public resources with private innovation, highlighting public-private partnership as the key to unlocking AI potential [32-34].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Both S1 and S16 discuss public‑private partnership models that combine public resources with private sector investment to build sovereign AI capabilities.
MAJOR DISCUSSION POINT
Public‑private partnership for sovereign AI
Argument 5
PPP essential for scaling AI infrastructure and delivering apprenticeship pathways (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra described how public‑private partnerships can expand AI infrastructure and support skill development, especially in tier‑2 and tier‑3 regions, through apprenticeship and training programs.
EVIDENCE
He outlined a three-level skilling approach (school, college, employment) delivered via online, in-person, and incubation models, and highlighted partnership with the Ministry of Skill Development to reach tier-2/3 areas [373-381].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 outlines PPP‑driven infrastructure expansion and a three‑level skilling approach (school, college, employment) that supports apprenticeship pathways.
MAJOR DISCUSSION POINT
PPP for infrastructure and skilling
AGREED WITH
Manish Gupta, A. S. Rajgopal, Bhaskar Chakravarti
Argument 6
Extend zero‑Trust from data to models, include risk registry, observability, and incident reporting (Dr. Vivek Mohindra)
EXPLANATION
He expanded the zero‑trust concept to cover data, AI models, cybersecurity, and identity management, recommending national risk registries and auditability to ensure trustworthy AI deployment.
EVIDENCE
Dr. Mohindra suggested that zero-trust should start with data, extend to AI models, include cybersecurity and IAM, and be supported by a national risk registry, observability, and auditability mechanisms [413-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 proposes extending zero‑trust from data to AI models, adding risk registries, observability and auditability mechanisms.
MAJOR DISCUSSION POINT
Comprehensive zero‑trust AI architecture
AGREED WITH
Bhaskar Chakravarti, Manish Gupta, Shri Jayant Chaudhary Ji
Manish Gupta
6 arguments · 174 words per minute · 1181 words · 405 seconds
Argument 1
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
EXPLANATION
Manish Gupta highlighted that the Dell blueprint builds on India’s talent pool, promotes democratized access to compute, and embeds trust and governance to ensure responsible AI use.
EVIDENCE
He noted that the blueprint emphasizes democratizing compute capacity, building trust through governance, and leveraging sustainable, energy-efficient data centers demonstrated by NextGen, thereby making AI accessible across the nation [156-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 notes that the blueprint leverages India’s talent pool, democratizes compute access, and embeds trust and governance.
MAJOR DISCUSSION POINT
Blueprint’s focus on democratization and trust
AGREED WITH
Bhaskar Chakravarti, Dr. Vivek Mohindra, Shri Jayant Chaudhary Ji
Argument 2
Build sustainable, energy‑efficient data centres; leverage open‑source to cut costs (Manish Gupta)
EXPLANATION
Manish described the importance of constructing energy‑efficient data centers and using open‑source technologies to reduce compute costs, enabling broader AI adoption.
EVIDENCE
He referenced NextGen’s highly sustainable, energy-efficient data centers that lower resource consumption while democratizing compute access for organizations of all sizes [160-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S20 discusses highly sustainable, energy‑efficient data‑center designs and the use of open‑source technologies to reduce compute costs.
MAJOR DISCUSSION POINT
Sustainable data‑center strategy
AGREED WITH
Dr. Vivek Mohindra, A. S. Rajgopal, Shri Jayant Chaudhary Ji
Argument 3
Shift focus from 1 billion users to millions of AI developers; create a unified “UPI of AI” data‑compute layer (Manish Gupta)
EXPLANATION
Manish argued that India should move from a user‑centric model to cultivating millions of AI developers, establishing a unified API layer akin to a “UPI of AI” that connects data sets and compute resources for nationwide innovation.
EVIDENCE
He described the need to transition from a billion users to millions of developers, proposing a consistent API layer that aggregates over 7,000 datasets and compute capacity into a single consumable platform for innovators [226-244].
MAJOR DISCUSSION POINT
Developer‑centric AI ecosystem
AGREED WITH
Dr. Vivek Mohindra, A. S. Rajgopal, Bhaskar Chakravarti
Argument 4
Industry‑academia collaboration and robust governance frameworks enable rapid, inclusive AI growth (Manish Gupta)
EXPLANATION
Manish emphasized that close collaboration between industry and academia, supported by strong governance, can accelerate inclusive AI deployment across India.
EVIDENCE
He highlighted the blueprint’s call for tighter alignment among policymakers, industry, and academia, and stressed that robust governance and collaboration are essential for rapid, inclusive AI growth [148-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 highlights the need for tighter alignment among policymakers, industry and academia, supported by robust governance frameworks.
MAJOR DISCUSSION POINT
Collaboration for inclusive AI
Argument 5
Build “trusted in India” AI ecosystem via AI Safety Institute and domestic governance (Manish Gupta)
EXPLANATION
He advocated for establishing a trusted AI ecosystem in India through the AI Safety Institute, ensuring that AI systems are safe, reliable, and governed by domestic standards.
EVIDENCE
Manish mentioned working with organizations such as the Artificial Intelligence Safety Institute to put guardrails and governance policies in place, moving from “made in India” to “trusted in India” [229-233].
MAJOR DISCUSSION POINT
Trusted AI ecosystem
Argument 6
Agility and security are complementary; institutions must evolve faster than technology (Manish Gupta)
EXPLANATION
He argued that agility and security should not be seen as opposing forces; instead, institutions need to evolve rapidly to keep pace with fast‑moving AI technologies.
EVIDENCE
Manish stated that agility and security are not opposing, and that institutions must evolve faster than technology, citing the need for frameworks that accommodate both speed and safeguards [291-295].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 stresses that agile regulatory frameworks can coexist with security‑first approaches, requiring institutions to evolve rapidly.
MAJOR DISCUSSION POINT
Balancing agility and security
AGREED WITH
Dr. Vivek Mohindra, Bhaskar Chakravarti, A. S. Rajgopal
DISAGREED WITH
Bhaskar Chakravarti
Mridu Bhandari
1 argument · 135 words per minute · 1976 words · 875 seconds
Argument 1
AI is a catalyst for growth; actionable steps needed to bridge the global AI divide (Mridu Bhandari)
EXPLANATION
Mridu framed AI as a driver of economic growth, social empowerment, and global leadership for India, calling for concrete actions to close the global AI divide.
EVIDENCE
In her opening remarks she stated that AI drives economic growth, social empowerment, and global leadership for India, and framed the conversation as a call to action to bridge the global AI divide [1-4].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 frames AI as a driver of economic growth and cites India’s rapid adoption of digital technologies such as UPI as evidence.
MAJOR DISCUSSION POINT
AI as growth catalyst
A. S. Rajgopal
3 arguments · 173 words per minute · 1866 words · 644 seconds
Argument 1
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
EXPLANATION
Rajgopal highlighted India’s GPU deficit, suggested fiscal incentives such as GST waivers and tax holidays, and advocated for a geographically distributed data‑center strategy to expand AI compute capacity.
EVIDENCE
He noted that India needs about 200,000 GPUs but currently has only 40,000-50,000, and proposed waiving GST on imported servers to reduce upfront costs, along with tax holidays for services [71-73][86-92]. He also described plans to deploy 100 MW of data centers across six states to create a distributed compute network [167-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S16 reports India’s shared compute framework of 38,000 GPUs and the need for additional capacity, reflecting the GPU shortage, and mentions a distributed data‑center rollout.
MAJOR DISCUSSION POINT
GPU shortage and fiscal incentives
Argument 2
Geopolitical constraints demand billions of dollars in investment and access to global technologies (A. S. Rajgopal)
EXPLANATION
He warned that geopolitical factors require multi‑billion‑dollar investments and unhindered access to global technologies to build India’s AI infrastructure.
EVIDENCE
Rajgopal referenced geopolitics as a key factor, stating that billions of dollars are needed for investment and that access to global technologies is essential for India’s AI ambitions [176-184].
MAJOR DISCUSSION POINT
Geopolitical investment needs
Argument 3
Minimal regulation to preserve innovation velocity; monitor risks continuously (A. S. Rajgopal)
EXPLANATION
He argued for a light regulatory approach that allows rapid AI innovation while continuously monitoring and addressing risks.
EVIDENCE
Rajgopal said that regulations should be minimal so as not to curtail innovation, emphasizing a utility-like view of AI and suggesting ongoing risk monitoring rather than restrictive rules [256-259].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 advocates balanced, agile regulation, providing a contrasting perspective to the minimal‑touch regulatory stance.
MAJOR DISCUSSION POINT
Light‑touch regulation
B
Bhaskar Chakravarti
3 arguments · 183 words per minute · 1314 words · 430 seconds
Argument 1
Trust infrastructure (transparency, grievance mechanisms, data governance) is the key non‑technical bottleneck (Bhaskar Chakravarti)
EXPLANATION
Bhaskar identified trust—encompassing transparency, grievance redress, and robust data governance—as the primary non‑technical barrier to AI adoption in India.
EVIDENCE
He described a “trust infrastructure” that includes transparency, grievance mechanisms, and data governance, noting that institutional trust is still developing and that these elements are essential for AI uptake [113-119][130-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 identifies trust infrastructure—including transparency, grievance mechanisms and data governance—as essential for AI adoption.
MAJOR DISCUSSION POINT
Trust as non‑technical bottleneck
AGREED WITH
Manish Gupta, Dr. Vivek Mohindra, Shri Jayant Chaudhary Ji
Argument 2
Enhance AI literacy, explainability, and address job‑impact concerns through institutional safeguards (Bhaskar Chakravarti)
EXPLANATION
He stressed the need for AI literacy, explainability of outcomes, and policies to mitigate job displacement, all supported by institutional safeguards.
EVIDENCE
Bhaskar highlighted the importance of explainability, AI literacy, and the need for grievance and redress mechanisms, and warned that without addressing job impact, trust could erode [151-154][282-289].
MAJOR DISCUSSION POINT
AI literacy and job impact
AGREED WITH
Dr. Vivek Mohindra, Manish Gupta, A. S. Rajgopal
Argument 3
Speed requires solid “road” – trust, transparency, and job‑impact policies before high‑velocity deployment (Bhaskar Chakravarti)
EXPLANATION
Using a car analogy, he argued that rapid AI deployment needs a reliable “road”—trust, transparency, and job‑impact policies—to avoid pitfalls.
EVIDENCE
He compared AI speed to a Ferrari on a dirt road, emphasizing that trust, transparency, and job-impact policies are the essential road conditions before high-speed deployment [275-282][284-289].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1’s discussion of trust, transparency and agile governance supports the metaphor that reliable “road” conditions are needed before high‑speed AI deployment.
MAJOR DISCUSSION POINT
Road metaphor for responsible AI speed
DISAGREED WITH
Manish Gupta
S
Shri Jayant Chaudhary Ji
4 arguments · 148 words per minute · 1259 words · 508 seconds
Argument 1
Zero‑Trust approach: verify every protocol, segment data, and maintain audit trails (Shri Jayant Chaudhary Ji)
EXPLANATION
He defined a zero‑trust model for AI as verifying each protocol, segmenting data, and ensuring auditability to build public confidence.
EVIDENCE
He explained that zero-trust means verifying every protocol, segmenting data, and creating audit trails, noting the need for data segmentation, audit reports, and transparent model provenance [386-398][401-409].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 defines zero‑trust for AI as verification of each protocol, data segmentation and audit trails to build public confidence.
MAJOR DISCUSSION POINT
Zero‑trust definition
AGREED WITH
Bhaskar Chakravarti, Manish Gupta, Dr. Vivek Mohindra
Argument 2
PPP is people‑centric; cheap, open compute facilities and academia integration drive innovation (Shri Jayant Chaudhary Ji)
EXPLANATION
He described PPP as focusing on people, providing low‑cost open compute resources housed in academic institutions to foster inclusive innovation.
EVIDENCE
He highlighted that PPP delivers cheap compute (₹65 per hour), is housed in educational institutions, and integrates academia with industry, citing the Sarvam initiative as an example of a low-cost, open compute facility [333-351].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S16 describes a PPP model delivering low‑cost compute resources housed in academic institutions, aligning with a people‑centric approach.
MAJOR DISCUSSION POINT
People‑centric PPP model
AGREED WITH
Dr. Vivek Mohindra, Manish Gupta
Argument 3
PPP‑driven skilling for tier‑2/3 regions; human‑centric education and apprenticeship programs (Shri Jayant Chaudhary Ji)
EXPLANATION
He advocated for public‑private partnership‑based skill development targeting tier‑2 and tier‑3 cities, emphasizing human‑centric education and apprenticeship pathways.
EVIDENCE
He mentioned the Ministry’s focus on tier-2/3 skilling, apprenticeship programs, and the need for human-centric education, linking these to PPP initiatives and the AI mission’s compute facilities [330-338][361-368].
MAJOR DISCUSSION POINT
Tier‑2/3 PPP skilling
Argument 4
Zero‑Trust means verifying every interaction, segmenting data, and providing auditability at national scale (Shri Jayant Chaudhary Ji)
EXPLANATION
He reiterated the zero‑trust concept, emphasizing nationwide verification, data segmentation, and auditability to ensure trustworthy AI deployment.
EVIDENCE
He restated that zero-trust involves verification of each interaction, data segmentation, and audit trails, and linked it to national policy and governance structures [386-398][401-409].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 reiterates the zero‑trust model of nationwide verification, data segmentation and auditability for trustworthy AI deployment.
MAJOR DISCUSSION POINT
National‑scale zero‑trust
Agreements
Agreement Points
Public‑private partnership (PPP) is essential for building sovereign AI infrastructure, scaling compute, and delivering skilling/apprenticeship pathways.
Speakers: Dr. Vivek Mohindra, Manish Gupta, Shri Jayant Chaudhary Ji
PPP essential for scaling AI infrastructure and delivering apprenticeship pathways (Dr. Vivek Mohindra)
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
PPP is people‑centric; cheap, open compute facilities and academia integration drive innovation (Shri Jayant Chaudhary Ji)
All three speakers stress that a strong PPP model is the cornerstone for expanding AI compute capacity, making it affordable, and creating apprenticeship and skilling programmes, linking public resources with private innovation and academic ecosystems [32-34][373-381][146-149][148-155][333-351].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the emphasis on public-private partnerships for AI scaling highlighted in reports on sustainability and AI infrastructure, which note that PPPs are essential to mobilise private investment and accelerate compute capacity, as described in S32, S33 and the India AI Mission’s PPP model [S35].
Regulatory frameworks must be agile, balanced and supportive of innovation while ensuring responsibility and trust.
Speakers: Dr. Vivek Mohindra, Manish Gupta, Bhaskar Chakravarti, A. S. Rajgopal
Regulations must be agile, balancing innovation with responsibility and accountability (Dr. Vivek Mohindra)
Agility and security are complementary; institutions must evolve faster than technology (Manish Gupta)
Trust infrastructure (transparency, grievance mechanisms, data governance) is the key non‑technical bottleneck (Bhaskar Chakravarti)
Minimal regulation to preserve innovation velocity; monitor risks continuously (A. S. Rajgopal)
Speakers converge on the need for flexible, forward-looking regulation that does not hinder AI innovation, emphasizing trust, transparency and rapid institutional adaptation; even the industry voice calling for minimal rules aligns with the broader call for agility [28-32][291-295][130-138][256-259].
POLICY CONTEXT (KNOWLEDGE BASE)
The call for agile yet responsible regulation aligns with discussions on balanced regulatory frameworks that combine flexibility with accountability, a theme explored in the debate between minimal and balanced approaches in S36, S37 and the broader governance consensus in S46.
Massive investment in compute infrastructure and distributed data‑centers is required to meet India’s AI workload growth and GPU shortage.
Speakers: Dr. Vivek Mohindra, A. S. Rajgopal, Manish Gupta, Shri Jayant Chaudhary Ji
Sovereign, scalable compute and data foundations are essential for nationwide AI adoption (Dr. Vivek Mohindra)
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
Build sustainable, energy‑efficient data centres; leverage open‑source to cut costs (Manish Gupta)
Zero‑Trust approach: verify every protocol, segment data, and maintain audit trails (Shri Jayant Chaudhary Ji)
All parties highlight the urgent need for large-scale, geographically distributed compute capacity, noting projected growth to 10 exaflops, a current GPU gap of roughly 160,000 units, and the role of sustainable, low-cost data centers in serving the long tail of innovators [15-21][71-73][167-176][182-184][160-162][345-351].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s compute expansion plans, including the shared GPU framework of 38,000 GPUs and additional allocations, are documented in the sovereign AI mission briefing, underscoring the need for massive infrastructure investment as noted in S35 and the infrastructure-focused consensus at the AI Impact Summit [S45].
Developing a skilled AI workforce and expanding the developer ecosystem are critical for India’s AI future.
Speakers: Dr. Vivek Mohindra, Manish Gupta, A. S. Rajgopal, Bhaskar Chakravarti
PPP essential for scaling AI infrastructure and delivering apprenticeship pathways (Dr. Vivek Mohindra)
Shift focus from 1 billion users to millions of AI developers; create a unified “UPI of AI” data‑compute layer (Manish Gupta)
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
Enhance AI literacy, explainability, and address job‑impact concerns through institutional safeguards (Bhaskar Chakravarti)
Consensus that AI adoption must be paired with multi-level skilling, from schools to higher education to on-the-job training, and that expanding the pool of AI developers is as important as expanding user adoption, with industry, government and academia all stressing this need [373-381][226-244][188-195][151-154][205-210].
POLICY CONTEXT (KNOWLEDGE BASE)
Workforce upskilling and developer ecosystem growth are repeatedly highlighted as priorities for India’s AI future, with evidence from the AI-powered chips and skills collaboration model [S43] and the reskilling analysis in S49.
A robust trust and zero‑trust architecture, including transparency, explainability, data segmentation and auditability, is essential before high‑speed AI deployment.
Speakers: Bhaskar Chakravarti, Manish Gupta, Dr. Vivek Mohindra, Shri Jayant Chaudhary Ji
Trust infrastructure (transparency, grievance mechanisms, data governance) is the key non‑technical bottleneck (Bhaskar Chakravarti)
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
Extend zero‑Trust from data to models, include risk registry, observability, and incident reporting (Dr. Vivek Mohindra)
Zero‑Trust approach: verify every protocol, segment data, and maintain audit trails (Shri Jayant Chaudhary Ji)
All speakers underline that trust, built through transparent governance, explainability, data segmentation and continuous audit, must come first, likened to a reliable road for a fast car, to ensure responsible AI rollout [113-119][130-138][148-155][413-416][386-398][401-409].
POLICY CONTEXT (KNOWLEDGE BASE)
The requirement for a zero‑trust, transparent AI architecture is reinforced by industry advocacy for zero‑trust models (e.g., Zscaler’s framework) and calls for deliberate, safety‑first deployment, as presented in S47 and the trust‑governance discussion in S42.
Similar Viewpoints
Both emphasize that a collaborative PPP model combined with agile, forward‑looking regulation is vital for rapid, inclusive AI growth [32-34][373-381][146-149][148-155][291-295][28-32].
Speakers: Dr. Vivek Mohindra, Manish Gupta
PPP essential for scaling AI infrastructure and delivering apprenticeship pathways (Dr. Vivek Mohindra)
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
Agility and security are complementary; institutions must evolve faster than technology (Manish Gupta)
Regulations must be agile, balancing innovation with responsibility and accountability (Dr. Vivek Mohindra)
Both stress that trust, transparency and a unified, developer‑centric platform are essential foundations for AI adoption [113-119][130-138][148-155][226-244].
Speakers: Bhaskar Chakravarti, Manish Gupta
Trust infrastructure (transparency, grievance mechanisms, data governance) is the key non‑technical bottleneck (Bhaskar Chakravarti)
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
Shift focus from 1 billion users to millions of AI developers; create a unified “UPI of AI” data‑compute layer (Manish Gupta)
Both highlight that affordable, widely‑distributed compute resources delivered through people‑centric PPPs are key to unlocking AI for startups and MSMEs [71-73][167-176][333-351].
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
PPP is people‑centric; cheap, open compute facilities and academia integration drive innovation (Shri Jayant Chaudhary Ji)
Zero‑Trust approach: verify every protocol, segment data, and maintain audit trails (Shri Jayant Chaudhary Ji)
Unexpected Consensus
Agreement between a government minister and a corporate executive on a detailed zero‑trust architecture for AI at national scale.
Speakers: Shri Jayant Chaudhary Ji, Dr. Vivek Mohindra
Zero‑Trust approach: verify every protocol, segment data, and maintain audit trails (Shri Jayant Chaudhary Ji)
Extend zero‑Trust from data to models, include risk registry, observability, and incident reporting (Dr. Vivek Mohindra)
Despite their different institutional roles, both converge on a technically detailed zero-trust model that spans data, models, risk registries and auditability, showing rare cross-sector alignment on security architecture [386-398][401-409][413-416].
POLICY CONTEXT (KNOWLEDGE BASE)
The reported minister‑executive agreement on a national zero‑trust AI architecture reflects the same zero‑trust principles championed by leading cybersecurity executives and aligns with the broader push for deliberate, secure AI deployment noted in S47 and S42.
Overall Assessment

The panel shows strong convergence on five core themes: (1) PPP as the engine for sovereign AI infrastructure and skilling; (2) the necessity of agile, balanced regulation; (3) massive investment in distributed compute capacity; (4) comprehensive capacity development across education and industry; and (5) a trust‑first, zero‑trust architecture before high‑speed AI deployment. Divergence appears mainly around the intensity of regulation (minimal versus agile), but even this reflects a shared desire for flexibility rather than rigidity.

High consensus on strategic pillars, infrastructure, skilling and trust, indicating a unified national roadmap; moderate disagreement on regulatory strictness, suggesting future policy debates will focus on calibrating the right balance between innovation speed and safeguards.

Differences
Different Viewpoints
Regulatory approach – agile, balanced regulation vs minimal, light‑touch regulation
Speakers: Dr. Vivek Mohindra, A. S. Rajgopal
Regulations must be agile, balancing innovation with responsibility and accountability (Dr. Vivek Mohindra)
Minimal regulation to preserve innovation velocity; monitor risks continuously (A. S. Rajgopal)
Dr. Mohindra argues that AI regulations need to be agile, keeping pace with rapid technological change and striking a balance between fostering innovation and ensuring responsibility [28-32]. Rajgopal counters that regulation should be kept to a minimum so as not to curtail innovation, advocating a light-touch approach with continuous risk monitoring [256-259].
POLICY CONTEXT (KNOWLEDGE BASE)
The disagreement over regulatory style echoes the documented split between light‑touch and balanced regulatory philosophies, captured in the contrasting positions of minimal regulation advocates and balanced‑framework proponents in S36, S37, S38 and summarised in S39.
Speed of AI deployment versus need for trust and safeguards
Speakers: Bhaskar Chakravarti, Manish Gupta
Speed requires solid “road” – trust, transparency, and job‑impact policies before high‑velocity deployment (Bhaskar Chakravarti)
Agility and security are complementary; institutions must evolve faster than technology (Manish Gupta)
Bhaskar warns that rapid AI rollout is only possible if a solid trust infrastructure (transparency, grievance mechanisms, and policies addressing job impact) is in place, using a car-on-a-dirt-road analogy [275-282][284-289]. Manish argues that agility and security should not be seen as opposing forces; instead, institutions must evolve quickly to provide frameworks that accommodate both speed and safeguards [291-295].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between rapid AI rollout and safeguarding trust mirrors multiple analyses that stress the need for safety frameworks before high-speed deployment, including healthcare-focused safety concerns [S40], the dual-risk/benefit narrative in S41, and the “move fast vs. move deliberately” debate in S42, S48, and S51.
Unexpected Differences
Fiscal incentives (GST waiver, tax holidays) versus focus on trust and governance
Speakers: A. S. Rajgopal, Manish Gupta, Dr. Vivek Mohindra
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
Blueprint aligns with India’s strengths, democratizes AI access and stresses trusted AI (Manish Gupta)
Regulations must be agile, balancing innovation with responsibility and accountability (Dr. Vivek Mohindra)
Rajgopal pushes concrete fiscal measures (GST waiver, tax holidays) to lower upfront costs for AI compute, a stance not echoed by Manish or Dr. Mohindra, who focus on trust, governance, and agile regulation rather than tax policy. The divergence between a fiscal‑policy solution and a governance‑centric approach was not anticipated given the otherwise aligned emphasis on PPP and scaling AI.
Prioritisation of job‑impact policies versus speed and security
Speakers: Bhaskar Chakravarti, Manish Gupta
Speed requires solid “road” – trust, transparency, and job‑impact policies before high‑velocity deployment (Bhaskar Chakravarti)
Agility and security are complementary; institutions must evolve faster than technology (Manish Gupta)
Bhaskar places job‑impact mitigation at the core of the trust infrastructure needed before rapid AI rollout, whereas Manish does not address job impact, concentrating on institutional agility and security. The omission of employment considerations by Manish, given the prominence of job‑impact concerns in Bhaskar’s argument, was unexpected.
POLICY CONTEXT (KNOWLEDGE BASE)
The clash between job‑impact policies and deployment speed/security reflects the broader institutional priority debate identified in S39, which contrasts people‑centric workforce measures with infrastructure‑first or security‑first approaches, and is echoed in workforce‑focused discussions such as S43.
Overall Assessment

The discussion shows strong consensus on the need for massive investment, public‑private partnership, and capacity building, but sharp disagreements on regulatory philosophy (agile vs minimal) and on how to balance speed with trust and job‑impact safeguards. These divergences could affect the pace and design of India’s AI blueprint, requiring further negotiation to align policy with industry and civil‑society expectations.

Moderate to high – while participants share common goals, the contrasting views on regulation and the sequencing of trust versus speed create substantive friction that could delay implementation if not reconciled.

Partial Agreements
All three emphasize that a strong public‑private partnership (PPP) is essential to scale AI infrastructure, democratize access, and drive inclusive innovation. Dr. Mohindra highlights the partnership as the key to sovereign AI and investment; Manish stresses collaboration and governance; Chaudhary stresses people‑centric, low‑cost compute housed in academia [33-34][35-36][148-155][333-351].
Speakers: Dr. Vivek Mohindra, Manish Gupta, Shri Jayant Chaudhary Ji
Sovereign AI realized through public‑private partnership and domestic investment (Dr. Vivek Mohindra)
Industry‑academia collaboration and robust governance frameworks enable rapid, inclusive AI growth (Manish Gupta)
PPP is people‑centric; cheap, open compute facilities and academia integration drive innovation (Shri Jayant Chaudhary Ji)
All agree that massive investment in compute infrastructure is critical. Dr. Mohindra points to the need for sovereign, scalable compute; Rajgopal quantifies the GPU shortfall and suggests fiscal incentives and a distributed data‑center model; Manish adds that sustainability and open‑source can reduce costs. The consensus is on the necessity of large‑scale, cost‑effective compute capacity, though the pathways differ [15-21][71-73][167-176][160-162].
Speakers: Dr. Vivek Mohindra, A. S. Rajgopal, Manish Gupta
Sovereign, scalable compute and data foundations are essential for nationwide AI adoption (Dr. Vivek Mohindra)
Severe GPU shortage; propose GST waiver, tax holidays, and distributed data‑center rollout (A. S. Rajgopal)
Build sustainable, energy‑efficient data centres; leverage open‑source to cut costs (Manish Gupta)
All three stress multi‑level skilling as a pillar of AI adoption. Dr. Mohindra outlines a three‑level skilling approach (school, college, employment) delivered via online and in‑person modes [373-381]. Manish highlights the need for collaboration and governance to accelerate inclusive growth [148-155]. Chaudhary emphasizes people‑centric PPP programmes targeting tier‑2/3 cities and apprenticeship pathways [330-338][361-368].
Speakers: Dr. Vivek Mohindra, Manish Gupta, Shri Jayant Chaudhary Ji
PPP essential for scaling AI infrastructure and delivering apprenticeship pathways (Dr. Vivek Mohindra)
Industry‑academia collaboration and robust governance frameworks enable rapid, inclusive AI growth (Manish Gupta)
PPP‑driven skilling for tier‑2/3 regions; human‑centric education and apprenticeship programs (Shri Jayant Chaudhary Ji)
All acknowledge that trust and governance are essential for AI deployment. Bhaskar identifies trust infrastructure as the primary non‑technical barrier; Manish proposes a “trusted in India” ecosystem through the AI Safety Institute; Dr. Mohindra links sovereign AI to public‑private partnership and agile governance. They converge on the need for robust, trustworthy frameworks, though their emphases differ [113-119][130-138][229-233][33-34].
Speakers: Bhaskar Chakravarti, Manish Gupta, Dr. Vivek Mohindra
Trust infrastructure (transparency, grievance mechanisms, data governance) is the key non‑technical bottleneck (Bhaskar Chakravarti)
Build “trusted in India” AI ecosystem via AI Safety Institute and domestic governance (Manish Gupta)
Sovereign AI realized through public‑private partnership and domestic investment (Dr. Vivek Mohindra)
Takeaways
Key takeaways
Dell’s AI Blueprint is built around three pillars: Invest (compute, data, energy), Innovate (skill pipelines, collaboration) and Evolve (agile, trusted governance).
Public‑private partnership (PPP) is seen as the core execution model for scaling sovereign AI infrastructure and skilling across India.
India faces a severe shortage of GPUs and compute capacity; proposals include massive data‑center roll‑out, GST waivers, tax holidays and leveraging open‑source to lower costs.
Trust infrastructure (transparency, explainability, grievance mechanisms, data governance and a zero‑trust architecture) is the primary non‑technical bottleneck.
Skill development must shift from a billion‑user focus to creating millions of AI developers; a unified “UPI of AI” data‑compute layer is envisioned.
Regulatory frameworks must be agile, balancing rapid innovation with responsibility; a zero‑trust model should verify every protocol and maintain audit trails.
Strategic autonomy requires domestic capabilities that are “trusted in India” while still accessing global technologies.
Resolutions and action items
Participants were asked to read the detailed Dell AI Blueprint and provide feedback.
Dell Technologies will partner with the Ministry of Skill Development & Entrepreneurship to expand AI apprenticeship and skilling programs in Tier‑2/3 regions.
Proposal to waive GST on imported AI servers and provide income‑tax incentives for AI service providers to reduce upfront infrastructure costs.
NextGen Cloud Technologies plans to deploy ~100 MW of data‑center capacity across six Indian states and interconnect them via existing telecom, rail and power networks.
Goal to increase the national GPU inventory from roughly 40,000–50,000 to 100,000 by year‑end, supporting the India AI Mission’s target of 1 lakh GPUs.
Creation of a national AI risk registry, observability platform and auditability mechanisms (including possible CAG audits) as part of a zero‑trust AI architecture.
Establishment of open, low‑cost compute facilities (e.g., ₹65 per hour) for startups, researchers and academia.
Collaboration with the Artificial Intelligence Safety Institute (AISI) to develop “trusted in India” governance standards.
Unresolved issues
Specific reasons for large enterprises’ reluctance to adopt AI beyond general awareness; concrete use‑case identification and monetisation pathways remain unclear.
How to finance the multi‑billion‑dollar investment required for sovereign compute infrastructure and energy upgrades.
Detailed design of the national zero‑trust AI framework, including data segmentation standards and real‑time incident reporting processes.
Mechanisms for addressing potential AI‑induced job displacement and establishing comprehensive up‑skilling or reskilling programs.
Uniform implementation of trust and data‑governance policies across diverse state administrations.
Balancing GST waivers and tax incentives with fiscal sustainability for the government.
Finalization of an agile regulatory regime that can keep pace with rapid AI advancements without stifling innovation.
Suggested compromises
Adopt a lighter regulatory approach initially to preserve innovation velocity while instituting continuous risk monitoring (Rajgopal’s suggestion).
Provide targeted tax relief (GST waiver, income‑tax holidays) rather than full exemption, balancing industry cost reduction with government revenue concerns.
Combine public‑sector resources (compute, data, policy) with private‑sector innovation to achieve sovereign AI goals without exclusive reliance on either side.
Leverage open‑source technologies to cut costs while maintaining domestic control, satisfying both cost‑efficiency and strategic autonomy objectives.
Implement a phased, state‑by‑state rollout of data‑centers and trust mechanisms, allowing adjustments based on local feedback and capacity.
Thought Provoking Comments
The regulatory framework has to be agile because the technology is moving at such a fast pace that you cannot anchor the regulatory framework to yesterday’s technologies.
Highlights the need for dynamic, forward‑looking policy rather than static rules, framing regulation as a core pillar (Evolve) of the AI blueprint.
Set the agenda for the governance discussion, prompting participants to consider how regulation can keep pace with AI advances and leading to later remarks on trust, explainability, and zero‑trust architecture.
Speaker: Dr. Vivek Mohindra
If we could waive GST on imported servers and only collect GST when services are delivered, it would reduce upfront infrastructure costs by about 18% for MSMEs.
Introduces a concrete fiscal policy suggestion that directly addresses a major barrier for small firms accessing AI compute.
Shifted the conversation from abstract investment needs to specific tax reforms, eliciting agreement on the need for GST waivers and income‑tax benefits and deepening the policy‑focused segment of the dialogue.
Speaker: A. S. Rajgopal
The single most important determinant of a country’s AI momentum is the demand side, and we need to build a ‘trust infrastructure’—transparent, reliable systems that give people confidence in data handling and outcomes.
Moves the focus from hardware to the softer, non‑technical layer of trust, framing it as essential infrastructure alongside compute and data.
Redirected the panel to discuss non‑technical bottlenecks, prompting further exploration of trust, transparency, grievance mechanisms, and later influencing the Ferrari‑vs‑Maruti metaphor and job‑impact concerns.
Speaker: Bhaskar Chakravarti
We need a ‘UPI of AI’—a single, open API layer that aggregates all government data sets and compute capacity so any developer, from a startup to a large enterprise, can plug in and innovate.
Proposes a unifying technical and policy construct that mirrors India’s successful digital payments system, offering a clear pathway to democratize AI access.
Expanded the discussion from isolated data centers to a national AI platform, reinforcing the theme of public‑private partnership and influencing later remarks on open‑source cost reduction and distributed data centers.
Speaker: Manish Gupta
Our compute facility is being offered at 65 rupees per hour—the world’s cheapest—making AI resources openly available to startups and researchers.
Provides a tangible metric of how PPP can dramatically lower cost barriers, underscoring India’s competitive advantage in affordable AI infrastructure.
Validated earlier calls for affordable compute, reinforced the narrative of inclusive growth, and set the stage for discussing scaling PPP models and regional skilling initiatives.
Speaker: Shri Jayant Chaudhary Ji
Speed is like a Ferrari, but the road matters; in India the road is full of potholes—trust, transparency, explainability, and especially the impact on jobs are the potholes we must fix before the Ferrari can go fast.
Uses a vivid metaphor to integrate the speed‑vs‑safety debate with the often‑overlooked employment implications of AI.
Re‑oriented the conversation toward societal consequences, prompting participants to address job displacement concerns and the need for institutional safeguards before aggressive AI rollout.
Speaker: Bhaskar Chakravarti
Zero‑Trust AI architecture should start with data, flow through models, include usability, cybersecurity, identity‑access management, and be backed by a national risk registry and auditability.
Offers a concrete, multi‑layered framework for operationalizing trustworthy AI at a national scale.
Provided a practical culmination to the governance thread, giving the audience actionable steps and reinforcing the earlier themes of agile regulation and trust infrastructure.
Speaker: Dr. Vivek Mohindra
Overall Assessment

The discussion was steered by a handful of high‑impact remarks that moved the dialogue from broad aspirational statements to concrete policy, technical, and societal considerations. Dr. Mohindra’s call for agile regulation and the zero‑trust blueprint framed the governance narrative, while Rajgopal’s GST‑waiver proposal and Chaudhary’s cheap compute statistic grounded the conversation in actionable economic incentives. Chakravarti’s introduction of a ‘trust infrastructure’ and his Ferrari metaphor shifted focus to non‑technical bottlenecks and job impacts, prompting deeper analysis of societal readiness. Manish Gupta’s ‘UPI of AI’ concept tied these strands together, offering a unifying platform vision. Collectively, these comments redirected the panel toward a nuanced, multi‑dimensional roadmap—balancing investment, innovation, and evolution—while highlighting the pivotal role of public‑private partnership in achieving inclusive, sovereign AI growth for India.

Follow-up Questions
What would be the impact of waiving GST on imported AI servers and providing income tax benefits for AI service providers on the affordability and deployment of AI infrastructure for MSMEs?
Rajgopal suggested GST waivers and tax incentives to reduce upfront costs for AI hardware, indicating a need for policy analysis on fiscal measures to accelerate AI adoption.
Speaker: A. S. Rajgopal
How can India scale its GPU capacity from the current 40‑50,000 units to the estimated need of 200,000 GPUs, and what financing or procurement models are viable?
Rajgopal highlighted a large shortfall in GPU availability, pointing to a research gap in supply chain, financing, and procurement strategies for large‑scale GPU acquisition.
Speaker: A. S. Rajgopal
What role can open‑source AI frameworks play in reducing compute costs for Indian users, and how can they be effectively integrated into the national AI infrastructure?
He mentioned leveraging open source to lower costs, suggesting a need for studies on open‑source adoption, compatibility, and support ecosystems in India.
Speaker: A. S. Rajgopal
What specific mechanisms (e.g., transparency standards, grievance redressal systems, digital literacy programs) are required to build a robust ‘trust infrastructure’ for AI across diverse Indian districts?
Bhaskar identified trust as a non‑technical bottleneck and called for concrete institutional measures, indicating further research on trust‑building frameworks.
Speaker: Bhaskar Chakravarti
What will be the post‑AI employment landscape in India, and how can policy mitigate potential job displacement while maximizing job creation?
He raised concerns about AI’s impact on jobs, highlighting a need for labor market impact studies and policy design.
Speaker: Bhaskar Chakravarti
How feasible is the creation of a unified ‘UPI of AI’ – a single API layer that aggregates data sets and compute resources for nationwide access by developers, academia, and startups?
Manish proposed a national AI API platform, requiring research into technical architecture, governance, and stakeholder adoption.
Speaker: Manish Gupta
What standards and certification processes are needed to shift from ‘Made in India’ AI to ‘Trusted in India’ AI, ensuring security, privacy, and ethical compliance?
He emphasized moving toward trusted AI, indicating a gap in defining and implementing trust‑centric standards.
Speaker: Manish Gupta
What are the primary barriers causing large enterprises in India to hesitate in adopting AI, and how can targeted interventions address these concerns?
Rajgopal noted large firms’ reluctance, suggesting a need for case‑study research on adoption challenges and solutions.
Speaker: A. S. Rajgopal
What institutional safeguards and capacity‑building measures are essential to ensure AI drives inclusive growth rather than widening the digital divide?
He called for strengthening institutions, pointing to research on governance structures, capacity gaps, and implementation roadmaps.
Speaker: Bhaskar Chakravarti
How can the low‑cost compute facilities (e.g., 65 rupees per hour) be scaled sustainably while maintaining energy efficiency and meeting growing demand?
He highlighted the cheap compute offering, prompting investigation into scalability, sustainability, and operational models.
Speaker: Shri Jayant Chaudhary Ji
What does a national ‘Zero Trust AI’ architecture entail in practice, including audit trails, risk registries, and observability mechanisms?
He asked for a concrete design of Zero Trust AI at the national level, indicating a need for detailed framework development.
Speaker: Shri Jayant Chaudhary Ji
How can a national AI risk registry and continuous observability system be implemented to detect and report infractions across diverse sectors?
He suggested a risk registry and observability, requiring research on governance tools, data collection, and enforcement processes.
Speaker: Dr. Vivek Mohindra

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale: Cities, Startups & Digital Sovereignty – Fireside Chat. Moderator: Mariano-Florentino Cuellar


Session at a glance

Summary

The panel, comprising the IMF managing director, the WTO deputy director-general and Singapore’s minister for digital development, opened a discussion on how artificial intelligence should be positioned in the global context [1-9]. Moderator Mariano Florentino Cuellar highlighted that while advances in science and technology have lengthened life expectancy, the world is now more fragmented and the development of AI will both shape and be shaped by these global ties [19-27]. Kristalina Georgieva argued that AI could add roughly 0.8 percentage points to global growth, accelerating post-COVID recovery and creating jobs, especially in fast-adopting economies such as India [40-46]. She cautioned that AI also risks widening inequality, affecting up to 40 % of jobs in emerging markets and 60 % in advanced economies, and could threaten financial stability if left unchecked [57-66]. Georgieva therefore called for embracing AI’s opportunities while actively managing its risks to ensure benefits are widely shared [66-68]. Joanna Hill noted that trade can facilitate the diffusion of AI to low- and middle-income countries and that AI itself can boost trade growth by up to 40 % by 2040, but this requires investment in skills, regulations and digital infrastructure [75-84]. She warned that AI is reshaping comparative advantage toward data-rich, capital-intensive economies, putting labour-intensive countries at risk unless they adapt their policies [77-79]. Josephine Teo described Singapore’s strategy of acting as a “trusted node” that maintains consistent, principled technology choices and navigates great-power competition by remaining technology-agnostic and reliable for partners [97-103][110-112]. She emphasized that trust in AI, built on ethical foundations and safeguards, is essential for public acceptance and for preventing social disruption [212-218][227-232].
The panel agreed that education must be revamped to teach learning how to learn, and that social-protection measures are needed to support workers displaced by automation [145-149]. They also stressed that existing international institutions, such as the WTO, can cooperate with national authorities and the private sector to create a holistic policy framework for AI [197-203]. Cuellar concluded that, rather than creating new agencies, the world should rely on current institutions and collective action, with trust as the cornerstone of a successful AI transition [236-242]. Overall, the discussion underscored that coordinated global governance, equitable diffusion of AI, and a strong ethical and trust framework are critical to harness AI’s benefits while mitigating its risks [236-242].


Keypoints


Major discussion points


AI’s macro-economic upside and systemic risks – The IMF Managing Director highlighted that AI could add about 0.8 percentage points to global growth, accelerating post-COVID recovery and creating jobs, especially for fast-adopting economies like India [40-42][45-48]. She also warned of three major dangers: widening inequality between AI-rich and AI-poor countries, massive labour-market disruption (up to 40 % of jobs in emerging markets and 60 % in advanced economies could be affected) [55-60][61-63], and potential threats to financial-market stability [64-66].


Trade as a conduit for AI diffusion and a source of new inequities – The WTO Deputy Director General argued that trade can spread AI to low- and middle-income economies and that AI itself can boost trade by up to 40 % by 2040 [75-81]. At the same time, AI reshapes comparative advantage toward data-rich, capital-intensive economies, putting labour-intensive countries at risk, which calls for skill development, digital infrastructure and regulatory updates [76-80][84-85]. She later stressed the need for co-ordination between the WTO, international organisations and national authorities to address competition, labour and education challenges [197-202].


Singapore’s “trusted-node” model for AI governance – Singapore’s Minister explained that a small state can stay relevant by being a trusted node that offers reliable access to advanced technology while maintaining principled, consistent policies regardless of size [97-104][107-110]. She cited concrete examples such as principle-based 5G decisions that balance performance, security and resilience [111-112], positioning Singapore as a bridge between competing technology blocs.


Inclusive policy responses: education, social protection and ethical foundations – The IMF representative called for a revamped education system that teaches “learning-to-learn,” social safety nets for displaced workers, and an enabling environment that reduces digital-infrastructure gaps [145-152]. Complementing this, the Singapore Minister argued that regulation alone cannot solve inequality; instead, broader social-solidarity measures (housing, health care, lifelong learning) are required [183-188]. Later, the IMF chief emphasized the need for a strong ethical foundation and guardrails that protect against misuse without stifling innovation [227-232].


Trust and global cooperation as the linchpin for a positive AI future – The moderator and panelists repeatedly returned to trust (in institutions, in technology, and in cross-border collaboration) as the essential ingredient for a successful AI transition [210-218][227-232][236-242].


Overall purpose / goal of the discussion


The panel was convened to position artificial intelligence within the global context, examining how AI can drive economic growth while posing systemic risks, and to identify policy levers (trade, governance, education, social protection and ethical standards) that can ensure the benefits of AI are widely shared and the downsides mitigated. The conversation sought concrete insights from the IMF, WTO and Singapore on how international cooperation and national strategies can shape an inclusive AI future.


Tone of the discussion


– The session opened with a formal, optimistic tone, celebrating the high-level panel and the promise of AI [12-18].


– It then shifted to a cautious, problem-focused tone, acknowledging fragmentation, inequality and the “tsunami” of labour disruption [23-31][55-66].


– As each speaker contributed, the tone became constructive and collaborative, offering concrete policy ideas and highlighting successful models (e.g., Singapore’s trusted-node approach, WTO’s trade-growth forecasts) [75-110][145-152].


– The closing remarks returned to a hopeful, forward-looking tone, emphasizing trust, ethical foundations and the capacity of existing global institutions to manage AI’s challenges [210-218][227-242].


Overall, the dialogue moved from optimism through caution to a balanced, solution-oriented outlook, underscoring the need for global trust and coordinated action.


Speakers

Mariano Florentino Cuellar – President of the Carnegie Endowment for International Peace; moderator of the panel discussion on AI in the global context. [S1][S3]


Speaker 1 – Unnamed event host/moderator who introduced the panel and invited the speakers to the stage. [S4][S5][S6]


Kristalina Georgieva – Managing Director of the International Monetary Fund (IMF); provides macro-economic perspective on AI. [S9]


Joanna Hill – Deputy Director General of the World Trade Organization (WTO); discusses trade implications of AI. [S12]


Josephine Teo – Minister for Digital Development and Information, Singapore; shares Singapore’s AI governance strategy. [S7]


Additional speakers:


Mr. Quayar – Mentioned in the opening as the person to be invited onto the stage; no further role or title provided in the transcript.


Tino – Addressed by Josephine Teo during her remarks; likely a nickname for the moderator, but no explicit role or title identified.


Full session report

The session opened with a formal introduction of an “elite” panel that would discuss how artificial intelligence (AI) should be positioned in the global arena. Speaker 1 listed the three senior participants – the IMF Managing Director, Kristalina Georgieva, the WTO Deputy Director-General and Singapore’s Minister for Digital Development – and announced the panel’s title [1-9].


Cuellar’s three opening observations set the tone. First, he highlighted how advances in technology, science and global ties have made the world better, underpinning improvements such as longer life-expectancy [19-22]. He then warned that today’s world is more fragmented, making the diffusion of science and technology harder than a decade ago [23-25]. Finally, he argued that AI will reshape these ties, but countries are following divergent paths in adoption and capability [26-31].


Georgieva framed AI as an incredibly transformative force for the world economy [40-46] and estimated that AI could add roughly 0.8 percentage points to global growth, a boost that would put global growth above its pre-COVID pace [40-46]. She added that countries that go fast on digital infrastructure, skills and AI adoption can do twice as well as those that don’t [46-48]. She also highlighted three major risks: a widening gap between AI-rich and AI-poor nations, massive labour-market disruption (up to 40 % of jobs in emerging markets and 60 % in advanced economies could be affected) [57-63], and the possibility that AI-driven systems could destabilise financial markets [64-66]. Her overall appeal was to “embrace the opportunities, be mindful of the risks and manage them well” [66-68].


Hill shifted the focus to trade, arguing that international commerce can act as a conduit for AI diffusion to low- and middle-income economies [75-76]. Her research suggests AI could lift trade growth by almost 40 % by 2040 [80-81], but she noted that AI reshapes comparative advantage toward data-rich, capital-intensive economies, leaving labour-intensive countries vulnerable [77-78]. She therefore called for investment in digital infrastructure, skills development and updated regulations [79-84]. Hill later explained that the WTO’s existing technology-neutral architecture helped launch the Web and could be leveraged for AI, yet some rules remain “too new and too nuanced” and will need refinement through cooperation with national authorities and the private sector [197-203][218-224].


Teo presented Singapore’s “trusted-node” approach. She described a trusted node as a small state that remains reliable for partners by operating on consistent, principled policies regardless of size, thereby allowing companies to access sophisticated technology without fear of misuse [97-104][107-110]. The 5G rollout, she explained, is decided by operators on the basis of performance, security and resilience, guided by national rules rather than geopolitical allegiance [111-112]. While acknowledging the risk of technology decoupling [97-110], Teo argued that regulation alone cannot curb AI-driven inequality; broader social-solidarity measures (affordable housing, health care, lifelong learning and pathways for job transition) are required [173-188]. She reiterated that public confidence is the ultimate yardstick: if citizens do not trust AI, the endeavour has failed [212-215].


Returning to labour-market dynamics, Georgieva cited U.S. data showing that one in ten jobs already requires new AI-related skills, and workers who acquire them earn higher wages, which in turn stimulates demand for lower-skilled services, creating a net 1.3 jobs for every AI-enabled job [121-130]. She warned that the gains accrue mainly to a small segment of the population, while the “middle of the labour market” is squeezed: routine, entry-level jobs disappear and relative wages fall [131-138]. To counter this she proposed three policy pillars: a revamp of education that teaches “learning-to-learn” [145-146], robust social-protection schemes for displaced workers [145-148], and an enabling environment that closes digital-infrastructure gaps [149-152]. She stressed that an ethical foundation and public trust are essential; without guard-rails AI could become a “force for evil” [227-232].


The panel’s nuances lay in emphasis rather than outright conflict. Teo warned that regulation alone cannot solve AI-induced inequality and called for broader social-solidarity measures, whereas Georgieva stressed education reform, social-protection schemes and ethical guard-rails as complementary tools. Cuellar’s view that no new AI-specific agency is required contrasted with earlier calls for such an institution, highlighting a tension between creating new governance bodies and leveraging existing ones. Hill emphasized trade as a conduit for AI diffusion, whereas Teo highlighted the risk of technology decoupling and the importance of a trusted-node approach; these are complementary perspectives on how to manage AI’s global rollout.


In the closing reflections, the speakers reiterated that trust and an ethical foundation are indispensable for AI to be a “force for good” [210-218][227-232]; that AI offers significant macro-economic gains but also risks widening inequality and labour disruption [40-46][57-63][75-82]; and that multilateral cooperation, through the WTO, IMF or national initiatives, is essential to harness benefits and mitigate harms [197-203][145-146][235-242]. Capacity development (revamping education, upskilling workers and building digital infrastructure) was identified as a prerequisite for inclusive AI adoption [145-149][79-80][97-100].


Cuellar closed the discussion by reiterating that the single most critical factor for a successful AI transition is trust [210-218]. He observed that early calls for a new “international atomic-energy-type agency for AI” have faded, suggesting that existing multilateral bodies (IMF, WTO, national regulators) are sufficient if they cooperate and maintain confidence [235-242].


Takeaways


– AI can raise global growth by about 0.8 percentage points, and fast adopters can achieve roughly twice the growth of slower adopters [40-48].


– Up to 40 % of jobs in emerging economies and 60 % in advanced economies could be affected, underscoring the need for social safety nets, affordable housing, health care and lifelong learning [57-63][173-188].


– Trade is a powerful conduit for AI diffusion but must evolve to address data flows, competition and the shift in comparative advantage [75-84][197-203].


– Singapore’s trusted-node model shows how a small state can stay relevant through principle-based, technology-agnostic governance [97-104][110-112].


– Building an ethical foundation and public trust is essential; without it, AI deployment risks social backlash [227-232][210-218].


– Existing institutions (the IMF, WTO and national regulators) should be leveraged rather than replaced [235-242].


Across the dialogue, trust and an ethical foundation emerged as the linchpin for a sustainable, equitable AI future. [210-218][227-232][235-242]


Session transcript
Speaker 1

Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we have very elite panelists for this session. Ms. Kristalina Georgieva, the Managing Director of the International Monetary Fund. From macroeconomic stability to digital transformation, she’s been a leading voice on how AI will reshape the global economic order and what policymakers must do to ensure that its benefits are widely shared. Ms. Joanna Hill, the Deputy Director General of the World Trade Organization, bringing the trade perspective to a technology that is redrawing the boundaries of comparative advantage. Ms. Josephine Teo, the Minister of Digital Development and Information for Singapore, a nation that has become a global benchmark for how governments can integrate AI into public services.

And this conversation will be held in a few minutes. This will be moderated by Mr. Mariano Florentino Cuellar, President of the Carnegie Endowment for International Peace. So we have a very elite… set of panelists who are going to join us on this panel discussion, which is titled AI Needs to be Positioned in the Global Context. May I please invite our panelists to please join us on stage? So over to you, Mr. Quayar.

Mariano Florentino Cuellar

Thank you very much and good afternoon, everybody. How are we doing AI summits? Let me try that again. Hello, Delhi. Thank you. Much better. It is not every day that we have the pleasure of having such a distinguished panel of international leaders. And I want to start by making three observations only as special observations for those of you who have chosen to be with us this afternoon. You could be anywhere in this complex, anywhere in the city, and you’re right here with us. The first is about the role of technology and science and global ties in making the world better. For those of you who are younger than me, which is most of you in the audience, you will live longer than my generation because of global ties, commerce, science and technology.

In 1950, when India was a young nation, global life expectancy was 47 years. Now it’s closer to 73 years. But at the same time, the second point is that the world that we are navigating today is fragmented. That set of global ties, diffusing science and technology, advancing global understanding and cooperation is a lot harder now than it was even five or 10 years ago. And everybody who’s been on this stage has been alluding to that in some way, that reality. The third point is that the use and development of AI will have an effect on those ties and on that prosperity in all likelihood. But there are divergences, different paths around AI. Some countries are using it more, some less.

Some countries play a certain role. Some very developed role in the tech stack and others less. To talk about these issues, I cannot imagine a better panel. It’s not every day, as I said, that we have the managing director of the IMF, the deputy director general of the World Trade Organization, and the minister for information and digital development from Singapore. So I’m going to start with a question for Managing Director Georgieva. And the question is, all this discussion about artificial intelligence at the frontier, what do you see as the greatest possibilities and the greatest risks?

Kristalina Georgieva

Thank you very much. Namaste. Namaste. AI is incredibly transformative, we know. And the question is, what does it do for the world economy? We did some research, and here is the answer. Based on what we know, AI can lift up global growth by almost a percentage point, we say 0.8%. What does that mean? It would mean that the world would grow faster than it did before the COVID pandemic. And that is fantastic for creating more opportunities, more jobs. This is the magnitude that we see for India. And it would mean that India’s Viksit Bharat is achievable. It also means that the world risks to be even further divergent. The accordion of opportunities may open even more from countries that do well to those who fall behind.

Thank you very much. Actually, what we see is the potential for countries that go fast on digital infrastructure, on skills, on adoption of AI, that they can do twice as well as those that don’t. So what is our main reason to be here at the AI Summit in Delhi? To embrace India’s proposition of democratizing AI, making sure that experience in India can then be passed to other countries, especially countries in the developing world, to make diffusion, to make adoption of AI the main priority, and do it with focus on people, on improving the opportunities, the livelihoods of people. I am very optimistic about AI. I’m also not naive. It brings significant risks. First, it brings the risk of making countries and the world less fair.

Some have it and others don’t. Second, it brings the risk of displacement of jobs with no good thinking about how to help people find their place in the new AI economy. We calculated this risk as very high. We actually see the impact of AI on the labor market like a tsunami hitting it globally. 40 % of jobs will be affected by AI, some enhanced, others eliminated. Emerging markets, 40%, but in advanced economies, 60%. And that is happening over a relatively short period of time. And the third risk we at the IMF worry a lot about is financial stability risk. Could AI get loose and create havoc on financial markets? But on balance, my appeal to all of us is embrace the opportunities, be mindful of the risks, and manage them well.

And above all, make sure that the spirit here is that AI is for the well -being of everybody, everywhere. Thank you.

Mariano Florentino Cuellar

And what we’re going to do, we’re going to. I’m going to come right back to these questions in a minute, but I want to bring in the Deputy Director General of the World Trade Organization into the conversation. I want to ask you, picking up exactly where Managing Director Georgieva was going, the interest in democratizing the technology, having more countries be closer to the frontier. For more than a generation, as you know, we have been having arguments about trade globally and about whether trade helps reduce the gap in well-being between countries or actually pulls them apart even more. And given all that experience, I wonder what role you think the international trading system has in dealing with potential inequities and access to AI and the development of AI.

Joanna Hill

Thank you so much for the invitation to be here. Definitely we see that trade can help the diffusion of AI to those that most need it. And we also think that AI can help trade and can help lower income and middle income economies really progress through trade. Now, we do see that AI is really shifting what we think of as comparative advantage to those economies that are more strong in capital, data, and in computing power. And therefore the countries that are more labor intensive feel more at risk. At the same time, we also see important opportunities for these same countries. Of course, with all the caveats that we’ve been speaking about, the importance of investing in skills and regulations and in infrastructure, digital infrastructure are incredibly important.

Our research suggests that by the year 2040, trade growth could be growing by almost 40%. So we see really important opportunities for the middle and lower income economies. And trade is already working well in that way. Our trade agreements, the world trading system is set up so that goods trade and services trade can develop with AI. But there are some areas where they’re still too new and still too nuanced. And we still have to wait and see how that will develop and how the system has to accommodate.

Mariano Florentino Cuellar

Minister Teo, as that system evolves, and we deal with this, emerging, not even emerging anymore, emerged technology, we talk about how much it’s going to affect countries large and small. You are playing a critical role, and I know you’re playing a critical role because I see you at every single AI summit in the world. It’s amazing. But how are countries like Singapore in a position to navigate this tsunami, these changes? And what, in particular, what do you think we could learn from Singapore’s strategy, as I see it, of being at the forefront here on AI governance, the Model AI Governance Plan, for example, but also navigating a world that some people see as balkanized between China and the United States around the technology stack?

Josephine Teo

Thank you very much, Tino. That’s a lot of questions packed into one. I’ll do my best to address them. I think embedded in what you’re saying is that there is the risk of technology decoupling. And what does a small state do? In this kind of context? And how do we navigate the big power contestation? The way we think about it is that for Singapore, it’s very important for us to maintain this ability to operate as a trusted node. Trusted node means that, well, we can trust you with our technology. So your companies, your people can continue to access this, whatever is the most sophisticated, because they will not be abused and the risk of them being misused is also minimized.

The question, however, is how do we remain trusted? And I think the only way to do so is if we act in a consistent and principled way. And being consistent and principled is not a matter of size. And Singapore is not the only small state that has a good track record of holding this discipline. We are consistent in being pro-Singapore. And sometimes our choices may align with this country or that country. Sometimes they will align with many countries. Sometimes they only align with a few countries. But they always align with our own interests. In technology choice, for example, 5G, we are always operating on the basis of principles. Number one, that these are commercial decisions that have to be undertaken by the operators of the mobile networks.

And they have to decide on the basis of what works for them in terms of performance, in terms of security, in terms of resilience, keeping in mind what are all the rules that are in place in our context. So those are the broad directions in which we operate in. And it’s not easy, but it’s a path that has served us well.

Mariano Florentino Cuellar

And I note that among the many things that Singapore, I think, has contributed to the discussion of AI globally, in addition to being a trusted node and connecting different countries, there’s also the role Singapore and the region of Southeast Asia plays in all this because Southeast Asia is such a region of such diversity and importance globally. And I want to come back in a minute to the question of how we might imagine Southeast Asia evolving as almost a laboratory for some of the issues we’re talking about. But first, I want to go back to Kristalina, if I may, and ask you about, it was clear in your earlier remarks that you see enormous possibilities for AI.

But you also acknowledge candidly something that maybe not every speaker has acknowledged, which is that along with that opportunity will probably come some disruption: some real policy difficulties in countries experiencing rapid change. The question then is how we might develop the right strategy so that the productivity gains the world can experience actually translate into shared prosperity. What do you think we can do on that score?

Kristalina Georgieva

The first thing we ought to do is to carefully observe what is actually happening, and then project the implications for policymakers. At the Fund, we did a very interesting piece of research in the United States assessing how much AI is already affecting the labor market. We found out that one in 10 jobs already requires additional skills, and for those who have these skills, the job pays better. Now, with money in their pockets, people go and buy more local services; they go to restaurants, to entertainment. That creates demand for low-skilled jobs. And to our surprise, the total impact on employment in the aggregate is positive: one job with AI creates 1.3 jobs in total employment.

But what does that mean? It means that a smaller segment of people get higher opportunities. A larger segment, yes, they can have jobs, but jobs on the lower end of the pay scale. And the most problematic is the fate of those squeezed in the middle. Their jobs don’t change; in relative terms, they pay less, and some of these jobs disappear. What concerns us the most is that the jobs that disappear tend to be entry-level jobs. They are routine, and they are easily automated. So if you are in this part of the labor market that is easily automated, of course that creates a risk.

Now, obviously we will continue to work with countries to understand what is happening, and then to project what it means for policies for the future. I would make three conclusions so far, and of course we have to be agile in how we look at AI. The first one is that education has to be revamped for a new world: people have to learn to learn, not so much to learn specific skills. Second, there has to be support for those affected; if they are a big chunk of a particular local economy and this labor market is changing dramatically, there has to be social protection, social support, so they don’t feel like the industrial workers in the United States did when their jobs were exported overseas. And three, it is very important that we look at the overall enabling environment.

Why does AI move faster in some places than in others? What we find is not very surprising. Some parts of the economy, some parts of society, are naturally better positioned because they have digital infrastructure in place; they are already in the digital world. Or because, as somebody spoke about, entrepreneurship is more dominant and there is more demand for it. And I think it is important for the world to be very attentive to what works and what doesn’t, and not to sugarcoat the picture, because if we do, we would end up where we ended up with globalization: people revolting against it despite all the benefits it brings, because, yes, the world as a whole benefited, but some communities were devastated,

and the world did not pay attention to these communities in a timely manner. So that is my conclusion so far, and I am very mindful that we are going to learn much more. At the Fund, we are trying to see how countries are positioned. Some countries actually have more demand for AI skills than supply; some have more supply of AI skills than demand; and some have neither. So we have to work on multiple fronts, and we have to work based on concrete assessment of conditions in countries and in localities within countries. I want to finish with a message to the Indian friends here in the audience. You’re very fortunate that your country invested in public digital infrastructure.

So this country… condition for AI? Check. You are very fortunate because your country is actively removing barriers to entrepreneurship. On that count, we say check. And you are super fortunate to have a youthful, energetic, innovative population that is embracing AI. So what do we say? Check. So all the very best. This is terrific.

Mariano Florentino Cuellar

Perfect. Minister Teo.

Josephine Teo

Can I agree with the managing director more, if I may be allowed to chime in?

Mariano Florentino Cuellar

Yes, please.

Josephine Teo

I think sometimes there is a desire, a tendency to want to think of ways of regulating AI in order to slow down its advance, and perhaps to try and forestall the risk. I’m not underestimating the need. But to over-expect AI regulations to deliver on the other important issues, such as the potential for greater social inequality, I think is unrealistic. The way to deal with it is to look at what other methods there must be to strengthen social solidarity. For example, what provisions do we put in place to help people to move from one job to the next?

What provisions do we put in place to ensure that even people who don’t earn a lot have the prospect of owning their own homes, access to good health care, and educating their children to a very high level? I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.

Mariano Florentino Cuellar

So what I’m hearing you both say, in a way, is that it would be a very silly thing if we tried to solve health care problems just by regulating pharmaceuticals. That would be a very poor fit, right? At the same time, you recognize that, for certain products that are sold, it’s good for them to be safe. And in fact, safety, trust, and security can make them even easier to diffuse. But a very important takeaway from both of you is that the entire spectrum of tools a society has to build social cohesion is going to be important in the transition to a more AI-driven economy. We shouldn’t ignore them, but we also shouldn’t put the focus only on what we can do by building models in a certain way.

And I’d love for you to chime in, because trade has come up a bunch of times already, even just in the last 47 seconds.

Joanna Hill

Actually, yes. We put out a report last year that looks at this issue in exactly that way. We look at the opportunities I talked about for AI in the future, not only for the advanced countries but for developing and lower-income ones. But we also look at the need for national policies for that to actually happen and to help the transition. And so we look at issues around competition policy, around the labor force, around skills development, around education. To do that, the world trading system cannot act alone. We need to partner at our level with international organizations, and at the national level with the appropriate authorities and the private sector, in order to have that holistic approach. I would say lessons were learned from past experiences, and we definitely want to apply those lessons to this new one.

Mariano Florentino Cuellar

So we have about four minutes left, and I have a last question for you all. Imagine yourselves in the future looking back at the past, maybe 15 years from now. At that point, you’re being interviewed on this same stage here in India, and you’re saying it has been a very good thing to see how well the world has handled its relationship with this emerging technology of AI, and it has turned out very well because blank. I want you to mention one thing that you think would have been especially critical to making that transition well. You’ve all mentioned a bunch of things, but I’m interested in the main, most important takeaway that you’d like to leave the audience with.

Josephine Teo

For me, that one word is trust. In 15 years, if we went and asked citizens in all the countries where AI is being deployed widely, do you trust this technology? If their answer is no, then I believe we must have failed in some way. If they believe that this technology has been implemented in a way that didn’t rob them of a livelihood, that didn’t leave them totally misinformed about the world, that didn’t rob them of the ability to carry out their lives in a safe and secure manner, and that didn’t destroy families; if they can still say that this is a technology that can work reasonably well when you put the safeguards in place, then I think we would have come a long way.

Mariano Florentino Cuellar

Deputy Director?

Joanna Hill

An appreciation for what the world trading system can deliver and is delivering. You know, when I think about it, last year marked 30 years since the WTO was born. And down the road at CERN, the World Wide Web was being created by scientists who wanted to collaborate. That architecture, which is technology neutral, allowed the developments of the digital economy to come through. How much of that architecture can serve us for this new wave? And then let us concentrate on those areas that still need to be worked on through collaboration and cooperation, and focus on those. You know, trading with trust, trading with safety, and then appreciating and using what we already have to deliver.

Mariano Florentino Cuellar

Managing Director?

Kristalina Georgieva

Well, in 15 years, if my life expectancy has grown by another 50 years, I would say, great, we are successful. But on a serious note, to me the most important factor, and it goes a bit into the trust area, is the ethical foundation of AI: whether we manage to put AI on a foundation as a force for good, or we leave space for AI to be a force for evil. And that balance is not an easy one. When I look at progress so far, we have done much more on the technical side of AI, and much less on building that strong ethical foundation and putting in place guardrails that do not restrict innovation but protect us from AI for bad. I still want my 50 years of extra life.

Mariano Florentino Cuellar

One closing observation, just to reinforce my appreciation to the three of you and the work we do. So in the weeks immediately after the release of ChatGPT, which seems like 20 years ago but was not that long ago, there was talk about the need for an international atomic energy agency for AI, or a new international agency or treaty. We don’t talk about that anymore. And I think in some ways it’s an appropriate and mature recognition that we already have a set of institutions and mechanisms in place to deal with a set of emerging challenges. I think it’s also a recognition that many individual countries have to do their part to create social cohesion and manage this change and this transformation effectively. But I would ask that this audience recognize that all three of our remarkable leaders here on the stage also reflect another reality, which is that even if sovereignty is important, and even if individual countries have to have their own priorities, the challenge of how we best live with the technology we have created is truly a global one.

It is not an individual country’s challenge alone; it is a global one. And the conversation we’re having today is an example of how we can learn from each other and find the right solutions. Thank you, and namaste.

S102
Artificial intelligence (AI) and cyber diplomacy — Adil Suleyman:Welcome back from lunch. So now we will have a very interesting session. And I think this is maybe this is…
S103
Main Session on Artificial Intelligence | IGF 2023 — In conclusion, the discussion on responsible AI governance highlighted the significance of technical standards, the need…
S104
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Mr. Chief of State Mr. Chief of Government For Brazil it is a satisfaction to participate in the artificial intelligence…
S105
Workshop 3: Quantum Computing: Global Challenges and Security Opportunities — These key comments fundamentally shaped the discussion by establishing three critical dimensions: temporal urgency (harv…
S106
WS #241 Balancing Acts 2.0: Can Encryption and Safety Co-Exist? — These key comments fundamentally shaped the discussion by establishing it as a collaborative problem-solving exercise ra…
S107
WS #110 AI Innovation Responsible Development Ethical Imperatives — These key comments collectively transformed what could have been a theoretical discussion about AI ethics into a nuanced…
S108
AI for Good Technology That Empowers People — These key comments fundamentally shaped the discussion by establishing a philosophical framework that challenged convent…
S109
The Arc of Progress in the 21st Century / DAVOS 2025 — Steven Pinker argues that there has been significant progress in various aspects of human flourishing over time, includi…
S110
Keynote-Demis Hassabis — This balance reflects his understanding that the potential benefits—a new golden era of scientific discovery, improved g…
S111
Keynote-Demis Hassabis — Hassabis concludes with an optimistic vision for the future, believing that through international cooperation, scientifi…
S112
The State of Digital Fragmentation (Digital Policy Alert) — He warns that policy fragmentation can lead to technical fragmentation, which can be harder to resolve.
S113
Keynote-Brad Smith — And what I believe we need to recognize is that this economic divide is a result, more than anything else, of a technolo…
S114
Keynote-Brad Smith — Throughout his address, Smith maintained a balance between realism about current challenges and optimism about potential…
S115
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — Guio warns that the world is trending toward fragmentation and nationalization of technology projects and infrastructure…
S116
Steering the future of AI — LeCun envisions future LLMs or their descendants becoming repositories of all human knowledge and culture. He argues thi…
S117
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — Anshul argues that AI can be a potential big equalizer, like electricity, that can change everything when properly imple…
S118
AI and Digital Developments Forecast for 2026 — Countries are taking different stances risking decentralization
S119
Media Hub — Minister Bah outlined Sierra Leone’s strategy to position itself as an “AI lab” for the region, drawing inspiration from…
S120
Building a Digital Society, from Vision to Implementation — – Nadeen Matthews Blair Development | Infrastructure Reckord argues that countries like Jamaica shouldn’t view themsel…
S121
Democratizing AI Building Trustworthy Systems for Everyone — Crampton draws on historical examples like electricity to argue that success with transformative technologies comes not …
S122
Global AI adoption rises quickly but benefits remain unequal — Microsoft’s AI Economy Institute has released its 2025 AI Diffusion Report, detailing global AI adoption, innovation hubs…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
1 argument · 140 words per minute · 203 words · 86 seconds
Argument 1
AI must be positioned globally with elite international leadership
EXPLANATION
The opening remarks stress that AI should be discussed and framed within a global context, emphasizing the need for high‑level, cross‑national leadership. The moderator highlights the presence of top officials from the IMF, WTO and Singapore as evidence of this elite positioning.
EVIDENCE
The host introduces the session by stating that the conversation will focus on how artificial intelligence needs to be positioned in the global context and lists the distinguished panelists – the IMF Managing Director, the WTO Deputy Director General and Singapore’s Minister of Digital Development – underscoring the elite international leadership behind the discussion [1-10].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The session introduction highlights the need to frame AI in a global context and points to the presence of elite panelists from the IMF, WTO and Singapore, confirming the emphasis on high-level international leadership [S3][S1].
MAJOR DISCUSSION POINT
AI must be positioned globally with elite international leadership
Mariano Florentino Cuellar
1 argument · 188 words per minute · 1337 words · 424 seconds
Argument 1
Global cooperation and trust in existing institutions are needed to manage AI’s impact
EXPLANATION
Cuellar argues that creating new agencies is unnecessary because the world already has institutions capable of handling AI challenges, but they must work together with mutual trust. He stresses that AI’s impact is a global problem that requires coordinated action across sovereign states.
EVIDENCE
He notes that after the initial hype about a new international AI agency, the conversation has shifted to recognizing that existing institutions and mechanisms are sufficient, and that individual countries must also build social cohesion to manage transformation effectively [235-242].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Cuellar repeatedly stresses that AI is a truly global challenge that requires cooperation among existing multilateral bodies rather than new agencies, echoing the call for trust and coordination among states [S2].
MAJOR DISCUSSION POINT
Global cooperation and trust in existing institutions are needed to manage AI’s impact
AGREED WITH
Joanna Hill, Kristalina Georgieva
Kristalina Georgieva
4 arguments · 119 words per minute · 1338 words · 669 seconds
Argument 1
AI can boost global growth by ~0.8% and double gains for fast adopters
EXPLANATION
Georgieva presents IMF research indicating that AI could raise world GDP by roughly 0.8 percentage points, and that countries that invest quickly in digital infrastructure and skills could see productivity gains up to twice those of slower adopters. She links this growth to opportunities such as India’s “Viksit Bharat” agenda.
EVIDENCE
She cites IMF analysis that AI can lift global growth by about 0.8 % [40-42] and explains that nations that move fast on digital infrastructure and AI adoption can achieve twice the economic benefit of laggards [50-51], using India as a concrete illustration of the potential impact [45-46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IMF research presented by Georgieva shows that AI could lift world GDP by about 0.8 % and that early adopters can reap roughly twice the productivity gains of laggards [S18][S19].
MAJOR DISCUSSION POINT
AI can boost global growth by ~0.8% and double gains for fast adopters
AGREED WITH
Joanna Hill, Josephine Teo
Argument 2
AI threatens labor markets: up to 40% of jobs affected, widening inequality
EXPLANATION
Georgieva warns that AI will disrupt labour markets, with a large share of jobs either transformed or eliminated, especially routine, entry‑level positions. The impact is projected to be larger in advanced economies, potentially deepening existing inequalities.
EVIDENCE
She identifies three major risks, the first being unfairness, the second being job displacement, noting that “40 % of jobs will be affected by AI” in emerging markets and “60 %” in advanced economies, describing the change as a rapid “tsunami” [57-63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Georgieva warns that AI will impact 40 % of jobs in emerging markets and 60 % in advanced economies, creating a “tsunami” of labour-market disruption and widening inequality [S2][S9].
MAJOR DISCUSSION POINT
AI threatens labor markets: up to 40% of jobs affected, widening inequality
AGREED WITH
Joanna Hill, Josephine Teo
Argument 3
AI creates financial‑stability risks for markets
EXPLANATION
Georgieva highlights the possibility that AI systems could malfunction or be misused, leading to volatility or systemic shocks in financial markets. She calls for vigilance to prevent such destabilising effects.
EVIDENCE
She raises the question, “Could AI get loose and create havoc on financial markets?” and labels financial-stability risk as a key concern for the IMF [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analysts note that the rapid rise of AI-driven firms in equity markets raises concerns about volatility and systemic risk, underscoring Georgieva’s financial-stability warning [S20].
MAJOR DISCUSSION POINT
AI creates financial‑stability risks for markets
Argument 4
Building an ethical foundation and trust is vital for AI to be a force for good
EXPLANATION
Georgieva stresses that while technical progress in AI is advancing, the ethical underpinnings lag behind. She argues that strong guardrails and an ethical framework are essential to ensure AI benefits humanity rather than causing harm.
EVIDENCE
She observes that most progress has been technical, with insufficient work on an ethical foundation and guardrails that protect without stifling innovation, framing the choice between AI as a “force for good” or “force for evil” [227-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Georgieva highlights the lag in ethical guardrails behind technical progress and calls for strong ethical foundations to ensure AI benefits humanity, a point echoed in discussions on trust and responsible AI [S1][S21].
MAJOR DISCUSSION POINT
Building an ethical foundation and trust is vital for AI to be a force for good
AGREED WITH
Joanna Hill, Josephine Teo
Joanna Hill
4 arguments · 158 words per minute · 487 words · 184 seconds
Argument 1
Trade can accelerate AI diffusion to low‑ and middle‑income economies
EXPLANATION
Hill argues that international trade can serve as a conduit for spreading AI technologies to developing countries, helping them catch up and benefit from digitalisation. She points to research forecasting substantial trade‑driven AI growth.
EVIDENCE
She notes that trade helps diffuse AI to those most in need and can enable lower-income economies to progress, citing WTO research that predicts trade growth of up to 40 % by 2040 as a result of AI adoption [75-82].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hill argues that the WTO-facilitated trade system can spread AI technologies to developing countries, with WTO research projecting up to 40 % trade growth by 2040 driven by AI adoption [S24][S2].
MAJOR DISCUSSION POINT
Trade can accelerate AI diffusion to low‑ and middle‑income economies
AGREED WITH
Kristalina Georgieva, Josephine Teo
Argument 2
AI reshapes comparative advantage toward data, capital and computing, challenging labor‑intensive countries
EXPLANATION
Hill explains that AI shifts the sources of comparative advantage from labour‑intensive factors to assets such as data, capital and computing power, putting traditional labour‑heavy economies at risk while also presenting new opportunities if they adapt.
EVIDENCE
She describes AI as moving comparative advantage toward economies strong in capital, data and computing, thereby making labour-intensive countries feel more at risk, while also acknowledging potential opportunities for those economies [77-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hill explains that AI shifts comparative advantage from labour-intensive factors to data, capital and computing power, creating risks for traditional labour-heavy economies [S2].
MAJOR DISCUSSION POINT
AI reshapes comparative advantage toward data, capital and computing, challenging labor‑intensive countries
AGREED WITH
Kristalina Georgieva, Josephine Teo
Argument 3
WTO rules need updating and coordination with national policies to capture AI benefits
EXPLANATION
Hill points out that the current WTO framework is broadly compatible with AI‑enabled trade but contains gaps and ambiguities that must be addressed. She calls for alignment between multilateral rules and national policies to fully realise AI’s potential.
EVIDENCE
She states that while the world trading system allows goods and services trade to develop with AI, there are “areas … too new and too nuanced” that require further development, and later emphasizes the need for national-level policies and private-sector partnership to achieve a holistic approach [83-85][197-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Hill points out gaps and ambiguities in current WTO rules that must be addressed and aligned with national policies to fully harness AI’s potential [S2][S24].
MAJOR DISCUSSION POINT
WTO rules need updating and coordination with national policies to capture AI benefits
AGREED WITH
Mariano Florentino Cuellar, Kristalina Georgieva
Argument 4
A holistic approach—linking competition policy, skills development and education—is required for inclusive AI transition
EXPLANATION
Hill argues that addressing AI’s impact demands coordinated action across competition policy, labour‑force development, skills training and education, involving both international organisations and national authorities.
EVIDENCE
She mentions that the WTO cannot act alone and that a partnership with international organisations, national authorities and the private sector is needed to address competition policy, labour-force, skills development and education for an inclusive AI transition [197-203].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for coordinated competition, labour-force, skills and education policies is reinforced by calls for comprehensive policy responses, including universal social protection, to ensure an inclusive AI transition [S27][S25].
MAJOR DISCUSSION POINT
A holistic approach—linking competition policy, skills development and education—is required for inclusive AI transition
AGREED WITH
Kristalina Georgieva, Josephine Teo
Josephine Teo
2 arguments · 177 words per minute · 794 words · 268 seconds
Argument 1
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
EXPLANATION
Teo describes Singapore’s strategy of positioning itself as a reliable, principled partner in the global AI ecosystem, emphasizing consistency, principled decision‑making and the ability to operate as a “trusted node” despite geopolitical tensions.
EVIDENCE
She explains that Singapore aims to remain a trusted node by ensuring technology is not misused, acting consistently and based on principles rather than size, and cites examples such as its principled approach to 5G decisions that balance performance, security and resilience [97-110].
MAJOR DISCUSSION POINT
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
AGREED WITH
Kristalina Georgieva, Joanna Hill
Argument 2
Regulation alone cannot address AI‑driven inequality; social safety nets, housing, health and education are essential
EXPLANATION
Teo argues that while regulation is important, it cannot by itself solve the broader social inequalities that AI may exacerbate. She calls for comprehensive social policies—such as job‑transition support, affordable housing, health care and quality education—to mitigate these risks.
EVIDENCE
She states that over-reliance on regulation is unrealistic and that strengthening social solidarity through provisions for job mobility, home ownership, health care and high-quality education is necessary to address AI-driven inequality [183-187].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Broader policy frameworks that go beyond regulation, such as social protection, affordable housing, health care and quality education, are identified as crucial for mitigating AI-induced inequality [S27][S25].
MAJOR DISCUSSION POINT
Regulation alone cannot address AI‑driven inequality; social safety nets, housing, health and education are essential
AGREED WITH
Kristalina Georgieva, Joanna Hill
DISAGREED WITH
Kristalina Georgieva
Agreements
Agreement Points
Trust and ethical foundation are essential for AI to serve humanity
Speakers: Kristalina Georgieva, Mariano Florentino Cuellar, Josephine Teo
Building an ethical foundation and trust is vital for AI to be a force for good
Global cooperation and trust in existing institutions are needed to manage AI’s impact
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
All three speakers stressed that trust, whether expressed as an ethical guardrail for AI, confidence in multilateral institutions, or a trusted-node approach by a small state, is the cornerstone for a beneficial AI transition [227-232][210-211][212-214][97-100].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors UN Security Council calls for transparency and accountability to maintain public trust in AI systems [S45] and aligns with industry observations that trust is the foundational requirement for AI adoption and sustainable growth [S44]; trust is also emphasized as built through inclusive, evidence-based processes in the Policymaker’s Guide to International AI Safety Coordination [S65].
AI promises significant economic gains but also risks widening inequality and labour disruption
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo
AI can boost global growth by ~0.8% and double gains for fast adopters
AI threatens labor markets: up to 40% of jobs affected, widening inequality
Trade can accelerate AI diffusion to low‑ and middle‑income economies
AI reshapes comparative advantage toward data, capital and computing, challenging labor‑intensive countries
Regulation alone cannot address AI‑driven inequality; social safety nets, housing, health and education are essential
Georgieva highlighted AI’s growth potential and labour-market disruption, Hill added that trade can spread AI while shifting comparative advantage and creating risks for labour-intensive economies, and Teo warned that regulation alone will not curb the resulting inequality, calling for broader social policies [40-42][57-63][75-82][77-78][183-187].
POLICY CONTEXT (KNOWLEDGE BASE)
The dual outlook reflects IMF Managing Director Kristalina Georgieva’s warning about rapid structural disruption in labour markets and uneven skill demand [S41] and Georgieva’s broader analysis of AI widening the gap between advanced and developing economies [S38]; similar concerns about job displacement are highlighted in sector-specific reports such as Duolingo’s workforce reductions [S40].
Multilateral cooperation and policy coordination are needed to harness AI benefits and manage risks
Speakers: Mariano Florentino Cuellar, Joanna Hill, Kristalina Georgieva
Global cooperation and trust in existing institutions are needed to manage AI’s impact
WTO rules need updating and coordination with national policies to capture AI benefits
Building an ethical foundation and trust is vital for AI to be a force for good
Cuellar argued that existing bodies (IMF, WTO, etc.) are sufficient if they cooperate, Hill called for WTO rule-updates and alignment with national policies, and Georgieva stressed the need for coordinated observation and policy work across countries [235-242][197-203][145-146].
POLICY CONTEXT (KNOWLEDGE BASE)
Calls for coordinated governance echo recommendations that scientific evidence should underpin global AI coordination rather than blunt regulation [S56] and that international institutions must play a central normative role [S60]; proposals to leverage existing mechanisms like the WSIS process instead of creating new bodies are discussed in the Global AI Policy Framework [S62], while the reimagining of the IGF’s role underscores the need for multilateral platforms [S64].
Capacity development—education, skills and lifelong learning—is critical for an inclusive AI transition
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo
Building an ethical foundation and trust is vital for AI to be a force for good
A holistic approach—linking competition policy, skills development and education—is required for inclusive AI transition
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
Georgieva called for revamping education to teach learning-to-learn, Hill emphasized skills, education and a holistic policy mix, and Teo highlighted Singapore’s principled, consistent approach that includes capacity-building as a pillar of its trusted-node strategy [145-146][79][97-100].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of skills investment is a core recommendation of the AI Impact Summit 2026, which stresses people-centered strategies, lifelong learning and social protection [S57]; inclusive AI frameworks also highlight capacity development as essential for equitable outcomes [S59] and stress the development of future-oriented human capabilities [S58].
Similar Viewpoints
Both see AI as a double‑edged sword: a driver of macro‑economic growth and a source of labour market disruption that can be mitigated through trade‑enabled diffusion and skill investment [40-42][57-63][75-82][77-78].
Speakers: Kristalina Georgieva, Joanna Hill
AI can boost global growth by ~0.8% and double gains for fast adopters
AI threatens labor markets: up to 40% of jobs affected, widening inequality
Trade can accelerate AI diffusion to low‑ and middle‑income economies
AI reshapes comparative advantage toward data, capital and computing, challenging labor‑intensive countries
Both stress that trust in existing multilateral frameworks and an ethical foundation are prerequisites for managing AI’s systemic impact [235-242][227-232].
Speakers: Mariano Florentino Cuellar, Kristalina Georgieva
Global cooperation and trust in existing institutions are needed to manage AI’s impact
Building an ethical foundation and trust is vital for AI to be a force for good
Both underline the importance of principled, rule‑based coordination—whether at the WTO level or through a trusted‑node national strategy—to ensure equitable AI deployment [197-203][97-100].
Speakers: Joanna Hill, Josephine Teo
WTO rules need updating and coordination with national policies to capture AI benefits
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
Unexpected Consensus
Trust as the single most critical factor for AI’s future
Speakers: Kristalina Georgieva, Mariano Florentino Cuellar, Josephine Teo
Building an ethical foundation and trust is vital for AI to be a force for good
Global cooperation and trust in existing institutions are needed to manage AI’s impact
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
While Georgieva approached trust from an ethical-governance angle, Cuellar framed it as institutional confidence, and Teo expressed it as a national-level trusted-node strategy: three very different perspectives converging on trust as the pivotal element, which was not anticipated given their distinct mandates [227-232][210-211][212-214].
POLICY CONTEXT (KNOWLEDGE BASE)
This view is reinforced by multiple sources that identify trust as the decisive factor for both user adoption and business success [S44], as well as by UN discussions emphasizing transparency and accountability to secure public confidence [S45]; the Policymaker’s Guide further notes that trust is built through inclusive, evidence-based coordination [S65].
Overall Assessment

The panel displayed strong convergence on four core themes: (1) trust and ethical foundations are indispensable; (2) AI offers sizable economic gains but also threatens labour markets and widens inequality; (3) multilateral cooperation and policy coordination (through existing institutions, WTO reforms, and trusted‑node approaches) are essential; and (4) capacity development—education, skills and lifelong learning—is a prerequisite for inclusive benefits. These shared positions cut across the IMF, WTO and Singapore perspectives, indicating a high level of consensus.

High consensus across economic, trade and governance dimensions, suggesting that future policy initiatives can build on this common ground to design coordinated, trust‑based, and inclusive AI strategies.

Differences
Different Viewpoints
Regulation versus broader social policies and ethical guardrails to address AI‑driven inequality
Speakers: Josephine Teo, Kristalina Georgieva
Regulation alone cannot address AI‑driven inequality; social safety nets, housing, health and education are essential
Building an ethical foundation and trust is vital for AI to be a force for good; need for education revamp, social protection and guardrails
Teo argues that over-reliance on regulation is unrealistic and stresses the need for comprehensive social safety nets, job-transition support, affordable housing, health care and education to mitigate AI-induced inequality [173-183]. Georgieva, while acknowledging risks, focuses on revamping education, providing social protection and creating ethical guardrails as the primary means to manage AI’s labour-market impact and ensure fairness [145-146][227-232]. The two speakers therefore differ in relative emphasis: Teo foregrounds broad social policies beyond regulation, whereas Georgieva highlights education reform and ethical and regulatory guardrails as the core response.
POLICY CONTEXT (KNOWLEDGE BASE)
Fireside-chat participants argue that regulation alone cannot resolve AI-induced inequality and that a broader toolkit of social solidarity measures is required [S53][S54]; complementary perspectives call for immediate policy action across competition, tax, labour and social protection to mitigate disruption [S55].
Whether new international AI governance structures are needed versus relying on existing institutions
Speakers: Mariano Florentino Cuellar
Global cooperation and trust in existing institutions are needed to manage AI’s impact
Cuellar notes that early discussions about creating a new international AI agency have faded, arguing that existing multilateral bodies and national efforts are sufficient to handle AI challenges, emphasizing trust and cooperation among sovereign states [235-242]. This stance implicitly contrasts with the earlier hype about a new AI-specific agency, suggesting a disagreement with the notion that fresh institutional architecture is required.
POLICY CONTEXT (KNOWLEDGE BASE)
The debate mirrors discussions on leveraging existing WSIS and IGF frameworks rather than creating new bodies [S62][S64]; some analysts caution against assuming new governance is automatically required for emerging technologies [S61], while other commentary stresses the central role of established international institutions in setting AI norms [S60].
Unexpected Differences
Optimism about trade‑driven AI diffusion versus concerns about technology decoupling and the need for a trusted node
Speakers: Joanna Hill, Josephine Teo
Trade can accelerate AI diffusion to low‑ and middle‑income economies
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
Hill expresses confidence that international trade will spread AI technologies and generate up to 40 % trade growth by 2040, viewing the WTO framework as broadly supportive [75-84][80-84]. Teo, by contrast, foregrounds the risk of technology decoupling and stresses that small states must act as a trusted node to navigate great-power contestation, implying that reliance on open trade alone may be insufficient [93-100][97-110]. The tension between a trade-centric diffusion model and a trust-centric, geopolitically cautious approach was not anticipated given the overall consensus on AI’s benefits.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers highlight the tension between promoting AI trade and safeguarding national security, urging a balance between regulation and trading partnerships [S51]; summaries of AI innovation debates note divergent views between optimistic diffusion and cautious approaches to decoupling and digital sovereignty [S48]; concerns about a trusted node are echoed in discussions of digital sovereignty and the limits of regulation alone [S53].
Overall Assessment

The panel shows broad consensus that AI can drive economic growth and that coordinated global action is essential. Disagreements centre on the means: (1) the balance between regulation, ethical guardrails, and wider social safety nets; (2) whether new AI‑specific international institutions are required; and (3) the relative weight of trade mechanisms versus trusted‑node strategies in a geopolitically fragmented environment. These divergences reflect differing institutional perspectives (multilateral vs national) and policy toolkits (education, social protection, trade policy, trust frameworks).

Moderate – while all participants share the overarching goal of inclusive AI benefits, they propose distinct policy levers, which could lead to fragmented implementation if not reconciled. The implications are that without alignment on governance mechanisms, efforts to harness AI for development may be uneven, risking the very inequities the speakers aim to avoid.

Partial Agreements
All speakers agree that AI presents significant economic opportunities and that coordinated action is required to realise them. However, they diverge on the primary vehicle: Georgieva stresses macro‑economic growth and education; Hill points to trade mechanisms; Teo highlights a trusted‑node approach and principled national choices; Cuellar stresses existing multilateral institutions and trust. The shared goal is inclusive AI‑driven prosperity, but the pathways differ.
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo, Mariano Florentino Cuellar
AI can boost global growth by ~0.8% and double gains for fast adopters
Trade can accelerate AI diffusion to low‑ and middle‑income economies
Singapore serves as a trusted node, maintaining consistent, principled choices amid tech decoupling
Global cooperation and trust in existing institutions are needed to manage AI’s impact
Takeaways
Key takeaways
AI has the potential to raise global GDP by about 0.8% and can double growth for countries that adopt it quickly, but it also poses significant labor‑market disruptions and financial‑stability risks.
The benefits of AI are unevenly distributed; up to 40% of jobs in emerging markets and 60% in advanced economies could be affected, widening inequality if not managed.
Trade can be a powerful conduit for AI diffusion, yet AI reshapes comparative advantage toward data, capital and computing power, challenging labor‑intensive economies.
Existing WTO rules need to evolve and be coordinated with national policies to capture AI‑related trade opportunities and mitigate risks.
Singapore’s model of acting as a “trusted node”—maintaining consistent, principled governance while remaining technology‑agnostic—offers a practical approach for small states amid great‑power tech decoupling.
Regulation alone cannot solve AI‑driven social inequality; comprehensive social safety nets, housing, health, and education policies are essential.
Building an ethical foundation and public trust in AI is critical; without trust, the technology’s deployment will be socially unsustainable.
Global cooperation, leveraging existing institutions (IMF, WTO, national regulators), is necessary to manage AI’s cross‑border impacts.
Resolutions and action items
IMF will continue to monitor AI’s macro‑economic and labor‑market impacts and work with member countries on policy frameworks.
WTO will pursue research and policy recommendations on AI‑related trade, competition, and skills development, and will seek coordination with other international bodies.
Singapore will maintain its role as a trusted node, sharing its Model AI Governance framework and principles with other nations.
All participants emphasized the need for education systems to be revamped toward lifelong learning and adaptability.
Unresolved issues
Specific mechanisms for updating WTO agreements to address AI‑driven services, data flows, and competition concerns remain undefined.
How to design and fund large‑scale social protection programs that can absorb displaced workers has not been detailed.
The precise governance structure or international treaty for AI (e.g., an “AI agency”) was mentioned historically but no consensus was reached on its creation.
Methods for measuring and ensuring public trust in AI across diverse societies were discussed but not operationalized.
Strategies for mitigating financial‑stability risks posed by AI‑driven market activities were identified but lack concrete action plans.
Suggested compromises
Combine regulatory measures with broader social policies (housing, health, education) rather than relying solely on AI‑specific regulation.
WTO to work jointly with IMF, national governments, and the private sector, sharing responsibilities for competition policy, skills development, and trade facilitation.
Singapore’s principle‑based approach allows alignment with multiple major powers (e.g., US, China) when it serves national interests, offering a pragmatic middle ground amid tech decoupling.
Thought Provoking Comments
AI can lift global growth by about 0.8%, but it also risks widening inequality, displacing up to 40% of jobs and creating financial‑stability threats.
She quantified both the macro‑economic upside and the systemic risks, moving the conversation from abstract optimism to concrete policy challenges.
Her statement prompted the panel to shift from describing AI’s potential to discussing concrete mitigation strategies, leading directly to Joanna’s trade‑focused response and later to Josephine’s emphasis on social safety nets.
Speaker: Kristalina Georgieva
AI is reshaping comparative advantage toward economies strong in capital, data and computing power, putting labor‑intensive countries at risk, but those countries can capture opportunities if they invest in skills, regulations and digital infrastructure.
She introduced a nuanced trade perspective, linking AI to the traditional debate on whether trade narrows or widens gaps between nations.
Her insight broadened the scope of the discussion to include trade policy, prompting Kristalina to reference labor‑market research and setting up the later dialogue on how WTO frameworks can adapt to AI.
Speaker: Joanna Hill
For Singapore, being a ‘trusted node’ means operating consistently and on principled grounds so that other countries can rely on our technology, even amid great‑power decoupling.
She offered a concrete model of how a small state can maintain relevance and trust in a fragmented tech landscape, shifting the conversation toward governance and trust rather than pure competition.
This concept of ‘trust’ became a recurring theme, influencing Kristalina’s later remarks on ethical foundations and the moderator’s final emphasis on trust as the key takeaway.
Speaker: Josephine Teo
Our IMF research shows that each AI‑enabled job creates about 1.3 additional jobs, but the middle of the labor market is squeezed; education must focus on learning how to learn, and we need stronger social protection for displaced workers.
She deepened the labor‑market analysis with empirical data and highlighted the paradox of job creation versus middle‑class erosion, calling for systemic policy responses.
This prompted Josephine to argue that regulation alone is insufficient, steering the dialogue toward broader social policies and reinforcing the need for a holistic approach.
Speaker: Kristalina Georgieva
Relying solely on AI regulation to solve inequality is unrealistic; we must strengthen social solidarity—housing, health care, education—to help people transition between jobs.
She challenged the prevailing regulatory narrative, urging a shift toward comprehensive social safety nets as the primary tool for managing AI disruption.
Her challenge caused the panel to acknowledge the limits of regulatory solutions, leading Joanna to stress the need for coordinated policy across trade, competition, and skills development.
Speaker: Josephine Teo
The WTO’s technology‑neutral architecture that enabled the creation of the Web can also serve us for AI; we should build on existing institutions rather than create new agencies.
She reframed the debate about new global governance structures, suggesting that existing frameworks can be adapted, which is a strategic pivot from calls for new treaties.
This perspective influenced the moderator’s closing remarks about the maturity of existing institutions and reinforced the panel’s consensus that cooperation, not fragmentation, is essential.
Speaker: Joanna Hill
Overall Assessment

The discussion’s trajectory was shaped by a series of pivotal remarks that moved the conversation from high‑level optimism to a granular, policy‑oriented debate. Kristalina’s quantification of AI’s macro impact introduced the risk‑benefit calculus, prompting Joanna to connect AI to trade dynamics and Josephine to foreground trust and governance. Subsequent data on job creation and the middle‑class squeeze deepened the labor‑market analysis, while Josephine’s critique of regulation‑only solutions broadened the focus to social protection. Finally, Joanna’s call to leverage existing WTO structures anchored the dialogue in pragmatic institutional thinking. Together, these comments redirected the panel toward a nuanced, multi‑dimensional view of AI’s global challenges, emphasizing trust, ethical foundations, and coordinated policy across economic, trade, and social domains.

Follow-up Questions
What specific social protection and reskilling measures can be implemented to help workers transition from displaced AI‑affected jobs to new employment?
Both highlighted the risk of large‑scale job displacement and the need for policies beyond regulation, indicating a gap in concrete solutions.
Speaker: Josephine Teo, Kristalina Georgieva
How should international trade agreements be updated to address AI‑driven services, data flows, and competition concerns?
Hill noted that current WTO rules are “too new and too nuanced” for AI, suggesting further study on adapting trade frameworks.
Speaker: Joanna Hill
What mechanisms can ensure that small states like Singapore function effectively as ‘trusted nodes’ in a bifurcated technology landscape?
Teo emphasized the importance of trust and principled consistency, but the operational model for trusted nodes remains under‑explored.
Speaker: Josephine Teo
What are the financial stability risks posed by AI deployment in markets, and how can regulators mitigate them?
Georgieva identified financial stability as a top risk, indicating a need for deeper analysis of AI‑induced market volatility.
Speaker: Kristalina Georgieva
How can the digital infrastructure gap across countries be accurately measured and closed to enable equitable AI adoption?
She pointed out that differing digital infrastructure determines AI’s impact, calling for systematic assessment and investment strategies.
Speaker: Kristalina Georgieva
What ethical guardrails and foundational principles are required to ensure AI serves as a ‘force for good’ rather than ‘force for evil’?
Georgieva stressed the lack of strong ethical foundations, highlighting a research need for robust, innovation‑friendly AI ethics frameworks.
Speaker: Kristalina Georgieva
What metrics and methodologies can be used to gauge public trust in AI across diverse societies?
Teo linked future success to citizens’ trust in AI, implying the necessity to develop reliable trust measurement tools.
Speaker: Josephine Teo
How can the WTO collaborate with other international organizations and national authorities to create a holistic approach to AI‑related trade, competition, and labor policies?
Hill mentioned the need for partnership beyond the WTO, suggesting research on inter‑institutional coordination mechanisms.
Speaker: Joanna Hill
What are the long‑term macroeconomic effects of AI‑induced productivity gains on income inequality within and between countries?
Georgieva warned that AI could widen disparities, indicating a need for detailed macro‑economic modeling of AI’s distributional impacts.
Speaker: Kristalina Georgieva
What role, if any, should a new international agency or treaty play in governing AI, given existing institutions?
The moderator referenced earlier calls for an “international atomic energy agency for AI,” suggesting investigation into the necessity and design of such a body.
Speaker: Mariano Florentino Cuellar (moderator)
How can policy frameworks balance the need for AI regulation with the risk of stifling innovation, especially in developing economies?
Teo cautioned against over‑reliance on regulation to solve inequality, highlighting a research gap on optimal regulatory balance.
Speaker: Josephine Teo
What are effective strategies for fostering entrepreneurship and AI skill development in economies where demand exceeds supply?
Georgieva noted mismatches between AI skill demand and supply, pointing to a need for targeted entrepreneurship and education policies.
Speaker: Kristalina Georgieva

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable

Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel convened to examine how artificial intelligence can be responsibly integrated into health care to improve outcomes [101]. Participants stressed that, unlike other domains, health care demands virtually zero tolerance for error, arguing that AI systems must be verifiable rather than opaque “black boxes” [22-30]. Zameer proposed a “glass-box” approach where every input, patient criteria, and model logic is documented and safeguards prevent harmful prescriptions [25-30]. He called on partners to develop pathways for verified AI, emphasizing the need for transparent decision trails akin to legislative requirements [31-35]. Professor Dasgupta highlighted concrete responsible-AI initiatives, including funding ambient AI for clinical note-taking, tele-surgery that enables remote operations, and autonomous robotic systems for procedures such as gallbladder removal [46-53]. He warned that without diverse data and equitable access, these technologies risk widening existing health disparities [49-51]. Dasgupta also identified a critical skills gap, noting that few medical curricula teach AI, and urged investment in education to sustain adoption [60]. Several speakers reiterated that AI’s promise must be anchored in human-centered design, with clinicians remaining in the loop to ensure safety and trust [74-75][92]. Haitham highlighted the need for coordinated donor strategies and priority setting to mobilize resources effectively [70]. Alain Labrique and Payden summarized emerging themes, stating that AI in health has moved from speculative possibility to a stage where investment, implementation, and measurable impact are paramount [101-104]. They argued that investment must flow into governance, regulatory frameworks, evidence generation, workforce readiness, and long-term partnerships to make AI a tool for equity rather than a source of new inequalities [110-114]. 
Trust was described as the “currency” that unlocks sustainable investment, requiring transparency, robust evidence, and cross-sector collaboration [115-119]. The panel concluded that while AI offers transformative potential, its health benefits will materialize only through coordinated, human-centered, and rigorously validated efforts [101][115-119].


Keypoints

Major discussion points


Verified, transparent AI is essential for patient safety.


The panel stressed that health-care AI must move from a “black-box” to a “glass-box” model, providing a full audit trail of inputs, logic, and outputs and guaranteeing zero-risk prescribing - especially to avoid catastrophic errors - and called for partners to develop pathways to “verified AI.” [24-30][31-33]


Investment must shift from pure innovation to implementation, governance, and equity.


Speakers noted that AI in health has reached an inflection point where the focus is now on funding systems that ensure safety, trust, and scalability through regulation, evidence generation, workforce readiness, data infrastructure, and long-term partnerships; without these foundations AI will exacerbate inequalities. [101-110][111-118]


Changing entrenched clinical practice requires dedicated research, evaluation, and skill-building.


The difficulty of altering long-standing medical workflows was highlighted, with a call for substantial investment in clinical research to generate evidence and for embedding AI education in medical and nursing curricula to ensure successful adoption. [40-41][58-60]


Global equity and diverse data are critical for AI impact.


Initiatives such as “Responsible AI UK” aim to place AI champions in hospitals worldwide, expand partnerships in India and Africa, and develop technologies like tele-surgery to reach underserved populations, emphasizing that diverse datasets are necessary to avoid bias and achieve equitable health outcomes. [46-51][52-55]


Human-centered AI and societal impact must guide development.


Multiple speakers urged that AI tools keep humans at the core, stressing the need to consider broader societal effects rather than just technical capability, and to move from theoretical promises to tangible progress that benefits patients and societies. [73-75][90-92][81-82]


Overall purpose / goal of the discussion


The panel convened to evaluate the current state of artificial intelligence in health care, identify the critical gaps that prevent AI from delivering equitable health outcomes, and agree on concrete priorities-verification, investment in implementation infrastructure, workforce development, and global collaboration-to transition AI from a promising concept to a trusted, impactful tool for all populations.


Overall tone and its evolution


– The session opened with formal, repetitive gratitude, establishing a courteous atmosphere.


– It then moved into a serious, analytical tone as speakers dissected challenges such as verification, risk, and clinical inertia.


– Mid-discussion, the tone became more urgent and passionate, emphasizing equity, global partnerships, and the need for decisive investment.


– The closing remarks shifted to an optimistic, collaborative tone, calling for collective action and expressing confidence that coordinated effort can realize AI’s health-care potential.


Speakers

Alain Labrique


Expertise/Role/Title: Panel moderator/facilitator for the WHO Strategic Roundtable on AI for Health; senior leader in global digital health initiatives[S3]


Prokar Dasgupta


Expertise/Role/Title: Working surgeon, innovator, and representative of Responsible AI UK; speaker on implementation of AI in clinical practice[S4]


Haitham Ali Ahmed El-Noush


Expertise/Role/Title: (role not specified in transcript or external sources)


Zameer Brey


Expertise/Role/Title: (role not specified in transcript or external sources)


Ken Ichiro Natsume


Expertise/Role/Title: Assistant Director General at the World Intellectual Property Organization (WIPO)[S9]


Payden P.


Expertise/Role/Title: (role not specified in transcript or external sources)


Justice Prathiba M. Singh


Expertise/Role/Title: Justice (judicial officer)[S13]


Additional speakers:


Justice Simo


Expertise/Role/Title: Justice (judicial officer) – mentioned briefly in the discussion; no further details provided.


Dr. Pagan


Expertise/Role/Title: (role not specified) – referenced by Alain Labrique during closing remarks.


Elaine


Expertise/Role/Title: (role not specified) – referenced by Zameer Brey in the context of verified AI discussions.


Professor Dasvipta


Expertise/Role/Title: Likely the same individual as Professor Prokar Dasgupta (surgeon and AI innovator); mentioned under a variant spelling.


Full session report
Comprehensive analysis and detailed insights

Opening – The session began with a series of courteous thank-you remarks from the moderators, establishing a respectful atmosphere before moving into substantive discussion [1-6].


Zameer Brey – AI-assist workflow & verification


Brey described the current clinician workflow: clinicians complete notes, prescribe, and counsel before pressing an “AI-assist” button, and warned that moving this button earlier without rigorous validation would change outcomes unpredictably [12-13]. He repeatedly emphasized “this is the product flow” to underline the end-to-end AI-assist pathway [14-16]. Brey introduced a “fourth level” of evaluation – asking whether the improvement actually yields a health-outcome benefit – and argued that without evidence at this level AI cannot be justified [17-19]. Using a flight-safety analogy, he stated that the bar should be 0 % risk of failure, 0 % risk of error and insisted that health-care must meet this standard [22-24]. To achieve it he advocated a “glass-box” model in which every input, patient criterion and logical pathway is documented, providing a full audit trail and safeguards against catastrophic errors such as allergic reactions [25-31]. He invited partners to co-create verified-AI pathways and linked the need for a “chain of proof” to legislative traceability requirements [31-35].


Transition to Professor Prokar Dasgupta


Dasgupta, representing Responsible AI UK, outlined a programme that places AI champions in hospitals worldwide and extends partnerships to India and Africa, where the need is greatest [46-48]. He gave concrete examples: (1) ambient AI for automatic note-taking that saves operating-room time [49-50]; (2) “tele-surgery 2.0” enabling a surgeon to operate from 2 500 km away with ≤60 ms latency [51-52]; (3) a fully autonomous gallbladder-removal robot claimed to be 100 % accurate [52-55]. Dasgupta stressed that without diverse, representative data the AI battle cannot be won and highlighted a critical skills gap – few medical or nursing schools teach AI, so embedding AI education is essential [56-58]. He then called for multi-sector collaboration, naming the “three C’s”: companies, countries, civil society [59-60].


Human-centred AI – other panelists


Alain Labrique argued that impact, not accuracy, should be the benchmark for AI in health and explicitly said “we have humans in the loop” to make behavioural change possible [64-66]. Ken Ichiro Natsume echoed this, insisting that AI must be leveraged with “humans at the centre of utilisation” [67-69]. Justice Prathiba M. Singh added a succinct rallying cry for a healthier world through collaborative AI-health technology [70-71].


Donor coordination


Haitham Ali Ahmed El-Noush called for coordinated donor strategies, stating “for donors, we need coordination… to rally behind AI-health initiatives” [72-73].


Societal-impact framing


Dasgupta concluded his segment with a memorable line: “sell Mexico the test… from the Turing test to the Weizenbaum test,” emphasizing the need for societal-impact-focused evaluation of AI systems [74-76].


Final synthesis – Payden P.


Payden noted that AI in health has reached an inflection point: the question is no longer “can AI improve health?” but whether we will invest in the right foundations to ensure it does so for everyone. He outlined the pillars needed for sustainable impact – governance, regulation, evidence generation, workforce readiness, robust data infrastructure, and long-term cross-sector partnerships – and asserted that “trust is the currency that unlocks sustainable investment” [77-80][81-86][87-89]. He called for coordinated action among donors, governments, industry and civil society to build the trust-building infrastructure required for equitable impact [90-92].


Overall convergence – The discussion coalesced around three inter-linked pillars: (1) a verifiable, glass-box AI architecture that strives for 0 % risk of failure and error; (2) substantial, coordinated investment in governance, evidence generation, workforce training and data diversity; and (3) a human-centred approach that prioritises equitable outcomes and societal impact over purely technical benchmarks. Addressing the highlighted tensions – the acceptable level of risk and the degree of machine autonomy – will be essential for shaping policy frameworks and funding models that can sustain the promised transformation of health care through AI.


Session transcript
Complete transcript of the session
Haitham Ali Ahmed El‑Noush

Thank you.

Zameer Brey

Thank you. So think about this: you’ve done all your hard work, you’ve made your notes, you’ve written your prescription, you’ve counseled the patient, and now you press AI assist. No, thank you. All they did was move the AI-assist button earlier on and give the user the discretion to use it when it made sense to that user, and the results changed. The fourth level is: to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is: what’s fundamentally going to shift? Is this going to help us diagnose TB better, or help with adherence in diabetes, etc.?

So these are some of the fundamental questions, and I think we’ve got caught up with investment at levels one and two: let’s just check how this model works; let’s just check the product. We haven’t given enough investment to how this gets integrated into the real world – let’s just see how this goes, this is the product flow – and then, ultimately, how does this shift outcomes over time? Can I take one more minute and talk about verified AI, or should I come back to this? I was thinking to myself – and it’s probably a bad analogy, but I’m going to put it out there anyway; I’m flying this evening, which is why I didn’t want to use it – if I said to you all: would you fly if the likelihood of the flight arriving safely was 95 %? Would I fly, would you fly, if it was 95 %? If I told you it was 96, 97 or 98, would you fly? No. Even 99 % – just think for a second: if it was 99 %, that means every 100th flight taking off from Delhi airport would crash.

We wouldn’t fly. We’d go: oh, right, we’ll take some other means of transport. And the reason I’m emphasizing this is that when it comes to health care, the bar should be 0 % risk of failure, 0 % risk of error. And so with Elaine and many other partners, we’re starting to have this discussion: how do you get AI to be verifiable, so that you know that whatever the input is, you can document it, and it’s transparent? And we spoke about this, which is: can we shift the narrative from black box to glass box? Can we really know why the model made a particular decision? We gave it X input. The patient had these criteria. Here’s the logic model.

and it gave that particular output. But when it gives that output, can we put some safeguards in place that make 100 % sure that it isn’t prescribing something the patient’s allergic to, or that’s going to end up in a catastrophic event, or that’s fundamentally flawed in its logic? And that’s where we’d like to invite partners to work with us on a pathway to verified AI. Thank you. And I can see Justice Simo. So Justice Simo is just nodding her head because, I think, you know, having that chain of proof is something we like to have in legislation. So, you know, it’s always nice when there’s a trail to follow to that decision. We couldn’t have queued it up better today, because the next person I’m going to ask is Professor Dasgupta, who is a clinician and an innovator.

I’m sure you’ve experienced the recalcitrance and challenge of shifting medical practice. And, you know, nurses and doctors are well known for being entrenched in their way of doing things. Changing those well-worn and well-trodden paths of workflows and clinical decision pathways is very difficult. So what kind of investment do we need to make in clinical research and evaluation and evidence to shift those well-trodden paths of practice? Professor Dasgupta.

Prokar Dasgupta

Namaskar. Thanks. You must realize that I am a working surgeon, so in addition to invention and innovation, what I’m really interested in is implementation. I want to make a difference. And if you are a patient someday, it will make a difference to you. I come here on behalf of Responsible AI UK, a major investment from UK Research and Innovation, not just in AI in the UK, but into an international ecosystem, including the Global South. We put AI champions in every hospital, and we are trying to expand to our partners in India and in Africa, where it is needed the most. Let me give you some examples of how we are doing this. Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes.

Shortening the operating time, saving a month of wasted time in the operating room. The British Association of Physicians of Indian Origin realized: wouldn’t it be wonderful if our parents, many of whom are living in India (my mother is 87), before she has a heart attack, wouldn’t it be nice if a message on my watch told me something was going to happen? The reason I decided to make a note of this is because the data is not diversified enough; without diversity of data we are not going to win this battle. Let me give you another example of investment in equity. Two weeks ago, if you look at the British Medical Journal, there is a major article from us on tele-surgery 2.0. It means the technology exists for a surgeon to operate from two and a half thousand kilometers away, using a robot, with a time delay of 60 milliseconds or less; it feels like you’re in the same operating room. Imagine this investment being one of the solutions for the 5 billion patients who do not have access to equitable surgery. That is an example. Let me give you a third example, and this is in automation. My own group at King’s has funded and invested in automation big time. The levels of autonomy in robotics go from 0 to 5, where 0 is no autonomy; the most autonomous machine today is level 3. You map the prostate with the ultrasound – all the men in this room have a prostate, and as we know, we have difficulty in peeing – you mark the middle of the prostate with an ultrasound, you press a button, and a water jet ablates the middle of the prostate so that you don’t have to wake up 20 times at night to pee. That was until last November, when one university announced the first robotic system in the world which can operate on big gallbladders.

Big gallbladders, with 100 % accuracy. Five days after this, I was at the Royal Academy of Engineering, a group like this, and I said: hands up, everyone who is going to allow this machine to operate on them – everyone who will allow a completely automated machine, 100 % accurate in pigs, to take out your gallbladder here. Any takers? There was one hand in the room. On another occasion, there was a single hand in the room. So we went and asked the public, but they are saying: not yet. They still need to hear more, still today. So: companies, of course we have to work with them; countries, including the government side; and civil society – the three C’s. If we do not bring our patients with us, all this investment is going to fail. And the final investment I would urge is in skills: there are hardly any medical and nursing schools in the world which have AI in the curriculum. If we do not have this embedded in the education of the next generation of healthcare workers, we are going to fail. So these are my parting thoughts to you. Thank you.


Alain Labrique

and replace impressive with impactful, focusing on things that get used and work in the real world. A benchmark might be the wrong thing: not accuracy, but actually impact. And then, of course, you know, the challenge that Professor Dasgupta brought to us: it does take time to change behavior, but it is possible, as long as, for the moment, we have humans in the loop. So I’d like to give each of you one sentence now, just to wrap up. As you’ve heard the others, what has changed your thoughts, and what’s the one message you’d like people to leave the room with? And let me just go sequentially down the row. Thank you.

Haitham Ali Ahmed El‑Noush

So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind.

Alain Labrique

Fantastic. Ineji.

Ken Ichiro Natsume

Thank you. I think we’re asked to respond in one sentence. I haven’t changed my mind, but one point resonated with my heart, which I was not able to mention in my opening sentence. One thing I’d like to highlight is that we can leverage artificial intelligence with human beings at the center of its utilization. So that’s what I want to highlight. Thank you.

Justice Prathiba M. Singh

That’s the thing. I’m going to actually say one sentence. Here’s to a healthier world, where AI and technology really work together.

Zameer Brey

Fantastic. Professor.

Prokar Dasgupta

For AI tools and for the patients, I urge you to think not just about what these machines can do for us, but about the societal effects of these machines. The change has to go from the Turing test of yesterday to the Weizenbaum test of today.

Zameer Brey

I think for me, the question of how we move from promise to progress is underpinned by a theme that I’m seeing at the conference: we need to keep humans at the center of the AI revolution.

Alain Labrique

Fantastic. So, Dr. Payden, you’ve been patiently taking in these wise words from our panel. I’d like to give you the last word to bring this home and leave the audience with food for thought before they go for food for their stomachs.

Payden P

Thank you very much. Good afternoon to all. Sincere thanks to all the… I think it’s on. Yes. Sincere thanks to all the distinguished panelists for this very thought-provoking and very interesting conversation around AI and health. Today’s conversation makes one thing very clear: AI in health has reached an inflection point. For years we spoke about possibility; today the conversation has shifted to investment, implementation, and impact, a point highlighted and emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest in the right foundations to ensure it improves health for everyone, not a few. Over the past hour, several themes have emerged.

The first is around investment. Investment must go beyond innovation. It must flow into the systems that make innovation safe, trusted, and scalable: governance and regulation, evidence generation, workforce readiness and capacity building, which came through very clearly, data systems, and long-term partnerships. These are not optional. They are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities. Second, predictability builds confidence. When countries strengthen regulatory and legal frameworks, investment flows in. When evidence is generated and transparently shared, investment grows. When partnerships are built across sectors, investment scales. In short, trust is the currency that unlocks sustainable investment. These are some of the important points I take away from here.

And we look forward to working with different partners, investors, donors, government agencies to take AI and health further for the benefit of all the populations. Thank you.

Alain Labrique

Thank you so much. Those are reserved test patients in writing from the Capacity Building Commission and Curfew Borrow.

Related Resources: knowledge base sources related to the discussion topics (33)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (medium)

“The session began with courteous thank‑you remarks from the moderators, establishing a respectful atmosphere before moving into substantive discussion.”

The knowledge base records similar courteous opening statements in other sessions, e.g., formal greetings and respectful tones at the start of meetings [S87] and [S66].

Confirmed (high)

“Brey introduced a “fourth level” of evaluation – assessing whether the improvement yields a health‑outcome benefit – and argued that without evidence at this level AI cannot be justified.”

A detailed explanation of four evaluation levels, with the fourth being impact evaluation (health outcomes), is documented in the knowledge base [S92].

Additional Context (medium)

“Using a flight‑safety analogy, he stated that the bar should be 0 % risk of failure, 0 % risk of error and insisted that health‑care must meet this standard.”

The knowledge base discusses using airplane-safety analogies to define risk as probability of undesirable outcomes, but does not assert a 0 % risk requirement, providing nuance to the analogy [S94].

Confirmed (medium)

“He advocated a “glass‑box” model in which every input, patient criterion and logical pathway is documented, providing a full audit trail.”

The need for a complete audit trail, with all relevant information recorded, is highlighted in the knowledge base [S95].

Confirmed (medium)

“Dasgupta outlined a programme that places AI champions in hospitals worldwide and extends partnerships to India and Africa.”

The knowledge base notes initiatives that make AI accessible across African and Asian (including Indian) contexts, supporting the claim of partnerships in those regions [S101]; Dasgupta is also listed as a speaker at the WHO roundtable on AI for health [S6].

External Sources (102)
S1
Classification of Digital Health Interventions v1.0 — 1. Hawkins, R. P., et al. (2008). Understanding tailoring in communicating about health. Health Education Research, 23(3…
S2
Multistakeholder Dialogue on National Digital Health Transformation — Alain Labrique: Fantastic. Thank you, Leah. I really appreciate everyone’s partnership. and engagement this morning,…
S3
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Alain Labrique- Panel moderator/facilitator
S4
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Professor Prokar Dasgupta, speaking as both a practicing surgeon and innovator, provided sobering real-world evidence of…
S5
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Haitham Ali Ahmed El‑Noush- Role/expertise not specified in transcript
S6
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Speakers:Haitham Ali Ahmed El‑Noush, Payden P.
S7
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S8
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S9
Panel Discussion AI and the Creative Economy — -Kenichiro Natsume: Assistant Director General at WIPO (World Intellectual Property Organization), works on policy side …
S10
Panel Discussion AI and the Creative Economy — Yes. Yes. three crucial elements. I’ll start with Nicholas Granatino, who’s on the business side, the chairman of Tara G…
S11
https://app.faicon.ai/ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — I’m seeing this big flashing red sign which says time’s up. I don’t know, mine or the panel’s. I’m hoping it’s only the …
S12
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Haitham Ali Ahmed El‑Noush- Payden P. – Prokar Dasgupta- Payden P.
S13
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Prokar Dasgupta- Justice Prathiba M. Singh
S14
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Speakers:Prokar Dasgupta, Justice Prathiba M. Singh Speakers:Prokar Dasgupta, Payden P., Justice Prathiba M. Singh
S15
How Small AI Solutions Are Creating Big Social Change — Zameer Brey advocates for AI systems that achieve near-perfect reliability and transparency. He argues for moving from b…
S16
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S17
Donor roundtable: Enabling impact at scale in supporting inclusive and sustainable digital economies — Furthermore, the speakers recognize digitalization and development as important priorities. They stress the need for pol…
S18
Successes & challenges: cyber capacity building coordination | IGF 2023 — In today’s world, cyberattacks and cybercrime incidents are on the rise, resulting in international, governmental, multi…
S19
Ethical AI_ Keeping Humanity in the Loop While Innovating — But, you know, it’s really the pervasiveness of technology that touches our life, each and every day in many ways. And t…
S20
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Artificial intelligence (AI) has emerged as a powerful tool in healthcare, enhancing diagnosis, optimizing resource allo…
S21
Conversational AI in low income & resource settings | IGF 2023 — Addressing healthcare inequity requires collaboration and the appropriate use of technology. Inequities exist not only a…
S22
Robotics and the Medical Internet of Things /MIoT — However, there are privacy and security concerns associated with the use of AI and robots in medicine. Patients have leg…
S23
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Dennistonhas outlinedthe core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S24
Boosting digital collaboration for resilience and sustainability in shipping (RISE) — The analysis argues that once an innovation is discovered and can be scaled, the private sector can rely on accessing gl…
S25
A Guide for Practitioners — Developing a strategy for addressing a problem requires an understanding of the causes of the problem. Several facto…
S26
https://app.faicon.ai/ai-impact-summit-2026/catalyzing-global-investment-in-ai-for-health_-who-strategic-roundtable — We couldn’t have queued it up today because our next person I’m going to ask is Professor Dasgupta, who is a clinician a…
S27
Empowering Workers in the Age of AI — Verick emphasised that the benefits of AI adoption are similarly unequal, with the global north positioned to capture mo…
S28
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S29
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S30
Advancing Scientific AI with Safety Ethics and Responsibility — And also, very importantly, how we have to also see it from the context of, you know, people doing their own thing, DIY …
S31
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S32
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S33
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S34
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S35
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S36
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S37
Enhancing rather than replacing humanity with AI — Humans retain agency and choice regarding when and how to use the technology.
S38
Transforming Health Systems with AI From Lab to Last Mile — This comment cuts through the typical AI hype by highlighting a fundamental contradiction in how AI discussions are cond…
S39
AI brings robots closer to autonomous surgery — A team from Johns Hopkins and Stanford hastrainedrobotic systems to perform surgical tasks with human-like precision. Us…
S40
Digital ECOnOMy POliCy lEgal inStRuMEntS — –  Accepting it: ‘taking the risk’ and accepting the effect of uncertainty on the objectives, including partial or comp…
S41
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Different sectors show varying risk tolerance levels, with Ekudden noting that enterprise risk assessment has become “qu…
S42
WSIS Action Line C10: Ethics in AI: Shaping a Human-Centred Future in the Digital Age — Low level of fundamental disagreement with moderate differences in implementation strategies. The speakers largely agree…
S43
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S44
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — Disagreement level:The disagreement level is moderate but significant, particularly around philosophical approaches to s…
S45
Building the Next Wave of AI_ Responsible Frameworks & Standards — Chokshi’s approach involves embedding guardrails throughout the entire AI lifecycle: during input processing, reasoning …
S46
How nonprofits are using AI-based innovations to scale their impact — The discussion revealed that successful AI implementation in the social sector requires fundamentally different approach…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S48
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S49
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — Allow different risk tolerance levels (85% vs 99.99% accuracy) based on specific use cases and industry requirements
S50
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen explains that the EU’s AI regulation is not as comprehensive as critics suggest, focusing primarily on high-ri…
S51
Table of Contents — – “Risk avoidance”: Eliminate the risk by either countering the threat or removing the vulnerability. (Compare: “avoida…
S52
Authors of this report — Humanitarian agencies work in the space where fundamental values are in play as questions of life and death, indeed wher…
S53
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S54
Robotics and the Medical Internet of Things /MIoT — However, there are privacy and security concerns associated with the use of AI and robots in medicine. Patients have leg…
S55
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S56
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Dennistonhas outlinedthe core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S57
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innova…
S58
https://dig.watch/event/india-ai-impact-summit-2026/catalyzing-global-investment-in-ai-for-health_-who-strategic-roundtable — And emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest i…
S59
In brief — – External evidence from systematic research: valid and clinically relevant findings from patient-centred clinical resea…
S60
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S61
Empowering Workers in the Age of AI — Verick emphasised that the benefits of AI adoption are similarly unequal, with the global north positioned to capture mo…
S62
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — A regulamentação da informação e proteger as indústrias criativas de nossos países. O modelo atual de negócios dessas em…
S63
AI Governance Dialogue: Presidential address — ### Human-Centered Development Ettore Balestrero: On behalf of His Holiness Pope Leo XIV, I would like to extend his co…
S64
Welcome Address — Modi emphasizes that AI development must focus on human values rather than purely machine efficiency. A human‑centric ap…
S65
WSIS Action Line C7:E-Science: Open Science, Data, Science cooperation, IYQ, International Decade of Science for Sustainable Development — Human agency and rights must remain central to technological development, with emphasis on people-centered impact metric…
S66
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S67
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S69
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S70
The Future of Digital Agriculture: Process for Progress — This echoed the courteous opening, reinforcing the themes of gratitude and collaborative effort that were recurring moti…
S71
Can National Security Keep Up with AI? / Davos 2025 — The overall tone was serious and analytical, with panelists offering measured perspectives on complex issues. There were…
S72
Rewriting Development / Davos 2025 — The tone was largely serious and analytical, with speakers offering critical assessments of current development models. …
S73
Evolving Threat of Poor Governance / DAVOS 2025 — The tone was largely serious and analytical, with panelists offering thoughtful insights on complex governance challenge…
S74
WS #213 Hold On, We’re Going South: beyond GDC — The tone was generally serious and analytical, with speakers providing expert perspectives on complex policy issues. The…
S75
Day 0 Event #105 Women In IGF — The tone was primarily serious and focused, with speakers presenting statistics and discussing challenges. However, ther…
S76
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S77
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and calls for action, with many speakers emphasizing the need for immediate reforms …
S78
(Interactive Dialogue 3) Summit of the Future – General Assembly, 79th session — Mohamed Muizzu: I thank my esteemed co-chair for his statement. Allow me at this point to make a few personal national…
S79
New Technologies and the Impact on Human Rights — The discussion maintained a collaborative and constructive tone throughout, despite addressing complex and sometimes con…
S80
(Plenary segment) Summit of the Future – General Assembly, 4th plenary meeting, 79th session — The tone of the discussion was generally optimistic and forward-looking, with speakers emphasizing the need for urgent a…
S81
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S82
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S83
Keynote-Brad Smith — Overall Tone:The tone is optimistic yet realistic, maintaining a balance between acknowledging serious challenges and ex…
S84
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S85
Keynote-Brad Smith — The tone is optimistic yet realistic, maintaining a balance between acknowledging serious challenges and expressing conf…
S86
Keynotes — At the European Dialogue on Internet Governance (EuroDIG) 2024, the imperative of multistakeholder collaboration in shap…
S87
Ad Hoc Consultation: Thursday 1st February, Morning session — In a formal and courteous address, the speaker began by respectfully acknowledging the presiding official, Madam Chair, …
S88
Open Forum #60 Cooperating for Digital Resilience and Prosperity — The discussion maintained a consistently collaborative and constructive tone throughout. It was professional yet engagin…
S89
AI tool improves accuracy in detecting heart disease — A team of researchers at Mount Sinai Hospital in New Yorkhas successfullycalibrated an AI tool to more accurately assess…
S90
Scaling Innovation Building a Robust AI Startup Ecosystem — The ceremony concluded with memento presentations to the recognized startups by STPI officials. Shri Praveen Kumar deliv…
S91
Biology as Consumer Technology — Additionally, the analysis underscores the significance of technology and partnerships in agriculture. Abbott notes that…
S92
How nonprofits are using AI-based innovations to scale their impact — Evidence:Detailed explanation of the four levels: user experience (technical metrics), user behavior (engagement pattern…
S93
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S94
Building Trustworthy AI Foundations and Practical Pathways — Risk should be defined as probability of undesirable outcomes characterized by likelihood and severity, using airplane s…
S95
9.1.1 Why? — To provide a full audit trail. All relevant information that builds up the trail has to be available. As such, completen…
S96
High-Level Session 3: Exploring Transparency and Explainability in AI: An Ethical Imperative — Doreen Bogdan-Martin: Thank you. Maybe, Your Excellency, just to also pick up on the work we’ve done in terms of the AI …
S97
Can we test for trust? The verification challenge in AI — Anja Kaspersen: I think, you know, following up on the issue of language, I think one thing that we often overlook is th…
S98
The Innovation Beneath AI: The US-India Partnership powering the AI Era — He sees a large opportunity for U.S. and Indian firms to co‑create companies that will build refining capacity and reduc…
S99
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: Good afternoon, dear delegates and participants. It’s a great honor for me to have the opportunity to intro…
S100
How AI Is Transforming Indias Workforce for Global Competitivene — This panel discussion at the AI India Summit focused on AI and workforce transformation, examining how artificial intell…
S101
Responsible AI for Shared Prosperity — This discussion brought together international government officials, technology leaders, and development organisations t…
S102
From India to the Global South_ Advancing Social Impact with AI — you know first I’m sorry I got a bit late I was in hall number 17, 19 you know what was happening there they had identif…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Zameer Brey
1 argument · 82 words per minute · 789 words · 572 seconds
Argument 1
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey)
EXPLANATION
Brey argues that AI systems used in healthcare must be fully transparent and verifiable, moving from a black‑box to a glass‑box approach, so that any decision can be traced and validated. He stresses that the acceptable risk level in health care should be zero, unlike other domains where some risk is tolerated.
EVIDENCE
He uses a flight-safety analogy, asking whether people would fly if the chance of a crash were 95 % or higher, illustrating that even small risks are unacceptable in health care [22-24]. He then calls for AI that is verifiable, with documented inputs and transparent logic, and proposes safeguards to ensure outputs never prescribe harmful treatments or contain logical flaws [25-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Brey’s advocacy for glass‑box, fully auditable AI systems is documented in S15 and reinforced by the broader call for transparent AI in S16.
MAJOR DISCUSSION POINT
Verification and Safety of AI in Healthcare
AGREED WITH
Ken Ichiro Natsume, Justice Prathiba M. Singh
DISAGREED WITH
Prokar Dasgupta
Prokar Dasgupta
1 argument · 108 words per minute · 743 words · 410 seconds
Argument 1
Emphasis on implementation, equitable access, diverse data, tele‑surgery, and workforce training (Prokar Dasgupta)
EXPLANATION
Dasgupta stresses that the real challenge is moving AI innovations into practice, ensuring they reach underserved populations, and that data diversity is essential for reliable outcomes. He highlights concrete projects that illustrate equitable implementation and the need for skilled personnel.
EVIDENCE
He describes Responsible AI UK’s programme of placing AI champions in hospitals and expanding to India and Africa, funded evaluations of ambient AI that reduced operating-room time, a tele-surgery 2.0 concept enabling surgeons to operate from 2,500 km away, and a fully autonomous robotic system for gallbladder surgery, all illustrating implementation and equity efforts [46-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Dasgupta’s focus on real‑world implementation, equity, diverse datasets, tele‑surgery concepts and workforce capacity building is described in S3 and echoed in S6.
MAJOR DISCUSSION POINT
Implementation, Equity, and Capacity Building
AGREED WITH
Zameer Brey, Payden P.
DISAGREED WITH
Zameer Brey
Alain Labrique
1 argument · 87 words per minute · 219 words · 150 seconds
Argument 1
Investment must extend beyond innovation to governance, regulation, evidence generation, workforce readiness, and long‑term partnerships (Alain Labrique)
EXPLANATION
Labrique argues that merely funding AI research is insufficient; sustainable impact requires investment in the systems that make AI safe, trusted, and scalable, including regulatory frameworks, evidence generation, and capacity building. He frames trust as the essential currency that will attract further investment.
EVIDENCE
He outlines that investment should flow into governance, regulation, evidence generation, workforce readiness, capacity building, data systems, and long-term partnerships, describing these as enabling conditions for equitable AI and stating that trust unlocks sustainable investment [108-119].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for investment in governance, regulation, evidence generation, workforce readiness and long‑term partnerships, as well as the role of trust, is highlighted in S3.
MAJOR DISCUSSION POINT
Investment, Governance, Trust, and Impact
AGREED WITH
Payden P., Prokar Dasgupta, Haitham Ali Ahmed El‑Noush
Haitham Ali Ahmed El‑Noush
1 argument · 10 words per minute · 47 words · 258 seconds
Argument 1
Coordination among donors and development of strategic priorities and investments are essential (Haitham Ali Ahmed El‑Noush)
EXPLANATION
El‑Noush calls for donors to coordinate their efforts, develop clear strategies, set priorities, and pool investments so that resources can be mobilized effectively toward shared goals.
EVIDENCE
He explicitly states, “for donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind” [70].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Donor coordination, strategic priority setting and pooled investment mechanisms are discussed in S17, aligning with El‑Noush’s point.
MAJOR DISCUSSION POINT
Investment, Governance, Trust, and Impact
Payden P.
1 argument · 117 words per minute · 276 words · 141 seconds
Argument 1
AI and health has reached an inflection point; trust is the currency that unlocks sustainable, equitable investment (Payden P.)
EXPLANATION
Payden declares that AI in health has moved from speculative possibilities to a stage where concrete investment, implementation, and impact are required. He emphasizes that building trust through governance, evidence, and partnerships is essential for sustainable and equitable scaling.
EVIDENCE
He notes that AI and health have reached an inflection point, that the conversation has shifted to investment, implementation, and impact, and that trust is the currency that unlocks sustainable investment, citing the need for governance, evidence generation, and partnerships [101-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The inflection point of AI in health and the framing of trust as the currency for sustainable investment are noted in S3 and reiterated in S6.
MAJOR DISCUSSION POINT
Investment, Governance, Trust, and Impact
AGREED WITH
Alain Labrique, Prokar Dasgupta, Haitham Ali Ahmed El‑Noush
Ken Ichiro Natsume
1 argument · 143 words per minute · 84 words · 35 seconds
Argument 1
AI should be leveraged with humans at the center of utilization (Ken Ichiro Natsume)
EXPLANATION
Natsume stresses that AI solutions must keep humans central, ensuring that technology augments rather than replaces human judgment and that people remain the focal point of AI deployments.
EVIDENCE
He says, “we can leverage artificial intelligence with human being at the center of those utilizations” and emphasizes this point as his key message [72-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Natsume’s emphasis on keeping humans central to AI utilization is recorded in S6 (and also referenced in S3).
MAJOR DISCUSSION POINT
Human‑Centered AI and Collaboration
AGREED WITH
Zameer Brey, Justice Prathiba M. Singh
DISAGREED WITH
Prokar Dasgupta
Justice Prathiba M. Singh
1 argument · 120 words per minute · 27 words · 13 seconds
Argument 1
Collaboration between AI and health technologies is key to a healthier world (Justice Prathiba M. Singh)
EXPLANATION
Justice Singh highlights the importance of partnership between AI and health technologies, suggesting that their combined effort can lead to improved global health outcomes.
EVIDENCE
She delivers a concise statement: “Here’s to a healthier world. AI and technology, we really work together in the world” [77-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Singh’s call for collaborative integration of AI and health technologies to improve global health outcomes is captured in S6.
MAJOR DISCUSSION POINT
Human‑Centered AI and Collaboration
AGREED WITH
Zameer Brey, Ken Ichiro Natsume
Agreements
Agreement Points
Investment must go beyond pure innovation to include governance, regulation, evidence generation, workforce readiness, long‑term partnerships and coordinated donor strategies; trust is presented as the key currency that will unlock sustainable, equitable scaling of AI in health.
Speakers: Alain Labrique, Payden P., Prokar Dasgupta, Haitham Ali Ahmed El‑Noush
Investment must extend beyond innovation to governance, regulation, evidence generation, workforce readiness, and long‑term partnerships (Alain Labrique)
AI and health has reached an inflection point; trust is the currency that unlocks sustainable, equitable investment (Payden P.)
We put AI champions in every hospital… we need investment in skills… if we do not have this embedded in education of the next generation of healthcare workers we are going to fail (Prokar Dasgupta)
For donors, we need coordination, and there is a need to develop strategies, priorities, and investments so we can rally behind (Haitham Ali Ahmed El‑Noush)
All four speakers stress that merely funding AI research is insufficient; systematic investment in enabling structures, coordinated donor action and capacity building is essential, and building trust is crucial for attracting further resources [108-119][101-108][46-60][70].
POLICY CONTEXT (KNOWLEDGE BASE)
The WHO Strategic Roundtable explicitly calls for investment that extends beyond innovation to governance, regulation, evidence generation and workforce readiness, echoing this point [S36]. The UN Security Council discussion stresses trust and transparency as foundational for AI systems, aligning with the trust emphasis [S32].
AI systems for health must be human‑centered, transparent and verifiable (glass‑box), ensuring zero‑risk outcomes and keeping people at the core of decision‑making.
Speakers: Zameer Brey, Ken Ichiro Natsume, Justice Prathiba M. Singh
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) AI should be leveraged with humans at the center of utilization (Ken Ichiro Natsume) Collaboration between AI and health technologies is key to a healthier world (Justice Prathiba M. Singh)
Brey calls for fully auditable AI with zero risk, Natsume stresses keeping humans central to AI use, and Singh highlights collaborative integration of AI and health tech, all converging on a human-centered, transparent approach [22-31][72-75][77-79].
POLICY CONTEXT (KNOWLEDGE BASE)
WHO roundtable advocates a shift from “black box” to “glass box” AI with full traceability of inputs and logic [S33]. A keynote speaker reinforces the need for glass-box transparency [S35]. The AI policy roadmap lists human welfare, accountability and transparency as core principles [S34], while the UN Security Council highlights transparency for public trust [S32].
Equitable access and impact of AI in health, especially for underserved populations, require diverse data, implementation pathways, and outcome‑focused evaluation.
Speakers: Prokar Dasgupta, Zameer Brey, Payden P.
Emphasis on implementation, equitable access, diverse data, tele‑surgery, and workforce training (Prokar Dasgupta) The reason we’re all here is what’s fundamentally going to shift? Is this going to help us get diagnosed TB better or help with adherence in diabetes… (Zameer Brey) The question is whether we will invest in the right foundations to ensure it improves health for everyone, not few (Payden P.)
Dasgupta describes concrete projects targeting equity and diverse datasets, Brey asks whether AI will improve concrete health outcomes like TB diagnosis, and Payden stresses that AI must benefit all, not a few, highlighting a shared focus on equitable impact [46-60][15-16][106-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Local AI policy pathways stress inclusive AI through diverse datasets, infrastructure access and skill development for equity [S47]. Nonprofit AI case studies underline the need for rigorous impact measurement for excluded groups [S46]. The AI policy roadmap emphasizes inclusivity and equitable economic growth [S34], and the Global AI Policy Framework calls for inclusive governance [S43].
Similar Viewpoints
Both emphasize that AI must operate transparently with humans in control to avoid errors and maintain safety [22-31][72-75].
Speakers: Zameer Brey, Ken Ichiro Natsume
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) AI should be leveraged with humans at the center of utilization (Ken Ichiro Natsume)
Both frame trust and robust governance as the linchpin for attracting and sustaining investment in health AI [108-119][101-108].
Speakers: Alain Labrique, Payden P.
Investment must extend beyond innovation to governance, regulation, evidence generation, workforce readiness, and long‑term partnerships (Alain Labrique) AI and health has reached an inflection point; trust is the currency that unlocks sustainable, equitable investment (Payden P.)
Both focus on concrete health outcomes and the need for AI to improve disease diagnosis and management, especially for vulnerable groups [46-60][15-16].
Speakers: Prokar Dasgupta, Zameer Brey
Emphasis on implementation, equitable access, diverse data, tele‑surgery, and workforce training (Prokar Dasgupta) Is this going to help us get diagnosed TB better or help with adherence in diabetes… (Zameer Brey)
Unexpected Consensus
Zero‑risk safety versus claims of 100 % accurate autonomous machines
Speakers: Zameer Brey, Prokar Dasgupta
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) Big gallbladders with 100 % misery, 100 % accurate… (Prokar Dasgupta)
While Brey argues that health AI must have zero tolerance for error, Dasgupta describes a fully autonomous system touted as 100 % accurate, revealing an unexpected alignment in emphasizing absolute safety despite different framings [22-31][52-55].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on AI risk tolerance note that different sectors accept varying accuracy thresholds (e.g., 85% vs 99.99%) and that zero-risk is unrealistic [S41][S49]. Digital Economy policy acknowledges the necessity of accepting residual risk [S40]. Risk-management literature distinguishes risk avoidance from acceptance, providing a framework for this debate [S51].
Overall Assessment

The panel shows strong convergence on three pillars: (1) the necessity of coordinated, trust‑building investment in governance, regulation and capacity building; (2) the imperative that AI be human‑centered, transparent and verifiable; and (3) the goal of equitable, outcome‑focused deployment for underserved populations.

High consensus across most speakers, indicating a shared understanding that future AI‑health initiatives must be underpinned by robust governance, human‑centric design, and equity. This alignment suggests that policy and funding agendas can move forward with a clear, unified set of priorities.

Differences
Different Viewpoints
Acceptable level of risk in health‑care AI: zero tolerance vs willingness to accept residual risk for benefits
Speakers: Zameer Brey, Prokar Dasgupta
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) Emphasis on implementation, equitable access, diverse data, tele‑surgery, and workforce training (Prokar Dasgupta)
Zameer argues that health-care AI must have 0 % risk, using a flight-safety analogy to claim any chance of error is unacceptable [22-24][25-31]. Prokar describes autonomous robotic systems that are claimed to be “100 % accurate” yet acknowledges public hesitation, implying that some risk is tolerable in pursuit of benefits [52-55][56-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Digital Economy policy explicitly states that some residual risk must be accepted when activities cannot be made risk-free [S40]. The “Shaping AI’s Story” report highlights differing risk tolerance levels across sectors and the need for pragmatic regulatory approaches [S41][S49]. Risk management taxonomy further clarifies the distinction between avoidance and acceptance [S51].
Degree of human involvement: keeping humans at the centre of AI utilisation vs pursuing fully autonomous robotic surgery
Speakers: Ken Ichiro Natsume, Prokar Dasgupta
AI should be leveraged with humans at the center of utilization (Ken Ichiro Natsume) Emphasis on implementation, equitable access, … and description of fully autonomous robotic system for gallbladder surgery (Prokar Dasgupta)
Ken stresses that AI must augment rather than replace human judgment, keeping people central to deployments [72-75]. Prokar promotes fully autonomous robotic surgery (e.g., a gallbladder robot operating without human hands) as a future solution, even while noting limited public acceptance [52-55][56-58].
POLICY CONTEXT (KNOWLEDGE BASE)
Research on autonomous robotic surgery demonstrates the push toward fully autonomous systems in clinical settings [S39]. Human-in-the-loop is promoted as a first-class safety feature in responsible AI frameworks [S45]. Enhancing rather than replacing humanity stresses that humans must retain agency over AI use [S37], and analyses of health-AI discourse note a gap between human-centered rhetoric and technology-centric focus [S38].
Unexpected Differences
Human‑centered AI vs fully autonomous AI
Speakers: Ken Ichiro Natsume, Prokar Dasgupta
AI should be leveraged with humans at the center of utilization (Ken Ichiro Natsume) Emphasis on implementation, equitable access, … and description of fully autonomous robotic system for gallbladder surgery (Prokar Dasgupta)
Ken’s explicit call for keeping humans central to AI use [72-75] contrasts with Prokar’s promotion of a completely autonomous surgical robot, a stance that was not anticipated given both speakers’ expertise in AI for health [52-55][56-58].
POLICY CONTEXT (KNOWLEDGE BASE)
The “human-in-the-loop” principle is advocated as essential for safe AI deployment [S45]. WHO roundtable emphasizes human-centered, transparent AI design [S33]. Scholarly commentary on AI in health underscores the importance of preserving human agency [S37], and the AI policy roadmap lists human welfare and accountability as core values [S34].
Zero‑risk requirement vs pragmatic risk acceptance
Speakers: Zameer Brey, Prokar Dasgupta
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) Emphasis on implementation, equitable access, … and description of fully autonomous robotic system for gallbladder surgery (Prokar Dasgupta)
Zameer’s insistence on 0 % risk for health-care AI [22-24][25-31] is at odds with Prokar’s willingness to deploy technologies that, while claimed highly accurate, still involve residual risk and public hesitation [52-55][56-58]. This clash of risk philosophies was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Digital Economy policy accepts that residual risk is inevitable and must be managed rather than eliminated [S40]. “Shaping AI’s Story” discusses pragmatic risk tolerance levels and the need for flexible regulatory approaches [S41][S49]. Risk management literature differentiates between risk avoidance and risk acceptance, providing a conceptual basis for this debate [S51].
Overall Assessment

The panel shows broad consensus that AI can transform health, but speakers diverge sharply on how to ensure safety, the appropriate level of human involvement, and the primary levers for investment. The most salient disagreements revolve around risk tolerance (zero‑risk glass‑box vs acceptable residual risk) and the degree of autonomy in clinical tools. These tensions suggest that future policy will need to balance rigorous verification and governance with pragmatic pathways for implementation and equity.

Moderate to high – while there is shared enthusiasm for AI, fundamental differences on safety standards and autonomy could impede coordinated action unless reconciled through joint frameworks.

Partial Agreements
Both agree that AI should improve health outcomes, but Zameer stresses verification and zero‑risk glass‑box models while Prokar focuses on scaling, equity and practical implementation of AI tools [22-31][46-60].
Speakers: Zameer Brey, Prokar Dasgupta
Need for verifiable, glass‑box AI to eliminate risk of error (Zameer Brey) Emphasis on implementation, equitable access, diverse data, tele‑surgery, and workforce training (Prokar Dasgupta)
Both call for mobilising investment for AI in health; Haitham emphasises donor coordination and strategic pooling [70], while Payden highlights building trust through governance, evidence and partnerships as the driver of sustainable investment [101-119].
Speakers: Haitham Ali Ahmed El‑Noush, Payden P.
Coordination among donors and development of strategic priorities and investments are essential (Haitham Ali Ahmed El‑Noush) AI and health has reached an inflection point; trust is the currency that unlocks sustainable, equitable investment (Payden P.)
Both stress that investment should go beyond pure research to include governance, regulation, evidence generation and capacity building to create trust and scale AI responsibly [108-119][101-119].
Speakers: Alain Labrique, Payden P.
Investment must extend beyond innovation to governance, regulation, evidence generation, workforce readiness, and long‑term partnerships (Alain Labrique) AI and health has reached an inflection point; trust is the currency that unlocks sustainable, equitable investment (Payden P.)
Takeaways
Key takeaways
AI in health must become verifiable and transparent (glass‑box) to eliminate risk of error and ensure safety (Zameer Brey). Implementation and equity are critical; diverse data, tele‑surgery, and workforce training are needed to reach underserved populations (Prokar Dasgupta). Investment must go beyond innovation to include governance, regulation, evidence generation, workforce readiness, data systems, and long‑term partnerships; trust is the currency that unlocks sustainable, equitable investment (Alain Labrique, Payden P.). Human‑centered AI is essential; AI should augment rather than replace clinicians, keeping humans in the loop (Ken Ichiro Natsume, Justice Prathiba M. Singh). Coordination among donors and strategic prioritisation of resources are required to rally support and avoid fragmented efforts (Haitham Ali Ahmed El‑Noush).
Resolutions and action items
Invitation to partners to collaborate on a pathway to verified (glass‑box) AI for healthcare decision‑making (Zameer Brey). Call for investment in capacity building: training health workers in AI, developing diverse data sets, and supporting implementation pilots (Prokar Dasgupta). Recommendation to establish governance frameworks, regulatory standards, and evidence‑generation mechanisms to build trust (Alain Labrique, Payden P.). Proposal for coordinated donor strategy and development of shared priorities to fund AI‑health initiatives (Haitham Ali Ahmed El‑Noush). Emphasis on maintaining human oversight while scaling AI solutions (Ken Ichiro Natsume).
Unresolved issues
How to achieve the near‑zero risk threshold required for clinical AI deployment. Specific mechanisms for shifting entrenched clinical workflows and achieving widespread adoption. Concrete metrics and timelines for measuring health‑outcome impact of AI interventions. Details of regulatory and legal frameworks needed to certify AI tools across different jurisdictions. Funding models and allocation of resources among competing AI‑health priorities.
Suggested compromises
Adopt a phased approach that keeps clinicians in the loop while gradually increasing AI autonomy, balancing safety with innovation (Ken Ichiro Natsume). Shift focus from purely technical performance (Turing test) to societal impact and safety (Weizenbaum test) as a way to align stakeholder expectations (Prokar Dasgupta).
Thought Provoking Comments
If I said to you all would you fly if the likelihood of the flight arriving safely was 95 %? … The bar in health care should be 0 % risk of failure, 0 % risk of error. We need AI that is verifiable – a ‘glass box’ where we can document the input, see the logic, and guarantee it never prescribes something catastrophic.
This analogy starkly illustrates the unacceptable risk tolerance in healthcare compared to other domains and introduces the concept of verified, transparent AI as a prerequisite for clinical use.
It shifted the discussion from product flow and investment levels to safety and accountability, prompting calls for partners to work on a pathway to verified AI and setting the tone for later emphasis on trust and governance.
Speaker: Zameer Brey
Responsible AI UK has funded ambient AI to write notes, tele‑surgery 2.0 that lets a surgeon operate 2,500 km away with <60 ms delay, and automation of prostate procedures. But without diverse data, education in AI for clinicians, and patient involvement, these investments will fail.
He broadens the conversation from technical possibilities to real‑world implementation challenges, highlighting equity, data diversity, and the critical need for workforce training.
His examples introduced concrete use‑cases and underscored systemic barriers, steering the panel toward discussions of equity, capacity‑building, and the importance of integrating AI into curricula – themes echoed later by other speakers.
Speaker: Prokar Dasgupta
A benchmark might be the wrong thing, not accuracy but actually impact. As long as we have humans in the loop, behavior change is possible.
He reframes success metrics from technical accuracy to measurable health impact and stresses the necessity of human oversight, challenging a purely algorithm‑centric view.
This comment redirected the panel’s focus toward outcome‑oriented evaluation and reinforced the human‑centered perspective that several participants later reiterated.
Speaker: Alain Labrique
AI and health has reached an inflection point. The question is no longer whether AI can improve health, but whether we will invest in the right foundations – governance, evidence generation, workforce readiness, data systems, and long‑term partnerships – because trust is the currency that unlocks sustainable investment.
Provides a concise synthesis of the discussion, highlighting the transition from possibility to implementation, and identifies trust and systemic investment as the decisive factors for equitable impact.
Served as a concluding anchor that tied together earlier points about verification, equity, and human‑in‑the‑loop, and called for coordinated action, influencing the final tone of the session.
Speaker: Payden P.
We should move from the Turing test to the Weizenbaum test – think about the societal effects of AI machines, not just whether they can perform a task.
Introduces a novel evaluative framework that shifts assessment from technical capability to societal impact, prompting deeper reflection on ethical and social dimensions.
Prompted other panelists to consider broader accountability measures and reinforced the earlier call for verified, transparent AI, adding complexity to the conversation about evaluation standards.
Speaker: Prokar Dasgupta
We can leverage artificial intelligence with the human being at the centre of those utilizations.
Echoes the human‑centred AI theme in a succinct way, reinforcing the panel’s consensus that technology must augment rather than replace clinicians.
Strengthened the emerging consensus around human‑in‑the‑loop approaches, supporting the shift from pure technical discussion to patient‑focused implementation.
Speaker: Ken Ichiro Natsume
Overall Assessment

The discussion evolved from a surface‑level description of AI product flows to a deep, multidimensional debate about safety, verification, equity, and systemic investment. Zameer Brey’s risk‑analogy and call for verified ‘glass‑box’ AI acted as a catalyst, steering the conversation toward accountability. Prokar Dasgupta expanded the scope with concrete global examples and highlighted the necessity of data diversity, education, and patient involvement, prompting a shift toward equity and implementation challenges. Alain Labrique’s reframing of success metrics to real‑world impact and Ken Natsume’s human‑centred reminder reinforced the need for human oversight. Payden P.’s closing synthesis crystallized these threads, positioning trust and foundational investment as the decisive factors for AI’s health impact. Collectively, these pivotal comments redirected the panel from abstract product talk to a concrete agenda focused on verified, equitable, and trustworthy AI deployment.

Follow-up Questions
To what extent does AI assistance improve actual health outcomes (e.g., TB diagnosis, diabetes adherence) compared to current practice?
Understanding real‑world impact is essential before scaling AI tools in healthcare.
Speaker: Zameer Brey
How can AI systems be made verifiable and transparent (shifting from a black‑box to a glass‑box model) with safeguards to prevent errors such as allergic reactions or catastrophic events?
Verification and explainability are critical for safety, regulatory approval, and clinician trust.
Speaker: Zameer Brey
What level and type of investment are required in clinical research, evaluation, and evidence generation to shift entrenched clinical practice pathways?
Evidence‑based investment is needed to overcome resistance and embed AI into routine care.
Speaker: Zameer Brey
How can we ensure AI models are trained on diversified, representative data to avoid bias and achieve equitable performance across populations?
Data diversity is a prerequisite for equitable AI outcomes and for avoiding systemic disparities.
Speaker: Prokar Dasgupta
What are the safety, efficacy, and scalability considerations for tele‑surgery 2.0 as a solution for patients lacking access to equitable surgery?
Tele‑surgery could address a massive unmet need, but requires rigorous research on technical feasibility and patient outcomes.
Speaker: Prokar Dasgupta
What research is needed on high‑autonomy robotic systems (e.g., fully autonomous gallbladder removal) regarding clinical effectiveness, patient acceptance, and ethical implications?
Fully autonomous surgery raises questions about accuracy, safety, and public trust that must be studied before deployment.
Speaker: Prokar Dasgupta
How should AI be incorporated into medical and nursing curricula, and what skill gaps exist among current healthcare workers?
Workforce readiness is a key enabler; identifying and filling educational gaps will support sustainable AI adoption.
Speaker: Prokar Dasgupta
What legal and regulatory frameworks are required to create a verifiable chain of proof for AI‑driven clinical decisions?
A documented decision trail is necessary for legislation, accountability, and clinician confidence.
Speaker: Zameer Brey
How can donors, governments, and other stakeholders coordinate to develop strategies, priorities, and investments that rally behind AI for health?
Coordinated funding mechanisms are needed to align resources with the most impactful AI initiatives.
Speaker: Haitham Ali Ahmed El‑Noush
What governance, regulatory, and evidence‑generation structures are needed to make AI safe, trusted, and scalable in health systems?
Identifying enabling conditions will determine whether AI becomes a tool for equity or a source of new inequalities.
Speaker: Payden P.
How does strengthening regulatory and legal frameworks influence investor confidence and the flow of sustainable investment into AI for health?
Trust built through regulation is seen as the currency that unlocks long‑term financing for AI initiatives.
Speaker: Payden P.
What models of long‑term, cross‑sector partnerships are most effective for scaling AI impact in health, and how can they be evaluated?
Partnerships are highlighted as essential for scaling; research is needed to understand best practices and outcomes.
Speaker: Payden P.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal


Session at a glance
Summary, keypoints, and speakers overview

Summary

The keynote address delivered by a senior Indian Army officer highlighted the rapid transformation of military decision-making through artificial intelligence (AI) ([5]). He contrasted his early career, when battlefield information was gathered on paper maps and relayed slowly by notes and telephone, with the present where digital walls display fused sensor data in real time ([6-8]). Over the past two decades, the pace of intelligence has accelerated, with AI instantly analysing multiple feeds and presenting a dynamic picture that compresses decision cycles to seconds ([9]). He illustrated this shift with a high-tempo operation in which an AI system recommended an immediate strike, but the commander paused to ask what the machine did not know ([10-13]). The pause revealed a civilian evacuation not yet captured by the sensors, preventing a mistaken attack and saving lives ([19-23]). He used the episode to assert that AI can advise and accelerate decisions, yet only humans can exercise judgment and bear responsibility ([25]). Emphasising national policy, he noted that recent statements by the Prime Minister and other leaders call for mandatory guardrails and safety measures for AI, especially in the armed forces ([26-28]). The Indian Armed Forces view AI as a force multiplier across intelligence fusion, surveillance, logistics and other domains, and have declared the current year as the “year of networking and data-centricity” ([30-33]). Indigenous platforms such as ACOM AI-as-a-Service, Sama Drishti, Shakti and Akash Teer have been developed in partnership with industry and startups to support this transformation ([35-38]). He outlined four governance principles: critical decisions must remain human-controlled and legally accountable; AI systems are effectively weapons and must be tested in contested conditions; transparency requires a “glass box” of data provenance; and commanders need dedicated training on AI-enabled battlefields ([41-56]). 
He called for international governance frameworks, citing ongoing UN discussions on meaningful human control and the need for legal provisions governing autonomous weapons ([57-60]). Positioning India as both a major military power and an emerging AI hub, he argued that the nation has the capacity and credibility to lead the development of ethical AI guidelines, echoing the Prime Minister’s “Manav Vision for AI” ([61-63]). The address concluded that responsible AI integration will reshape warfare while preserving human judgment and ethical restraint, underscoring its strategic significance for national security ([25][41-56]).


Keypoints


Rapid transformation of military decision-making through AI – The speaker contrasts the early days of paper maps and slow information flow with today’s “massive digital display” that fuses sensor data and AI in real time, compressing decision windows to seconds [6-9][9-12].


Human judgment remains essential despite AI recommendations – A senior commander pauses a machine-generated strike recommendation, asks “What does the machine not know?” discovers an ongoing civilian evacuation, and averts civilian casualties, illustrating that AI can advise but only humans can exercise moral judgment and bear responsibility [13-24][25].


Mandate for responsible AI development, testing, and accountability – The speaker stresses that AI systems in the armed forces must be treated as weapons, subject to rigorous field testing, legal and moral accountability, transparency (“glass box” data), and continuous training of commanders [26-44][45-55][56-60].


India’s strategic push for indigenous, data-centric AI and global leadership in AI governance – The Indian Armed Forces are adopting AI-enabled platforms (e.g., ACOM AI, Sama Drishti, Shakti, Akash Teer), collaborating with industry and startups, and aligning with national AI governance guidelines to shape international norms on autonomous weapons [31-38][39-43][55-57][61-63].


Overall purpose/goal


The address aims to showcase how the Indian Army is integrating AI to enhance battlefield effectiveness while underscoring the non-negotiable need for human control, ethical safeguards, and robust governance. It also positions India as a proactive leader in developing responsible AI frameworks for both national security and global policy.


Overall tone


The speaker begins with a formal, proud tone reflecting on past experiences and technological progress. The narrative then shifts to a cautionary, reflective tone when discussing the limits of AI and the necessity of human judgment. This is followed by a constructive, collaborative tone emphasizing partnerships and responsible development, and concludes with an aspirational, confident tone about India’s capacity to lead international AI governance. The tone evolves from retrospective admiration to prudent warning, then to proactive optimism.


Speakers

Speaker 1


– Role/Title: Keynote speaker representing the Indian Army and Indian Armed Forces (senior military officer)


– Area of expertise: Military applications of AI, defence strategy, AI governance


Additional speakers:


Full session report
Comprehensive analysis and detailed insights

The speaker opened with a formal greeting to a diverse audience of industry leaders, academics, AI innovators, uniformed colleagues and students, delivering the keynote on behalf of the Indian Army and the broader Indian Armed Forces [4-5].


He recalled his first war-game as a young lieutenant thirty-five years ago, when battlefield information was limited to large paper maps, hand-written notes and slow telephone reports that required manual colour-coding before a commander could deliberate a decision [6-12].


Contrasting that era, he described today’s “Star-Wars” operation rooms, where massive digital displays ingest continuous sensor streams, fuse the data instantly and hand it to AI for rapid analysis, producing a living, dynamic picture of the battle space. This transformation has compressed the OODA (Observe-Orient-Decide-Act) cycle to a matter of seconds, leaving little room for hesitation [13-22].


To illustrate the implications of such speed, he narrated a high-tempo scenario: an AI system generated a high-confidence recommendation to strike a target within a narrow decision window. The senior commander paused and asked, “What does the machine not know?” [13-17]. The pause revealed that a civilian evacuation had just begun and was not yet reflected in the sensor data, meaning the algorithm was mis-identifying civilians as enemy troops [18-22]. By exercising judgement and delaying the strike, the commander spared innocent lives while still achieving the mission objective [23-24]. This episode underscored his central thesis that AI can inform, accelerate and recommend decisions, but only humans can exercise moral judgement and bear responsibility [25].


He then outlined four governance principles for AI-enabled systems. First, decisions that must never be delegated to AI should remain under human control, with legal and moral accountability institutionalised [41-44]. Second, AI-enabled systems, being designed to cause harm, must be treated as weapons and rigorously tested in contested battlefield conditions rather than controlled labs [46-51]. Third, transparency is essential: commanders must know the data sources and training processes behind AI outputs, converting the “black box” into a “glass box” [52-55]. Fourth, continuous training of commanders and staff is required so they can integrate algorithms, command AI-enabled systems and retain decisive human judgement [56]. These principles collectively reinforce the view that AI can augment but not replace human agency [25][41-56].


The speaker also highlighted the recent launch of the India AI Governance Guidelines and the daily declaration made at the summit, calling them a “path-breaking step” that recognises generative AI systems can produce unintended consequences and that these lessons must inform military planning. He stressed that AI safety and governance are now integral to national policy, not merely a defence-only issue [45-48].


Linking operational insight to national statements, he noted that the Prime Minister and other senior leaders have called for mandatory guardrails and safety measures for AI-enabled models, especially in the armed forces where the stakes are exceptionally high [26-28]. The Indian Armed Forces operate in a uniquely complex security environment that spans contested borders, multiple domains, dense populations and high-intensity escalation [29-30].


He described AI as a force multiplier across intelligence fusion, surveillance, decision support, maintenance and logistics, and announced that this year has been declared the “year of networking and data-centricity” to accelerate the transition to data-driven operations [31-34]. Indigenous platforms such as ACOM AI-as-a-Service, the battlefield situational-awareness software Sama Drishti, and the sensor-shooter fusion systems Shakti and Akash Teer have been developed through collaboration with industry, leaders and startups, with openness to further partnerships for a self-reliant transformation [35-40].


He noted that the UN Secretary-General also addressed AI-related initiatives at the summit, underscoring the global relevance of meaningful human control and accountability in autonomous weapons discussions [58-60].


Finally, he argued that India, as a major military power, a growing AI hub and a civilisation rooted in ethical restraint, embodied in the concepts of Shakti (force) and Dharma (rightness), has both the capacity and credibility to lead the formulation of global AI governance frameworks, echoing the Prime Minister’s “Manav Vision for AI” announced at the summit [61-63]. In closing, he emphasized that while AI reshapes military decision-making into a rapid, data-rich process, the preservation of human judgement, robust legal safeguards, transparency, rigorous testing and dedicated training are non-negotiable pillars for ethical responsibility and strategic stability [25][41-56][57-60].


Session transcript
Complete transcript of the session
Speaker 1

Firstly, let me just say this: I know I’m the last speaker of a long day, so I’ll do this quickly and come to the essentials. Distinguished guests, leaders of industry and academia, AI innovators, my colleagues in uniform, who are also innovators, students, ladies and gentlemen, a very good evening to you all. It’s a privilege to be delivering this keynote address representing the Indian Army and the Indian Armed Forces. You know, 35 years ago, when I joined the Army as a young lieutenant, my first war game unfolded in a room dominated by large paper maps. Information arrived slowly: handed-in notes, verbal updates, reports from the field taken on the telephone. We pieced the picture together, physically marked it on the map using color-coded pins and flags, and presented it to the commander, who then took a decision deliberately and with reflection, fully aware that the adversary was operating within similar timelines.

Twenty years later, the rhythm began to change. Intelligence became sharper and faster. Operation rooms had a few screens displaying maps; presentations moved to PowerPoint. The volume of information increased and timelines got compressed, but there was still space to pause and breathe, and the OODA cycle could still breathe. Today, when I walk into an operations room, the difference is stark. It’s like Star Wars coming to life. A massive digital display dominates the wall, inputs stream in continuously from multiple sensors, and intelligence is fused almost instantly and analysed by AI, presenting a living, dynamic picture of the battle space. Some of the work we did as left-handers is now automated, and the commander knows that the adversary is seeing much the same picture about us at much the same speed. The pressure is no longer about awareness; it is about decision. Seconds matter. Hesitation has consequences. It is in this environment of speed, uncertainty and time compression that I want to transport you to an operational-stage scenario. During a high-tempo military operation, a senior commander was presented with a machine-generated recommendation, based on multiple sensor feeds and AI analysis, to engage a target immediately.

The system was confident. The probability score of the machine was high. The decision window was measured in seconds. But the commander paused. Not because he didn’t trust the technology. His experience told him that something was amiss. He asked a simple question. What does the machine not know? The pause revealed something the algorithm could not see. A civilian evacuation had just begun minutes earlier, not yet reflected in the data. The machine saw the movement as that of enemy troops, whereas they were civilians. It is even possible that troops were mixed with the civilians. However, the commander exercised judgment and restraint. The strike was delayed, innocent lives were spared, and the mission was still achieved. This moment captures a fundamental truth.

AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them. Yesterday our Honorable Prime Minister and many other eminent speakers spoke of the need for guardrails and safety to be built into AI-enabled models. In the case of the military, these are not just essential but mandatory, as the stakes are much higher. The Indian Armed Forces operate in a uniquely complex security environment, across contested borders, multiple domains, dense populations and high escalation intensity. Therefore, ladies and gentlemen, let me clearly state that we in the Defence Forces are fully cognizant that artificial intelligence is fundamentally redefining the modern battle space. Its power in intelligence fusion, surveillance, decision support, maintenance, logistics and a host of other functions is a force multiplier in today’s multi-domain battle space.

In keeping with the vision of technological transformation, the Indian Armed Forces are committed to ensuring that the military is fully equipped with the necessary capabilities. The Chief of Army Staff has formally declared this year as the year of networking and data-centricity, signaling a deliberate shift towards data-driven operations and AI-enabled capabilities. The evolution is powered by many indigenously built applications: ACOM AI-as-a-Service; Sama Drishti, which is a battlefield situational-awareness software; and Shakti and Akash Teer, which are sensor-shooter fusion systems.

All of these have been built through our collaboration with industry leaders and startups, many of them innovators who have been around at this summit for the last few days. For this self-reliant transformation, we are open to collaboration with many startups and innovators to build it further. However, we are fully cognizant that this needs to be a responsible development of AI. Allow me to reflect on four points in this regard. Firstly, decisions that must not be delegated to AI must always remain human. Human control has to be institutionalized into law and moral accountability. Accountability cannot be with the machine. If a machine recommends a decision with 90% accuracy and the commander goes with it and it is a wrong decision, it gives the commander a moral buffer.

But is that correct? Secondly, AI-enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as software, and they must be evaluated and tested in contested field conditions. Remember that the battlefield is a chaotic data environment: sensors get obscured by dust, smoke, deception and many other things. A system that performs well in controlled conditions but fails in battlefield conditions is not a force multiplier; it’s a liability. Thirdly, trust and sovereignty must be built into the system. A commander taking a decision based on an AI-enabled system must know what data is being used and how the system has been trained. The black box of data must become a glass box.

And fourthly, commanders and staff of today need to be trained for this fast-evolving battlefield. As I told you in the operational scenario, from how it was 30 years ago to how it is today, in a war game we need to be able to integrate algorithms, be able to command systems, and know how to go forward. The Indian Army is taking steps in training our commanders and staff in this direction. The next thing that I’d like to say is that, in sum, the nature of war may change, but our conscience must not. It is important to recognize that these concerns about AI safety and governance are not confined to the military domain alone; they are increasingly shaping national policy. The launch of the India AI Governance Guidelines and the Delhi Declaration during the summit, which just happened over these days, is a path-breaking step in this direction. This framework recognises AI systems as being generative and therefore having unintended consequences, and this has lessons for us as military planners. At this stage I would also like to remind ourselves of a historical truth. I do believe in the wisdom of humanity: whenever faced with a new crisis, we have found ways to face it. The rules governing the use of NBC weapons, the Geneva Convention on Treatment of Prisoners of War, the Convention on Use of Landmines and other such frameworks have stood the test of time and, with few exceptions, have been followed during conflicts also.

In a similar manner, a set of governance frameworks and legal provisions needs to be evolved for the use of AI-based systems and autonomous weapons. Already, under the framework of the United Nations, discussions are underway around meaningful human control and accountability. His Excellency the UN Secretary-General also talked about various such initiatives just yesterday. While consensus remains complex, the debate itself reflects a shared concern that autonomy without restraint would undermine strategic stability. India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint, understanding that Shakti, that is force, and Dharma, that is rightness, must go hand in hand, has both the capacity and the credibility to lead this conversation.

The clear and all -encompassing Manav Vision for AI, enunciated by the Honorable Prime Minister in this hall yesterday, emphasizing moral and ethical systems as well as

Related Resources
Knowledge base sources related to the discussion topics (8)
Factual Notes
Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The speaker delivered the keynote on behalf of the Indian Army and the broader Indian Armed Forces”

The knowledge base identifies Lt Gen Vipul Shinghal as a senior Indian Army officer representing the Indian Armed Forces as a keynote speaker [S10].

Confirmed (high)

“He recalled his first war‑game as a young lieutenant thirty‑five years ago”

The source notes that Shinghal has 35 years of military service, starting as a young lieutenant, matching the timeframe mentioned in the report [S10].

Confirmed (high)

“The senior commander paused and asked, “What does the machine not know?” during a high‑confidence AI recommendation”

The knowledge base records the same moment: the system was confident, the decision window was seconds, and the commander paused to ask exactly that question [S21].

Confirmed (high)

“AI can inform, accelerate and recommend decisions, but only humans can exercise moral judgement and bear responsibility”

The source explicitly states that AI can inform, accelerate and recommend decisions, underscoring the need for human moral judgement [S10].

Additional Context (medium)

“The Indian military’s AI transformation involves collaboration with industry leaders and startups”

Additional detail: the transformation includes indigenously developed platforms such as ACOM AI, Sama Drishti, Shakti and Akash Teer, built through partnerships with industry and startups [S14] and [S64].

External Sources (64)
S1
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S5
Using AI to tackle our planet’s most urgent problems — 1. **The Earth Layer**: Changes occurring over decades, representing fundamental geographical shifts 2. **The Infrastru…
S7
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S8
Challenging the status quo of AI security — Babak Hodjat: Thank you very much, Sounil. Yeah, we came out here for two reasons, as cognizant, one, to get people invo…
S9
Responsible AI for Children Safe Playful and Empowering Learning — TV broadcast: curious how it works and I think that a lot of kids are. I would love to learn how it can be used in every…
S10
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — The centerpiece of Shinghal’s argument is an operational scenario illustrating the irreplaceable value of human judgemen…
S11
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biologi…
S12
The Power of Satellites in Emergency Alerting and Protecting Lives — ## Introduction and Context Alexandre Vallet: Thank you very much Dr. Zavazava. Thank you very much both of you for thi…
S13
Opening Ceremony — This comment introduced a spiritual and philosophical dimension to the technical and policy discussions, emphasizing hum…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Shinghal argues that human control must be institutionalized in law and moral accountability cannot be delegated to mach…
S15
9821st meeting — For Mozambique, it is essential that the international community establishes norms and standards that promote trust and …
S16
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — I am very pleased. I believe that our summit will play an important role in the creation of a human -centric, sensitive,…
S17
WS #184 AI in Warfare – Role of AI in upholding International Law — Accountability and Human Control Anoosha Shaigan: So thank you everyone for organizing this and thank you for having m…
S18
Skilling and Education in AI — Speakers:Speaker 1, Moderator Speakers:Speaker 1, Rakesh Kaul, Speaker 3 Speakers:Speaker 1, Speaker 2 Speakers:Speak…
S19
Advancing Scientific AI with Safety Ethics and Responsibility — – Speaker 1- Speaker 2 – Speaker 1- Speaker 2- Audience Member 3 – Speaker 1- Speaker 3 Both speakers agree that eval…
S20
Open Forum #27 Make Your AI Greener a Workshop on Sustainable AI Solutions — Marco Zennaro: Sure, sure. Definitely. Thank you very much. So let me introduce TinyML first. So TinyML is about running…
S21
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S22
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — Bocar Ba: Thank you. Thank you, Mohamed. And good morning, colleagues. It’s a very complex question. And it’s important …
S23
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S24
Enhancing rather than replacing humanity with AI — Individuals remain accountable for the outcomes of their decisions. People’s judgment remains crucial, particularly for…
S25
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — Jimena Viveros: Hello. I hope you can all hear me. Perfect. Well, first of all, I would like to thank our Austrian and…
S26
WS #123 Responsible AI in Security Governance Risks and Innovation — Alexi Drew: Thank you, I’ll run through these nice and quickly in the interest of giving people their time. I’d like to …
S27
The Global Power Shift India’s Rise in AI & Semiconductors — Raised by:Vivek Kumar Singh This relates to developing a clear framework for strategic autonomy while maintaining benef…
S28
WS #110 AI Innovation Responsible Development Ethical Imperatives — Ricardo Israel Robles Pelayo: Thank you very much. Good afternoon, everyone. It is an honor to be here and share a refle…
S29
Why science metters in global AI governance — Thank you very much. There is a computer here. I don’t know to whom it belongs. Excellencies, ladies and gentlemen. Than…
S30
AI in Action: When technology serves humanity — Across these domains (conservation, disaster response, language preservation, small business, and agriculture), technolo…
S31
Enhancing rather than replacing humanity with AI — Individuals remain accountable for the outcomes of their decisions. People’s judgment remains crucial, particularly for…
S32
Adoption of the agenda and organization of work — Germany has taken a definitive and positive stance on the integration of human rights and safeguard measures within the …
S33
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — These key comments transformed what could have been a superficial policy discussion into a multi-dimensional analysis sp…
S34
Securing Access to the Internet and Protecting Core Internet Resources in Contexts of Conflict and Crises — There are gaps in understanding how these frameworks interrelate, with different proportionality assessments between hum…
S35
Opening of the session — Egypt’s detailed perspective exposes the intricate balance between advancing human rights and harmonising these principl…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S37
How the Global South Is Accelerating AI Adoption_ Finance Sector Insights — See, under the remit of the mandate given to the Reserve Bank of India, under the Reserve Bank of India Act or the Banki…
S38
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Evidence:Commanders taking decisions based on AI-enabled systems must know what data is being used and how the system ha…
S39
Operationalizing data free flow with trust | IGF 2023 WS #197 — To address these fears, interoperable multilateral frameworks, such as the OECD process and data access agreements, are …
S40
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Evidence:Responsibility is not anymore a compliance check which is supposed to be there, it’s a commitment of the techno…
S41
Powering AI Global Leaders Session AI Impact Summit India — Lehane positions India as having unique advantages for leading global AI democratization efforts, combining its status a…
S42
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S43
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — Explanation:Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes buildi…
S44
AI/Gen AI for the Global Goals — Boa-Gue mentions the African Startup Policy Framework as an example of an initiative to enable member states to develop …
S45
Driving Indias AI Future Growth Innovation and Impact — But there was also a lot of fear around AI about trust factors, about privacy, data, sovereignty, multiple issues about …
S46
Agentic AI in Focus Opportunities Risks and Governance — “If the data can be manipulated, if the lineage of data is not properly understood, if it is not really governed, if the…
S47
Why science metters in global AI governance — And also mentioned here. So this is where we are suggesting that this could be one way to look at. It’s not that everyth…
S48
AI governance in India: A call for guardrails, not strict regulations — The TRAI’srecent call to regulateAI comes at a time when policymakers must address rapidly evolving technological innova…
S49
Policymaker’s Guide to International AI Safety Coordination — Translating scientific knowledge into effective policy requires extensive testing, simulations, and understanding of rea…
S50
AI and international peace and security: Key issues and relevance for Geneva — Title:Background on LAWS in the CCWDescription:The UNODA provides historical context on the Convention on Certain Conven…
S51
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — Accountability in autonomous weapons systems requires knowing whose intent was involved, what orders were given, what co…
S52
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — Human control and accountability Whelan argues for the importance of maintaining meaningful human control over the use …
S53
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Shinghal begins with a historical perspective, contrasting his military experience from 35 years ago with today’s techno…
S54
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Shinghal begins with a historical perspective, contrasting his military experience from 35 years ago with today’s techno…
S55
Comprehensive Report: 18th Meeting of the Disarmament and International Security Committee — Madam Chair, artificial intelligence is reshaping the way we process knowledge and information, and it is rapidly transf…
S56
Enhancing rather than replacing humanity with AI — Individuals remain accountable for the outcomes of their decisions. People’s judgment remains crucial, particularly for…
S57
WS #184 AI in Warfare – Role of AI in upholding International Law — A significant point of agreement among the speakers was the necessity of maintaining human control and accountability in…
S58
WS #123 Responsible AI in Security Governance Risks and Innovation — Alexi Drew: Thank you, I’ll run through these nice and quickly in the interest of giving people their time. I’d like to …
S59
The Global Power Shift India’s Rise in AI & Semiconductors — Raised by:Vivek Kumar Singh This relates to developing a clear framework for strategic autonomy while maintaining benef…
S60
The Global Power Shift India’s Rise in AI & Semiconductors — Adopt strategic autonomy approach – maintain sovereignty in critical areas while collaborating globally in non-sensitive…
S61
The Innovation Beneath AI: The US-India Partnership powering the AI Era — Evidence:He notes that data centers are essentially giant boxes providing power and cooling that can adapt to different …
S62
https://dig.watch/event/india-ai-impact-summit-2026/building-indias-digital-and-industrial-future-with-ai — What it is enabling is every transaction you do, there is a OTP or SMS which is coming out, right? So this OTP and this …
S63
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S64
https://app.faicon.ai/ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
13 arguments · 177 words per minute · 1445 words · 489 seconds
Argument 1
Historical shift from paper maps to AI‑fused digital battle‑space (Speaker 1)
EXPLANATION
The speaker describes how military intelligence gathering moved from slow, manual processes using paper maps and verbal updates to a modern, AI‑driven digital environment. This transition illustrates the accelerating pace and sophistication of information processing in defence.
EVIDENCE
He recounts his early experience in the army where war-games relied on large paper maps, color-coded pins and handwritten notes, and information arrived slowly via telephone ([6][8]). He then contrasts this with the situation twenty years later, when operation rooms featured a few screens and PowerPoint presentations, and with today’s operation rooms, where AI-fused real-time data streams create a living picture of the battlespace ([9]).
MAJOR DISCUSSION POINT
Evolution of AI in military operations
Argument 2
Current “star‑wars” operation rooms with real‑time sensor streams and AI analysis (Speaker 1)
EXPLANATION
Modern command centres are depicted as high‑tech environments dominated by massive digital displays that ingest continuous sensor feeds and apply AI for instant fusion and analysis. This creates a dynamic, near‑instantaneous view of the battlefield, dramatically compressing decision timelines.
EVIDENCE
The speaker describes a massive digital display wall receiving continuous streams from multiple sensors, with AI instantly fusing and analysing the data to present a living, dynamic picture of the battlespace ([9]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The operational scenario described by Lt Gen Vipul Shinghal highlights massive digital display walls ingesting continuous sensor feeds and AI-driven fusion, matching the ‘star-wars’ command centre description [S10].
MAJOR DISCUSSION POINT
Modern AI‑enabled command infrastructure
Argument 3
Commander’s pause to ask “What does the machine not know?” saved civilian lives (Speaker 1)
EXPLANATION
During a high‑tempo operation, a commander halted a machine‑generated strike recommendation to question the system’s blind spots. By identifying an ongoing civilian evacuation not yet reflected in the data, the commander prevented potential civilian casualties while still achieving the mission.
EVIDENCE
The narrative recounts that the commander paused despite a high-confidence AI recommendation, asked what the machine did not know, discovered a civilian evacuation that the algorithm had mis-identified as enemy movement, and consequently delayed the strike, sparing innocent lives ([13][23]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shinghal recounts a commander halting a high-confidence AI strike recommendation, questioning the system’s blind spots and averting civilian casualties [S10].
MAJOR DISCUSSION POINT
Human judgment averting AI error
Argument 4
AI can recommend, but only humans can exercise judgment and bear moral accountability (Speaker 1)
EXPLANATION
The speaker emphasizes that while AI can accelerate decision‑making and provide recommendations, ultimate moral responsibility and judgment remain the domain of humans. This underscores the need for human oversight in lethal contexts.
EVIDENCE
He states that AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them ([25]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Human moral responsibility and the limits of AI recommendations are emphasized in Shinghal’s remarks on accountability and in the discussion on meaningful human control [S10][S14][S17].
MAJOR DISCUSSION POINT
Human responsibility versus AI recommendation
Argument 5
Certain decisions must never be delegated to AI; human control must be codified in law (Speaker 1)
EXPLANATION
The speaker argues that some critical decisions, especially those involving lethal force, must remain under human authority and be enshrined in legal frameworks. Delegating such decisions to machines would undermine moral accountability.
EVIDENCE
He outlines that decisions which AI must not be delegated to should always remain human, that human control must be institutionalised into law, and that accountability cannot reside with the machine ([41][44]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker’s call for certain decisions to remain human and be enshrined in law is directly echoed in Shinghal’s statements that ‘decisions that AI must not be delegated to must always remain human’ and that ‘human control has to be institutionalized into law’ [S10][S14].
MAJOR DISCUSSION POINT
Legal codification of human control
Argument 6
AI‑enabled weapons are weapons, not mere software; they require rigorous contested‑field testing (Speaker 1)
EXPLANATION
The speaker stresses that AI systems designed for combat are weapons and must be evaluated under realistic battlefield conditions. Testing only in controlled environments risks creating liabilities rather than force multipliers.
EVIDENCE
He notes that AI-enabled systems are designed to cause harm and therefore must be treated as weapons, evaluated and tested in contested field conditions, and that performance in chaotic battlefield environments is essential ([46][51]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled combat systems are described as weapons that must be tested in contested field conditions, a point made by Shinghal and reinforced by the safety-first evaluation emphasis in the scientific AI discussion [S10][S14][S19].
MAJOR DISCUSSION POINT
Weapon‑grade testing of AI systems
Argument 7
Transparency: data and training sets must be a “glass box,” not a black box (Speaker 1)
EXPLANATION
The speaker calls for openness about the data and algorithms that power AI systems, insisting that commanders need to understand the provenance and training of models. Converting the “black box” into a “glass box” enhances trust and accountability.
EVIDENCE
He argues that commanders must know what data is used and how models are trained, urging that the black box of data become a glass box ([52][55]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The demand for a ‘glass box’ of data and model provenance mirrors Shinghal’s call that ‘the black box of data must become a glass box’ for commanders [S10].
MAJOR DISCUSSION POINT
Transparency in AI systems
Argument 8
Continuous training of commanders and staff on AI‑augmented warfare (Speaker 1)
EXPLANATION
The speaker highlights the necessity of educating military personnel to integrate algorithms, command AI‑enabled systems, and make informed decisions. Ongoing training ensures that the force can effectively leverage AI while retaining control.
EVIDENCE
He mentions that commanders and staff need to be trained on the fast-evolving battlefield, integrating algorithms and knowing how to proceed, and that the Indian Army is taking steps in this direction ([55][56]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for ongoing AI-augmented warfare training aligns with the ‘Skilling and Education in AI’ session that stresses continuous commander and staff education on AI tools [S18].
MAJOR DISCUSSION POINT
Capacity development for AI‑enabled operations
Argument 9
Deployment of home‑grown applications (ACOM AI‑as‑a‑Service, Sama Drishti, Shakti, Akash Teer) (Speaker 1)
EXPLANATION
The speaker lists several indigenous AI solutions that have been developed for battlefield situational awareness, sensor‑shooter fusion, and other defence functions. These showcase India’s self‑reliant technological capability.
EVIDENCE
He enumerates indigenously built applications such as ACOM AI-as-a-Service, Sama Drishti, Shakti and Akash Teer, all created through collaboration with industry leaders and startups ([35][36]).
MAJOR DISCUSSION POINT
Indigenous AI capabilities
Argument 10
Open invitation to startups and innovators for self‑reliant transformation (Speaker 1)
EXPLANATION
The speaker extends a call to the private sector, encouraging startups and innovators to partner with the armed forces to further develop AI solutions. This reflects a collaborative approach to building a self‑sufficient defence ecosystem.
EVIDENCE
He states that for self-reliant transformation the armed forces are open to collaboration with many startups and innovators to build further capabilities, while emphasizing responsible AI development ([38][39]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Shinghal explicitly invites collaboration with startups and innovators for self-reliant transformation, matching the speaker’s invitation [S10][S21].
MAJOR DISCUSSION POINT
Collaboration with industry
Argument 11
Need for AI governance guidelines, referencing India’s AI Governance Framework (Speaker 1)
EXPLANATION
The speaker points to the recent Indian AI Governance Framework as a necessary set of guardrails for safe AI deployment, especially in defence. Such guidelines aim to embed safety, ethics, and accountability into AI models.
EVIDENCE
He notes that the Prime Minister and other speakers called for guardrails and safety to be built into AI-enabled models, and later references India’s AI Governance Guidelines as a path-breaking step ([26], [57]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Reference to guardrails and India’s AI Governance Framework is found in Shinghal’s remarks about embedding safety and ethics into AI-enabled models [S10].
MAJOR DISCUSSION POINT
National AI governance
Argument 12
Call for global conventions on autonomous weapons, meaningful human control, and accountability (Speaker 1)
EXPLANATION
The speaker urges the international community to develop legal frameworks that ensure meaningful human control over autonomous weapons and hold actors accountable. He cites ongoing UN discussions as evidence of growing global concern.
EVIDENCE
He mentions that, under the United Nations framework, discussions are underway on meaningful human control and accountability, with the UN Secretary-General also addressing these initiatives, and that while consensus is complex, the debate reflects shared concern about autonomy without restraint ([58][60]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for international conventions on autonomous weapons and meaningful human control are supported by the Mozambique-focused norms discussion and the WS-184 session on accountability in AI warfare [S15][S17].
MAJOR DISCUSSION POINT
International norms for autonomous weapons
Argument 13
India’s potential role as a leader in ethical AI, aligning “Shakti” (force) with “Dharma” (rightness) (Speaker 1)
EXPLANATION
The speaker positions India as a major military power and AI hub capable of championing ethical AI principles, linking national values of strength and righteousness. He suggests India can lead global conversations on responsible AI use.
EVIDENCE
He asserts that India, as a major military power and growing AI hub rooted in ethical restraint, has the capacity and credibility to lead this conversation, referencing the concepts of Shakti and Dharma and the Manav Vision for AI articulated by the Prime Minister ([61][63]).
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The spiritual-philosophical framing of ‘Shakti’ and ‘Dharma’ resonates with the opening ceremony remarks that link human dignity, moral responsibility, and ethical AI deployment [S13].
MAJOR DISCUSSION POINT
India’s leadership in ethical AI
Agreements
Agreement Points
AI can inform and accelerate decisions, but only humans can exercise judgment and bear moral responsibility
Speakers: Speaker 1
AI can recommend, but only humans can exercise judgment and bear moral accountability (Speaker 1)
The speaker stresses that while AI provides recommendations, ultimate moral accountability rests with human commanders, as illustrated by the operational scenario where the commander paused and asked “What does the machine not know?”, thereby saving civilian lives [13-23][25].
POLICY CONTEXT (KNOWLEDGE BASE)
This reflects the view expressed in AI-for-humanity discussions that AI serves as a tool while humans retain agency and accountability [S30][S31][S28].
Critical decisions, especially lethal ones, must never be delegated to AI; human control should be enshrined in law
Speakers: Speaker 1
Certain decisions must never be delegated to AI; human control must be codified in law (Speaker 1)
The speaker argues that certain decisions must never be delegated to AI and should always remain with humans, with institutionalised legal and moral accountability, rejecting the idea that a machine can bear responsibility [41-44].
POLICY CONTEXT (KNOWLEDGE BASE)
The requirement for meaningful human control over lethal force is echoed in UN discussions on LAWS and calls for legal safeguards, including the CCW background and statements on accountability [S52][S51][S50][S33].
AI‑enabled combat systems are weapons and must be tested in realistic, contested battlefield conditions
Speakers: Speaker 1
AI‑enabled weapons are weapons, not mere software; they require rigorous contested‑field testing (Speaker 1)
AI systems designed to cause harm must be treated as weapons, evaluated under chaotic battlefield environments rather than controlled labs, otherwise they become liabilities [46-51].
POLICY CONTEXT (KNOWLEDGE BASE)
The classification of autonomous systems as weapons and the emphasis on testing under contested conditions are documented in the CCW’s LAWS background and expert analyses of operational testing requirements [S50][S33].
Transparency of data and models is essential – the “black box” must become a “glass box”
Speakers: Speaker 1
Transparency: data and training sets must be a “glass box,” not a black box (Speaker 1)
Commanders need visibility into the data sources and training processes of AI systems; the speaker calls for converting opaque black-box models into transparent glass-box ones [52-55].
POLICY CONTEXT (KNOWLEDGE BASE)
Lt Gen Vipul Shinghal highlighted the need for commanders to see data sources and model training, calling for the black box to become a glass box, reinforced by broader calls for data lineage guardrails [S36][S38][S46][S39].
Continuous capacity development and training of military personnel on AI‑augmented warfare
Speakers: Speaker 1
Continuous training of commanders and staff on AI‑augmented warfare (Speaker 1)
The speaker highlights the need to educate commanders and staff to integrate algorithms, command AI systems, and make informed decisions, noting steps the Indian Army is taking [55-56].
Promotion of indigenous AI applications and open collaboration with startups for self‑reliant defence transformation
Speakers: Speaker 1
Deployment of home‑grown applications (ACOM AI‑as‑a‑Service, Sama Drishti, Shakti, Akash Teer) (Speaker 1)
Open invitation to startups and innovators for self‑reliant transformation (Speaker 1)
Indigenous solutions such as ACOM AI-as-a-Service, Sama Drishti, Shakti and Akash Teer have been built through industry collaboration, and the armed forces invite further partnership with startups while emphasizing responsible AI development [35-38].
POLICY CONTEXT (KNOWLEDGE BASE)
Initiatives in France-India partnerships stress building indigenous, self-reliant AI capabilities while fostering open collaboration with startups, aligning with policy pushes for domestic AI ecosystems [S42][S43][S44][S45].
Need for national AI governance guidelines and guardrails, referencing India’s AI Governance Framework
Speakers: Speaker 1
Need for AI governance guidelines, referencing India’s AI Governance Framework (Speaker 1)
The speaker cites the Prime Minister’s call for guardrails and notes India’s AI Governance Guidelines as a path-breaking step for safe AI deployment in defence [26][57].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent Indian policy discussions advocate guardrails rather than strict regulation, calling for a national AI governance framework that balances innovation with oversight [S48][S46][S41].
Call for global conventions on autonomous weapons, meaningful human control and accountability
Speakers: Speaker 1
Call for global conventions on autonomous weapons, meaningful human control, and accountability (Speaker 1)
Referencing UN discussions, the speaker urges the development of international legal frameworks to ensure meaningful human control over autonomous weapons and accountability for their use [58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple UN and expert forums have called for an international convention on LAWS that codifies meaningful human control and accountability, with Germany and other states supporting such measures [S52][S51][S50][S32][S33].
India’s potential role as a leader in ethical AI, linking “Shakti” (force) with “Dharma” (rightness)
Speakers: Speaker 1
India’s potential role as a leader in ethical AI, aligning “Shakti” (force) with “Dharma” (rightness) (Speaker 1)
The speaker positions India, as a major military power and AI hub rooted in ethical restraint, as capable of leading global conversations on responsible AI, invoking the concepts of Shakti and Dharma and the Prime Minister’s Manav Vision for AI [61-63].
POLICY CONTEXT (KNOWLEDGE BASE)
Commentators position India as a potential global leader in ethical AI, leveraging its democratic values and emerging AI strategy to combine technological “force” with moral “rightness” [S41][S45][S40].
Similar Viewpoints
Both the speaker and the Prime Minister emphasize the necessity of guardrails, safety and ethical guidelines for AI deployment, especially in high‑stakes domains like defence [26][57].
Speakers: Speaker 1, Prime Minister (referenced)
Need for AI governance guidelines, referencing India’s AI Governance Framework (Speaker 1)
The speaker’s call for international norms aligns with the UN Secretary‑General’s recent remarks on meaningful human control and accountability in autonomous weapons [59][58-60].
Speakers: Speaker 1, UN Secretary‑General (referenced)
Call for global conventions on autonomous weapons, meaningful human control, and accountability (Speaker 1)
Unexpected Consensus
Strong alignment between a military defence perspective and broader human‑rights/ethical concerns
Speakers: Speaker 1
AI can recommend, but only humans can exercise judgment and bear moral accountability (Speaker 1)
Transparency: data and training sets must be a “glass box,” not a black box (Speaker 1)
Call for global conventions on autonomous weapons, meaningful human control, and accountability (Speaker 1)
It is noteworthy that a senior defence officer foregrounds human-rights language, ethical responsibility and international humanitarian law alongside operational efficiency, indicating an unexpected convergence of military and human-rights discourse [25][52-55][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
European and international dialogues underline the need to integrate human-rights safeguards into defence AI policies, exemplified by Germany’s stance and broader human-rights-military balance discussions [S32][S34][S35][S33].
Overall Assessment

Speaker 1 consistently stresses that AI is a powerful force‑multiplier for the Indian Armed Forces but must be governed by human judgment, legal safeguards, transparency, rigorous testing, capacity building, indigenous development, and international norms. These points cohere into a unified vision of responsible, ethically grounded AI in defence.

High internal consensus – all arguments reinforce a single, coherent stance on responsible AI. The alignment with external actors (Prime Minister, UN Secretary‑General) further strengthens the consensus, suggesting a strong, coordinated policy direction for AI governance in the defence sector.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains remarks only from Speaker 1. All arguments presented are his own views; no other speaker is quoted or referenced with a contrasting position. Consequently, there are no identifiable points of contention, partial consensus, or surprise disagreements among multiple participants.

Very low – the discussion is essentially a single‑speaker presentation, so the transcript does not reveal any inter‑speaker conflict or divergent approaches to the issues raised.

Takeaways
Key takeaways
AI has transformed military decision‑making from slow, paper‑based processes to real‑time, sensor‑fused digital battle‑spaces.
Human judgment remains essential; AI can recommend but cannot replace moral responsibility for lethal actions.
Four principles for responsible defence AI were outlined: (1) retain human control over critical decisions, (2) treat AI‑enabled systems as weapons and test them in contested conditions, (3) ensure transparency of data and models (glass‑box), and (4) train commanders and staff on AI‑augmented warfare.
India is developing indigenous AI capabilities (ACOM AI‑as‑a‑Service, Sama Drishti, Shakti, Akash Teer) and seeks collaboration with startups and industry for self‑reliant transformation.
A national AI governance framework is being launched, and India advocates for international norms on autonomous weapons, emphasizing meaningful human control and accountability.
Resolutions and action items
Open invitation to startups and innovators to collaborate on defence AI projects.
Commitment to train military commanders and staff in AI‑enabled operational concepts.
Implementation of a data‑centric, network‑centric approach across the Indian Armed Forces (declared as the ‘year of networking and data centricity’).
Development and deployment of indigenous AI applications (ACOM AI‑as‑a‑Service, Sama Drishti, Shakti, Akash Teer).
Pursue and promote international discussions on AI weapon governance, including meaningful human control and legal accountability.
Unresolved issues
Specific legal mechanisms to codify human‑in‑the‑loop control for AI‑enabled weapons.
Standardised testing protocols for AI systems under contested battlefield conditions.
Details on how transparency (glass‑box) will be operationalised for proprietary or classified AI models.
Global consensus on autonomous weapon conventions and the timeline for adopting such frameworks.
Suggested compromises
Balancing rapid AI‑driven decision support with mandatory human pause and judgment before lethal action.
Treating AI systems as weapons (subject to rigorous testing) while still leveraging their speed and analytical advantages.
Thought Provoking Comments
He contrasted his first war‑game experience using paper maps and slow, manual updates with today’s “star wars” operation rooms where massive digital displays fuse sensor data instantly via AI, compressing decision cycles to seconds.
The anecdote vividly illustrates the technological leap and the resulting pressure on decision‑making, setting up the central tension of the talk – speed versus human judgment.
It established the baseline for the entire discussion, prompting listeners to consider how rapid AI‑driven insight changes the battlefield and preparing the audience for later ethical and governance concerns.
Speaker: Speaker 1
He narrated a scenario where a senior commander, faced with a high‑confidence AI recommendation to strike, paused and asked, “What does the machine not know?” discovering a civilian evacuation that the algorithm missed, thereby averting civilian casualties.
This story crystallises the abstract debate about AI trust into a concrete, human‑centric decision point, highlighting the limits of algorithmic perception.
The narrative acted as a turning point, shifting the tone from technological optimism to a sober reminder of human responsibility, and it sparked subsequent emphasis on judgment, accountability, and the need for safeguards.
Speaker: Speaker 1
“AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them.”
It succinctly captures the core philosophical stance of the speech – technology as an aid, not a substitute for moral agency.
This declaration reinforced the earlier story, cementing the theme of human‑in‑the‑loop and guiding the later enumeration of four governance principles.
Speaker: Speaker 1
First principle: “Decisions that must not be delegated to AI must always remain human. Human control has to be institutionalized in law and moral accountability.”
It moves from anecdote to policy, proposing a concrete legal‑ethical boundary for AI use in combat.
Introduced a new discussion thread about legislative frameworks, prompting listeners to think about how existing military doctrine must evolve to embed human oversight.
Speaker: Speaker 1
Second principle: “AI‑enabled systems are designed to cause harm. Therefore they must be treated as a weapon, not as software, and must be evaluated in contested field conditions.”
Re‑framing AI as a weapon rather than a neutral tool foregrounds the necessity of rigorous testing and accountability, challenging any complacent view of AI as merely a decision‑support aid.
Shifted the conversation toward operational risk management and the practical challenges of deploying AI in noisy, deceptive battlefield environments.
Speaker: Speaker 1
Third principle: “The black box of data must become a glass box – commanders need to know what data is used and how it was trained.”
Calls for transparency directly addresses the trust deficit between operators and algorithms, introducing the concept of explainable AI in a high‑stakes context.
Prompted a deeper analytical layer, encouraging participants to consider technical solutions (e.g., model interpretability) alongside policy measures.
Speaker: Speaker 1
He linked military AI concerns to national policy, noting the launch of the “India AI Governance Guidelines” and describing them as a “path‑breaking step”.
By connecting the military narrative to broader civilian AI governance, he broadened the scope of the discussion beyond defense circles.
Created a bridge to civil‑society stakeholders, suggesting that lessons from the battlefield could inform civilian AI regulation and vice‑versa.
Speaker: Speaker 1
Historical analogy: “The rules governing NBC weapons, the Geneva Convention, the Convention on Landmines have stood the test of time; similarly, we need governance frameworks for AI‑based systems and autonomous weapons.”
Drawing on established international law provides a moral and legal precedent, reinforcing the argument for formalized AI controls.
Served as a rallying point for international cooperation, steering the conversation toward multilateral dialogue and the role of bodies like the UN.
Speaker: Speaker 1
Closing claim: “India, as a major military power, a growing AI hub and a civilization rooted in ethical restraint, has both the capacity and credibility to lead the global conversation on AI ethics – the ‘Manav Vision for AI’.”
Positions India not just as a consumer of AI technology but as a normative leader, injecting a strategic diplomatic dimension into the talk.
Elevated the discussion from technical and operational concerns to geopolitical leadership, encouraging other participants to view AI governance as an arena for soft power.
Speaker: Speaker 1
Overall Assessment

The speaker’s narrative arc—from a personal, technology‑driven war‑game memory to a concrete ethical dilemma, followed by a structured set of governance principles and a call for international leadership—served as the backbone of the discussion. Each pivotal comment introduced a new layer (operational reality, moral responsibility, legal frameworks, transparency, and geopolitical positioning) that progressively deepened the conversation. Although the transcript records only one voice, the remarks themselves acted as catalysts, steering the audience’s attention from awe at AI’s capabilities to a nuanced debate about accountability, safety, and global governance. Collectively, these insights shaped the session into a balanced examination of AI’s transformative power and the indispensable role of human judgment and institutional safeguards.

Follow-up Questions
What does the machine not know?
Highlights the need to identify blind spots in AI recommendations to prevent civilian casualties and ensure informed decision‑making.
Speaker: Senior commander (as described by Speaker 1)
Which decisions must never be delegated to AI and must always remain human?
Critical for establishing legal and moral accountability and preventing over‑reliance on autonomous systems.
Speaker: Speaker 1
How can AI‑enabled weapon systems be tested and evaluated in contested battlefield conditions?
Ensures that systems perform reliably under real‑world chaos, turning them into true force multipliers rather than liabilities.
Speaker: Speaker 1
How can trust and sovereignty be built into AI systems by making data and training processes transparent (turning the ‘black box’ into a ‘glass box’)?
Transparency is essential for commanders to understand and trust AI recommendations, safeguarding national security interests.
Speaker: Speaker 1
What training programs are needed for commanders and staff to effectively integrate, command, and oversee AI algorithms on the modern battlefield?
Equips military leadership with the skills required to use AI responsibly and maintain decisive human judgment.
Speaker: Speaker 1
What governance frameworks and legal provisions are required for the use of AI‑based autonomous weapons?
Necessary to align military AI use with international law, ethical standards, and strategic stability.
Speaker: Speaker 1
How can India collaborate with startups and innovators to accelerate indigenous AI applications for defence?
Promotes self‑reliant transformation and leverages domestic innovation to enhance military capabilities.
Speaker: Speaker 1
What specific AI safety guardrails and mandatory safeguards should be embedded in military AI systems?
High‑stakes military contexts demand robust safety mechanisms to prevent unintended harm.
Speaker: Speaker 1
How can meaningful human control and accountability be ensured in AI‑enabled autonomous weapons, as discussed in UN forums?
Ensures ethical deployment, prevents unchecked autonomy, and supports global consensus on responsible AI use.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.