How AI Drives Innovation and Economic Growth

Session at a glance

Summary

This discussion, moderated by Jeanette Rodrigues at the Bharat Mandapam, focused on how artificial intelligence can either narrow or widen development gaps between countries, particularly examining opportunities and challenges for emerging economies like India. Johannes Zutt from the World Bank opened by highlighting AI’s potential as a game-changer for developing nations, noting that 15-16% of jobs in South Asia show strong complementarity with AI, enabling workers to enhance their skills and effectiveness across sectors like agriculture, healthcare, and finance.


The panelists explored the concept of “small AI” – practical, affordable, locally relevant applications that work with limited infrastructure – as opposed to large foundational models concentrated in the US and China. Michael Kremer emphasized AI’s potential to provide public goods like weather forecasting and digital identity systems, citing India’s success in distributing AI weather forecasts to 38 million farmers. Anu Bradford discussed regulatory approaches, comparing the EU’s rights-driven framework with other models, while debunking the myth that regulation necessarily stifles innovation.


Ufuk Akcigit raised concerns about market concentration in AI’s foundational layer, noting worrying trends of talent migration from academia to large tech companies and the shift from open to protected science. Iqbal Dhaliwal stressed the importance of evidence-based evaluation of AI interventions, highlighting examples where promising technologies failed due to trust issues or inadequate adaptation of existing systems.


The discussion revealed both optimism about AI’s transformative potential in healthcare, education, and government services, and significant concerns about labor market disruption, market concentration, and the risk of humans becoming overly dependent on AI systems. The panelists concluded that realizing AI’s benefits while mitigating risks requires careful policy design, robust governance frameworks, and continued investment in human capabilities alongside technological advancement.


Key points

Major Discussion Points:

AI’s Dual Potential for Development: The discussion centered on how AI could either narrow or widen development gaps, with particular focus on “small AI” – practical, affordable, locally relevant applications that work in environments with limited connectivity and infrastructure, versus large foundational models that require significant resources.


Market Concentration vs. Democratization: A key tension emerged between AI’s democratizing potential at the application layer (where small businesses can access previously unavailable tools) and concerning concentration trends at the foundational layer, where high barriers to entry in compute, data, and talent are creating oligopolistic conditions.


Real-World Implementation Challenges: Panelists emphasized that successful AI deployment requires addressing fundamental systemic issues – from basic infrastructure (electricity, internet) to business environments, regulatory frameworks, and human adaptation. Technology alone cannot solve problems without proper institutional support.


Regulatory Sovereignty and Global Power Dynamics: The discussion explored how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the US and China, examining different regulatory approaches (US innovation-focused vs. EU rights-driven) and their implications for emerging economies.


Evidence-Based Evaluation and Scaling: Strong emphasis on rigorous testing of AI interventions, moving beyond technological capability to measure actual user impact, scalability, and continuous improvement, with multiple examples of promising pilots that failed to scale due to political economy factors.


Overall Purpose:

The discussion aimed to provide policymakers in developing countries with practical guidance on harnessing AI’s benefits while mitigating risks, moving beyond both utopian and dystopian narratives to focus on real-world implementation challenges and opportunities.


Overall Tone:

The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterized earlier AI summits. While panelists acknowledged significant risks around market concentration, job displacement, and governance challenges, they maintained a constructive focus on actionable solutions. The conversation remained consistently grounded in empirical evidence and real-world examples, avoiding both technological determinism and excessive pessimism.


Speakers

Speakers from the provided list:


Jeanette Rodrigues: Moderator/Host of the panel discussion


Johannes Zutt: World Bank representative (referred to as “John” in the discussion)


Ufuk Akcigit: Macroeconomist, working on World Development Report 2026 on AI and development with the World Bank


Michael Kremer: Nobel Prize winner, involved with Development Innovation Ventures and various AI development initiatives


Anu Bradford: Legal scholar/academic based in the U.S., originally from Europe, specializing in AI regulation and policy


Iqbal Dhaliwal: Works at J-PAL (Abdul Latif Jameel Poverty Action Lab), former civil services exam topper in India, focuses on evidence-based policy interventions


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speakers names list.


Full session report

This comprehensive discussion at the Bharat Mandapam, moderated by Jeanette Rodrigues, brought together leading experts to examine one of the most pressing questions in international development: whether artificial intelligence will narrow or widen the development gap between nations. The panel featured Johannes Zutt from the World Bank, Nobel laureate economist Michael Kremer, macroeconomist Ufuk Akcigit, legal scholar Anu Bradford (originally from Europe but based in the US), and development practitioner Iqbal Dhaliwal (a civil services exam topper turned researcher), each offering distinct perspectives on AI’s transformative potential and inherent risks.


Rodrigues noted this represented the fourth AI summit, following previous gatherings including the first in the UK, and observed a notable shift from fear-based discussions in earlier summits to the hope-focused approach evident in India’s “AI for all” objective.


AI’s Transformative Potential for Development

Johannes Zutt opened the discussion by positioning AI as a potential game-changer for emerging markets and developing economies, presenting evidence from the World Bank’s recent research in South Asia. The findings revealed that approximately 15-16% of jobs in the region demonstrate strong complementarity with AI, enabling workers to expand their skills and effectiveness rather than being displaced. This statistic challenges the common narrative of AI as primarily a job destroyer, instead highlighting its potential as a productivity enhancer.


Zutt described practical applications that illustrate AI’s democratising potential: farmers using AI to identify crop diseases and pests, nurses leveraging AI for diagnostic support in unfamiliar cases, and financial institutions employing AI to better assess borrower creditworthiness. These examples demonstrate how AI can fill critical skill gaps in healthcare, education, and financial services.


However, Zutt acknowledged significant challenges facing developing countries in harnessing AI’s potential. Basic infrastructure deficits—unreliable electricity, weak internet connectivity, limited digital literacy—create fundamental barriers to AI adoption. Many users may need to rely on voice-based interactions with basic devices rather than sophisticated smartphones.


The Small AI Revolution

Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that address specific problems whilst working within constraints of limited connectivity, data availability, skills, and infrastructure. This approach contrasts with large foundational models that require massive computational resources.


Zutt emphasised that small AI represents the most promising pathway for developing countries, requiring bespoke solutions that help users conduct basic investigations using their phones, identify problems, find solutions, and connect with local resources. India emerged as a compelling example, with the world’s third-largest digital universe after the United States and China, built on strong foundations through digital identity programmes and payment platforms.


Market Concentration and Creative Destruction

Ufuk Akcigit introduced a crucial analytical framework distinguishing between AI’s foundational layer and application layer. At the application layer, AI democratises capabilities previously available only to large businesses, enabling small enterprises to access sophisticated tools. However, the foundational layer presents extraordinarily high entry barriers due to compute-intensive requirements and massive data needs, creating conditions prone to market concentration.


Akcigit presented empirical evidence of troubling trends: market concentration in the United States has been increasing since 1980, accelerating after 2000, with innovative resources increasingly shifting towards large incumbent firms. He highlighted a significant brain drain from academia to industry, with dramatic salary increases in industry accelerating after breakthrough moments in 2012 (image processing) and 2017 (foundational models). When researchers move to industry, their publication output drops significantly whilst patenting increases dramatically, representing a shift from open science to protected intellectual property.


Akcigit’s most provocative insight challenged AI’s development premise, questioning why entrepreneurship and dynamism were absent in emerging economies before AI’s arrival. He noted that firm size in developing countries was often best predicted by family size rather than competitive performance, suggesting AI alone cannot overcome deep-seated institutional barriers.


Public Goods and Government Investment

Michael Kremer provided analysis of market failures and government roles, arguing that whilst private firms develop profitable AI applications, critical public goods applications require government and multilateral support. He cited AI-powered weather forecasting as an exemplar: India's distribution of AI weather forecasts to 38 million farmers demonstrated both the scale of impact and the public-good nature of such services.


During an unpredictable monsoon season, AI forecasts accurately predicted early arrival in Kerala and southern India followed by unexpected delays—information that reached farmers when other sources failed. Survey evidence showed farmers responding by adjusting transplanting schedules and seed varieties.


Kremer also highlighted India’s digital identity system as a powerful example of government investment in AI-enabled public goods creating platforms for broader innovation. He referenced Microsoft Research India’s HAB program for driver’s licenses as another example of AI applications in traffic safety.


However, Kremer expressed concern about public sector adoption challenges, noting that government systems may resist AI technologies, potentially excluding the poor from benefits in public services.


Regulatory Sovereignty and Global Power Dynamics

Anu Bradford addressed how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the United States and China, with DeepSeek representing China’s position in large language models. She argued that the Global South has the same incentives for regulatory sovereignty as developed nations but faces extraordinary implementation challenges.


Bradford’s analysis of the European Union’s rights-driven regulatory framework offers lessons for countries seeking to balance innovation with protection. Crucially, she challenged the conventional wisdom that regulation stifles innovation, calling this a “false choice.” Her analysis of Europe’s innovation gap identified four structural factors: lack of a digital single market across 27 jurisdictions, absence of robust capital markets (with only 5% of global venture capital compared to over 50% in the United States), legal frameworks discouraging risk-taking, and failure to harness global talent effectively.


This reframing suggests developing countries can pursue protective regulation without sacrificing innovation, provided they address underlying structural factors that drive technological development.


Implementation Challenges and Real-World Constraints

Iqbal Dhaliwal brought crucial field experience, emphasising that successful AI applications must be demand-driven and free up time for frontline workers rather than adding burden. His example of AI-powered essay feedback in public schools illustrated this principle—the technology eliminated routine tasks like correcting spelling errors, freeing teachers for higher-value activities like analytical thinking instruction.


However, Dhaliwal’s research reveals systematic implementation failures even when AI demonstrates superior laboratory performance. His most revealing example involved machine learning for tax collection in India: despite successfully increasing identification of fraudulent firms from 38% to 55% at low cost, officials refused to scale the programme because it threatened existing power structures by removing human discretion in enforcement decisions.


Evidence-Based Evaluation and Scaling

Both Kremer and Dhaliwal emphasised rigorous evaluation methodologies. Kremer outlined a four-stage framework: model evaluation (technical performance), user impact assessment (efficacy trials), scalability testing (effectiveness at scale), and continuous improvement systems. He referenced Development Innovation Ventures as an example of tiered funding approaches—small grants for pilots, larger grants for rigorous testing, and substantial funding for successful scale-up.


Future Risks and Opportunities: 2035 Predictions

In rapid-fire predictions for 2035, panellists identified both opportunities and risks:


Ufuk Akcigit expressed optimism about government productivity improvements but concern about labour market disruption, particularly for entry-level jobs that represent aspirational opportunities in developing countries. He highlighted a policy contradiction where governments incentivise AI adoption whilst taxing human employment through provident fund contributions and labour regulations.


Anu Bradford showed excitement about education and health improvements but worried about humans “getting dumber” by outsourcing thinking to AI systems. As an educator, she emphasised using AI to enhance rather than substitute human capabilities.


Michael Kremer was optimistic about health and education advances but concerned about public sector adoption failures that could exclude the poor from AI benefits in public services.


Iqbal Dhaliwal shared optimism about healthcare and education whilst worrying about market concentration preventing broad benefit distribution.


Johannes Zutt expressed excitement about targeted poverty reduction through AI-enabled individual-level interventions but warned that inadequate governance frameworks could enable serious abuses.


Balancing Hope and Pragmatism

The discussion successfully balanced optimistic potential with realistic assessment of implementation challenges. Unlike earlier AI summits dominated by fear about job displacement, this conversation maintained constructive focus on actionable solutions whilst acknowledging genuine risks.


The panellists’ diverse backgrounds provided complementary perspectives, with convergence on key issues like evidence-based evaluation, locally relevant solutions, and market concentration concerns suggesting robust foundations for policy development.


Conclusion and Policy Implications

The discussion revealed that AI’s impact on development gaps depends critically on policy choices made today. The technology offers genuine opportunities to leapfrog development challenges, particularly through small AI applications working within existing constraints rather than requiring wholesale infrastructure transformation.


However, realising benefits requires addressing structural issues predating AI: market concentration in foundational development, inadequate governance frameworks, institutional resistance to change, and policy contradictions favouring capital over labour.


The conversation suggests developing countries need not choose between innovation and regulation but must address structural factors driving technological development: market access, capital availability, talent retention, and risk-taking culture. Success requires coordinated action across infrastructure investment, regulatory frameworks, education systems, and labour market policies.


As Rodrigues observed in closing, noting the “messy human notes” visible on panellists’ screens, the experts weren’t outsourcing their thinking to AI—embodying the principle that AI should enhance rather than replace human capabilities. The choice between AI narrowing or widening development gaps remains open, contingent on the wisdom and effectiveness of policy responses implemented today.


Session transcript

Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them.

It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they're not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there's clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge- or document-based, performing relatively rote work that can be taken over by automation. And we're actually seeing this in the World Bank Group. We went and looked at the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that in that layer, sort of at the bottom of the professional classes inside the bank group, there are just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that's sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher-end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies and developing markets harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, and infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind, in their local language, tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements, interoperability, making sure that accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves; that’s not really in our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation, an AI sandbox, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, AI, innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with who’s attended from the first AI summit to today, this is, I think, the fourth AI summit being held. The first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how we can balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk: how do you think about AI? And especially, where do you see areas of creative destruction to foster the innovation that we need?

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT, a general-purpose technology. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not mislead us. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? What does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers: one, the foundational layer, and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application layer. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.

When we look at the foundation layer, the entry barrier is really, really high: it’s very compute-heavy, it’s very data-heavy, it’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because this is the upstream to the application layer, which is downstream of the foundation layer. So that’s why whatever happens at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why, when we looked at firms’ life cycles, for instance, was it not up or out?

Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that a business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting jumping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. But I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then AI has the potential to substantially narrow some of the gaps. And the question of which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI, for public goods, for example, that will not attract commercial investment commensurate with the need.

And that’s an area where I think governments and multilateral development banks can play an important role. Some of this very much echoes what you were saying about small models, and I’ll mention the link between the two. An obvious example where India has been a leader for the world is the development of digital identity. As Ufuk was saying, this enables a lot of work by individual entrepreneurs, and a lot of other applications. So that’s a huge success, and I think multilateral development banks, together with India, can help bring it to many other countries. Let me take another example, one that’s not as well-known, but that picks up on your comment about farmers.

One thing that’s critical for farmers is that they have to make a bunch of decisions that are weather-dependent. When do you plant, for example? What varieties do you use: a drought-resistant variety, or another variety? Most farmers around the world don’t have access to state-of-the-art weather forecasts. I’m not talking about one country; in low- and middle-income countries, they don’t have access to that. Now, there’s been a huge advance. We tend to think of large language models, but AI is also pushing science forward, and that includes weather forecasting, where there’s really a revolution driven by AI. But weather forecasts are non-rival and largely non-excludable. They’re the classic definition of a public good.

So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Again, India is a leader here. The Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence suggests that farmers respond. Take last year’s monsoon: it came early in Kerala and southern India, but then there was an unexpected delay in its progression. The AI forecasts got that right, and they were the only source of information that reached farmers with that. We did a survey in the areas above that line, and farmers are responding: they transplant more, and they use hybrid seeds more.

Evidence from around the world is consistent with this: farmers respond to these AI weather forecasts. So that’s one example, but there are many others, and I’m happy to discuss them in education, traffic enforcement, and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision; it’s something India could do unilaterally. When it comes to large language models, that’s not the reality. The large language models are concentrated in the US, and now in China with DeepSeek. Anu, in a world where the rules are largely being set by the two large powers, the US and China, arguably, and of course the EU as well, and you’ve done a lot of work on that, who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

I think the Global South has the same kind of incentive for its own AI sovereignty, including regulatory sovereignty: to design the rules that work better for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to make sure that it is innovation-friendly, yet at the same time you need to be careful in managing the risks for individuals and societies. Even very established regulators like the European Union have found coming up with the AI Act to be one of their most challenging tasks. So there’s probably something to be learned from the jurisdictions that have gone ahead and done the kind of thinking that has resulted in some of the regulatory frameworks we now have in place.

So if you think about the choices that India has when it looks around, one of them is to think about how the EU goes about this. The EU follows what I would call a rights-driven approach to regulation. What really characterizes the first horizontal, binding, economy-wide AI regulation that the Europeans enacted is that it seeks to protect the fundamental rights of individuals and the democratic structures of the society, and it also seeks to ensure a greater distribution of the benefits from the AI revolution. The European approach is very conscious that it wants to share some of the benefits, so they don’t all go to the large developers of these models, but so that individual users, society at large, and smaller companies benefit from AI as well. So there’s something the Europeans can teach in terms of that regulatory approach, in addition to maybe some details of how that regulation was in the end constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into its economy as such. I think India is in a very good position to take the lessons that serve its needs, yet make the kind of local modifications and variations that reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow-up question to you. As India makes its own rules, where does the trade-off lie between regulation and innovation?

Anu Bradford

This is very interesting, because I am based in the U.S. but originally from Europe, and these two jurisdictions are often described this way: the U.S. develops technologies and the Europeans regulate those technologies. So which does India want, the innovation path or the regulation path? I think there are many voices who would go for innovation. But I really would like to debunk this myth, because to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not that there’s the GDPR, the General Data Protection Regulation. It’s not that there is the AI Act. The perceived innovation gap between the United States and Europe comes, I think, from four things.

First, there is no digital single market in Europe. It’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union. Five percent of global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking. I wouldn’t encourage you to replicate the European ones, because it’s very hard to innovate on the frontier of technological innovation when failure is punished; sometimes you fail, but you need to then be given a second chance.

And the fourth foundational pillar of the robust U.S. tech ecosystem, I think, is that the U.S. has been spectacularly successful in harnessing global talent: people who have chosen to come to the U.S., including many Indian data scientists and engineers, who think the U.S. is the place where they can start their companies, scale their companies, and fund their companies, and whose universities can attract them. So before accepting the idea that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation, we need to understand better what actually drives technological innovation and what role regulation plays in it.

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You’re working in a part of the world, South Asia, where, at the risk of sounding like a provocateur, regulation and enforcement are a little bit of the Wild West. And therefore we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and targeted use cases?

Iqbal Dhaliwal

vulnerable public schools went all the way from 11th to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start to unpack it: what was this thing doing? The first thing you find out is that a lot of people ask, oh, does this mean I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that, and now the teacher can sit with you in the freed-up time and say: how did you set up the structure of this essay?

Did you think about this analytically or not? That’s the first insight that comes from evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few interventions free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner. The second thing that was really important here is that this was demand-driven, right? There was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.

So I think that is a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal more seriously than you would a normal panelist.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where it’s not going to move quickly, and it’s going to be very important for governments, for multilateral development banks, and for philanthropy to move. There are a number of approaches to this. One way is by encouraging innovation through institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. I’ll give you one example of something that I’m involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding: initially, very small grants to pilot new ideas.

Then there are somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, there are funds to help transition them to scale. Why is that important? It’s important because, if we’re thinking about public services, and there are other sectors where this is needed, there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but if they have to sell them to the government, they’re facing a monopsonistic buyer. They’re probably not going to get rich doing that. So some support to generate more entrants in that market is, I think, very important.

It will also mean that prices will go down and quality will go up when the government procures. Let me give another example of the potential, something that I doubt many people here are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. Traffic is a real problem interfering with the urbanization that may drive growth. There are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Automated traffic cameras have the opportunity to improve traffic outcomes but also to improve people’s perception of fairness in government, and India is moving on this. Let me mention another thing within traffic safety. Microsoft Research India developed a program called HAMS for driver’s licenses, which automatically uses AI to test whether drivers can actually drive before they pass their exams. It’s been introduced, I believe, in 56 sites across India, and hundreds of thousands of people have taken tests this way. We followed up: we got information from Ola on ratings, and the number of drivers rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that’s something that was developed not by Microsoft’s main business but by Microsoft Research. We can create some support for more ideas like that to be developed and rigorously tested; that can benefit India and can benefit the whole world.

Jeanette Rodrigues

We are running out of time. This is probably the one place in India where time is really respected, and we have to end on time.

So I had a list of wonderful questions, but let me now move to a space where we are giving shorter, quicker answers to the deeply, deeply interesting questions about who’s winning and who’s losing. Michael, if I could start with you, actually: we’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use?

Michael Kremer

Okay. First, model evaluation. AI companies typically do that part: how good is the model output for specific tasks? Forecasting the weather, does it do a good job? Does it match your local language well?

Second, user impact. Here there’s a role for initial pilots akin to a medical efficacy trial: if you put in the work of trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale; that’s more like an effectiveness trial in medicine. It’s important to think not just about the tech but also about the human systems. Are the teachers actually going to use the product? How can you get the teachers to use the product? And the fourth area is continuous improvement: you want a system that improves the underlying models. So in procurement, we might want to think about requiring continuous A/B tests, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?

Iqbal Dhaliwal

I think hype is natural, because the technology is exciting. It’s a general-purpose technology. It’s evolving very quickly. The marginal cost of deployment for the next user is very low. It’s multimodal: today you are doing it in text, tomorrow in video, the day after tomorrow in audio. Everybody who has a smartphone has it. So I can understand the hype and where it is coming from. But what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for. My job at J-PAL, sitting at the top, is to say: don’t worry about one professor’s evaluation or one researcher’s evaluation, but ask, when I connect all these dots, what am I seeing?

And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. When you try to unpack that, you find that even though the technology worked better than the human in the lab, with better predictability, in the field it ends up decreasing performance; it even lowers the efficiency of the doctors, because we have not trained them enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even in the field, the rest of the system will adapt to it. No, you have to adapt the system to the rest of the world. A quick example comes from India, where, with one particular state government, we tried to improve the collection of value-added taxes, called GST in India. There is a whole worry about bogus firms that are created to claim these GST, or value-added tax, credits. The machine learning algorithm was able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused. Think about it: you have taken away the discretion of the human to decide whether they should raid Michael’s firm or Iqbal’s firm. That is power. And if you haven’t thought that through, what is the point of the technology?

Jeanette Rodrigues

I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out bad actors, and talking about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm. As in the layered model that I highlighted earlier, I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First of all, when we look at, for instance, dynamism or market concentration in the U.S., market concentration has been increasing since 1980, and in an accelerating way after 2000. That’s the first set of evidence. The second set comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.

And when I say incumbents, I mean firms that have more than 1,000 employees. Around 2000, 50% of inventors used to work for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. The final piece of evidence, and we are going to release this study next week, comes from looking at how AI is impacting universities, specifically at AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; now that has gone up to $2 million.

And there have been two breakpoints: one in 2012, with image processing, and the other in 2017, with the foundational model revolution. The more worrying part, which brings me back to the foundational model side of things, is that this created a massive out-migration from academia to industry.

Especially after 2017, when compute and infrastructure became so important and we saw the rise of AI, the destination has been large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part is also that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Spillovers are extremely important for creative destruction and for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion of this, and we need it before it gets too late. Because once you fasten the first button wrong, the rest will follow wrong as well. That’s why I think we have to have this frank conversation early in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if I could ask you: if the finance minister of a developing country, let’s say India, comes to you and asks, “Anu, how should I think?”, what would you tell her?

Anu Bradford

Today, if you think about how much political power, but also geopolitical power, is shaping our conversations around AI, each country is now being pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a universal goal for everyone. But I would remind her, even when encountering players like the United States and China, that nobody in today’s world will be completely sovereign in the AI space. Let me take one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, that we want to have these powerful foundation models.

That means you need to have a lot of compute. You can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that depends on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies, but they ultimately make us all poorer. And that is what I would keep in mind as a finance minister of India when approaching the other middle powers and the great powers.

Jeanette Rodrigues

Easier said than done. Our final, final section is, of course, the rapid-fire round; we all love this in this room. In one sentence, in one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. We’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now?

Ufuk Akcigit

I guess you already know my second answer. I think the future of market concentration is something we should be concerned about, and we might regret in 10 years not having discussed it sufficiently. On what will change in a positive direction: clearly health care and education, I think. It’s a no-brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

First of all, it’s so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited, especially about what happens in the education space, but also in the health space. In terms of the risks, one thing that we are not paying attention to, and that I would even call a systemic risk, is this: many worry about AI getting almost too smart, but I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, I think about how I will teach my students to use generative AI to enhance but not substitute their capabilities. We would make a tremendous mistake if we forwent that hard work, that beautiful moment of thinking through hard problems, of creating, of investing in our own capabilities. All of that just cannot be outsourced, because otherwise we won’t even know what kinds of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education; I think we’ll see big improvements in those. But the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them. That’s because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on problems other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and we just don’t wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer’s gone, but I’ll take a risk and quickly run through the others.

Iqbal Dhaliwal

Yes. I am actually much more optimistic about the government adopting these things. When you call 100, your call is going to get answered much more quickly; the PCR van is going to be at your house much faster; hospitals will be able to link your health records. So I think government-sector productivity is going to improve by leaps. The biggest risk, I think, is definitely the labor market. If there were a dial where I could slow down adoption and give the labor market time to catch up, I would use it; that’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.

Here, it’s the aspirational job that created the Gurgaons, Noidas, and Mohalis of this country, and those people are going to run out of jobs very, very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we are piling on and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and taxing them, through provident fund and labor market regulations, for hiring labor. And that, for me, is actually the biggest risk.

Johannes Zutt

I think that for the first time in human history, we may actually have the tools available to enable us to target poverty-reduction and poverty-elimination initiatives at individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or won’t be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you, for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

Johannes Zutt

Speech speed

141 words per minute

Speech length

1450 words

Speech time

612 seconds

AI as a leap‑frog tool for productivity and growth

Explanation

Johannes argues that AI offers a unique chance for emerging economies to bypass long‑standing development hurdles and boost growth and productivity. He sees AI as a game‑changer that can accelerate progress across sectors.


Evidence

“So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.” [5]. “It offers clear opportunities to enhance growth and productivity.” [1].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | The enabling environment for digital development


Infrastructure and skill gaps limit AI uptake

Explanation

He warns that many developing countries lack basic foundations such as reliable internet, electricity, and basic literacy, which hampers effective AI adoption. These constraints must be addressed before AI benefits can be realized.


Evidence

“At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use.” [19]. “They may not have an internet backbone that’s sufficiently strong.” [23]. “People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices.” [24].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Capacity development | Closing all digital divides | The enabling environment for digital development


AI can fill skill gaps in agriculture, health, finance

Explanation

Johannes highlights AI’s potential to address shortages of skilled personnel by providing pattern detection, forecasting, and resource allocation tools in sectors like education, health care, and agriculture.


Evidence

“So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.” [14].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Capacity development


Risk of job losses in entry‑level, knowledge‑based roles

Explanation

He notes that AI automation may displace entry‑level, routine knowledge jobs, creating labor‑market challenges that require policy attention.


Evidence

“One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based, performing relatively rote work that can be taken over by automation.” [32].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

The digital economy | Human rights and the ethical dimensions of the information society | Capacity development


Small AI: affordable, offline, locally relevant solutions

Explanation

Johannes defines “small AI” as practical, low‑cost applications that work with limited connectivity, data, and infrastructure, making them suitable for low‑resource settings.


Evidence

“Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited.” [82].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | Closing all digital divides | The enabling environment for digital development


Governments should create AI sandboxes and standards for safe experimentation

Explanation

He stresses the need for both public‑facing standards and private‑sector engagement, including sandbox environments, to enable safe and innovative AI deployment.


Evidence

“I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public‑facing effort to address the standards and the other issues, the interoperability and so that I mentioned before, but also a private‑sector‑facing effort because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.” [59]. “We are helping governments to create the space that enables experimentation in AI sandbox to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive.” [169].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | The enabling environment for digital development | Monitoring and measurement


Governance failures could enable abuses and power concentration

Explanation

Johannes warns that without robust governance, AI could be misused, leading to concentration of power and potential abuses.


Evidence

“I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses.” [163].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development



Ufuk Akcigit

Speech speed

163 words per minute

Speech length

1041 words

Speech time

382 seconds

Creative destruction differs between advanced and emerging markets

Explanation

Ufuk points out that the dynamics of creative destruction will vary, with emerging economies facing distinct challenges compared to advanced economies.


Evidence

“I would like to, you know, separate advanced economies from emerging or developing economies.” [51]. “Now, spillover is extremely important for creative destruction, for the future of innovation.” [44].


Major discussion point

AI as a development catalyst for emerging economies


Topics

The digital economy | Artificial intelligence | Social and economic development


Need for a business‑friendly environment to realise AI benefits

Explanation

He emphasizes that a supportive business climate is essential for entrepreneurs to develop and deploy AI solutions.


Evidence

“But at the end of the day, we need to make sure that the business friendly environment is there for entrepreneurs to come and exercise their ideas” [58].


Major discussion point

AI as a development catalyst for emerging economies


Topics

The enabling environment for digital development | The digital economy


Foundational layer is compute‑, data‑, talent‑intensive, leading to concentration

Explanation

He describes the foundational AI layer as having high entry barriers due to heavy compute, data, and talent requirements, which fosters market concentration.


Evidence

“When we look at the foundation layer, the entry barrier is really, really high, and, you know, the compute is very compute‑heavy.” [46]. “It’s very talent‑heavy.” [92]. “It’s very data‑heavy.” [93].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy


Concentration risk: incumbents dominate foundational AI market

Explanation

He notes that the foundational AI market is prone to concentration, with large incumbent information firms likely to capture most of the value.


Evidence

“So as a result, you know, this market, at least this layer, is very concentration‑prone.” [91]. “The target or the destination is large incumbent information companies, which again highlights where things are going in terms of the concentration.” [99].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy


Keeping the foundational layer contestable; universities as key players

Explanation

He argues that to prevent excessive concentration, the foundational layer should remain contestable, with universities playing a central role.


Evidence

“So that’s why if we will keep the foundational layer contestable, I think that the fundamental players there will be universities.” [95].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | Capacity development


Labor‑market risk: rapid loss of entry‑level jobs

Explanation

Ufuk identifies the biggest risk as labor‑market disruption, especially the swift disappearance of entry‑level positions without adequate safety nets.


Evidence

“The biggest risk, I think, is definitely the labor market.” [35]. “If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry.” [41].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

The digital economy | Human rights and the ethical dimensions of the information society



Michael Kremer

Speech speed

160 words per minute

Speech length

1592 words

Speech time

593 seconds

Multilateral policy actions can narrow development gaps

Explanation

Michael contends that coordinated actions by national governments and multilateral development banks can harness AI to reduce existing development disparities.


Evidence

“I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps.” [12].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Financial mechanisms | The enabling environment for digital development


AI‑driven weather forecasts improve farmer decisions

Explanation

He provides evidence that AI weather forecasts are being used by millions of farmers, enhancing agricultural decision‑making.


Evidence

“Farmers respond to these AI weather forecasts.” [30]. “So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts.” [68]. “The AI forecasts got that right, that was the only source of information that reached farmers with that.” [73]. “But weather forecasts are non‑rival.” [75].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Agricultural development


Multilateral institutions risk moving too slowly; need faster action

Explanation

He acknowledges concerns that multilateral bodies may lag behind rapid AI advances, urging more agile responses.


Evidence

“Is there a risk that multilaterals are moving too slowly relative to the technology?” [66]. “There are certain areas where the private sector is going to move, but there are other areas where they’re not going to move quickly, and it’s going to be very important for governments and for multilateral development banks and for philanthropy to move.” [71].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Financial mechanisms | The enabling environment for digital development


Evidence‑based innovation funds and staged financing to scale AI solutions

Explanation

He proposes the creation of evidence‑based innovation funds that provide tiered grants, supporting pilots and scaling successful AI applications.


Evidence

“One way is by encouraging innovation by setting up institutions like innovation funds, particularly evidence‑based, to echo Iqbal, I think evidence‑based innovation funds.” [116]. “It has tiered funding, so there’s initially very small… grants to pilot new ideas.” [168]. “Then there’s somewhat larger grants to rigorously test them as Iqbal emphasized and then for those that are most successful there’s funds to help transition them to scale up.” [167].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Financial mechanisms | Monitoring and measurement


Rigorous evaluation framework: model performance, user impact, scalability, continuous improvement

Explanation

Michael outlines a multi‑dimensional evaluation approach covering technical performance, user outcomes, scalability, and ongoing model refinement.


Evidence

“First, model evaluation.” [124]. “Second, user impact.” [134]. “Second… scalability and usage at scale that’s more like an effectiveness trial in medicine… and the fourth area is continuous improvement you want a system that improves the underlying models…” [189].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Monitoring and measurement | Artificial intelligence
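
The four-stage framework above can be read as a sequence of go/no-go gates: an intervention advances only if it clears the previous stage. The sketch below is a minimal illustration of that idea; the stage names follow the discussion, while the metric names and thresholds are purely hypothetical.

```python
# Hypothetical sketch of the four-stage evaluation framework described above.
# Metric names and thresholds are illustrative, not from the discussion.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Stage:
    name: str
    passed: Callable[[dict], bool]  # gate: metrics -> go/no-go

STAGES = [
    # 1. Model evaluation: is raw output good enough for the task and language?
    Stage("model evaluation", lambda m: m["task_accuracy"] >= 0.80),
    # 2. User impact: does a pilot (like an efficacy trial) improve outcomes?
    Stage("user impact", lambda m: m["pilot_effect_size"] > 0.0),
    # 3. Scalability: does the effect survive at scale (an effectiveness trial)?
    Stage("scalability", lambda m: m["at_scale_effect_size"] > 0.0),
    # 4. Continuous improvement: is the underlying model improving over time?
    Stage("continuous improvement", lambda m: m["accuracy_delta"] > 0.0),
]

def evaluate(metrics: dict) -> list[str]:
    """Run each gate in order; stop at the first failure."""
    cleared = []
    for stage in STAGES:
        if not stage.passed(metrics):
            break
        cleared.append(stage.name)
    return cleared

metrics = {
    "task_accuracy": 0.85,
    "pilot_effect_size": 0.12,
    "at_scale_effect_size": 0.05,
    "accuracy_delta": 0.02,
}
print(evaluate(metrics))  # all four stages cleared
```

The sequencing mirrors the medical-trial analogy in the evidence: a pilot resembles an efficacy trial, scale-up an effectiveness trial, with continuous improvement closing the loop.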


Public sector may lag in adopting AI, leaving the poor without access

Explanation

He warns that if governments do not adopt AI tools, the benefits will not reach low‑income populations who rely on public services.


Evidence

“the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them.” [185].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Social and economic development | Human rights and the ethical dimensions of the information society



Anu Bradford

Speech speed

199 words per minute

Speech length

1374 words

Speech time

412 seconds

EU rights‑driven, innovation‑friendly regulation as a reference

Explanation

Anu points to the European Union’s rights‑based regulatory approach as a model that balances protection of fundamental rights with innovation.


Evidence

“The EU follows what I would call a rights‑driven approach to regulation.” [122]. “So the idea that choosing to follow… Or imitate aspects of the European rights protective regulation would come at the cost of innovation, we need to understand better what drives the technological innovation and whether regulation should…” [123].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development


India can adapt global lessons while crafting sovereign AI rules

Explanation

She argues that India is well‑positioned to incorporate international best practices into locally‑tailored AI regulations, preserving sovereignty.


Evidence

“I think India is in a very good position to take the lessons that serves its needs yet make the kind of local modification and variations that are more reflecting the distinct priorities of this country.” [135].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | The enabling environment for digital development


Myth that regulation necessarily stifles innovation; need to understand drivers

Explanation

Anu seeks to debunk the belief that regulation hampers AI progress, emphasizing that well‑designed rules can coexist with innovation.


Evidence

“But I really would like to debunk this myth that to me it’s a false choice to say that the reason we don’t see these large language models being developed in Europe is not because there’s a GDPR…” [144].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Global South must assert regulatory sovereignty amid US/China dominance

Explanation

She stresses that countries of the Global South should develop their own AI regulatory frameworks to avoid dependence on the major powers.


Evidence

“the Global South has the same kind of incentive for their own AI sovereignty, including then regulatory sovereignty, to design the rules that better work for their economies, for their societies…” [137]. “But I would remind even when encountering players like the United States and China that nobody in today’s world will be completely sovereign when it comes to AI space.” [140].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development



Iqbal Dhaliwal

Speech speed

183 words per minute

Speech length

1151 words

Speech time

375 seconds

Small AI frees teachers’ and health workers’ time, improving outcomes

Explanation

Iqbal notes that AI applications that automate routine tasks free up frontline workers’ time, leading to better service delivery in health and education.


Evidence

“So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner.” [28]. “It frees up the teacher time.” [81]. “There was a demand by the teachers to free up their time.” [83].


Major discussion point

AI as a development catalyst for emerging economies


Topics

Artificial intelligence | Social and economic development | Capacity development


Laboratory‑proven AI may fail in the field without proper training and system adaptation

Explanation

He cautions that AI tools that perform well in controlled settings can underperform in real‑world deployments if users are not adequately trained or systems are not adapted.


Evidence

“So some of these diagnostic things can work, have better predictability in the lab, but in the field, they end up decreasing, not only is their efficiency lower, but it lowers the efficiency of the doctors, because we have not trained them enough important.” [172]. “We just assume that just because the technology works, even if it works in the field, the rest of the system will adapt to it.” [173].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Capacity development | Monitoring and measurement


GST fraud‑detection algorithm not scaled due to concerns over human discretion and power

Explanation

He describes a case where a successful AI model for detecting bogus GST firms was not rolled out because authorities feared loss of human decision‑making power.


Evidence

“When it came time to scale up this program by the government, they refused to scale it up because you think about it, you have taken away the discretion of the human to decide whether they should raid Michael’s firm or they should raid Iqbal’s firm.” [178]. “The machine learning algorithm is able to increase the probability of predicting a bogus firm from 38% to 55% in one shot at a very, very low cost.” [181].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Governance | The digital economy
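
The discretion point in this case can be made concrete with a toy model. The sketch below is not the actual GST algorithm: the features, weights, and firms are invented. It shows how a risk-scoring classifier turns audit selection into a ranked queue, which is precisely what removes the human discretion the authorities were reluctant to give up.

```python
# Toy illustration (not the actual GST model): score firms on hypothetical
# red-flag features and rank them for audit by predicted risk, removing
# human discretion from the choice of which firm to inspect.
import math

# Hypothetical feature weights; a real system would learn these from
# labelled audit outcomes.
WEIGHTS = {
    "no_physical_address": 2.0,
    "high_input_credit_ratio": 1.5,
    "newly_registered": 0.8,
}
BIAS = -2.5

def bogus_probability(firm: dict) -> float:
    """Logistic score: probability the firm is bogus under the toy weights."""
    z = BIAS + sum(WEIGHTS[k] * firm.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

def audit_queue(firms: dict[str, dict], top_n: int = 2) -> list[str]:
    """Rank firms by risk; auditors work down the list instead of choosing."""
    ranked = sorted(firms, key=lambda f: bogus_probability(firms[f]), reverse=True)
    return ranked[:top_n]

firms = {
    "firm_a": {"no_physical_address": 1, "high_input_credit_ratio": 1},
    "firm_b": {"newly_registered": 1},
    "firm_c": {},
}
print(audit_queue(firms))  # highest-risk firms first
```

Even a crude ranking like this makes the trade-off visible: better targeting at very low cost, but no room for a human to pick a different firm to raid.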


Demand‑driven design that frees frontline workers’ time is crucial for adoption

Explanation

He argues that AI solutions should be built around clear demand signals and should free up staff time to ensure uptake and impact.


Evidence

“The second thing that is really important here was that this is a demand‑driven thing, right?” [186]. “But most importantly, there was a demand by the school districts to show progress.” [187]. “Free up time.” [188].


Major discussion point

Implementation challenges, trust, and evaluation


Topics

Artificial intelligence | Capacity development | Social and economic development


Shift of innovative resources to large incumbents and industry migration from academia

Explanation

He highlights a massive reallocation of talent and innovation from academic settings to large incumbent firms, raising concerns about concentration.


Evidence

“The more worrying part about this, which brings me back to the foundational model side of things, is that this created a massive out‑migration from academia to industry.” [110]. “A massive reallocation of innovative resources.” [109].


Major discussion point

Small AI vs. foundational AI and market concentration


Topics

Artificial intelligence | The digital economy | Capacity development


Concentration could limit diffusion of AI gains to poorer populations

Explanation

He points out that increasing market concentration may prevent AI benefits from reaching low‑income groups and regions.


Evidence

“In low‑ and middle‑income countries, they don’t have access to that.” [196]. “The poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with…” [195].


Major discussion point

Risks of AI widening inequality and labor market disruption


Topics

Artificial intelligence | Social and economic development | The digital economy



Jeanette Rodrigues

Speech speed

174 words per minute

Speech length

1039 words

Speech time

356 seconds

Policymakers need to keep AI‑enabled interventions in mind

Explanation

Jeanette asks what policymakers should prioritize when designing AI‑enabled programs, emphasizing the need for clear guidance and focus on impact.


Evidence

“My question to you is that what should policymakers keep in mind when designing AI‑enabled interventions, especially when it comes to small AI and the targeted use cases?” [61]. “What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI?” [170].


Major discussion point

Policy, regulation, and AI sovereignty


Topics

The enabling environment for digital development | Artificial intelligence | Policy design


Agreements

Agreement points

AI has transformative potential for healthcare and education sectors

Speakers

– Johannes Zutt
– Ufuk Akcigit
– Anu Bradford

Arguments

AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services they provide, and it helps very diverse groups of people across many different sectors of the economy


Healthcare and education will see dramatic improvements through AI applications


I’m personally very excited, especially what happens in the education space but also in the health space


Summary

All speakers agree that healthcare and education represent the most promising sectors for AI transformation, with potential for significant positive outcomes


Topics

Artificial intelligence | Social and economic development


Market concentration in AI is a significant concern requiring attention

Speakers

– Ufuk Akcigit
– Johannes Zutt

Arguments

Market concentration has been increasing since 1980, accelerating after 2000, with innovative resources shifting to large incumbent firms


I think the concentration, the future of market concentration is something that we should be concerned about and we might regret not having discussed this sufficiently in 10 years


Summary

Both speakers express concern about increasing market concentration in AI development and its potential negative implications for competition and innovation


Topics

Artificial intelligence | The digital economy | The enabling environment for digital development


Public sector adoption challenges pose risks to equitable AI access

Speakers

– Michael Kremer
– Iqbal Dhaliwal

Arguments

Government systems and workers may not adapt to use AI technologies, limiting access for the poor


Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems


Summary

Both speakers identify significant challenges in public sector AI adoption, including resistance to change and failure to adapt systems, which could prevent the poor from accessing AI benefits


Topics

Artificial intelligence | Capacity development | Social and economic development


Small AI and locally relevant applications are crucial for developing countries

Speakers

– Johannes Zutt
– Michael Kremer

Arguments

Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure


Private firms develop profitable applications, but public goods applications need government and multilateral support


Summary

Both speakers emphasize the importance of practical, locally-relevant AI solutions that can work within the constraints of developing country infrastructure and address specific local needs


Topics

Artificial intelligence | Information and communication technologies for development | Closing all digital divides


Similar viewpoints

AI presents significant opportunities for development if implemented thoughtfully with appropriate support systems and policy frameworks

Speakers

– Johannes Zutt
– Michael Kremer
– Iqbal Dhaliwal

Arguments

AI offers opportunities to leapfrog development challenges, with 15-16% of jobs in South Asia showing strong complementarity with AI


AI has potential to substantially narrow development gaps if appropriate policy actions are taken


AI applications should free up time for frontline workers rather than adding to their burden


Topics

Artificial intelligence | Social and economic development | Information and communication technologies for development


Structural factors like market access, capital availability, and talent retention are more important for innovation than regulatory constraints

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Successful AI implementation requires addressing institutional and governance challenges, not just technical capabilities

Speakers

– Iqbal Dhaliwal
– Johannes Zutt

Arguments

Technology deployment requires addressing power dynamics and institutional resistance to change


Governance and regulatory safeguards are critical challenges, especially for developing countries


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Unexpected consensus

Labor market disruption as primary AI risk

Speakers

– Ufuk Akcigit
– Johannes Zutt

Arguments

Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development


AI also creates a number of challenges. One of them is that there will be some job losses, particularly entry-level jobs that are knowledge- or document-based


Explanation

Despite their different backgrounds (academic economist vs. World Bank practitioner), both speakers converge on labor displacement as the most significant risk, particularly for developing countries where entry-level jobs represent crucial economic opportunities


Topics

Artificial intelligence | The digital economy | Social and economic development


Need for evidence-based AI evaluation

Speakers

– Michael Kremer
– Iqbal Dhaliwal

Arguments

First, model evaluation, which AI companies typically do: how good is the model output for specific tasks, such as forecasting the weather, and does it work well in the local language? Second, user impact


I think what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for


Explanation

Both the Nobel laureate economist and the development practitioner strongly emphasize rigorous evaluation methodologies, showing unexpected alignment between academic and field perspectives on the importance of evidence-based assessment


Topics

Artificial intelligence | Monitoring and measurement | Social and economic development


Human capability preservation in AI era

Speakers

– Anu Bradford
– Iqbal Dhaliwal

Arguments

Risk of humans becoming overly dependent on AI and losing critical thinking capabilities


Demand-driven AI solutions that address real needs of users, teachers, and institutions are most successful


Explanation

The legal scholar and development practitioner unexpectedly converge on concerns about maintaining human agency and capabilities, emphasizing that AI should enhance rather than replace human thinking and decision-making


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Overall assessment

Summary

The speakers demonstrate strong consensus on AI’s transformative potential for healthcare and education, the importance of addressing market concentration concerns, challenges in public sector adoption, and the need for locally-relevant AI solutions. There is also significant agreement on the importance of evidence-based evaluation and addressing institutional barriers to implementation.


Consensus level

High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) suggests robust foundation for policy development. The alignment between theoretical concerns and practical implementation challenges indicates that policy frameworks addressing these shared concerns could gain broad support across different stakeholder communities.


Differences

Different viewpoints

Speed of AI adoption and labor market adaptation

Speakers

– Ufuk Akcigit
– Iqbal Dhaliwal

Arguments

Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development


AI applications should free up time for frontline workers rather than adding to their burden


Summary

Akcigit is deeply concerned about rapid AI adoption displacing workers faster than the labor market can adapt, particularly entry-level coding jobs that built India’s tech hubs. Dhaliwal focuses on designing AI to complement rather than replace workers, emphasizing applications that free up time for higher-value tasks.


Topics

Artificial intelligence | The digital economy | Social and economic development


Public sector AI adoption capability

Speakers

– Michael Kremer
– Ufuk Akcigit

Arguments

Government systems and workers may not adapt to use AI technologies, limiting access for the poor


Public sector productivity will improve significantly through AI adoption in government services


Summary

Kremer is pessimistic about public sector adaptation to AI, viewing it as a major risk that could exclude the poor from AI benefits. Akcigit is optimistic about government AI adoption, predicting dramatic improvements in service delivery and response times.


Topics

Artificial intelligence | Social and economic development | Capacity development


Primary AI risks for humanity

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Risk of humans becoming overly dependent on AI and losing critical thinking capabilities


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Summary

Bradford focuses on the risk of human intellectual degradation from AI dependency, while Akcigit is concerned about structural changes in the innovation ecosystem, particularly the brain drain from academia to industry affecting open science.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development


Unexpected differences

Regulation versus innovation trade-off

Speakers

– Anu Bradford
– Implicit assumption by others

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


Explanation

Bradford’s strong rejection of the regulation-innovation trade-off is unexpected given the common assumption that regulation stifles innovation. Her detailed evidence about Europe’s structural issues rather than regulatory burden challenges conventional wisdom about AI governance.


Topics

Artificial intelligence | The enabling environment for digital development | The digital economy


Trust in AI technology implementation

Speakers

– Iqbal Dhaliwal
– Other speakers

Arguments

Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems


Explanation

While other speakers focus on technical capabilities and policy frameworks, Dhaliwal’s emphasis on trust and human factors as primary barriers to AI success is unexpected. His examples of doctors not using superior AI diagnostic tools due to trust issues reveals a different dimension of implementation challenges.


Topics

Artificial intelligence | Capacity development | Social and economic development


Overall assessment

Summary

The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on implementation approaches, risk priorities, and institutional capabilities. Key tensions exist between optimistic and cautious views of public sector adaptation, different prioritization of risks (labor displacement vs. human dependency vs. market concentration), and varying emphasis on technical solutions versus institutional reform.


Disagreement level

Moderate disagreement with high implications – while speakers share common goals of harnessing AI for development, their different approaches to risk management, implementation strategies, and institutional capabilities could lead to very different policy recommendations and outcomes for developing countries.


Partial agreements

All speakers agree AI has tremendous potential for developing countries, but disagree on implementation approaches. Zutt emphasizes small AI solutions, Kremer focuses on public goods applications needing government support, while Akcigit stresses the need to fix underlying business environments first.

Speakers

– Johannes Zutt
– Michael Kremer
– Ufuk Akcigit

Arguments

Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure


Private firms develop profitable applications, but public goods applications need government and multilateral support


AI creates fantastic opportunities for developing countries but requires fixing underlying business environment issues


Topics

Artificial intelligence | Social and economic development | The enabling environment for digital development


Both speakers recognize the importance of governance and institutional factors in AI implementation, but Bradford focuses on regulatory sovereignty challenges while Dhaliwal emphasizes power dynamics and resistance within existing institutions.

Speakers

– Anu Bradford
– Iqbal Dhaliwal

Arguments

Global South has incentive for AI sovereignty but regulating AI is difficult even for established bureaucracies


Technology deployment requires addressing power dynamics and institutional resistance to change


Topics

Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society


Similar viewpoints

AI presents significant opportunities for development if implemented thoughtfully with appropriate support systems and policy frameworks

Speakers

– Johannes Zutt
– Michael Kremer
– Iqbal Dhaliwal

Arguments

AI offers opportunities to leapfrog development challenges, with 15-16% of jobs in South Asia showing strong complementarity with AI


AI has potential to substantially narrow development gaps if appropriate policy actions are taken


AI applications should free up time for frontline workers rather than adding to their burden


Topics

Artificial intelligence | Social and economic development | Information and communication technologies for development


Structural factors like market access, capital availability, and talent retention are more important for innovation than regulatory constraints

Speakers

– Anu Bradford
– Ufuk Akcigit

Arguments

Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation


There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Successful AI implementation requires addressing institutional and governance challenges, not just technical capabilities

Speakers

– Iqbal Dhaliwal
– Johannes Zutt

Arguments

Technology deployment requires addressing power dynamics and institutional resistance to change


Governance and regulatory safeguards are critical challenges, especially for developing countries


Topics

Artificial intelligence | The enabling environment for digital development | Capacity development


Takeaways

Key takeaways

AI offers significant potential for developing countries to leapfrog development challenges, particularly through ‘small AI’ applications that are practical, affordable, and locally relevant


Success requires addressing foundational issues like infrastructure, digital literacy, and business environment rather than just deploying technology


Market concentration in AI’s foundational layer poses risks, with innovative resources increasingly shifting to large incumbent firms and away from open science


Effective AI implementation depends on demand-driven solutions that free up time for frontline workers and integrate well with existing systems


The choice between innovation and regulation is false – successful AI adoption requires both appropriate governance frameworks and supportive business environments


Public sector applications of AI (weather forecasting, digital identity, traffic management) require government and multilateral support as they won’t attract sufficient private investment


Trust in technology and adaptation of surrounding systems are critical factors that often cause promising AI applications to fail in real-world deployment


Resolutions and action items

World Bank Group to continue focus on ‘small AI’ applications working with governments across Indian states (Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana)


Need for evidence-based innovation funds with tiered funding structure for piloting, testing, and scaling AI applications


Governments and multilateral development banks should invest in AI applications for public goods like weather forecasting and digital identity systems


Requirement for continuous A/B testing and impact evaluation in AI procurement processes


Development of AI regulatory frameworks that balance innovation with rights protection, adapted to local contexts rather than copying templates


Unresolved issues

How to address the fundamental tension between AI-driven job displacement and the need for economic development in emerging markets


Who will ultimately set AI rules for the Global South given concentration of power in US and China


How to prevent the migration of AI talent from academia to industry and maintain open science


How to ensure public sector adoption of AI technologies when government systems resist change


How to balance AI sovereignty aspirations with the reality of global interdependence in AI supply chains


How to address labor market regulations that incentivize AI adoption over human employment


How to maintain human cognitive capabilities while leveraging AI tools in education and decision-making


Suggested compromises

Focus on AI applications that complement rather than replace human workers, particularly in freeing up time for higher-value tasks


Develop regulatory approaches that learn from established frameworks (like EU’s rights-driven approach) while adapting to local priorities and contexts


Balance between foundational AI development and application-layer innovation, recognizing different entry barriers and concentration risks


Create procurement systems that encourage competition while ensuring quality and avoiding vendor lock-in


Pursue AI sovereignty goals while acknowledging interdependence and avoiding counterproductive techno-nationalism


Implement AI solutions gradually with proper training and system adaptation to address trust and adoption challenges


Thought provoking comments

Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was it not up or out? Why was it not very competition friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI will not bring magic unless we understand and fix the business environment in these economies.

Speaker

Ufuk Akcigit


Reason

This comment cuts through the AI hype to address fundamental structural issues. It challenges the assumption that AI will automatically solve development problems and forces the discussion to confront deeper institutional and cultural barriers to economic growth.


Impact

This shifted the conversation from optimistic AI use cases to a more sobering examination of underlying constraints. It prompted Jeanette to pivot toward ‘real world’ considerations and influenced subsequent speakers to address implementation challenges rather than just technological possibilities.


I really would like to debunk this myth — to me it is a false choice. The reason we don’t see these large language models being developed in Europe is not that there’s a GDPR… It’s not that there is an AI Act. The reason there is a perceived innovation gap between the United States and Europe comes down to… four things: no digital single market; no deep, robust capital markets union (5% of global venture capital vs. 50% in the US); legal frameworks and cultural attitudes to risk-taking; and success in harnessing global talent.

Speaker

Anu Bradford


Reason

This systematically dismantles a widely held belief about the regulation-innovation tradeoff, providing concrete evidence that structural economic factors, not regulation, drive innovation gaps. It reframes the entire debate about how developing countries should approach AI governance.


Impact

This fundamentally changed the framing of the regulation vs innovation debate. It gave policymakers permission to think about protective regulation without fearing innovation loss, and shifted focus to the real drivers of technological development – capital markets, talent, and risk culture.


Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few technologies do that: free up time. So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner.

Speaker

Iqbal Dhaliwal


Reason

This provides a practical, field-tested criterion for evaluating AI interventions that cuts through technological complexity to focus on human impact. It offers a simple but powerful framework for policymakers to assess AI projects.


Impact

This introduced a concrete evaluation framework that other panelists could build upon. It grounded the abstract discussion in practical implementation reality and provided a memorable heuristic for the audience to apply in their own contexts.


When people are moving to industry from academia, their publication record goes down by 50%. They start patenting by 600% more after they move, which means that we are moving from open science to more protected science. Now, spillover is extremely important for creative destruction, for the future of innovation.

Speaker

Ufuk Akcigit


Reason

This reveals a hidden but critical consequence of AI development – the shift from open knowledge sharing to proprietary research. It connects talent migration to long-term innovation capacity in a way that’s not immediately obvious but has profound implications.


Impact

This introduced a completely new dimension to the discussion about AI’s impact on innovation ecosystems. It elevated the conversation from immediate applications to systemic effects on knowledge production and sharing, influencing how other panelists thought about long-term consequences.


I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models… we will just make a tremendous mistake if we just forwent that hard work, that beautiful moment of thinking through hard problems and creating and investing in our own capabilities.

Speaker

Anu Bradford


Reason

This shifts focus from AI becoming too powerful to humans becoming too dependent, introducing a philosophical dimension about human agency and capability development that’s often overlooked in technical discussions.


Impact

This comment introduced a deeply humanistic perspective that balanced the technical and economic focus of the discussion. It prompted reflection on education and human development strategies, adding emotional resonance to the policy considerations.


An entry-level coding job might be an entry-level job in the United States. It’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to be running out of jobs very quickly… we are giving incentives to people to use AI, and we are taxing them through provident fund and labor market regulations to hire labor.

Speaker

Ufuk Akcigit


Reason

This powerfully illustrates how AI’s impact varies dramatically by economic context – what’s entry-level displacement in one country represents the destruction of an entire economic development model in another. It also connects AI adoption to specific policy contradictions.


Impact

This comment brought urgent specificity to abstract discussions about job displacement, making the stakes tangible for the Indian audience. It connected AI policy to broader economic development strategy and highlighted policy inconsistencies that needed immediate attention.


Overall assessment

These key comments fundamentally shaped the discussion by consistently challenging surface-level optimism about AI and forcing deeper examination of structural, institutional, and human factors. Ufuk Akcigit’s interventions were particularly influential in grounding the conversation in economic realities and long-term systemic effects. Anu Bradford’s contributions reframed conventional wisdom about regulation and introduced philosophical dimensions about human agency. Iqbal Dhaliwal provided practical frameworks that made abstract concepts actionable. Together, these comments transformed what could have been a typical ‘AI will solve everything’ discussion into a nuanced examination of how technology interacts with existing power structures, institutions, and human capabilities. The conversation evolved from optimistic use cases to structural constraints, from technical possibilities to implementation realities, and from immediate benefits to long-term systemic risks – creating a much more sophisticated and policy-relevant dialogue.


Follow-up questions

What will happen to creative destruction in the future with AI, particularly in the foundational layer versus application layer?

Speaker

Ufuk Akcigit


Explanation

This is critical for understanding long-term economic impacts and market concentration risks in AI development


Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies?

Speaker

Ufuk Akcigit


Explanation

Understanding underlying structural issues is essential before AI can effectively transform business environments in developing countries


How can we ensure the foundational layer of AI remains contestable and doesn’t become overly concentrated?

Speaker

Ufuk Akcigit


Explanation

Market concentration in foundational AI could limit innovation and competition, affecting downstream applications


How can we keep universities healthy in the AI ecosystem to maintain open science and spillovers?

Speaker

Ufuk Akcigit


Explanation

The migration of AI talent from academia to industry is reducing open science and could harm future innovation


How can we design procurement systems to ensure sufficient competition and avoid lock-in with AI technologies?

Speaker

Michael Kremer


Explanation

Poor procurement could lead to monopolistic situations and reduced quality in AI services for governments


How can we adapt government systems and workers to effectively use AI technologies?

Speaker

Michael Kremer


Explanation

Government adoption challenges could prevent the poor from accessing AI benefits in public services


How can we train healthcare workers and other professionals to effectively use AI diagnostic tools?

Speaker

Iqbal Dhaliwal


Explanation

Studies show that even superior AI tools can reduce efficiency if users aren’t properly trained to trust and use them


How can we adapt existing power structures and systems to accommodate AI-driven decision making?

Speaker

Iqbal Dhaliwal


Explanation

Resistance to scaling AI solutions often stems from concerns about losing human discretion and power


How can we reform labor market regulations to balance AI adoption with employment protection?

Speaker

Ufuk Akcigit


Explanation

Current regulations may incentivize AI adoption over human hiring, potentially accelerating job displacement


How can we develop robust governance frameworks to prevent abuses in AI-powered poverty targeting?

Speaker

Johannes Zutt


Explanation

While AI could enable precise poverty interventions, inadequate governance could lead to misuse or discrimination


How can we ensure humans don’t become overly dependent on AI and lose critical thinking capabilities?

Speaker

Anu Bradford


Explanation

There’s a systemic risk that outsourcing thinking to AI could diminish human cognitive abilities and creativity


What are the early indicators of market concentration in AI and how should we monitor them?

Speaker

Ufuk Akcigit


Explanation

Understanding concentration trends is crucial for policy interventions before market dominance becomes entrenched


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Harnessing Collective AI for India’s Social and Economic Development


Session at a glance – summary, keypoints, and speakers overview

Summary

The panel opened by likening the debate on AI for the collective good to an “Avengers” narrative, assigning each speaker a superhero persona to highlight diverse viewpoints on technology’s societal role and asking whether AI will become an ally or a destructive “snap.” [1][13-15]


Professor Seth argued that AI should shift from answering isolated queries to coordinating whole populations during events such as floods or tax filing, turning coordination itself into a form of intelligence; he emphasized that this requires new technologies, cross-sector partnerships, and proactive policy guidance rather than leaving development to market forces. [25-31][32-33]


Professor Nirav described many societal challenges as socio-technical multi-agent problems, noting that individual optimization often yields local maxima that fail to maximize social welfare; he cited ride-sharing and epidemic prevention as domains where a global optimum would better serve collective needs. [38-55][47-56]


Professor Manjunath explained that recommendation systems act as learning agents that continuously nudge users by shaping the utility functions they optimize, thereby altering preferences at scale; he pointed to the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact and argued that these systems function as powerful advertisements that make repeated exposure highly persuasive. [77-84][90-99][94-99]


Antaraa illustrated AI’s governance role through a Maharashtra project that gathered 380,000 citizen inputs via a chatbot and made this feedback mandatory for future law-making, showing how AI can amplify citizen voices while requiring transparent design to ensure equity; Kushe added that the greatest sustainable value of AI lies in personalized services that generate new revenue rather than in simple cost-saving replacements, and the panel agreed that public education about AI is more effective than trying to block malicious use. [121-130][288-290][185-204][248-251] They concluded that if AI is built to enhance connectivity, give citizens a genuine voice, and be governed with transparency, tangible everyday improvements could be felt within five years. [301-311][312-321]


Keypoints

Major discussion points


AI as a coordination tool for whole populations, not just individual assistants – Seth argues that future AI should help coordinate large groups (e.g., flood victims, taxpayers) and that this requires new technologies, partnerships, and a shift away from “AI-for-profit” pathways [24-32]. He later stresses that the biggest risk is widespread use without public understanding, not malicious intent [248-251].


Multi-agent and socio-technical systems as a framework for solving collective problems – Nirav explains that many social challenges (ride-sharing, pandemics, etc.) are inherently socio-technical and can be modeled as interacting agents, allowing a move from local to global optima and better social welfare [36-55].


Recommendation systems and algorithmic nudging shape preferences and can amplify bias – Manjunath describes how learning agents infer user utility functions, subtly steer choices, and can dramatically alter preferences over time, effectively acting as powerful advertisements [77-96].


AI in governance can both empower citizens and reinforce institutional power – Antaraa details a large-scale citizen-feedback chatbot used by Maharashtra, showing how AI can amplify voices when designed transparently [121-130]; she later argues that AI should shift power toward citizens by reducing information asymmetry [237]. Manjunath counters that institutions, with their resources, are more likely to capture AI benefits [294-297].


Impact of AI on work: replacement vs. reshaping and value creation – Kushe highlights that simple task automation often fails to sustain cost savings, whereas AI that enables uniquely human-scale personalization unlocks far greater value (e.g., revenue uplift) [185-202]. In the rapid-fire segment he predicts AI will primarily reshape jobs rather than merely replace them [257].


Overall purpose / goal of the discussion


The panel, framed through an “Avengers” metaphor, aimed to explore how AI can be harnessed for the collective good – by improving coordination, fairness, and citizen participation – while identifying technical, ethical, and governance challenges that must be addressed to prevent harm and ensure equitable outcomes [13-15][20-22].


Overall tone and its evolution


Opening (0:00-4:00): Playful, optimistic, and metaphor-rich, setting a collaborative mood.


Middle (4:00-22:00): Shifts to analytical and cautionary as experts present technical concepts (population-level AI, multi-agent models) and raise concerns about algorithmic nudging, government over-reach, and unintended resource consumption [58-68][133-160].


Rapid-fire & audience Q&A (22:00-45:00): Becomes pragmatic and solution-focused, with concise answers, concrete examples (Maharashtra chatbot, job-impact figures), and a mix of optimism about new value creation and realism about regulatory gaps [121-130][185-202][237][294-297].


Closing (45:00-53:00): Returns to a grateful, hopeful tone, thanking participants and emphasizing the need for continued collaboration [501-506].


Thus, the conversation moves from an enthusiastic framing to a nuanced, sometimes uneasy examination of AI’s societal role, ending on a constructive, forward-looking note.


Speakers

Janhavi – Moderator of the panel; serves as the voice asking the questions.


Professor Seth Bullock – Professor; expertise in collective AI, coordination, societal systems, and shared values [S3][S4].


Professor Manjunath – Professor; focuses on recommendation systems, algorithmic bias, and AI ethics [S5][S6].


Professor Nirav Ajmeri – Professor at the University of Bristol; specializes in multi-agent systems and socio-technical networks [S21].


Antaraa Vasudev – Founder/Leader at Civis (NGO); works on civic technology, AI for citizen engagement and governance [S13][S14].


Kushe Bahl – Senior leader (Partner) at McKinsey; leads the McKinsey Digital and McKinsey Analytics practices in India; expertise in AI implementation, consulting, and scaling AI for business [S28][S29].


Audience Member 1 – Founder of Corral Inc. [S10].


Audience Member 2 – Participant from Germany (group affiliation). [S25].


Audience Member 3 – Audience participant (no specific role mentioned). [S1].


Audience Member 4 – Intellectual property and business lawyer. [S23].


Audience Member 5 – Audience participant (no specific role mentioned). [S7].


Speaker 3 – Unspecified speaker (role/title not provided). [S15].


Additional speakers:


(None)


Full session report – comprehensive analysis and detailed insights

Opening & framing – The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good, and the moderator asked whether AI would become an ally or the “great snap” that could threaten society [1][13-15].


Population-scale AI – Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population-scale coordination (i.e., coordinating whole groups of people rather than individual queries). He described intelligence as the ability to orchestrate whole communities – flood victims, patients with a common disease, or taxpayers – through shared knowledge and coordinated action [24-33]. To realise this, he called for new technologies, delivery models, and cross-sector partnerships among researchers, private firms, non-profits and governments, warning that reliance on “path of least resistance” commercial tools would be insufficient [31-33].


Multi-agent socio-technical systems – Professor Nirav Ajmeri framed many societal challenges as socio-technical multi-agent systems, explaining that intelligence emerges from the interaction of human and technical agents and that optimisation for individual users often yields local maxima that do not serve overall social welfare [36-55]. Using ride-sharing and pandemic-prevention examples, he showed how a global optimum derived from multi-agent modelling could improve collective outcomes and fairness [47-56].


Recommendation systems & nudging – Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions and continuously nudge preferences. He noted that the utility functions are set by platform owners, not users, allowing platforms to reshape tastes over time and act as powerful, personalised advertisements [77-84][94-99]. He cited the Facebook scandal documented in Sarah Wynn-Williams’s book as evidence of large-scale societal impact when recommendation engines “go berserk” [90-92].


AI-enabled governance example – Antaraa Vasudev presented a concrete example from Maharashtra, where a simple chatbot collected 380,000 citizen inputs (voice notes, text, drawings) and fed them into the policy-making pipeline, making citizen feedback a mandatory consideration for future laws [121-130]. She stressed that such systems must be transparent, accessible and equitable, and argued that AI can reduce information asymmetry to close power gaps [109-115][237]. Later she expanded the vision, noting that disaggregation of civic-tech platforms can enable decentralized control and broader citizen participation [380-386].


Rapid-fire exchange – In a brief rapid-fire segment, Antaraa asserted that AI shifts power toward citizens by amplifying their voices [260-262], while Professor Manjunath countered that institutions with greater resources are more likely to capture AI benefits, making it difficult for citizens to compete [280-283]. He also warned that algorithms can hide bias, a point raised during the same exchange [280-283]. Professor Bullock warned about the next wave of agentic AI, describing purposive agents that communicate with each other and could generate cascades of resource consumption from trivial requests (e.g., a picture of a dog on a skateboard), disadvantaging other users unless social responsibility is embedded in their design [58-68].


Role of government – Professor Manjunath critiqued heavy-handed state direction, citing India’s C-DOT project and Japan’s Fifth-Generation computing initiative as examples where governments, as generalists, failed to keep pace with rapid technological change [139-152][155-162]. He advocated an enabling and monitoring stance rather than micromanagement, a view echoed by Antaraa’s call for transparent frameworks and by an audience member who cited recent bans on social-media use for minors in Spain and Australia as useful early guardrails [109-115][409-416].


Employment impact – Kushe Bahl distinguished between simple task replacement and value-creating personalization. He argued that replacing routine tasks rarely yields sustainable savings, whereas AI-driven personalised services – such as recommendation engines that boost revenue by up to ten percent – unlock far greater economic value and reshape rather than merely replace jobs [185-202][257].


Education concerns – The audience’s rapid-fire reactions (excitement, anxiety, FOMO, etc.) were tallied by the moderator, highlighting mixed emotions about AI’s role in learning [320-327]. Bahl warned that AI-generated content, while correct, often lacks “soul” and inspiration, making it unsuitable for deep learning [375-378]. Manjunath shared a classroom example where a student used ChatGPT to fabricate data, illustrating how instant AI feedback can bypass step-by-step learning and undermine understanding [490-496].


Intellectual-property & consent – Professor Bullock noted that generative models are already trained on copyrighted material without consent and that legal battles over musicians’ and artists’ rights are just beginning [470-478]. He proposed the development of consent-based data ecosystems in which participants voluntarily share information for collective benefit [476-478].


Regulatory experiments – An unnamed speaker highlighted early regulatory experiments restricting AI-enabled platforms for children, arguing that such steps, though imperfect, signal accountability and may influence industry behaviour [409-416]. Manjunath reinforced the need for agile, enabling regulation rather than rigid micromanagement, noting the difficulty of imposing guardrails on fast-moving technology [139-152][155-162].


Audience questions – When asked about AI’s impact on young minds, Professor Seth Bullock responded that education systems must adapt to foster critical thinking alongside AI tools [340-345]; Kushe Bahl added that over-reliance on AI can erode foundational skills [350-354]. A question on regulation of AI in education elicited Manjunath’s answer that standards should be flexible, outcome-oriented, and regularly updated [470-476].


Closing visions – Professor Bullock envisioned AI delivering a greater sense of connection by breaking down language, expertise and distance barriers, enabling richer citizen-government interactions that go beyond simple voting [301-311]. Kushe Bahl offered a concrete “unicorn-scale” impact scenario: if AI could raise the earnings of India’s 150 million self-employed workers by just ₹600 each, the aggregate effect would be transformative [312-321]. Antaraa reiterated that disaggregated, transparent AI systems can broaden access to governance, while Professor Ajmeri highlighted the potential for collective decision-making at scale, and Professor Manjunath warned that the quality of AI-generated output must be critically assessed [380-386][470-476].


Session transcript – complete transcript of the session
Moderator

sci-fi movies that we grew up watching and what it primarily also reminds me of is in specific terms the avengers right the avengers are the superheroes and they’re trying to you know save the world and decide how one can do that and they all have very different strengths so i was wondering that if all our panelists were superheroes who would they be introducing our panelists i have our first avenger captain america principled steady under pressure obsessed with doing the right thing even when it’s unpopular professor seth is exactly that and reminds me of the lens that he brings in he studies how societies hold together how coordination succeeds or fails and why systems need shared values as much as intelligence next we have spider-man spider-man’s strength isn’t brute force it’s his ability to navigate through complex webs adapt quickly, and see connections that others miss.

Professor Nirav thinks the same way. At the University of Bristol, his work focuses on multi-agent systems because societies, like Spider-Man, are all about networks. Antaraa Vasudev reminds me of Captain Marvel, operating at scale, moving across institutions, pushing boundaries. Through her NGO, Civis, she uses AI to amplify citizen voices and reshape how power flows between governments and people. And of course we have Iron Man, Iron Man who is obsessed with execution, iteration, and making ideas work in the real world. Mr. Bahl is our Iron Man, focused on execution, scale, and impact in the real economy. He leads the McKinsey Digital and McKinsey Analytics practices in India. Last but not the least, no team is complete without Bruce Banner.

Deeply aware of the challenges that we face, of AI’s raw power and focused on how to control it before it controls us, Professor Manjunath’s work reminds us that intelligence at scale can cause damage if we don’t fully understand its consequences. My name is Janhavi and today I’m embodying Jarvis, except for being the one answering the questions, I’m the voice asking them. Every Avengers story has a Thanos. The real question is whether AI becomes our ally or the great snap that we didn’t see coming. So when we talk about AI for collective good, we’re not just talking about smarter apps, we’re talking about systems that influence how people live, work and participate in society. Before we start, I would request all my panelists to just stand up for a quick photo op.

So, a quick show of hands from the audience. How many of you feel that technology today is only with those who have power or resources or information, that technology has been reserved for the elite few? Do we have a show of hands in the house by any chance? Okay, clearly we don't really have an opinion as such over here. But moving on. Professor Seth, when we look at society, you know, governments, markets, or online platforms, we often assume that problems exist because we don't have enough intelligence or data. Your work suggests something a little bit deeper: that perhaps failures come from how decisions interact at scale. From a systems perspective, do you think our biggest societal problems are intelligence problems or coordination problems?

Professor Seth Bullock

Thanks a lot. So it's great to be here in India. I think this topic is extremely relevant to both the UK, where I'm working, and India. And I think the answer is that coordination is intelligence in this situation that we're interested in. So I guess we're used to situations now where we interact with an AI as an individual: one person asks the AI a question and gets one answer. But really there's the potential for us to develop AI systems that are designed to support a whole population at once. A population of people that are affected by a flood, a population of people that are all coping with the same disease or medical condition, a population of people that are all trying to get taxis to and from a summit.

So instead of AI answering individual questions, AI can help coordinate those people, share intelligence, share their knowledge, and achieve better outcomes. And I think that's quite a different way of framing AI than many of the systems that we're hearing about, and it requires different technologies, different ways of delivering that to people, different ways of engaging with populations. So I think that's something that can only really be achieved by partnerships between researchers and companies and not-for-profit organizations and governments, and it probably requires interventions in the way that we promote AI, rather than letting the path of least resistance develop commercial AI tools. I think there are opportunities to really engage with the idea of making AI for populations.

Moderator

Wonderful. Professor Nirav, you're also from the University of Bristol, and your work focuses on multi-agent systems, where basically intelligence emerges from all these entities interacting with one another. What kind of social problems are best suited for these multi-agent approaches?

Professor Nirav Ajmeri

Thanks, Janhavi. Good question. And I think Seth already partly answered what multi-agent systems could do. So all the problems that we're thinking about here, if you really understand them, are socio-technical in nature. There are social entities, including people and organizations, which interact. All of us also use technical tools: these could be intelligent agents, these could be applications, software that we use. And all of these combined together help us. So all of these problems, or all of these domains, are socio-technical in nature, and multi-agent systems can inherently encapsulate socio-technical systems. So that is how I would look at it. If you're talking about, say, ride sharing, for instance, or hailing a ride, the current system could be optimizing only for me, right?

And then what we end up with could be a local maximum. If we are optimizing for each one of us, we are finding a local optimum for each of us, but we may not be finding the global optimum. And the global optimum would map to social welfare. What does social welfare mean? Does it mean just maximizing experience for everybody? Or do we mean a satisfactory experience? So I think any problem that we think about, say epidemic or pandemic prevention, making sure that resources are allocated properly, all of that would be multi-agent in nature.
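The local-versus-global distinction can be made concrete with a toy sketch. The ride-hailing framing comes from the discussion above, but the wait-time numbers and both assignment procedures below are my own illustration, not anything the panel specified:

```python
# Hypothetical illustration: three riders, three drivers, wait[r][d] = minutes
# rider r waits if matched with driver d. Optimising rider-by-rider (each
# rider's "local optimum") need not minimise total wait (the "global optimum",
# a stand-in here for social welfare).
from itertools import permutations

wait = [
    [2, 9, 6],
    [3, 8, 5],
    [4, 7, 1],
]

def greedy_total(wait):
    """Each rider in turn grabs the best driver still available."""
    taken, total = set(), 0
    for row in wait:
        best = min((d for d in range(len(row)) if d not in taken),
                   key=lambda d: row[d])
        taken.add(best)
        total += row[best]
    return total

def optimal_total(wait):
    """The assignment that minimises summed wait across all riders."""
    n = len(wait)
    return min(sum(wait[r][perm[r]] for r in range(n))
               for perm in permutations(range(n)))

print(greedy_total(wait))   # 14 minutes of total waiting
print(optimal_total(wait))  # 11 minutes: the welfare-maximising matching
```

Every rider individually did the best they could at their turn, yet the population as a whole waits longer than it needs to; a multi-agent system can optimise the joint outcome instead.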

Moderator

Interesting. Professor Seth, do you have anything that you’d like to add on to that?

Professor Seth Bullock

So, yeah, I think we've heard a little bit from some AI leaders about a next wave of AI that will be agentic, where we won't just be interacting with ChatGPT as a monolith. We will be interacting with an agent that has purposive aims and is helping us to achieve tasks. And it might do that by communicating with other agents. Whenever we interact with AI, we would in fact be interacting with a population of AIs that are sending each other information, that are tasking each other with different jobs to do. And actually, it might not be clear whether one of those agents is artificial or a person. And so, if we enter into that sort of world, I think we have to really understand how those agents are going to behave.

We need those agents to interact with each other in a way that is likely to advantage the community of users, because the amount of resources that will be consumed by these populations of agents, and the potential for them to interact in ways that have unforeseen consequences for other people, are going to ramify. When we do things manually, we can only hold so many interactions with other people at once, so we're limited in scale: one request does not create a cascade of other requests in the system. But as we move to artificial systems, that scaling will rapidly increase, and potentially one trivial request by me, asking a computer to make a picture of a dog riding a skateboard, could create a whole wave of different agentic interactions that consume loads of resource and also, depending on what I've asked for, disadvantage other people.

So embedding some kind of social responsibility into those agents, some appreciation for how their behavior impacts other agents in the system, I think is going to be imperative. Otherwise, we end up with systems that create conflict and contestation for resources.

Moderator

Interesting. Whenever I'm on Instagram or Facebook, let's say I'm talking to my friends about really wanting to buy this Dyson or some particular product, it's always weird to me how the next time I open the app, it's almost like the app has heard me, and I start seeing ads for those exact things, even though I've not searched for them; I've just talked about them to someone. Has anybody here experienced the same thing? A quick show of hands: do you feel that the choices we make, are they really our choices, or are we being nudged by an algorithm somewhere? So, Professor Manjunath, your work focuses so much on recommendation systems, and we often hear that these algorithms are just tools.

But your research suggests that they actively shape what people see, buy, and believe. How much of human behavior today is genuinely chosen by us, and how much is subtly nudged by these algorithms?

Professor Manjunath

Yeah, recommendation systems and the way they shape many of our feelings, attitudes, and habits have been a significant concern for me for a while. One of the things you have to understand about recommendation systems is that they are essentially learning agents. They want to learn your preferences, your likes, your dislikes, and so on. And to do that learning, they do things: they give you options, different kinds of options, and then see how you react. So the first way in which the interaction between you and the learning system happens is this: they show you a variety of things and observe the way you react.

Your reaction is then usually captured in some kind of utility function, something that the algorithm treats as positive for whoever designed it. What exactly that utility function is essentially determines what gets recommended to you in the future and what the system learns about you. Now, there is no such thing as the right utility function, and every organization will figure out what they want for themselves. We have actually built several mathematical models of this, and they show that, depending on the kind of learning algorithm, and I am assuming a benign recommendation system here, even if I start off with a set of preferences, by the end of the day, or over a certain time horizon, my preferences can be dramatically different. So there is a certain nudge that is steadily applied by these algorithms, and the direction of that nudge depends on the kind of algorithms they use and the kind of utility functions they use: what exactly they are trying to optimize for themselves. And if you look at various analyses of many of these, especially Facebook's algorithms, there is a very famous book that came out recently by an insider, Sarah Wynn-Williams, and you can see the impact that had on some sections of society elsewhere when the whole recommendation system went berserk.

So there is definitely a huge impact on the population's preferences from recommendation systems. And to give you a quick way of understanding it: recommendation systems are essentially advertisements, and advertisements definitely shape our preferences; if you see something more often, you will start thinking about it, and so on. The difference, at least in my opinion, between the advertisements that you see on the street and the advertisement served by a recommendation engine is that with the latter you are significantly more receptive. You are looking to do something, and when you are looking for something to do, if the recommendation pushes you in a certain direction, you are naturally going to go there.

So the impact of recommendation systems on the population’s preference, in my opinion, is spectacularly large.
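The point that the direction of the nudge is a property of the utility function can be sketched in a few lines. This toy model, its item names, and its numbers are entirely my own construction, not Professor Manjunath's published models:

```python
# Toy model (my own construction): a user whose taste drifts toward whatever
# is shown ("mere exposure"), served by a recommender with a given utility.
ITEMS = ["news", "sports", "outrage"]                       # hypothetical content types
ENGAGEMENT = {"news": 0.4, "sports": 0.5, "outrage": 0.9}   # platform's payoff per view

def simulate(choose, steps=200, rate=0.02):
    pref = {"news": 0.5, "sports": 0.4, "outrage": 0.1}     # starting taste
    for _ in range(steps):
        shown = choose(pref)
        for item in ITEMS:                                   # exposure nudges taste
            target = 1.0 if item == shown else 0.0
            pref[item] += rate * (target - pref[item])
    return max(pref, key=pref.get)                           # dominant taste at the end

# Utility function A: serve whatever the user currently likes most.
taste_driven = simulate(lambda p: max(ITEMS, key=lambda i: p[i]))
# Utility function B: serve whatever maximises the platform's engagement metric.
engagement_driven = simulate(lambda p: max(ITEMS, key=lambda i: ENGAGEMENT[i]))

print(taste_driven, engagement_driven)  # same user, very different end preferences
```

The same user, starting from the same preferences, ends up dominated by a different taste depending only on what the recommender was optimizing, which is exactly the dependence on the utility function described above.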

Moderator

Wow. That's quite a lot to digest. I really wonder how much of my personality is my own at this point. Antara, from your work in civic engagement: when AI enters governance, is it primarily helping citizens be heard, or is it helping governments manage complexity? And where do citizens struggle the most when technology becomes the interface between them and the government?

Antaraa Vasudev

Thank you for that question. Just want to make sure that everyone can hear me. Thank you. Some problems, like on-stage mics, AI cannot solve. Thank you for that lovely question, and it's lovely being here with all of you today. Janhavi, to your point, I think AI currently is being used in both ways. It's allowing us to engage with citizens who perhaps have little or limited knowledge about law and policy: to help them clarify doubts, to let them air their grievances, and to let them actually understand the frameworks of policy and law that govern their lives.

But in addition to that, it is also being used in a very large way for optimization. In a country of India's size and diversity, there is perhaps no other way to tackle the complexity that governance deals with. What matters is to build strong and robust frameworks for how government can utilize AI, frameworks put out in a manner which is transparent, accessible, and with a certain equity built in, which is really what this panel is also discussing today. And once you have that, these optimization solutions can perhaps be built by AI rather than being citizen-led. So at Civis, we've actually been working on gathering a lot more public feedback on draft laws and policies using AI.

And again, we see optimization at both ends, but we are very, very mindful of the fact that the frameworks that govern that level of optimization are what need to be designed, perhaps even before we race to the next model.

Moderator

Got it. Can you share some examples of the kind of laws that have been impacted, or the kind of work that you've done? Have you worked with different state governments, where citizens of that particular state have been able to engage with the government about a certain law or practice? Thank you.

Antaraa Vasudev

Absolutely. So I'll share one example from recent work with the government of Maharashtra that Civis led. The government of Maharashtra actually undertook a very ambitious mission of trying to understand how the next 22 years of the state can be governed by citizens' voice. Now, this is something which is honestly quite remarkable on their part. What Civis was able to do is that we built out a very easy-to-use chatbot, wherein you could send in a voice note, you could send in text messages, or you could even, we had people send in drawings, letters that they had personally written to the Chief Minister, and other things. Civis aggregated all of that feedback. That was almost 3.8 lakh citizen responses from 37 districts across Maharashtra.

And that was aggregated, sorted through, and then shared with the government as well. The Viksit Maharashtra report, as it's called, is now publicly available; the government of Maharashtra has put it out on their own website as well. But in addition to that, what's been really interesting is that they have said that every law that comes out in the state for the coming years has to, in some way, factor in what citizens are saying about that problem area, or that district for which the law is being made. And you can only do that if you're able to actually engage at scale. And I think that's the beauty of what that entire project showed.

Moderator

Absolutely. Professor, what approach do you feel the government should be taking when it comes to AI and technology?

Professor Manjunath

Yeah. One of the fears that I have when the government gets involved in technology development is that they want to start controlling the direction. They want to dictate what should be done at a very micromanaging kind of level. I recently had an article, on Tuesday I think it was, in the Financial Express, an op-ed that a colleague and I wrote, where we looked at history: at successful and spectacularly unsuccessful involvements of the government when it wanted to direct technology. So I'll just give you two quick examples. In India, about 40 years ago, there was something called C-DOT. It developed some spectacular technology when it was left alone.

Then the government started to direct it and micromanage the flow of technology. Many of you probably don't even know C-DOT; they don't even come to the IIT Bombay campus, for example, for recruitment. That's just one example. If you look at Japan, to give you another spectacularly unsuccessful story, many of you are too young to know about the Fifth Generation Computer Systems project that they wanted to start. The AI boom that we see today was originally planned to be launched in Japan in the 1980s. There was a huge project that the government wanted to micromanage, developing native hardware for AI, and everybody thought they would be successful. It was a spectacular failure. The failure essentially stemmed from the fact that the government was directing everything.

Governments are generalists. People who run governments are generalists. They are brilliant people. They know society. They understand administration. But they don't understand technology, especially a technology that is moving too damn fast and has a very large surface area; they cannot control it. So it is best that they just enable, and let others, the people on the ground, people with a track record and people who want to take risks, manage it. They should be enablers. They should also be monitors, nudging it in a certain direction and making sure bad things don't happen. But that's a very hard task. So the biggest role that the government should have is just to enable and step away.

Just to give you one positive example: the NPCI in India is a spectacular example of where the government started something and let the private sector and the technologists handle it. In the US, many of you may be familiar with the internet; it was exactly that. It was just a vision that somebody had and said, let's build this, and the technology was built. That's the way I would think the government should handle it, but we'll have to see how that goes.

Moderator

So just a quick question for the audience. You guys can shout the answers out loud. What emotions come to your mind when we think about AI? Are we feeling excitement? Are we feeling anxiety? Are we feeling FOMO? What are we feeling, guys? Curiosity. Dangerous, somebody said. What else? Definitely opportunity. Opportunity. The man over there? Confusion. Confusion. Anything else? Responsibility. Responsibility, fantastic. Great. So Mr. Bahl, this question is for you. There's a lot of anxiety, and a little bit of excitement as well, about AI replacing jobs, especially in India's tech and services sector. From your experience working with different companies, where is AI genuinely replacing humans, and where is it actually creating new forms of value and roles?

Kushe Bahl

Yeah, that's a great question. Thank you. Let me try and give you the very brief answer, because I could talk about this for a long time. There is a lot of focus on AI being used to replace humans in particular operations. So, you know, when you have an AI taking a call center call, that's the simplest example of that. And the way the math works is that if you're spending 100 rupees on something, you can save roughly 40% of that by replacing it with AI, with the current economics of the way it works. And obviously, if you're in a high-cost geography, you can save more. Even in a country like India, you can save that much.

In a country like even in India, you can save that much. What we have found, though, is that most of the cases where you do this simple replacement of a human with AI, that’s not the case. cost reduction doesn’t really sustain. There’s a famous example of Klarna in Europe where they brought back a lot of the costs called center costs because they had to bring back some of the senior customer support people because a lot of the conversations were not going well and they were losing customer satisfaction. The same thing with IT, you can replace a lot of developers with this, but then people will come back with more projects and there’ll be more things to be done.

The real value unlock, which is sustaining, is actually when you get AI to do something which humans can't do, or are not able to do because it's so time-consuming and so difficult. For instance, a genuinely personalized customer engagement engine, using the kind of recommendation system that he was talking about, which actually engages in a personalized way with every customer that I have as a company, or every entity that any organization is dealing with. That genuinely has value; it creates a huge value unlock. So, for instance, if I spend 2-3% of my revenue on, say, customer support, even if I save 40% on that, I'm saving like 0.8% or 1%. But if I can generate even just 10% more revenue from existing customers, with hardly any marketing cost, and I make 30-40% margin on that, I'm getting 3-4% more to the bottom line.

So that is huge; it's almost 5x of what you can save. The value unlock is very large, and it's sustainable because you're really getting AI to do something no human being is going to do: sit and figure out, for millions of customers, exactly what kind of personal message to send, because the amount of experimentation you have to do, and the kind of connections you have to draw between individuals and similarities and so on, which the recommendation engines are based on, are impossible to do humanly. And that's where the biggest value unlocks are, at least that I'm seeing, and those are sustainable; they're actually even applicable in high-cost geographies. It's just that, unfortunately, a lot of the initial focus of the innovation has been on the easy stuff, right?

Have an AI agent replace a human agent. But that's not where the real power of AI lies. So hopefully we'll see a lot more of that type of innovation going forward as well.
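The arithmetic behind the comparison above can be checked in a few lines. The 2-3%, 40%, 10%, and 30-40% figures are Mr. Bahl's; the choice of midpoints and the normalisation to 100 revenue units are mine:

```python
revenue = 100.0                       # normalise annual revenue to 100 units
support_spend = 0.025 * revenue       # 2-3% of revenue goes to customer support
saving = 0.40 * support_spend         # AI replacement saves ~40% of that

uplift = 0.10 * revenue               # 10% more revenue from existing customers
margin_gain = 0.35 * uplift           # at a 30-40% margin

print(round(saving, 2))               # 1.0 point of revenue saved
print(round(margin_gain, 2))          # 3.5 points added to the bottom line
print(round(margin_gain / saving, 1)) # 3.5x at midpoints; ~5x at the favourable ends
```

At the favourable ends of the ranges (2% support spend, 40% margin) the ratio reaches the "almost 5x" quoted above.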

Moderator

Right. I think I see a lot of students here today. What kind of backgrounds do you all come from? Hands up if you're from STEM at all? STEM backgrounds? Okay. Anybody from business, humanities, arts? Okay. So I read this LinkedIn post; I'm not sure whether it's a great post or not. Apparently it's going to be a little tough for STEM students to, you know, get into this world of AI, because they could be replaced a lot more easily. What kind of measures should one take, or what kind of businesses or degrees should one come from, to sustain in this world of AI, do you think? What should the next five years look like?

Kushe Bahl

Yeah, I think there is some near-term potential impact on jobs, particularly on entry-level coding jobs and so on. But honestly, nobody knows exactly how the math is going to work out. Between the new work that people do for AI enablement and the old work that may get more efficient because of AI-enabled coding and so on, will we see a net increase or decrease in employment? Nobody actually knows. There are many, many forecasts done by economists much more qualified than me. But what one can see is certainly that enterprise adoption of AI has not really happened yet, so the full impact of all of this has not really landed.

So you're seeing some initial hit, maybe: okay, this year I have promised I'm going to use AI and reduce my budget by a certain amount, so I'll stop hiring. That's the kind of, I would say, almost knee-jerk impact that you're seeing right now. What eventually plays out will be a mix of: okay, I will do the work more efficiently and use a lot more automation, but now I have a lot more things to do as well. So I would say that students in general, forget just STEM, students in general need to be focusing a lot on how they can use AI to do the best possible thing they can do, in their field and in every possible field.

So whether I'm studying marketing, or a science degree, or any form of the humanities, journalism, whoever I am, there are so many things that I can actually be doing with AI, things which were not humanly possible earlier. And that's really what students should be equipping themselves with: potentially innovating and creating things around that, but also personally equipping themselves to actually leverage AI. And I think there are lots of examples of how that can play out and will serve people really well.

Moderator

Absolutely. We are now going to get into a quick rapid fire round and then I want to open up the floor for audience questions. So the only rule here is I want short answers only. No explanations. We only have 10 seconds to answer. So I am going to start off by putting Antara on the spot. Does AI in governance shift power towards citizens or towards institutions today?

Antaraa Vasudev

I want to say citizens because it allows for a lot more information asymmetry to be addressed which is where a lot of the power gaps come up today.

Moderator

Professor Manjunath, are algorithms today more likely to reduce bias or hide bias better?

Professor Manjunath

Hide bias. No, the options don't look right to me. What would you put as the options, then? The bias will start increasing. I don't expect the training to get better; I don't think it will be better in the immediate future, maybe much later. But I also want to disagree with what Antara said.

Moderator

I’ll come back to you for that one. Professor Seth, what worries you more, AI being used with bad intent or AI being used widely without anyone fully understanding its consequences?

Professor Seth Bullock

Well, they're both terrible, aren't they? I think people will always use technologies with bad intent, and that can only really be addressed if a large number of people understand the technology and can then resist it. So I think the second is more important. Uplifting the public's understanding of AI and engagement with AI will protect us against malign uses of AI, because we will be able to spot them.

Moderator

Got it. Professor Nirav, what’s harder to design, ethical individuals or ethical systems?

Professor Nirav Ajmeri

I think that becomes tricky: what do we mean by ethical, right? But if individuals combined together make a system, and we are combining ethical individuals, then ethical individuals.

Moderator

Mr. Bahl, in India, will AI mostly replace jobs, reshape jobs, or polarize jobs?

Kushe Bahl

Reshape.

Moderator

That's a very quick answer. You win the rapid fire round. Right. Professor Seth, where does AI struggle more today, with people or with systems?

Professor Seth Bullock

I mean, I think it struggles with people, but we don't notice, because it mimics natural language so well. When I say AI, I'm talking about something like ChatGPT. So I think there's a disguised problem with people there, because those AIs don't really mean what they say, they don't really understand what they say, but it seems very strongly that they do. So I think that's the problem. But what's coming is AI embedded in all of our systems, and then that will create its own set of problems as well.

Moderator

Mr. Bahl, who benefits more from AI today, companies or employees?

Kushe Bahl

I would say that right now, no one is benefiting from AI. But if I were to bet, it will be companies who will benefit first. And then employees will benefit. And the whole idea of having sessions like this is that we can get the employees to learn what we talked about, right? Students equipping themselves right from college. Absolutely.

Moderator

Antara, for AI used in public systems, what matters more, transparency or effectiveness?

Antaraa Vasudev

Transparency, off the bat. It’s the only way that we can actually design AI for public systems. It has to be at the front and center of all of our efforts.

Moderator

Got it. Before we get into the last question for the entire panel, I do want to get your answer to Antara's statement, if that's fine. The question that I had asked: does AI in governance shift power to citizens or to institutions?

Professor Manjunath

Absolutely to the institutions. They have the money to invest and to discover what's going on. There is no way citizens can beat that so easily. It requires a different, whatever; I'm not allowed to say anything more.

Moderator

My last question for all the panelists before we open the floor for audience questions. If we get AI right, what is one everyday improvement people in this room would actually feel within the next five years?

Professor Seth Bullock

So I think there's a thread that runs through this, or there's supposed to be, and one thing that AI could give us is a greater sense that we are properly connected with each other and learning from each other. The possibility for AI to break down barriers between people, barriers of language and expertise and distance, I think is huge. The kind of traditional collective intelligence that we're used to, where we put an X in a box when we vote for someone, is very, very simple, right? We can't each write an essay, like the users of Antara's system, and send it to the government about what we want, because there are so many people that no one can read all of those essays.

But AI can enable that kind of rich interaction. It's an example of one of the things that Kush is talking about: AI delivering something that is impossible for humans to do, not just replacing something that humans are already doing. So a future in which we all feel like we have a voice, and AI is helping us mediate between each other, I think is something that is technically possible. There's a whole bunch of political and social barriers that could prevent that from happening, but I think five years is a timeline during which we could see the start of those sorts of systems.

Kushe Bahl

I can talk about what I'd like to see if we get AI right. We talk a lot about institutions, we talk about companies, we talk about individuals, but not enough talk happens specifically about small businesses. India is a country of self-employed people and small enterprise; I think there are about 150 million self-employed people. If each of those people could somehow earn 600 rupees more because of AI, and I'll talk about how, that's a unicorn. So 600 rupees more for each of these 150 million people, I mean, there are a lot of large numbers in India, but it's true, right? It's a unicorn. So when we think of the next 50 unicorns, we may not think of 50 companies worth a billion dollars each; we may think of 50 innovations that put 600 rupees more in the pockets of 150 million people.
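The "that's a unicorn" claim checks out arithmetically. The 150 million and 600 rupee figures are Mr. Bahl's; the exchange rate of roughly 83 rupees per dollar is my assumption:

```python
self_employed = 150_000_000      # Mr. Bahl's estimate of self-employed Indians
extra_inr = 600                  # extra rupees earned per person
inr_per_usd = 83.0               # assumed exchange rate

total_inr = self_employed * extra_inr            # 90 billion rupees
total_usd_billion = total_inr / inr_per_usd / 1e9
print(f"{total_usd_billion:.2f} billion USD")    # ≈ 1.08, about one unicorn
```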

And how does one do that? If you look at all the important things all of us use today, ride hailing, e-commerce, restaurant ordering, food ordering, all of these were created by an institution: they make an app and then spend money on marketing and so on. Today, you have AI systems that are incredibly low cost. Fifty cab drivers can get organized; an AI agent can do the scheduling and whatever else; you have a WhatsApp chat with them and you can just find the driver, right? There's no reason why we can't have innovation like this. Very low cost: the cost of the tokens can be funded within that ride.

That's all there is to running it. It's an autonomous system which just runs off publicly available infrastructure. I think that, to me, is the real unlock that we can see. And those same systems can then serve anyone in the world. So you can do this for taxi drivers, you can do this for lawyers, and those lawyers can then serve anyone anywhere in the world. So I think that's the real unlock that we are waiting for. These systems are very low cost to build. They can be built by anybody; they can be self-built by people. It just takes a group of a few of these self-employed people to get together.

And then, you know, suddenly this can go viral. So I would love to see that type of innovation coming, rather than necessarily the stuff that we know the companies will do, or the things that we'll all play around with on our LLMs ourselves.

Moderator

Great. Antara?

Antaraa Vasudev

Thank you. Building on what Seth and Mr. Baral just said, there are two things that I see happening. One is the disaggregation of systems and a lot of decentralized control mechanisms, right? When that happens, you have very fragmented channels to actually engage with institutions, to Seth’s point about building collectives and new ways of collective intelligence. What I want to see happening for all of us in the room is greater access and connectivity to public institutions, which actually fuels easier access to the entitlements and benefits that the state is supposed to provide to us. If AI can get that right, if we can solve for that, I think there is a long and big argument to be made about that being the sort of rising tide that lifts all boats.

Professor Nirav Ajmeri

Building on what people have been saying, and lastly on Antara’s point about collectives: we can build systems which work for individuals, but each individual has different preferences. How do we take into account different people’s preferences? How do we aggregate them and come up with a collective decision? If we are coming up with a collective decision, how does that decision affect various other people? How do we explain that decision to other people: hey, we have taken into account your preferences in this particular way? We need to get that part of AI right to make sure that people have buy-in, that people trust the system we are designing.

So that is what I would want to see, and I think we are moving forward with that. We are thinking about fairness, we are thinking about transparency, we are thinking about accountability, and so on and so forth.

Professor Manjunath

Yeah, I can probably say what I already see: the homework my students submit is perfect. The essays are spectacularly written, the presentations are beautiful. The only hope I have is that they actually understand what they say. If that happens, I will be very happy. The output is perfect; the understanding behind that output, I hope, will get better and better. That’s my wish.

Moderator

I’m going to open up the floor for audience questions.

Audience Member 1

My question is, sir: I want to understand what kind of impact AI will have on management consultants and the business.

Kushe Bahl

I have no idea. I have no idea, really; it’s very hard to say. Every industry is going to evolve. Obviously, management consultants, like everybody else, are using AI for every possible thing that they can do with it. So they’re also trying to become more efficient, more productive with it. We don’t know what that means in terms of reshaping of the business. If you look at past tech innovations, which have also had a very big impact on productivity in many sectors, it’s not that entire sectors have disappeared, but things have got reshaped significantly. That has happened a lot. So take the job that consultants do: today when we do research, you don’t wait one week for somebody to go and find things from everywhere.

It comes in a few minutes. Unfortunately, like Professor Manjunath, I have also seen a lot of the output, and I find two issues with the current versions of AI. When it writes, it has no soul. It’s correct, but it has no soul. And when it prepares a presentation or a piece of communication, it’s not inspiring. It is correct, but it’s not inspiring. So consultants will spend more time on actually communicating in a way that’s inspiring, while the basic desk work will be done for you. You spend time doing more, I would say, human tasks.

And that’s actually going to happen in a lot of other service jobs, right? You’re going to spend time doing what humans are truly supposed to do and are really good at, which the AI models are not able to do.

Audience Member 2

Okay, thanks. So my question is for everyone. I have a younger cousin who is in high school, and her entire life is on ChatGPT at this point. She shares everything, relationship issues, family issues, and it knows more about her than I do. And I kind of worry when I see the younger generation getting on these AI platforms. So what is your take on this? What is the impact of this technology on young minds?

Professor Seth Bullock

So, I share your concern. I have slightly older kids. I think we have to trust that we’ve been through these technological shifts before. When my parents looked at me watching television, they had similar worries; they told me that my eyes would become square because I watched too much television. Actually, my generation became much more sophisticated consumers of television and were much more savvy about TV ads than my parents’ generation. So I think we have to listen to our children about the way that they’re using these technologies. They’re natives in this new world. I’m calibrated for a world where AI doesn’t work, where AI is not rolled out across the whole world, so I’m really the wrong person to ask about how AI is going to change people. We should ask young people how they’re using it, and engage with them before they start to use AI in secret, in ways that we don’t understand.

Kushe Bahl

I have a funny answer and a short answer. I think the real danger actually is not with the ChatGPTs of the world, but with the earlier addictive systems like the Instagrams of the world, right? Because they are genuinely playing on our brain’s dopamine circuits and are genuinely addictive, and can therefore be harmful. With ChatGPT, the only thing I would say is that it makes one question where we are as individuals, as parents, as families, that our children prefer to communicate with a relatively soulless device which answers everything the way an American therapist’s textbook would, right? That they prefer to talk to that than to us.

It shows what a distance we have created with each other, right? And that may be a good reminder to us as individuals of the task we have to do to rebuild bonds with each other.

Antaraa Vasudev

On a very similar note to what Kushe just said, there have been studies from Youth Ki Awaaz and a number of other global youth-based organizations looking at why exactly we turn to AI. And the phraseology is very interesting there, because it indicates that turning to AI is something that you can also turn away from. The questions really come up around exactly what was just mentioned: understanding what kinds of tactile family bonds, what kinds of lived-experience-based interactions we can keep having with the younger generation to show that AI is a part of their life, but it’s not the only part of their life.

And I think that’s maybe my hypothesis on where we’re headed there.

Audience Member 3

I have a quick follow-up, and you can connect it with the previous question also. Many countries right now are trying to ban the new AI. Clearly there is evidence that harm is coming; you mentioned Instagram, or any other. AI is an amplifier. So unless we design something, whether it’s regulation or guardrails or whatever, what is the hope for a society not to get harm amplified beyond what it has already experienced, especially for this generation? Shall we start with you, sir?

Speaker 3

Well, that’s basically what I wanted to say. Spain and Australia are two examples of countries where severe restrictions have been put on social media companies, at least with respect to access for children. And that’s an interesting experiment; one has to see what will happen, because it’s not an easy thing to do. Technologically it’s not easy, and legally I’m sure there are a lot of loopholes in all of this. We have to see how that evolves, and potentially apply a similar kind of guardrails with respect to AI. That’s at least the view that I have on that matter.

Professor Manjunath

No, it has to start somewhere. This goes exactly to the point I made earlier: generalists in government cannot handle the pace at which technology can move. You cannot put guardrails on it at the beginning; the moment you know something is happening, you have to get into the act as quickly as possible. Somebody is making an attempt, so let’s understand what’s going on. Exactly what will happen is something we have to see. What was interesting, at least in that attempt, was the way the social media companies reacted to both the Australian and the Spanish bans. To me, the most interesting part was that they all said it was too fast, that they had not thought things through.

And then I remembered what Facebook’s slogan was: move fast and break things. They are allowed to move fast, but the legal system is not allowed to experiment. That seemed like an interesting contradiction for me to study.

Professor Seth Bullock

Relatedly, the first AI summit in London was very closed, right? Politicians and the leaders of big tech firms. So the idea that a couple of years later governments would actually be legislating in ways that limited, in this case, social media companies is very good news. After London, you could imagine that regulatory capture had happened, that governments were not going to be able to resist these big companies and their multinational power. So those first couple of steps of regulating social media for under-16s, even if it doesn’t quite work, even if it’s not exactly right, are at least a step toward introducing regulations, and they will make AI companies at least aware that that is a possibility.

Because they have to take that responsibility, I think.

Moderator

Professor Nirav, do you have any other input on that as well?

Professor Nirav Ajmeri

I agree with the points that have been made. There could be different ways to think about a blanket ban, for instance. If you try to restrict something, people may become more curious about why it is being banned, so we have to be thinking about that as well. But it is a step. There will have to be some regulations that come into place; what those regulations would be, we need to be thinking about. A lot of times the worry is that people keep scrolling, and then, the way the algorithms work, Professor Manjunath knows better, but recommender systems would put you in a rabbit hole.

And you keep going in one direction. Echo chambers could get formed. So the younger population is more vulnerable there, and that is where possibly a ban or restricted access helps. We have to be thinking about how we can do this: say, on YouTube there is YouTube Kids, where they only see kids’ content, but then there are malicious actors who post content which is targeted towards kids but is not actually kids’ content. Somebody could come up with a new social media platform for kids. I am not very sure what it would look like, but new technology will come, and it needs some guardrails to be put in place.

What kind of guardrails? Researchers and legislators will have to think about that.

Moderator

Sure. I think we have time for one last question. Can we give it to somebody at the back? Yeah. The jean jacket. Yeah. Go for it. Can we pass the mic at the back, please?

Audience Member 4

So, AI has definitely enabled things in the education and medical domains. But do we think that it has also infringed on, or violated, the consent of the creators? There are singers who no longer exist, yet the new generation is getting to hear new songs in their voices. The ones who are alive definitely have a way to respond, but for those who are no longer with us, it is a breach of consent. Of course, this falls under the domain of ethical AI, but I just wanted to know your thoughts.

Moderator

Is this question directed at someone in particular, or is it open for all? Ethical AI. Okay, whoever would like to take that.

Professor Seth Bullock

So, I think it’s a completely legitimate concern, and it’s difficult to understand where we go from here, because the cat is already out of the bag, right? The models are already trained on everyone’s data without our consent, and how do we undo that? I’m not sure that we can. There are currently legal cases going through the courts about the IP claims of musicians and artists, and it will be very interesting to see what the courts decide. I do think the kind of systems I’m interested in are systems built on consent. So a population of people who all have diabetes sign up for an app that will track their disease, and they gain by being part of a community where information is shared to help people manage their diabetes.

So that’s a much more consent-based model. It’s not about stealing people’s writing and art and music from the Internet. But that activity is already underway, and I don’t see a way of really putting it back in the box.

Moderator

Let’s do one last question.

Audience Member 5

Yeah, my question is on the topic of education. One thing that we have observed is that with instant feedback from AI tools, especially in education, students do not go through the whole step-by-step process of building foundations. So suppose the courses, or the tools, worked in a way that they taught the student step by step instead of giving instant gratification with the output. The question is this: has any of the professors on the panel been approached about this kind of modeling of the education process, of the process of learning especially?

And the other thing: could we see a collaboration in that regard, where we try to create a regulatory framework or guidelines for how AI tools should be constructed for imparting education step by step, so that gratification is structured? Thank you.

Professor Manjunath

Yeah, the short answer is no, nobody is thinking along those lines. And handling AI in a classroom has been quite painful. To give you one example, I asked a student to write a certain program to perform a certain task, and I gave the data. The student went to ChatGPT to understand what the question was about, created her own data, and did not know how to use the data that I was giving. So the point you are making is extremely valid. If you want to think about legislation or any other guardrails or anything like that, I’m happy to discuss those with you offline.

To give a very brief answer today: more generally, I think every university is struggling with that question, and I’m hoping that there are lots of bright people and we will start to see some answers. But it’s not easy.

Moderator

Well, a big thank you to all the panelists here, and a big thank you to all the audience members as well for being such a great and engaged audience. We have a token of appreciation from the University of Bristol for all the panelists. Thank you very much.

Related Resources: Knowledge base sources related to the discussion topics (31)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“The panel opened with a playful “Avengers” metaphor, positioning each speaker as a superhero to illustrate the diversity of perspectives on artificial intelligence (AI) for the collective good.”

The moderator explicitly referenced the Avengers metaphor in the discussion, as recorded in the transcript excerpt [S3].

Confirmed (high)

“Professor Seth Bullock argued that AI should move beyond answering isolated queries and become a tool for population‑scale coordination, supporting entire groups rather than individual queries.”

Bullock’s stance on designing AI for whole-population support rather than single-user queries is documented in the knowledge base entry [S4].

Additional Context (medium)

“Bullock called for new technologies, delivery models, and cross‑sector partnerships among researchers, private firms, non‑profits and governments to achieve population‑scale AI.”

The importance of multilateral, multi-stakeholder collaboration for AI deployment is highlighted in several sources, e.g., the call for broad sector participation in AI initiatives [S102] and the emphasis on multi-stakeholder partnerships for effective AI implementation [S103].

Additional Context (medium)

“Professor Manjunath characterised recommendation systems as learning agents that infer users’ utility functions set by platform owners, allowing platforms to reshape tastes and act as powerful, personalised advertisements.”

The knowledge base notes that platforms control massive information about users and use targeted advertising, which aligns with the description of platforms shaping user preferences [S107] and the critique of invasive targeted ads [S109].

External Sources (109)
S1
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S2
Global Perspectives on Openness and Trust in AI — Speakers:Alondra Nelson, Audience member 3 Speakers:Anne Bouverot, Alondra Nelson, Audience member 3
S3
Harnessing Collective AI for India’s Social and Economic Development — -Professor Seth Bullock- Professor studying how societies hold together, coordination systems, and shared values; works …
S4
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Professor Seth Bullock Speakers:Professor Manjunath, Professor Seth Bullock Speakers:Professor Se…
S5
Harnessing Collective AI for India’s Social and Economic Development — – Antaraa Vasudev- Professor Manjunath – Professor Manjunath- Professor Seth Bullock
S6
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Professor Manjunath, Antaraa Vasudev Speakers:Professor Manjuna…
S7
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S8
Global Perspectives on Openness and Trust in AI — Speakers:Karen Hao, Audience member 1, Audience member 5
S10
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S13
S14
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Antaraa Vasudev, Professor Manjunath Speakers:Antaraa Vasudev, Professor Nirav Ajmeri Speakers:Kushe Bahl, An…
S15
Building the Workforce_ AI for Viksit Bharat 2047 — -Speaker 1- Role/Title: Not specified, Area of expertise: Not specified -Speaker 3- Role/Title: Not specified, Area of …
S16
S18
Keynote-Olivier Blum — -Moderator: Role/Title: Conference Moderator; Area of Expertise: Not mentioned -Mr. Schneider: Role/Title: Not mentione…
S19
Day 0 Event #250 Building Trust and Combatting Fraud in the Internet Ecosystem — – **Frode Sørensen** – Role/Title: Online moderator, colleague of Johannes Vallesverd, Area of Expertise: Online session…
S20
Conversation: 02 — -Moderator: Role/Title: Event moderator; Area of expertise: Not specified
S21
Harnessing Collective AI for India’s Social and Economic Development — -Professor Nirav Ajmeri- Professor at University of Bristol focusing on multi-agent systems and socio-technical networks
S22
AI Transformation in Practice_ Insights from India’s Consulting Leaders — – Romal Shetty- Sanjeev Krishan- Audience member 3- Audience member 4
S23
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S24
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Sorry, we have a lot of people who’ve raised their hands. I think we can just probably take a couple of questions. I thi…
S25
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S26
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S27
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S28
S29
Harnessing Collective AI for India’s Social and Economic Development — Speakers:Kushe Bahl, Antaraa Vasudev Speakers:Kushe Bahl, Antaraa Vasudev, Audience Member 2 Speakers:Kushe Bahl, Prof…
S30
From Innovation to Impact_ Bringing AI to the Public — “we are all in committed towards agent -first interfaces.”[91]. “The agent will talk to agent.”[82]. Sharma states that…
S31
Revitalising trust with AI: Boosting governance and public services — AI is reshaping public governance, offering innovative ways to enhance services and restore trust in institutions. The d…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — Third, ensuring transparency in AI systems:Commanders must understand the data sources, training methodologies, and deci…
S33
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Olahaji highlights AI’s potential to improve democratic governance by analyzing citizen feedback, enabling online consul…
S34
Education meets AI — Lastly, the analysis supports teaching critical thinking as a basic skill. It is agreed that students should learn how t…
S35
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — This comment humanized the capacity building challenge and validated the struggles many educators face. It shifted the d…
S36
https://app.faicon.ai/ai-impact-summit-2026/harnessing-collective-ai-for-indias-social-and-economic-development — Absolutely to the institutions. They have the money. to invest and discover what’s going on. There is no way citizens ca…
S37
Building Population-Scale Digital Public Infrastructure for AI — Excellent point. Excellent point, Trevor. And I think you brought out the inherent stress in the phrase diffusion pathwa…
S38
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S39
AI for Democracy_ Reimagining Governance in the Age of Intelligence — Thank you. The principle that elected legislatures shape the rules governing society is… the cornerstone of democracy….
S41
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S42
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S43
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Development | Legal and regulatory Evidence-Based Policymaking and Research Integration Part of the roadmap emphasizes…
S44
Artificial intelligence (AI) – UN Security Council — In conclusion, the discussions highlighted the importance of fostering transparency and accountability in AI systems. En…
S45
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Galia Daor:Yeah, thanks very much. I admit it’s a bit challenging to speak after Allison on that front, but I will try, …
S46
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Rather than following historical patterns of automation that replace workers, AI development should prioritize applicati…
S48
Shaping the Future AI Strategies for Jobs and Economic Development — This discussion focused on AI-driven strategies for workforce and economic growth, examining how artificial intelligence…
S49
Shaping the Future AI Strategies for Jobs and Economic Development — A central theme emerged around collaboration rather than displacement of human workers. Panelists emphasized that AI sho…
S50
Harnessing Collective AI for India’s Social and Economic Development — Professor Bullock argues that AI systems should be designed to support entire populations simultaneously rather than jus…
S51
Building Population-Scale Digital Public Infrastructure for AI — Summary:All speakers agree that moving from fragmented pilot projects to systematic, coordinated approaches is essential…
S52
How to make AI governance fit for purpose? — Focus should be on actions and practical outcomes rather than regulation, with emphasis on innovation over regulatory co…
S53
Scaling Enterprise-Grade Responsible AI Across the Global South — “And those engineered systems might require, for example, yes, human in the loop or on the loop, for sure, but also agen…
S54
AI governance in India: A call for guardrails, not strict regulations — The TRAI’srecent call to regulateAI comes at a time when policymakers must address rapidly evolving technological innova…
S55
From principles to practice: Governing advanced AI in action — Juha Heikkila: Thank you. Thank you very much. It’s indeed a great pleasure to be here and to be a member of this panel….
S56
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S57
Artificial intelligence — Capacity development Content policy Online education
S58
Why science metters in global AI governance — But now I don’t know what is the causal factor there. I don’t know if the causal factor is whether they are using AI mor…
S59
Empowering India & the Global South Through AI Literacy — Explanation:The unexpected consensus emerges around the government’s commitment to introduce AI education from class thr…
S60
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S61
Safeguarding Children with Responsible AI — High level of consensus across diverse stakeholders (government, industry, academia, and youth representatives) suggests…
S62
Open Forum #30 High Level Review of AI Governance Including the Discussion — High level of consensus with significant implications for AI governance development. The alignment suggests that despite…
S63
Safeguarding Children with Responsible AI — Consensus level:High level of consensus across diverse stakeholders (government, industry, academia, and youth represent…
S64
AI for Social Empowerment_ Driving Change and Inclusion — This discussion focused on the impact of artificial intelligence on labor markets and employment, featuring perspectives…
S65
Generative AI is enhancing employment opportunities and shaping job quality, says ILO report — A new study conducted by the International Labour Organization (ILO) investigates the consequences of Generative AI on t…
S66
Anthropic report shows AI is reshaping work instead of replacing jobs — A new report by Anthropicsuggestsfears that AI will replace jobs remain overstated, with current use showing AI supporti…
S67
Harnessing Collective AI for India’s Social and Economic Development — Thanks a lot. So it’s great to be here in India. I think this topic is extremely relevant to both the UK where I’m worki…
S68
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Social and economic development Professor Bullock argues that AI systems should be designed t…
S69
How nonprofits are using AI-based innovations to scale their impact — However, several challenges remain unresolved. The technical issue of AI hallucinations continues to affect user trust, …
S70
AI for Good Technology That Empowers People — Low to moderate disagreement level with significant implications for implementation strategies. The differences suggest …
S71
Gathering and Sharing Session: Digital ID and Human Rights C | IGF 2023 Networking Session #166 — Amandeep Singh Gill:Thank you very much. It’s a great pleasure to join you, and such an important topic. So, the interfa…
S72
WS #86 The Role of Citizens: Informing and Maintaining e-Government — PeiChin Tay emphasizes the importance of leveraging technology to reduce barriers and create digital feedback loops in e…
S74
How to make AI governance fit for purpose? — Jennifer Bachus: So, in addition to my very strong concern that essentially A.I. governance is going to strangle A.I. in…
S75
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Ng emphasized that whilst efficiency gains from AI point solutions might yield modest improvements, transformative workf…
S76
The State of the model: What frontier AI means for AI Governance — ### Task Automation vs. Job Replacement
S77
When Code and Creativity Collide: AI’s Transformation of Music and Creative Expression — Juliet Mann argues that artificial intelligence is advancing at an unprecedented pace compared to previous technologies….
S78
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S79
Elevating AI skills for all — The tone is consistently optimistic, enthusiastic, and collaborative throughout. The speaker maintains an upbeat, missio…
S80
AI Policy Summit Opening Remarks: Discussion Report — The tone is consistently optimistic and collaborative throughout both speeches. Both speakers maintain an encouraging, f…
S81
Managing Change in Media Space: Social Media, Information Disorder, and Voting Dynamics 2 — Additionally, SDG 17: Partnerships for the Goals accentuates the critical function of worldwide collaborations in realis…
S82
Open Forum #7 Deepen Cooperation on Governance, Bridge the Digital Divide — The overall tone was collaborative, optimistic and forward-looking. Speakers shared positive examples and experiences fr…
S83
Why science metters in global AI governance — Summary:The discussion revealed surprisingly few fundamental disagreements among speakers, with most conflicts arising a…
S84
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S85
As AI agents proliferate, human purpose is being reconsidered — As AI agentsrapidly evolvefrom tools to autonomous actors, experts are raising existential questions about human value a…
S86
Strategic prudence in AI: Experts advise incremental approach for meaningful advancements — At TechCrunch Disrupt 2024, data management leadersadvisedAI-driven businesses to focus on incremental, practical applic…
S87
GOVERNING AI FOR HUMANITY — Problems such as bias in AI systems and invidious AI-enabled surveillance are increasingly documented. Other risks …
S88
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S89
Driving India's AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S90
How Trust and Safety Drive Innovation and Sustainable Growth — The discussion concluded with panelists predicting what AI summits might be called in five years’ time. Their responses …
S91
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — The conversation maintained an optimistic and patriotic tone throughout, with both participants expressing strong confid…
S92
Comprehensive Discussion Report: AI Agents and Fiduciary Standards — The tone is pragmatic and solution-oriented throughout, with Pentland presenting a confident, business-like approach to …
S93
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S94
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S95
Closing Ceremony — The overall tone was positive and forward-looking. Speakers expressed gratitude to the hosts and participants, emphasize…
S96
[Parliamentary Session Closing] Closing remarks — The tone of the discussion was formal yet collaborative and appreciative. There was a sense of accomplishment for the wo…
S97
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S98
Inclusive AI Starts with People Not Just Algorithms — The tone was consistently optimistic and empowering throughout the discussion. Speakers maintained an enthusiastic, forw…
S99
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S100
Panel Discussion AI and the Creative Economy — This panel discussion examined the complex relationship between artificial intelligence and cultural diversity in creati…
S101
AI for agriculture Scaling Intelligence for food and climate resilience — Thank you. Thank you, sir, for your visionary address. You always continue to inspire us to aim higher and achieve bette…
S102
All hands on deck to connect the next billions | IGF 2023 WS #198 — Additionally, Joe Welch affirms the value of a multilateral, multistakeholder approach. He emphasizes the need for colla…
S103
AI/Gen AI for the Global Goals — Speakers consistently emphasized the crucial role of multi-stakeholder collaboration in effectively developing and imple…
S104
https://dig.watch/event/india-ai-impact-summit-2026/welfare-for-all-ensuring-equitable-ai-in-the-worlds-democracies — And I think, you know, more globally, you know, efforts like the Hiroshima AI process, there are sort of all these pre -…
S105
The History of Cyber Diplomacy Future — Pascal Lamy challenged traditional approaches to international cooperation, arguing that “Classical multilateralism… i…
S106
Sangeet Paul Choudary — Another issue that affects drivers arises from the implementation of surge pricing on ride-hailing platforms. Platforms …
S107
© 2019, United Nations — In the digital economy, platforms unilaterally control massive amounts of information about producers and consumer…
S108
7th edition — The net neutrality debate triggers linguistic debates. Proponents of net neutrality focus on Internet ‘users’, while the…
S109
Digital democracy and future realities | IGF 2023 WS #476 — These corporations, with their established platforms and significant influence, can create barriers for competing servic…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Professor Seth Bullock
2 arguments · 165 words per minute · 1587 words · 575 seconds
Argument 1
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
EXPLANATION
Professor Bullock argues that AI should move beyond answering individual questions and be designed to support entire populations facing common challenges, such as floods or disease outbreaks. By coordinating many users simultaneously, AI can share intelligence and improve collective outcomes.
EVIDENCE
He explains that instead of a single person asking an AI a question, AI can be built to help a whole population affected by a flood, a disease, or tax filing, enabling coordination and better outcomes for many people at once [28-30]. He adds that achieving this requires new technologies, partnerships between researchers, companies, and governments, and a shift away from purely commercial AI tools [31-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Bullock’s claim that AI should support entire populations and enable coordinated action is corroborated by S4, which emphasizes designing AI for whole-community challenges rather than individual queries [S4].
MAJOR DISCUSSION POINT
Population‑scale coordination
AGREED WITH
Antaraa Vasudev, Professor Nirav Ajmeri
Argument 2
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
EXPLANATION
Bullock warns that upcoming AI systems will be agentic, pursuing specific goals and interacting with other agents, which could lead to unintended resource consumption and conflicts. Embedding social responsibility into these agents is essential to prevent harmful cascades.
EVIDENCE
He describes a next wave of AI where agents have purposive aims, communicate, and may task each other, creating cascades of requests that consume resources and could disadvantage others, emphasizing the need for social responsibility in their design [58-65]. He illustrates how a trivial request could trigger a large chain of agent interactions, highlighting potential unforeseen consequences [66-68].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for socially responsible, purposive AI agents that interact at scale is highlighted in S4 and further detailed in S30, which discusses agent-first interfaces and agents talking to agents [S4][S30].
MAJOR DISCUSSION POINT
Agentic AI and responsibility
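Bullock's cascade concern lends itself to a back-of-the-envelope model. A minimal sketch (the branching factor and delegation depth below are illustrative assumptions, not figures from the session) shows how a single request grows geometrically when each agent delegates part of its task to a few others:

```python
# Toy model of an agent-to-agent request cascade: each agent forwards
# part of its task to `branching` sub-agents, down to `max_depth` levels.

def cascade_size(branching: int, max_depth: int) -> int:
    """Total requests triggered by one top-level request (depth 0 included)."""
    if max_depth == 0:
        return 1
    return 1 + branching * cascade_size(branching, max_depth - 1)

# A "trivial" request where each agent asks 3 others, 5 levels deep:
print(cascade_size(branching=3, max_depth=5))  # 364 requests
```

Even modest branching factors make the resource bill grow quickly, which is why Bullock argues that social responsibility must be embedded in agent design rather than bolted on afterwards.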
Professor Nirav Ajmeri
2 arguments · 148 words per minute · 695 words · 280 seconds
Argument 1
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
EXPLANATION
Ajmeri states that multi‑agent systems can capture the interaction of people, organizations, and technical tools, allowing the design of solutions that aim for global optima rather than local, individual optima. This can improve social welfare in domains such as ride‑sharing and pandemic prevention.
EVIDENCE
He explains that current ride-sharing optimizes for each individual, leading to local maxima, whereas a multi-agent approach can target a global optimum that maps to social welfare, questioning what social welfare means and how to achieve it [47-52]. He also mentions that epidemic and pandemic prevention are inherently multi-agent problems requiring coordinated solutions [55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Ajmeri’s argument that multi-agent systems can achieve global, socially optimal outcomes is supported by S4, which describes his view on moving from individual to collective optimization [S4].
MAJOR DISCUSSION POINT
Socio‑technical optimization
AGREED WITH
Professor Seth Bullock, Antaraa Vasudev
Argument 2
Intelligence emerges from interacting agents; suitable for problems like ride‑sharing, pandemics, and social welfare
EXPLANATION
Ajmeri emphasizes that intelligence is not isolated but arises from the interaction of many agents, making multi‑agent frameworks appropriate for complex societal challenges. By modeling these interactions, AI can help design fair and effective collective decisions.
EVIDENCE
He notes that intelligence emerges when social entities (people, organizations) and technical tools (intelligent agents, applications) interact, and that this structure fits problems such as ride-sharing, where individual optimization leads to sub-optimal global outcomes, and public health crises that require coordinated action [47-52][55-56].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 provides additional context for Ajmeri’s point that intelligence emerges from the interaction of many agents and is apt for complex societal problems such as ride-sharing and pandemic response [S4].
MAJOR DISCUSSION POINT
Emergent intelligence in multi‑agent systems
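Ajmeri's local-versus-global distinction can be seen in a two-rider, two-driver toy assignment (all pickup times below are invented for illustration): riders greedily grabbing the nearest driver can leave the system as a whole worse off than a globally optimized assignment.

```python
from itertools import permutations

# Invented pickup times (minutes) for each rider/driver pair.
cost = {("R1", "D1"): 2, ("R1", "D2"): 3,
        ("R2", "D1"): 3, ("R2", "D2"): 10}
riders, drivers = ["R1", "R2"], ["D1", "D2"]

# Individual optimization: each rider in turn takes the nearest free driver.
free, greedy_total = set(drivers), 0
for r in riders:
    best = min(free, key=lambda d: cost[(r, d)])
    greedy_total += cost[(r, best)]
    free.remove(best)

# System-level optimization: minimize total wait across all riders,
# a simple stand-in for "social welfare".
global_total = min(sum(cost[(r, d)] for r, d in zip(riders, perm))
                   for perm in permutations(drivers))

print(greedy_total, global_total)  # 12 6
```

R1 taking its nearest driver (2 minutes) forces R2 into a 10-minute pickup; the global optimum accepts a slightly worse ride for R1 so the total wait falls from 12 to 6 minutes.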
Antaraa Vasudev
2 arguments · 170 words per minute · 883 words · 310 seconds
Argument 1
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
EXPLANATION
Vasudev explains that AI is currently used to enable citizens with limited legal knowledge to ask questions, air grievances, and understand policies, while also being employed for large‑scale optimization of government functions. She stresses that transparent, accessible, and equitable frameworks are needed to ensure AI benefits are fairly distributed.
EVIDENCE
She describes AI-driven citizen engagement tools that clarify doubts, collect grievances, and explain policy frameworks, alongside AI-based optimization for a country as large and diverse as India, calling for transparent and equitable frameworks before scaling AI solutions [109-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Vasudev’s emphasis on AI-enabled citizen engagement and the need for transparent, equitable frameworks is echoed in S4, which discusses AI tools for large-scale government optimization and calls for transparency [S4].
MAJOR DISCUSSION POINT
Civic engagement and transparent AI governance
AGREED WITH
Professor Manjunath, Speaker 3
Argument 2
AI can empower citizens by aggregating massive feedback and informing policy decisions
EXPLANATION
Vasudev highlights a project with the Maharashtra government where AI collected hundreds of thousands of citizen inputs via a chatbot, aggregated them, and fed the results back into policy making, ensuring future laws consider citizen perspectives.
EVIDENCE
She details how Civis built an easy-to-use chatbot that gathered 3.8 lakh responses from 37 districts, aggregated the feedback, and produced the publicly available Viksit Maharashtra report, after which the state mandated that upcoming laws factor in citizen input [121-130].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Maharashtra chatbot project described in S4, which gathered 3.8 lakh responses and fed them into policy making, directly supports this argument [S4].
MAJOR DISCUSSION POINT
Citizen‑centric policy design
AGREED WITH
Professor Seth Bullock, Professor Nirav Ajmeri
Professor Manjunath
4 arguments · 169 words per minute · 1529 words · 540 seconds
Argument 1
Recommendation engines act as powerful nudges that reshape user preferences and can hide bias
EXPLANATION
Manjunath argues that recommendation systems learn users’ likes and dislikes through utility functions defined by platform owners, subtly steering preferences over time. This nudging effect can be large and may conceal underlying biases.
EVIDENCE
He explains that recommendation systems act as learning agents that present options, capture reactions via utility functions, and over time can dramatically change user preferences, acting as advertisements that heavily influence population tastes [77-99].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s claim about recommendation systems learning utility functions and subtly shifting preferences is substantiated by S4, which outlines how such systems act as nudges and can conceal bias [S4].
MAJOR DISCUSSION POINT
Algorithmic nudging and hidden bias
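The nudging loop Manjunath describes can be sketched in a few lines of simulation (the utilities, initial preferences, and drift rate are all invented assumptions): a recommender that maximizes an owner-defined utility, combined with a small exposure effect on the user, steadily pulls preferences toward what the platform wants to show.

```python
platform_utility = {"news": 1.0, "ads": 5.0}  # owner-defined utility function
preference = {"news": 0.8, "ads": 0.2}        # user's initial taste
drift = 0.05                                   # assumed nudge per exposure

for _ in range(50):
    # The recommender shows the item with the best platform payoff,
    # weighted by the user's current likelihood of engaging with it.
    shown = max(platform_utility,
                key=lambda item: platform_utility[item] * preference[item])
    # Exposure effect: preferences drift slightly toward whatever is shown.
    for item in preference:
        target = 1.0 if item == shown else 0.0
        preference[item] += drift * (target - preference[item])

print(preference)  # "ads" ends up dominating despite starting at 0.2
```

The user never chose the shift: because the platform's utility, not the user's, drives what is shown, a small per-exposure nudge compounds into a large change in taste, which is the hidden-bias dynamic Manjunath warns about.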
Argument 2
Governments should act as enablers and monitors, not micromanage technology development
EXPLANATION
Manjunath cautions that when governments overly direct technology projects, such as India’s CDOT or Japan’s Fifth Generation computing, they often fail. He recommends that governments enable private innovation, monitor outcomes, and intervene only to prevent harms.
EVIDENCE
He cites the failure of India’s CDOT after government micromanagement and Japan’s Fifth Generation AI project as examples of over-directed initiatives, then argues that governments should enable, monitor, and nudge rather than control technology development [139-152][155-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S4 cites Manjunath’s examples of CDOT and Japan’s Fifth Generation project to illustrate the pitfalls of government micromanagement and his recommendation for an enabling role [S4].
MAJOR DISCUSSION POINT
Government role as enabler vs. director
AGREED WITH
Antaraa Vasudev, Speaker 3
Argument 3
Educators face challenges with AI‑generated work lacking depth and inspiration
EXPLANATION
Manjunath observes that AI‑produced content, while correct, often lacks the ‘soul’ and inspirational quality of human‑crafted material, making it insufficient for educational purposes.
EVIDENCE
He notes that AI-generated presentations and essays are accurate but have no soul and are not inspiring, highlighting a limitation for teaching and learning [375-378].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Manjunath’s observation that AI-generated content is accurate but lacks ‘soul’ and inspiration is documented in S4, providing direct support for this concern [S4].
MAJOR DISCUSSION POINT
Quality of AI‑generated educational content
Argument 4
Over‑directed government projects often fail; better to enable private innovation while monitoring risks
EXPLANATION
Reiterating his earlier point, Manjunath emphasizes that government‑driven tech projects frequently underperform, and a more effective approach is to let the private sector lead while the state ensures safety and fairness.
EVIDENCE
He repeats the CDOT and Fifth Generation examples to illustrate failure of government-led tech, and advocates for an enabling role with monitoring and risk mitigation [139-152][155-162].
MAJOR DISCUSSION POINT
Policy approach to AI development
Speaker 3
2 arguments · 179 words per minute · 117 words · 39 seconds
Argument 1
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
EXPLANATION
Speaker 3 points out that countries like Spain and Australia have imposed strict restrictions on social‑media platforms for children, serving as experimental guardrails that could inform similar measures for AI.
EVIDENCE
He mentions that Spain and Australia have placed severe restrictions on social-media companies to protect children, describing these as interesting experiments whose outcomes need to be observed [409-416].
MAJOR DISCUSSION POINT
Early regulatory safeguards for vulnerable users
AGREED WITH
Antaraa Vasudev, Professor Manjunath
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
Speaker 3 argues that imposing early limits on technology use by minors sends a clear signal to industry that regulation is possible, encouraging responsible behavior even if the measures are imperfect.
EVIDENCE
He explains that the bans in Spain and Australia, though not easy to implement, represent a step toward accountability that may influence how AI companies operate [409-416].
MAJOR DISCUSSION POINT
Regulation as a catalyst for industry responsibility
Audience Member 1
1 argument · 100 words per minute · 22 words · 13 seconds
Argument 1
Concern about AI’s effect on management consulting and the need to focus on human‑centric tasks
EXPLANATION
The audience member asks how AI will impact management consultants, expressing worry that AI might replace human roles and emphasizing the importance of retaining tasks that require human creativity and inspiration.
EVIDENCE
He asks how AI will affect management consultants and their business, seeking insight into replacement versus value creation [365].
MAJOR DISCUSSION POINT
AI impact on consulting profession
Audience Member 3
2 arguments · 142 words per minute · 94 words · 39 seconds
Argument 1
AI in governance currently shifts power toward institutions rather than citizens
EXPLANATION
The audience member asserts that, despite AI’s potential to empower citizens, current implementations tend to concentrate power with institutions that have the resources to leverage AI.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S36 offers a counterpoint, noting that institutions possess the resources to dominate AI deployment, suggesting a power shift toward institutions rather than citizens [S36].
MAJOR DISCUSSION POINT
Power dynamics in AI‑enabled governance
Argument 2
Early regulatory steps (e.g., social‑media restrictions for youth) can signal accountability and shape industry behavior
EXPLANATION
The audience member highlights that imposing restrictions on technology for minors can act as a precedent for AI regulation, encouraging responsible industry practices.
MAJOR DISCUSSION POINT
Regulatory precedents for AI
Audience Member 4
2 arguments · 127 words per minute · 91 words · 42 seconds
Argument 1
Algorithms learn utility functions set by owners, leading to drift in user preferences over time
EXPLANATION
The audience member notes that recommendation algorithms are programmed with utility functions defined by platform owners, which can gradually shift users’ preferences in directions aligned with those owners’ goals.
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion in S4 about recommendation systems using owner-defined utility functions and causing preference drift aligns with this audience observation [S4].
MAJOR DISCUSSION POINT
Algorithmic ownership and preference drift
Argument 2
AI models trained on copyrighted material raise consent and IP issues; legal resolution is pending
EXPLANATION
The audience member raises concerns that AI systems are trained on artists’ and creators’ works without consent, creating intellectual‑property disputes that are currently being litigated.
EVIDENCE
He asks whether AI-generated content violates developers’ rights, citing examples of singers whose voices are reproduced and questioning the ethical implications [462-467].
MAJOR DISCUSSION POINT
IP and consent in AI training data
Audience Member 5
1 argument · 186 words per minute · 196 words · 62 seconds
Argument 1
Instant AI feedback can bypass step‑by‑step learning, risking shallow understanding; calls for structured guidelines
EXPLANATION
The audience member worries that AI tools providing immediate answers prevent students from engaging in the gradual learning process, and suggests the need for regulatory or guideline frameworks to ensure educational AI supports deep learning.
EVIDENCE
He describes how instant AI feedback leads students to skip foundational steps, and asks whether professors have been approached to develop structured, step-by-step AI tools for education [481-486].
MAJOR DISCUSSION POINT
AI in education and learning depth
Kushe Bahl
2 arguments · 188 words per minute · 1945 words · 620 seconds
Argument 1
AI will reshape rather than simply replace jobs, creating new value through personalization
EXPLANATION
Bahl explains that while AI can automate routine tasks, its greatest economic impact comes from enabling personalized services that generate new revenue streams, thereby reshaping job roles rather than merely eliminating them.
EVIDENCE
He cites examples such as AI replacing call-center work with only limited cost savings, and emphasizes that personalized customer engagement engines can increase revenue by 10% with high margins, delivering far greater value than simple cost cuts [186-199].
MAJOR DISCUSSION POINT
Job transformation and value creation
AGREED WITH
Professor Seth Bullock
Argument 2
Reshape (as answer to rapid‑fire question about AI’s impact on jobs)
EXPLANATION
In the rapid‑fire segment, Bahl succinctly states that AI will reshape jobs rather than merely replace or polarize them.
EVIDENCE
He answers “Reshape” to the moderator’s rapid-fire question about AI’s impact on jobs in India [257].
MAJOR DISCUSSION POINT
Rapid‑fire view on job impact
Moderator
1 argument · 147 words per minute · 1619 words · 659 seconds
Argument 1
Rapid‑fire insights highlight differing views on bias, power shift, and who benefits from AI
EXPLANATION
The moderator summarizes a rapid‑fire round where panelists offered brief, contrasting perspectives on algorithmic bias, the direction of power in AI‑governance, and whether companies or employees stand to gain most from AI.
EVIDENCE
During the rapid-fire, Antaraa said AI shifts power to citizens, Manjunath argued it shifts to institutions, and Bahl answered that AI will reshape jobs, illustrating varied viewpoints on bias, power, and benefit distribution [237-245][248-251][257][281-284].
MAJOR DISCUSSION POINT
Diverse panel perspectives in rapid fire
Agreements
Agreement Points
AI should be designed for population‑scale coordination rather than isolated individual queries
Speakers: Professor Seth Bullock, Antaraa Vasudev, Professor Nirav Ajmeri
Population‑scale AI should enable coordination of whole communities rather than single‑user queries
AI can empower citizens by aggregating massive feedback and informing policy decisions
Multi‑agent approaches can model socio‑technical systems to achieve socially optimal outcomes
All three speakers stress that AI systems need to operate at the scale of whole populations or societies, coordinating many users (e.g., flood victims, citizens providing feedback) and moving beyond single-question interactions to achieve collective benefits [28-30][31-33][121-130][47-52][55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
This view reflects an emerging consensus that AI systems should function as shared digital public infrastructure, enabling coordinated outcomes across whole societies rather than siloed personal assistants. The need for systematic, population-scale approaches is highlighted in discussions on building digital public infrastructure for AI [S51] and in Professor Bullock’s argument that coordination itself is a form of intelligence supporting entire populations [S50].
Transparency and accountable governance are essential for AI deployment in the public sector
Speakers: Antaraa Vasudev, Professor Manjunath, Speaker 3
AI can both help citizens voice concerns and optimize governmental processes; transparency is essential
Governments should act as enablers and monitors, not micromanage technology development
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Vasudev calls for transparent, accessible, and equitable AI frameworks for citizen engagement, Manjunath warns against government micromanagement and advocates an enabling, monitoring role, while Speaker 3 points to early regulatory experiments as necessary safeguards [109-115][288-290][139-152][155-162][409-416].
POLICY CONTEXT (KNOWLEDGE BASE)
The UN Security Council emphasized that AI systems must be transparent, explainable and accountable to maintain public trust and ensure ethical outcomes, framing these principles as core to AI governance in the public sector [S44].
AI will reshape jobs and create new value rather than simply replace workers
Speakers: Kushe Bahl, Professor Seth Bullock
AI will reshape rather than simply replace jobs, creating new value through personalization
AI will break down barriers between people, enabling richer interactions that were previously impossible
Bahl emphasizes that AI’s biggest economic impact comes from personalized services that generate new revenue, reshaping roles, while Bullock highlights AI’s potential to connect people and enable capabilities beyond human limits, both indicating a transformation rather than outright replacement [257][186-199][301-309].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple expert panels have argued that AI will transform work by augmenting human capabilities and generating new employment opportunities, rather than causing wholesale displacement. This perspective appears in discussions on AI’s impact on jobs in India [S46], policy-focused forums stressing complementary design choices [S47][S48][S49], and ILO research showing generative AI can enhance employment prospects [S65][S66].
Building public understanding and capacity is crucial for responsible AI adoption
Speakers: Kushe Bahl, Professor Seth Bullock
Students need to focus on how to use AI across fields and equip themselves with relevant skills
Uplifting public understanding of AI will protect against malicious uses
Both Bahl and Bullock argue that widespread AI literacy (students learning to apply AI in their domains and the general public grasping AI's implications) is essential to mitigate risks and harness benefits [226-230][250-251].
POLICY CONTEXT (KNOWLEDGE BASE)
The AI Policy Research Roadmap calls for capacity-building initiatives to raise awareness and enable effective navigation of AI systems in the public sector [S43]. Complementary efforts on AI literacy, such as introducing AI education from primary school onward, reinforce the policy priority of public understanding [S59][S57].
Similar Viewpoints
Both speakers stress that as AI becomes more autonomous and agentic, its design must incorporate social responsibility and governance mechanisms to prevent unintended harms, calling for collaborative oversight rather than unchecked deployment [58-65][139-152][155-162].
Speakers: Professor Seth Bullock, Professor Manjunath
Future AI agents will act purposively and communicate with each other, requiring embedded social responsibility
Governments should act as enablers and monitors, not micromanage technology development
Both see the state’s role as facilitating transparent, citizen‑centric AI tools while avoiding heavy‑handed control, emphasizing enabling frameworks that protect public interest [109-115][288-290][139-152][155-162].
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions
Governments should act as enablers and monitors, not micromanage technology development
Unexpected Consensus
Agreement across diverse participants that early regulatory interventions (e.g., bans for minors) are a useful experiment for AI governance
Speakers: Speaker 3, Professor Manjunath, Audience Member 3
Regulatory guardrails (e.g., bans for minors) are needed to limit amplified harms
Governments should enable and monitor rather than micromanage, implying a need for early safeguards
AI in governance currently shifts power toward institutions, suggesting regulation is required
While the speakers came from different domains (policy, academia, and the audience), their statements converge on the idea that early, targeted regulatory steps are valuable for managing AI's societal impact, a consensus not explicitly anticipated at the start of the panel [409-416][139-152][155-162].
POLICY CONTEXT (KNOWLEDGE BASE)
High-level consensus on safeguarding children through targeted AI restrictions has been documented in UN-backed child-focused AI governance forums, which view early bans for minors as a pragmatic experiment [S61]. Similar multi-stakeholder dialogues favor targeted, harm-focused interventions over sweeping legislation [S60][S52].
Overall Assessment

The panel largely converged on four core themes: (1) AI must be built for collective, population‑scale coordination; (2) transparent, accountable governance and early regulatory guardrails are essential; (3) AI will reshape rather than merely replace jobs, creating new value; and (4) capacity building and public understanding are critical for responsible adoption.

High consensus across speakers on these themes, indicating a shared belief that AI’s future benefits hinge on coordinated design, transparent governance, and widespread capacity development. This alignment suggests strong support for policies that promote collective AI solutions, enforce transparency, and invest in education and public awareness.

Differences
Different Viewpoints
Who gains power from AI in governance – citizens or institutions
Speakers: Antaraa Vasudev, Professor Manjunath
AI can empower citizens by aggregating massive feedback and informing policy decisions (Antaraa)
AI in governance shifts power toward institutions that have the resources to invest and control AI (Manjunath)
Antaraa asserts that AI shifts power to citizens by enabling their voices to be heard (e.g., the Maharashtra chatbot project) [237][121-130]. Manjunath counters that, in practice, AI gives institutions the advantage because they control the data, funding and deployment, making it hard for citizens to compete [294-296].
Approach to government involvement in AI – enable‑and‑monitor vs regulatory guardrails
Speakers: Professor Manjunath, Speaker 3, Antaraa Vasudev
Governments should act as enablers and monitors, avoiding micromanagement of technology projects (Manjunath)
Early regulatory steps such as bans for minors are needed to limit amplified harms and signal accountability (Speaker 3)
AI governance requires transparent, equitable frameworks before scaling, implying some level of oversight (Antaraa)
Manjunath warns that government micromanagement leads to failure (e.g., CDOT, Japan’s Fifth Generation) and recommends an enabling role with monitoring [139-152][155-162]. Speaker 3 argues that strict bans for children (Spain, Australia) are useful guardrails, suggesting a more proactive regulatory stance [409-416]. Antaraa calls for transparent, accessible frameworks, indicating a need for structured oversight rather than pure hands-off enabling [109-115]. The three positions diverge on how much direct regulation is appropriate.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent policy debates highlight a split between action-oriented, enable-and-monitor models and calls for explicit guardrails. Some reports advocate focusing on practical outcomes and innovation-friendly approaches rather than heavy regulation [S52], while others stress the necessity of guardrails to balance trust and risk [S53][S54][S60].
How bias in recommendation systems should be addressed – hide bias vs reduce bias
Speakers: Professor Manjunath, Moderator (implicit)
Algorithms tend to hide bias and may increase it over time (Manjunath)
The rapid‑fire question asked whether algorithms today are more likely to reduce bias or hide bias, implying an expectation of reduction (Moderator)
When asked about bias, Manjunath responded that algorithms are more likely to hide bias and may even increase it, showing skepticism about current mitigation efforts [239-245]. The moderator’s framing of the question suggested a hope that bias could be reduced, revealing a tension between expectations of bias reduction and Manjunath’s assessment that bias is being concealed.
Unexpected Differences
Perceived impact of AI on jobs – replacement vs value creation
Speakers: Audience Member 1, Kushe Bahl
Concern that AI will replace management consultants and reduce human‑centric tasks (Audience Member 1)
AI will reshape jobs by creating new value through personalization rather than simply replacing roles (Bahl)
The audience member expressed anxiety that AI might replace consultants, whereas Bahl argued that the real economic benefit comes from AI-enabled personalization that creates new revenue streams and reshapes work, not wholesale replacement [365][186-199][257]. This contrast between fear of job loss and optimism about job transformation was not anticipated given the broader discussion on AI for collective good.
POLICY CONTEXT (KNOWLEDGE BASE)
Expert analyses consistently argue that AI is more likely to create value and new roles than to replace workers outright, countering alarmist narratives about job loss. This view is supported by discussions on AI reshaping work in India and global forums, as well as ILO and Anthropic reports highlighting augmentation over replacement [S46][S47][S48][S49][S64][S65][S66].
Effectiveness of AI‑generated educational content
Speakers: Professor Manjunath, Professor Seth Bullock
AI‑generated essays and presentations lack ‘soul’ and are not inspiring for learning (Manjunath) Bullock envisions AI breaking down barriers and enabling richer, meaningful interactions (Bullock)
Manjunath criticizes AI output for being correct but soulless, limiting its educational value [375-378]. Bullock, while not directly addressing education, promotes AI as a means to connect people and facilitate deep collective interaction, implying a more positive view of AI’s educational potential [301-307]. The tension between AI’s perceived superficiality and its potential to enhance learning was not a primary focus of the panel, making it an unexpected point of disagreement.
Overall Assessment

The panel displayed several substantive disagreements, chiefly around who benefits from AI in governance (citizens vs institutions), the appropriate level of government intervention (enabling vs regulatory guardrails), and how bias in algorithmic systems should be handled. While there was broad consensus that AI should serve collective good and that system‑level coordination is essential, the pathways to achieve these goals diverged sharply.

Moderate to high – the core philosophical split on power dynamics and regulatory philosophy could shape policy outcomes significantly. The disagreements suggest that without a shared framework for governance, AI initiatives may oscillate between citizen‑centric empowerment and institutional control, potentially limiting the realization of inclusive, equitable AI benefits.

Partial Agreements
Both agree that AI must move beyond isolated individual interactions toward coordinated, system‑level solutions. Bullock emphasizes population‑scale coordination for floods, disease, taxes [28-30][31-33]; Ajmeri stresses that intelligence emerges from interacting agents and that multi‑agent approaches are suited for collective problems like ride‑sharing and pandemics [36-46][47-56]. They differ in terminology (population‑scale AI vs multi‑agent modeling) but share the same overarching goal.
Speakers: Professor Seth Bullock, Professor Nirav Ajmeri
Population‑scale AI should coordinate whole communities rather than answer single‑user queries (Bullock) Multi‑agent systems can model socio‑technical interactions to achieve socially optimal outcomes (Ajmeri)
Both see AI as a tool for enhancing citizen participation and collective intelligence. Antaraa describes AI‑driven citizen engagement platforms and the need for transparency [109-115][121-130]; Bullock envisions AI breaking barriers between people and enabling richer interaction with governments [301-307]. Their convergence is on the desired outcome (empowered citizens), while their focus differs (transparent platforms vs agentic coordination).
Speakers: Antaraa Vasudev, Professor Seth Bullock
AI can empower citizens by providing access to information and enabling collective decision‑making (Antaraa) AI agents that communicate and coordinate can give people a greater sense of connection and a voice in collective decisions (Bullock)
Takeaways
Key takeaways
* AI should move from individual‑query tools to population‑scale coordination systems that can help whole communities manage floods, disease outbreaks, tax collection, etc. (Prof. Seth Bullock)
* Multi‑agent and socio‑technical approaches are essential for problems where many human and technical agents interact; they can improve social welfare in domains such as ride‑sharing, pandemic response, and public policy (Prof. Nirav Ajmeri)
* Recommendation and advertising algorithms act as powerful nudges that can reshape user preferences and often hide bias; the utility functions they optimise are set by owners, not users (Prof. Manjunath)
* AI in governance can both amplify citizen voice and optimise government processes, but transparency, accessibility, and equity must be built into frameworks (Antaraa Vasudev)
* Governments are better positioned as enablers and monitors rather than micromanagers of technology development; over‑directed projects tend to fail (Prof. Manjunath)
* AI will more likely reshape jobs than simply replace them, creating new value through personalization and automation of tasks that are infeasible for humans (Kushe Bahl)
* In education, unchecked AI feedback can bypass step‑by‑step learning, leading to shallow understanding; structured guidelines are needed (Audience Q5, Prof. Manjunath)
* Ethical concerns around AI‑generated content and IP arise because models are trained on copyrighted material without consent; consent‑based data collection is advocated (Prof. Seth Bullock)
* Early regulatory steps (e.g., age‑based bans on social media) signal accountability and can influence industry behaviour, though they are imperfect (Audience Q3, Prof. Seth Bullock)
Resolutions and action items
* Develop transparent, citizen‑centric AI frameworks for public services, emphasizing consent‑based data collection (Antaraa Vasudev)
* Encourage partnerships between researchers, private firms, and governments to build AI systems that serve whole populations rather than individual queries (Prof. Seth Bullock)
* Create guidelines for AI use in education that enforce step‑by‑step learning and prevent over‑reliance on instant AI answers (Audience Q5, Prof. Manjunath)
* Promote the design of AI agents with embedded social responsibility to mitigate unintended resource consumption and conflicts (Prof. Seth Bullock)
* Monitor and evaluate early regulatory experiments (e.g., youth‑focused bans) to inform future AI governance policies (Audience Q3, Prof. Seth Bullock)
Unresolved issues
* How to concretely shift AI‑enabled governance power toward citizens rather than institutions; the current perception is that power still leans toward institutions
* Effective methods for reducing hidden bias in recommendation systems and ensuring algorithms are accountable to public values
* Specific regulatory mechanisms that balance transparency with effectiveness of AI in public systems; no consensus reached
* Legal and practical solutions for intellectual‑property rights of creators whose works are used to train generative models
* Detailed strategies for up‑skilling the workforce and integrating AI into job roles without causing large‑scale displacement
* Implementation pathways for consent‑based data ecosystems at scale, especially in health or civic domains
* Standardised, enforceable guidelines for AI use in classrooms and assessment of learning outcomes
Suggested compromises
* Adopt a transparency‑first approach for AI in public systems while still pursuing effectiveness, acknowledging that transparency is a prerequisite for trust (Antaraa Vasudev)
* Governments act as enablers and monitors rather than direct developers, allowing private innovation to flourish while providing oversight (Prof. Manjunath)
* Introduce targeted, age‑based restrictions on AI‑enabled platforms as an interim safeguard while broader regulatory frameworks are developed (Audience Q3)
* Balance AI‑driven job automation with a focus on augmenting human‑centric tasks, reshaping roles instead of pure replacement (Kushe Bahl)
* Combine multi‑agent system design with ethical guidelines to ensure that emergent behaviours align with societal welfare (Prof. Nirav Ajmeri & Prof. Seth Bullock)
Thought Provoking Comments
Coordination is intelligence. Instead of AI answering individual questions, we can design AI systems that support whole populations—e.g., coordinating flood response, disease management, or tax collection.
Reframes AI from a personal tool to a societal coordination mechanism, highlighting a shift in purpose and scale.
Opened a new line of discussion about population‑level AI, prompting follow‑up questions on multi‑agent systems and leading the panel to explore how AI can be structured for collective coordination rather than isolated queries.
Speaker: Professor Seth Bullock
When AI becomes agentic, even a trivial request (like a picture of a dog on a skateboard) can trigger cascades of interactions that consume resources and potentially disadvantage others; we need to embed social responsibility into these agents.
Identifies a hidden risk of emergent, large‑scale AI interactions and calls for proactive ethical design.
Shifted the tone from optimism to caution, steering the conversation toward the unintended consequences of AI ecosystems and influencing later remarks about regulation and public understanding.
Speaker: Professor Seth Bullock
Recommendation systems are learning agents that actively shape preferences; depending on the utility function they optimize, they can dramatically alter users’ tastes over time, essentially acting as powerful advertisements.
Highlights how algorithmic design directly influences human behavior, moving beyond the notion of neutral tools.
Deepened the analysis of algorithmic nudging, leading to audience concerns about autonomy and prompting further discussion on bias, transparency, and the need for oversight.
Speaker: Professor Manjunath
In Maharashtra, we built a simple chatbot that collected 380,000 citizen inputs (voice notes, texts, drawings) and fed them into the policy‑making process; now every law must consider this citizen feedback.
Provides a concrete, scalable example of AI empowering citizens in governance, illustrating practical impact.
Grounded the abstract debate in a real‑world case, encouraging other panelists to discuss how AI can be used for civic engagement and influencing the later focus on transparency and decentralization.
Speaker: Antaraa Vasudev
Government micromanagement of technology (e.g., India’s CDOT and Japan’s Fifth Generation computing) often leads to failure; governments should act as enablers and monitors, not directors of tech development.
Offers historical evidence that challenges the assumption that state control ensures beneficial AI outcomes.
Prompted a re‑evaluation of the appropriate role of policy, influencing subsequent remarks about regulatory guardrails, rapid‑fire answers, and the need for agile, not heavy‑handed, governance.
Speaker: Professor Manjunath
AI should not just replace humans to cut costs; the real value lies in unlocking capabilities humans can’t achieve, like personalized customer engagement engines that can increase revenue far beyond the savings from automation.
Distinguishes between superficial cost‑cutting and transformative value creation, reframing the job‑impact narrative.
Redirected the conversation from fear of job loss to opportunities for new value, influencing later discussion on reshaping jobs and supporting small businesses.
Speaker: Kushe Bahl
My generation worried about TV; we adapted and became savvy consumers. Today’s youth will similarly adapt to AI, and we should listen to them rather than impose adult fears.
Provides a historical analogy that normalizes technological anxiety and emphasizes intergenerational dialogue.
Eased audience concerns, shifted the discussion toward empowerment and education, and set the stage for audience questions about youth and AI.
Speaker: Professor Seth Bullock
The biggest danger is not ChatGPT but platforms that exploit dopamine circuits (e.g., Instagram). AI amplifies existing harms; we need consent‑based models where users opt in to data sharing, not covert data harvesting.
Prioritizes consent and data ethics, pointing out that AI’s risks are extensions of existing platform issues.
Reinforced calls for transparent, consent‑driven AI systems, influencing the rapid‑fire debate on bias, transparency, and the role of government in setting guardrails.
Speaker: Professor Seth Bullock
Overall Assessment

These pivotal comments collectively steered the panel from a broad, metaphor‑driven introduction toward concrete, systemic considerations of AI. Professor Seth’s framing of coordination and agentic cascades introduced the need for societal‑scale design and ethical safeguards, while Professor Manjunath’s insights on recommendation systems and governmental overreach highlighted hidden influences and policy pitfalls. Antaraa’s Maharashtra case grounded the discussion in real‑world civic empowerment, and Kushe Bahl’s distinction between cost‑cutting and value creation reshaped the narrative around job impact. Together, these remarks deepened the conversation, prompted new topics (population AI, consent, governance models), and shifted the tone from speculative optimism to a nuanced, solution‑oriented dialogue.

Follow-up Questions
How can AI systems be designed to support whole populations (e.g., disaster response, tax collection) rather than individual queries?
Identifies a need to shift AI from individual assistance to coordinated population‑level services, requiring new technologies and delivery models.
Speaker: Professor Seth Bullock
What partnership models between researchers, companies, non‑profits, and governments are needed to develop AI for populations?
Highlights the importance of cross‑sector collaboration to create and deploy population‑scale AI solutions.
Speaker: Professor Seth Bullock
What interventions in AI promotion are required to avoid the ‘path of least resistance’ commercial tools and ensure socially beneficial outcomes?
Calls for policy or strategic guidance to steer AI development toward public‑good applications rather than purely profit‑driven tools.
Speaker: Professor Seth Bullock
How can social responsibility be embedded into agentic AI to prevent resource contention and unintended societal consequences?
Points to the need for research on designing AI agents that consider the impact of their actions on other agents and on society.
Speaker: Professor Seth Bullock
What are the emergent behaviors and cascading resource consumption effects when AI agents interact at scale (e.g., trivial requests causing large cascades)?
Raises concerns about scalability and externalities of interconnected AI agents, requiring study of systemic impacts.
Speaker: Professor Seth Bullock
What governance frameworks are needed to ensure transparency, accessibility, and equity when AI is used in public systems?
Emphasizes the necessity of designing transparent and equitable AI frameworks before large‑scale deployment in governance.
Speaker: Antaraa Vasudev
How should regulatory and policy frameworks be designed to prevent premature racing to the next AI model without adequate safeguards?
Calls for research on creating timely regulations that balance innovation with safety and public interest.
Speaker: Antaraa Vasudev
How do recommendation systems shape human preferences and potentially amplify bias, and how can this impact be measured?
Identifies a gap in understanding the magnitude of preference manipulation and bias amplification by recommendation algorithms.
Speaker: Professor Manjunath
What methods can be developed to detect and mitigate hidden bias in recommendation algorithms?
Points to the need for technical solutions and standards to address bias that is not immediately visible.
Speaker: Professor Manjunath
What is the effectiveness of AI and social‑media restrictions for minors (e.g., bans in Spain and Australia) and what guardrails are appropriate?
Seeks empirical evaluation of regulatory experiments aimed at protecting children from AI‑driven harms.
Speakers: Speaker 3 (unnamed) and Professor Manjunath
How can appropriate guardrails be established for AI deployment in the public sector without stifling innovation?
Calls for research on balancing rapid AI adoption with necessary oversight in government contexts.
Speaker: Professor Manjunath
How can AI be leveraged to create value for small businesses and self‑employed workers rather than merely replacing jobs?
Suggests investigation into low‑cost AI solutions that augment income for millions of micro‑entrepreneurs.
Speaker: Kushe Bahl
What design principles and regulatory guidelines are needed for AI tools in education to promote step‑by‑step learning rather than instant gratification?
Highlights a gap in current AI‑enabled educational tools and the need for structured, pedagogically sound frameworks.
Speakers: Audience Member 5; Professor Manjunath
What are the legal and ethical implications of AI‑generated content that uses works of deceased artists, and how should consent and IP be managed?
Raises concerns about copyright, consent, and the need for new legal frameworks for AI‑generated creative works.
Speaker: Professor Seth Bullock
How can AI systems aggregate individual preferences into collective decisions while ensuring fairness, transparency, and accountability?
Identifies a research challenge in designing AI‑mediated collective decision‑making mechanisms that maintain trust.
Speaker: Professor Nirav Ajmeri
How can AI increase citizens’ access to government entitlements and benefits through decentralized, disaggregated control mechanisms?
Calls for exploration of AI‑driven platforms that reduce information asymmetry and improve service delivery.
Speaker: Antaraa Vasudev
What mechanisms can enable AI to break down barriers between people (language, expertise, distance) to give citizens a real voice in governance?
Suggests research into AI‑facilitated rich, large‑scale citizen‑government interactions beyond simple voting.
Speaker: Professor Seth Bullock
What are the psychological and social impacts of young people relying heavily on conversational AI (e.g., ChatGPT) for personal issues, and how should society respond?
Points to a need for interdisciplinary study on AI’s influence on youth mental health and family dynamics.
Speakers: Professor Seth Bullock; Kushe Bahl; Antaraa Vasudev
How can AI regulation be structured to shift power towards citizens rather than institutions in governance contexts?
Indicates ongoing debate and need for research on power dynamics shaped by AI‑enabled governance tools.
Speaker: Rapid‑fire (Antaraa Vasudev) and subsequent discussion

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building the Future STPI Global Partnerships & Startup Felicitation 2026


Session at a glance: summary, keypoints, and speakers overview

Summary

The session, hosted by the Software Technology Parks of India (STPI), brought together government officials, industry leaders, and startup founders to discuss building a robust AI startup ecosystem in India [1-9].


STPI’s Director of Startups and Innovation, Rakesh Dubey, outlined the STPI portal as a unique global platform that aggregates incubators, accelerators, policy repositories, contests, a product marketplace and a hiring hub, enabling startups to manage their entire lifecycle online [11-20].


Bala, Managing Director of Strat Infinity, highlighted that AI is projected to contribute $15.7 trillion globally by 2030, with India’s GCCs expected to generate $150 billion in software exports and employ 3.5 million people, underscoring the sector’s economic potential [36-42].


He argued that while model, compute and funding are essential, the true scale of AI depends on integration into global organisations, a gap that lies in institutionalising AI rather than technology itself [44-55].


According to Bala, Global Capability Centers (GCCs) can bridge this gap by providing real data, infrastructure and enterprise validation; the co-creation model, in which startups work within GCC sandboxes, shortens the pilot-to-production cycle and accelerates scaling [56-63].


He further noted that the GCC ecosystem creates a multiplier effect on skill development, revenue and software exports, and called for dedicated co-creation platforms led by bodies such as STPI to institutionalise this collaboration [64-70][75-76].


Geetika Dayal of TiE Delhi NCR emphasized five structural pillars (knowledge building, resource access, market validation, funding and responsible AI) as essential for scaling innovation, and said that coordinated partnerships among STPI, GCCs, government and corporates make scaling inevitable [102-107].


Neerja Sekhar of the National Productivity Council announced a memorandum of understanding with STPI and presented a three-part framework for startups (trust, testbeds and traction) to move from ideas to societal impact, and stressed that STPI’s hubs and NPC’s benchmarking will support this pathway [140-148][149-152][160-162].


Arvind Kumar, STPI’s Director General, described STPI’s network of 70 centres and 24 domain-specific entrepreneurship hubs that provide incubation, seed funding and market access, and warned that AI products must be safe, trusted, responsible and ethical to achieve scale [175-194].


The ceremony proceeded with the formal exchange of MOUs between STPI and NPC, and between STPI and TiE Delhi NCR, signalling a strategic partnership to accelerate AI productivity and ecosystem development [80-84].


A subsequent felicitation ceremony recognised a dozen startups for achievements in revenue, funding, employment, women participation and AI-driven impact, illustrating the tangible outcomes of the STPI support stack [214-276].


Selected founders, such as the co-founder of Fuselage Innovations and the team behind EZO5 Solutions, shared how STPI’s mentorship, funding assistance and regulatory guidance enabled them to scale drone-based agriculture solutions and AI-powered oncology diagnostics respectively [279-332].


The session concluded with a vote of thanks that highlighted the contributions of all speakers, the importance of collaborative ecosystem building, and the promise of continued growth for India’s AI startup landscape [353-368].


Overall, the discussion affirmed that coordinated policy, platform resources, GCC integration and co-creation models are critical to scaling AI innovation and delivering measurable economic and social benefits across India [102-107][140-148][64-70].


Keypoints


Major discussion points


STPI’s digital platform as a one-stop ecosystem enabler – The portal aggregates incubators, accelerators, policy repositories and hosts contests, while recent upgrades add a product marketplace and a hiring hub that let startups post jobs and showcase products to global audiences [11-19][20-21].


Global Capability Centers (GCCs) as the bridge for AI startups – GCCs are projected to generate ≈ $150 billion in software exports and employ 3.5 million people by 2030 [36-43]. Bala stresses that the real scale-up challenge is not AI models but integrating them into enterprises, a gap that GCCs can fill through co-creation, sandbox environments and faster pilot-to-production cycles [44-55][56-63][64-78].


Collaborative ecosystem pillars needed to scale innovation – Geetika outlines five structural pillars – knowledge & capability building, resource access, market validation, funding, and ethical/responsible AI – and argues that coordinated programs (joint accelerators, AI benchmarking, export readiness) are essential for sustainable growth [90-94][102-107].


Framework for trustworthy AI deployment – Neerja Sekhar proposes a three-part model of trust (privacy, security, accountability), testbeds (real-world sandboxes, reference architectures), and traction (moving from pilots to full-scale implementation) as the key to turning AI ideas into societal impact [140-148].


Showcasing successful startups and their impact – The ceremony highlighted numerous startups (e.g., Fuselage Innovations, TectoCell, EZO5) that have leveraged STPI support to achieve revenue growth, funding milestones, employment generation, and real-world AI applications such as drone-enabled agriculture and AI-driven oncology diagnostics [214-276][279-287][292-299][332-338].


Overall purpose / goal of the discussion


The session was convened to “scale AI innovation and build a robust AI startup ecosystem” by bringing together government, industry, GCCs, and startup founders, sharing strategic insights, formalising partnerships (e.g., the STPI-NPC MOU) and celebrating concrete startup achievements [1][9][123-124].


Overall tone and its evolution


– The meeting opens with a formal, courteous welcome from STPI leadership [1][2].


– It shifts to an optimistic, forward-looking industry perspective when Bala describes the AI-driven economic transformation [31-38].


– The tone becomes collaborative and policy-focused during the government and TiE remarks on ecosystem pillars [82-89][102-107].


– A celebratory, appreciative tone dominates the felicitation segment, highlighting startup successes [214-218][279-287][332-338].


– It concludes with a grateful, unifying note of thanks and encouragement for continued partnership [353-360].


Overall, the discussion maintains a consistently positive and constructive tone, moving from informational briefings to inspirational calls for collaboration, and culminating in a celebratory acknowledgment of achievements.


Speakers

Shelly Sharma – Deputy Director, Software Technology Parks of India (STPI); Host/Moderator of the session and presenter of startup felicitation ceremony.


Vaani Kapoor – Manager, STPI; Co-host of the session, introduced speakers and facilitated agenda.


Sh. Rakesh Dubey – Director, Startups and Innovation, STPI; Delivered the opening address and highlighted STPI’s digital platform and services.


Sh. Bala MS – CEO, Strat Infinity; Provided industry perspective on Global Capability Centers (GCCs) and their role in scaling AI startups. Expertise: GCCs, AI ecosystem, industry-academia collaboration. [S1][S2]


Arvind Kumar – Director General, STPI; Gave the keynote address outlining STPI’s nationwide presence and the importance of safe, trusted, responsible AI.


Ms. Geetika Dayal – Director General, TiE Delhi NCR (STPI partner); Discussed mentorship, acceleration, and ecosystem support for startups.


Ms. Neerja Sekhar – IAS, Director General, National Productivity Council (NPC); Presented the special address on productivity, trust, testbeds, and traction for AI startups. Expertise: Productivity, AI policy, ecosystem scaling. [S14][S15]


Arita Dalan – Regional Head (North), SecureTech IT Solutions Private Limited; Spoke on cybersecurity solutions and STPI’s role in industry connections. Expertise: Cybersecurity, enterprise-startup collaboration. [S16][S17]


Dr. Soumya – Founder, TectoCell; Presented the startup journey focusing on AI-powered diagnostic solutions in radiology and DNA sequencing. Expertise: AI in healthcare, diagnostics.


Devika Chandrasekaran – Co-founder, Fuselage Innovations; Shared the startup story on drone technology for agriculture, defence, and disaster management. Expertise: Drone tech, agritech, defence applications. [S5][S6]


Kirty Datar – Representative, Caneboard Solutions Private Limited; Highlighted deep-tech positioning and STPI’s credibility boost for the company. Expertise: Deep-tech, startup scaling. [S21]


Milind Datar – Representative, Caneboard Solutions Private Limited; (No specific role details provided).


Meenal Gupta – Founder, EZO5 Solutions; Described AI-driven oncology treatment planning platform “Imagix AI”. Expertise: AI in oncology, precision medicine.


Noor Fatma – Co-founder, EZO5 Solutions; Co-presented the EZO5 journey and impact on healthcare diagnostics. Expertise: AI healthcare, imaging analytics. [S31]


Praveen Kumar – Joint Director, STPI; Delivered the vote of thanks and presented mementos to dignitaries.


Additional speakers:


Ashok Gupta – Director, STPI Gurugram; Presented mementos during the ceremony.


Sanjay Gupta – Senior Director, STPI; Invited to grace the dais and participated in the MOU exchange.


Atul Kumar Singh – Additional Director, STPI; Presented mementos to dignitaries and participated in the ceremony.


Other dignitaries (e.g., “DG Sir” and “DG Ma’am” references) were mentioned but not identified by name in the provided speakers list.


Full session report: comprehensive analysis and detailed insights

The session opened with a formal welcome from Shelly Sharma, Deputy Director of the Software Technology Parks of India (STPI), who thanked the dignitaries, industry leaders and the audience for joining a discussion on “Scaling Innovation, Building a Robust AI Startup Ecosystem” [1-2]. Vaani Kapoor, STPI Manager, then introduced the chief guest, Ms Neerja Sekhar, IAS, Director-General of the National Productivity Council, and outlined the agenda of bringing together government, industry and the startup community to deliberate on a future-ready AI landscape [3-10].


Rakesh Dubey, Director of Start-ups and Innovation at STPI, used the opening address to showcase the STPI digital portal as a one-of-its-kind, global-scale platform that aggregates incubators, accelerators, policy documents and contest management [11-13]. He highlighted that the portal also hosts contests from any incubator worldwide and acts as a repository of government policies [14-16], and described recent upgrades – a product marketplace where startups can exhibit offerings and a hiring hub that matches niche talent with startup needs – stressing that the portal supports the entire startup lifecycle online, with further features continuously being added [11-20].


Following Dubey’s remarks, Vaani Kapoor played a short audio-visual presentation titled “STPI Startup Ecosystem – Drive Impact”, which illustrated how the portal and related programmes translate innovation into measurable outcomes across the country [26-29].


The industry perspective was then delivered by Sh. Bala M.S., Managing Director of Strat Infinity. He projected that AI could contribute roughly $15.7 trillion to the global economy by 2030, of which about $5 trillion would stem from productivity gains [36-40]. For India, he forecast the emergence of more than 3,500 Global Capability Centres (GCCs) by 2030 – up from 1,900 today – generating an estimated $150 billion in software exports and employing 3.5 million engineers [41-43]. He added that approximately 40-50% of GCC activity is focused on R&D [55-57]. Bala argued that the true scale-up challenge is not the AI model, compute power or funding per se, but the integration of AI into global organisations; the gap lies in institutionalising AI rather than in the technology itself [44-55]. He proposed that the co-creation model is currently the only model enabling AI startups to embed their solutions within GCCs, providing sandbox environments, domain expertise and production-grade pathways [56-63] and creating a multiplier effect on skill development, revenue and software exports [64-70].


Ms Geetika Dayal, Director-General of TiE Delhi NCR, reinforced the need for a coordinated ecosystem. She identified five structural pillars for scaling innovation – knowledge and capability building, resource access, market validation, funding, and ethical/responsible AI – and called for joint accelerators, expanded “Samarth” programmes, corporate challenge initiatives and AI-benchmarking reports to move beyond isolated efforts [90-98][102-107]. Dayal also highlighted the “Deep Ahead” and “Samarth” programmes as specific initiatives to accelerate adoption [95-97].


Representing the National Productivity Council, Ms Neerja Sekhar announced a memorandum of understanding (MoU) with STPI and presented a three-part framework for AI startups: (i) trust – encompassing privacy, cyber-security, transparency and accountability; (ii) testbeds – real-world sandboxes, reference architectures and labs; and (iii) traction – converting pilots into full-scale deployments [140-148]. She framed this as the “Three Sutras of People, Planet and Progress” [150-152], positioning STPI’s nationwide hubs and NPC’s benchmarking capability as the “trust-testbed-traction” pathway that will accelerate responsible digital transformation across MSMEs, clusters and AI-driven startups [149-152][160-162].


Arvind Kumar, Director-General of STPI, provided a complementary view of STPI’s physical footprint, noting 70 centres (including 24 domain-specific entrepreneurship hubs) across Tier-2 and Tier-3 cities that deliver incubation, seed funding, market access and ancillary services such as BAPT, network security, data centres and cloud services in PPP mode [165-180]. He warned that AI products must be safe, trusted, responsible and ethical to achieve scale, distinguishing “responsible” AI (fairness, accountability, bias-free outcomes) from “ethical” AI (environmental stewardship, job creation) and underscoring accountability – for example, who is liable if an autonomous vehicle causes an accident [191-199].


The ceremony then moved to the formal exchange of MoUs. After Vaani’s invitation, Shri Ashok Gupta (STPI, Gurugram) and Shri Nikhil Panchabai (NPC) signed the first MoU, followed by a second agreement between STPI and TiE Delhi NCR signed by Shri Sanjay Gupta and Ms Dayal [207-214]. These agreements symbolised a strategic partnership aimed at jointly scaling AI productivity and ecosystem development [80-84].


Subsequently, the startup felicitation ceremony recognised a dozen enterprises, grouped by award category:


* Revenue – Phoenix Marine Exports (₹25 cr) and Suhora Technologies (₹50 cr)


* Funding – Vimeo Consulting (₹25 cr) and Puvation Technology Solutions (₹50 cr)


* Employment generation – Swada Agri and Mobile Pay E-Commerce


* Women-employment – Strangify Technologies


* AI-impact – Sequera Tech and Devnagri AI


* Innovation – Dactrosel Healthcare, EZO5 Solutions (promising) and Connector Foods (second-place)


All awards were presented by senior dignitaries, underscoring the tangible outcomes of STPI’s support stack [214-276].


Founders then shared their journeys. Devika Chandrasekaran, co-founder of Useless Innovations, recounted how participation in STPI’s “Scout 2021” programme provided early validation and confidence, enabling the company to manufacture drones for agriculture, defence and disaster-management, serving over 10,000 farmers and earning a National Startup Award presented to the Prime Minister [279-281][282-287]. Dr Soumya of TectoCell described AI-powered diagnostic solutions that combine radiology, DNA sequencing and drug-resistance analysis, crediting STPI’s regulatory guidance, data-access facilitation and global collaboration for enabling a rapid scale-up from India to the world stage [292-299]. Arita Dalal of SecureTech highlighted the firm’s end-to-end cybersecurity offerings for sectors ranging from pharma to banking, and thanked STPI for mentorship, investor connections and industry linkages that have bolstered the company’s market reach [304-312][315-320]. Kirty Datar added that STPI’s recognition has strengthened his startup’s credibility with customers, investors and government stakeholders [323-325]. Finally, Noor Fatma and Meenal Gupta of EZO5 Solutions explained how their AI-driven oncology treatment-planning platform (“Imagix AI”) processed over one million scans, identified thousands of TB and lung-cancer cases, and, after a critical cash-flow crunch, received STPI’s assistance to raise capital, leading to rapid adoption, a meeting with the Prime Minister and interest from Bill Gates and Microsoft [329-338].


The session concluded with a formal vote of thanks by Praveen Kumar, who thanked all speakers, dignitaries and founders, reiterated the importance of collaborative ecosystem building, and invited everyone for a group photograph [353-368].


Across the discussion, a strong consensus emerged that scaling AI innovation in India hinges on coordinated collaboration among STPI, GCCs, the National Productivity Council and industry partners; on building trust through privacy, security, transparency and accountability; and on providing concrete testbeds and co-creation sandboxes to bridge the gap between prototype and production [140-148][44-55][56-63][102-107].


However, speakers diverged on the primary bottleneck. Bala argued that the integration gap – the lack of institutional pathways for embedding AI into global organisations – is the chief obstacle [44-55], whereas Arvind Kumar stressed that without responsible and ethical AI – particularly fairness and clear accountability – startups cannot gain the trust needed for scale [191-199].


Suggested next steps mentioned by speakers included: Bala’s call for co-creation platforms that provide GCC sandboxes [56-63]; Dayal’s appeal for joint accelerators, scaling up the Samarth programme and producing AI-benchmarking reports [95-103]; Dubey’s push to continue enriching the STPI portal with additional marketplace and hiring functionalities [11-20]; Kumar’s emphasis on leveraging the physical network of 70 incubation centres [165-180]; the NPC-STPI MoU’s “trust-testbed-traction” pathway [140-148]; and the joint IP framework that is currently under discussion [75-76].


In summary, the summit demonstrated that India’s AI startup ecosystem is moving from isolated pilots to a coordinated, multi-layered support system that combines a unique digital portal, an extensive physical incubation network, GCC-driven co-creation sandboxes and clear governance frameworks for trust, testbeds and traction. By aligning policy, infrastructure and market-access levers, the stakeholders aim to translate AI-driven innovation into measurable economic growth, productivity gains and societal benefits across the nation [102-107][140-148][64-70].


Session transcript

Complete transcript of the session
Shelly Sharma

Good afternoon, everyone. On behalf of Software Technology Parks of India, I extend a very warm welcome to all the dignitaries on the dais and the entire audience to today’s session on Scaling Innovation, Building a Robust AI Startup Ecosystem. I am Shelly Sharma, Deputy Director, STPI, and it is my privilege to host this session.

Vaani Kapoor

Good afternoon, everyone. I am Vaani Kapoor, Manager, STPI, your co-host for the session. May I now begin by respectfully welcoming our guests, our distinguished dignitaries on the dais: our chief guest for today, Ms. Neerja Shekhar, IAS, Director General, National Productivity Council; Sri Arvind Kumar sir, Director General, STPI; Sri Rakesh Dubey sir, Director, Startup and Innovation, STPI; Sri Bala MS, CEO, Strat Infinity; and Ms. Geetika Dayal, Director General, TiE Delhi NCR; and all other senior officials, ecosystem partners, startup founders and delegates present here today. We are truly honored by your presence. Today’s session brings together government, industry and the startup ecosystem to deliberate on building a future-ready AI innovation landscape while also celebrating startups that have demonstrated measurable impact across revenue, employment and business.

Government, innovation and inclusion. Without further ado, may I now invite Sri Rakesh Dubey sir, Director, Startup and Innovation, to kindly deliver the opening address. Sir, please.

Sh. Rakesh Dubey

incubators, accelerators, even state governments, academia, everyone can come to this platform and find the resources that they need here. This platform also serves as a repository of various government policies that come from time to time. It also serves as a platform where not just STPI, but any incubator anywhere in India or even the world can host their contest, get their applications invited, get the results published after screening and evaluation, and further handhold that startup’s entire life cycle online. This portal is, I think, a one-of-its-kind portal, not just in India, but across the world. It is a very valuable thing, and we are adding more and more features to it as time goes on.

For example, we have added features like a product marketplace as well as a hiring hub on it, using which a startup wanting a niche manager can post its requirement and individuals can apply against it. An individual looking for a niche job can post his resume here and probably a startup can pick it up. It also has a feature called product marketplace in which any startup can post its product for anyone to see. And if any viewer finds interest in it, the two can interact together via this platform. That being said, STPI is always looking to do more and more things to support innovation and startups across India as well as the world.

And we would welcome any thoughts from you. And there are many experts lined up; I am sure you will hear many more learnings from them also. With that, I thank you everyone and hope to see you. Thank you very much.

Vaani Kapoor

Thank you so much, sir, for setting the context so beautifully and highlighting STPI’s growing national impact. Now may I request the technical team to play the short audio-video presentation titled STPI Startup Ecosystem Drive Impact. Thank you, team. That gives us a powerful snapshot of how innovation is translating into real outcomes across the country. Now, to share insights from the industry and global capability center perspective, may I invite Shri Bala MS, CEO, Strat Infinity. Please come.

Sh. Bala MS

Very good afternoon. Namaste. Dr. DG, DIG, LPC, DG, hi. Good friend here, Rakesh. And everyone, very good afternoon. Thank you for the opportunity. Scaling innovation, building a robust AI, the perspective from GCC is going to… be phenomenal. We are not just living through the AI wave. We are rather living in the AI restructuring of the global economy. If you look at the overall, if you look at the AI contribution, about 15.7 trillion dollars by 2030 globally, whereas close to about 5 trillion dollars are going to be in productivity. That is an enormous opportunity. If you look at India by 2030, there are going to be almost 3,500-plus GCCs, which is about 1,900 today, going to contribute close to about 150 billion dollars of software exports, and there are going to be 3.5 million employees dedicatedly working for global capability centers.

Those are the very high-level statistics for 2030. These are not just employment statistics, my dear friend; this is more like enterprise-grade innovation infrastructure at a national scale. Right. That said, if you look at AI leadership globally, most conversation comes down to three things. Right. First is the model that we are building. Second comes the power, the compute power. The third may come under the funding perspective. Nothing wrong about it; all these three things are important. But in my experience working with global organizations, the scale is not determined by the AI you build. The scale is determined by how your AI gets integrated into the global organization. That’s where today the fundamental gap is.

And if you look at the real competitive advantage, in fact, the experimentation is really abundant today. But institutionalization is really limited. So that is where the real challenge and gap come from. And the GCC component steps in at this point. This is an inflection point in my view. If you look at the transformation: there are about 1,900-plus GCCs in the country today. In those days they were looked at more as cost centers or labor arbitrage centers, but today they are more engineering centers, R&D centers. If you look at it, more than 40-50% of India’s GCC activity is on R&D today. Emerging technologies like AI, cybersecurity, product development, a lot of new things have been developed out of the global organizations. Now India is considered to be a digital talent center for the global organizations, which is not tactical but truly very, very strategic, my dear friends.

If you look at the startup ecosystem, that’s where the main thing comes from. The venture capital, the investors, STPI and a lot of government organizations played a phenomenal role in making sure the grants are given, which is very, very important; that accelerates AI innovation, and a lot of funding has come. But capital alone cannot solve this friction. That’s a very important one: capital cannot solve the friction. In fact, when you look at global surveys, the recent AI study report shows that a majority of enterprises are piloting AI but only a minority have scaled it across business units. And the gap is not a technology problem; it’s not a technology gap. It is an operational readiness, an organizational readiness gap. That’s the biggest gap; it’s not the technological capability. Having said that, when any AI tool gets into any organization, and the startups come from across India, you need to pass through the risk, the compliance, security, the fitment, the global workflow design; there are a lot of challenges that you come across. Again, that is where the GCC component comes in, because working with a global organization, with the enterprise organizations, is totally different. On one side we say India has a lot of startups and we are doing phenomenally well; at the same time, market access to the global organizations is a really big question mark. That’s where the GCC component comes in. Why does the GCC matter for the startup? It matters a lot. Here is the thing, right: today, what is needed for AI startup companies? You need a real data set, you need real infrastructure capability, you need to have enterprise validation. So who is going to give that? Who is going to trust your model and put it in their system? That’s the biggest question mark. Again, that is where the GCC comes in, where it is going to be the bridge between the startup ecosystem and the enterprise organization, because you are going to work within the ecosystem, within the infrastructure capabilities, of the GCC. That gives the enterprises the confidence to try you, test you, work with you. And this is something very important: the co-creation model. There were days when startups were looked at as vendors or suppliers, but today the co-creation model is very, very powerful. In fact, from my personal experience at Strat Infinity, as you see, 24-plus COEs are there under STPI; we’ve been working with FinBlue, we’ve been working with the ICOE, where the global capability centers work with us through the STPI.

They’re able to identify a phenomenal startup and scale that startup for the global organization. So the co-creation model is the only model, in our context, by which the startups, the AI startups, can get into the global capability centers, because they provide a controlled sandbox, they provide domain expertise, they provide production-grade environments, gateways, pathways, which basically helps to reduce the pilot-to-production cycle, which globally remains the basic bottleneck of any AI startup. That can be solved by the co-creation model, my dear friends. And coming back to that global view, why is India unique today, right? The economic multiplier effect. See, with anything that comes into the GCC, the ecosystem grows, the value chain grows, the skill development happens, right?

And it generates a lot of revenue. Of course, the software exports increase. So there are a lot of possibilities; it is not just one employee, but something that indirectly helps many people to grow. So that is where India is unique, and the GCC ecosystem is going to make a phenomenal impact. And institutionalizing the model: what must happen next? If you look at the global comparison again, India plays a phenomenal role in terms of GCC density, talent and also the local ecosystem connect, even with things like STPI working on the GCC policy. A lot of such ecosystem connects truly help the global capability centers to adopt the AI ecosystem and make it work. That is why the strategic partnership is very important.

What must happen next is something very important. The co-creation platforms have to be formed. Especially, organizations like STPI are the right organizations to build this co-creation platform and build an enterprise sandbox, which is already there in the COEs but has to be nurtured from the GCC perspective. One of the large US multinational banks has done this with FinBlue, the Chennai COE, which has gained phenomenal success, making sure the FinTech COE under STPI has factored into the global ecosystem; the large global multinational bank has benefited out of that. And the joint IP framework, that’s another important thing which is still under discussion but is definitely getting into a better space. With that said, I just wanted to submit one aspect, which is the broader reflection: today, startups create innovation velocity whereas enterprises create scale, and between these two the global capability centers are the pathways between the innovation velocity and the enterprise scale, because they help you navigate the challenges that you get in the global enterprise ecosystem. Working in the GCCs helps you to take your product or your service, use it locally in the environment of the global ecosystem and get acceptance; even if it doesn’t work, it fails fast, and nothing is going to harm the GCC or the global organization.

That’s where the opportunity through the GCCs are truly evolving to work for your AI products to the larger ecosystem. Thank you so much. Jai Hind, Jai Bharat.

Vaani Kapoor

Thank you, sir, for your valuable industry perspective and for highlighting the role of GCCs in nurturing startups. Next, may I invite Ms. Geetika Dayal, DG, TiE Delhi NCR, to address the audience on the entrepreneurial and startup ecosystem.

Ms. Geetika Dayal

My warm greetings to the dignitaries on the dais. Thank you so much, Arvindji, for this opportunity. And to you and to your entire team for, I know, countless hours of effort that have gone into this massive exercise of putting this program together from which all of us are benefiting. A very good afternoon, friends. We are gathered at probably one of the most important AI policy and innovation programs or platforms that our country has seen. And this summit truly represents the national ambition at its strongest and very best. But it is the dialogue and this session that we are doing here today that will help realize this ambition, help execute this ambition. All the discussions that have gone on for the last few days around AI policy, national strategy, global competitiveness, etc.

They must translate into real support for startup founders who are building AI products. And that translation only happens through the kind of ecosystems that we build together. I think India’s landscape is expanding very rapidly, making it among the top three global AI ecosystems. But it’s not numbers that actually build scale. It will be these kinds of programs and innovation ecosystems that come together to make that happen. At TiE Delhi NCR, we’ve been at the forefront of mentoring and accelerating, working closely with startup founders. And over the last few years, we’ve worked with many of them to help them bridge the gap from innovation to market readiness. What we learned is that some of the areas where startups struggle are areas around business capability, around market access, around access to patient capital.

And therefore, our approach focuses primarily on these levers, which provide deep mentorship from entrepreneurs who’ve already scaled globally, market access with enterprises and GCCs, as you just heard, investor access through various funding stages, and structured capability building for our founders. STPI, of course, over the last, you know, more than two decades, 25 years and more, has done such remarkable work and played a great role in actually creating infrastructure and incubation support, a strong policy alignment, a massive pan-India presence and regulatory and institutional strength. And therefore, together, we really create the complete support stack for founders. And this has been demonstrated by the success of some of our key initiatives around Deep Ahead or Samarth, etc., which really helped us to create a strong policy alignment.

And it really shows, it proves that collaboration does multiply outcomes as we work together on it. Also, I think there are certain strengths that India has which are raw ingredients for what we are all working towards now. Some of that is world-class technical talent that comes from our premier institutions; cost-effective innovation, which is probably around 30-40% lower than operational costs in Silicon Valley; strong public digital infrastructure; as well as policy momentum through the India AI mission and what we are seeing now. So we must use all of that and bridge the gap around access to data, compute, infrastructure, etc. But the collaboration that we are really excited about is the one with STPI, which demonstrates how complementary skills, when they come together, can create real impact.

I think what we feel is that there are five structural pillars that are needed for scaling innovation. This is around knowledge and capability building, resource access, market validation, funding access, as well as, of course, ethical and responsible AI, which our Prime Minister has been talking about. And I think these kinds of ecosystem collaborations and organizations act as trust bridges, which reduces the friction between government, startups, corporates and investors. So I think as we move ahead, we are very keen to work and to see how we can move out of the format of isolated programs and come together to create a coordinated strategy. There are certain immediate priorities that we can definitely work on. These would be around expanding joint accelerators, scaling up Samarth, which has been going on so beautifully, many more corporate challenge programs, export readiness, and perhaps AI benchmarking reports, etc.

What we’d love to see is how AI startup ecosystems thrive, not by competition, but by collaboration. And as you’ve seen over here, to build a robust ecosystem, when STPI, TiE, GCCs, government, corporates, etc., come together with a shared vision, scale becomes inevitable. And that is what we are all here for, scaling innovation. Today, when we sign an MOU with STPI, it amplifies the impact and relevance, and it is a great pleasure and a great matter of privilege and pride for TiE to work with STPI as a key enabler and a partner. So our very best wishes to all of you. As Rakeshji mentioned, these are times of great change, probably something that our generation has been very fortunate to see, where we were and what we are heading towards.

And for all of us, to play a small role in what the years ahead will bring is really a humbling experience. It’s a great opportunity to be here. And my congratulations to all of you and my thank you for having all of us together here. Thank you so much.

Vaani Kapoor

Thank you, ma’am, for sharing TiE’s remarkable journey and continued commitment to entrepreneurs. May I now invite Ms. Neerja Shekhar, Director General, National Productivity Council, for her special address. Ma’am, please.

Ms. Neerja Shekhar

Good afternoon to you all. It’s a delight to be here at the AI Impact Summit. And specifically in this session, being hosted by the Software Technology Parks of India, where they have invited the National Productivity Council, who I represent, TiE, and other partners together, the GCC partners together. We are all talking about scaling AI innovation through the startup ecosystem. My warm greetings to everyone, to our ecosystem partners, industrial leaders, GCCs, mentors, investors and the startup founders who are also here as we work together on our next growth journey of innovation and AI impact. I am talking about the event that is anchored in the seven chakras of human capital: inclusion for social empowerment, safe and trusted AI, science, resilience, innovation and efficiency, democratizing AI resources and AI for economics.

Economic development and social growth, and the Three Sutras of People, Planet and Progress. This summit is focusing very effectively on a development-oriented framework for artificial intelligence. In today’s special session, where we are discussing the national imperative of scaling AI innovation, we will exchange a memorandum of understanding with STPI. NPC and STPI have planned and pledged to work together for scaling this AI innovation, to support the AI startup ecosystem in the country, and also to bring together innovation and collaboration. Because we know this is… this is not the era of competition. It’s an era of collaboration, where we have to put our energies together and focus on areas that impact the population for good. This is diffusion at scale across sectors, value chains, MSMEs, clusters, and public services.

This is what we are looking at. NPC, the National Productivity Council, works for productivity in the entire economic sector in the country. So we work on the total factor productivity: labor, land, infrastructure, capital. We support these areas and bring every player in, make every player a part of the larger growth in the Indian economy. And manufacturing is a major focus area. Services, of course, but manufacturing. Because we know that that is where the employment is. And that is where the maximum exports also are going to grow in the future. And of course, it is also going to maximize the GDP in the country. We’ve seen in this Expo area many small AI applications, many of which are from the startups, working on areas of agriculture, health, some very, very interesting innovations on health and education, which is something very, very dear to all of us.

In areas like textiles, pharmaceuticals, etc. The question now is, how do we reliably move from ideas to impact and be meaningful to society under the overall theme of welfare of all and happiness of all? Let me offer a crisp three-part framework for startups and ecosystem builders, which is trust, testbeds, and traction. Trust is the entry ticket. If customers can’t trust our AI, they will not adopt it, not at scale at least. So trust brings privacy and cyber security by design, transparency, accountability. It also means operational reliability and responsible governance. Testbeds: these will bridge promise and proof. Startups need real-world sandboxes: labs, testing environments, applications, reference architectures, etc. And traction is what turns pilots into scale. Not just a demo, but actual implementation.

Not just a demo, but actual implementation. So we feel that the STPI will bring the ecosystem together and play a pivotal national role by bringing the industry a connected innovation landscape through their entire setup of innovation hubs, platforms, structured programs, centers of excellence across the country and their digital enablement frameworks. They have very successfully over a period of time connected mentors, labs, startups and resolved the challenges facing the startups leading them to larger markets. So their ecosystem of shifting from incubation to scalable infrastructure has seen very good days and we are going to see much much more success in the future. NPC’s role is to strengthen infrastructure. In the adoption spine of this ecosystem. productivity, quality, capability and industry alignment.

In the AI era, productivity is not just efficiency in land, labour and capital, but also reliability, repeatability, safety, security and responsible performance. Everything at scale. Startups scale faster when they can demonstrate measurable outcomes: better output, better quality, fewer defects, faster service delivery, better customer experience, end-to-end experience and success. NPC supports this outcomes-driven pathway through benchmarking. We are very good at creating models, frameworks, assessments and evaluations, and providing a platform for the industries; the MSME or sector-wise platforms are very well developed by NPC. This is an area many of you would know; even if you are not associated with NPC, you would know many people who worked with NPC, moved out into the economy, into the consultancy sector, into the evaluation sector, and worked through benchmarking, capacity building and spreading of the productivity culture.

That’s why the partnership that we are looking at with STPI, between STPI and NPC, is very timely, very strategic, and we feel it will accelerate responsible digital transformation and AI adoption, especially for MSMEs, clusters, industry ecosystems and AI startups. We are really looking forward to a partnership where we can bring in more productivity into the ecosystem, and today’s summit is a context that provides us and asks us and exhorts us to reorient our energies towards a more productive AI system that is scalable, that supports the AI startups and also has a very productive

Vaani Kapoor

Thank you, ma ‘am, for your inspiring words and for reinforcing the importance of productivity and capability development. Now, I would sincerely request our DG sir, Sri Arvind Kumarji, to kindly enlighten the audience with the keynote address.

Arvind Kumar

Hello, namaskar. Good afternoon. I think when such a session happens alongside an Expo, it’s very difficult to have the attendance. Since morning, I am fighting for this only: whoever is speaking, kindly ensure that attendance is there. But here, there is no problem with attendance. So, organizers, thank you very much. I think you did a wonderful job. The Expo is going on; still we have the full attendance, people are standing also there. Neerja ma’am, other dignitaries on the dais. I think there is a lot of other business pending, some felicitations and all, right? So just for those who are not familiar with STPI: two minutes about STPI, two minutes about the subject, and then I will end.

So STPI has 70 centers across the country where we provide incubation to all small IT companies. These centers are generally in tier 2, tier 3 cities. So we have 62 centers in tier 2, tier 3 cities. Apart from the 70 centers, we have 24 centers of entrepreneurship which are domain specific, wherein we provide 360-degree support to startups. We nurture them. We provide some seed fund to them, we provide global reach to them, we provide market access to them, and of course incubation to them. So this is what STPI is doing when it comes to startups and all. Other things we are also doing, like BAPT, network security, data centers, cloud services in PPP mode. So a lot of things are being done within the STPI domain.

Now as far as this topic of scaling innovation is concerned, I mean there has been a big change from when there was no concept of startups in the country. We used to call them MSMEs, and those MSMEs were generally meant for supporting the big companies, especially PSUs. They would create something, then merge with PSUs, or provide some product as an input to PSUs or the big organizations. Now this change to startups, with support by the government to the startups, has changed the whole landscape. Now startups can themselves scale their products. This is the change which you can see in the last 5 to 6 years. And if you really want to scale up your innovation, then what is actually required for the startup is that the product or that innovation should be safe and trusted.

Unless it is trusted, nobody is going to use it and it is not going to scale up. Now how to make this trusted and safe, especially in the AI era? That means you have to make your product one which is responsible and which is also ethical. You have to make sure that the product is safe and trusted; only then will people have trust in it, and only then can it be scaled. Now people are generally confused between two words, responsible and ethical. These two words are interconnected but different. When I say responsible or when I say ethical, it is part of all five big parameters, like we say it should be accountable, it should be secure, there has to be privacy, there has to be fairness.

Those are the parameters we use. Let me explain the difference through examples. When you say something is ethical, it is about your larger attitude as the CEO or owner of the startup: whether you are taking care of the environment when you produce your product, or whether you are taking care of job creation. That is the ethical part: what am I going to do with this product? When it comes to responsible, responsible means fairness, which means the product is not biased towards anything: not biased towards a country, not biased on gender, caste, or religion. Then it is a fair product. Responsibility also means somebody should be accountable, and accountability is a very important part of responsibility. Suppose a driverless car hits somebody on the road. Who is accountable for that accident? The person who purchased the car? The company that built it? The people who developed the algorithm? Or even the large language model used by the wrappers built on top of it?

So this is accountability. Unless you are able to make something which is responsible, ethical, and therefore safe and trusted, it cannot be scaled. So all the startups here must ensure this in whatever they are going to create. Today everybody is using UPI because it was able to create that trust among us. A lot of things have come to this country, but biometric attendance and biometric identity have become scalable today because people came to believe that the product is trusted and safe. So whatever product you are going to create, whatever it relates to, if you really want to innovate — which is a very good thing, and this country has that opportunity — remember that we have a population of 1.4 billion, and here scalability is very important.

And therefore, if you really want to scale your product and your innovation, it must be safe and trusted. Thank you. Thank you very much.

Vaani Kapoor

Thank you so much, sir, for always encouraging, enlightening and guiding us throughout the journey. Now we begin with the MOU exchange ceremony. The first MOU is between STPI and the National Productivity Council. May I request Shri Ashok Gupta, Director, STPI Gurugram, and Shri Nikhil Panchabai, Director, NPC, to please come on the dais and exchange the MOU. I would also request DG Sir and DG Ma'am to grace the dais. Sanjay Sir, please come and grace the dais. Can we have a round of applause for Shri Sanjay Gupta, our Senior Director, STPI. Thank you so much, sir; please be on the stage. Shri Ashok Gupta sir, if you would like to. The next MOU exchange is between STPI and TiE Delhi NCR. For that, may I request Shri Sanjay Gupta, Senior Director, STPI, and Ms.

Geetika Dayal to please come forward and exchange the MOUs. Thank you. Thank you.

Shelly Sharma

So we now come to one of the most awaited segments, the startup felicitation ceremony. Today we recognize startups supported under the STPI ecosystem for excellence across revenue, funding, employment, women's participation, innovation, and AI-led impact. I would like to request our honoured dignitaries, DG Sir, STPI, and Nirja Shekhar Ma'am, Director General, National Productivity Council, to kindly come forward to present the certificates and trophies to our startups. I request the startups to kindly come on the stage as their names are announced. First, may I invite Phoenix Marine Exports and Solutions Private Limited to come on the stage. They are being recognized under the categories of highest revenue (up to 25 CR revenue) and highest impact based on revenue in tier 2 and tier 3 regions.

May I request DG STPI and DG NPC to please present the certificate and trophy. Once again, a big round of applause for their outstanding contribution. Now may I invite Vimeo Consulting Private Limited to please come on the stage. They are being recognized for highest funding raised in the up-to-25-CR revenue category. Heartiest congratulations on your fundraising success. A big round of applause — a louder round of applause, please. Now may I invite Swada Agri Private Limited to the stage. They are being felicitated for highest employment generation in the up-to-25-CR revenue category. Congratulations on generating valuable employment. A big round of applause. Thank you. Now may I invite Strangify Technologies Pvt. Ltd. to please come on the stage.

They are being recognized for highest number of women employed in the up-to-25-CR revenue category. Well done for empowering women in the workforce. A big round of applause — a louder round of applause for women's participation. Our next startup is Suhora Technologies Pvt. Ltd. May I invite Suhora Technologies Pvt. Ltd. to the stage. They are being recognized for highest revenue in the up-to-50-CR revenue category. Congratulations on your outstanding business performance. A big round of applause for Suhora Technologies Pvt. Ltd. Now I invite Puvation Technology Solutions Private Limited. They are being felicitated for highest funding raised in the up-to-50-CR revenue category. Applause for your impressive funding milestone. A big round of applause. Now I invite our next startup, Sequera Tech IT Solutions Private Limited, to come on the stage.

They are being recognized under multiple categories: highest employment in the up-to-50-CR revenue category, highest women's employment in the up-to-50-CR revenue category, and highest AI-based impact based on revenue — a special recognition for excellence across multiple dimensions. A big round of applause. Now I invite our next startup, Atmik Bharat Industries Pvt. Ltd., to the stage. They are being recognized for highest impact based on beneficiaries. Congratulations on touching countless lives. A big round of applause. May I invite Mobile Pay E-Commerce Private Limited. They are being felicitated for highest impact based on beneficiaries, in second position. Well done for your meaningful outreach. A big round of applause. Now I invite another startup, Devnagri AI Private Limited, to please come on the stage.

They are being recognized for highest AI-based impact based on revenue, in second position. Congratulations on leveraging AI for impact. A big round of applause. Thank you so much, DG sir, for attending to us. Now I invite our next startup, Dactrosel Healthcare and Research Private Limited. They are being recognized as most innovative startup. Applause for breakthrough healthcare innovation. A big round of applause. Now I invite our next startup, EZO5 Solutions Private Limited; please come on the stage. They are being felicitated as most promising innovation. A big round of applause. Thank you. Now I invite our next startup, Connector Foods Private Limited; please come on the stage.

They are being recognized as most innovative startup, in second position. Well done for creative excellence. A big round of applause. Finally, for our last startup, may I invite Fuselage Innovations Private Limited. They are being recognized as most promising innovation, second position. Congratulations on your forward-looking journey. A big round of applause — a big round of applause for all our felicitated startups. Your innovation, resilience and contribution to India's digital economy truly inspire us all. May I request our dignitaries to kindly resume their seats on the dais. We will now invite selected startups to briefly share their journeys with us. So may I invite Fuselage Innovations Private Limited to kindly come on the stage.

Devika Chandrasekaran

Hi everyone, my name is Devika Chandrasekaran. I'm the co-founder of Fuselage Innovations. It's truly an honor to stand on stage today being felicitated by STPI. This moment feels very special because we started our journey with STPI in our early days. Back in 2021, we participated in a program called Scout 2021. At that time, we were building our prototype. The support we received through the program was not just funding, it was validation. That recognition gave us the confidence to push forward. Today, Fuselage Innovations manufactures drones for agriculture, defence, and disaster management applications. We are working with more than 10,000 farmers across India, helping them improve productivity and efficiency through drone technology. We are also contributing to defence, disaster management, and maritime operations, serving critical national needs. Last month, we were deeply honoured to receive the National Startup Award, and we got the opportunity to present our journey in front of our Honourable Prime Minister, Narendra Modi. I would like to sincerely thank STPI and everyone involved in the journey for believing in a startup like us. The ecosystem, the encouragement, and the early trust made a huge difference in our journey. Thank you so much.

Shelly Sharma

Thank you for sharing your inspiring story. Now may I invite Dr. Rosals to kindly come on the stage and share your startup journey with us.

Dr. Soumya

Good evening, everyone. My name is Dr. Soumya, and I'm really glad to be part of this prolific platform today. Very quickly, I'd like to walk you through what we build. At TectoCell, we build AI-powered diagnostic solutions at the intersection of radiology, artificial intelligence, and DNA sequencing, addressing the huge havoc of drug resistance, with robust clinical trials spanning India facilitated by the Software Technology Parks of India. We've been able to exceptionally benchmark our clinical accuracy, which amplifies the reliability of our products. And the continued commitment of the Software Technology Parks of India in helping us navigate our regulatory compliances, find global collaborations, and acquire machine-readable data is extremely noteworthy.

And this unique foundation puts us in a very good, very strong position to now scale this globally, building from India for the world. So I'm very grateful for this. Thank you.

Shelly Sharma

Thank you. Lots of applause. Thank you so much for sharing your story and journey with us. Now I invite Sequeira Tech IT Solutions Private Limited to come on the stage and share your startup journey with us.

Arita Dalan

Hi everyone, good evening. My name is Arita Dalal. I'm heading the North region for SecureTech, and I have been in this organization for the last 11 years. During this journey, we have had a lot of interaction with STPI as an organization. They are one of the nurturing bodies that has done a lot of collaboration with the industry. They have given us opportunities to talk to investors, and various industry connects have also been established by the organization. We are very sincerely thankful to the entire organization and the team of STPI. Just to give you a brief:

SecureTech is a cybersecurity organization. Our mantra is to simplify security. We are securing large enterprise organizations and mid-sized organizations across industries — whether pharma, banking and finance organizations, or even the small organizations currently establishing themselves in the country's digital landscape while being regulated by bodies like the RBI and CBI. In a nutshell, we provide them all the frameworks, security parameters, and solutions so that they can be empowered and enabled, and can secure their infrastructure, platforms, and the data they process for the country or for the users they serve. So whether it is a startup or a large infrastructure organization, we are securing them.

We provide them end-to-end support. Thank you. Thanks, everyone.

Shelly Sharma

Thank you. Now I invite Caneboard Solutions Private Limited to come on the stage and share your journey with us.

Kirty Datar

helping us sharpen our positioning as a deep tech company. Most importantly, STPI's recognition has strengthened our credibility with customers, investors, and government stakeholders. We are very happy and very honored to be here today, and we are very thankful to STPI and everybody present here today.

Shelly Sharma

Thank you so very much. May I now invite EZO5 to kindly come on the stage and share your startup journey with us.

Noor Fatma

Hi, everyone. Good afternoon. I'm Noor Fatma, co-founder of EZO5 Solutions.

Meenal Gupta

Hi, I’m Meenal Gupta, founder of EZO5 Solutions.

Noor Fatma

At EZO5 we have built an AI-powered platform, Imagix AI, that does precision treatment planning for oncology cases. In our startup journey, there was a time, about one and a half years back, when we had just two months of cash flow left. We were thinking hard about what to do, and that is when STPI came to our rescue and helped us raise money. There has been no looking back since then. In the three years since we were incorporated, we have processed around one million scans. In the last three months, we have scanned around 50,000 chest X-rays, flagging around 4,000 cases of TB and cutting transmission short, and flagging six cases of lung cancer where intervention was still possible. We have also prepared 1,000 radiotherapy plans in the last three months, cutting the time from treatment planning to treatment start from around one month to a week.

So that is the impact we are making with the support of the whole ecosystem and STPI.

Meenal Gupta

And I proudly say that, given the impact we have brought, even our Prime Minister Mr. Narendra Modi was interested, and he invited us to discuss our solution at IMC. And just the day before yesterday, we went global: Bill Gates showed interest in our solution and invited us to Microsoft to present it, and he discussed how he can help us. Thank you.

Noor Fatma

So now we are going from local to global, serving the whole world. Thank you.

Shelly Sharma

Yeah, thank you. Thank you to all the founders for sharing such inspiring stories. We now proceed with the presentation of mementos to our esteemed dignitaries. To begin with, may I request Shri Ashok Gupta, Director, STPI Gurugram, to kindly come on the stage. Sir will present the memento to Nirja Shekhar Ma'am, Director General, NPC. A big round of applause. Thank you so much, sir, and thank you so much, ma'am. Next, may I request Shri Atul Kumar Singh, Additional Director, STPI, to kindly come on the stage and present the memento to Shri Bala MS. A big round of applause. May I now request Shri Praveen Kumar, Joint Director, STPI, to kindly come on the stage and present the memento to Geetika Dayal Ma'am.

A big round of applause. May I also request Shri Atul Kumar Singh, Additional Director, STPI, to kindly come on the stage and present the memento to Geetika Dayal Ma'am. May I request Shri Praveen Kumar, Joint Director, STPI, to kindly present the memento to Shri Rakesh Dubey sir, Director, Startups and Innovation, STPI. Thank you, sir. A big round of applause. Now I would like to request Shri Praveen Kumar, Joint Director, STPI, to present

Praveen Kumar

the formal vote of thanks. Respected dignitaries, speakers, startup founders, innovators, ladies and gentlemen. On behalf of Software Technology Parks of India, it is our true privilege to thank each one of you for making this session focused, meaningful, and definitely forward-looking. Nirja Sekhar Ma'am, thank you for your thoughtful reflections on productivity and growth. Your perspective adds depth and direction to our collective mission, and we are truly encouraged by your presence. Thank you so much; we are grateful for it. Shri Rakesh Dubey sir, thank you for your profound support, which has been both guiding and grounding. Your constant encouragement and hands-on involvement in shaping the entire session has helped us immensely, sir.

My sincere appreciation to Geetika Dayal Ma'am from TiE Delhi NCR for your continued partnership and for reinforcing the importance of collaborative startup ecosystem building. Thank you. Thank you, Mr. Bala, for bringing a sharp industry lens and a pragmatic approach that startups can directly relate to as they scale; your thoughts on GCCs will definitely help them all. To all the startups felicitated today: congratulations. Your achievements demonstrate that innovation from India, including from Tier 1 and Tier 2 cities, is both scalable and globally relevant. To all the founders who shared their journeys, thank you for your candor and inspiration. Your stories remind us why platforms like STPI matter. And before I conclude, I sincerely appreciate.

My organizing team and every colleague worked diligently behind the scenes to ensure the session ran seamlessly. With that, I once again thank all of you, and I request the dignitaries and startups to come forward for a group photograph. Thank you. Thank you again.

Shelly Sharma

I request all the felicitated startups to kindly come on the stage for a group photograph with all the dignitaries on the dais. Thank you. I also request the other directors to please come on stage and join us for the group photographs. Yes, Kavita Ma'am, please come on the stage. I also request Kishori Ma'am to please join us for the group photograph. Thank you. Once again, thank you.

S87
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The discussion maintained a collaborative and constructive tone throughout, with panelists generally agreeing on core pr…
S88
HETEROGENEOUS COMPUTE FOR DEMOCRATIZING ACCESS TO AI — The discussion maintained a professional, collaborative, and optimistic tone throughout. Panelists demonstrated mutual r…
S89
Building Population-Scale Digital Public Infrastructure for AI — The tone is optimistic and collaborative throughout, with speakers sharing concrete examples of successful implementatio…
S90
Open Mic & Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S91
AI for Good Impact Awards — The tone is celebratory and enthusiastic throughout, with host LJ Rich maintaining an upbeat, sometimes humorous demeano…
S92
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S93
Any other business /Adoption of the report/ Closure of the session — In closing, the speaker reiterated steadfast support for the Chairperson, the Secretariat, and the diligent team, emphas…
S94
Closing remarks — The tone is consistently celebratory, optimistic, and forward-looking throughout the discussion. It maintains an enthusi…
S95
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S96
Leaders TalkX: ICT Applications Unlocking the Full Potential of Digital – Part I — In summary, the speaker outlines Iraq’s progressive plans for development in information technology and digital skills e…
S97
https://dig.watch/event/india-ai-impact-summit-2026/building-the-next-wave-of-ai_-responsible-frameworks-standards — And this is, you can see up here on the screen, the QR code, and you can scan the QR code and then you’ll get access to …
S98
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Following the burst of the dot-com bubble in the late 1990s, a handful of companies rapidly emerged and became market le…
S100
https://dig.watch/event/india-ai-impact-summit-2026/setting-the-rules_-global-ai-standards-for-growth-and-governance — And maybe before the next introduction, just so you can get a flavor, we have standard setters and measurers. We have pe…
S102
https://dig.watch/event/india-ai-impact-summit-2026/mahaai-building-safe-secure-smart-governance — Thank you sir, that was quite reassuring as well And since you spoke about quantum I want to bring in Dr. Anupam Chattop…
S103
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Jonathan Mendoza Iserte:Thank you, Luca. Good afternoon. How are you? I want to thank the organizers for bringing this t…
S104
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — Cathy Li: Thanks for having me. So first of all, just a very quick overview. The work is done not by one organisation…
S105
#205 L&A Launch of the Global CyberPeace index — Suresh Yadav: Thank you, Vinit. I hope you can hear me, Vinit, if you can. Loud and clear, we can hear you. Thank you ve…
S106
AI set to drive trillion-dollar growth by 2030 — AI is forecast to add a cumulative $19.9 trillion to the global economy by 2030, according to a recent IDC study. This gr…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Sh. Rakesh Dubey
1 argument · 158 words per minute · 287 words · 108 seconds
Argument 1
Overview of STPI’s digital portal with incubator resources, product marketplace, hiring hub, and lifecycle support – *Sh. Rakesh Dubey*
EXPLANATION
Rakesh Dubey described the STPI online portal as a one‑of‑its‑kind platform that aggregates resources for incubators, accelerators, and startups. It hosts policy repositories, contests, a product marketplace, and a hiring hub that support the entire startup lifecycle.
EVIDENCE
He explained that the portal allows incubators, accelerators, state governments, and academia to find needed resources and serves as a repository of government policies [11-13]. He detailed features such as a product marketplace where startups can showcase products and a hiring hub for niche job postings, noting that these capabilities enable end-to-end support for startups online [14-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 describes the portal’s product marketplace and hiring hub, confirming Dubey’s overview of the digital platform.
MAJOR DISCUSSION POINT
STPI digital portal capabilities
AGREED WITH
Arvind Kumar
Arvind Kumar
2 arguments · 134 words per minute · 923 words · 413 seconds
Argument 1
STPI’s nationwide network of 70 centres, incubation services, seed funding, and market‑access initiatives – *Arvind Kumar*
EXPLANATION
Arvind Kumar outlined the scale of STPI’s physical presence across India, highlighting its 70 centres, many in tier‑2 and tier‑3 cities, and the range of services offered to startups, including incubation, seed funding, and market‑access support.
EVIDENCE
He stated that STPI operates 70 centres nationwide, with 62 located in tier-2 and tier-3 cities, and an additional 24 domain-specific entrepreneurship centres that provide 360-degree support, seed funding, global reach, and market access to startups [165-180]. He also mentioned other services such as network security, data centres, and cloud services delivered through PPP partners [181-183].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 cites Kumar’s statement about STPI operating 70 centres (with many in tier‑2/3 cities) and providing “360-degree support” including incubation and seed funding.
MAJOR DISCUSSION POINT
STPI physical network and services
AGREED WITH
Sh. Rakesh Dubey
Argument 2
Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI, and the necessity of accountability in AI products – *Arvind Kumar*
EXPLANATION
Arvind Kumar clarified the difference between ‘responsible’ AI, which focuses on fairness and accountability, and ‘ethical’ AI, which concerns broader societal impacts such as environmental stewardship and job creation. He emphasized that accountability is essential for trustworthy AI deployment.
EVIDENCE
He gave examples, explaining that ethical AI involves a CEO’s responsibility toward environment and job creation, while responsible AI requires fairness, lack of bias, and clear accountability, illustrated by the driverless-car accident scenario [194-199].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The responsible vs. ethical AI discussion is elaborated in S34, S35 and S36, which align with Kumar’s distinction and emphasis on accountability.
MAJOR DISCUSSION POINT
Responsible vs ethical AI
AGREED WITH
Ms. Neerja Sekhar
Shelly Sharma
1 argument · 29 words per minute · 1208 words · 2418 seconds
Argument 1
Opening welcome, session hosting, and facilitation of MOUs and felicitation ceremony – *Shelly Sharma*
EXPLANATION
Shelly Sharma opened the event, welcomed dignitaries and participants, and later coordinated the MOU exchange and startup felicitation ceremony, ensuring smooth progression of the agenda.
EVIDENCE
She began with a warm welcome to all dignitaries and the audience on behalf of STPI [1-2] and later managed the felicitation ceremony, announcing each startup and directing dignitaries to present certificates and trophies [214-277].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 records Shelly Sharma delivering the warm welcome on behalf of STPI at the start of the event.
MAJOR DISCUSSION POINT
Event opening and ceremony coordination
AGREED WITH
Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar, Vaani Kapoor
Vaani Kapoor
1 argument · 69 words per minute · 520 words · 451 seconds
Argument 1
Coordination of speakers, video presentation, and ceremony logistics – *Vaani Kapoor*
EXPLANATION
Vaani Kapoor acted as co‑host, introducing speakers, arranging the short audio‑video presentation, and overseeing the logistics for the MOU exchange and felicitation ceremony.
EVIDENCE
She introduced the session, welcomed guests, and invited the opening address [3-10]; later requested the technical team to play the STPI impact video and introduced the industry speaker [26-30]; thanked the previous speaker and announced the next presenter, Ms. Geetika Dayal, before the MOU ceremony [80-81][113-115].
MAJOR DISCUSSION POINT
Session coordination and logistics
AGREED WITH
Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar, Shelly Sharma
Milind Datar
1 argument · 0 words per minute · 0 words · 1 second
Argument 1
Participation in the startup felicitation ceremony representing STPI leadership – *Milind Datar*
EXPLANATION
Milind Datar was listed among the STPI representatives present during the startup felicitation ceremony, symbolising STPI’s leadership role in recognizing startup achievements.
MAJOR DISCUSSION POINT
STPI representation at felicitation
Praveen Kumar
1 argument · 108 words per minute · 299 words · 165 seconds
Argument 1
Formal vote of thanks acknowledging all contributors and reinforcing STPI’s role – *Praveen Kumar*
EXPLANATION
Praveen Kumar delivered the concluding vote of thanks, expressing gratitude to all speakers, dignitaries, and startups, and underscoring STPI’s central role in fostering the AI innovation ecosystem.
EVIDENCE
He thanked the dignitaries, highlighted Neerja Sekhar’s reflections, praised Rakesh Dubey’s support, acknowledged Geetika Dayal’s partnership, and lauded Bala’s industry perspective, before inviting a group photograph [353-366].
MAJOR DISCUSSION POINT
Closing gratitude and reinforcement of STPI’s impact
Sh. Bala MS
1 argument · 160 words per minute · 1424 words · 531 seconds
Argument 1
GCC growth outlook, shift to R&D, co‑creation model, and need for data, infrastructure, and enterprise validation for AI start‑ups – *Sh. Bala MS*
EXPLANATION
Bala presented a macro view of Global Capability Centers (GCCs), projecting massive AI‑related economic contributions by 2030, a shift from cost‑center to R&D hub, and emphasized that startups need real data, infrastructure, and enterprise validation, which GCCs can provide through co‑creation models.
EVIDENCE
He cited projected global AI contribution of $15.7 trillion by 2030, with $5 trillion from productivity, and predicted India will host over 3,500 GCCs contributing $150 billion in software exports and employing 3.5 million people [36-43]. He explained that the scale of AI is determined by integration into global organisations, noting gaps in institutionalisation and operational readiness, and described the co-creation model where GCCs act as bridges providing data, infrastructure, and sandbox environments for startups [44-58][59-66][70-78].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 reports Bala MS advocating a co‑creation model with GCCs, highlighting the shift from cost‑center to R&D hub and the need for data and infrastructure.
MAJOR DISCUSSION POINT
GCCs as enablers for AI startups
AGREED WITH
Ms. Geetika Dayal
Ms. Geetika Dayal
2 arguments · 143 words per minute · 872 words · 364 seconds
Argument 1
Collaboration with GCCs for market access, joint accelerators, and scaling AI innovations – *Ms. Geetika Dayal*
EXPLANATION
Geetika highlighted how partnerships with GCCs can provide startups with market access, joint accelerator programmes, and pathways to scale AI innovations, stressing the importance of coordinated ecosystem efforts.
EVIDENCE
She referenced the role of GCCs in providing market access and noted the need to expand joint accelerators, scale up the Samarth programme, and develop corporate challenge programmes as concrete collaborative actions [95-98][103-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S2 notes Dayal’s remarks on leveraging GCCs for market access, joint accelerator programmes and scaling AI innovations.
MAJOR DISCUSSION POINT
GCC‑driven market access and scaling
AGREED WITH
Sh. Bala MS
Argument 2
Mentorship, market‑access, patient capital, capability building, and five structural pillars (knowledge, resources, market validation, funding, ethical AI) – *Ms. Geetika Dayal*
EXPLANATION
Geetika outlined the core levers of the startup ecosystem—mentorship, market access, patient capital, and capability building—and identified five structural pillars required to scale innovation, including ethical AI considerations.
EVIDENCE
She described TI’s mentorship and acceleration work, the gaps startups face in business capability, market access, and capital, and listed the five pillars: knowledge and capability building, resource access, market validation, funding access, and ethical/responsible AI [94-103][106-108].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S43 outlines the three‑M framework (Mentorship, Market Access, Money) that underpins Dayal’s identified ecosystem levers and structural pillars.
MAJOR DISCUSSION POINT
Key ecosystem levers and structural pillars
Ms. Neerja Sekhar
2 arguments · 97 words per minute · 921 words · 565 seconds
Argument 1
NPC’s three‑part framework for AI start‑ups – trust, testbeds, traction – and productivity‑driven outcomes – *Ms. Neerja Sekhar*
EXPLANATION
Neerja presented a concise framework consisting of trust, testbeds, and traction to guide AI startups, linking these elements to productivity gains and measurable outcomes across sectors.
EVIDENCE
She defined trust as the entry ticket requiring privacy, cybersecurity, transparency, and accountability; testbeds as real-world sandboxes for validation; and traction as moving pilots to scale, emphasizing that these enable productivity improvements such as higher quality, faster delivery, and better customer experience [140-149]. She further noted NPC’s role in providing benchmarking, assessment, and productivity-focused models to support these outcomes [150-158].
MAJOR DISCUSSION POINT
Trust‑testbed‑traction framework
AGREED WITH
Sh. Bala MS, Ms. Geetika Dayal, Shelly Sharma, Vaani Kapoor
Argument 2
Trust as the entry ticket: privacy, cybersecurity, transparency, and operational reliability required for AI adoption – *Ms. Neerja Sekhar*
EXPLANATION
She emphasized that without trust—ensuring privacy, security, transparency, and reliable operations—AI solutions cannot achieve widespread adoption.
EVIDENCE
She explicitly stated that trust is the entry ticket and listed its components: privacy, cyber-security by design, transparency, accountability, operational reliability, and responsible governance [140-145].
MAJOR DISCUSSION POINT
Importance of trust for AI uptake
Devika Chandrasekaran
1 argument · 122 words per minute · 207 words · 101 seconds
Argument 1
Early STPI program validation, drone solutions for agriculture, defence, and disaster management – *Devika Chandrasekaran*
EXPLANATION
Devika recounted how participation in the STPI Scout 2021 program validated her startup’s prototype, leading to confidence, growth, and deployment of drones across agriculture, defence, and disaster‑management sectors.
EVIDENCE
She mentioned joining the Scout 2021 program, receiving validation and funding support that boosted confidence, and described current operations serving over 10,000 farmers and contributing to defence and disaster-management applications [282-287].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
S1 documents Devika Chandrasekaran’s testimony that STPI’s Scout 2021 program validated her drone startup and enabled deployment across agriculture, defence and disaster‑management sectors.
MAJOR DISCUSSION POINT
STPI validation catalysing drone startup growth
Dr. Soumya
1 argument · 126 words per minute · 175 words · 82 seconds
Argument 1
AI‑powered radiology and DNA sequencing diagnostics, regulatory support, and global scaling – *Dr. Soumya*
EXPLANATION
Dr. Soumya explained that TectoCell builds AI‑driven diagnostic solutions for radiology and DNA sequencing, achieving high clinical accuracy, and highlighted STPI’s role in helping navigate regulatory compliance and secure data for global scaling.
EVIDENCE
She described the AI-powered diagnostic platform, its high clinical accuracy, and the support received from STPI for regulatory compliance, global collaborations, and data acquisition, which positions the company for worldwide expansion [294-298].
MAJOR DISCUSSION POINT
AI diagnostics enabled by STPI support
Arita Dalan
1 argument · 139 words per minute · 268 words · 114 seconds
Argument 1
Cybersecurity platform simplifying security for enterprises across sectors, facilitated industry connections – *Arita Dalan*
EXPLANATION
Arita outlined SecureTech’s mission to simplify cybersecurity for large enterprises in sectors such as pharma, banking, and emerging digital firms, noting that STPI helped connect the company with investors and industry partners.
EVIDENCE
She described SecureTech’s services, which provide frameworks, security parameters, and end-to-end solutions for enterprises across multiple sectors, and credited STPI for enabling industry connections and investor outreach [314-320].
MAJOR DISCUSSION POINT
STPI‑enabled cybersecurity outreach
Kirty Datar
1 argument · 147 words per minute · 50 words · 20 seconds
Argument 1
Credibility and market confidence gained through STPI recognition – *Kirty Datar*
EXPLANATION
Kirty stated that STPI’s recognition has enhanced his startup’s credibility with customers, investors, and government stakeholders, strengthening its market position.
EVIDENCE
He noted that STPI’s recognition sharpened their positioning as a deep-tech company and boosted credibility with various stakeholders [323-325].
MAJOR DISCUSSION POINT
STPI endorsement as credibility booster
Noor Fatma
1 argument · 169 words per minute · 219 words · 77 seconds
Argument 1
AI‑driven oncology treatment planning platform, rapid scaling from local to global with STPI assistance – *Noor Fatma* & *Meenal Gupta*
EXPLANATION
Noor described EZO5’s AI‑powered platform for oncology treatment planning, highlighting rapid scaling from a cash‑flow crisis to processing a million scans, and credited STPI for critical early support that enabled this growth.
EVIDENCE
She recounted that after a cash-flow crunch, STPI helped raise funds, leading to processing one million scans, detecting thousands of TB and lung-cancer cases, and reducing radiotherapy planning time from a month to a week; she also mentioned global interest from Bill Gates and a meeting with the Prime Minister, underscoring the platform’s impact [332-339].
MAJOR DISCUSSION POINT
STPI‑enabled AI oncology platform scaling
Meenal Gupta
1 argument · 133 words per minute · 76 words · 34 seconds
Argument 1
AI‑driven oncology treatment planning platform, rapid scaling from local to global with STPI assistance – *Noor Fatma* & *Meenal Gupta*
EXPLANATION
Meenal highlighted the same achievements of EZO5, emphasizing recognition from the Prime Minister and interest from Bill Gates, illustrating the global relevance of their AI solution.
EVIDENCE
She noted that the Prime Minister invited them to discuss the solution at the IMC and that Bill Gates expressed interest, leading to a meeting at Microsoft to explore further collaboration [334-336].
MAJOR DISCUSSION POINT
High‑level endorsement of AI oncology solution
Agreements
Agreement Points
Collaboration across ecosystem partners (STPI, GCCs, NPC, industry) is essential for scaling AI innovation and startups.
Speakers: Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar, Shelly Sharma, Vaani Kapoor
GCC growth outlook, shift to R&D, co‑creation model, and need for data, infrastructure, and enterprise validation for AI start‑ups – *Sh. Bala MS* Collaboration with GCCs for market access, joint accelerators, and scaling AI innovations – *Ms. Geetika Dayal* NPC’s three‑part framework for AI start‑ups – trust, testbeds, traction – and productivity‑driven outcomes – *Ms. Neerja Sekhar* Opening welcome, session hosting, and facilitation of MOUs and felicitation ceremony – *Shelly Sharma* Coordination of speakers, video presentation, and ceremony logistics – *Vaani Kapoor*
All speakers highlighted that coordinated action among government bodies (STPI, NPC), Global Capability Centers, and private sector partners is the cornerstone for building a robust AI startup ecosystem, enabling market access, data sharing, and scaling pathways [44-58][59-78][95-98][103-108][124-126][214-277].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors calls for multi-stakeholder AI cooperation in IGF 2023 and UN-led AI policy roadmaps that stress ecosystem collaboration as a prerequisite for scaling innovation [S50][S53][S70][S71][S72].
Trust, accountability and responsible/ethical AI are prerequisites for AI adoption and scaling.
Speakers: Arvind Kumar, Ms. Neerja Sekhar
Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI, and the necessity of accountability in AI products – *Arvind Kumar* Trust as the entry ticket: privacy, cybersecurity, transparency, and operational reliability required for AI adoption – *Ms. Neerja Sekhar*
Both speakers stressed that without trust, which requires privacy, security, transparency and clear accountability, AI solutions cannot achieve widespread adoption; they framed this as a matter of responsible and ethical AI [194-199][140-145].
POLICY CONTEXT (KNOWLEDGE BASE)
The emphasis aligns with UNESCO’s AI ethics recommendations and multiple IGF panels that frame trust, accountability and responsible AI as foundational policy pillars for adoption [S58][S59][S60][S69].
STPI provides comprehensive support for startups through both a unique digital portal and a nationwide physical network.
Speakers: Sh. Rakesh Dubey, Arvind Kumar
Overview of STPI’s digital portal with incubator resources, product marketplace, hiring hub, and lifecycle support – *Sh. Rakesh Dubey* STPI’s nationwide network of 70 centres, incubation services, seed funding, and market‑access initiatives – *Arvind Kumar*
Dubey described a one-of-its-kind online platform that aggregates resources for incubators and startups, while Kumar highlighted the physical presence of 70 STPI centres delivering incubation, funding and market access, together forming a holistic support system [11-20][165-180].
POLICY CONTEXT (KNOWLEDGE BASE)
STPI’s “360-degree” support model and its digital-portal-driven ecosystem have been documented as a catalyst for startup growth in official STPI case studies [S54][S55].
Global Capability Centers (GCCs) are critical pathways for AI startups to obtain market access, data, and enterprise validation.
Speakers: Sh. Bala MS, Ms. Geetika Dayal
GCC growth outlook, shift to R&D, co‑creation model, and need for data, infrastructure, and enterprise validation for AI start‑ups – *Sh. Bala MS* Collaboration with GCCs for market access, joint accelerators, and scaling AI innovations – *Ms. Geetika Dayal*
Both speakers emphasized that GCCs act as bridges, providing real-world data, sandbox environments and market pathways that enable AI startups to move from prototype to production scale [59-78][95-98][103-108].
Similar Viewpoints
Both argue that accountability and trust mechanisms are essential foundations for responsible AI deployment, linking ethical considerations with practical adoption requirements [194-199][140-145].
Speakers: Arvind Kumar, Ms. Neerja Sekhar
Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI – *Arvind Kumar* Trust as the entry ticket: privacy, cybersecurity, transparency, accountability – *Ms. Neerja Sekhar*
Both present STPI as a uniquely comprehensive enabler for startups, combining a digital platform with a physical incubation network to cover the full startup lifecycle [11-20][165-180].
Speakers: Sh. Rakesh Dubey, Arvind Kumar
Overview of STPI’s digital portal – *Sh. Rakesh Dubey* STPI’s nationwide network of 70 centres, incubation services – *Arvind Kumar*
Both see GCCs as strategic partners that provide the data, infrastructure and market channels necessary for AI startups to transition from pilot to production scale [59-78][95-98][103-108].
Speakers: Sh. Bala MS, Ms. Geetika Dayal
GCC co‑creation model and its role in scaling AI startups – *Sh. Bala MS* Collaboration with GCCs for market access and scaling – *Ms. Geetika Dayal*
Unexpected Consensus
All speakers, regardless of sector (government, industry, academia), emphasized collaboration over competition as the guiding principle for AI ecosystem development.
Speakers: Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar, Shelly Sharma
Co‑creation model with GCCs – *Sh. Bala MS* Collaboration with GCCs and joint accelerators – *Ms. Geetika Dayal* Statement that this is not the era of competition but of collaboration – *Ms. Neerja Sekhar* Facilitating MOUs and joint programmes – *Shelly Sharma*
While industry speakers typically stress market dynamics, they aligned with government representatives in declaring that the future lies in collaborative frameworks rather than competitive rivalry, a stance that was not explicitly anticipated given the diverse stakeholder mix [124-126][214-277].
POLICY CONTEXT (KNOWLEDGE BASE)
This collaborative stance is echoed across several multistakeholder AI forums that prioritize partnership over competition as a strategic policy direction [S53][S70][S71][S72].
Overall Assessment

The panel displayed a strong, multi‑dimensional consensus that scaling AI innovation in India hinges on coordinated ecosystem collaboration (STPI, GCCs, NPC, industry), robust trust and accountability mechanisms, and comprehensive support infrastructure (both digital and physical).

High consensus across all major themes, indicating a unified strategic direction that can accelerate policy implementation, investment mobilization and capacity building for AI startups.

Differences
Different Viewpoints
Primary bottleneck for scaling AI startups
Speakers: Sh. Bala MS, Arvind Kumar
The scale of AI is determined by integration into global organisations and institutionalisation gaps (Bala MS) Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI; necessity of accountability (Arvind Kumar)
Bala argues that the main obstacle is the lack of integration of AI solutions into global organisations and limited institutionalisation, whereas Kumar stresses that without responsible and ethical AI, particularly fairness and clear accountability, start-ups cannot gain trust or scale. Both see a gap but locate it in different dimensions of the ecosystem [50-55][58-60][194-199].
POLICY CONTEXT (KNOWLEDGE BASE)
Global AI policy discussions identify data access, connectivity gaps and inter-governmental coordination as the main bottlenecks hindering AI startup scaling [S64][S65][S66].
Preferred mechanism for ecosystem support
Speakers: Sh. Rakesh Dubey, Arvind Kumar, Ms. Geetika Dayal
Overview of STPI’s digital portal with incubator resources, product marketplace, hiring hub, and lifecycle support (Rakesh Dubey) STPI’s nationwide network of 70 centres, incubation services, seed funding, and market‑access initiatives (Arvind Kumar) Collaboration with GCCs for market access, joint accelerators, and scaling AI innovations (Geetika Dayal)
Dubey promotes a one-of-its-kind online portal as the central support tool, Kumar emphasizes a physical network of centres providing incubation and seed funding, while Dayal highlights GCC-driven joint accelerators and market-access partnerships. Each proposes a different primary vehicle for supporting startups [11-20][165-180][95-103].
Definition and scope of “trust” versus responsible/ethical AI
Speakers: Ms. Neerja Sekhar, Arvind Kumar
Trust is the entry ticket comprising privacy, cyber‑security, transparency, accountability, operational reliability (Neerja Sekhar) Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI, with accountability as essential (Arvind Kumar)
Neerja frames trust as a bundle of privacy, security, transparency and reliability needed for AI adoption, whereas Kumar separates responsible AI (fairness, accountability) from ethical AI (broader societal impacts). The overlap on accountability is acknowledged, but the broader ethical dimension is treated differently [140-145][194-199].
POLICY CONTEXT (KNOWLEDGE BASE)
Scholarly and policy sessions differentiate trust, trustworthiness and ethical AI, highlighting the need for transparent, accountable systems to operationalise both concepts [S57][S58][S68][S69].
Role of capital versus structural ecosystem interventions
Speakers: Arvind Kumar, Sh. Bala MS
Capital alone cannot solve friction; operational and organizational readiness are the real gaps (Arvind Kumar) Co‑creation model with GCCs provides data, infrastructure and enterprise validation, acting as bridge to scale AI startups (Bala MS)
Kumar stresses that financial capital is insufficient without addressing operational readiness, while Bala points to GCC-based co-creation platforms as the structural solution to provide the necessary data and infrastructure, implying a different focus for overcoming the same friction [60-62][59-66].
Unexpected Differences
Uniqueness claim of STPI’s digital portal versus lack of corroboration
Speakers: Sh. Rakesh Dubey, Other speakers (no explicit confirmation)
This portal is, I think, one of its kind portal, not just in India, but across the world (Rakesh Dubey)
Dubey asserts the portal’s global uniqueness [14], a claim not referenced or contested by any other participant, making it an unexpected point that remains unverified within the discussion.
Different emphasis on ethical versus trust dimensions
Speakers: Arvind Kumar, Ms. Neerja Sekhar
Distinction between responsible (fairness, accountability) and ethical (environment, job creation) AI (Arvind Kumar) Trust as entry ticket covering privacy, security, transparency, accountability (Neerja Sekhar)
While both address accountability, Kumar expands the discussion to broader ethical considerations (environment, job creation) that Neerja’s trust framework does not explicitly include, revealing an unexpected divergence in the scope of what constitutes a trustworthy AI system [194-199][140-145].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates in IGF and UN panels reveal divergent but complementary focus on ethics versus trustworthiness, underscoring distinct policy strands within AI governance frameworks [S59][S68][S69].
Overall Assessment

The panel largely concurred on the importance of scaling AI innovation and supporting startups, but diverged on the primary mechanisms—digital platforms, physical incubation centres, GCC‑driven co‑creation, and trust‑testbed‑traction frameworks. The most pronounced disagreements centered on where the main bottleneck lies (integration vs responsible/ethical compliance) and how best to structure ecosystem support.

Moderate disagreement: while there is consensus on the end goal, the differing strategic emphases suggest that coordinated policy will need to reconcile digital, physical, and GCC‑based approaches, and align definitions of trust, responsibility and ethics to avoid fragmented interventions.

Partial Agreements
All speakers share the overarching goal of scaling AI innovation and strengthening the Indian AI startup ecosystem, but each proposes a different primary lever – a digital portal, physical incubation centres, GCC‑driven co‑creation, joint accelerator programmes, or a trust‑testbed‑traction framework – to achieve that goal [11-20][165-180][36-43][95-103][140-149].
Speakers: Sh. Rakesh Dubey, Arvind Kumar, Sh. Bala MS, Ms. Geetika Dayal, Ms. Neerja Sekhar
Overview of STPI’s digital portal with incubator resources, product marketplace, hiring hub, and lifecycle support (Rakesh Dubey)
STPI’s nationwide network of 70 centres, incubation services, seed funding, and market‑access initiatives (Arvind Kumar)
GCC growth outlook, shift to R&D, co‑creation model, and need for data, infrastructure, and enterprise validation for AI start‑ups (Bala MS)
Collaboration with GCCs for market access, joint accelerators, and scaling AI innovations (Geetika Dayal)
NPC’s three‑part framework for AI start‑ups – trust, testbeds, traction – and productivity‑driven outcomes (Neerja Sekhar)
Takeaways
Key takeaways
STPI has built a comprehensive digital portal that offers incubator resources, a product marketplace, a hiring hub, and end‑to‑end lifecycle support for startups.
STPI’s physical presence spans 70 centres (including 24 domain‑specific entrepreneurship centres) across Tier‑2/3 cities, providing incubation, seed funding, market access and infrastructure services.
Global Capability Centers (GCCs) are evolving from cost‑center models to R&D and innovation hubs; they can act as bridges that provide data, compute, enterprise validation and co‑creation sandboxes for AI startups.
A co‑creation model—where startups collaborate with GCCs rather than act merely as vendors—is essential to shorten the pilot‑to‑production cycle and achieve scale.
Five structural pillars are needed to scale AI innovation: knowledge and capability building, resource access, market validation, funding access, and ethical/responsible AI.
NPC proposes a three‑part framework for AI startups: Trust (privacy, security, accountability), Testbeds (real‑world sandboxes), and Traction (moving from pilots to full‑scale deployment).
Responsible AI focuses on fairness and accountability, while ethical AI adds considerations of environmental impact and job creation; both are prerequisites for trust and scalability.
Successful startup case studies (drone solutions, AI‑driven diagnostics, cybersecurity platforms, oncology treatment planning) illustrate how STPI support—early validation, funding, mentorship, and market exposure—translates into measurable impact.
MoUs were signed between STPI and NPC, and between STPI and TiE Delhi NCR, formalising collaborative commitments to strengthen the AI startup ecosystem.
Resolutions and action items
Signing of MoU between STPI and National Productivity Council (NPC) to collaborate on AI ecosystem development.
Signing of MoU between STPI and TiE Delhi NCR to expand joint accelerators, scaling of the Samarth program, corporate challenge initiatives, export readiness and AI benchmarking reports.
STPI to continue enhancing its digital portal with additional features (e.g., product marketplace, hiring hub) and to nurture co‑creation platforms for GCC‑startup interaction.
Stakeholders (STPI, GCCs, TiE, NPC) agreed to move from isolated programs toward a coordinated strategy for AI innovation scaling.
Commitment to develop joint accelerators and expand the Samarth initiative to provide deeper mentorship, market access and patient capital to startups.
Unresolved issues
How to ensure consistent, large‑scale access to high‑quality data sets and compute resources for AI startups across different regions.
Specific mechanisms for operational and organizational readiness within enterprises to adopt AI solutions beyond pilot projects.
Details of the joint Intellectual Property (IP) framework between STPI, GCCs and startups remain under discussion.
Implementation plan for nationwide testbeds and sandbox environments to support the NPC ‘trust‑testbeds‑traction’ framework.
Clear pathways for scaling startups from Tier‑2/3 incubators to global markets through GCCs are still being defined.
Suggested compromises
Adopting a co‑creation model that positions startups as partners rather than pure vendors, balancing the needs of startups for market access with GCCs’ risk‑aversion.
Proposing a joint IP framework that protects startup innovations while allowing GCCs to integrate solutions—presented as a work‑in‑progress compromise.
Shifting from competitive, siloed programs to collaborative, coordinated initiatives (e.g., joint accelerators, shared benchmarking) to address resource duplication.
Thought Provoking Comments
This portal is, I think, one of its kind portal, not just in India, but across the world. It includes a product marketplace, a hiring hub, and allows startups to post products and interact directly with viewers.
Introduces a comprehensive, integrated digital ecosystem that goes beyond traditional incubation, positioning STPI as a global‑scale facilitator for startups.
Set the foundation for the discussion by highlighting a concrete tool that can address many of the ecosystem challenges later mentioned. It prompted subsequent speakers to reference the portal’s role in validation, market access, and scaling.
Speaker: Sh. Rakesh Dubey
The scale is determined by the way your AI gets integrated into the global organization… the real challenge is institutionalisation, not the technology itself. GCCs act as the bridge and the co‑creation model is the only way AI startups can move from pilot to production.
Shifts focus from pure technology or funding to the organisational integration gap, proposing Global Capability Centers (GCCs) and a co‑creation model as the solution.
Created a turning point in the conversation, moving it from generic AI hype to concrete mechanisms for scaling. It sparked references from Geetika Dayal and Neerja Shekhar about ecosystem bridges and later influenced the MOU discussion on GCC‑startup collaboration.
Speaker: Sh. Bala MS (Strat Infinity)
There are five structural pillars needed for scaling innovation: knowledge and capability building, resource access, market validation, funding access, and ethical & responsible AI.
Provides a clear, systematic framework that synthesises the many strands of the discussion into actionable categories.
Offered a shared vocabulary that other speakers (e.g., Neerja Sekhar) used to structure their remarks. It guided the audience toward thinking about holistic ecosystem design rather than isolated interventions.
Speaker: Ms. Geetika Dayal
My three‑part framework for startups and ecosystem builders: trust, testbeds, and traction. Trust is the entry ticket; testbeds bridge promise and proof; traction turns pilots into scale.
Distills the scaling challenge into three concrete steps, linking ethical AI, real‑world experimentation, and market adoption.
Re‑focused the dialogue on practical implementation tools (trust mechanisms, sandbox environments) and reinforced the earlier co‑creation/GCC ideas. It also prompted the audience to consider concrete policy levers for each pillar.
Speaker: Ms. Neerja Sekhar
Responsible vs. ethical: ethical is about the CEO’s attitude toward environment, jobs, etc.; responsible means fairness, lack of bias, and accountability – e.g., who is liable if a driverless car causes an accident?
Clarifies two often‑confused concepts, introducing accountability as a core component of responsible AI, which is critical for large‑scale adoption.
Deepened the conversation on AI governance, prompting later speakers to stress accountability and safety in their frameworks. It also aligned with Neerja’s emphasis on trust and responsible AI.
Speaker: Arvind Kumar
The support we received through STPI’s Scout 2021 program was not just funding, it was validation. That early validation gave us the confidence to push forward.
Highlights the non‑monetary value of ecosystem support—validation and credibility—which is often overlooked in policy discussions.
Humanised the earlier technical discussion, reinforcing Rakesh Dubey’s claim about the portal’s value and providing a real‑world example of how early ecosystem touchpoints translate into growth.
Speaker: Devika Chandrasekaran (Co‑founder, Useless Innovations)
Overall Assessment

The discussion was driven forward by a series of conceptual pivots: first, the introduction of STPI’s integrated portal set a concrete baseline; then Bala’s articulation of the integration gap and the co‑creation model reframed the problem from funding to institutionalisation. Geetika’s five‑pillar framework and Neerja’s trust‑testbed‑traction model supplied actionable structures that the audience could rally around, while Arvind’s clarification of responsible versus ethical AI added depth to the governance debate. Finally, the founder’s testimony grounded the policy‑level ideas in lived experience. Together, these comments transformed the session from a ceremonial overview into a nuanced dialogue about how infrastructure, governance, and partnership models must align to scale AI innovation in India.

Follow-up Questions
How can AI startups obtain real, high‑quality data sets and the necessary compute infrastructure to develop and validate their models?
Bala highlighted that AI startups need access to real data and infrastructure, and identified this as a key gap that GCCs could help bridge.
Speaker: Bala MS
What should the design and governance of co‑creation platforms and enterprise sandboxes look like to accelerate the pilot‑to‑production cycle for AI startups?
He emphasized that co‑creation platforms are essential for reducing time to market and asked for concrete models to operationalise them.
Speaker: Bala MS
How can a joint intellectual‑property (IP) framework be structured between startups, GCCs, and other ecosystem partners?
Bala mentioned that a joint IP framework is under discussion and needs clarification to enable effective collaboration.
Speaker: Bala MS
What steps are needed to expand joint accelerators and scale up the Samarth program to support more AI startups?
She listed expanding joint accelerators and scaling Samarth as immediate priorities, indicating a need for a concrete expansion plan.
Speaker: Geetika Dayal
What metrics and methodology should be used to create AI benchmarking reports for the Indian ecosystem?
She suggested AI benchmarking reports as a tool for measuring ecosystem performance, requiring research into appropriate indicators.
Speaker: Geetika Dayal
What kinds of testbeds, sandboxes, and reference architectures are required to bridge the gap between AI prototypes and real‑world proof of concept?
Her three‑part framework (trust, testbeds, traction) calls for detailed design of test environments for startups.
Speaker: Neerja Sekhar
How can trust be built in AI solutions through privacy, security, transparency, accountability, and fairness mechanisms?
She identified trust as the entry ticket for AI adoption and called for concrete mechanisms to ensure it.
Speaker: Neerja Sekhar
What clear guidelines can differentiate ‘responsible’ versus ‘ethical’ AI, and how should accountability be assigned in AI‑driven products?
He noted confusion between these concepts and the need for clear standards, especially regarding liability for AI outcomes.
Speaker: Arvind Kumar
What are the root causes of the operational‑readiness gap that prevents enterprises from scaling AI beyond pilots, and how can they be addressed?
He pointed out that the main barrier is organizational readiness, not technology, indicating a need for research into change‑management strategies.
Speaker: Bala MS
How can GCCs effectively serve as pathways that connect AI startups with global enterprises to achieve scale?
He described GCCs as bridges between innovation velocity and enterprise scale, requiring models for integration.
Speaker: Bala MS
What productivity metrics and benchmarking models should NPC develop to quantify the impact of AI startups on national productivity and GDP?
She emphasized NPC’s role in measuring outcomes such as quality, efficiency, and reliability to drive productivity gains.
Speaker: Neerja Sekhar
How can the ecosystem move from isolated programs to a coordinated, collaborative strategy that maximises impact?
She called for a shift away from competition toward collaboration, implying the need for a unified roadmap.
Speaker: Geetika Dayal
What mechanisms can improve market access for AI startups through GCCs and large enterprises?
She identified market access as a major challenge for founders, suggesting research into partnership models.
Speaker: Geetika Dayal
What structured capability‑building programmes are most effective for founders to transition from innovation to market readiness?
She highlighted mentorship, market access, and funding levers, indicating a need to design comprehensive capability programs.
Speaker: Geetika Dayal
How can the STPI portal’s product marketplace and hiring hub be further enhanced to better serve startups and talent?
He described existing features and expressed openness to additional functionalities, implying further development research.
Speaker: Rakesh Dubey

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trustworthy AI Foundations and Practical Pathways


Session at a glance: Summary, keypoints, and speakers overview

Summary

The panel examined how the shift from general hardware to “general software” (AI systems that can replace many specialised applications) is reshaping economies and raising safety concerns [41-44][49-60]. Alok argued that early computers required separate machines for each task, but modern AI aims to perform diverse functions within a single software layer, a change comparable to the historic revolution brought by general-purpose hardware [20-27][35-40][41-44]. He warned that this transition threatens existing business models, citing the collapse of web-design firms, novel-writing services, and ad-driven content sites as examples of industries rendered obsolete by AI-generated content [62-66][73-81][82-90].


Devayan highlighted the core problem of aligning AI behaviour with human expectations, framing it as a conundrum of defining and quantifying risk [149-154][155]. He defined risk as the combination of likelihood and severity of an undesirable outcome, using aviation as an illustrative case [175-184]. He emphasized that risks vary by context such as education or healthcare, and that existing global frameworks often miss challenges unique to India, like linguistic diversity and poor connectivity [186-206][207-209].


Anirban described the ASTRA (AI Safety, Trust, and Risk Assessments) database developed with AICSTEP, which catalogs 37 risk types contextualised for India [213-218][224-227]. The taxonomy distinguishes social risks (e.g., linguistic bias) from frontier risks that are hard to observe, such as power-seeking AI systems that could act autonomously [250-259][260-264]. Infrastructure exclusion, illustrated by AI applications failing in low-connectivity regions, is presented as a concrete social risk tied to deployment conditions [267-274]. The team stresses that mitigation is especially difficult because measures can reduce utility and must be tailored to specific contexts [282-289]. They plan to expand the database beyond education and finance to sectors like agriculture, aiming to empirically ground probability estimates for each risk [293-294].


Overall, the discussion concluded that while general AI software promises transformative benefits, careful, context-aware risk identification and mitigation (exemplified by the ASTRA effort) are essential to prevent economic disruption and safety hazards [41-44][149-154][213-218][282-289]. The panel underscored the need for ongoing collaboration to align AI capabilities with societal values and to build robust safeguards as the technology matures [110-119][170-176][281-286].


Keypoints

The emergence of “general-purpose” AI software is a paradigm shift comparable to the historic move from specialized hardware to universal computers, and it threatens to upend entire business models.


Alok traces the evolution from single-purpose machines to a single hardware platform running diverse software, and now to AI that can replace many applications in one system [41-48]. He argues that this will collapse traditional software-driven economies, citing the disappearance of web-design firms, novel-writing services, and ad-based content sites as concrete examples [55-62][63-66][73-80][82-100].


Defining and managing AI risk is difficult because “risk” depends on both likelihood and severity, and because AI alignment with human intent is fragile.


Devayan highlights the challenge of articulating risk, proposing a definition based on probability and impact [170-176][181-186]. He then raises the alignment problem: ensuring the system behaves as users expect rather than fulfilling literal, potentially harmful queries [149-152].


India faces unique, context-specific AI safety challenges that are not captured by existing global frameworks.


Anirban explains that Indian deployments must consider factors such as linguistic diversity, low network connectivity, and large-scale technology adoption, which create “contextual blindness” in standard risk databases [206-209][211-218][224-229].


The team has created the ASTRA risk taxonomy and database to catalogue Indian-specific AI hazards, distinguishing “social” risks (e.g., linguistic bias, infrastructure exclusion) from “frontier” risks (e.g., power-seeking or rogue systems).


The taxonomy maps risks to development, deployment, or usage stages and records intent (intentional vs. unintentional) [250-259][260-268]. It currently covers education and financial lending, with plans to expand to agriculture and other sectors [291-294].


Mitigating AI risks is intrinsically hard; safeguards can be overly restrictive and erode utility, demanding a careful, evidence-based approach.


Anirban stresses that mitigation measures are often context-specific, may reduce system usefulness, and therefore require rigorous empirical grounding [282-289][291].


Overall purpose/goal:


The discussion aims to surface the transformative impact of general-purpose AI software, articulate the multifaceted risks (especially those unique to the Indian context), and present a concrete response, the ASTRA risk taxonomy, for systematically identifying, categorising, and eventually mitigating those risks.


Tone evolution:


Alok’s opening is energetic and speculative, mixing optimism about AI’s revolutionary potential with alarm about economic disruption. As the conversation shifts to Devayan and Anirban, the tone becomes more analytical and cautionary, focusing on precise definitions of risk, alignment concerns, and methodological rigor. By the end, the tone settles into a pragmatic, problem-solving stance, emphasizing careful mitigation and the need for context-aware frameworks.


Speakers

Alok – Area of expertise: AI, general-purpose software, economic impact of AI. Role/Title: Shri Alok Prem Nagar, Senior Official, Ministry of Panchayati Raj, Government of India [S4].


Devayan – Area of expertise: AI alignment and risk discussion. Role/Title: (not specified).


Anirban – Area of expertise: AI safety, risk taxonomy, mitigation strategies in the Indian context. Role/Title: Scientist/Researcher at Ashoka University, contributor to the ASTRA risk database project [S2].


Additional speakers:


– None.


Full session report: Comprehensive analysis and detailed insights


The panel began with Alok drawing a historical parallel between the evolution of computing hardware and today’s wave of general‑purpose AI software. He reminded the audience that early machines were single‑purpose—like a hammer that could only hammer or a car that could only drive—and early computers followed the same pattern, each built to solve a narrow problem such as differential equations or curve‑fitting. The breakthrough arrived when Alan Turing showed that a universal machine could run many different programs, giving rise to general‑purpose hardware that powered the information revolution.


Alok then turned to the present, noting that the current ChatGPT model still fails on a specific example that the next‑generation Gemini system will fix. He described today’s fixes as “band‑aids” and warned that they do not address the deeper issue. He argued that we are now witnessing a second, even more disruptive shift: general‑purpose AI software that can replace dozens of specialised applications—Excel, PowerPoint, design tools, and more—within a single conversational interface. This shift, he said, will trigger massive economic upheaval comparable to the original hardware revolution, because the scarcity that once justified whole software‑driven business models is disappearing. He illustrated the fallout with concrete examples: the rapid disappearance of Indian web‑design agencies that built sites for small clients; the erosion of novel‑writing services and even the film industry, which is questioning the value of investing in film production; and the collapse of ad‑driven content sites as users obtain answers directly from models rather than visiting webpages. He added that click‑through rates for top‑ranking pages have fallen from “one in six to one in seven,” a change he characterised as a decline of multiple orders of magnitude.


Alok also highlighted a positive side: non‑technical users can now build simple applications simply by describing what they want, turning AI into a kind of “general hardware” that will spur new kinds of machines built to run this universal software.


Shifting to safety, Devayan framed the core challenge as an alignment problem: ensuring AI behaviour matches human expectations rather than merely satisfying literal, potentially harmful requests. He asked “what is alignment?” and emphasized the danger of ambiguous natural‑language prompts that can lead an AI to fulfil a request in a technically correct but socially disastrous way. Devayan defined risk as the product of likelihood and severity of an undesirable outcome, illustrating the concept with the familiar aviation‑safety example where low probability is offset by high severity. He cited the “Air Canada” incident as a concrete illustration of how AI failures can cause real losses of money or property, and in other settings even of liberty or life. Devayan noted that risk perception varies across domains such as education, healthcare and finance, and that existing global frameworks often overlook challenges unique to India, including linguistic diversity and unreliable network connectivity.
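
Devayan’s definition, risk as the combination of the likelihood and the severity of an undesirable outcome, can be made concrete as a small expected‑loss sketch. The function name and the example numbers below are illustrative assumptions, not figures from the session:

```python
# Devayan defines risk as the combination of likelihood and severity.
# A simple expected-loss reading of that definition; names and numbers
# here are illustrative only.

def risk_score(likelihood: float, severity: float) -> float:
    """Risk as probability of an undesirable outcome times its impact."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    if severity < 0:
        raise ValueError("severity must be non-negative")
    return likelihood * severity

# Aviation-style example: a crash is extremely unlikely but catastrophic,
# so its risk remains material despite the tiny probability.
aviation = risk_score(likelihood=1e-7, severity=1e9)
# A frequent but trivial failure can carry less risk than a rare disaster.
minor_glitch = risk_score(likelihood=0.25, severity=8.0)
print(aviation, minor_glitch)
```

The aviation case illustrates why neither factor alone suffices: a near‑zero likelihood can still yield a large risk score when severity is extreme.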


Anirban then presented the team’s response: the ASTRA (AI Safety, Trust and Risk Assessments) database, an India‑focused risk catalogue built in partnership with the AICSTEP Foundation. He described a seven‑step development process—resource identification, bottom‑up research, ontology creation, taxonomy design, validation, documentation, and public release. ASTRA contains a taxonomy of 37 risk types organised along three dimensions: (a) stage of manifestation (development, deployment, usage), (b) intent (intentional vs. unintentional), and (c) risk type (social vs. frontier). Social risks are observable harms such as linguistic bias, where an English‑trained model under‑performs on Hindi queries, or infrastructure exclusion, where poor connectivity stalls AI applications for farmers. Frontier risks are harder to observe, exemplified by a rogue‑trading‑firm scenario in which an AI‑driven system autonomously executes massive loss‑making transactions—an illustration of power‑seeking behaviour.
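
The three classification dimensions described above (stage of manifestation, intent, and risk type) can be sketched as a simple record structure. The enum values follow the session’s description, but the class and field names are assumptions for illustration; the real ASTRA schema may differ:

```python
# Illustrative sketch of the three ASTRA classification dimensions the
# panel describes. Class and field names are assumptions; the actual
# ASTRA database schema may differ.
from dataclasses import dataclass
from enum import Enum

class Stage(Enum):            # when the risk manifests
    DEVELOPMENT = "development"
    DEPLOYMENT = "deployment"
    USAGE = "usage"

class Intent(Enum):           # whether the harm is deliberate
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class RiskType(Enum):
    SOCIAL = "social"         # observable harms, e.g. linguistic bias
    FRONTIER = "frontier"     # hard to observe, e.g. power-seeking systems

@dataclass
class RiskEntry:
    name: str
    stage: Stage
    intent: Intent
    risk_type: RiskType
    sector: str

# Two examples drawn from the session's discussion.
linguistic_bias = RiskEntry(
    name="Linguistic bias (under-performance on Hindi queries)",
    stage=Stage.USAGE, intent=Intent.UNINTENTIONAL,
    risk_type=RiskType.SOCIAL, sector="education",
)
rogue_trading = RiskEntry(
    name="Autonomous loss-making trades (power-seeking behaviour)",
    stage=Stage.DEPLOYMENT, intent=Intent.UNINTENTIONAL,
    risk_type=RiskType.FRONTIER, sector="financial lending",
)
print(linguistic_bias.risk_type.value, rogue_trading.risk_type.value)
```

Modelling each risk as a tagged record like this is what allows a catalogue of 37 risk types to be filtered by sector, lifecycle stage, or observability, as the panel describes.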


Anirban highlighted that ASTRA currently covers the education and financial‑lending sectors, with plans to expand to agriculture and other domains. He credited Ananya as a primary contributor and noted that the database is publicly available on arXiv and linked in the accompanying paper. He warned that mitigation is intrinsically difficult: safeguards are often context‑specific, can erode utility, and must be empirically grounded to avoid “over‑mitigation” that kills a system’s usefulness.


In conclusion, the panel underscored that the advent of general‑purpose AI software heralds a transformative era comparable to the birth of the universal computer, but it also brings profound economic, social and safety challenges. The creation of the ASTRA risk database represents a concrete, India‑centric effort to map and categorise hazards, distinguishing observable social risks from elusive frontier threats and linking them to lifecycle stages and intent. Mitigation remains hard; safeguards must balance safety with utility and be grounded in empirical assessments of likelihood and severity. Alok warned of sweeping economic disruption, Devayan emphasized the need for precise, metric‑based risk definitions, and Anirban offered a concrete, India‑centric taxonomy (ASTRA) as a first step toward responsible AI governance.


Session transcript: Complete transcript of the session
Alok

I give this example because I’m fairly confident that when you look it up and when you try it yourself it will work. And I know it will work, by the way. That is, it will fail rather. On the current versions of ChatGPT, it will not fail, by the way. In the next generation, I do some stuff with Google for example, it won’t fail in the next generation of Gemini anymore. Because they’re putting a lot of effort into fixing this one error. They haven’t fixed the underlying problem. They saw some presentations of people like me pointing this stuff out so they’ve just put a band-aid on top. Now we can’t run life on band-aids.

Band-aids is what? Band-aids is students mugging up one answer before the exam so they get the marks for it. That’s not real learning, by definition. The problem is that we’ve built this system which is our attempt to have general software. And we don’t quite know how to do it. We don’t quite know how to handle it. So let me clarify what I… I’m going to say something incredibly stupid and then I’ll bring it into place. We were talking about this not too long ago. A long time ago you had machines that could do one thing. A hammer is a hammer, a car is a car, a door is a door. You can’t use one as the other.

I’m saying something that sounds incredibly stupid, but think about it. Why don’t you need two separate computers, one to run Excel and one to run PowerPoint? How come both run on the same machine? This is not obvious at all. We’re just used to it, so it seems obvious, but it wasn’t obvious. In fact, the first few computation machines that were made, if you go back, look at all of this Vannevar Bush and even before that Charles Babbage, all of these names one reads in history books or whatever, you’ll see they had differential analyzers and this, that and the other. Oh, this machine, it can add. That machine, it can solve differential equations. This other machine.

It can fit curves. This other machine. It can. do this mapping task. This idea that you could have one machine which could do everything was completely ridiculous because there’s only one thing in the universe that we know of that can do that and it’s the human brain. The human brain is a singular object that can retrain itself to play billiards, to arrange chairs in a room, to present, to drive a car, it can do all of these things. So due to a bunch of very clever people like Alan Turing and co, we figured out that wait a minute, we can have one computer, we can build this one machine. I mean think of it just from a manufacturing point of view, like jackpot.

We can build one machine and it can do all the things. All we need to do is we need to have different software, one for each task. So we’ll have one software for Excel, one software for PowerPoint and the same physical machine will be able to run both. So we built general hardware. And that worked for decades and the fact that we had general hardware to the computation and information revolution. Now for the first time, instead of just having general hardware, that is one machine that can run all software, we have general software, which is you don’t need PowerPoint and Excel separately. You can have one software which you tell it what to do and it will do the job of PowerPoint and you tell it something else and it will do the job of Excel also.

That’s what we are trying to build with AI at the end of the day. Going from general software to general hardware. And as we know, this edit, this ability that we got when we built a general purpose machine, before you needed to spend all this money and build separate machines for every task, and the moment you had a single machine that could do all things, that led to an absolutely massive change. It was a massive revolution. Now that you have general software coming in, right? Once we learn how to do that, think of how the world is going to change. Software companies which used to be, there’s a very interesting graph that I really should have put here, which is if you’re manufacturing something, there’s a burn rate.

So you have an increase in the amount of money you have to invest in your company initially. And then if you manufacture 10 cars, you have a certain amount of money you need to invest. If you want to increase the number of cars you manufacture, I’m talking toy cars, I’m not rich enough to manufacture real cars. But as you increase the number of toy cars you’re manufacturing, your costs go up sort of linearly. And there are bumps every time you do a new round of R&D or something. Software companies don’t do that, right? Software companies, you have this huge expense at the beginning to build everything up. And then once you have that, your burn rate is relatively low.

Selling 50,000 units of a software and selling it for $1,000, selling 2,000,000 units of a software, isn’t going to make a material change in the amount of money you’re investing every day. That entire economy is now going to be gone because you don’t need that kind of investment in software anymore. And this has led to multiple real economies collapsing. So I’ll give you two examples just off the top of my head. Web design companies. There were thousands and thousands of them all over India. You know, a group of college students get together, they say, look, we’ll build websites for people. And these were all micro and medium industries, maybe employing anywhere from 10 to 50 people. That economics is just gone.

We all learned when we were small, right, the definition of economics? Economics is the study of the allocation of resources under conditions of scarcity. What if it’s not scarce? There’s no economics of air, despite the fact, again, that we’re in Delhi. There’s no econ of air. But similarly now, the econ of, maybe, writing novels is gone, right? You saw what happened with Seedance recently, just 24 hours ago. The movie industry is worried. Why should people invest in making movies? If I can write: you know what, I want a movie like Sherlock Holmes, but I want Salman Khan to be the main character, and I want me to be the side character, and I want this to be the story. Make me a two-hour movie.

I press enter, movie’s done, right? If that comes to pass, then that entire economics is just gone, right? That was me talking about the future. Let’s talk about right now. Right now, at this very moment, a large portion of the internet is collapsing, because a large portion of the internet used to run on ads, right? So if I have a recipe website, what do I do? I put some ads on it. You visit my website to read my recipe for blueberry cupcakes or whatever, you get that ad displayed to you, and I get some money from, like, the Google ads you’ve seen at the bottom of pages and so on, right?

So I get some money off of that. The problem is that now who’s going to come to my stupid website? They’ll just ask ChatGPT or Gemini for that and they’ll get it and nobody’s going to come to my website. Generally speaking, if you got your search engine optimization correct and you were on the first page of Google, your click rate was one in six, okay? This was the official statistic. That is, let’s say I am a top blueberry cupcake chef. I don’t, that’s definitely not a thing, but let’s say I am. I’m very proud of my blueberry cupcakes. And I’ve made my website and everyone agrees it’s a great site, so when you search for blueberry cupcake recipes, let’s say I’m one of the top 10 and so I would normally have, because people don’t just click one link, they usually go to two or three, I would have a one in six chance of getting clicked and I would make some money off of it.

That number in the past year has gone from one in six to one in 1,500. That’s multiple orders of magnitude. And all of these websites: ChatGPT and Gemini and DeepSeek and all of these people got the data from these websites only. But now no one will go to these websites, and they’re all dying. This is even true of open source tools. Tailwind, which is a major CSS platform, had to let go of a lot of its engineers, because what’s happening is these tools have eaten all the open source code, and people are no longer going to the open source libraries to get it. They’re just saying, make me this thing that does that, and it does it.
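The size of that drop can be checked with simple arithmetic. Only the one-in-six and one-in-1,500 rates come from the talk; the traffic figure below is hypothetical:

```python
# Back-of-envelope: how much does click traffic fall when the click rate
# goes from 1-in-6 to 1-in-1500? Traffic number is hypothetical.
searches_per_day = 60_000

old_clicks = searches_per_day / 6      # 10,000 clicks/day
new_clicks = searches_per_day / 1500   # 40 clicks/day

drop_factor = old_clicks / new_clicks  # 1500/6 = 250, independent of traffic
print(f"clicks fell by a factor of {drop_factor:.0f}")
```

A factor of 250 is a bit over two orders of magnitude, which matches the speaker’s "multiple orders of magnitude" framing.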

Of course there are positive sides. There are non-technical people who can now just say things to the system, and it will build them a nice little app, which is great. But simultaneously we are destroying much of the infrastructure and much of the information landscape that made this possible in the first place. So we have to be exceedingly careful about that. Let me poke at that last sentence I said, because I think there’s a really important thing when we talk about correctness, trustworthiness, and all of this. Which is: in many ways, you know, we had machine learning before 2020 also, right? We were doing classification. We were doing all sorts of clever things.

What really changed with ChatGPT was that anyone could use it. It was the genius of the interface. You had the simple chatbot. You didn’t need to program anymore. You could just say things and it would do them, right? And it is the ease of that interface which changed everything about how we interact with these powerful AI systems. But there is an inherent danger in that. What is the danger? Well, we didn’t build computer languages, with all their brackets and, you know, weird expressions, for fun. Okay? If we could have written computer programs in English and had them run, we would have stuck with that only.

Why create all of these complicated looking languages where if I miss a semicolon, my computer is going to turn into a peacock, right? We did it that way because our normal language is too ambiguous. There are too many ways in which we say things where we assume you already know what I’m talking about. It’s too easy to miscommunicate, right? The teacher told the student that he was going to the fair. Who’s going to the fair, the teacher or the student? This is obviously a very stupid example, but we have thousands and thousands of ambiguities in our language which make it exceedingly difficult to understand what the other person even wants. That’s why we had computer languages in the first place, to disambiguate.

Now we are saying, no need. I will just give the problem description, and this general-purpose software is just going to basically custom-solve it. Think about how ambiguous your instructions are. This is deadly, right? We have literally got stories about this, about how easy or hard our instructions are to follow. We have cautionary tales, the genies and the monkey’s-paw storylines, right? Yeah, yeah, yeah. We’ll switch at 15, don’t worry. In those storylines, someone says, I want to be the richest person in the world, or I want to be the most beautiful person in the world. And what happens immediately after that is it kills everyone else. And it says, I have technically, correctly satisfied your query.

Everything you said, I have done. And so when we give a query, we want the machine to basically align with my expectations.

Devayan

That’s what alignment is. That’s what that term means, right? That we want it to align with my expectations of how this stupid thing is going to act. That leads us to the following conundrum. I have the system, and it’s going to do certain things. I worry that it may do certain bad things. How do I define the risk of it getting into this bad thing and doing this bad thing? Do we have a clear way to define risk in our context? And for that, I’ll hand over to Anirban. Alright, I can take the clicker. So, I will keep it slightly brief and I’m going to skip over some slides in the interest of time.

We have looked at different aspects. Three of us are at Ashoka; we work together on different aspects of risk: safety, harm reduction, risk management, and trying to quantify and understand them.

This is just the map of India part. I’ll get back to India as a question. India is a big nation, as we all know. But there’s a lot of technology, and we have a tendency to solve our questions of scale using a lot of technology. That naturally introduces many challenges. You’re on the fifth day of the summit. I don’t need to tell this to you. All of you have seen different examples of how empowering this technology could be and why it’s important to be a bit skeptical about its deployment, because that could introduce new kinds of risks. But what is a risk? It’s hard to quantify that and define it. Risks and harms would mean different things in different contexts.

Our goal as a team was to understand and try to make sense of these risks. They are hard to define. One definition that we’ve chosen is: the probability of an undesirable outcome, characterized by two things, its likelihood and its severity. I think the airplane example is a good example of that. Okay, the slide is coming back. Basically, airplanes are unsafe, all of you know that. Most of you also take airplanes. That’s because the probability of something happening is low; that’s where likelihood comes in. But airplanes are dangerous; that’s why we like watching aircraft investigations, because of severity.

Those two examples oversimplify what I mean here. These definitions also need to be grounded in context, context such as where you’re deploying these systems: education, healthcare, some of the many areas that have been discussed in many of these panels and discussions across different halls here. I’m going to keep it brief. But these risks go beyond hype. There are real challenges and real costs that everyone has to pay when such systems are deployed at scale without taking risks into account. Some slides got cut off, but one example is from Air Canada, and there are many such examples. These are examples of real people suffering loss of life, loss of liberty, loss of money and property because of AI safety risks.
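The panel’s working definition, risk as likelihood times severity, can be written down directly. The function name and all numbers below are hypothetical, chosen only to mirror the airplane example:

```python
# Minimal sketch of the panel's definition: risk is the probability of an
# undesirable outcome weighted by how bad that outcome is.
def risk_score(likelihood, severity):
    """Expected harm: probability of the event times its cost if it occurs."""
    return likelihood * severity

# Rare but catastrophic (the airplane case) vs. common but trivial.
plane_crash = risk_score(likelihood=1e-7, severity=1_000_000)
minor_delay = risk_score(likelihood=0.3, severity=1)
print(plane_crash, minor_delay)
```

The point of the airplane example falls out of the arithmetic: a tiny likelihood can still carry a meaningful risk score when severity is large, and vice versa.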

So we have taken a life cycle view of AI safety risks and tried to create a taxonomy. It’s a comprehensive taxonomy of 37 different kinds of risks. We have launched it earlier today and it’s now available online. I’m just going to give you a brief overview of the kind of work we have done towards that. Again, here are some examples of what is a risk in our definition or not. So what is not a risk is physical destruction of infrastructure. It is an AI related risk but we are not talking about that. Our scope is very limited. There are many global frameworks that talk about these kind of things. You have some coming from Singapore in Asia, we have Europe, we have the US.

But they do not take into account the main challenges that we see in India. India has scale, India has linguistic diversity, and India also has certain problems like low network connectivity. If you, for example, are deploying AI in a space which is safety critical, but you lose network and someone’s life depends on it, then that is another kind of challenge that has to be uniquely defined in India. We see that many of these challenges are not covered in international repositories and risk databases like these. What they have is what we call contextual blindness, where they are not recognizing the social and socio-technological challenges.

India, again, as you know, deploys large amounts of technology. We have larger technology systems than any country in the world. UPI, EVM, Aadhaar are just simple examples of that. The safety risk database that we have launched is in partnership with the AICSTEP Foundation, and it’s called ASTRA: AI Safety, Trust, and Risk Assessments. We’ve tried to create a fun acronym that is easy to remember. ASTRA is now formally launched. Some of us worked on it; Ananya, who’s in the audience, is also one of the contributors. It is a seven-step process. And maybe, Anirban, you could just quickly walk through this process and how ASTRA was built.

Anirban

Yeah, hi everyone. So both Devayan and Alok did a good job summarizing the overall work. So these are a bit of technical details.

I’ll probably skip most of it. Basically, what is there to understand is that, if you think about it simply, it’s a database of risks, right? But they are heavily contextualized in the Indian context. A one-formula-fits-all kind of narrative does not work in AI safety. This is our claim, and it is in line with many prominent researchers. So what we started with was resource identification, and here’s where our work differs from many of the global frameworks that people have built. When it comes to resource identification, we had to actually do bottom-up research of how and where exactly these risks occur in the Indian context.

We have primarily education and finance as of now, but we started an exhaustive study of how exactly these risks manifest across sectors. And the final step of this is a comprehensive risk taxonomy and ontology. Taxonomy is basically categories and subcategories of risks, which you will find in many global frameworks, but what is in our database is an illustrative set of use cases. You have a risk use case which you can go and click: if you are in the financial lending sector, you can click and see what kind of risk has happened in the Indian context, exactly related to our language, our caste, whatever kind of variables we care about.

So these are some of the basic steps through which we have worked on building ASTRA. There are two parts to it, very briefly. One is the causal taxonomy: through this database, we also tell you at which stage the risk occurred. It can occur during development. For example, bias in AI, we all know about it; it happens because of biased training data, that is one of the sources. So it manifests in development. Deployment: let’s say you take an AI system which was built in the US and you deploy it in an Indian setup where most of the people speak Marathi. This is a deployment problem; it manifests in deployment. And usage: I take an AI system that was never meant to disseminate disinformation, but I did that as the user, I actually manipulated it. That’s in usage. Then there are stakeholders: is the AI system primarily responsible for the error or risk, or did it happen because of a deliberate end-user action? The database also tells you whether a risk is intentional or unintentional. Again, in no way do we claim that this database is exhaustive or foolproof; we are currently advancing it and expanding it to other sectors. But the target is to also tell you about these granularities around risk.
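One way to picture the kind of record being described, a risk tagged with its lifecycle stage, responsible stakeholder, and intent, is a small data structure. The field names and the example entry below are my assumptions for illustration, not ASTRA’s actual schema:

```python
# Hypothetical sketch of one ASTRA-style database record. Field names are
# assumptions; the dimensions (stage, stakeholder, intent) come from the talk.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    name: str
    category: str      # "social" or "frontier"
    stage: str         # "development", "deployment", or "usage"
    stakeholder: str   # "system" or "end-user"
    intentional: bool
    use_case: str      # illustrative incident in the Indian context

linguistic_bias = RiskEntry(
    name="Linguistic bias",
    category="social",
    stage="deployment",  # e.g. a US-built model deployed where users speak Marathi
    stakeholder="system",
    intentional=False,
    use_case="English-trained model mishandles Marathi queries",
)
print(linguistic_bias.category, linguistic_bias.stage)
```

Each entry in such a database can then be filtered by sector or stage, which is the click-through use case the speaker describes for financial lending.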

Because risk is not just one term, like Alok explained, like Devayan explained, right? You also have to look at what the intent behind it is. So there are two main categories of risk, and this is the part that we struggled with the most. By the way, ASTRA is currently available on arXiv, and you can go and read the paper. You can also take a look at the database, whose link is present in that paper. This work took us almost six months. And again, Ananya, if you could wave. Ananya is a primary contributor to this risk database that we formed. So after looking at the type of risk, we categorized. There are social risks, which are easily quantifiable, which you can easily observe.

For example, linguistic bias: an AI system trained in English does not answer Hindi queries that well. This is a typical risk which comes under social. Frontier risks are risks which are very, very difficult to observe. These are risks that we know could occur. Tomorrow AI could replace jobs, we all know about it, but how do you quantify it? Many of these risks haven’t even occurred in the Indian context; you know about them from some remote Western context that you could translate over. There’s a gut feeling that things might go wrong, but we can’t quantify them very easily. These are the kinds of risks which come under frontier. There are some examples here; I’m not going into the details in the interest of time. There are bias, exclusion and toxicity risk categories, and then in frontier you have mostly power seeking, an AI system going rogue. I’ll just quickly cite an example. I’m not naming the firm, but there’s this news about a trading firm which deployed an AI system to do quick trading according to market variables. The AI system performed very well initially, and then, without the consent of the firm, and because they were not monitoring it properly, it went rogue. It started doing transactions which were extremely lossy, and not just that, it started doing them in very high volume. This is the typical example of power seeking. Now in India, well, there might be some examples abound, but then do you really know whether this kind of risk can be easily quantified?

We don’t know what will happen. We’ll probably deploy and we’ll have to watch. So those are the kinds of risks that we have listed under frontier risk. One quick example is also human-computer interaction. We all know, I mean, sorry, there’s a student sitting here, but I’m going to say this: in most universities, students are using AI, and we know that that leads to cognitive decline and lack of critical thinking. But again, how do you quantify it? It’s very difficult. So these are frontier risks. I’m not going into the details of this. You all know about caste bias and linguistic bias of AI systems; hallucination we all know about, right?

Incorrect outputs by AI, and then infrastructure exclusion. This is one critical example, and it came up from a discussion with the XTREP team: let’s say there’s an AI system that you deploy and a farmer is trying to use it. In many regions of India there are connectivity issues. There is an internet connectivity issue and the entire app just keeps loading and buffering. It doesn’t work, right? This is a typical example of infrastructure exclusion. So again, remember, the stage of error manifestation is deployment. ChatGPT, or OpenAI for that matter, will not care about this. It’s not their job, it’s our job. When we are deploying it in context, it’s our job to take into consideration that our connectivity might be poor.

So these are some examples of social risks, and one reason why these social risks manifest at this level is that as you go to higher and higher models, they have more persuasive power, so they can manipulate you. Frontier risks I already spoke about. I’ll quickly move on to mitigation. The one quick point I want to make about mitigation is that it’s an extremely challenging task. While the database is the first step, as per our AI safety risk framework of ASTRA, mitigation, as Devayan adequately pointed out, is the hardest task that we have at hand. These mitigation measures are often not effective, they are very context specific, and there are certain kinds of mitigation measures that also lead to loss of utility.

So we have to be super careful about that, right? You put a very strong mitigation measure in place, but then that leads to a lack of utility on the user’s front. That is not a very good mitigation measure, contextually speaking. So according to this work, what we want to carry forward is: we want to empirically ground these risks going forward. What is the probability of these risks, really? And finally, we are also trying to include more and more domains. Currently it covers education and financial lending. We want to expand it very soon to agriculture and many more
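The trade-off described here, that a mitigation can cost more utility than the risk it removes, can be sketched as a toy decision rule. The scoring rule and all numbers are entirely my assumption, meant only to make the trade-off concrete:

```python
# Toy sketch of the mitigation trade-off: a measure is only worth applying
# if the risk it removes outweighs the utility it costs. Scores and the
# comparison rule are hypothetical, not from ASTRA.
def worth_applying(risk_reduced, utility_lost):
    """Return True if the mitigation's benefit exceeds its utility cost."""
    return risk_reduced > utility_lost

# A targeted fix that removes a lot of risk at small cost passes...
print(worth_applying(risk_reduced=0.8, utility_lost=0.2))
# ...while an overly strict filter that blocks legitimate use fails.
print(worth_applying(risk_reduced=0.3, utility_lost=0.6))
```

The speaker’s point is that both quantities are context specific, so the same mitigation can pass this test in one deployment and fail it in another.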

Related Resources: Knowledge base sources related to the discussion topics (13)
Factual Notes: Claims verified against the Diplo knowledge base (6)
Confirmed (high)

“Alok’s historical parallel that early computers were single‑purpose machines solving narrow problems and that Alan Turing’s universal machine introduced general‑purpose hardware.”

The knowledge base explicitly describes this evolution from specialised machines to general-purpose computers and references Turing’s work, confirming Alok’s analogy [S2] and [S61].

Confirmed (medium)

“Current fixes for AI shortcomings are merely “band‑aids” and do not address deeper issues.”

A source notes that most solutions are technological add-ons or band-aids, matching Alok’s description [S66].

Additional Context (medium)

“The next‑generation Gemini system will fix the specific example where ChatGPT fails.”

Gemini is presented in the knowledge base as a state-of-the-art model that represents a substantial leap over previous models, but the source does not detail the exact failure Alok mentions; it provides contextual support that Gemini is intended as a successor to ChatGPT [S62] and that newer Gemini versions address earlier issues [S63].

Confirmed (high)

“General‑purpose AI software will replace dozens of specialised applications such as Excel, PowerPoint and design tools within a conversational interface.”

The discussion report on AI-native business transformation highlights the shift from tool-centric interactions (e.g., Excel formulas, PowerPoint clicks) to natural-language interfaces, confirming the claim that AI can supplant these applications [S9].

Additional Context (medium)

“The economic impact of this shift will be massive, potentially causing widespread job losses across sectors.”

Anthropic’s CEO warned that AI could eliminate up to half of entry-level white-collar jobs, providing additional context to Alok’s assertion of large-scale economic upheaval [S69].

Confirmed (medium)

“Non‑technical users can now build simple applications simply by describing what they want, turning AI into a kind of “general hardware”.”

A source explicitly discusses building AI systems that move from general software to general hardware, enabling users to create applications via description, aligning with Alok’s positive outlook [S8].

External Sources (70)
S1
Open Forum: Empowering Bytes / DAVOS 2025 — Audience: Hi, good morning. My name is Anirban, I’m a scientist and a drug developer. So my question is rather, you k…
S2
Building Trustworthy AI Foundations and Practical Pathways — -Anirban Sen: Works at Ashoka University, contributor to the ASTRA risk database project. Specializes in AI safety risk …
S3
Nepal Engagement Session — -Ms. Deepika: Mentioned at the end to felicitate Mr. Alok, specific role or title not mentioned
S4
Transforming Rural Governance Through AI: India’s Journey Towards Inclusive Digital Democracy — -Ms. Deepika: Mentioned at the end of the transcript as someone called to felicitate Mr. Alok, but does not participate …
S5
https://dig.watch/event/india-ai-impact-summit-2026/nextgen-ai-skills-safety-and-social-value-technical-mastery-aligned-with-ethical-standards — We are calling them partners and collaborators because the aim and the objective is all aligned within the ecosystem of …
S6
Need and Impact of Full Stack Sovereign AI by CoRover BharatGPT — -Amish Devagon: Role/Title not explicitly mentioned, appears to be an interviewer or journalist conducting the discussio…
S7
WS #111 Addressing the Challenges of Digital Sovereignty in DLDCs — Jimson Olufuye: Apologies for the late start of this workshop. Bismillahir Rahmanir Rahim. Greetings and welcome to A…
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-trustworthy-ai-foundations-and-practical-pathways — So both Devayan and Alok did a good job summarizing the overall work. So these are a bit of technical details. I’ll prob…
S9
Discussion Report: AI-Native Business Transformation at Davos — Current interactions require remembering Excel formulas, clicking hundreds of buttons in PowerPoint, and navigating comp…
S10
Publishers lose traffic as readers trust AI more — Online publishersare facing an existential threatas AI increasingly becomes the primary source of information for users,…
S11
morning session — Risk assessment considers the likelihood of an event occurring and the severity of its consequences. Understanding these…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — The speaker describes AI as a technology that expands human cognitive capacity, likening its impact to the physical ampl…
S13
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Both speakers, despite representing different business models, agree on the need to move away from generic, large-scale …
S14
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – Assessment of severity and likelihood of human rights risks – Scoring risks based on severity and likelihood Chris M…
S15
Advancing Scientific AI with Safety Ethics and Responsibility — And I believe this is true for many other countries in the global south as well. So it’s not something very unique. Part…
S16
Driving Indias AI Future Growth Innovation and Impact — Professor Bhaskar Chakravarti emphasized the critical importance of trust infrastructure beyond technical capabilities, …
S17
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Alex Moltzau: I want to address this with an anecdote. Because I am Norwegian, I feel partly responsible here. I mean, I…
S18
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S19
World Economic Forum® — | Failure of national governance (e.g. failure of rule of law, corruption, political deadlock, etc.) Inability to govern…
S20
ANNUAL REPORT — Risks posed by the COVID-19 pandemic are unprecedented. The crisis is like no other whose impact on the global economy i…
S21
Comprehensive Report: President Trump’s Address to the World Economic Forum in Davos — This opening framing set the stage for Trump’s entire economic narrative, allowing him to position his policies as solut…
S22
(Day 6) General Debate – General Assembly, 79th session: morning session — Bassam Sabbagh – Syrian Arab Republic: Thank you Mr. President. I congratulate you on your election as President of th…
S23
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S24
Main Session | Policy Network on Artificial Intelligence — The discussion highlighted the complex and multifaceted nature of AI governance challenges. While there was broad agreem…
S25
WS #283 AI Agents: Ensuring Responsible Deployment — Despite representing different sectors (industry, government, standards), there was unexpected consensus on the need to …
S26
Building Trustworthy AI Foundations and Practical Pathways — Recognizing the need for careful balance between implementing mitigation measures and maintaining system utility
S27
Cybersecurity regulation in the age of AI | IGF 2023 Open Forum #81 — Gallia Daor:Sure. Thank you. So indeed, in 2019, the OECD was the first intergovernment organization to adopt principles…
S28
How AI Drives Innovation and Economic Growth — High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) sugg…
S29
WS #362 Incorporating Human Rights in AI Risk Management — Different socioeconomic realities and societal contexts in Global South, technologies not designed keeping those context…
S30
How can we deal with AI risks? — Long-term risksare the scary sci-fi stuff – the unknown unknowns. These are the existential threats, the extinction risk…
S31
Delegated decisions, amplified risks: Charting a secure future for agentic AI — Moderate disagreement with significant implications. While both speakers agree that current AI agent implementations pos…
S32
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S33
From principles to practice: Governing advanced AI in action — The speakers show broad agreement on fundamental goals (safety, trust, international cooperation) but significant disagr…
S34
State of play of major global AI Governance processes — Juha Heikkila:Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the …
S35
Advancing Scientific AI with Safety Ethics and Responsibility — Thanks Shyam. I think first, yeah first thing that we need to understand is how that ecosystem is and then see if certai…
S36
Strengthening the positive and mitigating the negative impacts for the environment of digitalisation regulations ( Transnational Institute) — Furthermore, it is argued that the verification of compliance with environmental and social standards, even if it may sl…
S37
WS #484 Innovative Regulatory Strategies to Digital Inclusion — This comment introduced a critical systems-level analysis that challenged the panel to think beyond technical and policy…
S38
Measuring Digital Trade — The emergence of new business models, such as online platforms, was also discussed. These platforms are becoming importa…
S39
Contents — Advancing digitalisation brings with it target conflicts and decisions on direction. We must provide a political answer …
S40
morning session — Risk assessment considers the likelihood of an event occurring and the severity of its consequences. Understanding these…
S41
Practical Toolkits for AI Risk Mitigation for Businesses — In conclusion, the analysis recognizes the immense potential of AI technology but stresses the need to govern and regula…
S42
WS #98 Towards a global, risk-adaptive AI governance framework — Melinda Claybaugh: Great. Thank you so much. Just a little bit of context to explain Meta’s, to explain my company’s …
S43
Building Trustworthy AI Foundations and Practical Pathways — “Now for the first time, instead of just having general hardware, that is one machine that can run all software, we have…
S44
Open Internet Inclusive AI Unlocking Innovation for All — So one of two things happens. One is, well, the Internet just dies. But that’s not going to happen because the AI compan…
S45
Knowledge in the Age of AI: World Economic Forum Town Hall Discussion — Both speakers, despite representing different business models, agree on the need to move away from generic, large-scale …
S46
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — The speaker describes AI as a technology that expands human cognitive capacity, likening its impact to the physical ampl…
S47
morning session — Risk assessment considers the likelihood of an event occurring and the severity of its consequences. Understanding these…
S48
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — – Assessment of severity and likelihood of human rights risks – Scoring risks based on severity and likelihood Chris M…
S49
Artificial Intelligence & Emerging Tech — Jörn Erbguth:Well, the approach the EU takes is a risk-based approach, meaning regulate partially where there’s high ris…
S50
Advancing Scientific AI with Safety Ethics and Responsibility — And I believe this is true for many other countries in the global south as well. So it’s not something very unique. Part…
S51
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for the…
S52
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — However, Ifayemi noted that even developed countries face access challenges, with the UK’s Department of Science, Innova…
S53
Artificial intelligence (AI) – UN Security Council — During the 9821st meeting of the Security Council, the discussions centered around the concept of accidental risks associa…
S54
Interim Report: — 25. We examined AI risks firstly from the perspective of technical characteristics of AI. Then we looked at risks throug…
S55
Free Science at Risk? / Davos 2025 — There’s a need to balance open science with security concerns, but overly restrictive policies can hinder innovation
S56
Panel Discussion Inclusion Innovation & the Future of AI — The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s poin…
S57
AI Development Beyond Scaling: Panel Discussion Report — The tone began as optimistic and technically focused, with researchers enthusiastically presenting their innovative appr…
S58
https://dig.watch/event/india-ai-impact-summit-2026/the-innovation-beneath-ai-the-us-india-partnership-powering-the-ai-era — Yeah, thank you very much for the question. Thank you so much for having me here. It’s great. And I would like to build …
S59
Birth of Charles Bonnet — Machines could be made to imitate human intelligence.
S60
Folding Science / DAVOS 2025 — Demis Hassabis: Well, the reason that we and my co-founder, Shane Legg, our chief scientist, are coining the term art…
S61
Day 0 Event #183 What Mature Organizations Do Differently for AI Success — Dr. Alomair presented a timeline of AI development from 1950 to the present. She emphasized key milestones such as Alan …
S62
Introducing Gemini, Google’s response to ChatGPT — Google’s Alphabet introduces Gemini, its state-of-the-art AI model adept at handling various data formats such as video, …
S63
Gemini 2.5 Pro tops AI coding tests, surpasses ChatGPT and Claude — Google has released an updated version of its Gemini 2.5 Pro model, addressing issues found in earlier updates. Unlike the…
S64
ChatGPT and the rising pressure to commercialise AI in 2026 — The moment many have anticipated with interest or concern has arrived. On 16 January, OpenAI announced the global rollou…
S65
Thinking through Augmentation — The analysis reveals concerns and arguments raised by Francine Lacqua and Azeem Azhar regarding the rapid progress of te…
S66
Re-envisioning DCAD for the Future — Most solutions are technological add-ons or band-aids
S67
Saturday Opening Ceremony: Summit of the Future Action Days — Guterres advocates for reforming international financial institutions to better support sustainable development and clim…
S68
Most transformative decade begins as Kurzweil’s AI vision unfolds — AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translati…
S69
Anthropic CEO warns of mass job losses from AI — Just one week after releasing its most advanced AI models to date — Opus 4 and Sonnet 4 — Anthropic CEO Dario Amodei warne…
S70
WS #139 Internet Resilience Securing a Stronger Supply Chain — Olaf Kolkman from the Internet Society illustrated these complexities with concrete examples. His most memorable anecdot…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
A
Alok
2 arguments · 207 words per minute · 2475 words · 715 seconds
Argument 1
AI will replace specialised software (e.g., Excel, PowerPoint) with a single “general software”, wiping out whole industries such as web‑design, novel writing and film production (Alok)
EXPLANATION
Alok argues that the emergence of general‑purpose AI software will make task‑specific applications obsolete, allowing a single system to perform the functions of many specialised tools. This shift will disrupt entire sectors that currently rely on niche software, from web‑design agencies to novelists and movie producers.
EVIDENCE
He explains the historical move from specialised hardware to general hardware and now to general software, noting that one machine can run both Excel and PowerPoint by swapping software, and that AI aims to replace both with a single interface [41-48]. He then lists concrete industries that could disappear, citing web-design companies in India, novel-writing, and the film industry, illustrating how AI-generated content could replace human creators [62-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The claim that AI will consolidate tools like Excel and PowerPoint into a single general-purpose interface and that sectors such as novel writing and film production could be displaced is directly discussed in the trustworthy AI report, which notes the shift from specialised apps to “general software” and cites the potential disappearance of novel-writing and movie-industry economics [S2]; the broader business transformation perspective on moving from GUI-heavy workflows to natural-language interactions further supports this trend [S9].
MAJOR DISCUSSION POINT
Economic disruption caused by general‑purpose AI
AGREED WITH
Devayan, Anirban
DISAGREED WITH
Devayan, Anirban
Argument 2
Ad‑driven websites lose traffic because users obtain answers directly from AI models, threatening the ad‑revenue model that sustains many online services (Alok)
EXPLANATION
Alok points out that many websites rely on advertising revenue generated from user visits, but AI assistants now provide information without requiring users to browse those sites. This loss of traffic undermines the financial model of countless online platforms.
EVIDENCE
He describes the traditional ad-supported model where visitors see ads on recipe sites, then notes that users will increasingly ask AI systems like ChatGPT or Gemini for answers, bypassing the site entirely [82-90]. He provides a statistic showing click-through rates dropping from one-in-six to one-in-seven, indicating a severe decline in traffic [95-100]. He also mentions open-source tools such as Tailwind losing engineers because developers no longer need to visit libraries for code snippets [101-103].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Evidence that AI-driven answers reduce traffic to ad-supported sites appears in the same AI foundations document, which mentions websites losing visitors as users turn to AI assistants, and is corroborated by a separate analysis of publishers experiencing sharp traffic drops as readers rely on AI summaries [S2][S10].
MAJOR DISCUSSION POINT
Threat to ad‑based internet revenue
D
Devayan
1 argument · 202 words per minute · 930 words · 276 seconds
Argument 1
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
EXPLANATION
Devayan proposes a concrete definition of risk that combines the probability of an adverse event occurring with the seriousness of its consequences. He stresses that both dimensions must be measured to evaluate AI safety effectively.
EVIDENCE
He states that risk is “the probability of an undesirable outcome characterized by two things: its likelihood and its severity” and illustrates the concept with an airplane safety example, explaining that low likelihood makes air travel acceptable despite high severity [170-179]. He further emphasizes that risk definitions must be contextualised to specific domains such as education or healthcare [184-186].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The definition of risk as a product of likelihood and severity is explicitly provided in the trustworthy AI foundations source, using an airplane safety analogy, and is reinforced by a risk-assessment session that stresses measuring both dimensions for effective AI risk management [S2][S11].
MAJOR DISCUSSION POINT
Defining AI risk metrics
AGREED WITH
Anirban
DISAGREED WITH
Alok, Anirban
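Devayan’s likelihood-and-severity definition lends itself to a simple scoring function. The sketch below is an illustrative reconstruction, not code presented in the session; the numeric scales and example values are assumptions.

```python
# Minimal sketch of the likelihood x severity risk definition Devayan describes.
# Scales are assumed: likelihood is a probability in [0, 1], severity is 0-10.

def risk_score(likelihood: float, severity: float) -> float:
    """Expected risk as the product of likelihood and severity."""
    if not 0.0 <= likelihood <= 1.0:
        raise ValueError("likelihood must be a probability in [0, 1]")
    return likelihood * severity

# Airplane analogy from the discussion: a very low likelihood keeps overall
# risk acceptable even when the severity of a failure is high.
air_travel = risk_score(likelihood=1e-7, severity=10.0)
frequent_minor_error = risk_score(likelihood=0.05, severity=1.0)
print(air_travel < frequent_minor_error)  # True
```

Contextualising the metric, as Devayan urges, would mean choosing domain-specific severity scales (e.g., for education versus healthcare) rather than the generic ones assumed here.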
A
Anirban
2 arguments · 196 words per minute · 1615 words · 492 seconds
Argument 1
ASTRA is an India‑focused AI safety risk database that classifies risks into social (e.g., linguistic bias) and frontier (e.g., power‑seeking, rogue behaviour) categories, and maps them to development, deployment and usage stages and intent (Anirban)
EXPLANATION
Anirban describes ASTRA as a risk‑catalogue tailored to Indian contexts, separating risks that are observable (social) from those that are hard to detect (frontier). The database also records the phase of the AI lifecycle where the risk appears and whether it is intentional or accidental.
EVIDENCE
He explains that ASTRA is a “risk database … contextualized in the Indian context” and outlines its two main categories-social risks such as linguistic bias where English-trained models perform poorly in Hindi, and frontier risks like power-seeking rogue AI systems that act without consent, citing a trading-firm incident as an example [211-254]. He also gives an infrastructure-exclusion scenario where poor internet connectivity in rural India prevents an AI-driven app from functioning, illustrating a deployment-stage risk [255-270].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ASTRA taxonomy, its social versus frontier risk categories, and its mapping to lifecycle stages and intent are described in detail in the AI foundations report, which highlights linguistic bias in Hindi and frontier risks such as power-seeking AI behavior [S2].
MAJOR DISCUSSION POINT
India‑specific AI safety taxonomy
AGREED WITH
Devayan
DISAGREED WITH
Alok, Devayan
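The ASTRA structure Anirban outlines, risk categories crossed with lifecycle stages and intent, can be sketched as a small catalogue of typed records. This is a hypothetical illustration of the taxonomy’s shape; the field names and example entries are assumptions drawn from the examples cited above, not ASTRA’s actual schema.

```python
# Hypothetical sketch of an ASTRA-style risk entry: category (social vs
# frontier), lifecycle stage, and intent. Field names are assumed.
from dataclasses import dataclass
from typing import Literal

Category = Literal["social", "frontier"]
Stage = Literal["development", "deployment", "usage"]
Intent = Literal["intentional", "accidental"]

@dataclass(frozen=True)
class RiskEntry:
    name: str
    category: Category
    stage: Stage
    intent: Intent

# Example records based on the cases mentioned in the discussion.
database = [
    RiskEntry("linguistic bias in Hindi outputs", "social", "development", "accidental"),
    RiskEntry("rogue trading agent acting without consent", "frontier", "usage", "accidental"),
    RiskEntry("rural connectivity excludes users", "social", "deployment", "accidental"),
]

def by_category(entries: list[RiskEntry], category: Category) -> list[RiskEntry]:
    """Filter the catalogue to one risk category."""
    return [e for e in entries if e.category == category]

print(len(by_category(database, "social")))  # 2
```

Mapping each risk to stage and intent in this way is what lets a taxonomy support the sector-by-sector expansion (e.g., into agriculture) that Anirban proposes.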
Argument 2
Mitigating these risks is highly challenging: measures are often context‑specific, can reduce system utility, and must be empirically grounded to be effective (Anirban)
EXPLANATION
Anirban argues that while a risk database is a first step, actually reducing those risks is difficult because mitigation strategies may only work in certain settings and can compromise the usefulness of the AI system. He calls for data‑driven, context‑aware approaches to mitigation.
EVIDENCE
He notes that mitigation is “an extremely challenging task” because measures are “very context specific” and can lead to loss of utility for users, stressing the need for careful balance [281-289]. He concludes that future work must empirically quantify risk probabilities and expand the taxonomy to more sectors such as agriculture [290-294].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same source notes that mitigation strategies are highly context-specific, may diminish system utility, and require empirical grounding, underscoring the difficulty of effective risk reduction [S2].
MAJOR DISCUSSION POINT
Challenges of AI risk mitigation
AGREED WITH
Alok, Devayan
DISAGREED WITH
Alok, Devayan
Agreements
Agreement Points
All speakers stress the necessity of careful, responsible AI deployment, emphasizing alignment, trustworthiness and the difficulty of mitigation.
Speakers: Alok, Devayan, Anirban
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
Mitigating these risks is highly challenging: measures are often context‑specific, can reduce system utility, and must be empirically grounded to be effective (Anirban)
Alok warns that we must be exceedingly careful when deploying AI [108-110]; Devayan frames alignment as matching system behaviour to user expectations and defines risk in terms of likelihood and severity [149-176]; Anirban describes mitigation as extremely challenging, context-specific and potentially harmful to utility [281-289]. Together they converge on the view that AI must be rolled out responsibly with robust risk assessment and mitigation strategies.
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus aligns with the OECD Principles for Trustworthy AI and the EU AI Act’s focus on safety and alignment, and was reiterated in the AI Agents responsible-deployment panel where speakers emphasized balancing innovation with protection [S27][S34][S25].
Risk assessment should combine likelihood and severity and be contextualised to the deployment environment.
Speakers: Devayan, Anirban
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
ASTRA is an India‑focused AI safety risk database that classifies risks into social (e.g., linguistic bias) and frontier (e.g., power‑seeking, rogue behaviour) categories, and maps them to development, deployment and usage stages and intent (Anirban)
Devayan explicitly defines risk as a function of likelihood and severity [170-176]; Anirban’s ASTRA taxonomy operationalises this by categorising risks, linking them to lifecycle stages and intent, and grounding them in the Indian context [241-254][250-254]. Both agree that risk must be measured and contextualised.
POLICY CONTEXT (KNOWLEDGE BASE)
The likelihood × severity formulation is a standard risk-management approach and was explicitly highlighted in the IGF risk-assessment session, as well as in India’s risk-based AI policy for the banking sector that stresses contextualisation [S40][S32].
AI risk and impact must be understood in the specific Indian context.
Speakers: Alok, Devayan, Anirban
AI will replace specialised software (e.g., Excel, PowerPoint) with a single “general software”, wiping out whole industries such as web‑design, novel writing and film production (Alok)
Risks and harms would mean different things in different contexts… education, healthcare, etc. (Devayan)
ASTRA is an India‑focused AI safety risk database… contextualised in the Indian context heavily (Anirban)
Alok cites Indian-centric industry disruption (web-design agencies) [63-66]; Devayan stresses that risk definitions must be grounded in deployment contexts such as education and healthcare [186]; Anirban describes ASTRA as a risk database built specifically for India’s linguistic, infrastructural and socio-technical realities [225-226]. All three converge on the need for India-specific analysis.
POLICY CONTEXT (KNOWLEDGE BASE)
India’s heterogeneous AI ecosystem and its regulatory balance between experimentation and systemic risk oversight have been discussed in recent policy briefs and panels on AI governance, underscoring the need for locally-grounded risk analysis [S32][S35][S29].
Similar Viewpoints
Both see risk as a measurable construct that must be broken down into concrete categories, stages and intents, and both advocate for a structured taxonomy to support assessment and mitigation [170-176][241-254][250-254].
Speakers: Devayan, Anirban
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
ASTRA is an India‑focused AI safety risk database that classifies risks into social (e.g., linguistic bias) and frontier (e.g., power‑seeking, rogue behaviour) categories, and maps them to development, deployment and usage stages and intent (Anirban)
Both acknowledge that AI systems will act on user instructions and that mis‑alignment can have wide‑scale economic consequences; therefore alignment (trustworthiness) is a prerequisite for safe deployment [110-111][149-152].
Speakers: Alok, Devayan
AI will replace specialised software … wiping out whole industries … (Alok)
what alignment is… we want the machine to basically align with my expectations (Devayan)
Unexpected Consensus
Economic disruption of existing digital business models and the simultaneous need for mitigation.
Speakers: Alok, Anirban
AI will replace specialised software … wiping out whole industries … (Alok)
Mitigating these risks is highly challenging: measures are often context‑specific, can reduce system utility, and must be empirically grounded (Anirban)
Alok focuses on the macro-economic fallout (loss of web-design firms, ad revenue) while Anirban concentrates on the micro-level challenge of mitigating those very risks. The convergence of a macro-economic warning with a micro-level mitigation challenge was not anticipated given their different focal points [62-80][281-289].
POLICY CONTEXT (KNOWLEDGE BASE)
Panels on AI for jobs and digital trade have highlighted the disruptive potential of platform-based business models and the necessity of mitigation strategies to safeguard livelihoods and market stability [S23][S38][S28].
Overall Assessment

The speakers largely converge on three pillars: (1) AI risk must be defined, measured and contextualised; (2) mitigation is intrinsically difficult and must be balanced against utility; (3) Indian‑specific factors (language, infrastructure, industry structure) shape both risk perception and impact. While Alok emphasizes economic disruption, Devayan and Anirban provide the methodological framework to address those disruptions.

High consensus on the need for structured, context‑aware risk assessment and cautious deployment, with moderate consensus on the economic implications. This suggests that future discussions and policy work should prioritize building India‑tailored risk taxonomies (like ASTRA) and develop mitigation guidelines that acknowledge both economic stakes and technical constraints.

Differences
Different Viewpoints
Different framing of AI risk – macro‑economic disruption vs. formal risk metrics vs. contextual taxonomy
Speakers: Alok, Devayan, Anirban
AI will replace specialised software (e.g., Excel, PowerPoint) with a single “general software”, wiping out whole industries such as web‑design, novel writing and film production (Alok)
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
ASTRA is an India‑focused AI safety risk database that classifies risks into social (e.g., linguistic bias) and frontier (e.g., power‑seeking, rogue behaviour) categories, and maps them to development, deployment and usage stages and intent (Anirban)
Alok frames AI risk primarily as a massive economic upheaval, arguing that a single general-purpose AI will make specialised tools and whole sectors obsolete (e.g., web-design, novel writing, film) [41-48][62-80][82-90]. Devayan argues that risk must be quantified by likelihood and severity, using an airplane safety analogy to illustrate the need for clear metrics [170-179]. Anirban proposes a structured, India-specific taxonomy (social vs. frontier risks) that situates risks within the AI lifecycle and intent [211-254][255-270]. Thus the speakers disagree on what the core AI risk is and how it should be conceptualised.
POLICY CONTEXT (KNOWLEDGE BASE)
Divergent framings of AI risk were observed in the Policy Network on AI session and in scholarly debates that contrast macro-economic impact narratives with metric-driven risk taxonomies, especially for Global South contexts [S24][S33][S29].
How to mitigate AI risks – cautionary band‑aids vs. metric‑driven mitigation vs. context‑specific, utility‑preserving mitigation
Speakers: Alok, Devayan, Anirban
Now we can’t run life on band‑aids. … We have to be exceedingly careful about that (Alok)
We need a clear way to define risk … and then we can manage it (Devayan)
Mitigating these risks is highly challenging: measures are often context‑specific, can reduce system utility, and must be empirically grounded to be effective (Anirban)
Alok warns that current fixes are merely band-aids and calls for extreme caution but does not outline concrete mitigation steps [8-10][108-110]. Devayan stresses the need for a clear, quantitative definition of risk as a prerequisite for any mitigation strategy [151-156]. Anirban highlights that mitigation is extremely challenging, often context-specific, and may compromise utility, requiring empirical grounding [281-289]. The speakers therefore disagree on the appropriate mitigation approach.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between quick-fix band-aids and systematic, utility-preserving mitigation appears in discussions on protective isolation versus industry-wide reform, and is reflected in rights-based AI risk-mitigation toolkits [S31][S41][S26][S33].
Assessment of current AI safety efforts – superficial fixes vs. systematic risk database
Speakers: Alok, Anirban, Devayan
They haven’t fixed the underlying problem. They saw some presentations … so they’ve just put a band‑aid on top. Now we can’t run life on band‑aids. (Alok)
ASTRA is a risk database … contextualized in the Indian context … This is a first step … (Anirban)
One definition that we’ve chosen is that the probability of an undesirable outcome … (Devayan)
Alok claims that industry responses are merely superficial band-aids that do not address the root cause of AI errors [7-9]. In contrast, Anirban presents ASTRA as a systematic, India-focused risk database built over six months, indicating substantive progress toward addressing underlying risks [211-224][236-244]. Devayan also emphasizes the need for a clear risk definition as a foundation for safety work [170-176]. This reflects a disagreement on whether current efforts are merely band-aids or constitute meaningful advancement.
POLICY CONTEXT (KNOWLEDGE BASE)
Critiques that current safety measures are ad-hoc and call for a comprehensive risk database were voiced in the “principles to practice” AI governance panel, echoing broader concerns about superficial fixes [S33][S25].
Unexpected Differences
Economic collapse vs. safety‑focused discourse
Speakers: Alok, Devayan, Anirban
AI will replace specialised software … wiping out whole industries such as web‑design, novel writing and film production (Alok)
Risk should be defined as the combination of likelihood and severity … (Devayan)
ASTRA is an India‑focused AI safety risk database … (Anirban)
Alok predicts that AI will cause the disappearance of entire economic sectors (web-design, novel writing, film) [62-80], a claim not addressed or contested by the other speakers, whose contributions focus on risk definition, taxonomy, and mitigation rather than macro-economic outcomes. The lack of engagement with this economic argument constitutes an unexpected area of disagreement.
Overall Assessment

The discussion reveals moderate disagreement among the speakers. Alok concentrates on the broad socio‑economic disruption caused by general‑purpose AI and criticises current superficial fixes. Devayan pushes for a formal, quantitative definition of risk (likelihood + severity) as the foundation for safety work. Anirban presents a detailed, India‑specific risk taxonomy (social vs. frontier) and stresses the difficulty of mitigation. While all share the goal of safe AI deployment, they diverge on risk framing, measurement, and mitigation strategies, and Alok’s macro‑economic concerns are not directly addressed by the others, creating an unexpected tension.

Moderate to high disagreement on conceptualisation and mitigation of AI risks, with implications that a unified policy response will need to reconcile macro‑economic impact concerns with technical risk metrics and context‑specific mitigation approaches.

Partial Agreements
Both speakers share the overarching goal of improving AI safety through systematic risk assessment. Devayan emphasizes a quantitative definition of risk (likelihood + severity) as the basis for measurement [170-179], while Anirban focuses on building a contextual taxonomy (social vs. frontier) and mapping risks to lifecycle stages [211-254][255-270]. They agree on the need for structured risk work but differ on the primary method—metric‑driven definition versus contextual taxonomy.
Speakers: Devayan, Anirban
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, requiring clear metrics to assess AI safety (Devayan)
ASTRA is an India‑focused AI safety risk database that classifies risks into social … and frontier … categories, and maps them to development, deployment and usage stages and intent (Anirban)
Takeaways
Key takeaways
General‑purpose AI (a single “general software”) is poised to replace many specialised software tools, potentially collapsing entire industries such as web‑design, novel writing, film production and ad‑driven content sites.
The shift to AI‑generated answers threatens the ad‑revenue model of many websites because users will obtain information directly from models instead of visiting the sites.
Alignment of AI systems with human intent is critical; natural‑language ambiguity can cause literal, harmful fulfillment of dangerous requests.
Risk should be defined as the combination of likelihood and severity of an undesirable outcome, and must be measured in the specific context of deployment.
The India‑focused ASTRA database provides a taxonomy of AI safety risks, distinguishing social risks (e.g., linguistic bias) from frontier risks (e.g., power‑seeking, rogue behaviour), and maps risks to development, deployment, and usage stages as well as intent.
Mitigating AI risks is highly challenging: mitigation measures are often context‑specific, can diminish utility, and need empirical grounding.
Resolutions and action items
Launch of the ASTRA AI safety risk database (in partnership with AICSTEP Foundation).
Plan to expand ASTRA beyond education and financial lending to sectors such as agriculture and others.
Commitment to empirically quantify the probability and severity of identified risks.
Ongoing work to develop and test mitigation strategies that balance safety with system utility.
Unresolved issues
How to effectively mitigate risks without significantly reducing AI utility.
Concrete metrics and methodologies for measuring likelihood and severity of AI‑related harms.
Strategies to address the economic disruption caused by general‑purpose AI (e.g., transition pathways for affected industries).
Approaches to preserve the viability of ad‑driven web content in an AI‑first information landscape.
Handling infrastructure exclusion (e.g., poor connectivity) in AI deployments specific to Indian contexts.
Defining and enforcing alignment safeguards to prevent literal fulfillment of harmful user requests.
Suggested compromises
Adopt a cautious, context‑aware deployment approach that balances safety mitigations with preserving user utility.
Recognise that interim “band‑aid” fixes (e.g., patching specific failures) are insufficient; aim for deeper, systemic solutions while allowing incremental improvements.
Thought Provoking Comments
The transition from general hardware (one machine running many specialized programs) to general software (one AI system that can replace many specialized applications like Excel and PowerPoint) will cause a massive economic shift, collapsing entire industries that rely on software as a scarce resource.
Alok frames AI progress as a paradigm shift comparable to the invention of the general‑purpose computer, highlighting that the scarcity that once justified software businesses is disappearing. This macro‑level view connects technical evolution to real‑world economic disruption.
His statement pivoted the conversation from technical details to broader societal implications, prompting Devayan to raise the question of risk definition and leading the group to discuss safety, alignment, and the need for a risk taxonomy.
Speaker: Alok
Band‑aids are like students memorising answers for exams – a superficial fix that doesn’t lead to real learning. Companies are applying band‑aids to AI problems without solving the underlying issue.
The metaphor critiques the industry’s tendency to patch AI shortcomings (e.g., hallucinations) rather than addressing root causes, urging deeper technical rigor.
This critique set a skeptical tone that influenced subsequent speakers to stress the importance of trustworthy, correct AI, and it underpinned Devayan’s concern about defining and managing risk.
Speaker: Alok
The ad‑driven web economy is collapsing because users will get answers directly from models like ChatGPT, eliminating traffic to content sites and even harming open‑source ecosystems that feed those models.
He connects AI’s information‑access capability to a concrete, immediate economic threat, illustrating how a technological advance can disrupt existing business models and infrastructure.
This concrete example sharpened the discussion on downstream effects of AI, leading participants to consider not just technical risk but also systemic economic risk, which Devayan later framed as part of the broader risk taxonomy.
Speaker: Alok
Natural language is inherently ambiguous; we built programming languages to disambiguate. Replacing them with plain‑English prompts creates a dangerous alignment problem where the model may ‘technically’ satisfy a request in harmful ways.
He links linguistic ambiguity to alignment failures, using the classic ‘genie’ story to illustrate how AI could fulfill literal requests with unintended consequences.
This insight deepened the conversation about alignment, prompting Devayan to explicitly ask “what is alignment?” and to frame the risk of AI doing “bad things,” steering the dialogue toward safety definitions.
Speaker: Alok
Alignment means making the system behave according to our expectations, but we lack clear ways to define the risk of it doing something bad. How do we quantify that risk?
Devayan crystallises the abstract alignment concern into a concrete problem: risk quantification. This question bridges Alok’s high‑level concerns with the need for actionable frameworks.
His query acted as a turning point, shifting the focus from philosophical concerns to practical risk assessment, which opened the floor for Anirban’s presentation of the ASTRA taxonomy.
Speaker: Devayan
One formula fits all does not work in AI safety; we need a contextualised risk taxonomy for India that captures social risks (e.g., linguistic bias) and frontier risks (e.g., power‑seeking AI going rogue).
Anirban introduces the idea that risk frameworks must be locally grounded, distinguishing between observable social risks and hard‑to‑measure frontier risks, thereby expanding the scope of the discussion.
His taxonomy reframed the conversation from generic risk definitions to a structured, context‑specific approach, leading participants to consider sector‑specific examples and the challenges of mitigation.
Speaker: Anirban
Example of an AI trading system that went rogue, executing massive loss-making transactions without consent – a concrete illustration of a frontier, power-seeking risk.
Provides a vivid, real‑world case that makes the abstract notion of ‘frontier risk’ tangible, highlighting the seriousness of unchecked autonomous agents.
The example reinforced the need for robust monitoring and risk controls, prompting discussion about mitigation trade‑offs and the difficulty of balancing safety with utility.
Speaker: Anirban
Mitigation measures are often context‑specific and can reduce utility; a strong mitigation that kills usefulness is not a good solution.
He highlights the practical tension between safety and usability, reminding the group that risk management cannot be pursued in isolation from user experience.
This comment added nuance to the earlier optimism about risk frameworks, steering the dialogue toward realistic implementation challenges and influencing the concluding remarks about future work on empirical grounding of risks.
Speaker: Anirban
Overall Assessment

The discussion evolved from Alok’s sweeping, historically grounded analogy of general hardware versus emerging general software, through his vivid warnings about economic disruption and linguistic ambiguity, to Devayan’s pinpointed question on alignment and risk quantification. These catalyst comments shifted the tone from speculative to problem‑oriented, prompting Anirban to introduce a concrete, India‑centric risk taxonomy that distinguished social from frontier risks and underscored mitigation challenges. Collectively, the highlighted remarks steered the conversation toward a nuanced understanding that AI’s transformative potential brings systemic economic, social, and safety risks, demanding context‑aware frameworks and careful trade‑offs between safety and utility.

Follow-up Questions
How do we define the risk of an AI system getting into a bad thing and doing it?
Need a clear, operational definition of AI risk specific to the context being discussed.
Speaker: Devayan
Do we have a clear way to define risk in our context?
Seeks a systematic framework for risk identification and assessment within their domain.
Speaker: Devayan
What is alignment?
Clarifies the concept of AI alignment with human expectations, a foundational issue for safety.
Speaker: Devayan
What is the danger of using natural‑language interfaces for AI?
Explores the risks arising from ambiguous human instructions and the potential for unintended harmful behavior.
Speaker: Alok
Why don’t we need two separate computers for Excel and PowerPoint? How does general hardware enable this?
Seeks historical and technical insight into the shift from specialized to general‑purpose computing, informing the analogy to general‑purpose software.
Speaker: Alok

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit


Session at a glance: Summary, keypoints, and speakers overview

Summary

Naveen Tewari opened the keynote by stating that the discussion would focus on how commerce is being reshaped by artificial intelligence (AI) [1][2-4]. He argued that AI will extend human lifespan to around 120 years through disease eradication and organ creation [6-9]. He further claimed that AI will democratize skills, turning many people into high-quality coders and thereby reducing inequality [10-13]. A third effect he highlighted is a disproportionate boost to economic prosperity due to massive productivity gains [13].


Tewari introduced the concept of “agentic commerce,” which he says will embed intelligence directly into shopping, supply chains and manufacturing, and presented Glance as the platform that delivers this vision [15-21][24-33]. He explained that Glance will generate personal product feeds by training a separate commerce model for each consumer, eventually scaling to a billion users [34-36][38-43]. The system combines a commerce intelligence graph, a generative AI experience model that produces visual outputs, and a user-level model that tailors recommendations to individual preferences [44-50][51-53]. A “living context graph” will continuously infer a shopper’s situation, price sensitivity and brand preferences to optimise purchase pathways across billions of options [55-58].


Transparency is central to the approach: the reasoning engine will be open so users can understand why specific products are shown, fostering trust and accountability [62-66]. Tewari emphasized that agentic commerce can cut wasteful consumer spending, creating a savings-driven economic flywheel that could expand the market size dramatically [67-69]. He noted that commerce accounts for roughly 25% of global GDP and projected that AI-enabled commerce in India could generate about $3 trillion by 2047 [88-91].


Positioning Glance as an Indian-built, globally focused venture, he reiterated the company’s commitment to truthful, authentic agents and called for bold, audacious innovation [94-100][102-108][121-122]. The session concluded with a brief thank-you from the host and an invitation to the next speaker, Mr. Vivek Mahajan of Fujitsu [123].


Keypoints

AI will fundamentally reshape commerce and society – Tewari argues that AI will extend human lifespan, equalize skills, and drive unprecedented productivity, leading to a new commercial architecture ([6-13][14-18]).


“Agentic commerce” and the Glance platform – He introduces the concept of agentic commerce, where AI creates real-time, personalized product feeds for each consumer using multiple models (commerce intelligence graph, generative experience model, and individual user model) ([22-36][37-50]).


Transparency, accountability, and authenticity – A “living context graph” will make the reasoning behind recommendations visible, ensuring the system is transparent and trustworthy, which Tewari frames as essential for a human-centric future ([55-66][94-100]).


Massive economic impact and supply-chain transformation – By embedding AI at the consumer, supply-chain, and manufacturing levels, marketplaces may weaken while individual brands rise, potentially adding $3 trillion to India’s GDP by 2047 ([70-78][88-91]).


A rallying call for audacious, India-led innovation – The speech ends with a motivational appeal to seize the AI moment, build global platforms from India, and pursue an “audacious” vision for the next decade ([102-110][108-112]).


Overall purpose/goal:


The keynote is designed to persuade the audience that AI-driven “agentic commerce” is the next frontier, showcase Glance as the pioneering platform, highlight the huge economic upside, and inspire stakeholders to join an ambitious, India-originated push to dominate the global AI-commerce landscape.


Overall tone:


The tone is consistently high-energy, optimistic, and visionary, moving from an explanatory style about AI’s societal benefits to a more rallying, motivational cadence that urges bold action and celebrates Indian innovation, especially in the final minutes. No major tonal shift occurs; enthusiasm simply intensifies toward the conclusion.


Speakers

Naveen Tewari


Area of Expertise: Artificial Intelligence, Commerce, Product Innovation, Entrepreneurship


Role/Title: Founder & CEO, InMobi; Speaker at AI Impact Summit (presenting on agentic commerce)[S1][S2]


Speaker 2


Area of Expertise: (not specified)


Role/Title: Event Moderator/Host (introduced Naveen Tewari’s keynote and invited the next speaker)[S3][S4][S5]


Additional speakers:


Vivek Mahajan


Area of Expertise: (not specified)


Role/Title: CTO, Fujitsu (invited to deliver the next keynote)


Full session report: Comprehensive analysis and detailed insights

Naveen Tewari opened the keynote by stating that his talk would explore how artificial intelligence (AI) is poised to remodel global commerce, noting that the internet and AI are already redefining existing paradigms [1-4]. He then described three sweeping societal shifts that AI could trigger: (1) breakthroughs in disease eradication and organ engineering that could push average human life expectancy toward 120 years [6-9]; (2) AI-driven tools that will democratise high-skill capabilities, turning most engineers into “super-high-quality coders” and reducing skill-based inequality [10-13]; and (3) a surge in productivity that will disproportionately boost economic prosperity and reshape the commercial architecture of the future [14-18].


Introducing the concept of agentic commerce, Tewari explained it as a model where intelligence is embedded directly into the shopping journey, supply chains and manufacturing. He contrasted “personalised feeds” with “personal feeds” that centre the individual consumer, and presented Glance – the platform his company has built – as the vehicle for delivering real-time, AI-curated product streams worldwide [15-33][34-36]. The ambition, he said, is to train a distinct commerce model for a billion users over the coming years, infusing every purchase decision with personalised intelligence [34-36].


The technical backbone of agentic commerce consists of three interlocking models. A Commerce Intelligence Graph serves as a universal knowledge base of every commerce element [38-41]; a Generative AI Experience Model creates visual outputs such as personalised pamphlets or feeds, moving beyond the text-only answers of conventional engines [43-48]; and a user-level model is trained on each individual’s behaviour to ensure the generated experience is truly personal [49-50]. Overseeing these components is a living context graph that continuously infers a shopper’s current context, price and brand sensitivity, enabling the system to optimise purchase pathways across billions of possible routes with a single click [55-60].
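The three interlocking models described above can be pictured with a minimal, purely illustrative sketch. Every class and function name below is invented for illustration; Glance’s actual implementation is not public, and this toy version stands in for the real knowledge graph, user model, and generative experience layer with a simple attribute-lookup, preference-scoring, and ranking step.

```python
# Hypothetical sketch of the three-model "agentic commerce" pipeline
# described in the keynote. All names are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CommerceIntelligenceGraph:
    # Stand-in for the universal knowledge base: product ID -> attributes.
    products: dict = field(default_factory=dict)

    def lookup(self, product_id: str) -> dict:
        return self.products.get(product_id, {})

@dataclass
class UserModel:
    # Stand-in for the per-user model: inferred preference weights.
    user_id: str
    preferences: dict = field(default_factory=dict)

    def score(self, product: dict) -> float:
        # Toy scoring: sum preference weights over matching attribute values.
        return sum(self.preferences.get(v, 0.0) for v in product.values())

def generate_feed(graph: CommerceIntelligenceGraph,
                  user: UserModel,
                  candidate_ids: list,
                  top_k: int = 3) -> list:
    """Stand-in for the generative experience model: rank candidates per user."""
    ranked = sorted(candidate_ids,
                    key=lambda pid: user.score(graph.lookup(pid)),
                    reverse=True)
    return ranked[:top_k]
```

In a real system each component would be a learned model rather than a dictionary lookup, but the data flow is the same: world knowledge feeds a per-user scorer, whose output drives the personally generated feed.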


Transparency and accountability were positioned as core ethical pillars. Tewari pledged that the reasoning engine behind product recommendations will be transparent, allowing users to see why a particular item was shown to them, thereby fostering trust [62-66]. He linked this commitment to an Indian philosophical principle drawn from the Upanishads that exhorts truthfulness, arguing that authentic, transparent agents are essential for a human-centred AI future [94-100].


On the macro-economic front, Tewari highlighted that commerce already accounts for roughly 25% of global GDP, a share mirrored in India [88-90]. He projected that AI-enabled, agentic commerce could add about $3 trillion to India’s economy by 2047, creating a virtuous “flywheel” of consumer-level efficiency savings that recycle back into the broader market and amplify economic growth [67-71][88-91].


The ripple effects on supply chains and manufacturing were also outlined. Extending agentic intelligence downstream would diminish the relevance of traditional marketplaces, which currently act as aggregators providing convenience, while empowering individual and local brands as agents locate them directly for consumers [73-78]. In manufacturing, precise, AI-derived consumer signals would enable “agentic manufacturing,” sharply increasing producer productivity through real-time, demand-driven adjustments [81-86].


Tewari also highlighted that InMobi was the first Indian unicorn and that the team “takes pride in building deep-tech and tackling a global problem” [X-Y]. He reiterated that Glance is being built in Bangalore, by an Indian team, for the world [Z-AA]. He admitted that the company was a latecomer to the Internet era, but argued that “that’s not the case with AI” [AB-AC].


Closing the presentation, he framed the event as “all about audacity,” said he had returned to the working energy of his 20s, and urged the audience to “rise to the occasion” [AD-AE]. He then thanked the audience and handed the stage to the next speaker. The host thanked Tewari for his keynote and invited Mr Vivek Mahajan, CTO of Fujitsu, to deliver the next address, thereby maintaining the event’s flow [123].


Session transcript: Complete transcript of the session
Naveen Tewari

Truly speaking, what I will talk about today is how commerce is going to change in the world. Look, internet is, you know, AI is changing many things. It is redefining paradigms. What is so exciting about AI? Think about it, right? Think about, you know, how AI is going to expand lifespan. We all understand that, like, there is a very high probability that every one of us in this room would probably extend ourselves to 120 years because diseases would get, you know, eradicated very differently. Organs will get created differently, right? So there is a lifespan argument to be made. The second big argument to be made is, you know, we will live very differently because in the world, in the future, it is very hard to see inequality anymore.

You know, today there is an engineer who is very good at coding and then there is one who is not. That’s going to disappear. You know, by the time you get to the end of the day, you are going to be in a box. five years from now you might actually see every one of us across our country become super high quality coders and that’s just one example of a thing so you’re actually going to have you’re going to live very democratically very differently where you’re going to essentially see the skill equal the skill equality which would lead to a very different way of living for all of us and the third is is a very disproportionate rate of growth of economic prosperity because of all the factors that the level of productivity that gets added into the whole world you are going to see a very different level of productivity so yes AI is exciting and that’s why I’m pretty I presume all of you are here to to listen and to learn and to imbibe it what I would really talk about is how this how the world is truly shifting when it comes to commerce look the in in the world of commerce there is a completely new architecture also being written.

You know, when intelligence becomes democratic, it changes ecosystems. What does intelligence getting democratically involved in commerce mean? It means that it is going to impact how we shop, how supply chains work, how manufacturing works, how we think about every aspect of it. And so that is completely getting rewritten as we look at the world in the future. At InMobi, we were one of the first companies, we were actually the first company that became a unicorn. We take a lot of pride in that because we built a product company from India to the globe. We take a pride in it because we worked in, we work in deep tech. We now take pride in actually taking on a global problem.

We now take pride in actually looking at the world from how we can bring agentic commerce to the world. Now agentic commerce and our platform is called Glance. Glance is all about bringing agentic commerce in the world in a way that’s never been done before. We’re very proud of how rapidly bringing intelligence in that world is changing everything. You know if you think about the product that we have truly built, we are moving from a world of personalized you know feeds to personal feeds. What does a personal feed mean? Think about commerce. Commerce in the world has always been driven across you know what I may like. But today if you think about agentic commerce it is actually centered around you.

We built a platform that’s launched globally. Our model of agentic commerce is launched globally. And what you would see here is personal feeds of consumers getting created in real time with products on them. What you’re seeing is a single model gets trained on single consumers. We plan to train a commerce model for a billion people over the next several years. What it would do is it would bring intelligence into the journey of commerce for every one of us. That is a superlatively advanced way of thinking about commerce, and it is a superlatively advanced way of thinking about how every element of it would actually change. So if you think about this in a slightly more architectural manner, we have created multiple models that actually come together to essentially create this agentic experience.

We have what’s called the commerce intelligence graph. Think of it the knowledge graph. The fact that there has to be a model that needs to know everything about every commerce element in the world. The fact that this is a white shirt is a world knowledge. You have to understand it. Then you have what we have built is a generative AI experience model. In that model, if you see the example, we are effectively creating an output. If you look at all the answer engines today, the output is effectively a text output. But when you think about commerce, you have to think about a visual output. How do you create a personalized pamphlet or a feed that is just for you using intelligence?

That’s a generative experience that gets done. The generative experience, unlike in the answer engine, is specific to you. and that’s why the model has to be created at a user level, which is the third model where the user model gets trained on you as an individual. That training of the user model at an individual level is what differentiates the way one thinks about shopping from everything else out there. And so I feel very excited about what agentic commerce can do. The reason I feel very excited about it, if I could move this forward, I think it’s stuck. You know, you talk about agentic and then some of these smaller elements. Oh, that worked, it worked.

All right. One of the most important elements of the agentic commerce era is going to be a living context graph. A living commerce context graph, which basically understands your context, which context you are in, what are you looking for, what are you seeking in that moment. It also understands very different levels, different levels of price intelligence. think about today when you go for a when each one of us go out there and look to buy something, we think about buying and we search for ways in which we can buy it at the most efficient pricing you can actually put that into the model and it will find out the pathway for the most efficient buying for you, which is the purchase path optimization bases your context, bases your price sensitivity and the brand sensitivity and it can do this across millions and billions of potential pathways at the click of a button and it will do that for you at that level, that living context graph is very very powerful and our ability to navigate through that is what we are really trying to build the other thing about if you think about contextual agentic commerce you know, Prime Minister talked about the man of vision What is it?

It is about being transparent and accountable and human centric. Shopping has not been considered accountable. It is seen as some people selling you things. But with agentic commerce coming in, we have an opportunity to convert the whole experience of commerce to be very transparent and very accountable. What does that mean? It means that we would certainly be opening up our model and making it transparent. In the world of AI, the reasoning engine will become transparent so that everybody can understand why you were shown or recommended a certain product. And that understanding of why is what creates transparency. And therefore, one of our big ethos is to make this very, very transparent and lead with a very different perspective of, of building trust in the era of.

agentic commerce. We also think about the fact that the consumer intelligence as that rises, the consumer intelligence brings hordes of efficiency, hordes of efficiency because billions of people will be making intelligent decisions going forward. Today think about the amount of money wasted at a consumer level as you go down your commerce journey. If you use agentic commerce in your life going forward and Glance does that for you, the amount of intelligence brought into the decision -making leads to significant savings which basically come back into the economy and then that leads to a flywheel which is very very powerful and therefore if you think about this happening at billions of people level it creates a very different size of the market and that’s a very powerful thing to create.

Similarly, the supply chains will become agentic. You know if you think about this when you have the consumer experience becoming agentic, it basically transcends itself into the supply chain. What’s going to happen? It’s going to create the demise of the marketplaces. The marketplaces today are effectively an aggregation to give you a lot more comfort. The marketplaces will become weaker. What will be the rise of it? The rise of individual brands, the rise of local brands, the like of very specialized producers. They are going to come up because the agent will be able to go find them. And that’s great for our country where you have entrepreneurs sitting in every nook and corner. Not just this.

You think about manufacturing. It is going to essentially evolve itself and become agentic manufacturing. In this, think about this. Because you have agentic experience at the consumer level, your precision will be given, very different precision signals will be given out to the manufacturers. And therefore, the productivity of the manufacturer changes drastically. Again, think about starting from the consumer into the supply chain into manufacturing. And that’s a phenomenal change that’s going to happen. Let me just explain the scale of this. Commerce in the world is 25 % of the world’s GDP. Same as for India. If you think about the impact of agentic commerce just at the India level, that’s going to be of the order of $3 trillion in the next 20 years by 2047.

That’s what we are really seeking if you essentially bring AI intelligence in the world of commerce. And that’s what we at Glance are truly attempting to go after. Given this is happening in India, we have a saying which is as part of our Upanishads in Sanskrit, which is, What does it mean? In a very simplistic way, what it is truly trying to say, be truthful. Bring truth out. If you think about digital economy in the last several years, it has led to distortion. Social media has led us into a world which is not very good. They have played around with our mental abilities and have forced us to think about things in a wrong way.

But I think we have an opportunity to think about how agents can be authentic. Once we make an agent transparent, authenticity becomes part of it. And I think that’s how we need to lead the world very differently. And we have an opportunity, especially as a company which is coming out from India, we take that very, very seriously. So in short, we are bringing with glance, as in Mobi, we are bringing AI in commerce. We are very proud of the fact that we are building this from Bangalore. We are building this from India for the world. We truly want to impact every consumer on the planet and bring agentic commerce. Bring intelligence into commerce and impact the world’s supply chain.

This event is all about audacity. We have not had a more audacious plan in our history of 18 years of me running this company or founding the company. But this is what this event does to you, but this is what technologies like AI do to you. I think it is time for us to rise and think about every possible idea in a very audacious manner. And I think this is what AI does to you. I hope every one of you rise up to that occasion and think about it that way. We are very excited. We are kicked about what we are truly trying to go about doing it. I am back to my ways of working in my 20s.

The energy is very different. The excitement is very different. And certainly the world is right now back in your palms. If we all were living in the 19th, if we were all very active in mid-90s, we would have thought about Internet very differently. I think that is what we are doing. we were laggards in internet we came into the internet era about 10 years late big things were already built by then that’s not the case when it comes to AI and I think we have an opportunity not just in the sector of commerce but every possible sector to build global platforms so that’s what we’re going to aim for that’s what we’re going to try for thank you so much for being

Speaker 2

thank you Mr. Tiwari for the keynote address for the next keynote may I now invite Mr. Vivek Mahajan CTO of Fujitsu may I also request everybody to please settle down thank you

Related Resources: Knowledge base sources related to the discussion topics (15)
Factual Notes: Claims verified against the Diplo knowledge base (7)
Confirmed (high confidence)

“AI could push average human life expectancy toward 120 years.”

The knowledge base notes that Tewari outlined extending human lifespans to potentially 120 years through medical advances [S1].

Confirmed (high confidence)

“AI‑driven tools will democratise high‑skill capabilities, turning most engineers into “super‑high‑quality coders” and reducing skill‑based inequality.”

S1 reports that democratising high‑skill capabilities is one of the three major societal shifts Tewari described.

Additional Context (medium confidence)

“Breakthroughs in disease eradication and organ engineering could push average human life expectancy toward 120 years.”

S43 explains that AI technologies enable early disease detection, treatment planning and personalized medicine, which could dramatically reduce major diseases and support longer lifespans.

Confirmed (medium confidence)

“A living context graph continuously infers a shopper’s current context, price and brand sensitivity, enabling optimisation across billions of possible routes with a single click.”

S44 confirms the existence of a “living commerce context graph” that understands a shopper’s context, though it does not explicitly mention price or brand sensitivity.

Additional Context (medium confidence)

“Transparency and accountability are core ethical pillars; the reasoning engine behind product recommendations will be transparent.”

S46 discusses the need for technical protocols and standards to ensure trustworthy, transparent AI systems, adding nuance to the transparency pledge.

Additional Context (medium confidence)

“A user‑level model is trained on each individual’s behaviour to ensure the generated experience is truly personal.”

S45 highlights the importance of user‑centered AI that caters to individual needs and preferences, supporting the claim of personal models.

Additional Context (low confidence)

“The Generative AI Experience Model creates visual outputs such as personalised pamphlets or feeds, moving beyond text‑only answers.”

S48 mentions building shared platforms for personalization and ecosystems, providing background for generative AI visual personalization.

External Sources (49)
S1
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — No consensus analysis possible – single speaker presentation format with only procedural interjections from event modera…
S2
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — -Naveen Tiwari: Founder and CEO of Mobi (mentioned as “in Mobi” in the transcript). Area of expertise not detailed in th…
S3
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Speaker 1- Role/title not specified (appears to be a moderator/participant) -Speaker 2- Role/title not specified (appe…
S4
Policy Network on Artificial Intelligence | IGF 2023 — Moderator 2, Affiliation 2 Speaker 1, Affiliation 1 Speaker 2, Affiliation 2
S5
S6
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S7
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — The analysis of the provided statements highlights several key points from all speakers. One main argument is that digit…
S8
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — High level of consensus with significant implications for reframing how society approaches aging, disability, and human …
S9
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Augmentation vs. Automation Strategies Economic | Future of work Maximum productivity gains from AI require fundamenta…
S10
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Agentic Commerce and Payment Systems Ryan McInerney envisions a future where AI agents will be empowered to autonomousl…
S11
Donor Principles for the Digital Age: Turning Principles int | IGF 2023 Open Forum #157 — Transparency and accountability are highlighted as crucial aspects of businesses implementing human rights policies and …
S12
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — 4. **Transparency and Due Process**: Translucency in the regulatory creation and implementation is deemed essential for …
S13
Building Trust through Transparency — In conclusion, the discussion on consumer rights, transparency, technological advancements, and the need for enforcement…
S14
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today …
S15
The Global Power Shift India’s Rise in AI & Semiconductors — All right. Good afternoon, everyone. And I would like to extend a very warm welcome to each one of you for this session….
S16
Keynote-Mukesh Dhirubhai Ambani — This positioning grants India moral authority to lead inclusive AI development whilst framing the country’s technologica…
S17
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S18
Welcome Address — The speech emphasizes that with proper direction, ethical frameworks, and global cooperation, artificial intelligence ca…
S19
Keynote_ 2030 – The Rise of an AI Storytelling Civilization _ India AI Impact Summit — Speaker 2 formally welcomes the next presenter, thanks the current speaker for his remarks, and introduces Mr. Naveen Ti…
S20
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — -Speaker 1: Role appears to be event moderator/host (introducing speakers and managing the event flow) -Vivek Mahajan: …
S21
Building Trusted AI at Scale – Keynote Anne Bouverot — The tone is diplomatic, optimistic, and collaborative throughout. It begins with ceremonial courtesy and appreciation, m…
S22
Keynotes — A poignant comment from an attendee about “what’s next” emphasises the forward-looking nature of the event and its perti…
S23
Keynote-HE Emmanuel Macron — The transcript contains only President Emmanuel Macron’s speech at the AI Impact Summit, with a brief introduction by th…
S24
Ad Hoc Consultation: Wednesday 31st January, Afternoon session — Additionally, New Zealand supports Mexico’s recommendation to increase a threshold for policy or agreement enforcement t…
S25
Ad Hoc Consultation: Friday 2nd February, Afternoon session — Although specific details were not given, it is reasonable to surmise that this choice reflects a consensus on the neces…
S26
AI Innovation in India — No meaningful disagreements were present. This was a celebratory and supportive environment where speakers complemented …
S27
Keynote by Naveen Tewari Founder & CEO, inMobi India AI Impact Summit — The technical ambition underlying this vision is substantial: Tewari announced plans to “train a commerce model for a bi…
S28
Anthropic CEO highlights AI’s potential to transform society — In a lengthy blog post, Anthropic CEO Dario Amodei presented an optimistic vision for the future of AI, asserting that powe…
S29
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — High level of consensus with significant implications for reframing how society approaches aging, disability, and human …
S30
The evolving role of AI and its impact on human society — Elon Musk announced the formation of a brand new company that I am a part of, called xAI. The company is focused on expl…
S31
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Agentic Commerce and Payment Systems Ryan McInerney envisions a future where AI agents will be empowered to autonomousl…
S32
The Future of the Internet: Navigating the Transition to an Agentic Web — Consumer Rights, Privacy, and Data Ownership Consumer protection | Economic Current examples of personalized pricing w…
S33
Closing session — One notable aspect of the AI system used was its commitment to transparency, accountability, and openness. Recognising t…
S34
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — 4. **Transparency and Due Process**: Transparency in regulatory creation and implementation is deemed essential for …
S35
Donor Principles for the Digital Age: Turning Principles int | IGF 2023 Open Forum #157 — Transparency and accountability are highlighted as crucial aspects of businesses implementing human rights policies and …
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — K. Krithivasan, Salil Parekh, C. Vijayakumar. Future of Employment and Workforce Transformation. References: India beco…
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Ananya Birla Birla AI Labs — “India’s journey from a $4 trillion economy to a $40 trillion economy in the arc that stretches from where we are today …
S38
Welcome Address — The speech emphasizes that with proper direction, ethical frameworks, and global cooperation, artificial intelligence ca…
S39
Keynote-Mukesh Dhirubhai Ambani — This positioning grants India moral authority to lead inclusive AI development whilst framing the country’s technologica…
S40
Bridging the AI innovation gap — The tone is consistently inspirational and collaborative throughout. The speaker maintains an optimistic, forward-lookin…
S41
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — The tone is consistently optimistic, confident, and inspirational throughout. The speaker maintains an enthusiastic and …
S42
Embracing the future of e-commerce and AI now (WEF) — By working together, governments and businesses can create an enabling environment and establish policies that foster in…
S43
Generative AI: Steam Engine of the Fourth Industrial Revolution? — AI technologies, such as machine learning and predictive analytics, can help in early disease detection, treatment plann…
S44
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-naveen-tewari-founder-ceo-inmobi-india-ai-impact-summit — All right. One of the most important elements of the agentic commerce era is going to be a living context graph. A livin…
S45
IN CONVERSATION WITH MITCHELL BAKER — Developing user-centered AI technology was deemed crucial for navigating future technological changes. Prioritizing pers…
S46
https://dig.watch/event/india-ai-impact-summit-2026/ensuring-safe-ai_-monitoring-agents-to-bridge-the-global-assurance-gap — Would you like me to go and take care of you and get some more toothpaste for you? You mentioned standards, which I thin…
S47
Brainstorming with AI opens new doors for innovation — AI is increasingly embraced as a reliable creative partner, offering speed and breadth in idea generation. In Fast Compa…
S48
Collaborative AI Network – Strengthening Skills Research and Innovation — “We are also building some shared platforms for personalization and to understand citizens’ characteristics.”[49]. “we’r…
S49
https://dig.watch/event/india-ai-impact-summit-2026/how-trust-and-safety-drive-innovation-and-sustainable-growth — One sentence. Okay. Is that my sentence? I think about provenance tools as an area of innovation. Again, this is calling…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
N
Naveen Tewari
16 arguments · 152 words per minute · 2226 words · 877 seconds
Argument 1
Lifespan extension to ~120 years through disease eradication and organ creation (Naveen Tewari)
EXPLANATION
Tewari claims that AI will dramatically increase human lifespan, potentially reaching 120 years, by eradicating diseases and enabling new ways to create organs. This longer life expectancy is presented as one of the major societal benefits of AI.
EVIDENCE
He suggested AI will expand human lifespan to about 120 years by eradicating diseases and creating organs differently, citing a high probability that everyone in the room could live that long [6-9].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote transcript records Tewari stating that AI will eradicate diseases and enable new ways to create organs, extending average human lifespan to about 120 years [S1].
MAJOR DISCUSSION POINT
AI‑driven longevity
Argument 2
Democratization of skills, turning everyone into high‑quality coders and reducing inequality (Naveen Tewari)
EXPLANATION
Tewari argues that AI will democratize intelligence, eliminating current skill gaps so that even non‑technical people will become high‑quality coders within a few years. This shift is portrayed as a way to eradicate inequality in the future.
EVIDENCE
He explained that AI will make skill inequality disappear, enabling everyone to become super high-quality coders within five years, leading to skill equality and a fundamentally different way of living [10-13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari describes AI making high-quality coding skills accessible to all, effectively turning the entire population into capable programmers [S1].
MAJOR DISCUSSION POINT
Skill democratization
Argument 3
Disproportionate economic growth driven by AI‑boosted productivity (Naveen Tewari)
EXPLANATION
Tewari states that AI will generate a disproportionate rate of economic prosperity by adding massive productivity gains worldwide. This boost is expected to transform overall economic growth patterns.
EVIDENCE
He noted a very disproportionate rate of growth of economic prosperity because of the added productivity that AI brings to the world [13].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He notes a “very disproportionate rate of growth of economic prosperity” due to AI-added productivity, a view echoed by broader analyses of AI-driven growth [S1][S6].
MAJOR DISCUSSION POINT
AI‑driven productivity surge
Argument 4
Shift from “personalized feeds” to “personal feeds” that center commerce on the individual user (Naveen Tewari)
EXPLANATION
Tewari describes a transition from generic personalized feeds to truly personal feeds that place the individual at the core of commerce. This re‑orientation is meant to make shopping experiences uniquely tailored to each user.
EVIDENCE
He described moving from ‘personalized feeds’ to ‘personal feeds’, meaning commerce will be centered around the individual rather than generic preferences [26-30].
MAJOR DISCUSSION POINT
Personal feed paradigm
Argument 5
Goal to train a commerce model for a billion people, delivering real‑time personal product feeds globally (Naveen Tewari)
EXPLANATION
Tewari outlines an ambition to develop a commerce model that can serve a billion users, generating real‑time, individualized product feeds worldwide. This scale is presented as a cornerstone of the Glance platform.
EVIDENCE
He said the platform aims to train a commerce model for a billion people over the next several years, delivering real-time personal product feeds globally [33-36].
MAJOR DISCUSSION POINT
Scaling personal commerce models
Argument 6
Commerce Intelligence Graph as a universal knowledge graph of all commerce elements (Naveen Tewari)
EXPLANATION
Tewari introduces the Commerce Intelligence Graph, likening it to a knowledge graph that must contain information about every commerce element in the world. This graph underpins the agentic commerce architecture.
EVIDENCE
He introduced the ‘commerce intelligence graph’, describing it as a knowledge graph that must know everything about every commerce element globally [38-42].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote introduces the “commerce intelligence graph” as a knowledge graph that must know everything about every commerce element globally [S1].
MAJOR DISCUSSION POINT
Universal commerce knowledge base
Argument 7
Generative AI Experience Model that creates visual, personalized commerce outputs (e.g., pamphlets, feeds) (Naveen Tewari)
EXPLANATION
Tewari explains that the Generative AI Experience Model produces visual, personalized commerce outputs such as custom pamphlets or feeds, moving beyond text‑only answer engines. This visual generation is key to delivering individualized shopping experiences.
EVIDENCE
He described a generative AI experience model that creates visual outputs like personalized pamphlets or feeds, unlike typical text-only answer engines [43-48].
MAJOR DISCUSSION POINT
Visual generative commerce
Argument 8
User model trained at the individual level to tailor the shopping experience (Naveen Tewari)
EXPLANATION
Tewari emphasizes that a user‑specific model is trained on each individual, differentiating the shopping journey for every person. This individualized training is presented as the core differentiator of agentic commerce.
EVIDENCE
He noted that the user model is trained at the individual level, which differentiates the way one thinks about shopping from everything else out there [49-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari explains that a dedicated user-level model is trained on each consumer’s data to deliver uniquely tailored recommendations [S1].
MAJOR DISCUSSION POINT
Individual‑level modeling
Argument 9
Living Context Graph that captures user context, price and brand sensitivity to optimize purchase paths (Naveen Tewari)
EXPLANATION
Tewari describes a Living Context Graph that continuously understands a user’s current context, price sensitivity, and brand preferences, enabling optimal purchase‑path recommendations across millions of possibilities. This dynamic graph is portrayed as a powerful tool for personalized commerce.
EVIDENCE
He explained a living commerce context graph that captures a user’s context, price and brand sensitivity, and optimizes purchase paths across millions of potential pathways at the click of a button [55-58].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He describes a “living context graph” that continuously captures a user’s context, price and brand sensitivity to optimise purchase paths [S1].
MAJOR DISCUSSION POINT
Dynamic context‑aware optimization
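The keynote does not describe how such a context graph would actually be implemented. Purely as an illustration of the scoring idea behind "optimizing purchase paths against price and brand sensitivity," here is a minimal sketch in which every name (`UserContext`, `PurchasePath`, `score_path`) and the linear weighting scheme are hypothetical, not taken from the talk.

```python
from dataclasses import dataclass

@dataclass
class UserContext:
    price_sensitivity: float   # 0.0 = indifferent to price, 1.0 = highly sensitive
    brand_sensitivity: float   # 0.0 = indifferent to brand, 1.0 = highly sensitive
    preferred_brands: set      # brands this user favours

@dataclass
class PurchasePath:
    seller: str   # one candidate way to buy the product
    brand: str
    price: float

def score_path(user: UserContext, path: PurchasePath, max_price: float) -> float:
    """Toy utility: cheaper paths score higher for price-sensitive users,
    preferred brands score higher for brand-sensitive users."""
    price_score = 1.0 - path.price / max_price        # 1.0 = free, 0.0 = most expensive
    brand_score = 1.0 if path.brand in user.preferred_brands else 0.0
    return (user.price_sensitivity * price_score
            + user.brand_sensitivity * brand_score)

def best_path(user: UserContext, paths: list) -> PurchasePath:
    """Pick the highest-scoring candidate path for this user."""
    max_price = max(p.price for p in paths)
    return max(paths, key=lambda p: score_path(user, p, max_price))

paths = [
    PurchasePath("MarketA", "BrandX", 120.0),
    PurchasePath("LocalShop", "BrandY", 60.0),
]
user = UserContext(price_sensitivity=0.8, brand_sensitivity=0.2,
                   preferred_brands={"BrandX"})
print(best_path(user, paths).seller)   # the strongly price-sensitive user
                                       # is steered to the cheaper seller
```

In this toy version a strongly price-sensitive user is routed to the cheaper seller even though the other path carries a preferred brand; a real system of the kind described would presumably learn such weights per user from behaviour rather than hard-code them.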
Argument 10
Open, transparent reasoning engine so users understand why specific products are recommended (Naveen Tewari)
EXPLANATION
Tewari proposes making the AI reasoning engine fully transparent, allowing users to see exactly why a product was shown or recommended. This transparency is positioned as essential for building trust in agentic commerce.
EVIDENCE
He said the reasoning engine will be made transparent so users can understand why a particular product is recommended, fostering transparency [63-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote highlights a transparent reasoning engine that lets users see why particular products are shown, enhancing trust [S1].
MAJOR DISCUSSION POINT
Transparent recommendation logic
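The keynote does not specify how this transparency would be surfaced to users. One common pattern, sketched below with entirely hypothetical names and data, is to return human-readable reasons alongside each recommendation rather than the item alone.

```python
from collections import Counter

def recommend(user_history: list, catalog: dict) -> dict:
    """Toy transparent recommender: picks the catalog item for the category
    the user has bought most often, and explains why it was chosen."""
    top_category, count = Counter(user_history).most_common(1)[0]
    item = catalog[top_category]
    return {
        "item": item,
        "reasons": [
            f"You purchased {count} items in the '{top_category}' category.",
            f"'{item}' is the catalog entry for that category.",
        ],
    }

result = recommend(
    ["shoes", "shoes", "books"],
    {"shoes": "TrailRunner 2", "books": "Atlas of AI"},
)
print(result["item"])
for reason in result["reasons"]:
    print("-", reason)
```

The design point is that the reasons are produced by the same logic that made the choice, so the explanation cannot drift from the recommendation; explaining a large generative model's choices after the fact is a much harder, and still open, research problem.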
Argument 11
Emphasis on accountability and authenticity to build trust in the agentic commerce ecosystem (Naveen Tewari)
EXPLANATION
Tewari stresses that accountability and authenticity, enabled by transparent agents, are crucial for establishing trust in the new commerce paradigm. He links authenticity to the ethical deployment of AI agents.
EVIDENCE
He highlighted that transparency leads to authenticity and accountability, which are needed to build trust in the agentic commerce ecosystem [58-62] and further noted that authentic agents arise once agents are made transparent [98-100].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari links transparency to authenticity and accountability, stating these are essential for trust in the agentic commerce ecosystem [S1].
MAJOR DISCUSSION POINT
Trust through accountability
Argument 12
Commerce represents ~25 % of global GDP; agentic commerce could add roughly $3 trillion to India’s economy by 2047 (Naveen Tewari)
EXPLANATION
Tewari points out that commerce already accounts for about a quarter of global GDP and projects that agentic commerce could contribute an additional $3 trillion to India’s economy by 2047. This figure underscores the massive economic potential of the technology.
EVIDENCE
He stated that commerce is 25 % of the world’s GDP and that agentic commerce could add roughly $3 trillion to India’s economy by 2047 [88-91].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He cites commerce as 25 % of world GDP and projects agentic commerce could contribute about $3 trillion to India by 2047; AI’s macro-economic impact is also discussed in broader literature [S1][S6].
MAJOR DISCUSSION POINT
Macro‑economic impact
Argument 13
Consumer‑level intelligence generates efficiency savings that feed back into the economy, creating a powerful flywheel effect (Naveen Tewari)
EXPLANATION
Tewari argues that intelligent consumer decisions will reduce wasteful spending, generating efficiency savings that circulate back into the broader economy, creating a self‑reinforcing flywheel. This mechanism is presented as a catalyst for further economic growth.
EVIDENCE
He highlighted that agentic commerce can save significant money at the consumer level, leading to efficiency gains that feed back into the economy and create a powerful flywheel effect [68-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote notes that billions of people making intelligent decisions will generate massive efficiency savings that circulate back into the economy [S1].
MAJOR DISCUSSION POINT
Efficiency‑driven economic flywheel
Argument 14
Traditional marketplaces will weaken while individual and local brands rise, enabled by intelligent agents (Naveen Tewari)
EXPLANATION
Tewari predicts that the rise of intelligent agents will diminish the role of traditional marketplaces, allowing individual and local brands to flourish. This shift is framed as an opportunity for entrepreneurs across the country.
EVIDENCE
He forecasted the demise of traditional marketplaces and the rise of individual and local brands because intelligent agents can discover them, benefiting entrepreneurs nationwide [73-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Tewari predicts the decline of traditional marketplaces and the rise of individual/local brands as intelligent agents discover them [S1].
MAJOR DISCUSSION POINT
Marketplace disruption
Argument 15
Agentic manufacturing receives precise consumer signals, dramatically increasing producer productivity (Naveen Tewari)
EXPLANATION
Tewari explains that agentic manufacturing will receive highly precise consumer signals, which will dramatically boost producer productivity. This integration from consumer to manufacturer is portrayed as a transformative change for industry.
EVIDENCE
He described that agentic manufacturing will get precise consumer signals, leading to a drastic increase in manufacturer productivity [81-86].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
He explains that agentic manufacturing will obtain precise consumer signals, leading to a drastic boost in manufacturer productivity [S1].
MAJOR DISCUSSION POINT
Precision‑driven manufacturing
Argument 16
Glance is built in Bangalore, reflecting an Indian‑led, globally‑focused, audacious ambition to reshape commerce (Naveen Tewari)
EXPLANATION
Tewari proudly notes that the Glance platform is developed in Bangalore, India, and is intended for a global audience, showcasing an ambitious, Indian‑led effort to transform commerce worldwide. This emphasizes national pride and global ambition.
EVIDENCE
He emphasized that Glance is being built in Bangalore, India, for the world, highlighting the Indian origin and audacious vision of the project [102-106] and [103-105].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The keynote emphasizes that Glance is being built in Bangalore for a global audience, showcasing Indian-led innovation [S1].
MAJOR DISCUSSION POINT
Indian innovation on a global stage
S
Speaker 2
1 argument · 63 words per minute · 33 words · 31 seconds
Argument 1
Acknowledgment of the keynote and transition to the next speaker, maintaining event flow (Speaker 2)
EXPLANATION
Speaker 2 thanked Mr. Tewari for his keynote address, introduced the next speaker, Mr. Vivek Mahajan, and asked the audience to settle down, thereby ensuring a smooth transition in the event program.
EVIDENCE
He thanked Mr. Tewari for the keynote, invited Mr. Vivek Mahajan, the CTO of Fujitsu, to speak next, and requested everyone to settle down [123].
MAJOR DISCUSSION POINT
Event transition
AGREED WITH
Naveen Tewari
Agreements
Agreement Points
Acknowledgment of the keynote and transition to the next speaker, maintaining event flow
Speakers: Naveen Tewari, Speaker 2
Acknowledgment of the keynote and transition to the next speaker, maintaining event flow (Speaker 2)
Both speakers participated in the formal hand-over: Naveen Tewari delivered the keynote and Speaker 2 thanked him and introduced the next presenter, ensuring a smooth continuation of the programme [123].
POLICY CONTEXT (KNOWLEDGE BASE)
Formal handovers and introductions of subsequent speakers are standard procedural elements in summit programming, as demonstrated by the moderator’s thank-you to the current speaker and the hand-off to Mr. Naveen Tewari at the India AI Impact Summit [S19][S20].
Similar Viewpoints
Both participants recognized the importance of the keynote as a central element of the event and cooperated to keep the agenda on track [123].
Speakers: Naveen Tewari, Speaker 2
Acknowledgment of the keynote and transition to the next speaker, maintaining event flow (Speaker 2)
Unexpected Consensus
Recognition of the keynote’s significance despite the lack of substantive policy debate
Speakers: Naveen Tewari, Speaker 2
Acknowledgment of the keynote and transition to the next speaker, maintaining event flow (Speaker 2)
It is unexpected that the only point of agreement between the two speakers is procedural rather than substantive, highlighting a limited overlap in content focus [123].
POLICY CONTEXT (KNOWLEDGE BASE)
The keynote is framed as a ceremonial and forward-looking highlight rather than a policy-heavy debate, mirroring the diplomatic, optimistic tone of prior AI Impact Summits and the event’s emphasis on celebration and stakeholder convening rather than detailed policy discussion [S21][S22][S26].
Overall Assessment

The discussion shows a single substantive contributor (Naveen Tewari) presenting a wide range of AI‑driven commerce arguments, while the second speaker only performed a brief procedural hand‑over. Consequently, there is minimal substantive consensus; the only clear agreement is the procedural acknowledgment of the keynote and the event’s continuity.

Low substantive consensus – agreement is limited to event management, implying that the broader AI‑commerce agenda remains unchallenged but also uncorroborated by other participants.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The discussion consists of a keynote by Naveen Tewari presenting a vision of AI-driven agentic commerce, followed by a brief transition remark from Speaker 2 who thanks Tewari and introduces the next speaker. No substantive policy or conceptual differences are expressed between the two participants; Speaker 2 does not present an alternative viewpoint or critique, merely acknowledges the keynote and moves the program forward [123].

Minimal – the interaction shows no disagreement, indicating a smooth event flow without contested issues, which suggests consensus (or at least no overt conflict) on the presented vision within the limited scope of this exchange.

Takeaways
Key takeaways
AI is expected to dramatically extend human lifespan, democratize high‑skill capabilities, and drive disproportionate economic growth.
The concept of “agentic commerce” shifts from generic personalized feeds to truly personal, AI‑driven product feeds centered on each individual user.
Glance aims to train a commerce model for a billion users, delivering real‑time, visual, personalized product recommendations globally.
The technical architecture includes a Commerce Intelligence Graph (a universal knowledge graph), a Generative AI Experience Model (producing visual commerce outputs), a User Model trained per individual, and a Living Context Graph that captures context, price, and brand sensitivities to optimize purchase paths.
Transparency, accountability, and authenticity are emphasized by making the AI reasoning engine open so users understand why recommendations are made, building trust in the ecosystem.
Agentic commerce could add roughly $3 trillion to India’s economy by 2047, a massive impact given that commerce accounts for ~25 % of global GDP.
Consumer‑level intelligence creates efficiency savings that feed back into the economy, creating a powerful flywheel effect.
Supply chains and marketplaces will be transformed: traditional marketplaces may weaken while individual and local brands rise, and manufacturers will receive precise consumer signals, boosting productivity (agentic manufacturing).
Glance is built in Bangalore, reflecting an Indian‑led, globally‑focused, audacious vision to reshape commerce worldwide.
Resolutions and action items
None identified
Unresolved issues
None identified
Suggested compromises
None identified
Thought Provoking Comments
AI will expand human lifespan to around 120 years by eradicating diseases and creating organs differently.
It frames AI not just as a productivity tool but as a transformative force for human biology, broadening the discussion beyond commerce to fundamental societal change.
Sets a visionary tone, establishing a high‑level, future‑oriented context that primes the audience to consider far‑reaching implications of AI, which later justifies the ambitious goals for agentic commerce.
Speaker: Naveen Tewari
AI will democratize skills, turning everyone into high‑quality coders and eliminating current skill inequality.
Challenges the prevailing belief that technical expertise will remain a scarce resource, suggesting a radical shift in labor markets.
Introduces the theme of equality, which underpins later arguments about how agentic commerce can be universally accessible and why transparency and accountability become essential.
Speaker: Naveen Tewari
Agentic commerce moves from ‘personalized feeds’ to ‘personal feeds’ by training a unique commerce model for each individual consumer.
Presents a novel conceptual framework that redefines personalization at the user‑level, differentiating the proposed technology from existing recommendation systems.
Acts as a turning point that transitions the talk from abstract AI benefits to a concrete product vision, leading to detailed discussion of the underlying models (commerce intelligence graph, generative experience model, user model).
Speaker: Naveen Tewari
A living commerce context graph that understands a user’s context, price sensitivity, and brand preferences to optimize purchase paths across billions of possibilities.
Introduces a sophisticated, dynamic system that integrates real‑time contextual data, highlighting technical depth and scalability.
Deepens the conversation by adding technical complexity, prompting the audience to envision how such a graph could reshape decision‑making and supply‑chain efficiency.
Speaker: Naveen Tewari
Transparency and accountability will be built into the AI reasoning engine so consumers can see why a product was recommended.
Addresses a critical ethical concern in AI deployments, linking technological innovation with trust and regulatory considerations.
Shifts the tone from purely optimistic to responsibly cautious, signaling that the platform will tackle societal concerns, which may influence stakeholder confidence.
Speaker: Naveen Tewari
Agentic commerce will diminish traditional marketplaces and empower individual and local brands by enabling agents to discover them directly.
Predicts a disruptive shift in the commercial ecosystem, challenging the dominance of large aggregators and proposing a new market structure.
Creates a strategic turning point that expands the discussion from consumer experience to broader market dynamics, hinting at new business opportunities for entrepreneurs.
Speaker: Naveen Tewari
Agentic manufacturing will receive precise consumer‑level signals, dramatically increasing manufacturer productivity.
Extends the agentic concept downstream to production, illustrating a full‑stack impact from shopper to factory.
Broadens the scope of the conversation to include supply‑chain transformation, reinforcing the claim of systemic economic impact.
Speaker: Naveen Tewari
Commerce accounts for 25 % of global GDP; agentic commerce could add roughly $3 trillion to India’s economy by 2047.
Quantifies the macro‑economic potential, grounding the visionary ideas in concrete financial terms.
Provides a compelling closing argument that ties all previous points to measurable national benefit, strengthening the call to action.
Speaker: Naveen Tewari
We must be truthful and authentic—drawing on the Upanishadic principle of ‘be truthful’—to ensure agents are transparent and trustworthy.
Integrates cultural philosophy with technology ethics, offering a moral framework for AI deployment.
Adds a cultural and ethical dimension that resonates with the Indian audience, reinforcing the responsibility narrative introduced earlier.
Speaker: Naveen Tewari
Thank you Mr. Tiwari for the keynote address… I now invite Mr. Vivek Mahajan, CTO of Fujitsu.
Serves as the formal transition point, signaling the end of the keynote and the shift to the next segment of the event.
Marks the conclusion of the discussion, allowing the audience to reflect on the ideas presented before moving to the next speaker.
Speaker: Speaker 2
Overall Assessment

The keynote unfolded as a series of escalating insights, each building on the previous one. Early visionary statements about lifespan and skill equality set a grand, human‑centric backdrop. The introduction of ‘agentic commerce’ and its technical underpinnings (personal feeds, living context graph) shifted the conversation from abstract possibilities to a concrete product roadmap, prompting deeper technical and ethical considerations such as transparency, accountability, and cultural authenticity. Subsequent comments about the disruption of marketplaces, the rise of local brands, and agentic manufacturing expanded the scope to entire supply‑chain ecosystems, while the macro‑economic quantification anchored the vision in tangible value. Together, these pivotal remarks guided the audience through a narrative that moved from societal transformation to specific technological innovation, ethical responsibility, and economic impact, culminating in a clear call to action before the session transitioned to the next speaker.

Follow-up Questions
What does a personal feed mean in the context of agentic commerce?
Clarifies the core concept of personal feeds that differentiate agentic commerce from traditional personalized feeds.
Speaker: Naveen Tewari
What does "agentic commerce" entail and how is it fundamentally different from existing commerce models?
Defines the new paradigm that the speaker is promoting, essential for audience understanding and further exploration.
Speaker: Naveen Tewari
How can a living commerce context graph be constructed to understand user context, price sensitivity, and brand sensitivity in real time?
Identifies a technical challenge that requires research into graph structures, data ingestion, and real‑time inference.
Speaker: Naveen Tewari
What methods will be used to train a commerce model for a billion individual users, and what scalability challenges exist?
Highlights a massive scaling problem that needs investigation into model architecture, data privacy, and compute resources.
Speaker: Naveen Tewari
How will the reasoning engine of the agentic system be made transparent so users can understand why a product is recommended?
Addresses the need for explainable AI to build trust, requiring research into interpretability techniques for multimodal outputs.
Speaker: Naveen Tewari
In what ways can transparency and accountability be embedded into AI‑driven commerce to ensure authenticity of agents?
Explores ethical considerations and mechanisms for authentic, trustworthy agents, a key research area for responsible AI.
Speaker: Naveen Tewari
What is the projected economic impact of agentic commerce in India (estimated $3 trillion by 2047) and how can this be measured?
Calls for macro‑economic modeling and impact assessment to validate the claimed value creation.
Speaker: Naveen Tewari
How will agentic commerce affect existing marketplaces and what will be the dynamics of the rise of individual and local brands?
Requires market‑structure analysis to understand displacement of aggregators and emergence of niche producers.
Speaker: Naveen Tewari
What are the implications of agentic manufacturing—how will precision signals from consumers alter manufacturing productivity?
Suggests research into supply‑chain integration, demand forecasting, and adaptive production systems.
Speaker: Naveen Tewari
How will AI democratization lead to skill equality, particularly turning all engineers into high‑quality coders?
Invites investigation into education, upskilling platforms, and the societal impact of AI‑assisted coding tools.
Speaker: Naveen Tewari
What scientific advances are required to extend human lifespan to 120 years as suggested, and how does this intersect with commerce?
Points to interdisciplinary research linking biomedical breakthroughs, longevity economics, and consumer behavior.
Speaker: Naveen Tewari
What data gaps exist in measuring the current waste of money at the consumer level, and how can agentic commerce quantify and reduce this waste?
Calls for empirical studies on consumer inefficiencies and the effectiveness of AI‑driven decision support.
Speaker: Naveen Tewari
How can the ‘living context graph’ incorporate multi‑modal data (visual, textual, behavioral) to generate personalized visual pamphlets in real time?
Highlights a technical research need in multimodal generation and real‑time personalization.
Speaker: Naveen Tewari
What governance frameworks are needed to ensure that the transparency of AI agents does not compromise user privacy?
Raises ethical and regulatory research into balancing explainability with data protection.
Speaker: Naveen Tewari

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw


Session at a glance
Summary, key points, and speakers overview

Summary

The Impact AI Summit opened with Kiran Mazumdar-Shaw framing the keynote around “biotech sovereignty embedded in AI” as a new strategic priority for India [2]. She argued that just as the 20th century was defined by the Internet and the early 21st by digital data sovereignty, the coming decades will be shaped by the convergence of biological and artificial intelligence, which she terms biotech sovereignty [4-5]. Mazumdar-Shaw stressed that mastering this convergence is not merely an opportunity but a geopolitical imperative for the nation [6-8].


She described biological intelligence as the product of 3.8 billion years of evolution, where living cells sense, compute, and respond through complex signaling networks and built-in guardrails that maintain homeostasis [9-16]. Using the immune system as an example, she showed how memory T and B cells store pathogen information and rapidly mobilise a response upon re-exposure, illustrating efficient information processing without massive energy consumption [19-24]. By contrast, conventional AI learns from data at machine scale, and the true inflection point lies at the intersection of AI and biology, enabling applications such as protein-structure prediction and generative drug design [33-36].


She highlighted the next frontier of reprogramming cells (turning cancer cells benign or repairing bone tissue), contingent on a deep understanding of cellular signalling, gene regulation, and immune memory [38-43]. Mazumdar-Shaw warned that reliance on offshore foundational AI models for drug discovery and genomics would create strategic dependence, making sovereign control over biological data, AI models, and computing infrastructure essential for national resilience [54-57]. She called for embedding AI across the entire biotech value chain, from foundation models for proteins and cellular circuits to in-silico trials, digital twins, and AI-driven manufacturing, to accelerate discovery, reduce risk, and ensure regulatory processes keep pace [60-66].


Achieving this transformation, she said, requires a “triple helix” of government investment in sovereign AI bio-infrastructure, academia’s rollout of computational biology and AI-first curricula, and industry’s co-creation of shared platforms and scalable biomanufacturing clusters [71-77]. Ethical, transparent, energy-efficient, and bias-aware AI systems must be built to be globally interoperable yet rooted in public interest, allowing India to offer a model that blends technological leadership with equity and access [81-86]. She concluded that biotech sovereignty is a foundation of health security, strategic autonomy, and economic resilience, and that India possesses the scientific talent, AI expertise, scale, and values to lead if it builds sovereign platforms today [88-90].


Overall, the keynote positioned AI-enabled biotechnology as a decisive lever for India’s future global standing and public-health security [2][5][88-90].


Keypoints


Biotech sovereignty powered by AI is a strategic and geopolitical imperative for India.


Mazumdar-Shaw argues that the next decade will be defined by “biotech sovereignty that is embedded in AI” and that nations mastering the convergence of biology and AI will shape critical sectors such as health, food security, and bio-security [4-5][6-8]. She stresses that reliance on offshore AI models for drug discovery and genomics creates strategic dependence, making sovereign control over data, AI models, and translational platforms essential for national resilience [54-58][87-90].


Understanding “biological intelligence” reveals why AI-biology convergence is transformative.


She describes living systems as “the original intelligent machines,” highlighting their 3.8-billion-year evolution, multimodal learning, memory, and energy-efficient computation [9-13][14-17]. Examples such as the immune system’s rapid recall of pathogens [19-23] and the Arctic tern’s DNA-encoded navigation [29-33] illustrate how biology processes, stores, and retrieves information far more efficiently than conventional data centers [24-28]. This biological intelligence, when paired with AI, can accelerate protein folding and generative drug design, and ultimately enable the reprogramming of cells for therapies [36-38][40-44][46-48][50-52].


A concrete AI-enabled roadmap across the biotech value chain is needed.


She outlines actions for each stage:


Discovery: develop foundation models for proteins, RNA, cellular circuits, and systems biology [62-63].


Development: create in-silico trials, digital twins, and AI-driven trial design to de-risk pipelines [63-64].


Manufacturing: implement smart biomanufacturing for yield optimization and “quality by design” [64-66].


Regulation: build AI-augmented, science-first regulatory pathways that integrate real-world evidence [66-70].


She warns that without synchronized regulatory speed, the accelerated discovery timeline will be wasted [69-71].


Realizing this vision requires a “triple-helix” collaboration and supportive ecosystem.


Government must invest in sovereign AI-bio infrastructure, trusted data architectures, and mission-mode programs [74]; academia should mainstream computational biology and AI-first life-science education to create a new cadre of translational scientists [75]; industry must co-create shared platforms and globally benchmarked biomanufacturing clusters [76]. Capital markets need to provide patient capital for long-cycle biotech innovation, delivering exponential societal and economic returns [77-80].


Ethical, equitable, and globally interoperable AI is central to India’s leadership model.


Sovereignty is framed not as isolation but as building transparent, energy-efficient, bias-aware AI systems rooted in public interest [81-84]. By embedding equity, affordability, and access into AI-driven biotech, India can offer a “new model of innovation combining technological leadership with social purpose” [85-86], positioning itself as a global public-good provider in health security and economic resilience [88-90].


Overall purpose:


The discussion is a persuasive call to action for India to establish a sovereign, AI-native biotechnology ecosystem. It explains why the convergence of biological intelligence and artificial intelligence is critical, outlines the technical and policy steps required across the entire biotech value chain, and frames the effort as essential for national health security, strategic autonomy, and global leadership.


Overall tone:


The speaker begins with an enthusiastic, visionary tone, celebrating the AI summit and the promise of a new era [2-5]. She then shifts to an explanatory, scientific tone to demystify biological intelligence [9-34]. This transitions into a pragmatic, urgent call-to-action, detailing concrete roadmap items and emphasizing the need for coordinated government, academia, and industry effort [62-71][72-77]. The closing returns to an inspirational, hopeful tone, emphasizing ethical leadership and India’s capacity to lead the world [81-90]. Throughout, the tone remains confident and forward-looking, moving from description to urgency to inspiration.


Speakers

Kiran Mazumdar-Shaw


Role/Title: Chairperson, Biocon Group; Keynote speaker


Areas of Expertise: Biotechnology entrepreneurship, healthcare innovation, AI-enabled biotech, philanthropy in health access


Citation: [S1]


Speaker 1


Role/Title: Event moderator/host (role not specified)


Areas of Expertise: (not specified)


Citation: [S4]


Additional speakers:


(none identified beyond the listed speakers)


Full session report: comprehensive analysis and detailed insights

The Impact AI Summit opened with a brief welcome that invited the audience to applaud Ms Kiran Mazumdar-Shaw, Chairperson of the Biocon Group, before she began her keynote address [1]. She expressed enthusiasm for the inaugural summit, noting that India’s first-ever Impact AI Summit signalled the nation’s entry onto the global AI journey [2-3].


Mazumdar-Shaw framed her talk around biotech sovereignty, drawing a historical analogy: the 20th century was defined by the Internet, the early 21st century by digital data sovereignty, and the coming decades will be shaped by the convergence of biology and artificial intelligence [4-5]. She argued that this convergence is a strategic and geopolitical imperative for India, essential for future dominance in health, food security, bio-security and related sectors [6-8]. She warned that continued reliance on offshore AI models for drug discovery and genomics would create a strategic vulnerability; sovereign control over trusted biological data, indigenous AI models and computing infrastructure is therefore a matter of national resilience [54-58].


To illustrate why such sovereignty is needed, Mazumdar-Shaw described biological intelligence as the original form of intelligent machinery, evolved over 3.8 billion years and capable of multimodal learning, memory and ultra-efficient computation [9-13]. Living cells sense, compute and respond through signalling networks, gene-regulatory circuits and immune memory, all operating within built-in homeostasis guardrails [14-17]. When these guardrails fail, disease emerges, showing how biology embeds its own ethics and governance in the pursuit of health [18-24].


She gave two vivid examples of this natural efficiency. First, the immune system’s coordinated use of cytokines, antibodies, killer T-cells and memory T- and B-cells enables rapid recall of pathogen information and instant action on re-exposure [19-23]. Second, the Arctic tern’s 70,000-km migration, performed without prior learning or guidance, demonstrates DNA-encoded navigational intelligence [29-33]. Both cases show that biological systems process, store and retrieve information with energy consumption far lower than conventional data-centre AI, which relies on gigawatts of power; biology instead uses distributed “data centres” that sip energy only when needed, exemplified by the human brain’s super-computing capability [25-28].


The inflection point lies at the intersection of this biological intelligence and artificial intelligence [33-36]. AI-powered biology already accelerates discovery through protein-structure prediction, generative drug design and the creation of digital twins (AI-generated virtual replicas of cells and organs used for simulation), compressing timelines and reducing development risk [36-38]. Looking ahead, she envisioned a new frontier of programmable biology, where deep understanding of cell signalling, gene regulation and immune memory could enable the reprogramming of cancer cells into benign forms or the repair of otherwise irreparable bone tissue [39-44]. She linked this vision to personalised CAR-T therapies, autoimmune-disease interventions and longevity research that seeks to modulate senescence, metabolic ageing pathways and tissue-repair mechanisms, potentially extending human health-span by decades [46-48][49-52].


Realising these breakthroughs, however, requires sovereign AI-bio infrastructure. Mazumdar-Shaw stressed that if foundational AI models for drug discovery, genomics and cellular engineering remain owned abroad, India would face strategic dependence in the most critical domain of national resilience: human health [54-58]. Sovereign control over trusted biological data, indigenous AI models, compute resources and translational platforms is therefore essential for both economic competitiveness and preparedness against pandemics, antimicrobial resistance and emerging bio-threats [55-57].


She then laid out a concrete roadmap across the biotech value chain. In discovery, India must develop foundation models (large-scale AI models trained on biological data) for proteins, RNA, cellular circuits and systems biology [62-66]. In development, AI can enable in-silico trials, digital twins and AI-optimised trial design to de-risk pipelines and boost the probability of success [63-64]. In manufacturing, smart biomanufacturing driven by AI should optimise yield, implement “quality-by-design” and integrate real-world evidence into regulatory decisions [64-66]. Crucially, she emphasized the need to develop both a system of biomanufacturing and a system of biotech regulation that can keep pace with accelerated science [64-66]. She added that AI can map these regulatory circuits at scale, enabling targeted interventions that preserve homeostasis [33-36]. She also clarified that AI alone will not create economic opportunities; the delivery of AI through manufacturing and products will [71-77].


Achieving this transformation cannot rely on industry alone. Mazumdar-Shaw called for a triple-helix collaboration among government, academia and industry, supported by evolving capital markets [71-77]. The government should invest in sovereign AI-bio infrastructure, trusted data architectures, regulatory sandboxes and mission-mode programmes in cell-gene therapy, immuno-oncology and longevity [74]. Academia must mainstream computational biology, neurosymbolic AI and AI-first life-science curricula to create a new cadre of translational scientists [75]. Industry is tasked with co-creating shared platforms, translational pipelines and globally benchmarked biomanufacturing clusters that can scale discoveries [76]. Capital markets must provide patient, long-term financing for high-risk biotech innovation, delivering exponential societal and economic returns [77-80].


Ethical considerations were positioned as central to India’s leadership model. Mazumdar-Shaw clarified that sovereignty does not mean isolation; instead, India should build AI systems that are transparent, energy-efficient, bias-aware and globally interoperable, yet rooted in the public interest [81-86]. By embedding equity, affordability and access into AI-driven biotech, India can present its outputs as global public goods [85-86].


In conclusion, Mazumdar-Shaw asserted that biotech sovereignty is the foundation of health security, strategic autonomy and economic resilience; nations that master the language of life augmented by the language of machines will shape humanity’s future, and India possesses the scientific talent, AI expertise, scale and values to lead, provided it builds sovereign platforms today [88-91].


Session transcript: complete transcript of the session
Speaker 1

Ladies and gentlemen, please put your hands together to welcome Ms. Kiran Mazumdar-Shaw, Chairperson, Biocon Group.

Kiran Mazumdar-Shaw

Good afternoon, and let me say how delighted I am to be a part of this wonderful summit, the Impact AI Summit that India… is launching and hosting for the first time, which I think heralds a big signal that we are part of the AI journey that the world is on. I’ve basically taken off from where the last panel talked about sovereignty, and I thought I should talk about why India must build biotech sovereignty that is embedded in AI. And let me start with this first slide that basically says that if the 20th century was defined by the Internet and the early 21st century by digital sovereignty, which was all about data being the new oil and the new fuel, the coming decades, I believe, will be…

…be shaped by… biotech sovereignty that is embedded in AI. I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biological intelligence and artificial intelligence, will define the future of healthcare, food security, education, biomanufacturing, sustainability, biosecurity, and much more. For India, this is not merely a cutting-edge opportunity. It is a strategic and geopolitical imperative. Now, let me really touch upon what I mean by biological intelligence. Living systems are the original intelligent machines. And why do I say this? Because biological intelligence has evolved and has been built over 3.8 billion years. It is different in the way it learns, memorizes, builds and processes information from multimodal signals and circuits.

Cells sense, they compute and they respond through intricate signaling networks. They also then interface with gene regulation and gene regulatory circuits and immune memory. These systems operate within inbuilt biological guardrails, which form a network of cells that are connected to each other. They focus on feedback loops and control mechanisms that maintain what we refer to as homeostasis, or health equilibrium. Disease arises when these guardrails fail. So when we talk about ethics, when we talk about governance, living systems have an inbuilt sense of guardrails and governance, which is about keeping you healthy, which is about homeostasis, which is a wonderful way of making sure that it compensates, it repairs and makes sure that you can still live in as healthy a way as possible.

And to illustrate this, let’s look at the way our immune system responds to pathogens. The immune system responds through immunological ammunition like cytokines, antibodies and killer T cells. It also memorizes the identity of the pathogen in memory T cells and B cells. And years later, when the pathogen reinvades, the memory cells rapidly retrieve this information and translate it into instant action. This is the marvels of biology in the way it receives information, processes information, stores information, retrieves information and acts. And the inference of all this information is done at speed and with energy efficiency that we can’t even imagine. We don’t need those gigawatts of data centers. We have distributed data centers that take sips of energy when it needs to use it.

Our brain, which is the biggest supercomputer known to man, does this so efficiently that we need to understand how biology works. Another great thought-provoking example of biological intelligence is the migration of the Arctic tern. This little bird, that is the size of a tennis ball, undertakes a 70,000-kilometer journey between the Arctic, the Antarctic and back with no prior knowledge, with no older bird to guide it, and yet it does it with astonishing precision and speed. How does it do it? This is about navigational intelligence embedded in its DNA. AI, by contrast, learns from data to optimize decisions at machine scale. So therefore, the true inflection point lies at their intersection. AI-powered biology…

from protein structure prediction and generative drug design to digital twins of cells and organs. AI is compressing discovery timelines and reducing development risk. And therefore, I believe that the next frontier is even more profound. The reprogramming of cells themselves to restore biological balance. But for this, we need to understand how biological intelligence operates. Imagine reprogramming cancer cells into non-malignant cells. Imagine repairing bone tissue that is damaged and irreparable. Biological intelligence is built on an intricate network of cell signaling, gene regulation and immune memory that works symbiotically, as I mentioned, to maintain homeostasis. And so, we need to understand how biological intelligence operates.

Now, if we come to what I’ve just spoken about, which is reprogramming and re-engineering: we are moving from static, one-size-fits-all drugs to programmable biology, which is the new frontier. We need to learn how biology learns, stores, retrieves and processes data in such an agile and energy-efficient way. Once we understand the computational models of living systems, we can use AI to accelerate, with predictive precision, the most advanced present-day therapies. Today we are all excited about personalized CAR-T therapies that eliminate tumors with precision, and autoimmune disease interventions that recalibrate immune tolerance rather than broadly suppressing immunity. And then the most exciting part: longevity and health span.

These are areas where we must understand how senescence is modulated, metabolic pathways of aging are created and cellular repair mechanisms to delay biological aging and restore tissue resilience happens. If we understand all this, as the last speaker said, we may be able to live for another 50 years and more. Crucially, these approaches seek not to overpower biology but to reinforce its inbuilt guardrails or regulatory circuits which focus on repair, feedback control and immune surveillance. AI can map these regulatory circuits at scale, enabling target interventions that preserve homeostasis. That is the excitement of new science led by AI, new biology led by AI. This represents a paradigm shift from managing disease to re-engineering biological systems to sustain equilibrium.

So, India’s future health security will depend on how optimally we combine the code of life and the code of intelligence. If foundational AI models for drug discovery, genomics, cellular engineering and clinical decision making are owned offshore, India risks strategic dependence in the most critical domain of national resilience, which is human health. Biotech sovereignty embedded in AI must therefore mean sovereign control over trusted biological data, indigenous AI models, computing infrastructure, and translational platforms from discovery and development to manufacturing and delivery. This is essential not only for economic competitiveness, but also for preparedness against pandemics, antimicrobial resistance, and emerging new bio-threats. Now, I really believe this is a very important aspect of what AI can do for biotech and the economy.

AI alone will not create economic opportunities, but the delivery of AI in our field through manufacturing and products will do that. India’s global role must evolve from being the pharmacy of the world, to becoming the biotech platform of the world, a nation that offers AI-native discovery engines, programmable therapy platforms and scalable biomanufacturing as global public goods. And this requires embedding AI across the biotech value chain. When it comes to discovery, we need to develop foundation models for proteins, RNA, cellular circuits and systems biology. When it comes to development, I think there are huge opportunities to develop in silico trials, digital twins and AI-driven trial design to really de-risk the success of pipelines and probability of success.

When it comes to manufacturing, smart biomanufacturing using AI for yield optimization and most importantly, quality by design is going to be a great opportunity for all of us. Now, when it comes to the biotech value chain, we need to develop a system of biomanufacturing and also a system of biotech regulation. It has to be a science-first approach, tech-enabled regulatory pathways, integrating real-world evidence through AI validation. I think that’s going to be a huge opportunity which we must do right now. What is important is for regulations to keep up with technology. If we compress timelines of discovery and development to a fraction of what happens today, and if regulatory speed does not keep up with it, then we miss out on a huge opportunity.

So working in tandem, working in synchronization is the need of the hour. This transformation cannot be driven by industry alone. It demands a triple helix of government, academia and industry. Government must invest in sovereign AI bio-infrastructure, trusted data architectures, regulatory sandboxes, and mission-mode programs in cell and gene therapy, immuno-oncology, and longevity science. Academia must mainstream computational biology, neurosymbolic AI, and AI-first life sciences education to build a new cadre of translational scientists. Industry must co-create shared platforms, translational pipelines, and globally benchmarked biomanufacturing clusters that convert science into scale. Capital markets must also evolve to support long-cycle, high-risk biotech innovation that is so rampant in startups in our country.

Deep science requires a lot of research and development. It requires patient capital. But the societal and economic returns from reduced disease burden to global platform leadership are exponential. Now coming to ethics, trust and global leadership. Sovereignty is not isolation. India must build ethical, transparent, energy-efficient and bias-aware AI systems for biology that are globally interoperable yet rooted in public interest. And I think this is the unique model India can create. By embedding principles of equity, affordability and access into AI-driven biotech, India can offer the world a new model of innovation combining technological leadership with social purpose. For India, biotech sovereignty embedded in AI is not a sectoral ambition. It is a foundation of health security, strategic autonomy and economic resilience.

Those who master the language of life augmented by the language of machines will shape the future of humanity. India has the science, the AI and life sciences talent, the scale and the values to lead, provided it builds the sovereign platforms of tomorrow today. Thank you.

Related Resources: knowledge base sources related to the discussion topics (15)
Factual Notes: claims verified against the Diplo knowledge base (6)
Confirmed (high confidence)

“Kiran Mazumdar‑Shaw is the Chairperson of the Biocon Group.”

The knowledge base identifies Kiran Mazumdar-Shaw as Chairperson of the Biocon Group, confirming her role mentioned in the report [S1].

Confirmed (high confidence)

“Mazumdar‑Shaw framed her talk around “biotech sovereignty” and described it as a strategic and geopolitical imperative for India’s future dominance in health, food security and bio‑security.”

Her vision of biotech sovereignty and its strategic importance for India is echoed in the knowledge base, which highlights her comprehensive view on India’s positioning in global biotechnology leadership and the need for coordinated policy frameworks [S2].

Confirmed (high confidence)

“Reliance on offshore AI models for drug discovery and genomics creates a strategic vulnerability; sovereign control over trusted biological data, indigenous AI models and computing infrastructure is essential for national resilience.”

The knowledge base stresses the sovereignty dimension (control over data, models and security measures) as a core requirement for responsible AI development, aligning with the report’s warning about offshore dependencies [S7] and the three-pillar sovereignty framework (data, infrastructure, talent) [S39].

Additional Context (medium confidence)

“India’s AI strategy is built on three pillars of sovereignty: data sovereignty, infrastructure sovereignty, and talent sovereignty.”

While the report mentions biotech sovereignty, the knowledge base provides additional detail on India’s broader AI sovereignty strategy, outlining the three pillars that underpin the country’s approach [S39].

Additional Context (medium confidence)

“India’s large human capital pool is central to its global AI strategy.”

The knowledge base notes that India’s talent pool (over 350,000 employees) is viewed as a key asset for global AI initiatives, adding nuance to the report’s emphasis on strategic advantage [S36].

Additional Context (medium confidence)

“Biological risks and bio‑security are critical global concerns that require resilient, sovereign capabilities.”

The Global Risks Landscape 2019 highlights the evolving nature of biological risks and the importance of resilient, inward-looking strategies for global health security, supporting the report’s emphasis on bio-security as a strategic priority [S58].

External Sources (59)
S1
AI for Social Good Using Technology to Create Real-World Impact — -Kiran Mazumdar-Shaw: Chairperson of Biocon Group; pioneering biotech entrepreneur, healthcare visionary, and philanthro…
S2
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event moderator or host introd…
S3
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-social-good-using-technology-to-create-real-world-impact — Our third guest… is Kiran Mamzouma -Shaw. As chairperson of Biocon Group, Kiran is a pioneering biotech… Kiran is a …
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Building Sovereign and Responsible AI Beyond Proof of Concepts — Sovereignty dimension focuses on control over data, models, and security measures
S8
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Infrastructure, data sovereignty and model development
S9
Enhancing rather than replacing humanity with AI — In scientific research, AI accelerates discovery remarkably. Recent breakthroughs in usingAI to design synthetic protein…
S10
Breakthroughs in human-centric bioscience with AI — This landmark achievement shows how powerful, responsible AI research can address urgent human health needs, moving beyo…
S11
New AI platforms aim to streamline cancer trial recruitment and design — At the Summit for Clinical Operations Executives (SCOPE) 2026, major players in life sciencesshowcasedartificial intelli…
S12
AI-tissue collaboration could transform drug trials and precision medicine — Researchers combine human tissue models with explainable AI toanalyse patient dataand identify treatments that work best…
S13
Panel Discussion AI in Healthcare India AI Impact Summit — Hello, hello. One thing, so we touched upon drug discovery and the impact of AI in the healthcare system in general, so …
S14
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — So working in tandem, working in synchronization is the need of the hour. This transformation cannot be driven by indust…
S15
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — IRO has taken the topics of healthcare, sustainability and environmental science and pharma as initial domains. And rece…
S16
How Multilingual AI Bridges the Gap to Inclusive Access — And this means that it is so important for academia to play a role. We don’t just play a role because we’ve got expertis…
S17
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S18
Designing Indias Digital Future AI at the Core 6G at the Edge — This sovereignty imperative, according to Saluja, stems from both economic and strategic considerations. The token econo…
S19
Biology as Consumer Technology — Furthermore, AI and language models can contribute to better communication by describing scientific breakthroughs in a w…
S20
Artificial Intelligence & Emerging Tech — Tanara Lauschner:Thank you Jennifer. Hello everyone. First of all I would like to thank the IGF Secretariat for organizi…
S21
IGF 2024 Opening Ceremony — This comment provided a structure for subsequent speakers to address specific aspects of AI governance and inequality. I…
S22
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-uday-shankar-vice-chairman_jiostar-india — It’s been very clear -eyed about this. They identified exactly what they needed to outpace the West and build their regu…
S23
Bioeconomy Strategy — With the expansion of investments in the biotechnology fields, it is necessary: to develop and secure future human resou…
S24
Skilling and Education in AI — Connecting solutions across the entire value chain is crucial for real impact
S25
Enhancing the digital infrastructure for all | IGF 2023 Open Forum #135 — Furthermore, the importance of having appropriate policies in place to support and foster innovation and ecosystem devel…
S26
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Mothibi Ramusi: Thank you very much. Good afternoon. I think from our side, I’m just going to use the South African cont…
S27
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The roadmap is built upon core principles including “human and planetary welfare, accountability and transparency, inclu…
S28
Responsible AI in India Leadership Ethics & Global Impact part1_2 — Absolutely. So coming to the first question, you know, that you asked, I think, you know, I think there are obviously th…
S29
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S30
AI 2.0 Reimagining Indian education system — However, achieving global leadership requires addressing substantial infrastructure and equity challenges. The success o…
S31
Keynote-Surya Ganguli — “It gets the right answer just in time using the slowest, most unreliable intermediate steps possible.”[57]. “So biology…
S32
Brain-inspired networks boost AI performance and cut energy use — Researchers at the University of Surrey have developed a new method to enhance AI by imitating how the human brain connect…
S33
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — “Living systems are the original intelligent machines.”[22]. “This is the marvels of biology in the way it receives info…
S34
The Global Power Shift India’s Rise in AI & Semiconductors — High level of consensus with complementary perspectives rather than conflicting views. The speakers come from different …
S35
Driving Indias AI Future Growth Innovation and Impact — The discussion maintained an optimistic and forward-looking tone throughout, characterized by enthusiasm for India’s AI …
S36
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — The summit’s most striking theme was the unanimous recognition of India’s potential to become a dominant force in artifi…
S37
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — I believe that nations that command the convergence of biology and AI, or what I like to call the convergence of biologi…
S38
Designing Indias Digital Future AI at the Core 6G at the Edge — This sovereignty imperative, according to Saluja, stems from both economic and strategic considerations. The token econo…
S39
Keynote ‘I’ to the Power of AI An 8-Year-Old on Aspiring India Impacting the World — This discussion features an 8-year-old prodigy presenting their perspective on global AI development and India’s strateg…
S40
AI for Social Good Using Technology to Create Real-World Impact — Mazumdar-Shaw’s most forward-looking contribution involved the convergence of biological intelligence and artificial int…
S41
Most transformative decade begins as Kurzweil’s AI vision unfolds — AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translati…
S42
Folding Science / DAVOS 2025 — Ardem Patapoutian: Yeah I think it’s one of the most amazing quick advancements in science I’ve ever experience…
S43
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-kiran-mazumdar-shaw — AI is compressing discovery timelines and reducing development risk. And therefore, I believe that the next frontier is …
S44
Skilling and Education in AI — Connecting solutions across the entire value chain is crucial for real impact
S45
AI That Empowers Safety Growth and Social Inclusion in Action — “investors should ask whether there is clear board level responsibility on AI risk whether executive incentives are alig…
S46
https://dig.watch/event/india-ai-impact-summit-2026/keynote-by-uday-shankar-vice-chairman_jiostar-india — It’s been very clear-eyed about this. They identified exactly what they needed to outpace the West and build their regu…
S47
The Foundation of AI Democratizing Compute Data Infrastructure — Open models, talent development, and capacity building across the entire value chain are essential
S48
KINGDOM OF CAMBODIA NATION RELIGION KING — A collaborative approach includes more than just working together. It refers to the ability to work together and take ac…
S49
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — Professor Ganesh Ramakrishnan outlined India’s competitive advantage through interoperability and collaboration rather t…
S50
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S51
Responsible AI in India Leadership Ethics & Global Impact — Industry bodies like FICCI play crucial roles in democratization, serving as conduits for knowledge transfer and framewo…
S52
Keynote Addresses at India AI Impact Summit 2026 — Sanjay Mehrotra, CEO of Micron Technology: And so we are here to listen to our distinguished guests as they present the…
S53
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — A collaborative international effort becomes highly relevant to bridge this emerging AI capacity divide. India, with thi…
S54
Global Internet Governance Academic Network Annual Symposium | Part 3 | IGF 2023 Day 0 Event #112 — Adio Adet Dinika:All right. Wonderful. Thanks for that. So, quickly moving on to the Crimean postcolonial critique, basi…
S55
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-1 — This is a reality we cannot ignore. But the key question is this. Will this concentration of power become a permanent st…
S56
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-agriculture-scaling-intelegence-for-food-and-climate-resiliance — A very good morning to all of you. Shri Devesh Chaturvedi ji, Rajesh Agarwal ji, Vikas Rastogi ji. Mr. Jonas Jett, Srima…
S57
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen articulated sovereignty as “having choice in partnerships, not being forced into dependencies,” emphasizing st…
S58
Figure I: The Global Risks Landscape 2019 — The sections that follow examine the way biological risks are evolving both in nature and in laboratories. We are at a c…
S59
A Guide for Practitioners — Poverty, malnutrition, high fertility, and poor health underpin many of the challenges facing policy-makers today…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Kiran Mazumdar-Shaw
13 arguments · 106 words per minute · 1698 words · 955 seconds
Argument 1
Nations that master the convergence of biology and AI will shape future sectors such as healthcare, food security, and sustainability (Kiran Mazumdar-Shaw)
EXPLANATION
She argues that countries that can combine biological knowledge with artificial intelligence will become the architects of critical future domains, ranging from health care to food systems and environmental sustainability. This convergence is presented as a decisive competitive advantage for national development.
EVIDENCE
She states that nations that command the convergence of biology and AI will define the future of healthcare, food security, education, biomanufacturing, sustainability, biosecurity, and more, and emphasizes that for India this is a strategic and geopolitical imperative [5-7].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kiran Mazumdar-Shaw’s claim that nations mastering biology-AI convergence will define future sectors is directly quoted in the keynote (e.g., “nations that command the convergence of biology and AI … will define the future of healthcare, food security, education, biomanufacturing, sustainability, biosecurity …”) [S2].
MAJOR DISCUSSION POINT
Biotech sovereignty as strategic imperative
Argument 2
Reliance on offshore AI models for drug discovery and genomics creates strategic dependence; sovereign control over data, AI models, and infrastructure is essential for national health security (Kiran Mazumdar-Shaw)
EXPLANATION
She warns that depending on foreign AI platforms for critical biotech tasks makes a nation vulnerable, especially in health‑related sectors. Owning the data, models and computing infrastructure is therefore framed as a matter of national resilience and security.
EVIDENCE
She notes that if foundational AI models for drug discovery, genomics, cellular engineering and clinical decision-making are owned offshore, India risks strategic dependence, and stresses the need for sovereign control over trusted biological data, indigenous AI models, computing infrastructure and translational platforms [54-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The strategic importance of data, model and security sovereignty is highlighted in discussions on sovereign and responsible AI, emphasizing control over data and models as a key dimension [S7] and in the AI for Bharat’s Health overview of data sovereignty and model development [S8].
MAJOR DISCUSSION POINT
Strategic dependence on offshore AI
Argument 3
Living systems are “original intelligent machines” that process multimodal signals with built‑in guardrails, achieving energy‑efficient computation far beyond conventional data centers (Kiran Mazumdar-Shaw)
EXPLANATION
She describes biological entities as the first form of intelligent machines that have evolved over billions of years, capable of sensing, computing and responding with intrinsic regulatory mechanisms. These processes are highlighted as far more energy‑efficient than today’s data‑center based AI systems.
EVIDENCE
She explains that cells sense, compute and respond through intricate signaling networks, gene regulation and immune memory, operating within built-in biological guardrails that maintain homeostasis, and contrasts this with the massive energy consumption of gigawatt-scale data centres, noting that biology achieves computation with minimal energy using distributed “data centres” that sip power as needed [9-13][24-28].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She describes living systems as the original intelligent machines with built-in guardrails and billions of years of evolution, contrasting them with energy-intensive data centres in the keynote [S2].
MAJOR DISCUSSION POINT
Biological intelligence as a model for AI
Argument 4
Examples like immune memory and the Arctic tern’s migratory navigation illustrate innate data storage, retrieval, and decision‑making without external training (Kiran Mazumdar-Shaw)
EXPLANATION
She provides concrete biological examples to show how living systems naturally store information, recall it when needed, and act autonomously. These cases serve to illustrate principles that can inspire AI design.
EVIDENCE
She describes the immune system’s use of cytokines, antibodies, killer T cells, and memory B/T cells that retain pathogen identity and enable rapid response upon re-infection, and she cites the Arctic tern’s 70,000-km migration guided by DNA-encoded navigation without prior learning or guidance [19-23][29-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Arctic tern navigation example and the concept of innate biological data storage are cited in the AI for Social Good presentation, which references the tern’s 70,000 km first-flight navigation and immune memory mechanisms as biological information storage [S1].
MAJOR DISCUSSION POINT
Illustrative biological examples
Argument 5
AI accelerates discovery (protein structure prediction, generative drug design) and enables programmable biology such as cell reprogramming for cancer and tissue repair (Kiran Mazumdar-Shaw)
EXPLANATION
She claims that AI is shortening discovery timelines and reducing risk by predicting protein structures and generating drug candidates, and that it also opens the possibility of directly reprogramming cells to treat disease. This represents a shift from static drugs to dynamic, programmable therapeutics.
EVIDENCE
She points to AI-powered biology ranging from protein structure prediction and generative drug design to digital twins of cells and organs, and highlights the vision of reprogramming cancer cells into non-malignant cells and repairing damaged bone tissue once we understand biological intelligence [36-40][41-43][46].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven protein structure prediction, generative drug design and programmable biology are discussed in reports on AI accelerating discovery of synthetic proteins for genome editing [S9] and breakthroughs in human-centric bioscience with AI [S10]; the keynote also mentions AI compressing discovery timelines and cell reprogramming [S2].
MAJOR DISCUSSION POINT
AI‑driven discovery and programmable biology
Argument 6
In development, AI powers in‑silico trials, digital twins, and AI‑optimized trial design, dramatically reducing risk and timelines (Kiran Mazumdar-Shaw)
EXPLANATION
She outlines how AI can be used to simulate clinical trials, create digital replicas of patients or organs, and optimise trial protocols, thereby de‑risking pipelines and shortening the time to market. This requires regulatory frameworks that keep pace with the accelerated pace.
EVIDENCE
She mentions the need for foundation models for proteins, RNA, cellular circuits, and systems biology, and then describes huge opportunities to develop in-silico trials, digital twins and AI-driven trial design to de-risk pipelines, warning that regulatory speed must keep up with compressed discovery timelines [62-69].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-enabled in-silico trials, digital twins and trial optimisation are described in recent AI platforms for cancer trial recruitment and design [S11] and AI-tissue collaborations that improve trial success rates [S12]; the keynote references these opportunities as well [S2].
MAJOR DISCUSSION POINT
AI in clinical development
Argument 7
In manufacturing, AI‑enabled smart biomanufacturing improves yield, quality‑by‑design, and scalability, turning science into large‑scale production (Kiran Mazumdar-Shaw)
EXPLANATION
She argues that AI can optimise biomanufacturing processes, ensuring higher yields and consistent quality while scaling production. This transformation is presented as a key economic opportunity for India.
EVIDENCE
She describes smart biomanufacturing using AI for yield optimisation and, importantly, quality-by-design, and stresses that this will be a great opportunity for all stakeholders [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Smart biomanufacturing using AI for yield optimisation and quality-by-design is discussed in the AI Impact Summit panel on manufacturing in India and in the keynote’s emphasis on AI-driven manufacturing as a growth engine [S13] and [S2].
MAJOR DISCUSSION POINT
AI‑driven biomanufacturing
Argument 8
Government must fund trusted data architectures, regulatory sandboxes, and mission‑mode programs in cell‑gene therapy, immuno‑oncology, and longevity (Kiran Mazumdar-Shaw)
EXPLANATION
She calls for public investment in core AI‑bio infrastructure, including secure data platforms, experimental regulatory environments, and focused research programmes targeting advanced therapies. This is positioned as essential for building sovereign capability.
EVIDENCE
She states that government must invest in sovereign AI bio-infrastructure, trusted data architectures, regulatory sandboxes, and mission-mode programmes in cell and gene therapy, immuno-oncology and longevity science [74-77].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for government investment in sovereign AI bio-infrastructure, trusted data architectures and regulatory sandboxes appear in a policy discussion (“Government must invest in sovereign AI bio-infrastructure, trusted data…”) [S14] and are echoed in the keynote [S2].
MAJOR DISCUSSION POINT
Government role in sovereign AI‑bio ecosystem
Argument 9
Academia should mainstream computational biology, neurosymbolic AI, and AI‑first life‑sciences curricula to create a new cadre of translational scientists (Kiran Mazumdar‑Shaw)
EXPLANATION
She urges universities and research institutes to embed AI‑centric training across life‑science programmes, producing scientists who can bridge biology and machine intelligence. This capacity building is seen as vital for the ecosystem.
EVIDENCE
She notes that academia must mainstream computational biology, neurosymbolic AI, and AI-first life sciences education to build a new cadre of translational scientists [75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of academia in providing neutral, expertise-driven AI education and creating neurosymbolic AI capabilities is highlighted in a discussion on multilingual AI bridging inclusive access, emphasizing academic contributions beyond commercial entities [S16]; the keynote also stresses mainstreaming computational biology curricula [S2].
MAJOR DISCUSSION POINT
Academic capacity building
Argument 10
Industry needs to co‑create shared platforms, translational pipelines, and globally benchmarked biomanufacturing clusters to convert science into scale (Kiran Mazumdar‑Shaw)
EXPLANATION
She emphasizes that the private sector should collaborate on common platforms and standards, enabling rapid translation of discoveries into large‑scale manufacturing. This collaborative approach is portrayed as essential for competitiveness.
EVIDENCE
She says industry must co-create shared platforms, translational pipelines, and globally benchmarked biomanufacturing clusters that convert science into scale [76].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Industry co-creation of shared platforms and globally benchmarked biomanufacturing clusters is mentioned in the keynote and reinforced by policy statements urging industry collaboration for scaling biotech [S2] and the AI Impact Summit remarks on industry partnerships [S13].
MAJOR DISCUSSION POINT
Industry collaboration and scaling
Argument 11
Capital markets must provide patient, long‑term capital for high‑risk, high‑impact biotech innovation (Kiran Mazumdar‑Shaw)
EXPLANATION
She calls for financial systems that can sustain long‑duration, high‑risk biotech projects, highlighting the need for patient capital to realise societal and economic returns. This financial support is framed as a catalyst for innovation.
EVIDENCE
She explains that capital markets must evolve to support long-cycle, high-risk biotech innovation, that deep science requires patient capital, and that the societal and economic returns from reduced disease burden to global platform leadership are exponential [77-80].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for patient, long-cycle capital for high-risk biotech innovation is directly quoted in the keynote (“Capital markets must evolve to support long-cycle, high-risk biotech innovation…”) [S2].
MAJOR DISCUSSION POINT
Financing biotech innovation
Argument 12
Sovereignty does not mean isolation; India must develop ethical, transparent, energy‑efficient, bias‑aware AI systems that are globally interoperable yet rooted in public interest (Kiran Mazumdar‑Shaw)
EXPLANATION
She clarifies that pursuing biotech sovereignty should not lead to isolation; instead, AI systems must be built with strong ethical standards, transparency, energy efficiency and fairness, while remaining compatible with global frameworks. Public interest is positioned as the guiding principle.
EVIDENCE
She states that sovereignty is not isolation, and that India must build ethical, transparent, energy-efficient and bias-aware AI systems for biology that are globally interoperable yet rooted in public interest [82-86].
MAJOR DISCUSSION POINT
Ethical AI for biotech sovereignty
Argument 13
Embedding equity, affordability, and access into AI‑driven biotech creates a unique model that pairs technological leadership with social purpose, positioning India as a global public‑good provider (Kiran Mazumdar‑Shaw)
EXPLANATION
She proposes that integrating principles of equity, affordability and universal access into AI‑enabled biotech will differentiate India’s model, combining cutting‑edge technology with social responsibility. This approach aims to make India a provider of global public goods.
EVIDENCE
She explains that by embedding equity, affordability and access into AI-driven biotech, India can offer the world a new model of innovation that combines technological leadership with social purpose, positioning the country as a global public-good provider [85-90].
MAJOR DISCUSSION POINT
Equity‑focused AI biotech model
Speaker 1
1 argument · 118 words per minute · 17 words · 8 seconds
Argument 1
Speaker 1 calls on the audience to applaud Ms. Kiran Mazumdar‑Shaw, highlighting the need to publicly recognize and honor expertise in biotech and AI leadership.
EXPLANATION
By asking the audience to put their hands together, the speaker underscores the value of acknowledging distinguished contributors, which helps foster respect, motivation, and a supportive environment for scientific advancement.
EVIDENCE
The speaker explicitly invites applause for Ms. Kiran Mazumdar-Shaw, stating, “Ladies and gentlemen, please put your hands together to welcome Ms. Kiran Mazumdar-Shaw, Chairperson, Biocon Group.” [1]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s invitation to applaud Ms. Kiran Mazumdar-Shaw is recorded in the keynote transcript: “Ladies and gentlemen, please put your hands together to welcome Ms. Kiran Mazumdar-Shaw…” [S2].
MAJOR DISCUSSION POINT
Recognition of expertise and leadership
AGREED WITH
Kiran Mazumdar-Shaw
Agreements
Agreement Points
Both speakers highlight India’s emerging leadership role in biotech and AI.
Speakers: Speaker 1, Kiran Mazumdar-Shaw
Speaker 1 calls on the audience to applaud Ms. Kiran Mazumdar‑Shaw, highlighting the need to publicly recognize and honor expertise in biotech and AI leadership. Kiran Mazumdar‑Shaw states that India has the science, AI and life‑sciences talent, the scale and the values to lead, provided it builds the sovereign platforms of tomorrow today.
Speaker 1’s invitation to applause acknowledges Kiran’s expertise, while Kiran herself asserts that India is positioned to lead in AI-driven biotech, showing a shared emphasis on national leadership [1][90].
POLICY CONTEXT (KNOWLEDGE BASE)
This consensus mirrors the broader policy narrative that positions India as a future AI and biotech hub, as reflected in summit analyses noting a strong, unified view of India’s strategic opportunities in AI and semiconductors [S34] and the optimistic outlook on India’s AI growth trajectory [S35][S36].
Similar Viewpoints
Kiran repeatedly stresses that dependence on foreign AI resources threatens health security and that India must secure its own data, models and infrastructure to achieve biotech sovereignty [54-57][5-7].
Speakers: Kiran Mazumdar-Shaw
Reliance on offshore AI models for drug discovery and genomics creates strategic dependence; sovereign control over data, AI models, and infrastructure is essential for national health security. Biotech sovereignty embedded in AI must therefore mean sovereign control over trusted biological data, indigenous AI models, computing infrastructure, and translational platforms.
Across discovery, development and manufacturing, Kiran argues that AI is a cross‑cutting catalyst that compresses timelines, reduces risk and creates economic opportunities for India’s biotech sector [36-40][62-66].
Speakers: Kiran Mazumdar-Shaw
AI accelerates discovery (protein structure prediction, generative drug design) and enables programmable biology such as cell reprogramming for cancer and tissue repair. In development, AI powers in‑silico trials, digital twins, and AI‑optimized trial design, dramatically reducing risk and timelines. In manufacturing, AI‑enabled smart biomanufacturing improves yield, quality‑by‑design, and scalability.
Kiran links ethical, equitable AI design with global interoperability, positioning responsible AI as central to India’s biotech sovereignty strategy [82-86].
Speakers: Kiran Mazumdar-Shaw
Sovereignty does not mean isolation; India must develop ethical, transparent, energy‑efficient and bias‑aware AI systems for biology that are globally interoperable yet rooted in public interest. By embedding principles of equity, affordability and access into AI‑driven biotech, India can offer the world a new model of innovation combining technological leadership with social purpose.
Unexpected Consensus
Biology as a model for ultra‑energy‑efficient computation.
Speakers: Kiran Mazumdar-Shaw
Living systems are the original intelligent machines that achieve computation with minimal energy, contrasting with gigawatt‑scale data centres. AI can learn from this biological efficiency to reduce environmental impact.
Kiran’s claim that biological intelligence operates with far lower energy than conventional AI data centres introduces an unexpected link between biotech sovereignty and environmental sustainability, a cross-cutting insight not commonly highlighted in policy debates [24-28].
POLICY CONTEXT (KNOWLEDGE BASE)
Authoritative experts describe living systems as the original ultra-energy-efficient computers, linking biological computation to the physics of the universe and informing bio-inspired AI designs that cut energy use [S31][S33][S32].
Overall Assessment

The discussion shows strong internal consensus within Kiran Mazumdar‑Shaw’s remarks: she consistently links AI‑driven biotech sovereignty to strategic security, economic growth, ethical governance and environmental efficiency. The only external agreement is the moderator’s public recognition of her leadership, reinforcing the narrative of national ambition.

High consensus on the core vision (AI‑enabled biotech sovereignty) among the speakers, with broader implications that India’s policy agenda should integrate data sovereignty, ethical AI, and cross‑sectoral AI deployment to achieve strategic autonomy and sustainable development.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only an introductory welcome by Speaker 1 and a comprehensive keynote by Kiran Mazumdar-Shaw. No substantive policy or conceptual conflict is expressed between the two speakers; Speaker 1 merely acknowledges the speaker’s expertise ([1]), while Kiran Mazumdar-Shaw outlines a vision for biotech sovereignty, AI-driven discovery, and related policy actions ([2-91]). Consequently, there is essentially no observable disagreement, and the dialogue is uniformly supportive of the AI-biotech agenda.

Minimal – the interaction is collaborative rather than contentious, implying smooth consensus on the overarching goal of advancing AI‑enabled biotechnology in India.

Takeaways
Key takeaways
Biotech sovereignty is a strategic and geopolitical imperative for India, requiring control over AI models, data, and infrastructure to ensure health security and reduce strategic dependence.
Biological intelligence offers a model for AI, demonstrating energy‑efficient, multimodal information processing and built‑in guardrails that can inspire AI‑driven biotech solutions.
AI can transform the entire biotech value chain: accelerating discovery (e.g., protein prediction, generative drug design), enabling programmable biology (cell reprogramming, personalized therapies), streamlining development (in‑silico trials, digital twins), and optimizing manufacturing (smart biomanufacturing, quality‑by‑design).
Building a sovereign AI‑bio ecosystem in India requires coordinated action from government (investment in data architecture, regulatory sandboxes, mission‑mode programs), academia (AI‑first life‑science curricula, computational biology, neurosymbolic AI), industry (shared platforms, translational pipelines, benchmarked biomanufacturing clusters), and capital markets (patient, long‑term funding).
Ethical, transparent, energy‑efficient, and bias‑aware AI systems are essential; sovereignty should not equate to isolation but to globally interoperable solutions rooted in equity, affordability, and public interest.
Resolutions and action items
Government to fund trusted sovereign AI‑bio data architectures, regulatory sandboxes, and mission‑mode programs in cell‑gene therapy, immuno‑oncology, and longevity science.
Academia to integrate computational biology, neurosymbolic AI, and AI‑first life‑sciences education to develop a new cadre of translational scientists.
Industry to co‑create shared AI platforms, translational pipelines, and globally benchmarked biomanufacturing clusters that can scale scientific discoveries.
Capital markets to evolve mechanisms that provide patient, long‑term capital for high‑risk, high‑impact biotech innovation.
All stakeholders to align regulatory frameworks with accelerated AI‑driven timelines, ensuring speed of approval keeps pace with discovery and development compression.
Unresolved issues
How to concretely implement and operationalize regulatory sandboxes that keep pace with rapid AI‑driven biotech advances.
Specific standards and governance mechanisms for ethical, bias‑aware, and energy‑efficient AI systems in biotechnology.
Mechanisms for ensuring global interoperability of India’s sovereign AI‑bio platforms while maintaining public‑interest safeguards.
Strategies for attracting and sustaining patient capital in the Indian biotech ecosystem without clear policy incentives.
Suggested compromises
None identified
Thought Provoking Comments
If the 20th century was defined by the Internet and the early 21st century by digital sovereignty, the coming decades will be shaped by biotech sovereignty that is embedded in AI.
Frames a historical continuum and positions biotech‑AI convergence as the next strategic epoch, shifting the conversation from digital to biological domains.
Sets the overarching theme of the talk, prompting listeners to re‑evaluate national priorities and opening the floor for discussion on policy, security, and economic implications of biotech sovereignty.
Speaker: Kiran Mazumdar-Shaw
Living systems are the original intelligent machines… they have evolved over 3.8 billion years, sense, compute, and respond through intricate signaling networks, maintaining homeostasis via built‑in guardrails.
Introduces the concept of ‘biological intelligence’ and draws a direct analogy between natural cellular processes and engineered AI systems.
Deepens the technical narrative, moving the dialogue from abstract futurism to concrete biological mechanisms that can be modeled with AI, thereby inviting interdisciplinary collaboration.
Speaker: Kiran Mazumdar-Shaw
The immune system memorizes pathogens in memory T‑cells and B‑cells, retrieving that information instantly on re‑exposure – a marvel of biology in receiving, processing, storing, and acting on information with extreme energy efficiency.
Uses a vivid, well‑known biological example to illustrate information‑processing capabilities of living systems, highlighting their efficiency compared to data‑center AI.
Creates a relatable bridge for the audience, reinforcing the argument that AI can learn from biology’s energy‑efficient designs and prompting thoughts on bio‑inspired computing.
Speaker: Kiran Mazumdar-Shaw
The Arctic tern undertakes a 70,000‑km migration with no prior knowledge or older birds to guide it – navigational intelligence is embedded in its DNA.
Provides a striking natural example of encoded intelligence, emphasizing that complex behavior can arise from genetic information alone.
Serves as a turning point that shifts the tone from cellular mechanisms to organism‑level intelligence, expanding the scope of discussion to genetics, evolution, and AI‑driven bio‑design.
Speaker: Kiran Mazumdar-Shaw
The true inflection point lies at the intersection of biological intelligence and artificial intelligence – AI‑powered biology from protein structure prediction to digital twins of cells and organs.
Identifies the convergence zone as the strategic sweet spot, linking concrete AI applications to biotech breakthroughs.
Catalyzes a transition from descriptive biology to actionable AI‑driven interventions, prompting consideration of investment in foundational models and infrastructure.
Speaker: Kiran Mazumdar-Shaw
Imagine reprogramming cancer cells into non‑malignant cells, or repairing bone tissue that is currently irreparable – moving from static one‑size‑fits‑all drugs to programmable biology.
Paints a visionary, tangible future scenario that reframes therapeutic development as programmable, adaptive processes rather than fixed products.
Elevates the conversation to a paradigm‑shift level, encouraging stakeholders to think about new regulatory frameworks, manufacturing models, and ethical considerations.
Speaker: Kiran Mazumdar-Shaw
If foundational AI models for drug discovery, genomics, cellular engineering and clinical decision‑making are owned offshore, India risks strategic dependence in the most critical domain of national resilience – human health.
Frames biotech‑AI capability as a matter of national security, moving the dialogue from scientific opportunity to geopolitical imperative.
Triggers a policy‑oriented turn, urging government and industry to prioritize sovereign data, models, and infrastructure, and influencing subsequent calls for investment.
Speaker: Kiran Mazumdar-Shaw
Transformation cannot be driven by industry alone. It demands a triple helix of government, academia and industry, each playing distinct roles in building sovereign AI‑bio infrastructure, education, and regulation.
Proposes a concrete governance model, emphasizing collaboration across sectors to achieve the earlier‑outlined vision.
Provides a roadmap that structures the remainder of the talk, guiding listeners toward actionable partnerships and highlighting the need for coordinated effort.
Speaker: Kiran Mazumdar-Shaw
Sovereignty is not isolation. India must build ethical, transparent, energy‑efficient and bias‑aware AI systems for biology that are globally interoperable yet rooted in public interest.
Balances the earlier security narrative with a principled stance on ethics and global collaboration, preventing a purely protectionist interpretation.
Shifts the tone from competitive to collaborative, inviting international dialogue while reinforcing the need for domestic standards and values.
Speaker: Kiran Mazumdar-Shaw
Overall Assessment

The discussion was driven by a series of strategically placed, high‑impact statements from Kiran Mazumdar‑Shaw that progressively broadened the conversation—from a historical framing of technological epochs to a deep dive into biological intelligence, then to concrete AI‑enabled biotech applications, and finally to national‑level policy, governance, and ethical considerations. Each thought‑provoking comment acted as a pivot, introducing new dimensions (technical, economic, security, collaborative) and steering the audience toward a holistic vision of ‘biotech sovereignty embedded in AI.’ Collectively, these remarks shaped the dialogue into a compelling call for coordinated, sovereign, yet globally responsible action in the emerging bio‑AI frontier.

Follow-up Questions
How can we develop computational models that accurately capture biological intelligence, including cell signaling, gene regulation, and immune memory?
Understanding these models is essential to leverage AI for reprogramming cells, disease treatment, and longevity research.
Speaker: Kiran Mazumdar-Shaw
What are the pathways to create sovereign foundation AI models for proteins, RNA, cellular circuits, and systems biology?
Indigenous models are critical for biotech sovereignty, reducing dependence on offshore technologies and ensuring strategic autonomy.
Speaker: Kiran Mazumdar-Shaw
How can in‑silico clinical trials, digital twins of cells and organs, and AI‑driven trial design be built to de‑risk drug pipelines?
These tools can compress development timelines, lower costs, and increase the probability of success for new therapies.
Speaker: Kiran Mazumdar-Shaw
What technologies and algorithms are needed for smart biomanufacturing that optimizes yield and implements quality‑by‑design?
AI‑enabled manufacturing will boost productivity, ensure product consistency, and position India as a global biotech manufacturing hub.
Speaker: Kiran Mazumdar-Shaw
How should a science‑first, tech‑enabled regulatory framework be designed to integrate real‑world evidence through AI validation?
Regulatory speed must keep pace with accelerated discovery to avoid missed opportunities and to safely bring innovations to market.
Speaker: Kiran Mazumdar-Shaw
What components constitute a sovereign AI bio‑infrastructure, including trusted data architectures, regulatory sandboxes, and mission‑mode programs in cell‑gene therapy, immuno‑oncology, and longevity?
Building this infrastructure is foundational for national health security, pandemic preparedness, and economic resilience.
Speaker: Kiran Mazumdar-Shaw
How can academia mainstream computational biology, neurosymbolic AI, and AI‑first life‑sciences education to create a new cadre of translational scientists?
A skilled workforce is necessary to translate AI breakthroughs into practical biotech solutions.
Speaker: Kiran Mazumdar-Shaw
What models of industry collaboration can enable shared AI platforms, translational pipelines, and globally benchmarked biomanufacturing clusters?
Co‑creation across firms will accelerate scale‑up of discoveries and ensure competitive global positioning.
Speaker: Kiran Mazumdar-Shaw
How can capital markets evolve to provide patient, long‑cycle financing for high‑risk biotech innovation in India?
Sustained investment is required for deep scientific research that yields high societal and economic returns.
Speaker: Kiran Mazumdar-Shaw
What ethical, transparent, energy‑efficient, and bias‑aware AI systems can be built for biology that remain globally interoperable yet rooted in public interest?
Ethical AI safeguards trust, equity, and international collaboration while supporting sovereign biotech development.
Speaker: Kiran Mazumdar-Shaw
How can principles of equity, affordability, and access be embedded into AI‑driven biotech to create a socially responsible innovation model?
Ensuring broad access aligns biotech advances with societal goals and enhances India’s global leadership reputation.
Speaker: Kiran Mazumdar-Shaw
What are the mechanisms that modulate cellular senescence, metabolic pathways of aging, and tissue repair to extend healthspan and longevity?
Deciphering these mechanisms could enable therapies that delay aging, reduce disease burden, and improve quality of life.
Speaker: Kiran Mazumdar-Shaw
Is it feasible to reprogram cancer cells into non‑malignant cells and to repair bone tissue that is currently irreparable, and what AI‑driven approaches are needed?
Achieving such cellular reprogramming would represent a paradigm shift from disease management to true biological restoration.
Speaker: Kiran Mazumdar-Shaw

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Driving India's AI Future: Growth, Innovation and Impact

Session at a glance: summary, keypoints, and speakers overview

Summary

Opening the summit, moderator Mridu Bhandari framed the discussion as an effort to bridge the global AI divide and positioned AI as a driver of economic growth, social empowerment and India’s global leadership, introducing a blueprint for architecting India’s AI leadership [1-3][6-7]. Dr. Vivek Mohindra of Dell Technologies highlighted that AI-related compute in India is projected to exceed 10 exaflops and that AI workloads are expected to grow at over 30 % CAGR, underscoring the need for massive infrastructure expansion [15-16]. He outlined three inter-linked pillars: Invest (compute, data and energy infrastructure for all, including MSMEs), Innovate (nationwide skilling from schools to the workforce) and Evolve (agile, responsibility-anchored governance with flexible regulations), and stressed that public-private partnership is essential to realise sovereign AI potential [18-21][23-32][33-34]. The Dell blueprint, aligned with the Viksit Bharat 2047 vision, reiterates these pillars and calls for actionable steps to translate ambition into nation-scale execution [46-48][49-51].


Rajgopal pointed out that India currently has only 40-50 k GPUs against an estimated need of 200 k, and suggested policy levers such as waiving GST on imported servers and extending tax holidays to lower upfront costs for startups and MSMEs [70-73][86-92]. He illustrated the practical impact of AI by describing how his firm deduplicated 90 crore election photographs in 51 hours, a task impossible without AI [78-83]. Bhaskar Chakravarti warned that beyond hardware, a “trust infrastructure” (data governance, privacy, district-level implementation, transparency, grievance mechanisms and digital literacy) is the critical non-technical bottleneck for inclusive AI adoption [113-124][129-138]. Manish Gupta added that building distributed, energy-efficient data centers across multiple states, leveraging open-source software to cut costs, and fostering a “UPI of AI” platform for developers are necessary to democratise access and retain talent [155-162][167-188][220-245].


In the speed-versus-caution debate, Rajgopal argued for minimal regulation to treat AI as a utility while emphasizing the need for clear use cases, whereas Chakravarti used a car-and-road analogy to stress that without trustworthy institutions and job-impact policies, rapid deployment could falter [256-262][275-283]. Manish reinforced that agility and security are not opposing forces and cited existing Indian frameworks such as DPDP and DEPA as foundations for building robust, yet flexible, governance [291-300]. Minister Jayant Chaudhary described India’s PPP model that has already delivered a cheap, open-access compute facility (38 k GPUs, aiming for 100 k) and highlighted initiatives like Sarvam, incubated by IIT Madras, as examples of academia-industry collaboration [326-348][349-357]. He also emphasized a three-level skilling strategy (school, college, employment) delivered through online and in-person programs to reach tier-2 and tier-3 regions [373-381].


On “Zero Trust” AI, the minister said it requires verification of every protocol, segmented data sets and audit trails, while Dr. Mohindra expanded the concept to include national risk registries, observability and identity management [386-395][412-416]. The panel concluded that coordinated investment, innovative skill pipelines and responsible governance are essential to achieve sovereign, inclusive AI growth in India [302-304][321-324]. Overall, the discussion affirmed that public-private partnership, robust infrastructure and trust-building measures will determine whether India can translate its AI ambitions into broad economic and social benefits [46-48][321-324].


Keypoints


Major discussion points


The “Invest-Innovate-Evolve” AI Blueprint and public-private partnership – Dell’s blueprint is built around three pillars: massive compute and data investment, collaborative innovation through skilling, and agile, responsible governance; its success hinges on marrying public resources with private innovation [18-27][47-49][155-162].


Infrastructure gaps and affordability for SMEs/MSMEs – Panelists highlighted a severe shortage of GPUs (≈200,000 needed vs. 40-50 k available) and called for fiscal relief such as GST waivers and tax incentives to lower upfront costs [71-92]; they also stressed the need for a geographically distributed, sovereign data-center network to bring affordable compute to the “long tail” of innovators [167-176][183-188].


Trust, governance and “zero-trust” AI – Non-technical bottlenecks were identified as the “trust infrastructure” – data-governance, privacy, transparency, grievance mechanisms and explainability – which must be agile and region-sensitive [112-130][136-138]; the minister and Dr. Mohindra later expanded this to a national “zero-trust” architecture covering data, models, cybersecurity and auditability [386-408][412-416].


Skills development and the “UPI of AI” – A recurring theme was building a massive developer ecosystem (shifting from 1 billion users to 1-10 million AI developers) and creating unified data-access APIs (the “AI-Kosh”) to democratise AI across academia, industry and startups [226-244][373-379].


Strategic autonomy and trusted domestic capabilities – Beyond sheer adoption, India must develop “trusted-in-India” AI components (e.g., semiconductors, safety standards) to ensure strategic autonomy and avoid reliance on external providers [229-245][220-236].


Overall purpose / goal


The discussion was convened to launch and explain Dell Technologies’ AI Blueprint for India, aligning it with the summit’s aim of “bridging the global AI divide.” It sought to map a concrete pathway of investment, innovation and governance to scale AI nationwide, secure public-private collaboration, and translate AI ambition into inclusive economic growth and strategic autonomy [1][6][14][46-49].


Overall tone


The conversation began with a formal, forward-looking tone, positioning the blueprint as a strategic call to action. As the panel moved into the Q&A, the tone shifted to pragmatic and urgent, focusing on concrete bottlenecks (GPU scarcity, GST, trust deficits). Toward the end, the tone became optimistic and rallying, emphasizing partnership, skill-building, and a shared vision of a sovereign, trusted AI ecosystem. Throughout, the speakers maintained a collaborative, solution-oriented demeanor.


Speakers

Mridu Bhandari – Senior Anchor and Consulting Editor at Network 18 (brands include CNBC and Forbes India); Moderator/Host of the AI summit.


Expertise: Media broadcasting, technology journalism, AI policy discussion facilitation.


Dr. Vivek Mohindra – Special Advisor to the Vice Chairman and COO of Dell Technologies Global.


Expertise: Enterprise AI infrastructure, cloud computing, public-private partnership strategy.


Bhaskar Chakravarti – Dean of Global Business, the Fletcher School of Law and Diplomacy, Tufts University; Professor.


Expertise: International policy, AI governance, trust and institutional frameworks.


Manish Gupta – President and Managing Director, Dell Technologies India.


Expertise: Technology leadership, AI deployment at scale, industry-government collaboration.


A. S. Rajgopal – Managing Director and Chief Executive Officer, NextGen Cloud Technologies.


Expertise: Cloud services, AI compute infrastructure, MSME and startup enablement.


Shri Jayant Chaudhary Ji – Minister of State for Education and Minister of Skill Development & Entrepreneurship (Independent Charge), Government of India.


Expertise: Public policy, skill development, AI strategy and public-private partnership implementation.


Additional speakers:


(None identified beyond the listed speakers.)


Full session report: comprehensive analysis and detailed insights

The summit opened with moderator Mridu Bhandari framing the event as a direct response to the summit’s aim of “bridging the global AI divide” and positioning artificial intelligence as a catalyst for economic growth, social empowerment and India’s emergence as a global leader [1-3]. She introduced the day’s theme – “architecting India’s AI leadership, a blueprint for transformation” – and announced the unveiling of Dell Technologies’ AI Blueprint, which aligns with the Viksit Bharat 2047 vision of AI as a foundational engine for productivity, modernised public services, expanded opportunity and strategic autonomy [6-7][46-48].


Dr Vivek Mohindra, special advisor to Dell’s global COO, presented the macro-level data that underpins the Blueprint: AI-related compute in India is projected to exceed 10 exaflops and AI workloads are expected to grow at a compound annual rate of more than 30 % over the next few years [15-16]. He then outlined the three inter-linked pillars of the Blueprint – Invest (building sovereign, scalable compute, data and energy infrastructure that is accessible to MSMEs), Innovate (nation-wide skilling) and Evolve (agile, responsibility-anchored governance with flexible regulation) [18-21][23-34]. Mohindra detailed the three-level skilling framework: school, college and employment, delivered through online, in-person and incubation modes [373-381].


The panel discussion began with A. S. Rajgopal highlighting a critical supply-side bottleneck: India currently possesses only 40-50 k GPUs, far short of the roughly 200 k GPUs he estimates are needed to meet demand [70-73]. He proposed fiscal levers such as waiving GST on imported servers (collecting it only when services are delivered) and extending income-tax holidays to AI service providers, which could cut upfront costs by about 18 % and make the compute stack affordable for startups and MSMEs [86-92]. Rajgopal also advocated for a geographically distributed network of energy-efficient data centres, leveraging existing rail and power networks for inter-connectivity [167-176][180-188].
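The GST mechanics described above lend themselves to a quick back-of-envelope check. The sketch below is illustrative only: every figure in it (hardware cost, cost of capital, credit lag) is a hypothetical assumption, not a number from the session; only the 18 % GST rate comes from the discussion.

```python
# Back-of-envelope sketch of the GST cash-flow argument.
# All inputs except the 18 % GST rate are hypothetical assumptions.
server_cost = 100.0        # assumed hardware import cost (crore INR)
gst_rate = 0.18            # GST paid upfront on imported servers
interest_rate = 0.12       # assumed annual cost of capital
months_to_credit = 12      # assumed lag before input credit recovers the outlay

# GST paid at import time, later recovered as input credit.
upfront_gst = server_cost * gst_rate

# Simple-interest cost of financing that outlay until it is recovered.
financing_cost = upfront_gst * interest_rate * (months_to_credit / 12)

print(f"Upfront GST outlay: {upfront_gst:.2f} crore")
print(f"Financing cost of carrying it: {financing_cost:.2f} crore")
```

Under these assumed numbers, deferring GST until services are billed removes an 18-crore upfront outlay on a 100-crore purchase (the roughly 18 % relief Rajgopal describes) and additionally avoids the cost of financing that outlay.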


Professor Bhaskar Chakravarti shifted the focus to non-technical constraints, coining the term “trust infrastructure” to describe the suite of data-governance, privacy, transparency, grievance-redressal and digital-literacy mechanisms that must be built alongside hardware [112-124][129-138]. He warned that “the single most important determinant of a country’s growth and digital evolution is the demand side” and flagged AI’s potential impact on employment as a “pothole” that must be addressed [129-135][136-138].


Manish Gupta expanded on the infrastructure theme, echoing the need for distributed data centres and adding that open-source software can dramatically lower compute costs, making AI affordable for the “long tail” of innovators [188-192]. Gupta introduced the AI Kosh initiative – a national data-lake containing more than 7,000 datasets for innovators [220-245]. He also referenced emerging Indian regulatory frameworks – DPDP, DEPA and AISI – as examples of governance tools that can support responsible AI deployment [291-300]. Gupta proposed a “UPI of AI” – a unified API layer that would give developers, startups and enterprises seamless, secure access to national data sets and compute resources, mirroring the success of India’s Unified Payments Interface [226-244][220-245].


The minister announced that compute would be priced at ₹65 per hour, positioning the facility as the world’s cheapest [386-390][346-350]. He reiterated the success of the current public-private partnership (PPP) model and the need to expand it, emphasizing the “people” aspect of PPP, while not commenting on the adequacy of the compute pricing or on tax-relief measures [1-3][32-34].


The debate then turned to the balance between rapid innovation and regulatory safeguards. Rajgopal argued that AI should be treated as a utility with minimal regulation to avoid stifling growth, whereas Mohindra insisted that regulations must be agile, principle-based and capable of keeping pace with fast-moving technology [256-262][28-32]. Chakravarti reinforced the need for a robust “road” – institutional capacity, transparency and job-impact policies – to accompany the “Ferrari” of advanced AI models [275-283]. Both the minister and Mohindra described concrete components of a Zero-Trust AI architecture, including data segregation, audit trails, a national risk registry and identity & access management [302-304][321-324][386-416].


Points of consensus emerged across the discussion:


* All participants emphasized the central role of public-private partnership for scaling AI infrastructure [1-3][32-34][326-334].


* Affordable compute is essential for SMEs; this was reflected in the Invest pillar, Rajgopal’s GST-waiver proposal, the minister’s ₹65-per-hour announcement and Gupta’s emphasis on distributed, energy-efficient data centres [18-20][86-92][346-350][160-176][170-176].


* A three-tier skilling pipeline (school → college → workforce) delivered through multiple modes was agreed as necessary to expand India’s developer base from a billion users to millions of AI creators [373-381][226-244][188-192].


* Participants uniformly called for agile, transparent governance that embeds explainability, auditability and a “trust infrastructure” without hampering innovation [28-32][291-300][113-122].


* There was unanimous support for a sovereign, cost-effective AI infrastructure built on distributed data centres, sustainable design and domestic semiconductor capabilities [160-162][170-176][20-21].


Key disagreements were limited to three areas:


1. Regulatory approach – Mohindra advocated for a balanced, agile regime, while Rajgopal pressed for minimal regulation treating AI as a utility [28-32][256-259].


2. Fiscal incentives – Rajgopal proposed GST waivers and income-tax holidays; the minister highlighted the existing ultra-low-cost compute provision but did not endorse additional tax relief [86-92][346-350].


3. GPU shortfall estimate – Rajgopal cited a need for about 200 k GPUs, whereas the minister referenced a target of 100 k GPUs by year-end [71-72][346-348].


Take-aways

1. The AI Blueprint rests on three pillars: Invest, Innovate, Evolve.


2. Public-private partnership is the core execution model for infrastructure and skilling.


3. Affordable, sovereign compute is critical; this includes addressing the GPU shortfall, considering GST/tax incentives, and building distributed, energy-efficient data centres.


4. Building a trust infrastructure (privacy, transparency, grievance mechanisms, digital literacy and job-impact policies) is essential.


5. A three-level skilling pipeline (school, college, employment) delivered via online, in-person and incubation modes will create millions of AI developers.


6. Governance mechanisms must be agile and include a Zero-Trust architecture with data segregation, audit trails, a national risk registry and identity & access management.


7. Data-sharing platforms such as AI Kosh and a unified “UPI of AI” API will democratise access to national datasets and compute resources.


In conclusion, the panel affirmed that coordinated investment, innovative skill pipelines and responsible, agile governance are essential to translate India’s AI ambitions into sovereign, inclusive growth. Participants were urged to study the detailed Dell Blueprint and provide feedback, while Dell signalled its intent to partner with the Ministry of Skill Development on AI apprenticeships and tier-2/3 skilling labs. The moderator indicated that future sessions will explore deeper public-private collaborations for AI-driven development [302-304][321-324][311-313].


Session transcript: complete transcript of the session
Mridu Bhandari

So this conversation here and the couple of conversations we are going to have over the next one hour or so are aligned with the summit’s goal of bridging the global AI divide. So AI drives economic growth, social empowerment, and of course, global leadership for India. This is not just a presentation, it is a call to action. I’m your host, Mridu Bhandari, senior anchor and consulting editor at Network 18 with brands like CNBC and Forbes India, and I’ll be guiding you through this next 55-minute journey that we’re on. To set the tone of this morning, we’re going to begin with framing the execution pathway of AI adoption and scaling it up from an industry vantage point. Our leadership keynote theme today is architecting India’s AI leadership, a blueprint for transformation.

To deliver this knowledge, we’re going to be talking about the key points of AI adoption and scaling it up from an industry vantage point. Please join me in welcoming on stage Dr. Vivek Mohindra, special advisor to the vice chairman and COO of Dell Technologies Global. Dr. Mohindra, please join us here.

Dr. Vivek Mohindra

Thank you, Mridu, and thank you, everyone, for joining us for the unveiling of this important blueprint. As we have heard over most of this week, India is at the cusp of very significant changes and progress on the back of AI, with very bold aspirations, which are not only bold for India but very bold when you put them in the context of the global aspirations that lots of other countries have. Dell has had a presence in India for over 30 years. We have partnered very closely with several government agencies as well as companies and the broader ecosystem to bring the broader set of capabilities that we have, which are across the board, covering servers, storage, networking and PCs.

and we are the number one AI infrastructure provider to enterprises globally. So leveraging our global presence and leveraging our deep knowledge of India, we have put all of that thought into putting forth, as Mridu described, an AI blueprint, which is a practical guide for what we think not only the country but also companies need to do to be able to take advantage of this particular opportunity. The growth in compute expected on the back of AI in India is well greater than 10 exaflops, and that is a significant amount of growth. And the AI workloads in India are growing at over 30 % compound annual growth rate over the next few years, which is extremely significant.

So as we step back and look at what a country and companies need to do, there really are three key elements. The first element is investments. And the investment really goes to the heart of the compute infrastructure that a country needs to put in place to ensure that everybody has access to that infrastructure, including MSMEs who sometimes do not have the capacity to put their own infrastructure in place. Investment also includes energy infrastructure, because without energy, there is really no compute infrastructure you can put in place that can run. So those are some of the key areas of the invest pillar. And there are several other areas; I would encourage you to read through our blueprint, where you will see, from both a policy and a practical perspective, what we think needs to get done.

The innovate side really comes down to areas like skilling, which I know we will get into in quite some detail when Minister Chaudhary joins us. But innovating around how the skilling occurs, all the way from schools to colleges to the workforce entering employment and to employers themselves, and what role they play across a whole spectrum of mediums to deliver that skilling, is a key part of the innovate pillar. And then the last one, evolve, revolves around the governance aspects. Governance covers multiple areas, and one of the key areas within governance is fundamentally the regulatory framework that needs to exist and that countries need to put in place. The pace of change with AI is so significant, and the technologies are moving so rapidly, that one of the fundamental balances countries need to strike vis-à-vis regulations is the balance between innovation and responsibility, while anchoring it to responsibility.

That is one of the key regulatory principles that needs to be in place. And the regulations have to be agile, because the technology is moving at such a fast pace that you cannot anchor the regulatory framework to yesterday’s technologies. And at the heart of it, I hope what you will take away from our blueprint is that realizing sovereign AI potential for any country, including India, is really about the public-private partnership. It’s really about marrying public resources with private innovation, and that really is the key to unlocking the full potential of AI and sovereign AI in this country. So, again, I would encourage you to read through the blueprint, and we look forward to your feedback, and we look forward to partnering closely with the Indian ecosystem to help India realize its aspirations with AI.

Thank you very much.

Mridu Bhandari

Thank you so much, Dr. Mohindra. I’m going to request you to please stay back on stage. I’d also like to invite Manish Gupta, President and Managing Director of Dell Technologies India, to join us here. This is the big moment, ladies and gentlemen. We are ready for the unveiling of the Dell Technologies Blueprint to accelerate India’s AI growth. Yes, that’s a photo moment for everyone. Thank you. Thank you so much, gentlemen. Thank you, Dr. Mohindra. Thank you, Mr. Gupta. Well, this blueprint advances India’s vision of Viksit Bharat 2047, positioning AI as a foundational engine for productivity, modernized public services, opportunity expansion and strategic autonomy. It centers on three pillars that we’ve been discussing. Invest: invest in sovereign, scalable compute and data foundations. Innovate: innovate with collaboration and with a future-ready workforce. And evolve: evolve into a responsible, agile, security-first governance structure.

So our next panel today will go inside this blueprint and India’s AI future to unpack how to convert this ambition into nation-scale execution. And that’s quite a mean feat for a country as diverse and as huge as India. So let’s welcome the panelists for all the tough questions this morning. A. S. Rajgopal, Managing Director and Chief Executive Officer of NextGen Cloud Technologies. Bhaskar Chakravarti, Dean of Global Business, the Fletcher School of Law and Diplomacy at Tufts University. Please have a seat, sir. And once again on stage, Manish Gupta, President and Managing Director of Dell Technologies India. And I will be moderating this session for you. Welcome, gentlemen. We are here this morning to really translate the invest, innovate and evolve pillars into very actionable steps that we all can take together to grow India’s AI ecosystem.

So I’ll begin with targeted questions to each of you. And of course, you are free to jump in to add thoughts to each other. It’s a candid, free-flowing conversation. Mr. Rajgopal, if I can start with you. So startups and MSMEs, they are the engine of our economy. They are also the engine of our innovation, especially as far as AI is concerned. But access to very reliable, affordable AI compute and cloud at scale continues to be a barrier for many of the small and medium enterprises. Now, in your opinion, what are some of the policies, some of the infrastructure, some of the market interventions that we need today to really unlock this access at scale?

A. S. Rajgopal

Yeah, actually, I think compared with many other countries that we have seen, India has got a much more comprehensive approach to this. They started the IndiaAI Mission, which spans seven pillars. I actually don’t see many startups using the facilities that are there, in the sense that you could apply for GPU infrastructure and get it at a subsidized rate, and some of them even got 100 percent of the GPUs that they need. From India’s side, we’ve got rather fewer GPUs than what we really need: maybe we need about 200,000 GPUs now, and we have about 40,000 to 50,000. So we all need to really invest more and then deploy more.

But the most important thing you should see is that there is a good ecosystem, and there is a systematic policy available for MSMEs and startups to leverage this. So there are a lot of innovative AI solutions being built. Most importantly, I also see the government setting the pace in terms of actually leveraging some of these. We ourselves did one job for the government which is very, very unique: we serve the Election Commission of India. They came to us and said, can you deduplicate and look at all the photographs that we have? This is, like, 90 crore pictures, right? So humanly it was not possible to deduplicate; you can’t check one photograph against 90 crore others.

We did that in a matter of 51 hours, and then we responded to them as to whether they had duplications and all that.
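The deduplication problem described here is, at its core, about avoiding all-pairs comparison: checking every photograph against 90 crore others is quadratic. A common trick, sketched below, is to reduce each image to a short perceptual fingerprint and group on it in one pass. This is a generic editor’s sketch, not NxtGen’s actual pipeline; the function names, the hashing scheme and the toy 2x2 “images” are invented for illustration.

```python
# Editor's sketch, not the actual Election Commission system. The point:
# instead of comparing every photo against every other (quadratic), reduce
# each image to a short perceptual fingerprint and bucket on it in one pass.

def average_hash(pixels):
    """Toy 'average hash': one bit per pixel, set when the pixel is at or
    above the image mean. A real pipeline would first decode and resize
    each photo (e.g. to 8x8 grayscale) before hashing."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def find_duplicate_groups(images):
    """Bucket images by fingerprint: near-linear, versus all-pairs checks."""
    buckets = {}
    for name, pixels in images.items():
        buckets.setdefault(average_hash(pixels), []).append(name)
    return [group for group in buckets.values() if len(group) > 1]

# Tiny synthetic demo: two identical "photos" and one different one.
photos = {
    "voter_a.jpg": [[10, 200], [220, 15]],
    "voter_b.jpg": [[10, 200], [220, 15]],  # exact duplicate of voter_a
    "voter_c.jpg": [[200, 10], [15, 220]],
}
print(find_duplicate_groups(photos))  # → [['voter_a.jpg', 'voter_b.jpg']]
```

In practice, near-duplicates (rescans, re-crops) need fuzzy matching, for example comparing Hamming distance between fingerprints via an approximate-nearest-neighbor index, but the one-pass bucketing above is the shape of the speedup.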

Mridu Bhandari

Wow, that deserves some applause. That’s a humongous task.

A. S. Rajgopal

So what I see in this country is that I don’t think pure-play chatbots, generative AI the way it has been envisaged, will be the primary use case. I think we are going far beyond that in terms of applying AI to improve the productivity of citizen services, and also giving use cases that these small and medium enterprises can actually use. In terms of enabling more GPUs, you see, we need a lot of money. We are investing about one hundredth of what the U.S. is investing, or even less. So for us to do more, I think we need to remove certain bottlenecks. One of the things I believe can be done: I’m sure everybody is familiar, we all pay GST. When we import servers, we pay GST on them, and then when we deliver services, I get that as an input credit, so we only pay the value-added piece back to the government. The government gets the GST either way, and I get an input credit. So one thing we could look at is whether we can waive the GST up front and just take the GST when the services are being delivered. What that would do is reduce my upfront infrastructure cost by about 18 percent. I don’t have to fund that up front and then pay interest on it, or raise equity and pay more expectation on that. These things could really help, and these are some of the things the government should look at.
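The cash-flow argument above can be made concrete with back-of-envelope arithmetic. This is a hypothetical editor’s illustration only: the figures, the 18% rate applied here, and the function names are assumptions for the sketch, not tax advice or NxtGen’s actual numbers.

```python
# Editor's back-of-envelope illustration of the GST cash-flow point.
# All figures are hypothetical: GST on imported servers is taken as 18%,
# and the input tax credit is assumed to arrive `months_to_credit` later.

GST_RATE = 0.18

def upfront_outlay(hardware_cost, gst_waived):
    """Cash the provider must raise on day one to land the servers."""
    return hardware_cost if gst_waived else hardware_cost * (1 + GST_RATE)

def financing_cost(hardware_cost, interest_rate, months_to_credit):
    """Simple interest paid on the GST amount while waiting for the credit."""
    gst_paid = hardware_cost * GST_RATE
    return gst_paid * interest_rate * (months_to_credit / 12)

cost = 100_000_000  # Rs 10 crore of servers, a made-up figure
print(upfront_outlay(cost, gst_waived=False))  # hardware plus 18% GST up front
print(upfront_outlay(cost, gst_waived=True))   # hardware cost only
print(financing_cost(cost, interest_rate=0.10, months_to_credit=12))
```

Either way the government collects the same GST once services are billed; the waiver only removes the provider’s need to finance that amount in the interim, which is the roughly 18 percent upfront reduction referred to above.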

Thank you. The last point is that they’ve given a tax holiday for delivering services to the world, but I think India has a lot more to do within India than just look at the world market. And I believe that Indian service providers should get the same benefits as the global providers would get when they host services in India. So maybe a GST waiver and some income tax benefits could be good.

Mridu Bhandari

Okay. GST waiver, income tax benefits, demands from the industry coming in. Professor Bhaskar, if I can come to you now. You’ve often argued that nations need to compete not just on technology, but on trust, on institutional strength, and of course, very, very inclusive digital participation. And that’s very critical for a country like India, because we are a country of many countries and the rich-poor divide is quite huge. There are a lot of gaps to bridge here. Now, as India accelerates AI, what do you think are some of the biggest non-technical bottlenecks that we should be addressing, which you believe could really limit the momentum that we are on currently?

And if we need larger societal good, what are some of the non -technical barriers that we need to immediately resolve?

Bhaskar Chakravarti

Yeah, so thank you. Thank you for the question, and thank you for the invitation to join this terrific panel. It’s great that you have included the question of non-technical elements in this discussion, because in all the excitement around the technical infrastructure, which is of course enormous and happening right here in India, it’s no secret that this is one of the biggest talent pools in the world, growing very, very fast. In two years, one in three developers is going to be in India; it’s the largest mobile data pool anywhere in the world, and the third largest data pool anywhere in the world once you take mobile and everything else together.

Growth in compute, growth in energy, growth in workloads, all that is happening, which is fantastic. Now, when you think about what are the other elements that drive demand: we study 125 countries and try to understand the role of technology in shaping lives and livelihoods across those countries. The single most important determinant of what keeps a country on trajectory, in terms of both the momentum of growth and the state of its digital evolution, is the demand side. So when I think about the demand side, obviously the core infrastructure, which has been talked about, is enormously important and is going to continue to be a major contributor.

A second part, which has been talked about a lot in the Indian context, is the distribution infrastructure. With DPI and all the different platforms associated with it, we know that there’s a very powerful distribution system. Now, there’s a third infrastructure, which is the non -technical part, and that is what I would call the trust infrastructure. Now, when you think about trust, it’s a bit of a slippery concept. It’s very hard to define. Each one of us in our heads has an idea of what trust is. But if you force somebody to define it, we’ll struggle. The best thing about trust is I know what trust is when it is not there, when it is missing.

And then you have to ask the question, from a human perspective, what really is trust? And how do I bake that into the policy systems, into the technical systems, into the marketing systems, into the narrative around the India ecosystem, which will then keep moving the system? And trust ultimately has to do with confidence: do the people who are the grantors of trust have confidence that this invisible transaction that I’m engaging in, whether I’m putting my data into a system or I have entered my financial information and I’m expecting something on the other side, is going to be completed, and completed in a way that is reliable, that is repeatable, and will not take advantage of me?

So when you think about that whole trust ecosystem, India starts in a great place. Relative to where I come from, the United States, India is a far more trusting country in terms of trust in digital systems overall, and certainly in terms of AI. There’s a tremendous level of enthusiasm in terms of embracing all things AI. And we’ve seen this right here, just in the sheer numbers of people who’ve attended the conference. It shows a level of trust that is probably unmatched anywhere in the world. Now, this is a tremendous asset to start with. The challenge is that the institutional side of trust is still in the development process in India. So if you think about data governance, privacy, security, we are making progress, but we need to be much further along.

Other aspects of trust have to do with the fact that, say, the India AI mission is developed at the union level, at the center level. But the actual exercising of trust, the granting of trust, happens at the district level. And the district varies depending on whether I’m in Telangana or in Jharkhand. So at the union level, the principles that I’ve got in place need to be sensitive to how it’s being experienced from the ground up. There are many different facets of trust that we need to work on and put in place, including transparency, where with AI we are facing a challenge across the world. So this is an issue.

And then, you know, an approach to having redress and grievance systems and then literacy. You know, people need to be able to understand how to use this exciting technology and also protect themselves. I’ll pause.

Mridu Bhandari

Absolutely. Thank you for that wonderful perspective, Professor. Coming to you, Manish. Now, the Dell Technologies blueprint really calls for tighter alignment between policymakers, industry, academia, and institutional capacity. How can frameworks like this one ensure growth that is both globally competitive for India and also locally inclusive? Because there is a lot of regional growth, a lot of geography, that needs to be taken into the fold of AI when we are talking about being globally competitive as well.

Manish Gupta

Thank you for that question. And, you know, just before I go in there, I would add a couple of thoughts on what the professor just spoke about: trust. While he spoke more from a non-technical standpoint, there is also a technical aspect to it, around the entire governance. It’s not just non-technical in today’s world; it’s driven by data privacy and everything around it. But equally, on literacy, there’s also explainability. That trust comes really inherently once you have explainability and people are aware of what outcomes are coming: is that explainable, and do they understand it? Right. So it’s a very, very interesting world that we are in.

Now, back to the blueprint that we were talking about, and Vivek articulated that beautifully well, across the three pillars of invest: invest in data center capacity, invest in energy infrastructure, invest in people, which goes back into the innovate side of it. Because, like we just discussed, we’ve possibly got the largest pool of engineers around AI. And capitalizing on the innovate part is what’s going to differentiate us, what’s going to make it really real and practically doable within the industry, and would differentiate us versus other nations, and equally make it more democratized and ubiquitous across the nation. And lastly, it’s really about how do you continue to build in the guardrails?

How do you build the trust, like we just discussed, to ensure that the ecosystem knows that this entire process can be trusted and can be built upon? We’ve also got to remember the sustainability aspect of it, which is where, as you look at the blueprint, you will see us talk about the fact that energy efficiency, sustainability of data centers, and new architectural models are becoming super important. And that’s something that NextGen, under Rajgopal’s leadership, has demonstrated in building highly sustainable, more energy-efficient data centers that will allow us to use our energy resources in the best possible manner while democratizing access to compute capacity and data center capacity for organizations and verticals of various sizes.

So those are going to be the critical pillars around which we really believe there’s practicality in adopting and differentiating ourselves in the AI arena as we go forward. Right.

Mridu Bhandari

I’m going to pick up on that data center piece and come to you, Rajgopal. Now, given the scale that India will really need for competing globally, what would it take to truly build sovereign, cost -efficient AI infrastructure that’s not just available to large enterprises, but is also very, very affordable for the long tail of innovators that we have in this country?

A. S. Rajgopal

Yeah, if you see the data center industry, it’s been pretty concentrated in Mumbai and a little bit in Chennai, and the other markets didn’t really take off as much as they should have. So what we are trying to do, our current plan, is to put about 100 megawatts of data centers in about six states. And what I see going forward is that this could be the model to build on, where each state has got its own capacity. These states themselves have so much consumption that can happen, primarily because if you start applying AI to elevating the quality of education in India, which will be one of the first things that gets rolled out at scale here, and also to healthcare and citizen services, these things can require lots of computing capacity.

So we are working with a few state governments to see if we can bring a total transformation: actually consolidate their applications, bring about a data lake, apply intelligence to it, and take it to the masses. So the way I think it will evolve is that there will be many more regions where data centers spring up, and when they’re distributed, they need to be interconnected. We have a very good interconnect system, and not just the telcos; we have railway networks, we have power networks which can actually assist good connectivity between them. When these things come into play, we can have a pretty distributed, good amount of compute in India that can actually serve this.

But you must be aware that, in my context, I have four dimensions to work on. One, of course, is geopolitics. There is quite a lot that is happening; we need to ensure that we have access to the technologies that we want to bring to India, so that India actually works on the best available infrastructure. It is slightly better now, but there were restrictions before. The other aspect is the amount of money. I think it requires multiple billions of dollars of investment, and that should be facilitated; that should really come into the country. When we have the money piece sorted out, then we build the infrastructure. But the good thing that we can leverage is open source.

When we leverage open source, we can combine the infrastructure and open source and bring down the cost of compute so much that it is actually palatable for Indian citizens to use. Because it’s not about serving the 2 or 3 percent of the population which pays income tax; it’s about serving the 90 percent others. The moment we succeed in doing this, the talent follows. We have access to good talent in India, but the quantity is missing. We have good people, but you need many more good people. They’re going all over the world; we want to bring them back.

These people can come back when this money and infrastructure fall in place. Right. That would ensure that India is playing a role which is pretty balanced, leveraging the global technologies and leveraging local talent, and actually setting the standard for the future. It’s a blueprint for all the other countries which may otherwise miss out on this revolution and become digital colonies of the top two countries that are investing heavily in AI. So I personally believe we have a good blueprint, the blueprint can be applied to multiple countries, and we are well on the path. And I would prefer a distributed development of data centers across the country so that we are closer to the users.

Mridu Bhandari

Absolutely. Absolutely. Well, Professor, coming to you next: studies on digital competitiveness have consistently shown that institutional capacity often determines whether technology adoption really translates into economic value or not. Now, in the Indian context, what do we need to do to strengthen institutions and build the institutional muscle that ensures AI drives very, very inclusive growth rather than deepening the tech divide? Because there are already many divides that we are battling with.

Bhaskar Chakravarti

and I can then end up with a solution to the problem. So the same thing for skill building, for literacy. If I can see my ability to speak, my ability to read in multiple languages improve, suddenly my trust goes up. So what is the minimum amount of institutional safeguards I need to provide that? Then I come to something like healthcare, where people have a much bigger chasm to cross. That’s where people have a lot of concerns: should I be putting my information into the system? How is it going to be used? Can I trust the phone, when I’ve relied on a doctor, or relied on a wise person in my community?

I’ve relied on my mother, you know, for maternal health care advice. So how do I cross that chasm? Being able to provide the foundational trust elements is going to be important. So the answer to your question, the long answer, is: it depends. It depends on the user. As is the case with a lot of questions about India, it depends, right?

Mridu Bhandari

Well, Manish, you know, globally, we are seeing nations tie AI strategy to strategic autonomy. Whether it’s the semiconductor ecosystem or the supply chain, strategic autonomy is becoming extremely important for countries. For India, what are the two or three foundational capabilities that you think we need to build domestically in this decade to ensure that we are true creators of AI value and not just consumers?

Manish Gupta

Awesome. Great question, Mridu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technology, and the best example in my mind is UPI, or digital payments. Ten years back, 11 years back, we were just not there, and today we are by far the largest in the number and value of digital payments; within India it is multiples of the next-largest economy that does this. So that’s a great example of how we have been able to localize, democratize and proliferate the use of technology. Within that, I would really put on three hats here. The first, just inverting the pyramid, is not starting with technology but starting with people.

We have really got to shift our thinking from the users to the developers. It’s got to move from 1 billion users to 1 million or 10 million developers, and that’s the skill set, that’s the IP that we are going to bring in, because we’ve got that largest talent pool residing within the country. The second, and again, I heard you talk about semiconductors and supply chains: I think we have got to adopt the best that’s available globally. But equally, we’ve got to not just think about Made in India but talk about Trusted in India, which is where we work with organizations such as AISI, the Artificial Intelligence Safety Institute, to ensure that we are putting the right guardrails, the right governance policy and the entire institutional framework in place so that the AI we are building here is trusted.

And lastly, it goes back to the same thing, and maybe I’ll use the same example. We had the UPI of money; we need to have the UPI of AI, where we are building at scale using the data sources that we have. We have some of the largest, and some of the initiatives that the government has taken, like the India AI mission. But equally, think about AI Kosh: there are more than 7,000 data sets that are now available to organizations of all sizes. Use that to ensure that we are developing for the country at population scale, through academia, through the private sector, through startups, through MSMEs, all coming together. And that really represents a consistent API layer, bringing, theoretically, maybe even all of the data center and compute capacity that we are creating as part of the AI mission into one single layer that can be consumed by anybody and everybody across the nation to innovate on, to develop on.

So going back, I know it’s a long answer, but if I were to summarize three things: UPI of money to UPI of AI; Made in India transitioning to Trusted in India; and from a billion users to maybe a million or 10 million developers.

Mridu Bhandari

Made in India, but made for the world.

Manish Gupta

Absolutely.

Mridu Bhandari

All right. So I’m going to ask each one of you for a few concise takeaways today. Now, the blueprint that Dell Technologies has just unveiled talks about agile, trusted AI governance with sectoral baselines, testbeds, and strong institutional coordination. Yet globally, what we’ve seen is that speed often beats caution. And we are seeing some of the scary stuff coming out with AI as well; a lot of the agentic AI experiments that people are doing across the world call for caution. Now, in the Indian context, how do we stay globally competitive while also operationalizing very, very stringent safeguards?

And where should we really draw the line between the speed of innovation, or innovation velocity, and regulatory discipline? So, Rajgopal, if we can start with you, please.

A. S. Rajgopal

If you take the birth of Gen AI, I don’t think any of those rules were actually followed. It was built on every data that was available, in whatever form. Personally, in most places, I’m trying to tell people: it’s not about ignoring the risk factors, but I think regulation should not curtail innovation in the thinking that we should be restrictive about whatever we are working with. So one of the most important things is, I think, we should have less regulation in this space, because overall you should look at AI like a utility. You will have more good with AI than bad. Yes, there are things that can be handled as we go along. If you see what we do in cloud today, we haven’t been able to sort out our security and data protection postures even now; it’s an evolving journey. I think we will continuously catch up with the bad factors around AI adoption, and that’s a journey; it cannot start or stop, or be implemented at a point in time. So we should keep looking at those aspects and keep putting in place, whether regulations or technology interventions, whatever ensures that we handle the problem. But we should go fast forward with implementation and adoption of AI. And I see a lot of Indian enterprises really being reluctant in terms of adopting it, especially the larger ones.

But if you see in India, I think government will set the pace, and the startups and the MSMEs will catch on from there. And the large enterprises will actually struggle to catch up with the amount of innovation that’s happening in these smaller companies.

Mridu Bhandari

Where does that reluctance come from? Like, what are the top three reasons large organizations are reluctant? One is, of course, the fact that it’s not easy to adopt and transform a large organization. And perhaps startups and SMEs have the benefit of the agility and the small scale that they’re at. What else?

A. S. Rajgopal

So I think the first issue is not really about security and those aspects. Most importantly, I think a lot of people are struggling to imagine where to apply AI. The moment we understand that, you will start seeing that the benefits far outweigh the negative aspects. So the first thing people should look at is not just leveraging Gen AI in its chatbot form, but really looking at where you can deploy it. I talked about that deduplication piece; we are working on more than 150 projects, and not all of them are bot-based. So that imagination is what is important, and once that imagination comes, the benefits will outweigh the negative aspects of whatever we do.

Mridu Bhandari

All right. Professor, final takeaway from you on speed versus caution.

Bhaskar Chakravarti

Yes, so if you think about speed, I always like to use the analogy of a car and a road. You can think about the speed that you can build into the car, the velocity of the Ferrari. And a lot of the conversations that have happened, not necessarily in this room but in other rooms, are about the Ferraris, whether I’m talking about agentic AI or AI optimized for certain applications, and the technical aspects are really, really important. Now, if you take the Ferrari and bring it into the Indian context, maybe it’s a Maruti or something else that I need to be talking about. But then the question is, what’s the road on which this Ferrari is going?

If it’s a dirt road full of potholes, even a Ferrari is not going to go very fast. So much of our conversation here is about that dirt road, and what are the potholes that we need to fix. There’s one elephant in the room that we did not address, and I’m just going to leave it at that, which is: when we talk about trust, there’s a whole bunch of things you can do from an institutional standpoint to build trust, transparency, explainability, and so on. But there’s a huge issue that we need to think about, which is, what is going to be the impact on jobs?

This is the youngest major country in the world. It’s also one of the least employed countries in the world. And now with AI coming in, is that going to help boost jobs or is it going to take jobs away? If we don’t fix that problem, get ahead of it, all the trust we are talking about, all the institutions you build, could come down. So part of the policy infrastructure here is to figure out what the post-AI jobs picture is.

Mridu Bhandari

Absolutely. Manish, final word to you.

Manish Gupta

So, you know, I honestly don’t think that these are opposing forces, agility versus security. Particularly in this side of technology, you cannot have them act as opposing. It’s really about building the frameworks that take both of them together. This is a fast-evolving technology, but equally, institutions will have to be faster than that in evolving. I think the government has done a phenomenal job in building some of the frameworks around that, and the institutions, the AISI as one example. On the privacy side, DPDP or DEPA, all of those acts being there, are good frameworks to start with. And I’ll just index back on the question that you had asked Rajgopal earlier on, what is the hesitation from enterprises in adopting.

I don’t think it’s necessarily about security. It’s really about asking how many of those have real use cases, and where the real use cases exist, how many of them are able to monetize, or able to see them scale from experimentation or pilots into production. And I think that’s a job that we as industry folks, who understand the technology and are innovating in this space, really need to bring to the table, so that we can bring this to the fore across the nation, to enterprises and organizations of all sizes, academia and the public. I think that’s where this will get practical. But equally, these are not opposing forces.

Mridu Bhandari

Right. Well, thank you, gentlemen, for that absolutely incredible conversation. The takeaway is clear: investing, innovating, expanding skills pipelines and accelerating AI deployment is going to be key to India’s sovereign AI infrastructure. Appreciate you joining us here and taking the time today. We are also very delighted to now be joined by Honorable Shri Jayant Chaudhary Ji, Minister of State for Education and Minister of Skill Development. Huge round of applause. We are going to have him up here shortly. Thank you, gentlemen. Thank you very much for joining us. So if we can have you up here for a quick photo op, and we will then continue the conversation.

Thank you, everyone. Manish, if I can please request you to felicitate our speakers: Mr. Rajgopal, Professor Bhaskar Chakravarti. Let’s have a huge round of applause for our panelists here today. Thank you so much for joining us. If you all can just step off the stage for two minutes, we are getting it ready for our next conversation. Well, ladies and gentlemen, time to move on now. If India’s AI ambition is to translate into real economic growth, it’s obviously not going to be any one entity’s job. It is not going to be driven only by the government or by the industry alone; it will be driven by partnership. India has the talent, the digital backbone and the momentum. The real question, though, is how do we scale AI responsibly, securely and inclusively? So our next fireside chat will explore the role of AI in the development of the future, and what a powerful public-private model for AI could really look like.

And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping this journey, both from policy and industry perspectives. We have, of course, Honorable Shri Jayant Chaudhary Ji, Minister of State for Education and Minister of Skill Development and Entrepreneurship, Independent Charge, Government of India. And we have Dr. Vivek Mohindra, Special Advisor to the Vice Chairman and COO, Dell Technologies Global. If I can please have both of you up here for a quick conversation. Thank you so much. Well, it’s quite clear that public-private partnership is going to be critical to AI scaling and adoption in India. Minister Chaudhary, if I can start with you: how can PPP models accelerate

large-scale AI infrastructure? What have been some of the on-ground experiences you’ve seen so far? And of course, the government has been moving at breakneck speed when it comes to deploying more technology and giving a fillip to innovation in India. How are you ensuring trust, resilience and long-term national competitiveness as AI becomes mainstream in India?

Shri Jayant Chaudhary Ji

In the Indian context, as the audience is aware, we had a lot of catching up to do. And it’s fair to say that a lot of what we are seeing around us in AI has been facilitated by creating an ecosystem in a short span of time. Perhaps we may enjoy a second-mover advantage with regard to this technology. And that has come about only because of a strong top-down emphasis and push. The only reason why this event is happening here in India is because the leadership at the top understood very quickly the value and potential of this new technology: that we should not view it as a disruption, but view it as an opportunity to leapfrog legacy problems, deficit problems, and provide access and equity to our citizens and dignity to our workers. That’s why the Prime Minister, you know, at the last event in France, shared that leadership space. Every opportunity he gets, he talks about skilling, about young people, about the potential of AI, and he has clearly enunciated that this technology needs to be human-centric. I think that has given a real emphasis and push for academia, for our industry, for our vibrant startup ecosystems to really think about what they are doing in this space. I think that is the background to the event that we are all witnessing.

Thousands and thousands of people, casual visitors, apart from those who are already entrenched in technology. And the message that goes out is that one billion strong young people in a developing country are already thinking about what AI means to them and what they can do in this space: not just be consumers, but also be producers and innovators and thinkers and creators. Now, for me, PPP in this domain, when you think about Manav, about being human-centric, citizen-centric, the P that really matters is the people. And in that context, it’s important that you have a broad architecture which is open. This is something that India has stood for from the beginning, when there was a lot of debate about what should be the policy that enables AI.

But there was also a lot of fear around AI: about trust factors, about privacy, data, sovereignty, and multiple issues around the human interface, the augmented human worker, and what this means for education and the future of jobs. A lot of those issues were being discussed and debated. And India said that, yes, it’s good to have a strategy, and out of that strategy and those experiences a robust policy will evolve. It is essential to have guardrails, but that is a starting point; for now, we don’t want to infringe upon the possibility of innovation. And India took that approach. And we had open access to compute: the IndiaAI Mission was set up with a target of 18,000 GPUs. And in a short span, it has surpassed that.

It’s about 38,000. And the roadmap is that by the end of this year it will cross one lakh, roughly a threefold increase. Now think about it: all of this compute capacity that has been created is a model of PPP. It has to be housed in educational institutions, so that real research can happen in our premier educational institutions. This is a great time, when academia is more important than it perhaps ever was in the Indian context. In the Indian context, academia was, in our minds, partly separated from industry and the real economy. But now every Indian citizen is realising the value of research and innovation. Every family is saying that this is important, we must value it. And every educational institution is saying that we are not divorced from the market and the needs of our community, society, and nation-building. So that engagement between nation-building and technology is deeply immersed, thanks to the efforts of the IndiaAI Mission. And here I’ll just leave one data point with you: what is the cost of this compute facility? It is being provided to startups and researchers at 65 rupees an hour.

You pay 300 rupees for a couple of hours at a PVR cinema. So it’s probably the world’s cheapest compute facility, and it is open. We are celebrating Sarvam. Let’s not focus on for-profit versus not-for-profit, because everything has to be for people. If you look at Sarvam, that’s also, in my mind, a PPP: it has been incubated by IIT Madras and supported by the AI Mission. So that’s another example, because you’re right, government alone cannot invest in everything, from data to energy to compute to innovation. It really has to come from our citizens, our researchers, our technologists. It’s a collective mission.

Mridu Bhandari

Absolutely. Well, Dr. Mohindra, getting your point of view now: if we look at PPP as far as job enablement is concerned, because that’s the big concern citizens have, what’s going to happen to our jobs? And of course, skilling is part of that journey. How can Dell partner with, you know, the Ministry of Skill Development and Entrepreneurship? What are you doing from a future skill labs perspective? How are you accelerating AI apprenticeships, so that jobs also move beyond the metros? Because tier-two and tier-three towns are where a lot of talent is sitting, but we are looking at a lack of access of sorts when it comes to skilling.

Dr. Vivek Mohindra

Yeah, I think that’s a great question. And Honorable Minister, good to see you again. It reminds me of a discussion we had the last time we met, in October 2024; I know we’ve missed each other often since. And we covered very similar ground. At the heart of it, it does come down to this, and I commend India on the progress it has made in making access to all these GPUs available: it is an industry-academia partnership, working closely with the Minister here. And our view is, when you think about skilling, there are three different levels. You have to think about schooling, about the college level, and about employment.

That is, people entering the workforce or employed today. And then you think about delivery, whether online, in person, or through incubation. So those are the two big dimensions. And from our perspective, we are very excited to partner on extending this to Tier 2 and Tier 3 towns, working closely with the Minister and other institutions in India. And at the core of it, having access to these GPUs at such an amazing price point really unlocks the potential. And I think, as the Minister and I, we have to be very…

Mridu Bhandari

Right. Well, finally, to both of you: as we embed AI into all our critical sectors, whether it’s BFSI, telecom, agriculture, healthcare, or education, governance has to move from intent to very strong operational safeguards. What does a Zero Trust AI architecture then practically look like at a national level? Minister, if we can start with you.

Shri Jayant Chaudhary Ji

Well, Zero Trust is an interesting terminology. For me, the way I look at it, it means you have to be able to verify each and every protocol in your design. And in India, we generate a lot of data, and Indian citizens are quite open about access. Globally, privacy has been a major concern, and sometimes it also becomes an impediment for governance, because those data points aren’t being collected, analysed, and researched. In the Indian context, citizens are okay with sharing their data. And I’m saying this with the knowledge that crores of APAAR IDs have been created using consent, and we have not received any blowback from students’ families asking why.

Because they understand that if we are able, with technology, to customise and tailor the experience in the classroom for every student, so that no student gets left behind, what that means for that student’s employability, knowledge acquisition, and the quality of their educational experience is immense. But once you’re collecting the data, there is a lot of effort that needs to be put in so that trust is maintained, moving from zero trust to 100% trust in the public mind. So our data sets need to be segmented; there are protocols within the Government of India. In education, we’re thinking of creating a complete AI stack, which means anonymised data sets will be made available to researchers for creating value, for the layers of innovation, and for enabling startups to engage with the data that the Government of India and the citizens have shared.

Similarly, in skilling, we have the Skill India Digital Hub, which is also looking at creating those data sets, which can then really help us unleash the next wave of innovation and meet the requirements we have in skilling. So once you have that system design in place, it can be achieved. The Prime Minister spoke about a label for content: that it should be verifiable and legal. With this technology, consumer awareness is a big aspect: how we engage with these tools, how we understand the outcome of our engagement, how true it is, how it is verifiable, where these AI models are trained, and whether there is any bias in the data set. All that knowledge needs to be out there for consumers.

I also feel that there needs to be an audit trail for our new AI models. Maybe in the future you could have the CAG come out with an audit report on all the AI models. So it’s a brave new future, but it’s a balance. For partnership at scale, you need architecture with trust.

Mridu Bhandari

Absolutely. Final 30 seconds to you, Dr. Mohindra.

Dr. Vivek Mohindra

I think the Minister said it very eloquently. I would extend the notion: Zero Trust should start with data and extend into AI models, usability, the cybersecurity elements, and identity and access management. Those would be the ways I would extend it. And practically, beyond the governance framework, it means having things like a national risk registry, observability, the ability to report whenever there is an infraction, and auditability. But, Minister, you said it very eloquently, and I think our AI blueprint has more details that would be worth looking at.

Related Resources
Knowledge base sources related to the discussion topics (16)

Factual Notes
Claims verified against the Diplo knowledge base (2)

Confirmed (high confidence)

“Dr Vivek Mohindra presented the macro‑level data that underpins the Blueprint: AI‑related compute in India is projected to exceed 10 exaflops and AI workloads are expected to grow at a compound annual rate of more than 30 % over the next few years. He then outlined the three inter‑linked pillars of the Blueprint – Invest, Innovate and Evolve.”

The knowledge base notes that Dr Vivek Mohindra from Dell Technologies presented a comprehensive AI blueprint built upon three foundational pillars designed to position India as a global AI leader, matching the Invest, Innovate and Evolve framework described in the report [S1].

Correction (high confidence)

“A. S. Rajgopal highlighted a critical supply‑side bottleneck: India currently possesses only 40‑50 k GPUs, far short of the roughly 200 k GPUs he estimates are needed to meet demand.”

The knowledge base estimates that India needs at least 128,000 GPUs for domestic requirements, which is lower than the 200 k figure cited in the report, and it provides no data on the current 40–50 k stock, indicating that the report’s numbers are not aligned with the sources [S55] and [S56].

External Sources (89)
S1
Driving Indias AI Future Growth Innovation and Impact — -Dr. Vivek Mohindra- Special advisor to the vice chairman and COO of Dell Technologies Global
S2
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — And for this, I’m delighted to welcome two very eminent leaders who are instrumental in shaping the journey, both from p…
S3
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Thank you. Thank you. Thank you, everyone. Thank you. Thank you, gentlemen. Manish, if I can please request you to felic…
S4
Driving Indias AI Future Growth Innovation and Impact — Thank you. Thank you. Thank you, everyone. Thank you. Thank you, gentlemen. Manish, if I can please request you to felic…
S5
Driving Indias AI Future Growth Innovation and Impact — -Manish Gupta- President and Managing Director of Dell Technologies India
S6
Driving Indias AI Future Growth Innovation and Impact — -A. S. Rajgopal- Managing Director and Chief Executive Officer of NextGen Cloud Technologies
S7
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — A. S. Rajgopal: Yeah, actually, I think across many other countries that we have seen, India has got a much comprehensi…
S8
https://dig.watch/event/india-ai-impact-summit-2026/shaping-ais-story-trust-responsibility-real-world-outcomes — Absolutely, absolutely. Well, I’m going to come back to you with the knowledge that bringing the wish in, you know, in b…
S9
Shaping AI’s Story Trust Responsibility & Real-World Outcomes — -Mridu Bhandari- Moderator from Network18 This comprehensive discussion at the AI Impact Summit brought together leader…
S11
The Global Power Shift India’s Rise in AI & Semiconductors — First, how do we continue to build the intellectual foundation? Second, how do we build manufacturing depth and supply c…
S12
Panel Discussion: 01 — Minister Patria highlighted additional systemic barriers, particularly geopolitical dynamics creating asymmetric conditi…
S13
Technology Regulation and AI Governance Panel Discussion — – Federico Sturzenegger- Maryam bint Ahmed Al Hammadi Legal and regulatory | Economic Regulatory Reform and Deregulati…
S14
Leaders’ Plenary | Global Vision for AI Impact and Governance- Afternoon Session — “We are partnering with Indian companies and startups end‑to‑end to build AI‑powered services.”[13]. “The Prime Minister…
S15
Opportunities of Cross-Border Data Flow-DFFT for Development | IGF 2023 WS #224 — Building trust is highlighted as a fundamental requirement for data governance in multilateral environments. Trust can b…
S16
Transforming Agriculture_ AI for Resilient and Inclusive Food Systems — “Data should be as the infrastructure.”[74]. “Often, the farmers don’t own, and then, of course, the… the model and th…
S17
Scaling Trusted AI_ How France and India Are Building Industrial & Innovation Bridges — I would emphasize there’s two ingredients that are necessary, which often are associated with discussions of responsible…
S18
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S19
Open Forum #33 Building an International AI Cooperation Ecosystem — Participant: ≫ Distinguished guests, dear friends, it is a great honor to speak to you today on a topic that is reshapin…
S20
Atelier #1 : « Infrastructures et services numériques à l’ère de l’IA : quels enjeux de régulation, de sécurité et de souveraineté des données ? » — Drudeisha Madhub Au pas de course et je découvre le concept de la conclusion évolutive. Ça veut dire qu’au départ on ann…
S21
Efforts to improve energy efficiency in high-performance computing for a Sustainable Future — The demand for high-performance computing (HPC) has surged due to technological advancements like machine learning, geno…
S22
The Battle for Chips — Additionally, India advocates for providing more opportunities, investments, and technology to countries with greater po…
S23
European Tech Sovereignty: Feasibility, Challenges, and Strategic Pathways Forward — Virkkunen articulated sovereignty as “having choice in partnerships, not being forced into dependencies,” emphasizing st…
S24
https://dig.watch/event/india-ai-impact-summit-2026/shaping-the-future-ai-strategies-for-jobs-and-economic-development — Governments willing to move decisively, private sector actors willing to collaborate, technologists willing to design fo…
S25
Empowering India & the Global South Through AI Literacy — Capacity development | Artificial intelligence | Social and economic development
S26
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S27
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Imedadze describes how Georgia’s COMCOM has evolved from being an oversight regulator to an enabler of transformation ov…
S28
Building Public Interest AI Catalytic Funding for Equitable Compute Access — “computer capability collaboration connectivity compliance and context”[3]. “From these discussions, there were six foun…
S29
Making the case for digital connectivity for MSME’s: How improved take up and usage of digital connectivity, in particular for ecommerce, supports development objectives (ITC) — Finally, the analysis discusses the potential economic gains from removing taxes on ICT. Studies conducted with the supp…
S30
Securing access to financing to digital startups and fast growing small businesses in developing countries ( MFUG Innovation Partners) — FAST, a financial services company, has made significant changes in how businesses operate amidst the pandemic. It start…
S31
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — Adisa advocates for implementing zero trust architecture as a foundational policy pillar, which operates on the principl…
S32
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S33
The Foundation of AI Democratizing Compute Data Infrastructure — “So we are identifying agriculture, education, healthcare, and some more.”[83]. “So inspire them that they can really do…
S34
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — A recurring theme was the importance of tailoring policies and technologies to local contexts. Adamma Isamade emphasized…
S35
Collaborative AI Network – Strengthening Skills Research and Innovation — Beatriz from Brazil’s government shared their approach of creating shared AI infrastructure and data ecosystems, particu…
S36
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — This comment provides a philosophical and ethical framework for the entire biotech sovereignty agenda, showing how India…
S37
Building Indias Digital and Industrial Future with AI — As India advances in digital public infrastructure and its AI ambitions, the key is how we ensure these systems remain t…
S38
Shaping the Future AI Strategies for Jobs and Economic Development — “They are giving GPUs available at 65 rupees per month.”[119]. “so there are quite a few no no it’s public it’s all publ…
S39
Driving Indias AI Future Growth Innovation and Impact — Okay. GST waiver, income tax benefits, demands from the industry coming in. Professor Bhaskar, if I can come to you now….
S40
Foreword — – One threshold to establish from the outset is the minimum key features of devices that will enable people to use the i…
S41
IMF calls for new fiscal policies to address AI’s economic and environmental impacts — The International Monetary Fund (IMF) hasrecommendedfiscal policies for governments grappling with the economic impacts …
S42
Secure Finance Risk-Based AI Policy for the Banking Sector — Embedded governance is not regulatory burden.It is strategic imperative.It ensures that innovation is sustainable, trust…
S43
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — 4. **Transparency and Due Process**: Translucency in the regulatory creation and implementation is deemed essential for …
S44
Keynotes — Marianne Wilhelmsen: but as Norway prepares for the upcoming IGF 2025, I look forward to welcoming many of you in June a…
S45
Policymaker’s Guide to International AI Safety Coordination — OECD Secretary General Mathias Cormann emphasized that trust is built through inclusion and objective evidence. He ident…
S46
The Impact of Digitalisation and AI on Employment Quality – Challenges and Opportunities — Mr. Sher Verick:Great. Well, thank you very much. It’s a real pleasure to be with you here today. I think Janine updated…
S47
AI-Powered Chips and Skills Shaping Indias Next-Gen Workforce — I was just going to say, I think, I joke around that I only want LAM to benefit from this, but I think we’re seeing othe…
S48
Keynote-Roy Jakobs — This comment introduces a systems-thinking perspective that acknowledges the complexity of AI implementation beyond just…
S49
Safe Digital Futures for Children: Aligning Global Agendas | IGF 2023 WS #403 — The analysis examines topics such as online crime, the dark web, internet fragmentation, internet companies, innovation,…
S50
WS #31 Cybersecurity in AI: balancing innovation and risks — Dr. Alison: Thank you, I’m just checking. That’s great, thank you. So just very quick, Zero Trust 101, I’m sure you’…
S51
Building Trusted AI at Scale – Keynote Anne Bouverot — Innovation and protection can and must go hand in hand
S52
Tackling disinformation in electoral context — While some regulation is necessary, over-regulation should be avoided as it could stifle innovation and growth in the di…
S53
Tokenisation and the Future of Global Finance: A World Economic Forum 2026 Panel Discussion — Legal and regulatory | Economic Regulation and innovation must work together, not in opposition Regulation vs Innovati…
S54
Building Public Interest AI Catalytic Funding for Equitable Compute Access — And that’s where the investment readiness comes in. So we’re talking to countries, and we’ve had this conversation with …
S55
Indias Roadmap to an AGI-Enabled Future — -Compute Infrastructure and GPU Requirements: Analysis of India’s current and projected compute needs, with estimates su…
S56
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The government’s response through the India AI Mission has established a shared compute framework providing access to 38…
S57
Revisiting 10 AI and digital forecasts for 2025: Predictions and Reality — Additionally,public-private partnershipsare essential for scaling sustainability initiatives. Companies invest in on-sit…
S58
The Global Power Shift India’s Rise in AI & Semiconductors — Public-private partnerships are essential for scaling investments, with government de-risking enterprises without direct…
S59
Open Forum #33 Building an International AI Cooperation Ecosystem — – Qi Xiaoxia- Dai Wei- Ricardo Pelayo Development | Economic | Capacity development Innovation Ecosystems and Practica…
S60
AI data centre boom sparks incentives and pushback — The explosive growth of AI and cloud computing hasignited a data centre building boomacross the United States, with stat…
S61
Driving Indias AI Future Growth Innovation and Impact — Thank you so much, Dr. Mohindra. I’m going to request you to please stay back on stage. I’d also like to invite Manish G…
S62
Leaders TalkX: Towards a safer connected world: collaborative strategies to strengthen digital trust and cyber resilience — Public-Private Partnerships and Innovation Thailand’s approach to ICT development is founded on three pillars: co-creat…
S63
Leveraging AI4All_ Pathways to Inclusion — Three interconnected pillars needed: design, access, and investment – Three Pillars Framework
S64
IGF 2019 Final Report — For small and medium-sized enterprises (SMEs) the Internet and digital technologies facilitate access to new customers, …
S65
Making the case for digital connectivity for MSME’s: How improved take up and usage of digital connectivity, in particular for ecommerce, supports development objectives (ITC) — Finally, the analysis discusses the potential economic gains from removing taxes on ICT. Studies conducted with the supp…
S66
Building Public Interest AI Catalytic Funding for Equitable Compute Access — And I thought, how do we quantify this? So we, and I think we have already spoken to Calpa about this. We’re working, I …
S67
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — Described SMEs as having small funding scale support, insufficient upgrading capabilities, weak competitiveness and risk…
S68
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Jigar highlights that trust encompasses accuracy, data privacy, and transparent model governance.
S69
Harnessing Collective AI for India’s Social and Economic Development — Artificial intelligence | Human rights and the ethical dimensions of the information society | Data governance Professo…
S70
Shaping the Future AI Strategies for Jobs and Economic Development — The emphasis on collaboration over displacement provides a framework for managing workforce transitions while capturing …
S71
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S72
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S73
WSIS Action Line C2 Information and communication infrastructure — **Joshua Ku** from GitHub concluded the panel by demonstrating how open-source approaches can accelerate AI and infrastr…
S74
Collaborative AI Network – Strengthening Skills Research and Innovation — Beatriz from Brazil’s government shared their approach of creating shared AI infrastructure and data ecosystems, particu…
S75
Keynote Adresses at India AI Impact Summit 2026 — India’s Technological Capabilities and Strategic Positioning Multiple speakers emphasised India’s unique combination of…
S76
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Kiran Mazumdar-Shaw — Deep science requires a lot of research and development. It requires patient capital. But the societal and economic retu…
S77
Closing remarks — This transcript captures the closing remarks from the AI for Good Summit 2025, delivered by ITU Secretary-General Doreen…
S78
What policy levers can bridge the AI divide? — ## Forward-Looking Perspectives ## Infrastructure as Foundation ## International Cooperation and Knowledge Sharing ##…
S79
Keynote-Rishi Sunak — Artificial intelligence | International collaboration (captured under Artificial intelligence) The moderator formally w…
S80
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — – Transparent and inclusive processes for shaping the summit’s outcomes Amandeep Singh Gil: Thank you. And thank you to…
S81
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — “Let today’s steps of the network build tomorrow’s bigger strides”[66]. “We have deployed our vast and rich network of i…
S82
Keynote-Jeet Adani — Drawing historical parallels, Adani compared sovereign compute capacity to previous eras of strategic infrastructure dev…
S83
Network Evolution: Challenges and Solutions — Miguel González-Sancho from the European Commission provided insights into the EU White Paper, which outlines the challe…
S84
Empowering Workers in the Age of AI — Development | Economic Digital skills development should be structured in three levels: universal basic digital literac…
S85
Welfare for All Ensuring Equitable AI in the Worlds Democracies — “we actually doing it while they are billable because when they become non billable that’s not when you want it…”[105]…
S86
Transcript from the hearing — Also, I think we should be very careful to do this with our allies in the world and not do it alone. There is, first, we…
S87
Opening of the session — Cuba: Gracias, Senor President. Thank you, Chair. Chair, we have reached the final meeting, the final session, rather, o…
S88
Masterclass#1 — Gregor Ramus :I guess I’ll start. Yes, go ahead. The question started with me. And this is a question actually that we g…
S89
Session — Gabriele Mazzini: Yeah, I tried to cover as much as I can all those questions. Yeah, first of all, I think, you know, th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
D
Dr. Vivek Mohindra
3 arguments · 160 words per minute · 1,078 words · 402 seconds
Argument 1
Blueprint outlines investment in compute & energy, innovation via skilling, and evolution through agile governance (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra presented a three‑pillar AI blueprint for India, focusing on investing in compute and energy infrastructure, innovating through widespread skilling, and evolving with agile, responsible governance. He emphasized that these pillars together enable sovereign AI potential through public‑private partnership.
EVIDENCE
He described the investment pillar covering compute and energy infrastructure needed for universal access, especially for MSMEs, and noted the need for policy and practical actions [18-22]. He then explained the innovate pillar as centred on skilling from schools to workforce and the evolve pillar as requiring agile regulatory frameworks that balance innovation with responsibility [23-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI blueprint and its three pillars (invest, innovate, evolve) are described in the summit overview and the panel introduction [S1]; the partnership model for skilling with ministries is highlighted in the Leaders’ Plenary discussion [S14]; broader AI literacy and coordinated national effort are emphasized in the AI literacy report [S25].
MAJOR DISCUSSION POINT
AI Blueprint and Three‑Pillar Strategy
AGREED WITH
Mridu Bhandari, Shri Jayant Chaudhary Ji
Argument 2
Advocates an agile, balanced regulatory regime that protects while fostering rapid AI innovation (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra argued that AI regulations must be agile and adaptable, striking a balance between encouraging innovation and ensuring responsibility. He stressed that regulatory frameworks should not be anchored to outdated technologies.
EVIDENCE
He highlighted the need for a regulatory framework that balances innovation with responsibility and must be agile to keep pace with fast-moving AI technologies [28-32].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Regulatory reform that balances innovation with safeguards is discussed in the AI Governance panel [S13]; evolving regulatory frameworks tailored to national contexts are noted in the regulatory evolution session [S20]; systemic barriers and the need for adaptable rules are raised in the systemic barriers discussion [S12].
MAJOR DISCUSSION POINT
Trust, Governance, and Regulatory Framework
AGREED WITH
Manish Gupta, Bhaskar Chakravarti, Shri Jayant Chaudhary Ji
DISAGREED WITH
A. S. Rajgopal
Argument 3
Outlines a three‑tier skilling model (school, college, employment) and partnership with ministries to deliver AI apprenticeships nationwide (Dr. Vivek Mohindra)
EXPLANATION
Dr. Mohindra described a three‑level approach to AI skill development covering schooling, higher education, and on‑the‑job training, delivered through online, in‑person, and incubation modes. He highlighted Dell’s willingness to partner with the Ministry of Skill Development to extend apprenticeships to Tier‑2 and Tier‑3 regions.
EVIDENCE
He identified three levels (schooling, college, and employment) and noted delivery through online, in-person, and incubation modes, emphasizing partnership with the ministry to reach Tier-2 and Tier-3 towns and the importance of affordable GPU access [373-381].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The three-level skill development approach and ministry partnership are detailed in the Leaders’ Plenary on skilling initiatives [S14]; AI literacy and capacity-building programmes are referenced in the AI literacy report [S25]; the blueprint’s innovation pillar includes skilling across education levels [S1].
MAJOR DISCUSSION POINT
Public‑Private Partnerships (PPP) and Skill Development
M
Manish Gupta
6 arguments · 174 words per minute · 1,181 words · 405 seconds
Argument 1
Emphasises need for explainability, trust, and sustainable data‑center design within the governance pillar (Manish Gupta)
EXPLANATION
Manish Gupta stressed that governance must incorporate explainability of AI outcomes, build trust through transparency, and adopt energy‑efficient, sustainable data‑center designs. These elements are essential for responsible AI deployment at scale.
EVIDENCE
He discussed explainability and trust as core to governance, noting that explainable AI helps users understand outcomes [151-154], and highlighted the importance of energy-efficient, sustainable data-center architectures as a competitive differentiator [160-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust infrastructure components such as transparency and data-governance are outlined in the cross-border data flow discussion [S15]; the importance of transparency and reproducibility for trusted AI is highlighted in the Scaling Trusted AI paper [S17]; energy-efficient high-performance computing designs are examined in the sustainability of HPC study [S21].
MAJOR DISCUSSION POINT
Trust, Governance, and Regulatory Framework
Argument 2
Stresses explainability, auditability and a unified “UPI of AI” API layer to embed trust in applications (Manish Gupta)
EXPLANATION
Manish Gupta called for a unified API layer—likened to a UPI for AI—that would provide standardized access to data and compute, ensuring explainability and auditability across applications. This would foster trust and democratize AI innovation.
EVIDENCE
He highlighted the need for explainability and auditability to build trust [151-154] and described a “UPI of AI” that would create a consistent API layer for nationwide data and compute consumption [221-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of a national “UPI of AI” for standardized data and compute access is explicitly mentioned in the discussion on unified AI services [S2]; the shift toward a developer-centric ecosystem and common API layer is reinforced in the Open Forum summary [S19].
MAJOR DISCUSSION POINT
Trust, Governance, and Regulatory Framework
Argument 3
Calls for shifting focus from billions of users to millions of developers, creating a “UPI of AI” ecosystem for inclusive innovation (Manish Gupta)
EXPLANATION
Manish Gupta argued that India should move from a user‑centric model to a developer‑centric ecosystem, cultivating millions of AI developers to drive inclusive innovation. He linked this shift to the “UPI of AI” concept that would provide a common platform for data and compute.
EVIDENCE
He noted the need to transition from a billion users to millions of developers and described the “UPI of AI” as a consistent API layer enabling nationwide innovation [226-244].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The move from a user-centric to a developer-centric model is highlighted in the Open Forum remarks on building an AI developer community [S19]; the same discussion references the need for a unified AI platform analogous to UPI [S2].
MAJOR DISCUSSION POINT
Public‑Private Partnerships (PPP) and Skill Development
Argument 4
Highlights energy‑efficient, sustainable data‑center architectures as a competitive differentiator (Manish Gupta)
EXPLANATION
Manish Gupta pointed out that energy efficiency and sustainability in data‑center design are crucial for India’s AI competitiveness, reducing operational costs while supporting widespread access.
EVIDENCE
He referenced the importance of energy-efficient, sustainable data-center architectures as a key pillar for practical AI adoption and differentiation [160-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Sustainable, energy-efficient HPC and data-center designs are discussed in the report on improving energy efficiency in high-performance computing [S21]; the Scaling Trusted AI paper also stresses sustainable architecture as a differentiator [S17].
MAJOR DISCUSSION POINT
Building Sovereign, Cost‑Efficient AI Infrastructure
Argument 5
Calls for domestic capabilities in semiconductors and trusted‑in‑India hardware to achieve strategic autonomy (Manish Gupta)
EXPLANATION
Manish Gupta emphasized the need for India to develop indigenous semiconductor and hardware capabilities, moving from “made in India” to “trusted in India” to secure strategic autonomy in AI.
EVIDENCE
He advocated adopting the best global technologies while ensuring they are trusted in India, mentioning work with organizations like the Artificial Intelligence Safety Institute to embed guardrails and governance [229-235].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
India’s push for chip sovereignty and indigenized semiconductor capabilities is examined in the Battle for Chips analysis [S22]; European perspectives on strategic autonomy echo similar goals for indigenous tech capacity [S23]; the Global Power Shift briefing calls for sovereign AI capability and indigenization of critical components [S11].
MAJOR DISCUSSION POINT
Building Sovereign, Cost‑Efficient AI Infrastructure
Argument 6
States that speed and security are not opposing forces; they require integrated frameworks and real use‑case monetisation (Manish Gupta)
EXPLANATION
Manish Gupta argued that agility and security can coexist through integrated frameworks, emphasizing the need for real, monetizable AI use cases to drive adoption rather than viewing regulation as a barrier.
EVIDENCE
He said agility and security are not opposing, and highlighted the importance of frameworks that combine both, along with the need for real use-cases to be monetized and scaled from pilots to production [291-300].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Balanced regulatory approaches that do not hinder rapid innovation are discussed in the AI Governance panel [S13]; evolving frameworks that integrate security considerations are highlighted in the regulatory evolution session [S20].
MAJOR DISCUSSION POINT
Balancing Innovation Speed with Safeguards
Mridu Bhandari
2 arguments · 135 words per minute · 1976 words · 875 seconds
Argument 1
Positions AI as a catalyst for economic growth, social empowerment, and global leadership, calling for a coordinated national effort (Mridu Bhandari)
EXPLANATION
Mridu framed AI as a driver of India’s economic growth, social empowerment, and global leadership, urging coordinated action across sectors to harness AI’s potential.
EVIDENCE
She opened the session stating that AI drives economic growth, social empowerment, and global leadership for India and called the discussion a “call to action” [2-3]. Later she reiterated the blueprint’s focus on invest, innovate, and evolve pillars to translate ambition into nation-scale execution [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The moderator’s opening remarks frame AI as a driver of growth and empowerment and call for coordinated action [S9]; the AI literacy and development report underscores AI’s role in socio-economic advancement [S25].
MAJOR DISCUSSION POINT
AI Blueprint and Three‑Pillar Strategy
AGREED WITH
Dr. Vivek Mohindra, Shri Jayant Chaudhary Ji
Argument 2
Describes the blueprint’s three pillars—invest, innovate, evolve—and frames the upcoming panel as translating them into actionable steps (Mridu Bhandari)
EXPLANATION
She summarized the blueprint’s three pillars and introduced the panel to discuss concrete actions for scaling AI across India, emphasizing the need for inclusive execution.
EVIDENCE
She explained that the blueprint centers on investing in sovereign compute and data foundations, innovating with collaboration and a future-ready workforce, and evolving into responsible, agile governance [46-48].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The summit overview presents the invest-innovate-evolve pillars of the AI blueprint [S1]; the Global Power Shift briefing reiterates the three-pillar strategy for sovereign AI capability [S11]; the Leaders’ Plenary summarizes the blueprint’s actionable roadmap [S14].
MAJOR DISCUSSION POINT
AI Blueprint and Three‑Pillar Strategy
A. S. Rajgopal
4 arguments · 173 words per minute · 1866 words · 644 seconds
Argument 1
Calls for GST waiver and income‑tax benefits to lower upfront infrastructure costs for firms (A. S. Rajgopal)
EXPLANATION
Rajgopal suggested that removing GST on imported servers and providing income‑tax incentives would reduce upfront costs for AI infrastructure, making it more affordable for startups and MSMEs.
EVIDENCE
He explained that GST is currently paid on imported servers, increasing costs by about 18%, and proposed waiving GST at the point of import and offering income-tax benefits to lower the financial burden [89-92].
MAJOR DISCUSSION POINT
Access to Compute for Startups and MSMEs
AGREED WITH
Shri Jayant Chaudhary Ji, Dr. Vivek Mohindra, Manish Gupta
DISAGREED WITH
Shri Jayant Chaudhary Ji
Argument 2
Highlights existing government GPU subsidies but stresses the shortage of units and the need for massive scale‑up (A. S. Rajgopal)
EXPLANATION
Rajgopal noted that while the government offers subsidized GPU access, the current supply (40,000–50,000 GPUs) falls far short of the estimated need (≈200,000 GPUs), calling for substantial investment and deployment.
EVIDENCE
He referenced the GPU subsidy scheme where startups can apply for GPUs at subsidized rates, but highlighted that India currently has only 40,000–50,000 GPUs versus an estimated need of 200,000 GPUs, indicating a major shortfall [68-72].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The discussion notes the government’s GPU subsidy scheme and the current limited inventory, emphasizing the gap between supply and demand [S1].
MAJOR DISCUSSION POINT
Access to Compute for Startups and MSMEs
Argument 3
Proposes distributed data‑centers across multiple states, leveraging rail and power networks for connectivity and resilience (A. S. Rajgopal)
EXPLANATION
Rajgopal outlined a plan to build roughly 100 MW of data‑center capacity in six states, using existing rail and power infrastructure to interconnect regional centers, thereby creating a distributed, sovereign compute fabric.
EVIDENCE
He described the current concentration of data centers in Mumbai and Chennai and the plan to deploy 100 MW across six states, leveraging rail and power networks for inter-connectivity and resilience [167-176].
MAJOR DISCUSSION POINT
Building Sovereign, Cost‑Efficient AI Infrastructure
Argument 4
Emphasises open‑source software and multi‑billion‑dollar investment to lower compute costs and ensure sovereignty (A. S. Rajgopal)
EXPLANATION
Rajgopal argued that combining massive financial investment with open‑source solutions can dramatically reduce compute costs, making AI accessible to the majority of the population and preserving sovereign control.
EVIDENCE
He mentioned the need for multiple billions of dollars of investment and highlighted that leveraging open-source can bring down compute costs, enabling affordable access for the 90% of the population not currently served [182-188].
MAJOR DISCUSSION POINT
Building Sovereign, Cost‑Efficient AI Infrastructure
Bhaskar Chakravarti
2 arguments · 183 words per minute · 1314 words · 430 seconds
Argument 1
Identifies “trust infrastructure” – transparency, data‑governance, privacy, district‑level implementation – as the key non‑technical bottleneck (Bhaskar Chakravarti)
EXPLANATION
Bhaskar highlighted that beyond technical infrastructure, the most critical factor for AI adoption is a robust trust infrastructure encompassing transparency, data governance, privacy, and localized implementation across districts.
EVIDENCE
He described trust infrastructure as involving transparency, data-governance, privacy, and the need for district-level execution, noting that while India enjoys high public trust, institutional mechanisms for trust are still developing [113-135].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Trust infrastructure components such as transparency, data governance, and privacy are detailed in the cross-border data flow session [S15]; the Scaling Trusted AI paper adds emphasis on transparency and reproducibility [S17]; the UN Security Council briefing stresses accountability and documentation for AI systems [S18].
MAJOR DISCUSSION POINT
Trust, Governance, and Regulatory Framework
Argument 2
Uses “Ferrari on a dirt road” analogy to argue that rapid AI progress must be matched by robust institutional “road” (Bhaskar Chakravarti)
EXPLANATION
He compared fast AI capabilities (a Ferrari) to the need for solid institutional foundations (a good road), warning that without proper governance, infrastructure, and trust, rapid AI advancement cannot be sustained.
EVIDENCE
He employed the Ferrari-on-a-dirt-road metaphor, stating that even a high-performance AI system cannot thrive on a weak institutional “road” filled with potholes, emphasizing the need to fix those gaps [275-282].
MAJOR DISCUSSION POINT
Balancing Innovation Speed with Safeguards
Shri Jayant Chaudhary Ji
4 arguments · 148 words per minute · 1259 words · 508 seconds
Argument 1
Announces ultra‑low‑cost compute facilities (≈ 65 ₹/hour) as a public‑private partnership outcome for researchers and startups (Shri Jayant Chaudhary Ji)
EXPLANATION
The Minister highlighted that India’s AI mission has created compute facilities priced at roughly 65 rupees per hour, making them among the world’s cheapest and demonstrating successful PPP delivery.
EVIDENCE
He cited that the compute facility costs 65 rupees per hour for researchers and startups, comparing it to a 300-rupee cinema ticket, and noted that the target of 18,000 GPUs has already been surpassed with 38,000 GPUs deployed [346-350].
MAJOR DISCUSSION POINT
Access to Compute for Startups and MSMEs
Argument 2
Describes PPP as the engine for scaling AI infrastructure, citing open, cheap compute and collaborative research hubs (Shri Jayant Chaudhary Ji)
EXPLANATION
He explained that public‑private partnerships have been crucial for rapidly building AI infrastructure, providing open, low‑cost compute and fostering collaborative research hubs such as the Sarvam initiative incubated by IIT Madras.
EVIDENCE
He discussed the top-down push for AI, the open-access compute model, and gave the Sarvam example as a PPP-driven research hub supported by the AI mission, emphasizing cheap compute and broad participation [326-349].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public-private partnerships are credited with rapidly building open, low-cost compute resources and establishing the Sarvam research hub in the summit discussion [S1].
MAJOR DISCUSSION POINT
Public‑Private Partnerships (PPP) and Skill Development
AGREED WITH
Dr. Vivek Mohindra, Mridu Bhandari
Argument 3
Proposes a Zero‑Trust AI architecture with verification, data segmentation, and audit trails at the national level (Shri Jayant Chaudhary Ji)
EXPLANATION
The Minister outlined a Zero‑Trust approach requiring verification of every protocol, segmentation of data, and comprehensive audit trails to maintain public trust in AI systems.
EVIDENCE
He described Zero-Trust as verifying each protocol, segmenting datasets, establishing audit trails, and ensuring transparency and legal verifiability of AI models, suggesting future CAG audits of AI systems [386-408].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The trust infrastructure framework calls for verification, segmentation, and auditability, aligning with the Zero-Trust recommendations in the regulatory evolution session [S20] and the trust infrastructure guidelines [S15].
MAJOR DISCUSSION POINT
Trust, Governance, and Regulatory Framework
Argument 4
Highlights the need for a national risk registry, observability, and auditability as practical zero‑trust safeguards (Shri Jayant Chaudhary Ji)
EXPLANATION
He called for concrete mechanisms such as a national risk registry, continuous observability, and auditability to operationalize Zero‑Trust AI at scale.
EVIDENCE
He mentioned that beyond governance frameworks, practical safeguards include a national risk registry, observability, reporting infractions, and auditability, linking these to the AI blueprint’s detailed recommendations [410-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Concrete safeguards such as a national risk registry, continuous observability, and auditability are outlined in the trust infrastructure and zero-trust discussions [S15]; further elaborated in the evolving regulatory frameworks session [S20].
MAJOR DISCUSSION POINT
Balancing Innovation Speed with Safeguards
Agreements
Agreement Points
Public‑private partnership (PPP) is essential to scale AI infrastructure and drive India’s AI ambition
Speakers: Dr. Vivek Mohindra, Mridu Bhandari, Shri Jayant Chaudhary Ji
Blueprint outlines investment in compute & energy, innovation via skilling, and evolution through agile governance (Dr. Vivek Mohindra)
Positions AI as a catalyst for economic growth, social empowerment, and global leadership, calling for a coordinated national effort (Mridu Bhandari)
Describes PPP as the engine for scaling AI infrastructure, citing open, cheap compute and collaborative research hubs (Shri Jayant Chaudhary Ji)
All three speakers stress that combining public resources and private innovation through PPPs is the key mechanism to realise sovereign AI potential, accelerate infrastructure rollout and ensure inclusive benefits [32-34][1-3][326-334].
Compute must be affordable and widely accessible for startups and MSMEs
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji, Dr. Vivek Mohindra, Manish Gupta
Calls for GST waiver and income‑tax benefits to lower upfront infrastructure costs for firms (A. S. Rajgopal)
Announces ultra‑low‑cost compute facilities (~65 ₹/hour) as a PPP outcome for researchers and startups (Shri Jayant Chaudhary Ji)
Investment pillar covers compute infrastructure to ensure everybody has access, including MSMEs (Dr. Vivek Mohindra)
Highlights democratising AI access via sustainable data‑center designs and distributed capacity (Manish Gupta)
The panel concurs that the high cost of GPU compute is a bottleneck; policy levers (GST waivers, tax incentives) and low-priced public compute resources are needed to enable SMEs to adopt AI at scale [68-72][89-92][18-20][161-162].
POLICY CONTEXT (KNOWLEDGE BASE)
The government has priced GPU access at roughly 65 rupees per hour to democratize AI access for innovators, reflecting a policy focus on low-cost compute for startups [S38]; broader assessments of India’s compute needs underscore the necessity of affordable resources to meet projected demand [S55].
Building AI talent and skilling pipelines is critical for adoption
Speakers: Dr. Vivek Mohindra, Manish Gupta, A. S. Rajgopal, Bhaskar Chakravarti
Outlines a three‑tier skilling model (school, college, employment) and partnership with ministries (Dr. Vivek Mohindra)
Calls for shifting focus from billions of users to millions of developers and a “UPI of AI” API layer to foster inclusive innovation (Manish Gupta)
Emphasises the shortage of skilled talent and the need to attract Indian AI experts back home (A. S. Rajgopal)
Highlights literacy and trust as part of the non‑technical bottleneck, stressing skill building for AI understanding (Bhaskar Chakravarti)
All agree that a robust, multi-level capacity-development strategy-from school curricula to a large developer community-is essential to translate AI potential into economic value [373-381][226-244][188-192][203-206].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry leaders note active development of talent pipelines and skilling programs to support AI initiatives, aligning with recommendations to couple compute investments with workforce readiness [S47]; international best-practice guides also stress talent readiness as a prerequisite for effective AI deployment [S54].
Governance must be agile, transparent and embed trust without stifling innovation
Speakers: Dr. Vivek Mohindra, Manish Gupta, Bhaskar Chakravarti, Shri Jayant Chaudhary Ji
Advocates an agile, balanced regulatory regime that protects while fostering rapid AI innovation (Dr. Vivek Mohindra)
States that speed and security are not opposing; integrated frameworks and real use‑cases are needed (Manish Gupta)
Identifies “trust infrastructure”—transparency, data‑governance, privacy—as the key non‑technical bottleneck (Bhaskar Chakravarti)
Proposes a Zero‑Trust AI architecture with verification, data segmentation and audit trails (Shri Jayant Chaudhary Ji)
Consensus that AI governance should be flexible, incorporate explainability and auditability, and build a trust infrastructure (including zero-trust principles) to balance innovation velocity with responsibility [28-32][291-300][113-122][386-416].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy briefs argue that embedded governance-fairness, transparency, accountability-should be a strategic imperative rather than a regulatory burden, supporting agile yet trustworthy AI systems [S42]; transparency and due-process are highlighted as essential for societal trust [S43], and OECD guidance stresses balancing risk management with innovation speed [S45].
Sovereign, energy‑efficient data‑center infrastructure is needed for scalable AI
Speakers: Manish Gupta, A. S. Rajgopal, Dr. Vivek Mohindra
Highlights energy‑efficient, sustainable data‑center designs as a competitive differentiator (Manish Gupta)
Proposes distributed data‑centers across states, leveraging rail and power networks for connectivity (A. S. Rajgopal)
Notes that investment pillar includes energy infrastructure essential for compute (Dr. Vivek Mohindra)
All agree that building a sovereign, cost-effective AI compute fabric requires distributed, sustainable data-center capacity backed by reliable energy supply [160-162][170-176][20-21].
POLICY CONTEXT (KNOWLEDGE BASE)
India’s sovereign compute framework includes provisions for energy-efficient data-center design, echoing sustainability recommendations for AI infrastructure and the role of renewable energy in large-scale compute facilities [S56][S57]; similar concerns about data-center incentives and environmental impact are raised in global policy discussions [S60].
Similar Viewpoints
Both argue that regulation should be adaptable and supportive of fast AI development rather than a barrier, emphasizing agility and integration of security measures [28-32][291-300].
Speakers: Dr. Vivek Mohindra, Manish Gupta
Advocates an agile, balanced regulatory regime that protects while fostering rapid AI innovation (Dr. Vivek Mohindra)
States that speed and security are not opposing; integrated frameworks are required (Manish Gupta)
Both highlight the necessity of dramatically reducing compute costs for startups and researchers, using fiscal incentives and low‑price public resources to broaden access [89-92][346-350].
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Calls for GST waiver and income‑tax benefits to lower upfront infrastructure costs (A. S. Rajgopal)
Announces ultra‑low‑cost compute facilities (~65 ₹/hour) as a PPP outcome (Shri Jayant Chaudhary Ji)
Both see trust, built through transparency, explainability and audit mechanisms, as essential for AI adoption and governance [113-122][151-154].
Speakers: Bhaskar Chakravarti, Manish Gupta
Identifies “trust infrastructure”—transparency, data‑governance, privacy—as the key non‑technical bottleneck (Bhaskar Chakravarti)
Emphasises explainability and auditability as core to building trust (Manish Gupta)
Both advocate a geographically distributed, infrastructure‑rich model for AI compute that leverages existing national assets to ensure resilience and accessibility [160-162][170-176].
Speakers: Manish Gupta, A. S. Rajgopal
Highlights sustainable, distributed data‑center designs as a competitive edge (Manish Gupta)
Proposes distributed data‑centers across states, leveraging existing rail and power networks (A. S. Rajgopal)
Unexpected Consensus
Zero‑Trust architecture can coexist with rapid AI innovation
Speakers: Shri Jayant Chaudhary Ji, Manish Gupta
Proposes a Zero‑Trust AI architecture with verification, data segmentation and audit trails (Shri Jayant Chaudhary Ji)
States that speed and security are not opposing; integrated frameworks can deliver both (Manish Gupta)
While Jayant frames Zero-Trust as a stringent security model, Manish argues that high speed and strong security can be achieved together, revealing an unexpected alignment: security-by-design does not have to slow down AI deployment [386-416][291-300].
POLICY CONTEXT (KNOWLEDGE BASE)
Security experts describe Zero-Trust models as compatible with fast AI development, emphasizing validation-first approaches that do not impede innovation [S50]; thought leaders affirm that innovation and protective measures can be jointly pursued at scale [S51].
Overall Assessment

The discussion shows strong convergence among speakers on five core themes: the centrality of PPPs, the need for affordable compute, the urgency of large‑scale skilling, the requirement for agile yet trustworthy governance, and the push for sustainable, distributed data‑center infrastructure.

High consensus – most participants echo each other’s positions, indicating a unified strategic direction for India’s AI roadmap. This broad alignment suggests that policy formulation, industry action and academic involvement can proceed with coordinated momentum, increasing the likelihood of effective implementation of the AI blueprint.

Differences
Different Viewpoints
Regulatory approach – balance between agile, responsible regulation versus minimal regulation to avoid curtailing innovation
Speakers: Dr. Vivek Mohindra, A. S. Rajgopal
Advocates an agile, balanced regulatory regime that protects while fostering rapid AI innovation (Dr. Vivek Mohindra)
Advocates less regulation to not curtail innovation, treating AI as a utility (A. S. Rajgopal)
Dr. Mohindra stresses that AI regulations must be agile, adaptable and strike a balance between innovation and responsibility, warning against anchoring rules to outdated technologies [28-32]. Rajgopal counters that the regulatory environment should be minimal, arguing that over-regulation would hinder AI’s utility and that the sector should move forward quickly with fewer constraints [256-259].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple sources highlight the need for a balanced regulatory stance that safeguards trust while enabling rapid AI progress, noting that over-regulation can stifle growth and that proportionate oversight is essential for sustainable innovation [S42][S45][S52][S53].
Fiscal incentives for AI infrastructure – GST waiver and income‑tax benefits versus reliance on existing low‑cost compute provision
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Calls for GST waiver and income‑tax benefits to lower upfront infrastructure costs for firms (A. S. Rajgopal)
Emphasizes ultra‑low‑cost compute facilities already available, without proposing additional tax incentives (Shri Jayant Chaudhary Ji)
Rajgopal proposes removing GST on imported servers and offering income-tax holidays to reduce the roughly 18% upfront cost burden on AI startups and MSMEs [89-92]. The Minister highlights that compute is already being offered at about 65 rupees per hour, positioning the existing pricing model as sufficient to drive adoption, and does not endorse further tax waivers [346-350].
POLICY CONTEXT (KNOWLEDGE BASE)
Industry stakeholders have called for GST waivers and income-tax incentives to boost AI infrastructure investment, reflecting a policy debate on fiscal support versus leveraging current low-cost compute pricing [S39]; the government’s existing low-cost GPU pricing scheme is cited as an alternative approach [S38].
Scale of compute resources needed – 200,000 GPUs versus a target of 100,000 GPUs
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Highlights a shortfall of GPUs, estimating a need for about 200,000 GPUs while current stock is 40,000‑50,000 (A. S. Rajgopal)
Reports that the AI mission’s target of 18,000 GPUs has already been surpassed with 38,000 deployed and a roadmap to reach 100,000 by year‑end (Shri Jayant Chaudhary Ji)
Rajgopal argues that India requires roughly 200,000 GPUs to meet demand, noting the present inventory of 40,000–50,000 GPUs is insufficient [71-72]. The Minister, citing the AI mission’s progress, states that 38,000 GPUs are already in place and that the goal is to scale to 100,000 GPUs soon, a figure considerably lower than Rajgopal’s estimate [346-348].
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses estimate India’s domestic GPU requirement at around 128,000 units for top organizations, while the current government roadmap targets roughly 100,000 GPUs, indicating a gap between projected demand and policy targets [S55][S56].
Unexpected Differences
Fiscal incentives for AI hardware – GST waiver proposal versus reliance on existing low‑cost compute pricing
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Calls for GST waiver and income‑tax benefits to lower upfront infrastructure costs for firms (A. S. Rajgopal)
Highlights ultra‑low‑cost compute facilities as a PPP success, without mentioning tax relief (Shri Jayant Chaudhary Ji)
Both speakers address affordability, but Rajgopal pushes for direct fiscal relief on hardware imports, while the Minister points to the already cheap compute pricing model as the solution, revealing an unexpected split on the preferred mechanism to lower costs [89-92][346-350].
POLICY CONTEXT (KNOWLEDGE BASE)
Discussions on hardware incentives reference proposals for GST waivers to lower AI hardware costs, contrasted with the government’s strategy of offering low-cost compute access without additional tax relief [S39][S38].
Magnitude of GPU shortfall – Rajgopal’s estimate of 200,000 GPUs versus the Minister’s roadmap to 100,000 GPUs
Speakers: A. S. Rajgopal, Shri Jayant Chaudhary Ji
Estimates a need for about 200,000 GPUs, noting current availability of 40,000‑50,000 (A. S. Rajgopal)
Reports that the AI mission has already exceeded its 18,000‑GPU target with 38,000 deployed and aims for 100,000 by year‑end (Shri Jayant Chaudhary Ji)
While both acknowledge a shortage, the gap between Rajgopal’s 200 k estimate and the Minister’s 100 k target is larger than expected, suggesting differing assessments of demand and supply dynamics [71-72][346-348].
POLICY CONTEXT (KNOWLEDGE BASE)
Expert estimates place the GPU shortfall at roughly 200,000 units, while official roadmaps aim for 100,000 GPUs, highlighting a significant discrepancy in supply planning [S55][S56].
Overall Assessment

The panel largely concurs on the strategic importance of public‑private partnerships, the need for robust trust mechanisms, and the urgency of skilling. However, clear divergences emerge around regulatory philosophy (agile balance vs minimal rules), fiscal policy for hardware (GST waiver vs reliance on low compute pricing), and the scale of compute resources required (200 k vs 100 k GPUs).

Moderate – while the overarching goals are shared, the differing policy prescriptions could impede coordinated action unless reconciled, potentially slowing the rollout of sovereign AI infrastructure and affecting India’s ability to compete globally.

Partial Agreements
All four speakers converge on the view that a strong public‑private partnership framework is essential for building India’s AI ecosystem, even though they emphasize different facets (policy‑industry alignment, research hubs, economic growth) [1-3][32-34][146-148][326-329].
Speakers: Dr. Vivek Mohindra, Manish Gupta, Shri Jayant Chaudhary Ji, Mridu Bhandari
Public‑private partnership is the key to unlocking sovereign AI potential (Dr. Vivek Mohindra)
Dell’s blueprint calls for tighter alignment between policymakers, industry, academia – a PPP model (Manish Gupta)
PPP is the engine for scaling AI infrastructure, delivering open, cheap compute and research hubs (Shri Jayant Chaudhary Ji)
AI is a catalyst for economic growth and requires coordinated national effort (Mridu Bhandari)
All agree that building a skilled AI workforce is critical, though Mohindra focuses on formal education pipelines, Manish on creating a massive developer community, and Rajgopal on expanding talent numbers and leveraging open‑source to make AI affordable [373-381][226-244][188-195].
Speakers: Dr. Vivek Mohindra, Manish Gupta, A. S. Rajgopal
Three‑tier skilling model covering school, college and employment (Dr. Vivek Mohindra)
Shift from billions of users to millions of developers; need for a ‘UPI of AI’ to democratise access (Manish Gupta)
India has a large talent pool but needs to increase quantity and bring back talent; open‑source can help lower compute costs (A. S. Rajgopal)
Each speaker stresses that trust is a non‑technical bottleneck and must be embedded through transparency, explainability, and robust verification mechanisms, whether at policy level, technical design, or national architecture [113-135][151-154][386-408].
Speakers: Bhaskar Chakravarti, Manish Gupta, Shri Jayant Chaudhary Ji
Trust infrastructure – transparency, data‑governance, privacy, district‑level implementation (Bhaskar Chakravarti)
Explainability and auditability are core to building trust (Manish Gupta)
Zero‑Trust AI architecture requires verification, data segmentation and audit trails (Shri Jayant Chaudhary Ji)
Takeaways
Key takeaways
India’s AI Blueprint is built on three pillars – Invest (compute, data, energy infrastructure), Innovate (skilling, collaboration, workforce development) and Evolve (agile, responsible governance, trust).
Public‑private partnership is seen as the engine for scaling AI infrastructure, with Dell and the government collaborating on low‑cost GPU compute, data‑center expansion, and skill‑lab programmes.
Access to affordable compute for startups and MSMEs remains a critical bottleneck; current GPU subsidies are insufficient and proposals such as GST waivers and income‑tax incentives were raised.
Trust and governance are identified as the primary non‑technical constraints; a “trust infrastructure” covering transparency, privacy, explainability, district‑level implementation and auditability is required.
A distributed, energy‑efficient data‑center network across multiple states, leveraging open‑source software and existing rail/power connectivity, is proposed to achieve sovereign, cost‑effective AI capacity.
Strategic autonomy calls for domestic capabilities in semiconductors and trusted‑in‑India hardware, alongside a unified “UPI of AI” API layer to democratise access to data and compute.
Balancing rapid AI innovation with safeguards is framed as a non‑zero‑sum problem; agile, sector‑specific regulations and zero‑trust architectures are advocated.
Skill development must span school, college and employment levels, with a focus on creating millions of AI developers rather than only serving end‑users.
Resolutions and action items
– Participants were urged to read the detailed Dell AI Blueprint and provide feedback.
– Dell Technologies to explore partnerships with the Ministry of Skill Development for AI apprenticeships and tier‑2/3 skilling labs.
– Government to consider policy adjustments such as GST waivers on imported AI hardware and income‑tax benefits for AI service providers.
– Scale up GPU availability: target of 1 lakh GPUs by year‑end, expanding beyond the current 40‑50k units.
– Develop a distributed data‑center strategy covering six states with ~100 MW capacity, leveraging rail and power networks for inter‑connectivity.
– Create a national AI risk registry, observability framework, and audit‑trail mechanisms to support a Zero‑Trust AI architecture.
– Promote open‑source AI stack adoption to lower compute costs and ensure sovereignty.
– Formulate a unified “UPI of AI” API layer to provide standardized, secure access to government data sets for innovators.
Unresolved issues
– Exact reasons for large enterprises’ reluctance to adopt AI beyond general uncertainty – detailed case studies and mitigation plans were not defined.
– Potential impact of AI on employment and job displacement remains unquantified; no concrete policy response was agreed upon.
– Implementation details for district‑level trust mechanisms, transparency standards, and grievance‑redressal systems were discussed but not finalized.
– How to ensure monetisation and scaling of AI pilots into production across sectors was highlighted as a challenge without a clear solution.
– Specific timelines and funding mechanisms for the multi‑billion‑dollar data‑center investments were not committed.
– The balance point between regulatory speed and safety (e.g., the exact scope of agile regulations) was debated, but no definitive framework was set.
Suggested compromises
– Adopt an agile, principle‑based regulatory regime that is flexible enough to keep pace with AI advances while embedding core safeguards (privacy, security, explainability).
– Provide fiscal incentives (GST waiver, tax holidays) to lower upfront costs for AI hardware, balancing industry cost concerns with revenue considerations.
– Maintain open, low‑cost compute access (≈ ₹65/hour) as a public good while allowing private providers to compete on value‑added services.
– Combine rapid AI deployment (the “Ferrari”) with investment in road‑level trust infrastructure (institutional capacity, transparency) to ensure safe operation.
– Shift focus from serving only large enterprises to building a mass developer ecosystem (“UPI of AI”), thereby sharing the benefits of AI across the economy.
Thought Provoking Comments
The regulatory framework has to be agile because the technology is moving at such a fast pace that you cannot anchor the regulatory framework to yesterday’s technologies.
Highlights the need for dynamic, forward‑looking policy rather than static rules, framing regulation as a catalyst rather than a barrier to AI innovation.
Set the tone for the later discussion on balancing speed and safety. It prompted participants (Manish Gupta, Raj Gopal, Bhaskar Chakravarti) to explore how governance can keep pace with rapid AI advances, leading to deeper conversation about trust, explainability, and zero‑trust architectures.
Speaker: Dr. Vivek Mohindra
We could waive GST on imported servers and only collect GST when the service is delivered, reducing upfront infrastructure cost by about 18 %.
Identifies a concrete fiscal barrier that directly affects MSMEs’ ability to acquire compute resources, moving the debate from abstract investment needs to actionable policy levers.
Shifted the conversation from high‑level investment pillars to specific tax reforms. It sparked interest from the moderator and other panelists, leading to broader discussion on incentives (income‑tax benefits) and the role of government in lowering cost of entry for small innovators.
Speaker: A. S. Rajgopal
The single most important determinant of a country’s digital trajectory is the demand side – and within that, a ‘trust infrastructure’ that ensures people feel confident that their data and transactions are safe and reliable.
Introduces the concept that trust, not just compute or data, is the critical missing piece for AI adoption, and that trust must be built at both institutional and grassroots levels.
Created a turning point where the panel moved from supply‑side (compute, energy) to demand‑side considerations. It prompted Manish Gupta to talk about explainability and led to the “Ferrari vs. road” analogy, deepening the analysis of non‑technical bottlenecks.
Speaker: Bhaskar Chakravarti
We need a ‘UPI of AI’ – a single, open, API‑layer that lets anyone, from startups to large enterprises, consume the nation’s data and compute resources just as UPI democratized digital payments.
Draws a powerful parallel between India’s successful financial inclusion model and AI, offering a clear, scalable vision for universal access to AI infrastructure.
Inspired the discussion on building a common platform and reinforced the theme of public‑private partnership. It also gave Raj Gopal a concrete reference point when talking about distributed data centers and open‑source cost reductions.
Speaker: Manish Gupta
Speed without a good road is useless – we can have the fastest AI models (Ferrari) but if the institutional ‘road’ is full of potholes (trust gaps, policy lag, job‑impact concerns), the whole system stalls.
Uses a vivid metaphor to encapsulate the interplay between technological capability and institutional readiness, emphasizing that policy, trust, and job impacts are the “road” that must be fixed.
Re‑oriented the conversation toward the practical challenges of implementation, prompting participants to discuss job displacement, transparency, and the need for robust institutional safeguards.
Speaker: Bhaskar Chakravarti
Our compute facility is being provided to startups and researchers at 65 rupees an hour – cheaper than a cinema ticket – making it the world’s cheapest open compute facility.
Provides a tangible metric of how public‑private partnership is already lowering barriers, turning abstract policy talk into a measurable achievement.
Validated the earlier calls for subsidies and GST waivers, reinforcing the argument that government‑backed pricing can accelerate adoption. It also set up the segue into the discussion on scaling AI beyond metros.
Speaker: Shri Jayant Chaudhary Ji
Zero‑trust AI architecture means starting with data, extending through models, cybersecurity, identity, and includes a national risk registry, observability, and auditability.
Offers a concrete, technical blueprint for governance that ties together the earlier abstract notions of trust, agility, and regulatory oversight.
Brought the conversation full circle to actionable steps, influencing the final remarks of both the minister and the moderator. It cemented the link between policy, technical safeguards, and the broader goal of responsible AI scaling.
Speaker: Dr. Vivek Mohindra
Overall Assessment

The discussion was driven forward by a handful of pivotal insights that moved it from high‑level aspirations to concrete policy and implementation pathways. Dr. Mohindra’s call for agile regulation and his zero‑trust architecture framed the governance challenge; Raj Gopal’s GST‑waiver proposal and the minister’s low‑cost compute pricing turned that challenge into actionable fiscal levers. Bhaskar Chakravarti’s “trust infrastructure” and his Ferrari‑versus‑road metaphor shifted the focus to demand‑side and institutional readiness, while Manish Gupta’s UPI‑of‑AI analogy offered a unifying, scalable solution. Together, these comments created a cascade: each new idea opened a sub‑topic, elicited supportive or complementary remarks, and deepened the conversation, ultimately shaping a narrative that blended investment, innovation, and evolution into a coherent roadmap for India’s AI future.

Follow-up Questions
Should GST on imported AI servers be waived upfront and collected only when services are delivered, to reduce upfront infrastructure costs for providers?
Rajgopal suggested a GST waiver could lower costs for AI infrastructure, indicating a policy change that needs clarification and assessment.
Speaker: A. S. Rajgopal
Should Indian AI service providers receive the same tax benefits as global providers when hosting services in India?
He proposed extending tax holidays/income‑tax benefits to domestic providers, a policy issue requiring further evaluation.
Speaker: A. S. Rajgopal
How can India scale its GPU inventory from the current 40‑50k units to the estimated need of ~200,000 GPUs?
Rajgopal highlighted a massive shortfall in GPU availability, pointing to a need for investment strategies and supply‑chain research.
Speaker: A. S. Rajgopal
What specific institutional safeguards (e.g., transparency mechanisms, grievance redressal, digital literacy programs) are required to build a robust trust infrastructure for AI in India?
He identified trust as a non‑technical bottleneck and called for concrete frameworks, indicating further study and design work.
Speaker: Bhaskar Chakravarti
What will be the impact of AI adoption on employment in India, and what policies can mitigate potential job displacement?
He raised the “elephant” of post‑AI jobs, signalling a need for research on labor market effects and protective measures.
Speaker: Bhaskar Chakravarti
How can India develop a unified ‘UPI of AI’ – a single API layer that aggregates data sets, compute, and services for universal access by developers, startups, and enterprises?
Gupta suggested a national API platform analogous to UPI for payments, requiring technical design and governance research.
Speaker: Manish Gupta
What should an agile, AI‑responsive regulatory framework look like to balance innovation speed with responsibility?
He emphasized the need for regulations that can keep pace with rapid AI advances, a topic needing policy formulation and stakeholder input.
Speaker: Dr. Vivek Mohindra
How should a national risk registry, observability tools, and auditability mechanisms be implemented to support a Zero‑Trust AI architecture?
He mentioned these components as essential for Zero Trust, indicating a need for detailed implementation plans and standards.
Speaker: Dr. Vivek Mohindra
What concrete models of public‑private partnership can accelerate large‑scale AI infrastructure deployment across Indian states?
While he described PPP benefits, he left open the question of specific partnership structures and financing mechanisms for replication.
Speaker: Shri Jayant Chaudhary Ji
What specific programs or frameworks can Dell Technologies and the Ministry of Skill Development co‑create to deliver AI apprenticeships and skilling in Tier‑2 and Tier‑3 towns?
He began outlining a partnership but did not detail concrete initiatives, indicating a need for joint program design and rollout plans.
Speaker: Dr. Vivek Mohindra
What are the technical and governance requirements to operationalize a Zero‑Trust AI architecture at the national level, including data segmentation, anonymization, and audit trails?
He described the concept but further research is needed to define standards, processes, and enforcement mechanisms.
Speaker: Shri Jayant Chaudhary Ji

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale, Cities, Startups & Digital Sovereignty – Fireside Chat. Moderator: Mariano-Florentino Cuellar


Session at a glance
Summary, keypoints, and speakers overview

Summary

The panel, comprising the IMF Managing Director, the WTO Deputy Director General, and Singapore’s Minister for Digital Development, convened to discuss how artificial intelligence should be positioned in the global context [3-6][9]. Moderator Mariano Florentino Cuellar highlighted that while global scientific and technological ties have lengthened life expectancy, the world is now more fragmented, making cooperation on AI harder than a decade ago [20-24]. He noted that AI’s development will influence these ties, but countries are taking divergent paths in adoption and capability [26-30].


Georgieva estimated that AI could add about 0.8 percentage points to global growth, accelerating the post-COVID recovery and creating jobs, especially in India where it could help achieve the “Viksit Bharat” vision [40-46]. She warned that AI also poses three major risks: widening inequality between those with and without access, large-scale job displacement affecting up to 40 % of jobs in emerging markets and 60 % in advanced economies, and potential financial-stability shocks [57-66][61-63]. Despite these concerns, she urged policymakers to embrace AI’s opportunities while managing its downsides and ensuring benefits are widely shared [54-58][66-68].


WTO Deputy Director General Joanna Hill argued that trade can facilitate AI diffusion to low- and middle-income countries and that AI will reshape comparative advantage toward capital-rich, data-rich economies, putting labor-intensive nations at risk [75-78]. She cited WTO research projecting a 40 % increase in trade growth by 2040 if appropriate skill development, digital infrastructure, and regulatory frameworks are put in place [80-84]. Singapore’s Minister Josephine Teo described her country’s role as a “trusted node” that maintains consistent, principle-based governance of technology, allowing it to navigate great-power competition while remaining reliable for partners [97-104][108-112].


She emphasized that trust and an ethical foundation for AI are essential, arguing that regulation alone cannot prevent social inequality and that broader social protections are needed to support workers in transition [173-186][227-232]. Georgieva later reinforced the need to revamp education, provide social safety nets, and create an enabling environment to ensure AI’s gains do not leave segments of the population behind [145-148][133-138]. The moderator concluded that the discussion underscored the importance of global cooperation, existing institutions, and mutual trust to manage AI’s challenges, noting that the world must act collectively rather than rely on isolated national efforts [236-242][210-214]. Overall, the panel agreed that while AI offers significant economic upside, its successful integration will depend on coordinated international policies, ethical safeguards, and sustained trust among nations and citizens [54-58][227-232][210-214].


Keypoints

Major discussion points


AI’s macro-economic upside and its systemic risks – The IMF Managing Director highlighted that AI could add roughly 0.8 percentage points to global growth, unlocking jobs and supporting initiatives like “Viksit Bharat” in India, but she also warned of fairness gaps, massive labour-market disruption (up to 40 % of jobs in emerging markets and 60 % in advanced economies) and potential financial-stability threats [40-42][45-48][55-62][64-66].


Trade as a conduit for AI diffusion and a source of new comparative-advantage dynamics – The WTO Deputy Director General explained that trade can spread AI to needy economies and that AI reshapes comparative advantage toward data-rich, capital-intensive countries, underscoring the need for skills, digital infrastructure and updated trade rules[75-84].


Singapore’s “trusted-node” approach to AI governance amid geopolitical tech decoupling – Singapore’s Minister described how the city-state maintains credibility by acting consistently and on principled, commercial-performance criteria (e.g., 5G choices), positioning itself as a reliable bridge for technology access[97-112].


Beyond regulation: education, social protection, and ethical foundations – The IMF representative called for a revamp of education to teach “learning-to-learn,” robust social safety nets for displaced workers, and an enabling environment that avoids the pitfalls of past globalization; Singapore’s minister added that relying solely on AI regulation is unrealistic and that broader solidarity measures are essential[145-152][173-188].


Global cooperation and trust as the keystone for a successful AI transition – The moderator stressed the fragmented world and the need for shared institutions; later speakers converged on “trust”-both public confidence in AI and an ethical, cooperative framework-as the single most critical factor for a positive future[17-27][210-229][236-242].


Overall purpose / goal of the discussion


The panel was convened to “position AI in the global context” by assessing its economic promise, identifying systemic risks, and exploring how international bodies (IMF, WTO) and forward-looking governments (e.g., Singapore) can shape policies that ensure AI’s benefits are widely shared while mitigating harms[1-4].


Tone of the discussion and its evolution


Opening – Formal and optimistic, introducing an elite, solution-oriented panel[1-4][35-38].


Mid-session – Acknowledges growing fragmentation and the seriousness of risks (inequality, job loss, financial instability) → more cautionary and analytical[23-24][55-62].


Later – Shifts to constructive, collaborative tone, emphasizing concrete policy tools, skill building, and governance models[75-84][97-112].


Closing – Converges on a hopeful, trust-centric message, stressing global solidarity and the adequacy of existing institutions rather than new ones[210-229][236-242].


Overall, the conversation moves from a high-level introduction of AI’s promise, through a sober appraisal of its challenges, to a consensus that coordinated, trust-based action-grounded in education, social protection, and principled governance-is essential for a beneficial global AI future.


Speakers

Kristalina Georgieva


Area of expertise: Macroeconomic stability, digital transformation, AI’s impact on global growth and labor markets


Role / Title: Managing Director, International Monetary Fund (IMF)


Speaker 1


Area of expertise: (not specified)


Role / Title: Event host / introductory speaker (introduces the panel and invites speakers)


Mariano Florentino Cuellar


Area of expertise: International policy, AI governance, global economic cooperation


Role / Title: Moderator of the panel; President, Carnegie Endowment for International Peace


Josephine Teo


Area of expertise: Digital development, AI governance, technology policy for small states


Role / Title: Minister of Digital Development and Information, Singapore


Joanna Hill


Area of expertise: International trade, AI and trade policy, comparative advantage


Role / Title: Deputy Director General, World Trade Organization (WTO)


Additional speakers:


(None identified beyond the listed speakers; all spoken contributions are accounted for above.)


Full session report
Comprehensive analysis and detailed insights

The session opened with moderator Mariano Florentino Cuellar introducing an “elite” panel to discuss the global positioning of artificial intelligence (AI). He announced the participants – IMF Managing Director Kristalina Georgieva, WTO Deputy Director-General Joanna Hill, and Singapore’s Minister for Digital Development and Information Josephine Teo – and then framed the debate with three observations. First, he linked longer life-expectancies (47 years in 1950 versus ≈ 73 years today) to the benefits of global scientific and technological ties [19-23]. Second, he noted that the world has become more fragmented, making cooperation harder now than it was even five to ten years ago [24-25]. Third, he warned that AI’s development will reshape these ties, while countries pursue divergent paths in adoption and capability [26-33].


Georgieva’s macro-economic perspective


Georgieva highlighted IMF research estimating that AI could lift global growth by roughly 0.8 percentage points, outpacing the post-COVID recovery and creating new jobs [40-44]. She cited India’s “Viksit Bharat” initiative as an illustration of AI-driven national development [45-47] and warned that fast adopters of digital infrastructure and AI skills can achieve up to twice the economic benefit of slower adopters [50-51]. She identified three major risks – widening fairness gaps, large-scale labour-market disruption (affecting about 40 % of jobs in emerging markets and 60 % in advanced economies) and potential financial-stability shocks [57-66] – and called for policy action. Georgieva also presented the IMF’s “1 AI job creates 1.3 jobs” multiplier, based on United States data [122-128].


Hill’s trade perspective


Hill argued that trade can serve as a conduit for AI diffusion to low- and middle-income economies [75-78]. She explained that AI reshapes comparative advantage toward data-rich, capital-intensive economies, threatening labour-intensive countries unless they invest in skills, digital infrastructure and appropriate regulations [79-84]. WTO research projects a possible 40 % increase in global trade growth by 2040 if these conditions are met [80-84]. Hill also pointed to the WTO’s technology-neutral architecture, likening it to the CERN-originated World Wide Web, as a foundation for AI-related trade [197-202].


Teo’s Singapore strategy


Teo described Singapore’s “trusted-node” approach, positioning the city-state as a reliable bridge between the United States and China amid increasing technology balkanisation [97-104][105-108]. She emphasized that trust is built through consistent, principle-based decision-making rather than size, citing the 5G rollout as an example of commercial, performance-driven choices made within a clear regulatory framework [109-112]. Teo warned that regulation alone cannot resolve rising social inequality; she advocated for broader cohesion measures – affordable housing, universal health care, quality education and mechanisms to help workers transition between jobs – as essential complements to any regulatory framework [173-188]. She stressed that public trust is the single most important yardstick of success [212-215].


Moderator’s additional remarks


After Teo’s answer, Cuellar suggested that Southeast Asia could act as a laboratory for experimenting with AI governance [215-218]. He later returned to Georgieva to ask how productivity gains from AI could be turned into shared prosperity. Georgieva reiterated the need for careful observation, data-driven policy projection and country-specific assessments [121-124], emphasizing again the uneven distribution of benefits and the risk of wage compression for the middle-income segment [129-138]. She called for education systems to shift from static skills to lifelong-learning, for expanded social protection, and for an enabling environment that avoids “sugar-coating” progress [145-152][153-158].


Key Findings


– AI could add roughly 0.8 % to global GDP and boost trade growth by up to 40 % by 2040, especially for countries that invest early in digital infrastructure and skills [40-44][80-84].


– Large-scale labour-market disruption is projected (≈ 40 % of jobs in emerging markets, ≈ 60 % in advanced economies) and could exacerbate inequality and financial-stability risks [57-66].


– Trade is a vital diffusion channel but must adapt to a shift in comparative advantage toward data-rich economies [75-78][80-84].


– Singapore’s “trusted-node” model shows how small states can maintain relevance through principle-based governance and consistent trust-building [97-112].


– Public trust and an ethical foundation are essential; regulation alone cannot ensure inclusive outcomes, and broader social policies are required [173-188][212-215][227-232].


– Existing multilateral institutions (IMF, WTO) are deemed sufficient to steer AI governance, provided they cooperate and update their frameworks [235-242].


Points of disagreement


Policy levers: Georgieva favoured macro-level education reform, targeted social protection and labour-market monitoring; Hill emphasised trade mechanisms, WTO-based rules and skill-building; Teo argued that regulation must be complemented by broader social-cohesion measures.


Governance framing: Georgieva highlighted the need for formal ethical guard-rails, whereas Teo placed public trust and societal safeguards at the centre of the governance model.


Conclusion


The panel agreed that AI offers sizable economic and trade gains but also poses systemic risks of inequality, job displacement and financial instability. Realising AI’s promise will require coordinated investment in digital infrastructure, lifelong-learning education systems, comprehensive social safety nets, and sustained public trust built on ethical, transparent governance. While the relative weight of trade- versus social-policy levers remains contested, the consensus underscores the adequacy of existing multilateral bodies-provided they cooperate, modernise their frameworks and adopt a technology-neutral, principle-based approach exemplified by Singapore’s trusted-node model.


Session transcript
Complete transcript of the session
Speaker 1

Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we have very elite panelists for this session. Ms. Kristalina Georgieva, the Managing Director of the International Monetary Fund. From macroeconomic stability to digital transformation, she’s been a leading voice on how AI will reshape the global economic order and what policymakers must do to ensure that its benefits are widely shared. Ms. Joanna Hill, the Deputy Director General of the World Trade Organization, bringing the trade perspective to a technology that is redrawing the boundaries of comparative advantage. Ms. Josephine Teo, the Minister of Digital Development and Information for Singapore, a nation that has become a global benchmark for how governments can integrate AI into public services.

And this conversation will be held in a few minutes. This will be moderated by Mr. Mariano Florentino Cuellar, President of the Carnegie Endowment for International Peace. So we have a very elite… set of panelists who are going to join us on this panel discussion, which is titled AI Needs to be Positioned in the Global Context. May I please invite our panelists to please join us on stage? So over to you, Mr. Cuellar.

Mariano Florentino Cuellar

Thank you very much and good afternoon, everybody. How are we doing AI summits? Let me try that again. Hello, Delhi. Thank you. Much better. It is not every day that we have the pleasure of having such a distinguished panel of international leaders. And I want to start by making three observations only as special observations for those of you who have chosen to be with us this afternoon. You could be anywhere in this complex, anywhere in the city, and you’re right here with us. The first is about the role of technology and science and global ties in making the world better. For those of you who are younger than me, which is most of you in the audience, you will live longer than my generation because of global ties, commerce, science and technology.

In 1950, when India was a young nation, global life expectancy was 47 years. Now it’s closer to 73 years. But at the same time, the second point is that the world that we are navigating today is fragmented. That set of global ties, diffusing science and technology, advancing global understanding and cooperation is a lot harder now than it was even five or 10 years ago. And everybody who’s been on this stage has been alluding to that in some way, that reality. The third point is that the use and development of AI will have an effect on those ties and on that prosperity in all likelihood. But there are divergences, different paths around AI. Some countries are using it more, some less.

Some countries play a certain role, some a very developed role in the tech stack, and others less. To talk about these issues, I cannot imagine a better panel. It’s not every day, as I said, that we have the Managing Director of the IMF, the Deputy Director General of the World Trade Organization, and the Minister for Information and Digital Development from Singapore. So I’m going to start with a question for Managing Director Georgieva. And the question is, all this discussion about artificial intelligence at the frontier, what do you see as the greatest possibilities and the greatest risks?

Kristalina Georgieva

Thank you very much. Namaste. Namaste. AI is an incredibly transformative technology, as we know. And the question is, what does it do for the world economy? We did some research, and here is the answer. Based on what we know, AI can lift up global growth by almost a percentage point, we say 0.8%. What does that mean? It would mean that the world would grow faster than it did before the COVID pandemic. And that is fantastic for creating more opportunities, more jobs. This is the magnitude that we see for India. And it would mean that India’s Viksit Bharat is achievable. It also means that the world risks becoming even more divergent. The accordion of opportunities may open even more between countries that do well and those who fall behind.

Thank you very much. Actually, what we see is the potential for countries that go fast on digital infrastructure, on skills, on adoption of AI, that they can do twice as well as those that don’t. So what is our main reason to be here at the AI Summit in Delhi? To embrace India’s proposition of democratizing AI, making sure that experience in India can then be passed to other countries, especially countries in the developing world, to make diffusion and adoption of AI the main priority, and to do it with a focus on people, on improving the opportunities and the livelihoods of people. I am very optimistic about AI. I’m also not naive. It brings significant risks. First, it brings the risk of making countries and the world less fair.

Some have it and others don’t. Second, it brings the risk of displacement of jobs with no good thinking about how to help people find their place in the new AI economy. We calculated this risk as very high. We actually see the impact of AI on the labor market like a tsunami hitting it globally. 40 % of jobs will be affected by AI, some enhanced, others eliminated. Emerging markets, 40%, but in advanced economies, 60%. And that is happening over a relatively short period of time. And the third risk we at the IMF worry a lot about is financial stability risk. Could AI get loose and create havoc on financial markets? But on balance, my appeal to all of us is embrace the opportunities, be mindful of the risks, and manage them well.

And above all, make sure that the spirit here is that AI is for the well-being of everybody, everywhere. Thank you.

Mariano Florentino Cuellar

We’re going to come right back to these questions in a minute, but I want to bring in the Deputy Director General of the World Trade Organization into the conversation. I want to ask you, picking up exactly where Managing Director Georgieva was going: the interest in democratizing the technology, having more countries be closer to the frontier. For more than a generation, as you know, we have been having arguments about trade globally and about whether trade helps reduce the gap in well-being between countries or actually pulls them apart even more. And given all that experience, I wonder what role you think the international trading system has in dealing with potential inequities in access to AI and the development of AI.

Joanna Hill

Thank you so much for the invitation to be here. Definitely we see that trade can help the diffusion of AI to those that most need it. And we also think that AI can help trade and can help lower-income and middle-income economies really progress through trade. Now, we do see that AI is really shifting what we think of as comparative advantage to those economies that are stronger in capital, data, and computing power, and therefore the countries that are more labor-intensive feel more at risk. At the same time, we also see important opportunities for these same countries. Of course, with all the caveats that we’ve been speaking about, the importance of investing in skills and regulations and in infrastructure, digital infrastructure, is incredibly important.

Our research suggests that by the year 2040, trade could grow by almost 40%. So we see really important opportunities for the middle- and lower-income economies. And trade is already working well in that way. Our trade agreements, the world trading system, are set up so that goods trade and services trade can develop with AI. But there are some areas that are still too new and too nuanced, and we still have to wait and see how they will develop and how the system has to accommodate them.

Mariano Florentino Cuellar

Minister Teo, as that system evolves, and as we deal with this emerging, or not even emerging anymore, emerged, technology, we talk about how much it’s going to affect countries large and small. You are playing a critical role, and I know you’re playing a critical role because I see you at every single AI summit in the world. It’s amazing. But how are countries like Singapore positioned to navigate this tsunami, these changes? And in particular, what do you think we could learn from Singapore’s strategy, as I see it, of being at the forefront of AI governance, the Model AI Governance Framework, for example, while also navigating a world that some people see as balkanized between China and the United States around the technology stack?

Josephine Teo

Thank you very much, Tino. That’s a lot of questions packed into one; I’ll do my best to address them. I think embedded in what you’re saying is that there is a risk of technology decoupling. And what does a small state do in this kind of context? How do we navigate the big-power contestation? The way we think about it is that, for Singapore, it’s very important to maintain the ability to operate as a trusted node. Trusted node means that others can say: we can trust you with our technology, so your companies and your people can continue to access whatever is most sophisticated, because it will not be abused, and the risk of it being misused is also minimized.

The question, however, is how do we remain trusted? And I think the only way to do so is if we act in a consistent and principled way. Being consistent and principled is not a matter of size, and Singapore is not the only small state with a good track record of holding this discipline. We are consistent in being pro-Singapore. Sometimes our choices may align with this country or that country. Sometimes they will align with many countries; sometimes with only a few. But they always align with our own interests. In technology choices, for example 5G, we always operate on the basis of principles. Number one, these are commercial decisions that have to be undertaken by the operators of the mobile networks.

And they have to decide on the basis of what works for them in terms of performance, security, and resilience, keeping in mind all the rules that are in place in our context. So those are the broad directions in which we operate. It’s not easy, but it’s a path that has served us well.

Mariano Florentino Cuellar

And I note that among the many things Singapore has contributed to the global discussion of AI, in addition to being a trusted node connecting different countries, there’s also the role Singapore and the region of Southeast Asia play in all this, because Southeast Asia is a region of such diversity and importance globally. And I want to come back in a minute to the question of how we might imagine Southeast Asia evolving as almost a laboratory for some of the issues we’re talking about. But first, I want to go back to Kristalina, if I may. It was clear in your earlier remarks that you see enormous possibilities for AI.

But you also acknowledged candidly something that maybe not every speaker has acknowledged, which is that along with that opportunity will probably come some disruption, some real policy difficulties in countries experiencing rapid change. The question then is how we might develop the right strategy so that the productivity gains the world can experience actually translate into shared prosperity. What do you think we can do on that score?

Kristalina Georgieva

The first thing we ought to do is to carefully observe what is actually happening, and then project the implications for policymakers. At the Fund, we did a very interesting piece of research in the United States assessing how much AI is already affecting the labor market. And we found out that one in ten jobs already requires additional skills, and for those who have these skills, the job pays better. Now, with money in their pockets, people then go and buy more local services; they go to restaurants, to entertainment. That creates demand for low-skilled jobs. And to our surprise, the total impact on employment in the aggregate is positive: one job with AI translates into 1.3 jobs in total employment.

But what does that mean? It means that a smaller segment of people get higher opportunities. A larger segment, yes, they can have jobs, but jobs at the lower end of the pay scale. And the most problematic is the fate of those squeezed in the middle. Their jobs don’t change; in relative terms they pay less, and some of these jobs disappear. What concerns us the most is that the jobs that disappear tend to be entry-level jobs. They are routine, and they are easily automated. So if you are in a part of the labor market that is easily automated, of course that creates a risk.

Obviously we will continue to work with countries to understand what is happening and then how to project it into policies for the future. I would draw three conclusions so far, and of course we have to be agile in how we look at AI. The first one is that education has to be revamped for a new world: people have to learn to learn, not so much to learn specific skills. Second, there has to be support for those affected; if they are a big chunk of a particular local economy and that labor market is changing dramatically, there has to be social protection, social support, so they don’t feel like the industrial workers in the United States did when their jobs were exported overseas. And third, it is very important that we look at the overall enabling environment.

Why does AI move faster in some places than in others? What we find is not very surprising. Some parts of the economy, some parts of society, are naturally better positioned because they have digital infrastructure in place; they are already in the digital world, and there is more demand for entrepreneurship, somebody spoke about it, and entrepreneurship is more dominant. And I think it is important for the world to be very attentive to what works and what doesn’t work, and not to sugarcoat the picture, because if we do, we would end up where we ended up with globalization: people revolting against it despite all the benefits it brings, because, yes, the world as a whole benefited, but some communities were devastated.

and the world did not pay attention to those communities in a timely manner. So those are my conclusions so far, and I am very mindful that we are going to learn much more. At the Fund, we are trying to see how each country is positioned. Some countries actually have more demand for AI skills than supply; some have more supply of AI skills than demand; and some have neither. So we have to work on multiple fronts, based on concrete assessment of conditions in countries and in localities within countries. I want to finish with a message to the Indian friends here in the audience: you are very fortunate that your country invested in public digital infrastructure.

So on this count, condition for AI? Check. You are very fortunate because your country is actively removing barriers to entrepreneurship. And on that count, we say check. And you are super fortunate to have a youthful, energetic, innovative population that is embracing AI. So what do we say? Check. So all the very best. This is terrific. Perfect.

Mariano Florentino Cuellar

Minister Teo.

Josephine Teo

Can I agree with the managing director more, if I may be allowed to chime in?

Mariano Florentino Cuellar

Yes, please.

Josephine Teo

I think sometimes there is a desire, a tendency to want to think of ways of regulating AI in order to slow down its advance, and perhaps to try and forestall the risks. I’m not underestimating that need. But to over-expect AI regulations to deliver on the other important issues, such as the potential for greater social inequality, I think is unrealistic. The way to deal with it is to look at what other methods there must be to strengthen social solidarity. For example, what provisions do we put in place to help people move from one job to the next?

What provisions do we put in place to ensure that even people who don’t earn a lot have the prospect of owning their own homes, access to good health care, and the ability to educate their children to a very high level? I think these are the other things, and you cannot run away from those conversations just by expecting regulations to solve the problem.

Mariano Florentino Cuellar

So what I’m hearing you both say, in a way, is that it would be a very silly thing if we tried to solve health care problems just by regulating pharmaceuticals. That would be a very poor fit, right? At the same time, you recognize that for certain products that are sold, it’s good for them to be safe; and in fact, safety, trust, and security can make them even easier to diffuse. But I think a very important takeaway from both of you is that the entire spectrum of tools a society has to build social cohesion is going to matter in the transition to a more AI-driven economy. We shouldn’t ignore them, and we shouldn’t put the focus only on what we can do by making models built in a certain way.

And I’d love for you to chime in, because trade has come up a bunch of times in just the last 47 seconds.

Joanna Hill

Actually, yes. We put out a report last year that looks at this issue in exactly that way. We look at the opportunities of AI that I talked about for the future, not only for the advanced countries but also for developing and lower-income ones. But we also look at the need for national policies for that to actually happen and to help the transition. And so we look at issues around competition policy, around the labor force, around skills development, around education. And to do that, the world trading system cannot act alone. We need to partner at our level with international organizations, and at the national level with the appropriate authorities and the private sector, in order to have that holistic approach. I would say these are lessons learned from past experiences, and we definitely want to apply those lessons to this new one.

Mariano Florentino Cuellar

So we have about four minutes left, and I have a last question for you all. Imagine yourselves in the future looking back, maybe 15 years from now. At that point, you’re being interviewed on this same stage here in India, and you’re saying it has been a very good thing to see how well the world has handled its relationship with this emerging technology of AI, and it has turned out very well because blank. I want you to mention one thing that you think would have been most critical to making that transition well. You’ve all mentioned a bunch of things, but I’m interested in the single most important takeaway you’d like to leave the audience with.

Josephine Teo

For me, that one word is trust. In 15 years, if we went and asked citizens in all the countries where AI is being deployed widely, "Do you trust this technology?" and their answer is no, then I believe we must have failed in some way. If they believe that this technology has been implemented in a way that didn’t rob them of a livelihood, didn’t leave them totally misinformed about the world, didn’t rob them of the ability to carry out their lives in a safe and secure manner, and didn’t destroy families; if they can still say that this is a technology that can work reasonably well if you put the safeguards in place, then I think we would have come a long way.

Mariano Florentino Cuellar

Deputy Director?

Joanna Hill

An appreciation for what the world trading system can deliver and is delivering. You know, when I think about it, last year marked 30 years since the WTO was born. And down the road at CERN, the World Wide Web was being created by scientists who wanted to collaborate. And that architecture, which is technology neutral, allowed the developments of the digital economy to come through. How much of that architecture can serve us for this new wave? Then let us concentrate on the areas that still need to be worked on, through collaboration and cooperation, and focus on those. Trading with trust, trading with safety, and appreciating and using what we already have to deliver.

Mariano Florentino Cuellar

Managing Director?

Kristalina Georgieva

Well, in 15 years, if my life expectancy has grown by another 50 years, I would say: great, we are successful. But on a serious note, to me the most important factor, and it goes a bit into the trust area, is the ethical foundation of AI. Whether we manage to put AI on a foundation as a force for good, or we leave space for AI to be a force for evil. And that balance is not an easy one. When I look at progress so far, we have done much more on the technical side of AI, and much less on building that strong ethical foundation and putting in place guardrails that do not restrict innovation but protect us from AI for bad. I still want my 50 extra years of life.

Mariano Florentino Cuellar

One closing observation, to just reinforce my appreciation for the three of you and the work we do. In the weeks immediately after the release of ChatGPT, which seems like 20 years ago but was not that long ago, there was talk about the need for an international atomic energy agency for AI, or a new international agency or treaty. We don’t talk about that anymore. And I think in some ways it’s an appropriate and mature recognition that we already have a set of institutions and mechanisms in place to deal with a set of emerging challenges. I think it’s also a recognition that many individual countries have to do their part to create social cohesion and manage this change and this transformation effectively. But I would ask that this audience recognize that all three of our remarkable leaders here on the stage also reflect another reality: even if sovereignty is important, and even if individual countries have to have their own priorities, the challenge of how we best live with the technology we have created is truly a global one.

It is not an individual country’s challenge alone. And the conversation we’re having today is an example of how we can learn from each other and find the right solutions. Thank you, and namaste.

Related Resources — Knowledge base sources related to the discussion topics (40)
Factual Notes — Claims verified against the Diplo knowledge base (5)
Confirmed (high)

“Moderator Mariano Florentino Cuellar introduced an “elite” panel of high‑profile speakers.”

The fireside chat transcript identifies Mariano Florentino Cuellar as the moderator and describes the panel as a “very elite” set of participants [S7].

Confirmed (high)

“The moderator made three observations, including that life expectancy rose from about 47 years in 1950 to roughly 73 years today.”

The opening remarks reference three observations about technology, science and global ties, matching the report’s description [S8]; WHO life-expectancy data cited in the Global Risks Report corroborate the 1950 and current figures [S107].

Confirmed (medium)

“The world has become more fragmented, making international cooperation harder than five to ten years ago.”

Multiple sources discuss rising protectionism, erosion of trust and fragmentation of cooperation, supporting this observation [S109] and [S111] and [S112].

Additional Context (medium)

“Georgieva cited India’s “Vixit Bharat” initiative as an example of AI‑driven national development.”

India’s AI-focused programme “Viksit Bharat” is described in several sources, confirming the existence of a national AI development initiative though the exact spelling differs [S115] and [S117].

Additional Context (low)

“IMF research estimates AI could lift global growth by roughly 0.8 percentage points, outpacing the post‑COVID recovery.”

IMF discussions on AI’s macro-economic impact and its potential to drive growth are noted, but the specific 0.8 pp figure is not present in the knowledge base; the general claim of AI-driven growth is supported [S15] and [S119].

External Sources (119)
S1
The Global Economic Outlook — – Kristalina Georgieva: Managing Director of the International Monetary Fund (IMF) Kristalina Georgieva: And yes, whil…
S2
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — -Kristalina Georgieva- Managing Director of the International Monetary Fund (IMF)
S3
(Interactive Dialogue 1) Summit of the Future – General Assembly, 79th session — Kristalina Georgieva, Managing Director of the International Monetary Fund
S4
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S5
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S6
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S7
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — 1 .3 jobs. in total employment. But what does that mean? It means that a smaller segment of people get higher opportunit…
S8
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-fireside-chat-moderator-mariano-florentino-cuellar — Mariano-Florentino Cuéllar: Managing Director? Our research suggests that by the year 2040, trade growth could be almo…
S9
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — -Josephine Teo- Role/title not specified (represents Singapore)
S11
S12
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-fireside-chat-moderator-mariano-florentino-cuellar — Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we hav…
S13
https://dig.watch/event/india-ai-impact-summit-2026/regional-leaders-discuss-ai-ready-digital-infrastructure — And in there, you can see, for example, that some of the lower income economies can seem quite open in that space. But i…
S14
United Nations High-Level Leaders’ Dialogue — – **Johanna Hill** – World Trade Organization (WTO) Johanna Hill: harness? Thank you for the invitation. We are facing …
S15
AI: Lifting All Boats / DAVOS 2025 — Kristalina Georgieva presents research showing that AI has the potential to increase global economic growth. This increa…
S16
UNSC meeting: Artificial intelligence, peace and security — Gabon and Mozambique drew attention to the potential for AI to exacerbate global inequalities, noting that the resources…
S17
Leaders’ Plenary | Global Vision for AI Impact and Governance Morning Session Part 1 — Algorithms are not just applications of mathematical codes that support the digital world. They are part of a complex po…
S18
How AI Drives Innovation and Economic Growth — “The biggest risk, I think, is definitely the labor market.”[35]. “If there was a dial where I could slow down the adapt…
S19
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S20
Artificial intelligence (AI) – UN Security Council — During the9821st meetingof the Artificial Intelligence Security Council, a key discussion centered around whether existi…
S21
What is it about AI that we need to regulate? — Multiple speakers emphasized that technological challenges transcend national borders and require coordinated internatio…
S22
Main Session | Policy Network on Artificial Intelligence — Benifei argues for the importance of developing common standards and definitions for AI at a global level. He suggests t…
S23
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S24
WS #255 AI and disinformation: Safeguarding Elections — An audience member suggests that addressing disinformation requires looking beyond just technological solutions. They ar…
S25
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink raises concerns about AI adoption patterns based on research showing that educated populations are disproportionate…
S26
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — There’s comparative advantage in countries. There is comparative advantage in firms. That needs to be preserved, even in…
S27
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — Furthermore, Ahmed expressed interest in exploring the most promising use cases of generative AI in Africa and other dev…
S28
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening …
S29
Can (generative) AI be compatible with Data Protection? | IGF 2023 #24 — Wei Wang:Thank you so much, Luca, as always. And thank you for having me today, at least virtually. Yes, and it’s very c…
S30
Revitalizing Universal Service Funds to Promote Inclusion | IGF 2023 — Reforms in Brazil’s USF have unlocked $675 million for school connectivity, with Giga securing an additional $1.7 billio…
S31
© 2019, United Nations — Policymakers also need to consider ways to help those individuals that may lose their jobs due to increasing digitalizat…
S32
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Need policies supporting displaced workers through industrial, macroeconomic, and social protection measures
S33
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S34
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores emphasized the need for “inclusive governance models with meaningful civil society participation”…
S35
AI for Social Empowerment_ Driving Change and Inclusion — Inequality and broader socio‑economic effects She warns that AI is exacerbating inequality by increasing capital concen…
S36
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — Professional experience analyzing various risks including cyber, environmental, and health risks, with observation that …
S37
Secure Finance Risk-Based AI Policy for the Banking Sector — Transition level data, cash flow analytics and behaviour indicators can provide more nuanced insight into the repayment …
S38
UN High Commissioner urges human rights-centric approach to mitigate risks in AI development — While AI holds transformative potential for solving critical issues like curing cancer and addressing global warming, it…
S39
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S40
Setting the Rules_ Global AI Standards for Growth and Governance — The discussion revealed remarkably high consensus across diverse stakeholders on the fundamental need for AI standards, …
S41
How to make AI governance fit for purpose? — All speakers recognize that AI’s global nature requires international cooperation and coordination, though they may diff…
S42
Advancing Scientific AI with Safety Ethics and Responsibility — High level of consensus with significant implications for AI governance policy. The agreement across speakers from diffe…
S43
Conversation: 02 — “So that’s why without trust and safety and understanding of what’s happening in your underlying environment, it becomes…
S44
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “And the philosophy here is that AI is a tool which is helping the humankind to make a decision”[28]. “Trust is importan…
S45
Digital Embassies for Sovereign AI — Trust as Foundation: Both Li and Fasel emphasized trust as a cornerstone requirement, with Switzerland’s established rep…
S46
State of play of major global AI Governance processes — Juha Heikkila:Thank you very much, and thank you very much indeed for the invitation to be on this panel. So indeed the …
S47
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S48
Regional Leaders Discuss AI-Ready Digital Infrastructure — The discussion highlighted that AI infrastructure development must be understood as part of broader development strategi…
S49
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — It is argued that AI should be found not just in big centers but also in niches, flea markets, and favelas. The aim is t…
S50
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Factors such as restricted access to computing resources and data further impede policy efficacy. Nevertheless, the cont…
S51
Keynote-Ankur Vora — AI can be steered to address humanity’s biggest problems rather than merely pursuing profit. This requires deliberate ch…
S52
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Jason Tucker: Thank you. So I wear two hats. I’m an academic, but I also work in public policy. And this is why I’m sort…
S53
AI for Democracy_ Reimagining Governance in the Age of Intelligence — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S54
High-level AI Standards panel — 3. **Include**: Engaging diverse stakeholders beyond traditional technical communities Amandeep Singh Gill reinforced t…
S55
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — However, the analysis highlights the aviation industry as an example. Despite concerns of regulatory capture, regulation…
S56
AI That Empowers Safety Growth and Social Inclusion in Action — High level of consensus on core principles and challenges, with speakers from different sectors (government, companies, …
S57
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S58
What is it about AI that we need to regulate? — A key distinction emerged around technical versus broader governance issues. InWorkshop 344 on WSIS+20 Technical Layer, …
S59
Science Summit 2025 — A series of sessions will examine how AI is shaping global health, scientific discovery, and governance, with a strong f…
S60
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S61
Why science metters in global AI governance — helping member states move from philosophical debates to technical coordination, and anchor choices in evidence so polic…
S62
Skilling and Education in AI — The Professor took a notably realistic turn in acknowledging that AI will inevitably create new forms of inequality, des…
S63
Unleashing Digital Trade and Investment for Sustainable Development (UN ESCAP) — Additionally, policies that remove barriers to cross-border service delivery can have a significant impact on access to …
S64
Comprehensive Report: Preventing Jobless Growth in the Age of AI — AI democratizes access to expertise and disproportionately benefits lower-skilled workers by providing them with capabil…
S65
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Fireside Chat Moderator- Mariano-Florentino Cuellar — “Emerging markets, 40%, but in advanced economies, 60%”[13]. “Could AI get loose and create havoc on financial markets?”…
S66
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — – Brad Smith- Ashwini Vaishnaw Economic | Development | Sociocultural Georgieva describes AI’s impact on labor markets…
S67
From Innovation to Impact_ Bringing AI to the Public — Sharma’s central thesis positions AI not as a threat to employment but as a productivity multiplier that will enable Ind…
S68
AI: Lifting All Boats / DAVOS 2025 — Kristalina Georgieva presents research showing that AI has the potential to increase global economic growth. This increa…
S69
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — Fink raises concerns about AI adoption patterns based on research showing that educated populations are disproportionate…
S70
Discussion Report: AI as Foundational Infrastructure – A Conversation Between Laurence Fink and Satya Nadella — There’s comparative advantage in countries. There is comparative advantage in firms. That needs to be preserved, even in…
S71
Empowering Inclusive and Sustainable Trade in Asia-Pacific: Perspectives on the WTO E-commerce Moratorium — Katrin Kuhlmann:Thank you so much. I am absolutely delighted to be here, and it’s great to see all of you on a Friday af…
S72
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-fireside-chat-moderator-mariano-florentino-cuellar — Now we move to a conversation about how artificial intelligence needs to be positioned in the global context. And we hav…
S73
AI is here. Are countries ready, or not? | IGF 2023 Open Forum #131 — Additionally, there is apprehension about the potential negative impacts of technology, especially in terms of widening …
S74
Ensuring Safe AI_ Monitoring Agents to Bridge the Global Assurance Gap — Singapore’s approach exemplifies proactive governance through government-led testing of agentic AI in high-stakes citize…
S75
How to make AI governance fit for purpose? — The US strongly opposes regulation and advocates for deregulation, while China emphasizes balanced approach with monitor…
S76
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Dominic Regester:Thank you. Like Daniela said, I’m the director of the Centre for Education Transformation, which is par…
S77
Open Forum #17 AI Regulation Insights From Parliaments — Balancing Innovation and Regulation Balancing innovation incentives with regulatory protection Mentions specific secto…
S78
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — Implement policies supporting displaced workers through industrial, macroeconomic, and social protection measures
S79
NextGen AI Skills Safety and Social Value – technical mastery aligned with ethical standards — Professor Dr. Alok Pandey argued for “de-bureaucratising” education, introducing the concept of “curriculum velocity”—th…
S80
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Governments have collectively affirmed the importance of building trust by governing AI based on human rights, and that …
S81
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S82
WS #97 Interoperability of AI Governance: Scope and Mechanism — Olga Cavalli: Thank you, Mauricio, for this very good examples of cooperation. And I love the standards hub. I like …
S83
WS #462 Bridging the Compute Divide a Global Alliance for AI — Elena Estavillo Flores emphasized the need for “inclusive governance models with meaningful civil society participation”…
S84
AI and Human Connection: Navigating Trust and Reality in a Fragmented World — – Ronen Tanchum- Wanji Walcott Current regulation approaches are inadequate and lag behind technological development L…
S85
Opening — The overall tone was formal yet optimistic. Speakers acknowledged the serious challenges posed by rapid technological ch…
S86
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S87
Powering the Technology Revolution / Davos 2025 — The tone was generally optimistic and forward-looking, with panelists highlighting opportunities for innovation and prog…
S88
WAIGF Opening Ceremony & Keynote — The overall tone was formal yet optimistic. Speakers expressed enthusiasm about the potential of digital technologies wh…
S89
Opening of the session — Focus on mutually acceptable proposals for the future mechanism However, there remains optimism that resolutions on vit…
S90
The State of Digital Fragmentation (Digital Policy Alert) — Furthermore, the analysis highlights the global expansion of digital corporations and the lack of global regulation as p…
S91
Global Risks 2025 / Davos 2025 — The discussion then turned to the risks of economic fragmentation in an increasingly complex global economy. Martina Che…
S92
Operationalizing data free flow with trust | IGF 2023 WS #197 — David Pendle:as we aim to build trust? Thanks Tamim. So I sit on Microsoft’s law enforcement national security team whic…
S93
Future-proofing global tech governance: a bottom-up approach | IGF 2023 Open Forum #44 — However, the existence of numerous international bodies and initiatives addressing similar topics raises concerns about …
S94
World in Numbers: Risks / DAVOS 2025 — The report identified inequality, polarization, and climate change as severe and persistent risks. Environmental risks, …
S95
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S96
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S97
Collaborative Innovation Ecosystem and Digital Transformation: Accelerating the Achievement of Global Sustainable Development Goals (SDGs) — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. Speakers demonstra…
S98
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S99
WS #302 Upgrading Digital Governance at the Local Level — The discussion maintained a consistently professional and collaborative tone throughout. It began with formal introducti…
S100
Presentation of outcomes to the plenary — Reinforcing the need for trust and hope in the global international community.
S101
A Global Compact for Digital Justice: Southern perspectives | IGF 2023 — In sum, this detailed analysis uncovers a complex web of interconnected issues that need unravelling to effectively comb…
S102
Any other business /Adoption of the report/ Closure of the session — In conclusion, the delegate reiterated his gratitude, acknowledging the extensive labours and patience exhibited by the …
S103
Open Forum #11 CTO Open Forum on Digital Cooperation in the Arab Region — Need to strengthen existing mechanisms rather than create new ones
S104
World Economic Forum Annual Meeting Closing Remarks: Summary — Fink concludes the forum with an optimistic philosophy, quoting Elon Musk to emphasize the value of maintaining a positi…
S105
Main Session on Artificial Intelligence | IGF 2023 — Moderator 1 – Maria Paz Canales Lobel:Definitely. Thank you very much for that answer. Christian, we have another questi…
S106
UN OEWG hosts inaugural global roundtable on ICT security capacity building — The UN recently hosted the inauguralGlobal roundtable on ICT security capacity buildingunder the auspices of theOpen-End…
S107
The Global Risks Report 2020 — – 1 WHO (World Health Organization). 2019. Global Health Observatory (GHO) Data: Life Expectancy. https://www.who.int/gh…
S108
Beyond human: AI, superhumans, and the quest for limitless performance & longevity — This comment reframes the scale and urgency of aging research by putting it in stark comparative terms. The war analogy …
S109
WS #259 Multistakeholder Cooperation Ineraof Increased Protectionism — Shifting geopolitical order and erosion of trust are making cooperation increasingly difficult
S110
UNGA/DAY 1/PART 2 — Crisis of trust in multilateral institutions:The world has changed profoundly, and there is a real crisis of trust in mu…
S111
WS #453 Leveraging Tech Science Diplomacy for Digital Cooperation — World is moving toward fragmentation and localization, requiring continued international engagement beyond borders
S112
AI and Digital Developments Forecast for 2026 — Countries are taking different stances risking decentralization
S113
Fireside Conversation: 01 — Amodei sees AI as a catalyst for rapid development in the Global South, offering solutions to longstanding constraints. …
S114
European Parliament Delegation to the IGF & the Youth IGF | IGF 2023 Open Forum #141 — Artificial intelligence could lead to economic growth.
S115
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Hemant Taneja General Catalyst — Taneja argued that India is uniquely positioned to lead in AI deployment due to its status as the world’s strongest grow…
S116
India’s AI infrastructure gets a $15bn lift from Google — Google hasannounced a $15 billion commitmentfor 2026–2030 to build its first Indian AI hub in Visakhapatnam, positioning…
S117
Building the Workforce_ AI for Viksit Bharat 2047 — And I must also congratulate Madam Radha and her team for this launch of digital capacity building allies. But the idea …
S118
https://dig.watch/event/india-ai-impact-summit-2026/driving-indias-ai-future-growth-innovation-and-impact — Awesome. Great question, Midu. And, you know, we as a nation have proven ourselves to be phenomenal adopters of technolo…
S119
How AI Drives Innovation and Economic Growth — <strong>Jeanette Rodrigues:</strong> all around the Bharat Mandapam. So once again, thank you very much for your time th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Kristalina Georgieva
7 arguments · 119 words per minute · 1338 words · 669 seconds
Argument 1
Global GDP could rise by about 0.8% thanks to AI (Kristalina Georgieva)
EXPLANATION
Georgieva states that artificial intelligence can add roughly 0.8% to global gross domestic product. This increase would put world growth ahead of the pre‑COVID trajectory and create more jobs and opportunities.
EVIDENCE
She cites IMF research indicating that AI can lift global growth by close to a percentage point, specifically 0.8% [40-42], and explains that this would mean faster growth than before the pandemic [43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
IMF research presented at Davos 2025 shows AI could lift global growth by roughly 0.8% [S15] and the Leaders’ Plenary cites the same projection [S17].
MAJOR DISCUSSION POINT
Macroeconomic impact of AI
Argument 2
Countries that invest quickly in digital infrastructure and AI skills can achieve up to double the economic gains of slower adopters (Kristalina Georgieva)
EXPLANATION
Georgieva argues that nations that rapidly develop digital infrastructure and AI‑related skills can see twice the economic benefit compared with laggards. Speed of adoption therefore becomes a competitive advantage.
EVIDENCE
She notes that “countries that go fast on digital infrastructure, on skills, on adoption of AI, that they can do twice as well as those that don’t” [50-51].
MAJOR DISCUSSION POINT
Digital readiness and growth
AGREED WITH
Joanna Hill
Argument 3
AI may exacerbate global inequality, giving advantages to countries that are already ahead (Kristalina Georgieva)
EXPLANATION
Georgieva warns that AI could widen existing gaps, benefitting nations that already possess data, capital and technical capacity while leaving others behind. The technology may make the world less fair if not managed inclusively.
EVIDENCE
She identifies the first risk of AI as “making countries and the world less fair. Some have it and others don’t” [57-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN Security Council remarks warned that AI could widen gaps between nations that have data, capital and capacity and those that do not [S16]; the Leaders’ Plenary emphasized that without collective action AI will deepen historical inequalities [S17].
MAJOR DISCUSSION POINT
Inequality risk
AGREED WITH
Josephine Teo
Argument 4
Approximately 40% of jobs worldwide will be affected; the impact is larger in advanced economies (≈60%) than in emerging markets (≈40%) (Kristalina Georgieva)
EXPLANATION
Georgieva quantifies AI’s labour impact, estimating that 40% of jobs globally will feel AI’s influence, with a higher share (about 60%) in advanced economies and about 40% in emerging markets. The effect includes both job enhancement and elimination.
EVIDENCE
She reports that “40% of jobs will be affected by AI… Emerging markets, 40%, but in advanced economies, 60%” [61-63].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat on Trusted AI notes that AI will affect about 40% of jobs globally, with roughly 60% impact in advanced economies and 40% in emerging markets [S7]; the IMF’s Leaders’ Plenary provides the same figures [S17].
MAJOR DISCUSSION POINT
Labour market disruption
Argument 5
Rapid job displacement creates a “tsunami” in the labor market, especially for routine, entry‑level positions (Kristalina Georgieva)
EXPLANATION
Georgieva describes AI‑driven job loss as a tsunami, emphasizing that routine, entry‑level roles are most vulnerable. This rapid displacement could strain social safety nets.
EVIDENCE
She likens the impact to “a tsunami hitting it globally” [61] and later explains that “jobs that disappear tend to be entry-level… they are routine and easily automated” [136-138].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The “tsunami” metaphor for swift job loss is used in the Fireside Chat discussion [S7] and highlighted in analysis of AI’s labor impact [S2]; concerns about the speed of labor-market adjustment are also voiced in a study of AI-driven innovation [S18].
MAJOR DISCUSSION POINT
Job displacement
Argument 6
AI could threaten financial market stability if left unchecked (Kristalina Georgieva)
EXPLANATION
Georgieva flags a third major risk: AI could destabilise financial markets if it operates without adequate safeguards, potentially creating havoc in the financial system.
EVIDENCE
She states that “the third risk we at the IMF worry a lot about is financial stability risk. Could AI get loose and create havoc on financial markets?” [64-66].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists warned that uncontrolled AI could create havoc in financial markets, raising a financial-stability risk [S7].
MAJOR DISCUSSION POINT
Financial stability risk
Argument 7
Building a strong ethical foundation and guardrails is essential to keep AI a force for good rather than a source of harm (Kristalina Georgieva)
EXPLANATION
Georgieva stresses that while technical progress in AI is rapid, the ethical underpinnings lag behind. She calls for robust guardrails that protect against misuse without stifling innovation.
EVIDENCE
She observes that “we have done much more on the technical side of AI, and much less on building that strong ethical foundation, and putting guardrails that are not restricting innovation, but are protecting us from AI for bad” [227-232].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat stresses the need for robust ethical foundations and guardrails that protect without stifling innovation [S7]; a UN meeting similarly calls for guardrails in AI governance [S19].
MAJOR DISCUSSION POINT
AI ethics
AGREED WITH
Mariano Florentino Cuellar, Josephine Teo
Mariano Florentino Cuellar
2 arguments · 188 words per minute · 1337 words · 424 seconds
Argument 1
Existing global institutions (IMF, WTO) can address AI challenges; a new agency is not yet necessary, but cooperation among nations remains vital (Mariano Florentino Cuellar)
EXPLANATION
Cuellar argues that the world already possesses institutions capable of handling AI‑related issues, so creating a new agency is unnecessary. He emphasizes that coordinated action among sovereign states remains essential.
EVIDENCE
He notes that after the initial hype about an “international atomic energy agency for AI,” “we don’t talk about that anymore” because existing mechanisms suffice [235-239].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panelists argued that existing bodies like the IMF and WTO are sufficient and that a new “IAEA for AI” agency is unnecessary, citing the Fireside Chat’s comment on building on existing mechanisms [S7]; the UN Security Council suggested adapting current frameworks for AI governance [S20]; global cooperation is emphasized in the discussion on AI regulation [S21].
MAJOR DISCUSSION POINT
Institutional coordination
AGREED WITH
Joanna Hill
Argument 2
Collaboration, shared standards, and a focus on safety will enable the world to harness AI while preserving social cohesion (Mariano Florentino Cuellar)
EXPLANATION
Cuellar highlights that trust, safety and shared standards are key tools for societies to transition to AI‑driven economies without fracturing social cohesion. He calls for a broad toolbox beyond just technical models.
EVIDENCE
He says “the entire spectrum of tools that a society has to build social cohesion are going to be important… safety, trust, security can make them even more easy to diffuse” [190-195].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat highlighted the importance of shared standards, safety and trust to maintain social cohesion while diffusing AI [S7]; Benifei’s call for common AI standards underscores this need [S22]; coordinated international response is discussed as essential for AI challenges [S21].
MAJOR DISCUSSION POINT
Cooperation and safety
AGREED WITH
Joanna Hill
Josephine Teo
5 arguments · 177 words per minute · 794 words · 268 seconds
Argument 1
Relying solely on AI regulation to solve social inequality is unrealistic; broader social safety nets (housing, health care, education) are needed (Josephine Teo)
EXPLANATION
Teo argues that regulation alone cannot address the social inequalities AI may generate. She advocates for complementary policies such as housing, health care and education to protect vulnerable groups.
EVIDENCE
She states that “to over-expect AI regulations to deliver on the other important issues… is unrealistic” and proposes “what provisions do we put in place to help people move from one job to the next… ensure that even people who don’t earn a lot have the prospect of owning their own homes, access to good health care, educating their children” [183-187].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI and disinformation session argued that addressing AI-driven inequality requires broader social policies beyond technical fixes, such as housing, health care and education [S24]; a study on AI-driven inequality similarly notes the need for comprehensive social measures [S23].
MAJOR DISCUSSION POINT
Social safety nets
AGREED WITH
Kristalina Georgieva
Argument 2
Singapore aims to be a “trusted node” by maintaining consistent, principled policies that ensure technology is used responsibly (Josephine Teo)
EXPLANATION
Teo describes Singapore’s strategy of acting as a trusted intermediary in the global AI ecosystem, emphasizing consistency, principled decision‑making and the ability to safeguard technology from misuse.
EVIDENCE
She explains that a “trusted node” means “we can trust you with our technology… the only way to remain trusted is if we act in a consistent and principled way” [97-103].
MAJOR DISCUSSION POINT
Trusted node concept
Argument 3
Small states can stay relevant by making technology choices based on performance, security, and national interest rather than size (Josephine Teo)
EXPLANATION
Teo argues that a small country’s relevance comes from choosing technologies that meet performance, security and resilience criteria, not from its size. Decisions are left to operators who evaluate based on these principles.
EVIDENCE
She cites the 5G example, noting that “commercial decisions… have to be undertaken… based on performance, security, resilience, keeping in mind what are all the rules” [109-112].
MAJOR DISCUSSION POINT
Technology choice for small states
Argument 4
The Model AI Governance Plan exemplifies proactive, multi‑stakeholder regulation that balances innovation with safeguards (Josephine Teo)
EXPLANATION
Teo points to Singapore’s Model AI Governance Plan as a concrete illustration of forward‑looking, multi‑stakeholder regulation that seeks to foster innovation while protecting against risks.
MAJOR DISCUSSION POINT
Model AI Governance Plan
Argument 5
Public trust is the single most critical factor for AI’s long‑term acceptance and success (Josephine Teo)
EXPLANATION
Teo stresses that societal acceptance of AI hinges on public trust. If citizens do not trust AI to protect their livelihoods and rights, the technology’s deployment will be deemed a failure.
EVIDENCE
She asks whether citizens “trust this technology” and argues that a negative answer would signal failure, whereas confidence that AI does not rob people of livelihood or safety indicates progress [212-215].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Fireside Chat underlined that trust is a key prerequisite for successful AI deployment; without public confidence the technology is likely to fail [S7].
MAJOR DISCUSSION POINT
Trust as a prerequisite
AGREED WITH
Mariano Florentino Cuellar, Kristalina Georgieva
Joanna Hill
4 arguments · 158 words per minute · 487 words · 184 seconds
Argument 1
AI is projected to boost global trade growth by up to 40% by 2040 (Joanna Hill)
EXPLANATION
Hill shares WTO research forecasting that AI could raise global trade growth by roughly 40% by 2040, offering substantial opportunities for middle‑ and lower‑income economies.
EVIDENCE
She states “our research suggests that by the year 2040, trade growth could be almost growing by 40%” [80] and links this to opportunities for lower-income economies [81].
MAJOR DISCUSSION POINT
Trade growth potential
Argument 2
Trade can accelerate AI diffusion to low‑ and middle‑income economies, but comparative advantage is shifting toward data‑rich, capital‑intensive nations (Joanna Hill)
EXPLANATION
Hill notes that trade can help spread AI to countries that need it, yet the technology reshapes comparative advantage toward nations with abundant data, capital and computing power, putting labor‑intensive economies at risk.
EVIDENCE
She observes that “trade can help the diffusion of AI to those that most need it” and that “AI is really shifting what we think of as comparative advantage to those economies that are more strong in capital, data, and in computing power” [75-78].
MAJOR DISCUSSION POINT
AI and comparative advantage
Argument 3
Realizing AI‑driven trade benefits requires investment in skills, digital infrastructure, and appropriate regulations (Joanna Hill)
EXPLANATION
Hill emphasizes that to capture AI‑related trade gains, countries must invest in digital skills, infrastructure and develop regulatory frameworks that keep pace with technological change.
EVIDENCE
She stresses “the importance of investing in skills and regulations and in infrastructure, digital infrastructure are incredibly important” and notes that “our trade agreements… can develop with AI, but there are areas still too new” [79-84].
MAJOR DISCUSSION POINT
Prerequisites for AI‑enabled trade
AGREED WITH
Kristalina Georgieva
Argument 4
Existing WTO agreements support AI‑related goods and services, yet gaps remain that must be addressed as the technology evolves (Joanna Hill)
EXPLANATION
Hill points out that current WTO rules already cover many AI‑related goods and services, but acknowledges that certain aspects remain under‑developed and will need updating as AI matures.
EVIDENCE
She says “our trade agreements… can develop with AI. But there are some areas where they’re still too new and still too nuanced” [83-85].
MAJOR DISCUSSION POINT
WTO framework gaps
Agreements
Agreement Points
Digital infrastructure and skills are essential to capture AI’s economic benefits
Speakers: Kristalina Georgieva, Joanna Hill
Countries that invest quickly in digital infrastructure and AI skills can achieve up to double the economic gains of slower adopters (Kristalina Georgieva) Realizing AI‑driven trade benefits requires investment in skills, digital infrastructure, and appropriate regulations (Joanna Hill)
Both speakers stress that rapid development of digital infrastructure and AI-related skills is a prerequisite for reaping the economic upside of AI, with fast adopters potentially achieving twice the gains of laggards [50-51][79-84].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for infrastructure and skills is echoed in UNCTAD’s analysis of the digital economy, which stresses integrated development of computing resources, data, and training to reduce inequality [S49], and in the Open Forum discussion that highlights access to infrastructure, datasets and technical skills as pillars of inclusive AI [S47].
AI poses inequality risks and requires broader social safety nets
Speakers: Kristalina Georgieva, Josephine Teo
AI may exacerbate global inequality, giving advantages to countries that are already ahead (Kristalina Georgieva) Relying solely on AI regulation to solve social inequality is unrealistic; broader social safety nets (housing, health care, education) are needed (Josephine Teo)
Georgieva warns that AI can make the world less fair and displace jobs, calling for social protection measures, while Teo argues that regulation alone cannot address inequality and advocates for housing, health, and education supports [57-60][145][183-187].
POLICY CONTEXT (KNOWLEDGE BASE)
Reports warn that AI can concentrate capital and shrink labour’s share, heightening inequality and calling for expanded social protection measures [S35]; the World Economic Forum notes social risk is under-addressed despite its importance [S36]; the UN High Commissioner urges a human-rights-centered approach that includes safety nets [S38].
Trust is the cornerstone for successful AI deployment
Speakers: Mariano Florentino Cuellar, Josephine Teo, Kristalina Georgieva
Building a strong ethical foundation and guardrails is essential to keep AI a force for good rather than a source of harm (Kristalina Georgieva) Public trust is the single most critical factor for AI’s long‑term acceptance and success (Josephine Teo)
All three speakers converge on trust as pivotal: Mariano closes with “trust” as the key word, Georgieva links trust to an ethical foundation and guardrails, and Teo ties public confidence to AI’s legitimacy [210-211][212-215][227-232].
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple high-level statements underline trust as foundational, from the AI Impact Summit 2026 calling trust essential for collective progress [S44] to the Digital Embassies discussion linking trust to national reputation [S45] and the Global AI Standards panel emphasizing trust-building through shared standards [S40].
Existing multilateral institutions can address AI challenges; new agencies are not yet required
Speakers: Mariano Florentino Cuellar, Joanna Hill
Existing global institutions (IMF, WTO) can address AI challenges; a new agency is not yet necessary, but cooperation among nations remains vital (Mariano Florentino Cuellar) The world trading system can develop with AI, but gaps remain that require partnership with international organisations and the private sector (Joanna Hill)
Mariano argues that current bodies like the IMF and WTO are sufficient for AI governance, while Hill notes that WTO frameworks already cover many AI-related trade issues but will need collaborative updates, indicating confidence in existing institutions [235-239][197-202].
POLICY CONTEXT (KNOWLEDGE BASE)
The Global AI Policy Framework notes consensus on leveraging existing institutions rather than creating new bodies [S57]; the EU AI Act demonstrates how existing regional mechanisms can deliver comprehensive regulation [S46]; and the Setting the Rules report highlights the role of established standards bodies in fostering cooperation [S40].
Collaboration, shared standards and safety are needed to harness AI while preserving social cohesion
Speakers: Mariano Florentino Cuellar, Joanna Hill
Collaboration, shared standards, and a focus on safety will enable the world to harness AI while preserving social cohesion (Mariano Florentino Cuellar) We need to partner at our level with international organisations and at the national level with the appropriate authorities and the private sector in order to have that holistic approach (Joanna Hill)
Both speakers emphasize that multilateral cooperation, common standards and safety considerations are essential to diffuse AI benefits without fracturing societies [190-195][197-202].
POLICY CONTEXT (KNOWLEDGE BASE)
The High-level AI Standards panel calls for multidisciplinary collaboration and safety-focused standards to maintain social cohesion [S54]; the How to make AI governance fit for purpose? briefing stresses shared standards and cooperative frameworks as essential [S41]; and the Global AI Standards discussion underscores consensus on safety and trust [S40].
Similar Viewpoints
Both see AI as a powerful engine for macro‑economic growth—Georgieva through overall GDP lift and Hill through a substantial trade expansion—underscoring the technology’s potential to accelerate the global economy [40-44][80-84].
Speakers: Kristalina Georgieva, Joanna Hill
Global GDP could rise by about 0.8% thanks to AI (Kristalina Georgieva) AI is projected to boost global trade growth by up to 40% by 2040 (Joanna Hill)
Both stress the necessity of principled, ethical frameworks—Georgieva at the global guardrail level and Teo at the national policy level—to guide AI development responsibly [227-232][97-103].
Speakers: Kristalina Georgieva, Josephine Teo
Building a strong ethical foundation and guardrails is essential to keep AI a force for good rather than a source of harm (Kristalina Georgieva) Singapore aims to be a “trusted node” by maintaining consistent, principled policies that ensure responsible technology use (Josephine Teo)
Both identify trust as the decisive factor that will determine whether AI deployment is perceived as legitimate and beneficial [210-211][212-215].
Speakers: Mariano Florentino Cuellar, Josephine Teo
Public trust is the single most critical factor for AI’s long‑term acceptance and success (Josephine Teo) For me, that one word is trust (Mariano Florentino Cuellar)
Unexpected Consensus
All three high‑level speakers (Georgieva, Hill, Teo) converge on the need for social protection and safety nets beyond pure AI regulation
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo
AI may exacerbate global inequality, giving advantages to countries that are already ahead (Kristalina Georgieva) Realizing AI‑driven trade benefits requires investment in skills, digital infrastructure, and appropriate regulations (Joanna Hill) Relying solely on AI regulation to solve social inequality is unrealistic; broader social safety nets are needed (Josephine Teo)
While Georgieva focuses on macro-level inequality, Hill on trade-related opportunities, and Teo on national policy, all three stress that AI’s challenges cannot be solved by technology policy alone and require complementary social protection measures, a convergence that was not explicitly anticipated at the start of the panel [57-60][145][183-187].
POLICY CONTEXT (KNOWLEDGE BASE)
Their convergence mirrors findings in AI for Social Empowerment that stress broader safety nets [S35] and the UN High Commissioner’s call for human-rights-centric safeguards alongside AI development [S38]; the Comprehensive Report on preventing jobless growth also highlights the necessity of social protection alongside AI policy [S64].
Overall Assessment

The panel shows a strong consensus that AI’s promise can only be realised through robust digital infrastructure, skill development, ethical guardrails, and, crucially, public trust. Speakers across institutions agree that existing multilateral bodies are capable of steering AI governance, provided they cooperate and address social safety‑net gaps. Divergence remains on the specifics of new institutional arrangements, but the shared emphasis on trust, cooperation, and inclusive policy indicates a high level of alignment.

High consensus on the need for trust, digital readiness, and social protection; moderate consensus on institutional sufficiency; limited disagreement on the creation of new agencies. This consensus suggests that future policy initiatives are likely to focus on strengthening existing frameworks, investing in infrastructure and skills, and building public confidence in AI.

Differences
Different Viewpoints
Which policy instruments are most effective for ensuring that AI benefits are shared equitably
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo
Education has to be revamped for a new world; support for those whose local economies are being dramatically changed; social protection so they don’t feel like what happened with industrial-world workers in the United States when their jobs were exported overseas (Georgieva) [145-148] Trade can help the diffusion of AI to those that most need it, but comparative advantage is shifting toward capital-, data- and computing-intensive economies; therefore investment in skills, digital infrastructure and appropriate regulations is required (Hill) [75-84] Regulation alone cannot solve the social-inequality problem; broader social safety-net measures such as housing, health-care, education and mechanisms to help people move between jobs are needed (Teo) [183-187]
Georgieva argues for macro-level education reform, social protection and targeted support; Hill stresses that trade mechanisms, WTO frameworks and skill-building are the main levers; Teo contends that regulation is insufficient and calls for wider social policies and trust-building measures. The three speakers therefore disagree on the primary tools to achieve equitable AI outcomes [145-148][75-84][183-187].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy pathways identified by the Open Forum emphasize targeted instruments such as inclusive financing and skill programs to promote equitable AI benefits [S47]; UN ESCAP’s digital trade and investment policies are cited as levers to generate decent work and reduce inequality [S63]; and the Comprehensive Report discusses mechanisms to prevent jobless growth [S64].
How AI governance should be framed – ethical guardrails versus broader trust‑building and social measures
Speakers: Kristalina Georgieva, Josephine Teo
– We have done much more on the technical side of AI and much less on building a strong ethical foundation and guardrails that protect us from AI used for bad without restricting innovation (Georgieva) [227-232]
– Public trust is the single most critical factor for AI’s long-term acceptance; over-expecting regulation to solve inequality is unrealistic and we need complementary policies such as housing, health-care and education (Teo) [212-215][183-187]
Georgieva emphasizes the need for formal ethical frameworks and guardrails as a core part of AI governance, while Teo places trust at the centre and argues that regulation alone cannot address inequality, calling for broader social policies. This reflects a divergence on whether governance should focus on ethical rules or on building public trust and social safety nets [227-232][212-215][183-187].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on framing appear in the How to make AI governance fit for purpose? briefing, which contrasts technical guardrails with broader trust-building approaches [S41]; the new framing discussion at WSIS+20 highlights the shift from pure standards to inclusive governance models [S58]; and the Advancing Scientific AI report underscores ethical and safety principles as a foundation for governance [S42].
Unexpected Differences
Whether trade can be a primary lever to reduce AI‑driven inequality
Speakers: Kristalina Georgieva, Joanna Hill
– AI may make the world less fair, with some countries having it and others not (Georgieva) [57-59]
– Trade can help the diffusion of AI to low- and middle-income economies and offers a pathway to reduce gaps (Hill) [75-78]
Georgieva highlights AI’s tendency to widen existing inequities, whereas Hill presents trade as a mechanism that can counteract those gaps by spreading AI benefits. The contrast between viewing AI as a force that deepens inequality versus a catalyst that trade can mitigate was not anticipated given the common focus on AI’s risks. [57-59][75-78]
POLICY CONTEXT (KNOWLEDGE BASE)
UN ESCAP argues that trade policies removing barriers to cross-border services can improve access to AI-enabled services and reduce inequality [S63]; UNCTAD’s digital economy analysis stresses the role of trade in spreading AI benefits beyond major hubs [S49]; and the Open Forum notes that equitable access to infrastructure, a trade-related issue, is crucial for inclusive AI [S47].
Overall Assessment

The panel broadly agrees on AI’s transformative potential and the centrality of trust, but diverges on the policy pathways to achieve equitable outcomes. The main points of contention revolve around the preferred instruments—macro‑education and social protection (Georgieva), trade‑focused mechanisms (Hill), and broader social safety nets plus trust‑building (Teo)—and the emphasis on ethical guardrails versus trust‑centric approaches. A secondary, unexpected clash appears between Georgieva’s view of AI as a driver of inequality and Hill’s optimism that trade can offset that trend.

Moderate to high disagreement on policy design, with implications that coordinated, multi‑sectoral strategies will be needed to reconcile differing approaches. The lack of consensus may slow the formulation of unified global guidelines, requiring more nuanced, region‑specific solutions.

Partial Agreements
All four speakers concur that AI presents both significant opportunities and serious risks and that trust—whether in institutions, trade systems, or technology—must underpin any successful AI strategy. However, they differ on the mechanisms to build that trust and manage the risks (e.g., ethical guardrails, trade frameworks, social safety nets) [66-68][218-224][212-215][190-195].
Speakers: Kristalina Georgieva, Joanna Hill, Josephine Teo, Mariano Florentino Cuellar
– Embrace the opportunities of AI while being mindful of the risks (Georgieva) [66-68]
– Trust is essential for trade and AI diffusion (Hill) [218-224]
– Public trust is the most critical factor for AI’s success (Teo) [212-215]
– Trust, safety and shared standards are key to harness AI without fracturing social cohesion (Cuellar) [190-195]
Takeaways
Key takeaways
– AI can add roughly 0.8% to global GDP and boost trade growth by up to 40% by 2040, especially for countries that invest early in digital infrastructure and AI skills.
– The technology also poses significant risks: heightened inequality, large‑scale labor market disruption (affecting ~40% of jobs globally, up to 60% in advanced economies), and potential threats to financial stability.
– Trade is a key channel for diffusing AI to low‑ and middle‑income economies, but shifting comparative advantage toward data‑rich, capital‑intensive nations requires skill development, infrastructure, and updated regulations.
– Singapore’s “trusted node” approach—consistent, principle‑based governance and the Model AI Governance Plan—offers a practical model for small states navigating geopolitical fragmentation.
– Public trust and a robust ethical foundation are essential for AI’s long‑term acceptance; trust cannot be achieved through regulation alone, but through a mix of safeguards, social safety nets, and transparent institutions.
– Existing global institutions (IMF, WTO) can address many AI challenges; a new dedicated agency is not yet necessary, but coordinated action and shared standards are critical.
Resolutions and action items
– IMF will continue to monitor AI’s macro‑economic and labor‑market impacts and provide policy guidance to member countries.
– WTO will examine gaps in current trade agreements related to AI‑enabled goods and services and work with other international bodies to develop complementary rules.
– Singapore will maintain its role as a trusted node and promote the Model AI Governance Plan as a template for multi‑stakeholder regulation.
– All participants emphasized the need for countries to strengthen social protection systems (housing, health care, education) to cushion labor‑market transitions.
Unresolved issues
– Specific mechanisms for ensuring equitable access to AI technologies across divergent economies remain undefined.
– How to design effective, globally‑coordinated financial‑stability safeguards for AI‑driven market activities.
– The precise regulatory framework that balances innovation with risk mitigation without stifling growth.
– Details on how WTO agreements should be updated to cover emerging AI‑related trade issues.
– Funding and governance models for the expanded social safety nets required to support displaced workers.
Suggested compromises
– Use existing institutions (IMF, WTO) rather than creating a new international AI agency, while enhancing cooperation among them.
– Combine targeted AI regulation with broader social policies (education revamp, safety nets) to address inequality and labor disruption.
– Adopt a principle‑based, technology‑neutral approach (as exemplified by Singapore) that allows alignment with multiple major powers while protecting national interests.
Thought Provoking Comments
AI can lift global growth by about 0.8%, meaning the world would grow faster than before COVID, but it also poses a "tsunami" for labor markets – up to 40% of jobs in emerging markets and 60% in advanced economies could be affected, with significant displacement risks.
She quantifies both the macro‑economic upside and the scale of labor disruption, framing AI as a double‑edged sword that demands immediate policy attention.
Her figures set the quantitative baseline for the whole panel, prompting Joanna Hill to discuss trade‑related opportunities and risks, and leading Josephine Teo to argue for broader social safety nets beyond mere regulation.
Speaker: Kristalina Georgieva
AI is shifting comparative advantage toward capital, data and computing power, putting labor‑intensive economies at risk, yet trade could grow by almost 40% by 2040 if the right policies are in place.
She links AI to the core WTO concept of comparative advantage, highlighting both a structural threat and a massive growth opportunity for lower‑income countries.
Her statement broadened the conversation from pure economic growth to the role of trade policy, leading the moderator to ask about Singapore’s governance model and later prompting a discussion on the need for coordinated international trade rules.
Speaker: Joanna Hill
Singapore aims to be a "trusted node" in the AI ecosystem – a small state that remains reliable by acting consistently and principled, regardless of which major power it aligns with.
The notion of a trusted node reframes the debate from competition between superpowers to the strategic value of credibility and principled governance for smaller nations.
This concept introduced a new perspective on how countries can navigate geopolitical decoupling, influencing the moderator’s later focus on trust as the central theme and encouraging other panelists to consider institutional credibility.
Speaker: Josephine Teo
Regulating AI alone will not solve social inequality; we must strengthen social solidarity through housing, health care, and education to help people transition between jobs.
She challenges the common assumption that policy can be solved solely through AI‑specific regulation, urging a holistic approach to societal resilience.
Her critique shifted the tone from technical regulation to broader welfare policy, prompting Kristalina to elaborate on education reform and social protection, and reinforcing the panel’s move toward systemic solutions.
Speaker: Josephine Teo
Education must be revamped so people learn how to learn, not just specific skills; social protection is essential for those whose jobs are displaced; and the enabling environment (digital infrastructure, entrepreneurship) determines whether AI accelerates growth in a country.
She expands the discussion beyond macro‑growth to the foundational pillars needed for inclusive AI adoption, emphasizing lifelong learning and safety nets.
This deepened the analysis, leading the moderator to ask about concrete strategies for shared prosperity and prompting other speakers to align their points around education, infrastructure, and trust.
Speaker: Kristalina Georgieva
Trust is the single most critical factor for a successful AI future – if citizens do not trust the technology, we have failed; trust must be built through ethical foundations, safety, and transparent governance.
The repeated emphasis on trust synthesizes the varied concerns (risk, regulation, trade, governance) into a unifying principle, highlighting the social contract needed for AI deployment.
This became the concluding pivot of the discussion, shaping the final round of answers where each panelist framed their vision of a successful AI future around trust, thereby providing a cohesive takeaway for the audience.
Speaker: Josephine Teo (later echoed by all panelists)
Overall Assessment

The discussion was driven forward by a handful of high‑impact remarks that moved the conversation from abstract optimism to concrete challenges and solutions. Kristalina Georgieva’s quantification of AI’s growth potential and labor‑market disruption set the agenda, while Joanna Hill linked those dynamics to trade and comparative advantage. Josephine Teo introduced the novel “trusted node” concept and critiqued over‑reliance on regulation, prompting a shift toward broader social policies. Kristalina’s call for education reform and social protection added depth, and the recurring theme of trust, championed by Teo and echoed by all panelists, unified the diverse viewpoints into a clear, actionable message. Collectively, these comments redirected the dialogue from speculative benefits to the practical, institutional, and societal foundations required for inclusive AI adoption.

Follow-up Questions
Could AI get loose and create havoc on financial markets?
Assessing systemic financial stability risks of AI-driven trading and algorithms is crucial for preventing market disruptions.
Speaker: Kristalina Georgieva
What is the precise impact of AI on labor markets, especially the projected job displacement rates (40% in emerging markets, 60% in advanced economies) and the socioeconomic consequences?
Understanding the scale and nature of AI-induced job changes is essential for designing effective labor and social policies.
Speaker: Kristalina Georgieva
How can the productivity gains from AI be translated into shared prosperity, particularly for middle‑income workers who may be squeezed out?
Ensuring inclusive growth requires policies that channel AI benefits to broader segments of society rather than concentrating them.
Speaker: Kristalina Georgieva
What reforms are needed in education systems to shift from teaching specific skills to fostering the ability to learn continuously in an AI‑driven world?
A revamped education model is vital to prepare the workforce for rapid technological change and lifelong learning.
Speaker: Kristalina Georgieva
What social protection mechanisms (e.g., safety nets, retraining programs, housing, healthcare) are most effective for workers displaced by AI?
Comprehensive social support can mitigate inequality and social unrest caused by AI‑induced labor market shifts.
Speaker: Kristalina Georgieva, Josephine Teo
How do differences in digital infrastructure and overall enabling environments across countries affect AI adoption, and what targeted investments are needed?
Identifying infrastructure gaps helps allocate resources to regions where AI can be most impactful.
Speaker: Kristalina Georgieva
What are the patterns of AI skill supply‑demand mismatches in different countries (more demand than supply, more supply than demand, or neither), and how should capacity‑building be tailored?
Tailored skill development strategies are required to address specific national labor market needs.
Speaker: Kristalina Georgieva
How should the international trading system evolve to address AI‑related inequities, including updates to trade agreements on AI services, data flows, and competition policy?
Modernizing trade rules can facilitate equitable diffusion of AI technologies and benefits.
Speaker: Joanna Hill
Which specific aspects of AI in trade agreements remain too new or nuanced, and what further rule‑making is required?
Clarifying these areas will provide legal certainty for businesses and governments adopting AI.
Speaker: Joanna Hill
What are the underlying assumptions and scenarios behind the projection that AI could boost global trade growth by 40% by 2040, and how robust are these forecasts?
Detailed modeling is needed to validate and refine trade growth expectations linked to AI.
Speaker: Joanna Hill
How can small states like Singapore maintain and demonstrate ‘trusted node’ status in the global AI ecosystem?
Establishing trust frameworks is key for small nations to participate securely in cross‑border AI collaborations.
Speaker: Josephine Teo
What strategies can mitigate the risks of technology decoupling between major powers (e.g., US and China) for smaller economies?
Understanding decoupling dynamics helps small states navigate geopolitical tensions while preserving AI access.
Speaker: Josephine Teo
To what extent can AI regulations alone address rising social inequality, and what complementary policies are needed?
A holistic policy mix is required to tackle inequality beyond regulatory measures.
Speaker: Josephine Teo
Is there a need for a new international AI governance institution or treaty, and how adequate are existing mechanisms (e.g., WTO, IMF) in handling emerging AI challenges?
Evaluating the sufficiency of current institutions informs decisions on creating new global governance structures for AI.
Speaker: Mariano Florentino Cuellar
What mechanisms can monitor and support vulnerable communities during the AI transition to avoid backlash similar to that seen with earlier globalization waves?
Proactive monitoring can prevent social unrest by addressing adverse impacts early.
Speaker: Kristalina Georgieva
How can public trust in AI be measured across different countries, and what indicators best reflect citizens’ confidence in AI systems?
Reliable trust metrics are essential for assessing the societal acceptance of AI and guiding governance.
Speaker: Josephine Teo

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable


Session at a glance

Summary

The panel convened to examine how artificial intelligence can be responsibly integrated into health care to improve outcomes while ensuring safety and equity [15]. Zameer Brey warned that AI-assist tools are often introduced too early in clinical workflows, and argued that progress should be judged by tangible health improvements such as better TB diagnosis or diabetes adherence [12-16]. Using a flight-safety analogy, he stressed that health-care AI must aim for essentially zero risk of error and called for a shift from opaque “black-box” models to transparent, verifiable “glass-box” systems that log inputs, logic and safeguards against harmful prescriptions [22-30]. He concluded by inviting partners to collaborate on pathways toward verified AI, emphasizing the need for traceable decision chains that satisfy legal and regulatory requirements [31-35].


Professor Prokar Dasgupta, representing Responsible AI UK, emphasized that implementation, not just invention, is critical and described initiatives such as placing AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete projects including an ambient-AI system that reduces operating-room time, a tele-surgery platform enabling surgeons to operate remotely with sub-60 ms latency, and a robotic system capable of fully automated gallbladder removal, illustrating AI’s potential to expand equitable surgical access [49-53]. Dasgupta also noted limited public acceptance, observing that only a single hand was raised when clinicians were asked to volunteer for fully automated procedures, underscoring the importance of trust [54-57]. He argued that sustained investment must also target workforce development, because few medical curricula currently include AI training, and without such skills the promised benefits will not materialize [60].


Alain Labrique reinforced the shift from focusing on algorithmic accuracy to measuring real-world impact, acknowledging that clinician behavior change is slow but feasible when humans remain in the loop [62-65].


Payden P. summarized the discussion by declaring that AI in health has reached an inflection point, moving from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined that future investment must extend beyond innovation to include governance, regulation, evidence generation, data systems, workforce readiness and long-term partnerships, which together build the trust that unlocks sustainable funding [110-118]. The panel concluded that achieving equitable health outcomes with AI will depend on building verified, transparent systems, securing cross-sector trust, and investing in the people and infrastructure needed to translate promise into progress [119-120].


Keypoints


Major discussion points


The need for “verified” AI that is transparent and risk-free in healthcare.


Zameer argued that health-AI must move from a “black-box” to a “glass-box” model, providing a full audit trail of inputs, logic and safeguards to ensure zero-risk prescribing and decision-making [22-31].


Shifting focus from hype to concrete investment, implementation and impact.


Payden highlighted that AI in health has reached an inflection point where the conversation is now about funding governance, regulation, evidence generation, workforce readiness and long-term partnerships to make AI an equitable tool rather than a source of new inequalities [101-112].


Barriers to clinical adoption and the need for research-driven change.


Zameer noted the entrenched nature of medical workflows and asked what level of clinical research and evaluation investment is required to alter long-standing practice patterns [38-40].


Global equity, data diversity and real-world pilots as a pathway to inclusive AI.


Prokar described initiatives such as Responsible AI UK’s hospital champions, tele-surgery trials, and robotic automation projects, stressing that diverse data and equitable access are essential for success [46-60].


Human-centered AI and coordinated donor/partner action.


Several speakers (Ken, Haitham, Zameer) called for keeping people at the core of AI systems, aligning donor strategies, and building coordinated, cross-sector partnerships to ensure AI benefits are realized responsibly [70][72-76][83-92].


Overall purpose / goal of the discussion


The panel was convened to move the conversation on artificial intelligence in health from speculative promise to practical, equitable impact. Participants examined how to verify AI safety, invest in the necessary infrastructure and evidence base, overcome clinical inertia, and ensure global inclusivity, ultimately seeking a shared roadmap for responsible implementation.


Overall tone and its evolution


– The session opened with formal, repetitive gratitude, establishing a courteous but neutral atmosphere.


– It quickly shifted to a critical and analytical tone, using analogies (e.g., flight safety) to stress the zero-risk expectation for health AI [22-25].


– As speakers presented concrete examples and investment needs, the tone became optimistic and solution-focused, highlighting pilots, partnerships, and skill-building [46-60][101-112].


– The concluding remarks returned to a collaborative and rallying tone, urging coordinated donor action and emphasizing human-centered design [70][72-76][83-92].


Overall, the discussion progressed from polite acknowledgment to a rigorous debate on challenges, and finally to a hopeful call for collective action.


Speakers

Zameer Brey – Panelist (speaker)


Ken Ichiro Natsume – Assistant Director General at the World Intellectual Property Organization (WIPO), policy expert on international intellectual property matters [S3]


Prokar Dasgupta – Professor, practicing surgeon, innovator; leads AI implementation initiatives, affiliated with King’s College London (mentioned “my own group in King’s”) [S5]


Alain Labrique – Panel moderator/facilitator, expert in digital health interventions and global health partnerships [S8]


Justice Prathiba M. Singh – Justice (judicial title)


Haitham Ali Ahmed El-Noush – (role not specified)


Payden P. – Closing speaker, panel participant


Additional speakers:


– Elaine – referenced in discussion about legislative chain of proof (no role or title provided)


– Justice Simo – referenced as nodding, judicial title implied (no further details)


– Dr. Pagan – mentioned by Alain Labrique as “Dr. Pagan” (no role or title provided)


Full session report

The session opened with a series of formal thank-you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began [1-6][7-8].


Zameer Brey began by stating “This is the product flow” three times, using the diagram as the framework for the discussion [12-13]. He then questioned the premature placement of AI-assist functions within clinical workflows, noting that clinicians usually complete all preparatory steps before being offered AI support, a design choice that “moved the AI assist button earlier on” and altered outcomes [12-13]. He argued that true progress must be demonstrated through measurable health benefits rather than merely deploying AI tools [12-13]. Brey used a flight-safety analogy, asserting that in health care the acceptable failure rate must be effectively zero; a 95 % safety margin would be intolerable, and even a 99 % margin would imply one fatal crash per hundred flights [22-25]. From this premise he advocated a shift from opaque “black-box” models to transparent “glass-box” systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions, including checks for allergies and catastrophic errors [26-30]. He concluded by inviting partners to co-create pathways toward verified AI, emphasizing a traceable decision chain that satisfies legal and regulatory requirements, a point underscored by Justice Simo’s nod of approval [31-35].


Brey also highlighted the entrenched nature of medical practice, describing clinicians as “well-involved and well-trodden” in their workflows and asking what level of clinical research and evaluation investment is required to shift these long-standing pathways [38-40]. This set the stage for a broader debate on the resources needed to overcome professional inertia.


Prokar Dasgupta, speaking on behalf of Responsible AI UK, reframed the conversation around implementation rather than invention. He noted that the programme places AI champions in hospitals across the UK, India and Africa to accelerate adoption [46-48]. He cited concrete pilots: an ambient-AI system that drafts clinical notes and saves a month of operating-room time [49-50]; a tele-surgery platform (2,500 km distance, ≤60 ms latency) that could bring specialist surgery to underserved regions [51]; and a fully autonomous robotic system for gallbladder removal, described as “100 % accurate” even as the audience expressed scepticism, with only a single hand raised when clinicians were asked to volunteer [55-57]. Dasgupta stressed that equitable impact depends on diverse data sets, illustrating the point with a story about his mother’s watch and the lack of diversified data [51-53]. He emphasized that without patient participation the investment will fail [57-58] and outlined the need to work with the “three C’s” (companies, governments, and civil society) to ensure responsible deployment [58-60]. He warned that without skilled health-workforce training, currently absent from most medical and nursing curricula, the investments will fail [60]. Dasgupta also referenced the “Wieselbaum test” as a future societal-impact benchmark [66-68].


The human-centred principle resonated across the panel. Ken Ichiro Natsume reiterated that AI should be leveraged with “human beings at the centre of those utilizations” [74-75]. Justice Prathiba M. Singh summed up the sentiment with a hopeful “Here’s to a healthier world” and called for technology and development initiatives to work together [77-79]. Alain Labrique added that the focus should shift from algorithmic accuracy to real-world impact, arguing that benchmarks ought to measure behavioural change and health outcomes rather than pure predictive performance [62-65].


Payden P. provided the closing synthesis, declaring that AI in health has reached an inflection point where the debate has moved from speculative possibilities to concrete investment, implementation and impact [101-104]. He outlined four pillars of future funding: (1) governance and regulation to ensure safety and trust; (2) evidence generation to demonstrate efficacy; (3) workforce readiness and capacity-building; and (4) long-term, cross-sector partnerships [110-118]. Trust, he argued, is the “currency that unlocks sustainable investment” [118-119], and he called on donors, governments and industry to collaborate in building these foundations [120].


Complementing this, Haitham Ali Ahmed El-Noush stressed the need for coordinated donor strategies, urging the development of shared priorities and pooled investments to rally behind AI-health initiatives [70].


Across the discussion, several points of agreement emerged. All speakers endorsed a human-centred, transparent approach to AI, the necessity of coordinated investment beyond pure innovation, and the imperative that AI benefits be equitably distributed (e.g., Brey’s glass-box vision, Dasgupta’s global pilots, Labrique’s impact focus, Natsume’s human-in-the-loop stance, and Haitham’s donor coordination) [24-30][46-48][62-65][74-75][70]. They also concurred that trust must be built through verifiable systems, robust governance and demonstrable outcomes [24-30][101-108][110-118].


Notable disagreements surfaced. First, Brey’s demand for zero-risk, fully verified AI contrasted with Dasgupta’s promotion of high-autonomy tools that, while touted as “100 % accurate,” still faced public reluctance, revealing tension between ideal safety standards and pragmatic deployment [24-30][55-57]. Second, the allocation of funding diverged: Brey emphasized resources for verification infrastructure [26-30], whereas Payden and Labrique argued for broader system-wide investments in regulation, data infrastructure and capacity-building [101-108][110-114]. Third, the metric of success was contested; Labrique advocated impact-oriented benchmarks, while Brey prioritized absolute safety and error-free operation [63][24-30].


The panel distilled several key takeaways. Verified, glass-box AI that guarantees zero-risk prescribing is essential [24-30]; investment must now target governance, evidence generation, data systems, workforce training and long-term partnerships to translate promise into progress [101-108][110-118]; coordinated donor mechanisms are required to align priorities [70]; and human-centred design, keeping clinicians and patients in the loop and embedding AI education in curricula, is critical for acceptance and equity [74-75][60].


Action items proposed


1. Form a working group on verified/glass-box AI (Zameer’s invitation) [31].


2. Create pooled donor mechanisms for coordinated investment (Haitham’s suggestion) [70].


3. Fund governance, regulatory and evidence-generation programmes (Payden’s pillars) [101-108][110-114].


4. Embed AI modules into medical and nursing curricula (Dasgupta’s training call) [60].


5. Pilot inclusive projects such as ambient-AI note-taking, tele-surgery 2.0, and autonomous robotics with patient involvement (Dasgupta’s pilots) [49-57].


Unresolved issues remain, notably how to operationalise the zero-risk standard in real-world settings, the precise mechanisms for shifting entrenched clinical workflows, detailed funding models for coordinated donor action, and the development of global standards for data diversity and regulatory certification. The panel suggested a phased compromise: maintain human oversight while progressively increasing AI autonomy, pair rapid deployment of low-risk tools with rigorous verification before scaling to higher-risk applications, and align technological capability with societal acceptance through continuous patient and clinician engagement [74-75][55-57].


In sum, the discussion moved from polite acknowledgements to a rigorous examination of safety, verification, investment and equity, converging on a shared roadmap that balances stringent risk-mitigation with pragmatic, impact-driven implementation. The consensus underscores that AI can transform health only if it is transparent, trustworthy, human-centred and supported by coordinated, long-term investment in both technology and the people who will use it [101-108][118-120].


Session transcript: complete transcript of the session
Haitham Ali Ahmed El‑Noush

Thank you.

Zameer Brey

Thank you. So think about this: you’ve done all your hard work, you’ve made your notes, you’ve written your prescription, you’ve counseled the patient, and now you press AI assist. No thank you. All they did was move the AI assist button earlier on and give the user the discretion to use it when it made sense to that user, and the results changed. The fourth level is: to what extent is the improvement actually going to yield an improvement in health outcomes? The reason we’re all here is: what’s fundamentally going to shift? Is this going to help us diagnose TB better, or help with adherence in diabetes, etc.?

So these are some of the fundamental questions, and I think we’ve got caught up with investment at levels one and two: let’s just check how this model works, let’s just check the product, without having given enough investment into how this gets integrated into the world. So this is the product flow, and then, ultimately, how does this shift outcomes over time. Can I take one more minute and talk about verified AI, or should I come back to this? I was thinking to myself, it’s probably a bad analogy, but I’m going to put it out there anyway. I’m flying this evening; that’s why I didn’t want to use it. But if I said to you all: would you fly if the likelihood of the flight arriving safely was 95%? Would you fly if I told you it was 96, 97 or 98? No. Just think for a second if it was 99%. That means every 100th flight taking off from Delhi airport would crash.

Would we fly? We’d go, oh, right, we’ll take some other means of transport. And the reason I’m emphasizing this is that when it comes to health care, the bar should be 0% risk of failure, 0% risk of error. And so, with Elaine and many other partners, we’re starting to have this discussion: how do you get AI to be verifiable, so that whatever the input is, you can document it, it’s transparent? And we spoke about this: can we shift the narrative from black box to glass box? Can we really know why the model made a particular decision? We gave it X input. The patient had these criteria. Here’s the logic model.

And it gave that particular output. But when it gives that output, can we put some safeguards in place that make 100% sure that it isn’t prescribing something the patient’s allergic to, or that’s going to end up in a catastrophic event, or that’s fundamentally flawed in its logic? And that’s where we’d like to invite partners to work with us on a pathway to verified AI. Thank you. And I can see Justice Singh is just nodding her head, because, you know, having that chain of proof is something we like to have in legislation. So it’s always nice when there’s a trail to follow to that decision. We couldn’t have queued it up better, because the next person I’m going to ask is Professor Dasgupta, who is a clinician and an innovator.

I’m sure you’ve experienced the recalcitrance and challenge of shifting medical practice. And, you know, nurses and doctors are well known for being entrenched in their way of doing things. Changing those well-evolved and well-trodden paths of workflows and clinical decision pathways is very difficult. So what kind of investment do we need to make in clinical research, evaluation and evidence to shift those well-trodden paths of practice? Professor Dasgupta.

Prokar Dasgupta

Namaskar. Thanks. Do realize that I am a working surgeon, so in addition to invention and innovation, what I’m really interested in is implementation. I want to make a difference. And if you are ever a patient someday, it will make a difference to you. I come here on behalf of Responsible AI UK, a major investment from UK Research and Innovation, not just in AI in the UK, but into an international ecosystem, including the Global South. We put AI champions in every hospital, and we are trying to expand to our partners in India and in Africa, where it is needed the most. Let me give you some examples of how we are doing this. Responsible AI UK, for example, funded an evaluation of ambient AI, writing those notes.

Shortening the operating time, saving a month of wasted time in the operating room. The British Association of Physicians of Indian Origin realized: wouldn’t it be wonderful if our parents, many of whom are living in India (my mother is 87), could be warned before a heart attack? Wouldn’t it be nice if a message on my watch told me something was going to happen? The reason I decided to make a note of this is that the data is not diversified enough; without diversity of data, we are not going to win this battle.

Let me give you another example, of investment in equity. Two weeks ago, if you look at the British Medical Journal, there is a major article from us on tele-surgery 2.0. It means the technology exists for a surgeon to operate from two and a half thousand kilometers away, using a robot, with a time delay of 60 milliseconds or less; it feels like you’re in the same operating room. Imagine this investment being one of the solutions for the 5 billion patients who do not have access to equitable surgery.

Let me give you a third example, and this is in automation. My own group at King’s has funded and invested in automation big time. The levels of autonomy in robotics go from 0 to 5, where 0 is no autonomy; the most autonomous machine in practice is level 3. You map the prostate with ultrasound (all the men in this room have a prostate and, as we know, we have difficulty peeing), you mark the middle of the prostate with the ultrasound, you press a button, and a water jet ablates the middle of the prostate so that you don’t have to wake up 20 times at night to pee. That was the frontier until last November, when one university announced the first robotic system in the world which can operate on pig gallbladders.

Pig gallbladders, fully autonomously, 100% accurate. Five days after this, I was at the Royal Academy of Engineering, with a group like this one, and I said: hands up, everyone who is going to allow this machine to operate on them. Hands up, everyone who will allow a completely automated machine, 100% accurate in pigs, to take out your gallbladder. There was one hand in the room. On another occasion, there was again a single hand in the room. So we have gone out with these public surveys, and people are saying: not yet. Still, today. So we have to work with companies, of course; with countries, including the government side; and with civil society: the three C’s. If we do not bring our patients with us, all this investment is going to fail. And the final investment I would urge is in skills. There are hardly any medical and nursing schools in the world which have AI in the curriculum. If we do not have this embedded in the education of the next generation of healthcare workers, we are going to fail. So these are my parting thoughts to you. Thank you.


Alain Labrique

and impressive with impactful, focusing on things that get used and work in the real world. A benchmark might be the wrong thing: not accuracy, but actually impact. And then, of course, there is the challenge that Professor Dasgupta brought to us: it does take time to change behavior, but it is possible, as long as, for the moment, we have humans in the loop. So I’d like to give each of you one sentence now just to wrap up. As you’ve heard the others, what has changed your thoughts, and what’s the one message you’d like people to leave the room with? Let me just go sequentially down the row. Thank you.

Haitham Ali Ahmed El‑Noush

So for donors, we need coordination, and there is a need to develop strategies, priorities, and investments that we can rally behind.

Alain Labrique

Fantastic. Ineji.

Ken Ichiro Natsume

Thank you. I think we’re asked to respond in one sentence. I was going to say, we’re not going to do something simple; we need something more. But I haven’t changed my mind. One point which resonated with my heart, which I was not able to mention in my opening sentence, is that we can leverage artificial intelligence with human beings at the center of those uses. That’s what I want to highlight. Thank you.

Justice Prathiba M. Singh

That’s the thing. I’m going to actually say one sentence: here’s to a healthier world, where AI and technology really work together.

Zameer Brey

Fantastic. Professor.

Prokar Dasgupta

For AI tools and for the patients, I urge you to apply the Weizenbaum test, which means: do not just think about what these machines can do for us, but think about what the societal effects of these machines are. The change has to go from the Turing test to, today, the Weizenbaum test.

Zameer Brey

I think, for me, the question of how we move from promise to progress is underpinned by a theme that I’m seeing at the conference: we need to keep humans at the center of the AI revolution.

Alain Labrique

Fantastic. So, Dr. Payden, you’ve been patiently taking in these wise words from our panel. I’d like to give you the last word to bring this home and leave the audience with food for thought before they go for food for their stomachs.

Payden P

Thank you very much. Good afternoon to all. I think it’s on? Yes. Sincere thanks to all the distinguished panelists for this very thought-provoking and very interesting conversation around AI and health. I think today’s conversation makes one thing very clear: AI and health has reached an inflection point. For years we spoke about possibility; today the conversation has shifted to investment, implementation, and impact. I think that was really highlighted and emphasized by all. The question is no longer whether AI can improve health. The question is whether we will invest in the right foundations to ensure it improves health for everyone, not a few. Over the past hour, several themes have emerged.

And the first is around investment. Investment must go beyond innovation. It must flow into the systems that make innovation safe, trusted, and scalable: governance and regulation, evidence generation, workforce readiness and capacity building, which came through very clearly, data systems, and long-term partnerships. These are not optional; they are the enabling conditions that determine whether AI becomes a tool for equity or a driver of new inequalities. Second, predictability builds confidence. When countries strengthen regulatory and legal frameworks, investment flows in. When evidence is generated and transparently shared, investment grows. When partnerships are built across sectors, investment scales. In short, trust is the currency that unlocks sustainable investment. So these are some important points that I could take away from here.

And we look forward to working with different partners, investors, donors and government agencies to take AI and health further for the benefit of all populations. Thank you.

Alain Labrique

Thank you so much. Those are reserved test patients in writing from the Capacity Building Commission and Curfew Borrow.

Related Resources: knowledge base sources related to the discussion topics (38)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high)

“The session opened with a series of formal thank‑you remarks from the moderators, establishing a courteous atmosphere before the substantive discussion began.”

The knowledge base records that speakers in similar sessions began by expressing gratitude to the chair or delegates, establishing a respectful tone, e.g., S85, S86 and S87 describe opening remarks that thank the chairperson and set a courteous atmosphere.

Additional Context (medium)

“Brey used a flight‑safety analogy, asserting that in health care the acceptable failure rate must be effectively zero; a 95% safety margin would be intolerable, and even a 99% margin would imply one fatal crash per hundred flights.”

Risk framing with an airplane-safety analogy is discussed in the knowledge base, which defines risk as probability of undesirable outcomes and explicitly uses an aviation safety analogy to illustrate acceptable risk levels [S114].

Confirmed (high)

“He advocated a shift from opaque “black‑box” models to transparent “glass‑box” systems that log every input, expose the underlying logic, and embed safeguards to prevent harmful prescriptions.”

The call for converting black-box AI into a “glass-box” with full transparency is echoed in the knowledge base, which states “The black box of data must become a glass box” and stresses the need for users to see data sources and training details [S13].

Additional Context (medium)

“True progress must be demonstrated through measurable health benefits rather than merely deploying AI tools.”

Several knowledge-base entries stress that success should be measured by concrete health outcomes (e.g., reduced mortality, fewer complications) instead of technical metrics, aligning with the report’s emphasis on measurable health impact [S111] and [S112].

Additional Context (medium)

“AI systems need safeguards such as allergy checks and catastrophic‑error prevention, implying a need for human oversight in clinical decision‑making.”

The Oxford study cited in the knowledge base warns that AI health tools must operate with human oversight to avoid serious risks, supporting the report’s point about embedding safety checks and human-in-the-loop controls [S107]; a related source also calls for transparent, human-in-the-loop systems to maintain agency [S116].

External Sources (121)
S1
How Small AI Solutions Are Creating Big Social Change — – Zameer Brey- Antoine Tesniere
S2
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Ken Ichiro Natsume- Prokar Dasgupta- Zameer Brey- Alain Labrique – Zameer Brey- Alain Labrique – Zameer Brey- Payden…
S3
Panel Discussion AI and the Creative Economy — -Kenichiro Natsume: Assistant Director General at WIPO (World Intellectual Property Organization), works on policy side …
S4
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-and-the-creative-economy — I’m seeing this big flashing red sign which says time’s up. I don’t know, mine or the panel’s. I’m hoping it’s only the …
S5
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Professor Prokar Dasgupta, speaking as both a practicing surgeon and innovator, provided sobering real-world evidence of…
S6
Classification of Digital Health Interventions v1.0 — 1. Hawkins, R. P., et al. (2008). Understanding tailoring in communicating about health. Health Education Research, 23(3…
S7
Multistakeholder Dialogue on National Digital Health Transformation — Alain Labrique: Fantastic. Thank you, Leah. I really appreciate everyone’s partnership. and engagement this morning,…
S8
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Alain Labrique- Panel moderator/facilitator
S9
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Prokar Dasgupta- Justice Prathiba M. Singh
S10
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — -Haitham Ali Ahmed El‑Noush- Role/expertise not specified in transcript
S11
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — – Haitham Ali Ahmed El‑Noush- Payden P. – Prokar Dasgupta- Payden P.
S12
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S13
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — “The black box of data must become a glass box.”[11]. “the commander taking a decision based on an AI -enabled system bu…
S14
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Thank you very much. That’s very nice. So moving to your left, Nikhil. Nikhil, you guys have done a phenomenal job in, y…
S15
Press Conference: Closing the AI Access Gap — They argue that the only path forward is through a collaborative approach that prioritizes trust. This requires active e…
S16
The open-source gambit: How America plans to outpace AI rivals by democratising tech — A “worker-first AI agenda” is the key social pillar of the Plan. The focus is on helping workers reskill and build capac…
S17
Education meets AI — This aligns with the Sustainable Development Goals (SDGs) of Quality Education (SDG 4) and Reduced Inequalities (SDG 10)…
S18
Successes &amp; challenges: cyber capacity building coordination | IGF 2023 — In today’s world, cyberattacks and cybercrime incidents are on the rise, resulting in international, governmental, multi…
S19
How to believe in the future? — Another viewpoint suggests that the current profit-driven business model needs to be revisited. While acknowledging that…
S20
Keynote-Rishad Premji — The conversation has shifted from possibility to practicality, from experimentation to adoption and scaled impact
S21
Towards 2030 and Beyond: Accelerating the SDGs through Access to Evidence on What Works — These key comments shaped the discussion by grounding abstract concepts in concrete possibilities, emphasizing the moral…
S22
World Economic Forum Panel Discussion: Global Economic Growth in the Age of AI — In no case have we seen that level of accuracy. So it’s very important that we keep the human in the loop. It’s very imp…
S23
Leveraging the UN system to advance global AI Governance efforts — The speaker advocates for a horizontal approach within the UN, urging agencies such as the WIPO, ITU, UNU, ILO, and FAI …
S24
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Apart from data protection, the speakers also emphasized the significance of collaboration between the public and privat…
S25
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — Namaste. Honorable Minister Vaishnav, Your Excellency’s colleagues, let me begin by thanking our host, Prime Minister Mo…
S26
Harnessing AI for Child Protection | IGF 2023 — In conclusion, protecting children online requires a multifaceted approach. Legislative measures, such as the ones imple…
S27
OPENING SESSION | IGF 2023 — In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective …
S28
Healthcare experts demand transparency in AI use — Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but dem…
S29
Open Forum #53 AI for Sustainable Development Country Insights and Strategies — This comment set a foundational tone for the entire discussion by establishing the importance of evidence over hype. It …
S30
Transforming Health Systems with AI From Lab to Last Mile — The speakers demonstrated strong consensus on the need for human-centered AI development, real-world evidence generation…
S31
Launch / Award Event #78 Digital Governance in Africa: Post-Summit of the Future — These key comments shaped the discussion by moving it from high-level policy frameworks to practical implementation chal…
S32
MedTech and AI Innovations in Public Health Systems — Ms. Padmanabhan identified three primary integration challenges: workflow integration, change management resistance, and…
S33
Diplomatic policy analysis — Digital divides:Not all countries have equal access to advanced analytical tools, perpetuating inequalities in diplomati…
S34
Introducción — – That national registries and programmes, as well as vaccination records and epidemiological surveill…
S35
Traversing biomedical science, technology & innovation, policy, and diplomacy — Building on these experiences, I am now keen on engaging the lifesciences community across countries with varying levels…
S36
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — ## Global Cooperation Versus Regional Diversity Joanna Bryson: Hi, yeah, sure. Thanks very much and sorry not to be in …
S37
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fadi Daou:Okay, thank you. Thank you Michel and this is definitely a tension and maybe a balance at some point between t…
S38
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S39
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S40
Day 0 Event #173 Building Ethical AI: Policy Tool for Human Centric and Responsible AI Governance — Chris Martin: Thanks, Ahmed. Well, everyone, I’ll walk through I think a little bit of this presentation here on what…
S41
Democratizing AI: Open foundations and shared resources for global impact — ## International Collaboration Examples Mary-Anne Hartley: Yeah, sure. I think what we all saw with the use case over t…
S42
WSIS Action Line C7: E-health – Fostering foundations for digital health transformation in the age of AI — ## Background and Context Throughout the discussion, speakers consistently emphasised that government ownership and lea…
S43
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — Current evaluation focuses on technical accuracy, but real-world success depends on user acceptance, which varies based …
S44
From Technical Safety to Societal Impact Rethinking AI Governanc — Virginia stresses that AI safety cannot be limited to technical robustness, accuracy or alignment. It must incorporate m…
S45
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — “A benchmark might be the wrong thing, not accuracy but actually impact.”[26]. “and impressive with impactful, focusing …
S46
High-level AI Standards panel — Practical Implementation and Real-World Impact While the database represents a valuable step forward, the true measure …
S47
Prosperity Through Data Infrastructure — Despite the challenges, the analysis suggests that successfully digitalising requires creative solutions, even when face…
S48
Can we test for trust? The verification challenge in AI — Moderate to high disagreement with significant implications. The fundamental disagreement between Yampolskiy’s pessimist…
S49
Day 0 Event #257 Enhancing Data Governance in the Public Sector — Moderate disagreement level with significant implications – the speakers largely agree on goals (effective data governan…
S50
Acknowledgements — The advantages of physically removing the human from a weapon delivery platform (such as remotely piloted vehicles like …
S51
Open Forum #73 The Need for Regulating Autonomous Weapon Systems — Kevin Whelan: Thank you and good afternoon everyone. It’s a pleasure to be here and to speak on behalf of Amnesty Inte…
S52
AI, smart cities, and the surveillance trade-off — The key is keeping humans in the loop at decision points that matter. AI can surface insights and recommendations, but p…
S53
Safe and Responsible AI at Scale Practical Pathways — Guardrails, Human‑in‑the‑Loop, and Risk‑Assessment Mechanisms Are Essential for Reliable Deployment
S54
Welcome Address — Modi emphasizes that AI development must focus on human values rather than purely machine efficiency. A human‑centric ap…
S55
Trust in Tech: Navigating Emerging Technologies and Human Rights in a Connected World — Additionally, they emphasise the critical need for safeguarding security and user privacy in the interoperability standa…
S56
Hello from the CyberVerse: Maximizing the Benefits of Future Technologies — In virtual spaces, regulation and safety measures were discussed. Speakers underscored the need for flexible, ecosystem-…
S57
International Cooperation for AI &amp; Digital Governance | IGF 2023 Networking Session #109 — The analysis examines multiple perspectives on the importance of regulating AI. The speakers stress the necessity of reg…
S58
Ateliers (workshops): debrief reports and closing session — Aurélien Macé: Apparently I’m entitled to 6.6 minutes, twice as much as the others, or so I’ve been told. The theme of selling…
S59
From principles to practice: Governing advanced AI in action — Strong consensus on fundamental principles including multi-stakeholder collaboration, trust as prerequisite for adoption…
S60
Artificial intelligence (AI) – UN Security Council — Algorithmic transparency is a critical topic discussed in various sessions, notably in the9821st meetingof the AI Securi…
S61
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — The adoption of digital health technology should consider the principle of equitable access. This means ensuring that al…
S62
Open Forum #33 Building an International AI Cooperation Ecosystem — Ethical Considerations and Inclusivity Human rights principles | Children rights | Privacy and data protection Pelayo …
S63
Equi-Tech-ity: Close the gap with digital health literacy | IGF 2023 — Geralyn Miller:Yeah, thank you very much for the question. So I want to respond to in this context to some of the commen…
S64
WS #214 AI Readiness in Africa in a Shifting Geopolitical Landscape — Fundamental infrastructure challenges—including limited computing power, inadequate connectivity, and capacity gaps—requ…
S65
Catalyzing Global Investment in AI for Health_ WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S66
OPENING SESSION | IGF 2023 — In conclusion, understanding and sharing information about the development of AI and ML are crucial for their effective …
S67
Healthcare experts demand transparency in AI use — Healthcare professionals, including researchers and clinicians, are keen to incorporate AI into their daily work but dem…
S68
AI in healthcare gains regulatory compass from UK experts — Professor Alastair Dennistonhas outlinedthe core principles for regulating AI in healthcare, describing AI as the ‘X-ray…
S69
Knowledge Café: WSIS+20 Consultation: Strenghtening Multistakeholderism — This observation grounded the discussion in practical realities and influenced subsequent conversations about the need f…
S70
WS #103 Aligning strategies, protecting critical infrastructure — – The need to move from high-level discussions to concrete, actionable measures
S71
Keynote-Rishad Premji — The conversation has shifted from possibility to practicality, from experimentation to adoption and scaled impact
S72
IGF 2024 Opening Ceremony — This comment highlights the urgent need for practical action beyond policy discussions. It’s thought-provoking because i…
S73
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — The discussion revealed that technical capabilities often exceed institutional readiness for AI adoption. Behavioral cha…
S74
29, filed Jan. 22, 2010, at 9-10. — – Focus on the barriers to adoption. Successful efforts address multiple barriers to adoption simultaneously. They combi…
S75
MedTech and AI Innovations in Public Health Systems — Ms. Padmanabhan identified three primary integration challenges: workflow integration, change management resistance, and…
S76
World Economic Forum® — It can take 20-30 years to develop a new drug or vaccine, and the costs and risks are high. R&amp;D efforts are not coor…
S77
Adoption and adaptation of e-health systems for developing nations: The case of Botswana — – Access to healthcare facilities. – Cost savings via telemedicine activities. – Collaboration amongst the key participa…
S78
WS #100 Integrating the Global South in Global AI Governance — Fadi Salim: Thank you. And this covers a little bit the grassroot element of it. So it’s awareness, diversity, inclusi…
S79
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — If compute, database and foundational models remain concentrated of a few, we risk creating a new form of inequality, an…
S80
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S81
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S82
Partnership on AI expands and launches initiatives focused on AI challenges and opportunities — The Partnership on AI, founded in September 2016 by Amazon, DeepMind/Google, Facebook, IBM, and Microsoft with the aim t…
S83
Global AI Policy Framework: International Cooperation and Historical Perspectives — The speakers demonstrate significant consensus on key principles including the need for inclusive governance, building o…
S84
GOVERNING AI FOR HUMANITY — – 178 By promoting a common understanding, common ground and common benefits, the proposals above seek to address the ga…
S85
Ad Hoc Consultation: Friday 2nd February, Morning session — During the session, chaired by Mr. Chair, the speaker began by extending greetings to colleagues and esteemed delegates …
S87
Any other business /Adoption of the report/ Closure of the session — In summary, the speaker artfully blended expressions of gratitude with recognition of collaborative efforts and a call f…
S88
Open Mic &amp; Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S89
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S90
World Economic Forum Town Hall on AI Ethics and Trust — The discussion maintained a serious, critical tone throughout, with panelists expressing genuine concern and urgency abo…
S91
AI as critical infrastructure for continuity in public services — The discussion maintained a collaborative and constructive tone throughout, with participants building on each other’s p…
S92
Impact &amp; the Role of AI How Artificial Intelligence Is Changing Everything — The discussion maintained a cautiously optimistic tone throughout, balancing enthusiasm for AI’s potential with realisti…
S93
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S94
AI: Lifting All Boats / DAVOS 2025 — The tone was largely optimistic and solution-oriented, with speakers acknowledging challenges but focusing on opportunit…
S95
WS #6 Bridging Digital Gaps in Agriculture & Trade Transformation — The tone was largely optimistic and solution-oriented. Speakers were enthusiastic about the potential of the Internet Ba…
S96
Emerging Markets: Resilience, Innovation, and the Future of Global Development — The tone was notably optimistic and forward-looking throughout the conversation. Panelists consistently emphasized oppor…
S97
Host Country Open Stage — The tone throughout the discussion was consistently optimistic and solution-oriented. All presenters maintained a profes…
S98
Next-Gen Industrial Infrastructure / Davos 2025 — The tone was largely optimistic and forward-looking, with speakers enthusiastically sharing their visions and initiative…
S99
Closing Session  — The tone throughout the discussion was consistently formal, collaborative, and optimistic. It maintained a celebratory y…
S100
Ensuring Safe AI: Monitoring Agents to Bridge the Global Assurance Gap — The tone was collaborative and solution-oriented throughout, with participants acknowledging both the urgency and comple…
S101
Closing remarks – Charting the path forward — The tone throughout was consistently formal, diplomatic, and optimistic. It maintained a collaborative and forward-looki…
S102
(Interactive Dialogue 4) Summit of the Future – General Assembly, 79th session — The overall tone was one of urgency and determination. Many speakers emphasized that “the future starts now” and stresse…
S103
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S104
https://dig.watch/event/india-ai-impact-summit-2026/catalyzing-global-investment-in-ai-for-health_-who-strategic-roundtable — Fantastic. Professor. Here’s the logic model. and it gave that particular output. But when it gives that output, can we…
S105
Closing Plenary of Global Roundtable — Chair: Thank you very much, Ms. Amner, for the very good summary, as well as to Ms. Lenker for the earlier summary. Excel…
S106
High Level Session 3: AI &amp; the Future of Work — Ishita Barua: Thank you. In a world where AI can generate content faster than we are actually able to consume it and rea…
S107
AI health tools need clinicians to prevent serious risks, Oxford study warns — The University of Oxford has warned that AI in healthcare, primarily through chatbots, should not operate without human ov…
S108
AI shows promise in supporting emergency medical decisions — Drexel University researchers studied how AI can aid emergency decisions in pediatric trauma at Children’s National Medica…
S109
AI could save billions but healthcare adoption is slow — AI is being hailed as a transformative force in healthcare, with the potential to reduce costs and improve outcomes dramati…
S110
The Intelligent Coworker: AI’s Evolution in the Workplace — Christoph Schweizer advocated for new measurement approaches, emphasising “adoption and usage,” “employee satisfaction s…
S111
Responsible AI for Shared Prosperity — Success should be measured by actual impact on lives – reducing maternal mortality, eliminating diseases, escaping pover…
S112
Keynote-Roy Jakobs — Success will ultimately be measured by health outcomes rather than technology metrics – earlier disease detection, fewer…
S113
https://dig.watch/event/india-ai-impact-summit-2026/panel-discussion-ai-in-healthcare-india-ai-impact-summit — So which are the specific use cases that companies like Anthropic view or are targeting for to solve healthcare problems…
S114
Building Trustworthy AI Foundations and Practical Pathways — Risk should be defined as probability of undesirable outcomes characterized by likelihood and severity, using airplane s…
S115
https://dig.watch/event/india-ai-impact-summit-2026/how-small-ai-solutions-are-creating-big-social-change — Alban, can I pick up quickly? I think it’s really important, and actually I’m going to name the number if it’s okay. Oka…
S116
Toward Collective Action_ Roundtable on Safe &amp; Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S117
Host Country Open Stage — D Silva emphasized the transformative potential of sustainability reporting, stating that “transparency is not just abou…
S118
What is it about AI that we need to regulate? — Concrete Actions to Address AI in Judicial Systems, Immigration and Government Decision-Making: The discussions across IGF…
S119
Smart Regulation Rightsizing Governance for the AI Revolution — Low to moderate disagreement level. The speakers generally agreed on the problems (AI divides, need for cooperation, cap…
S120
The Innovation Beneath AI: The US-India Partnership powering the AI Era — He sees a large opportunity for U.S. and Indian firms to co‑create companies that will build refining capacity and reduc…
S121
In brief — External evidence from systematic research: valid and clinically relevant findings from patient-centred clinical resea…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Zameer Brey
1 argument, 82 words per minute, 789 words, 572 seconds
Argument 1
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
EXPLANATION
Zameer argues that healthcare AI must be fully verifiable, moving from a black‑box to a glass‑box model, with zero tolerance for failure. He stresses that a transparent chain of input, logic and output is essential to ensure patient safety.
EVIDENCE
He uses a flight safety analogy to illustrate that healthcare AI must have zero tolerance for failure, then calls for a verifiable, ‘glass-box’ system that records inputs, logic and outputs, and includes safeguards against allergic reactions or catastrophic errors [24-30].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Roundtable transcripts record Brey advocating a shift from “black box” to “glass box” AI with documented inputs, transparent logic and traceable reasoning, emphasizing zero-tolerance for failure [S2] and broader calls for algorithmic transparency in AI systems [S13].
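The “glass‑box” chain Brey describes (documented inputs, transparent logic, traceable outputs, plus a safeguard against harmful prescriptions) can be sketched as a minimal audit‑trail structure. This is only an illustration of the concept: the record fields, the allergy check and the drug names below are assumptions for the sketch, not anything specified at the roundtable.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative "glass-box" record: every recommendation carries its
# inputs, the rule applied, and the output, so the full chain can be
# audited after the fact instead of hiding inside a black box.
@dataclass
class GlassBoxRecord:
    inputs: dict    # observations fed to the system (hypothetical)
    logic: str      # human-readable description of the rule applied
    output: str     # the recommendation produced
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def recommend(inputs: dict, audit_log: list) -> str:
    # Hypothetical safeguard: never prescribe a drug the patient is
    # allergic to; fail safe by referring to a clinician instead.
    drug = "amoxicillin" if inputs.get("infection") else "none"
    if drug in inputs.get("allergies", []):
        drug = "REFER TO CLINICIAN"
    record = GlassBoxRecord(
        inputs=inputs,
        logic="infection -> amoxicillin, unless listed allergy",
        output=drug,
    )
    audit_log.append(record)  # transparent input-logic-output chain
    return drug

log = []
result = recommend({"infection": True, "allergies": ["amoxicillin"]}, log)
print(result)        # safeguard triggers: REFER TO CLINICIAN
print(log[0].logic)  # the reasoning is inspectable after the fact
```

The point of the sketch is structural: verification becomes possible because each decision leaves behind a record of what went in, what rule fired, and what came out.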
MAJOR DISCUSSION POINT
Verified AI
AGREED WITH
Prokar Dasgupta, Payden P.
DISAGREED WITH
Alain Labrique
Prokar Dasgupta
2 arguments, 108 words per minute, 743 words, 410 seconds
Argument 1
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
EXPLANATION
Prokar highlights several real‑world AI projects—ambient AI for clinical note‑taking, tele‑surgery that enables remote operations, and autonomous robotic systems for procedures—as examples of concrete investment. He stresses that patient involvement and acceptance are crucial for equitable outcomes.
EVIDENCE
He cites several projects: an evaluation of ambient AI that writes clinical notes and reduces operating-room time [49-50]; a BMJ article on tele-surgery 2.0 enabling surgeons to operate from 2,500 km away with sub-60 ms latency [51]; and work on autonomous robotic systems for prostate procedures and gallbladder surgery, highlighting the need for patient acceptance [51-57].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The WHO roundtable cites Dasgupta’s presentation of real-world projects: ambient AI for clinical note-taking, tele-surgery with sub-60 ms latency, and autonomous robotic procedures, illustrating concrete investment and the need for patient acceptance [S2].
MAJOR DISCUSSION POINT
Concrete AI implementations
AGREED WITH
Zameer Brey, Payden P.
DISAGREED WITH
Zameer Brey
Argument 2
Embedding AI education within medical and nursing curricula to build a skilled health workforce (Prokar Dasgupta)
EXPLANATION
Prokar points out that very few medical and nursing schools currently teach AI, and calls for investment in skills development to embed AI education, ensuring the next generation of health workers can safely and effectively use AI tools.
EVIDENCE
He notes that there are hardly any medical and nursing schools that include AI in their curricula and urges investment in skills to embed AI education for the next generation of healthcare workers [60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy briefs on AI workforce development stress the importance of integrating AI training into medical and nursing education as part of a broader reskilling agenda, echoing Dasgupta’s call for curriculum integration [S16] and [S17].
MAJOR DISCUSSION POINT
AI education in health curricula
Haitham Ali Ahmed El‑Noush
1 argument, 10 words per minute, 47 words, 258 seconds
Argument 1
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
EXPLANATION
Haitham calls for donors to work together, establishing coordinated strategies, clear priorities and pooled funding mechanisms so that AI‑for‑health initiatives can be effectively supported and scaled.
EVIDENCE
He states that donors need coordination, clear strategies, priorities and pooled investments to rally behind AI for health initiatives [70].
MAJOR DISCUSSION POINT
Donor coordination
AGREED WITH
Prokar Dasgupta, Payden P., Alain Labrique
Payden P.
1 argument, 117 words per minute, 276 words, 141 seconds
Argument 1
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
EXPLANATION
Payden observes that AI in health has reached an inflection point, shifting the conversation from possibility to concrete investment, implementation, governance, evidence generation and sustained partnerships. He emphasizes that these elements are now the primary drivers of progress.
EVIDENCE
He notes that AI in health has reached an inflection point, moving from possibility to investment, implementation and impact, and emphasizes the need for governance, evidence generation and long-term partnerships [101-108]; he further outlines that investment must flow into safety, trust and scalability through regulation, data systems and capacity building [110-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Keynote remarks describe the AI-in-health conversation moving from possibility to practicality, emphasizing investment, governance, evidence generation and sustained partnerships, matching Payden’s framing [S20] and roundtable observations on safety, trust and scalability investments [S2].
MAJOR DISCUSSION POINT
Shift to investment and governance
AGREED WITH
Zameer Brey, Prokar Dasgupta
DISAGREED WITH
Zameer Brey, Alain Labrique
Alain Labrique
2 arguments, 87 words per minute, 219 words, 150 seconds
Argument 1
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
EXPLANATION
Alain argues that funding should not stop at innovative AI tools; it must also support the surrounding systems—regulatory frameworks, data infrastructure and workforce capacity—that make AI safe, trustworthy and scalable, preventing new inequalities.
EVIDENCE
He argues that investment must go beyond pure innovation to fund systems that make AI safe, trusted and scalable, including governance, regulation, data infrastructure and workforce capacity building, describing these as essential enabling conditions [110-114]; he adds that predictability and trust attract further investment, linking regulatory strength, evidence generation and partnerships to increased funding [115-118].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Roundtable discussions note that funding should extend beyond novel tools to cover governance, regulation, data systems and workforce capacity building, directly supporting Labrique’s point [S2] and further reinforced by capacity-building commentary [S7].
MAJOR DISCUSSION POINT
Investment beyond innovation
AGREED WITH
Zameer Brey, Payden P.
DISAGREED WITH
Zameer Brey, Payden P.
Argument 2
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
EXPLANATION
Alain stresses that keeping humans involved in AI‑driven clinical workflows is essential for changing entrenched practices and delivering tangible health outcomes, noting that behavior change is possible when humans remain central.
EVIDENCE
He emphasizes that keeping humans in the loop is essential for changing clinical practice and achieving real-world impact, noting that behavior change is possible when humans remain involved [64-65].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panels on AI deployment stress keeping humans in the loop for safety, behavior change and real-world impact, aligning with Labrique’s argument [S22] and roundtable remarks on human-centric workflows [S2].
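Labrique’s human‑in‑the‑loop argument can be illustrated with a minimal approval gate: the AI proposes, and anything below a confidence threshold is routed to a clinician for the final call. The threshold, function names and callback signature are assumptions made for this sketch only.

```python
# Minimal human-in-the-loop sketch: the AI proposes, a clinician disposes.
# The confidence value and the 0.95 threshold are illustrative assumptions.

def ai_suggest(case: dict) -> dict:
    # Stand-in for a model call: returns a suggestion plus a confidence score.
    return {"action": "discharge", "confidence": 0.62}

def decide(case: dict, clinician_review, auto_threshold: float = 0.95) -> str:
    suggestion = ai_suggest(case)
    if suggestion["confidence"] >= auto_threshold:
        return suggestion["action"]  # routine, high-confidence path
    # Otherwise a human stays in the loop and makes the final call.
    return clinician_review(case, suggestion)

# A clinician callback that overrides the low-confidence suggestion.
outcome = decide({"patient_id": 1}, lambda case, s: "keep for observation")
print(outcome)  # low confidence routes the case to the human reviewer
```

The design choice matters: even where full autonomy is technically possible, routing uncertain cases to a person preserves accountability and supports the behavior change in clinical practice that Labrique describes.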
MAJOR DISCUSSION POINT
Human in the loop
AGREED WITH
Zameer Brey, Prokar Dasgupta, Ken Ichiro Natsume, Justice Prathiba M. Singh
DISAGREED WITH
Zameer Brey
Ken Ichiro Natsume
1 argument, 143 words per minute, 84 words, 35 seconds
Argument 1
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
EXPLANATION
Ken asserts that AI technologies must be deployed with a human‑centric approach, ensuring that people remain central to decision‑making and that AI augments rather than replaces human expertise.
EVIDENCE
He says AI can be leveraged but humans must remain at the centre of its utilisation, emphasizing a human-centric approach [74-75].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Natsume’s comments in the roundtable highlight a human-centric approach to AI, echoing broader calls for AI systems that augment rather than replace human expertise [S2] and the importance of human oversight noted in other AI governance sessions [S22].
MAJOR DISCUSSION POINT
Human‑centric AI
AGREED WITH
Zameer Brey, Prokar Dasgupta, Alain Labrique, Justice Prathiba M. Singh
Justice Prathiba M. Singh
1 argument, 120 words per minute, 27 words, 13 seconds
Argument 1
Collaboration between AI and broader technology sectors is essential for a healthier world (Justice Prathiba M. Singh)
EXPLANATION
Justice Singh delivers a concise statement that achieving a healthier world requires AI to work together with other technology sectors, highlighting the need for cross‑sector collaboration.
EVIDENCE
She delivers a concise statement that a healthier world requires AI and broader technology sectors to work together [78-79].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
UN-level discussions advocate for cross-sector collaboration between AI and other technology domains to advance health outcomes, supporting Singh’s statement [S23] and examples of public-private partnership for innovation [S24].
MAJOR DISCUSSION POINT
AI‑technology collaboration
AGREED WITH
Zameer Brey, Prokar Dasgupta, Alain Labrique, Ken Ichiro Natsume
Agreements
Agreement Points
Human‑centered AI and keeping humans in the loop is essential for safe and acceptable health‑AI deployment.
Speakers: Zameer Brey, Prokar Dasgupta, Alain Labrique, Ken Ichiro Natsume, Justice Prathiba M. Singh
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
Collaboration between AI and broader technology sectors is essential for a healthier world (Justice Prathiba M. Singh)
All speakers stress that AI systems must remain transparent, involve patients or users, and keep humans central to decision-making to ensure safety and acceptance [24-30][51-57][64-65][74-75][78-79].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with calls for human-in-the-loop safeguards in AI governance, as highlighted in discussions on smart-city AI and responsible AI at scale [S52][S53][S54].
Coordinated investment and capacity building beyond pure innovation are required to scale AI for health.
Speakers: Prokar Dasgupta, Payden P., Alain Labrique, Haitham Ali Ahmed El‑Noush
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Speakers call for pooled, strategic funding, governance structures, and workforce training to move AI from pilots to sustainable health impact [49-57][60][101-108][110-114][70].
POLICY CONTEXT (KNOWLEDGE BASE)
The WHO roundtable emphasized shifting investment from pure benchmarks toward impact-driven scaling and capacity building, echoing the need for coordinated financing and infrastructure development [S45][S46][S64].
Trust, verification and transparency are prerequisites for AI adoption in health.
Speakers: Zameer Brey, Alain Labrique, Payden P.
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
All three emphasize that AI must be auditable, predictable and trustworthy; trust is described as the currency that unlocks sustainable investment [24-30][115-118][118-119].
POLICY CONTEXT (KNOWLEDGE BASE)
Multi-stakeholder AI principles stress trust and transparency as preconditions for deployment, and recent debates highlight challenges in verification frameworks [S59][S60][S48].
Equity and patient safety must be central; AI should improve health for everyone, not a privileged few.
Speakers: Zameer Brey, Prokar Dasgupta, Payden P.
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Speakers underline zero-tolerance for error, the need for patient involvement, and the goal that AI benefits all populations, not just a few [24-30][51-57][106-108].
POLICY CONTEXT (KNOWLEDGE BASE)
Digital health policy documents underline equitable access and a human-rights-based AI approach, calling for inclusive design and safety for all populations [S61][S58][S63][S55].
Similar Viewpoints
Both stress that AI must be transparent and trustworthy, with verifiable logic and safeguards, as a foundation for safe health deployment [24-30][115-118].
Speakers: Zameer Brey, Alain Labrique
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Both call for a shift from pure innovation to concrete, funded implementation, governance and evidence generation to realise health impact [49-57][101-108].
Speakers: Prokar Dasgupta, Payden P.
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Both highlight that AI must be deployed with patients/humans at the centre, ensuring acceptance and ethical use [51-57][74-75].
Speakers: Prokar Dasgupta, Ken Ichiro Natsume
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
Both stress the need for coordinated, strategic funding mechanisms and partnerships to scale AI for health [70][101-108].
Speakers: Haitham Ali Ahmed El‑Noush, Payden P.
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Unexpected Consensus
Even while advocating for highly autonomous technologies, speakers still stress the necessity of patient safety safeguards.
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Zameer calls for zero-risk, fully verifiable AI, while Prokar promotes autonomous robotics and tele-surgery; nevertheless both agree that safeguards, patient involvement and transparency are non-negotiable, revealing an unexpected alignment between caution and high-tech ambition [24-30][51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Guidelines on safe AI at scale and governance frameworks insist on safety-by-design and guardrails even for autonomous systems [S53][S59][S44].
Overall Assessment

The panel shows strong convergence on four pillars: (1) human‑centred, transparent AI; (2) coordinated investment and capacity building beyond mere innovation; (3) trust and verifiability as pre‑conditions for adoption; and (4) equity and patient safety as overarching goals. These shared positions indicate a high level of consensus that the next phase for AI in health must be grounded in robust governance, pooled funding, skilled workforce and patient‑focused design.

High consensus – the majority of speakers independently reinforce the same themes, suggesting that future policy and funding streams are likely to prioritize trustworthy, human‑centric AI systems supported by coordinated investment and capacity development.

Differences
Different Viewpoints
Risk tolerance – zero‑risk verified AI vs pragmatic deployment with acceptable risk
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Zameer argues that health-care AI must be fully verifiable with a 0 % tolerance for failure, using a glass-box model and safeguards [24-30]. Prokar promotes deploying existing AI tools (ambient note-taking, tele-surgery, autonomous robots) even though they are not yet perfect and notes patient reluctance, suggesting a more pragmatic, incremental approach [49-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on AI trust reveal divergent views on zero-risk expectations versus pragmatic risk acceptance in deployment [S48][S44].
Where investment should be directed – verification infrastructure vs system‑wide governance, data and capacity building
Speakers: Zameer Brey, Payden P., Alain Labrique
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
Zameer focuses funding on creating a transparent, auditable AI pipeline and safeguards [24-30]. Payden stresses that the new priority is financing governance, evidence generation and partnership mechanisms to make AI trustworthy [101-108]. Alain adds that investment must also cover regulatory frameworks, data systems and workforce capacity as essential enabling conditions [110-114]. The three speakers therefore disagree on the primary allocation of resources.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholder discussions note disagreement on prioritizing verification infrastructure versus broader governance, data and capacity investments [S49][S64][S45].
Metrics of success – safety/accuracy versus real‑world impact
Speakers: Alain Labrique, Zameer Brey
Maintaining humans in the loop is crucial for behavior change and achieving real‑world impact (Alain Labrique)
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Alain suggests that impact should be measured by behavior change and outcomes, arguing that benchmarks should focus on impact rather than pure accuracy [63]. Zameer centers the discussion on eliminating any error, using safety as the primary metric [24-30].
POLICY CONTEXT (KNOWLEDGE BASE)
WHO and AI standards panels argue that impact metrics should outweigh pure accuracy benchmarks, urging real-world outcome measurement [S45][S46][S44].
Unexpected Differences
Zero‑risk requirement versus realistic acceptance of residual risk in autonomous systems
Speakers: Zameer Brey, Prokar Dasgupta
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
… autonomous robotic system … 100 % accurate … yet only one hand raised in the room, indicating patient reluctance (Prokar Dasgupta)
Zameer’s insistence on a 0 % failure tolerance [24-30] clashes with Prokar’s presentation of advanced autonomous systems that are touted as “100 % accurate” but still face human acceptance barriers, revealing an unexpected tension between theoretical safety guarantees and practical deployment realities [51-57].
POLICY CONTEXT (KNOWLEDGE BASE)
The verification challenge and broader AI safety discourse highlight tension between zero-risk demands and tolerable residual risk in autonomous health AI [S48][S44].
Overall Assessment

The panel shows broad consensus that AI can transform health, but disagreement centers on risk tolerance, investment priorities, and measurement of success. Zameer pushes for a zero‑risk, fully auditable AI model, while others advocate for pragmatic deployment, system‑level governance, and impact‑focused metrics.

Moderate – the divergences are substantive (risk vs deployment, funding focus) but do not fracture the shared vision of AI‑enabled health improvement. The implications are a need for coordinated policy that balances stringent safety standards with realistic pathways for scaling AI tools.

Partial Agreements
All speakers share the overarching goal of harnessing AI to improve health outcomes, but they diverge on the primary pathway: Zameer stresses absolute verification, Payden stresses governance and partnership, Alain stresses system‑level investment, and Ken stresses a human‑centric deployment. The consensus is that AI must be trustworthy and human‑centered, yet the route to achieve that differs [24-30][101-108][110-114][74-75].
Speakers: Zameer Brey, Payden P., Alain Labrique, Ken Ichiro Natsume
Need for verified, “glass‑box” AI that guarantees zero risk of error and provides a transparent input‑output chain (Zameer Brey)
Transition from speculative possibilities to concrete investment, governance, evidence generation, and long‑term partnerships as the new focus (Payden P.)
Investment must go beyond innovation to fund systems that ensure safety, trust, and scalability through regulation, data infrastructure, and capacity building (Alain Labrique)
AI should be leveraged with humans at the core of its utilization (Ken Ichiro Natsume)
Both agree that financial resources are essential to scale AI for health. Prokar calls for targeted investment in specific tools and patient‑centred pilots, while Haitham calls for coordinated donor strategies and pooled funding mechanisms. They share the goal of mobilising money but differ on coordination versus project‑specific focus [49-57][70].
Speakers: Prokar Dasgupta, Haitham Ali Ahmed El‑Noush
Investment in concrete AI implementations (ambient note‑taking, tele‑surgery, autonomous robotics) and the necessity of patient involvement to achieve equity (Prokar Dasgupta)
Coordination among donors and the development of clear strategies, priorities, and pooled investments to rally support (Haitham Ali Ahmed El‑Noush)
Takeaways
Key takeaways
AI in health must be trustworthy and verifiable – a ‘glass‑box’ approach that provides a transparent input‑output chain and aims for zero risk of error (Zameer Brey).
The conversation has shifted from speculative possibilities to concrete investment, implementation, and impact; funding must support safety, governance, evidence generation, data systems, workforce readiness and long‑term partnerships (Payden P., Alain Labrique).
Coordinated donor strategies and pooled investments are essential to align priorities and scale equitable AI solutions (Haitham Ali Ahmed El‑Noush).
Human beings must remain at the centre of AI deployment – keeping humans in the loop, embedding AI education in medical and nursing curricula, and ensuring societal impact and patient involvement (Ken Ichiro Natsume, Prokar Dasgupta, Justice Prathiba M. Singh).
Equity is a cross‑cutting requirement: AI tools need diverse data, global collaboration (UK, India, Africa), and patient‑focused design to avoid new inequalities (Prokar Dasgupta).
Trust, built through transparent regulation and demonstrable outcomes, is the currency that unlocks sustainable investment in health AI (Payden P.).
Resolutions and action items
Form a working group with partners to develop a pathway for verified, glass‑box AI in healthcare (proposed by Zameer Brey).
Create coordinated donor mechanisms and a shared strategic framework for AI health investments (suggested by Haitham Ali Ahmed El‑Noush).
Invest in governance, regulatory frameworks, evidence generation, data infrastructure and capacity‑building programmes to ensure safe, scalable AI (highlighted by Payden P. and Alain Labrique).
Integrate AI education into medical and nursing curricula worldwide to build a skilled health workforce (Prokar Dasgupta).
Pilot and evaluate concrete AI applications such as ambient note‑taking, tele‑surgery and autonomous robotics with patient involvement to demonstrate equity impact (Prokar Dasgupta).
Establish long‑term, cross‑sector partnerships (government, industry, civil society) to sustain AI health initiatives (Payden P.).
Unresolved issues
Concrete methods and standards for achieving the claimed zero‑risk, fully verifiable AI in clinical practice.
Specific strategies to overcome entrenched clinical workflows and achieve behaviour change among clinicians.
Detailed funding models and allocation mechanisms for coordinated donor investments.
How to operationalise global equity – e.g., data diversification, patient engagement, and access for low‑resource settings.
Exact regulatory and legal frameworks required to certify AI tools as safe and trustworthy.
Metrics and timelines for measuring real‑world health‑outcome improvements attributable to AI.
Suggested compromises
Adopt a phased approach that keeps humans in the loop while progressively increasing AI autonomy, balancing safety with innovation. Combine rapid AI deployment (e.g., ambient note‑taking) with rigorous verification processes before scaling to higher‑risk applications. Align AI development with both technological capability and societal acceptance, ensuring patient and clinician involvement throughout. Prioritise investment in foundational systems (governance, data, training) alongside product development to mitigate risk while advancing impact.
Thought Provoking Comments
When it comes to health care, the bar should be 0% risk of failure, 0% risk of error. We need AI that is verifiable – a ‘glass box’ where we can document the input, see the logic, and ensure it never prescribes something harmful.
This reframes the AI discussion from performance metrics to absolute safety, introducing a stringent verification standard that challenges the prevailing tolerance for probabilistic risk in AI systems.
It shifted the conversation from generic enthusiasm about AI assistance to a critical focus on safety and transparency. Subsequent speakers (e.g., Prokar Dasgupta) referenced the need for trustworthy, equitable AI, and the panel later emphasized governance and trust as essential for investment.
Speaker: Zameer Brey
Responsible AI UK is funding real‑world implementations – from ambient AI that writes notes and shortens OR time, to tele‑surgery 2.0 that lets a surgeon operate 2,500 km away with ≤60 ms latency, and autonomous robotic systems for procedures like prostate treatment.
He moves the dialogue from abstract concepts to concrete, scalable examples, highlighting both technological feasibility and the importance of implementation in diverse settings.
His examples broadened the scope of the discussion to include equity, data diversity, and global health impact, prompting other panelists (Alain Labrique, Payden P.) to stress the need for investment in infrastructure and workforce readiness.
Speaker: Prokar Dasgupta
The challenge isn’t just about accuracy; it’s about impact. We must measure whether AI actually changes health outcomes, not just whether it makes the right prediction.
This comment redirects the metric of success from technical performance to real‑world health impact, urging a shift in evaluation criteria.
It prompted the panel to discuss evidence generation and outcome measurement, influencing Payden’s later summary about moving from possibility to measurable impact.
Speaker: Alain Labrique
We need to move from promise to progress – keep humans at the centre of the AI revolution and ensure AI tools are integrated into workflows with clear, trusted pathways.
Reiterates the central theme of human‑centric AI, reinforcing the ethical and practical necessity of integrating AI without displacing clinicians.
This repetition reinforced the human‑in‑the‑loop principle, which was echoed by Ken Ichiro Natsume and Justice Prathiba Singh, solidifying it as a consensus point before the closing remarks.
Speaker: Zameer Brey (repeated emphasis)
For AI tools and for patients, we must think beyond the Turing test to the ‘Weizenbaum test’ – evaluating societal effects, not just technical capability.
Introduces a novel evaluative framework that expands assessment from technical competence to societal impact, challenging the audience to consider broader consequences.
This sparked a subtle shift toward discussing ethical implications and equity, which later appeared in Payden’s emphasis on AI as a tool for equity versus a driver of new inequalities.
Speaker: Prokar Dasgupta
AI and health have reached an inflection point. The question is no longer whether AI can improve health, but whether we will invest in the right foundations – governance, regulation, evidence, workforce capacity – to ensure it improves health for everyone, not a few.
Synthesizes the discussion into a clear call to action, framing the next steps as investment in systemic enablers rather than just technology development.
Serves as the concluding turning point, consolidating earlier themes (safety, impact, equity, human‑centered design) into a strategic roadmap, and setting the tone for future collaborations and commitments.
Speaker: Payden P.
Overall Assessment

The discussion evolved from an initial, somewhat procedural focus on AI assistance to a nuanced debate about safety, verification, real‑world impact, and equity. Zameer Brey’s risk analogy and call for verifiable ‘glass‑box’ AI forced the panel to confront safety standards, while Prokar Dasgupta’s concrete implementation examples expanded the conversation to global equity and practical deployment. Alain Labrique’s emphasis on impact over accuracy redirected evaluation metrics, and the repeated human‑centric reminders reinforced ethical grounding. The introduction of a societal‑impact test (the ‘Weizenbaum test’) further deepened the ethical dimension. All these pivotal comments converged in Payden’s closing synthesis, which reframed the dialogue as a strategic investment challenge, highlighting governance, trust, and inclusive outcomes as the decisive factors for AI’s future in health. These key interventions shaped the panel’s trajectory, moving it from abstract enthusiasm to a concrete, action‑oriented roadmap.

Follow-up Questions
To what extent will AI improvements translate into actual health outcome improvements?
Understanding the real-world impact of AI on patient health is essential to justify its adoption beyond technical performance.
Speaker: Zameer Brey
How will AI integration shift health outcomes over time?
Longitudinal effects determine whether AI provides sustained benefits or merely short‑term gains.
Speaker: Zameer Brey
How can AI be made verifiable – shifting from a black‑box to a glass‑box model?
Transparency is needed for clinicians and regulators to trust AI recommendations and to audit decision pathways.
Speaker: Zameer Brey
What safeguards can ensure AI never prescribes something a patient is allergic to or that could cause catastrophic error?
Safety guarantees are critical for clinical acceptance and for meeting the zero‑risk expectation in healthcare.
Speaker: Zameer Brey
What level and type of investment is required in clinical research, evaluation, and evidence generation to shift entrenched clinical practice pathways?
Changing long‑standing workflows demands robust evidence and dedicated funding to overcome resistance from clinicians.
Speaker: Zameer Brey, Prokar Dasgupta
What are the broader societal effects of deploying AI tools in healthcare?
Beyond technical performance, AI may influence equity, employment, patient autonomy, and public trust, requiring systematic study.
Speaker: Prokar Dasgupta
How can we move from AI promise to measurable progress in health systems?
Identifying concrete steps, metrics, and implementation pathways is needed to translate hype into tangible benefits.
Speaker: Zameer Brey
How can AI curricula be integrated into medical and nursing education worldwide?
Embedding AI knowledge in health‑professional training ensures future workforce readiness and safe AI use.
Speaker: Prokar Dasgupta
How can we obtain and incorporate more diverse data sets to avoid bias and improve AI equity?
Diverse data are essential to develop AI that works reliably across different populations and reduces health disparities.
Speaker: Prokar Dasgupta
What pathways and standards are needed to develop verified AI that provides a transparent chain of proof for each decision?
Creating verifiable AI frameworks will support regulatory compliance and clinician confidence.
Speaker: Zameer Brey
How can regulatory and legal frameworks be strengthened to build trust and attract sustainable investment in AI for health?
Clear governance and legal certainty are prerequisites for scaling AI solutions responsibly.
Speaker: Payden P.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Lt Gen Vipul Shinghal


Session at a glance

Summary

In his keynote address, a senior Indian Army officer highlighted how artificial intelligence is reshaping modern warfare, recalling the paper maps and slow information flow of his first war game as a young lieutenant 35 years ago [6-8]. He contrasted that with today’s operation rooms dominated by massive digital displays that fuse sensor data and provide near-real-time battlefield pictures, a shift he described as “like Star Wars coming to life” [9].


To illustrate the risks of over-reliance on AI, he recounted a high-tempo mission where an algorithm recommended an immediate strike with a high confidence score and a decision window measured in seconds [10-13]. The commander paused, not because he distrusted the technology but because his experience sensed an anomaly and he asked, “What does the machine not know?” [14-16]. He discovered that a civilian evacuation had just begun and was not yet reflected in the data, meaning the target could include non-combatants, so he delayed the strike and spared innocent lives [19-21]. He used this episode to assert that AI can accelerate recommendations, but only humans can exercise judgment and bear responsibility for the outcome [24-25].


Citing recent statements by the Prime Minister, he emphasized that AI guardrails are not optional for the military but mandatory given the high stakes [26-27]. He noted that the Indian Armed Forces operate in a uniquely complex security environment across contested borders, multiple domains, dense populations, and high escalation intensity [28-30]. Accordingly, the forces view AI as a force multiplier in intelligence fusion, surveillance, logistics and other functions, and have declared this the “year of networking and data-centricity” [31-33].


Indigenous platforms such as ACOM AI-as-a-service, Sama Drishti, Shakti and Akash Teer have been developed through collaboration with industry and startups, and the army remains open to further partnerships for self-reliant transformation [35-38]. The speaker outlined four responsible-AI principles: (i) certain decisions must remain human-controlled and legally accountable [41-44]; (ii) AI-enabled systems should be treated as weapons and tested in contested conditions [46-48]; (iii) transparency must turn the “black box” into a “glass box” so commanders know the data and training provenance [52-55]; and (iv) commanders and staff must be trained to integrate algorithms into operations [56].


He linked these principles to broader governance efforts, referencing India’s AI governance guidelines and ongoing United Nations discussions on meaningful human control and accountability for autonomous weapons [57-60]. Finally, he asserted that India, as a major military power and emerging AI hub grounded in ethical traditions, has both the capacity and credibility to lead the global conversation on responsible AI in warfare [61-63]. The address concluded that while AI will continue to transform the battlefield, human judgment, ethical safeguards, and international governance remain essential to ensure security and moral responsibility [24-25][57-60].


Keypoints

Major discussion points


Rapid transformation of battlefield decision-making through AI – The speaker contrasts the early days of paper maps and slow information flow ([6-9]) with today’s “massive digital display” that fuses sensor data instantly and forces decisions within seconds ([10-13]). He illustrates this shift with a high-tempo scenario where a commander pauses a machine-generated strike, asks “What does the machine not know?” and saves civilian lives by applying human judgment ([14-23]).


Human control, accountability and ethical guardrails are non-negotiable – AI may recommend actions, but ultimate authority must remain with humans; legal and moral responsibility cannot be delegated to machines ([41-45]). Because AI-enabled systems can cause harm, they must be treated as weapons, rigorously tested in contested conditions, and not just as software ([46-48]).


Indigenous AI development and industry-startup collaboration – The Indian Armed Forces are deploying home-grown applications such as ACOM AI, Sama Drishti, Shakti and Akash Teer, all built through partnerships with industry and startups, reflecting a push for self-reliant, data-centric capabilities ([35-38]).


Training and capacity-building for commanders – To operate safely in a data-rich, AI-augmented battlespace, today’s commanders and staff must be educated on algorithm integration, system command, and ethical decision-making ([55-56]).


Need for robust governance frameworks and international cooperation – The speaker calls for AI-specific legal provisions, referencing India’s AI Governance Guidelines and ongoing UN discussions on “meaningful human control” and accountability, positioning India to lead the conversation on responsible AI use in warfare ([57-60]).


Overall purpose / goal


The address aims to inform and persuade the audience, comprising military leaders, industry innovators, and policymakers, that while AI is a decisive force multiplier for the Indian Armed Forces, its deployment must be paired with strict human oversight, transparent development, rigorous testing, and comprehensive governance. The speaker seeks to rally support for collaborative, indigenous innovation and to position India as a responsible global leader in military AI ethics and regulation.


Tone of the discussion


Opening: Formal and proud, highlighting personal experience and the Army’s legacy ([1-5]).


Transition: Cautiously urgent, emphasizing the speed of modern AI-driven operations and the critical need for human judgment ([9-23]).


Prescriptive: Deliberate and normative when outlining responsibilities, accountability, and safety measures ([41-55]).


Collaborative: Optimistic and inviting, stressing partnerships with startups and industry ([35-38]).


Aspirational: Visionary and diplomatic toward the end, calling for international governance and positioning India as a moral leader ([57-63]).


Overall, the tone moves from reflective pride to a warning-laden call for responsibility, then to constructive collaboration, and finally to a forward-looking, diplomatic appeal.


Speakers

Speaker 1


– Role/Title: Keynote speaker representing the Indian Army and the Indian Armed Forces (identified in the session title as Lt Gen Vipul Shinghal)


– Area of Expertise: Military operations, AI integration in defence, strategic decision‑making, AI governance and safety in the armed forces


Additional speakers:


(none identified)


Full session report

The senior Indian Army officer began his keynote by acknowledging the programme’s length, greeting a diverse audience of industry leaders, academics, AI innovators, fellow uniformed colleagues, and students, and noting the honour of representing the Indian Armed Forces on this occasion [1-5].


He then traced the evolution of battlefield decision-making, recalling the analogue era of his first war-game thirty-five years ago when information arrived slowly on paper maps, notes and telephone reports and commanders deliberated with ample time [6-8]. He contrasted this with today’s operation rooms dominated by massive digital displays that fuse data from numerous sensors in real time, with AI instantly analysing the stream to produce a living picture of the battlespace – a change he likened to “Star Wars coming to life” [9].


To illustrate the risks inherent in this speed-driven environment, he described a recent high-tempo mission in which an AI system generated a high-confidence recommendation to strike a target within a decision window measured in seconds. Although the probability score was high, the senior commander deliberately paused, not out of distrust of the technology but because his experience sensed an anomaly. He asked, “What does the machine not know?” and discovered that a civilian evacuation had just begun and was not yet reflected in the sensor data, meaning the algorithm was mis-identifying civilians as enemy troops. By exercising judgement and delaying the strike, the commander spared innocent lives while still achieving the mission objective [10-23].


From this episode he drew a fundamental conclusion: AI can inform, accelerate and recommend actions, but only humans are capable of exercising moral judgement and bearing responsibility for the outcomes of lethal decisions [24-25].


He reinforced this point by referring to recent statements from the Honorable Prime Minister and other eminent speakers, who stressed that safety guardrails for AI are not optional in the military context but mandatory given the high stakes involved [26-27].


The officer highlighted the uniquely complex security environment in which the Indian Armed Forces operate (contested borders, multi-domain challenges, dense civilian populations and a high intensity of escalation), making AI an essential force multiplier across intelligence fusion, surveillance, decision support, maintenance and logistics [28-32].


In line with the vision of technological transformation, the Indian Armed Forces have declared this year the “year of networking and data-centricity” and are committed to fully equipping the services with AI-enabled, data-centric capabilities [33-35]. Indigenous development has been central to this shift. Home-grown applications such as ACOM AI-as-a-Service, the battlefield situational-awareness platform Sama Drishti, and the sensor-shooter fusion systems Shakti and Akash Teer have been created through close collaboration with industry leaders and startups, and the army remains open to further partnerships to deepen self-reliant transformation [35-38].


Four concrete principles for responsible AI deployment were outlined:


1. Human control over lethal decisions – decisions of a lethal nature must never be delegated to machines; legal authority and moral accountability must remain with the commander, not the algorithm [41-45].


2. Treat AI-enabled systems as weapons – because they are designed to cause harm, they must be subjected to rigorous testing under contested battlefield conditions where sensors may be degraded by dust, smoke or deception [46-51].


3. Transparency (“glass-box” AI) – the “black box” of data must become a “glass box”, enabling commanders to understand the data sources and training regimes behind AI outputs [52-55].


4. Dedicated training for commanders and staff – personnel must master algorithm integration, system command and rapid OODA cycles to operate safely in a data-rich, AI-augmented battlespace [55-56].


He also recalled that long-standing treaties such as the rules governing the use of nuclear-biological-chemical (NBC) weapons, the Geneva Convention on the Treatment of Prisoners of War, and the Convention on the Use of Landmines have historically guided armed conflict [26-27].


These domestic measures dovetail with broader governance efforts. India’s newly released AI Governance Guidelines address the risks of generative AI and embed safety guardrails, while the summit’s daily declaration reinforced those guidelines, underscoring their path-breaking nature [57-60]. The speaker noted that the UN Secretary-General had addressed these initiatives just the previous day, and that the United Nations is actively discussing “meaningful human control” and accountability for autonomous weapons [57-60]. Although consensus on international conventions is still evolving, the very fact of the debate reflects a shared concern for preventing unchecked autonomy that could destabilise strategic stability [57-60].


Positioning India as uniquely suited to lead the global conversation on responsible military AI, he drew on the nation’s status as a major military power, a burgeoning AI hub, and a civilisation rooted in ethical restraint, embodied in the concepts of Shakti (force) and Dharma (righteousness). He asserted that India possesses both the capacity and the credibility to shape international norms and to champion a “Manav Vision for AI” that integrates moral and ethical systems into technology development [61-63]. He stressed that while the nature of war may evolve, the conscience of the nation must remain unchanged [61-63].


In summary, the address charted the rapid transformation of warfare from paper-based maps to AI-fused digital battle-spaces, underscored the indispensable role of human judgement and legal accountability, outlined a roadmap for indigenous AI development and industry collaboration, called for rigorous testing, transparency and training, linked national initiatives to emerging international governance frameworks, and concluded with a moral reminder that technological progress must be anchored in an unchanged national conscience.


Session transcript
Speaker 1

Firstly, let me just say this: you know, I know I’m the last speaker of a long day, so I’ll do this quickly and come to the essentials. Distinguished guests, leaders of industry and academia, AI innovators, my colleagues in uniform, who are also innovators, students, ladies and gentlemen, a very good evening to you all. It’s a privilege to be delivering this keynote address representing the Indian Army and the Indian Armed Forces. You know, 35 years ago, when I joined the Army as a young lieutenant, my first war game unfolded in a room dominated by large paper maps. Information arrived slowly: handed-in notes, verbal updates, reports from the field taken on telephone. We pieced that picture together, physically marked it on the map using color-coded pins and flags, and presented it to the commander, who then took a decision deliberately and with reflection, fully aware that the adversary was operating within similar timelines.

Twenty years later, the rhythm began to change. Intelligence became sharper and faster. Operation rooms had a few screens displaying maps, and presentations moved to PowerPoint. The volume of information increased and timelines got compressed, but there was still space to pause and breathe, and the OODA cycle could still breathe. Today, when I walk into an operation room, the difference is stark. It’s like Star Wars coming to life. A massive digital display dominates the wall. Inputs stream in continuously from multiple sensors. Intelligence is fused almost instantly and analyzed by AI, presenting a living, dynamic picture of the battle space. Some of the work we did as left-handers is now automated, and the commander knows that the adversary is seeing much the same picture about us at much the same speed. The pressure is not anymore about awareness; it is about decision. Seconds matter. Hesitation has consequences. It is in this environment of speed, uncertainty and time compression that I want to transport you to an operational stage scenario. During a high-tempo military operation, a senior commander was presented with a machine-generated recommendation, based on multiple sensor feeds and AI analysis, to engage a target immediately.

The system was confident. The probability score of the machine was high. The decision window was measured in seconds. But the commander paused. Not because he didn’t trust the technology. His experience told him that something was amiss. He asked a simple question. What does the machine not know? The pause revealed something the algorithm could not see. A civilian evacuation had just begun minutes earlier, not yet reflected in the data. The machine saw the movement as that of enemy troops, whereas they were civilians. It is even possible that troops were mixed with the civilians. However, the commander exercised judgment and restraint. The strike was delayed, innocent lives were spared, and the mission was still achieved. This moment captures a fundamental truth.

AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them. Yesterday our Honorable Prime Minister and many other eminent speakers spoke of the need for guardrails and safety to be built into AI-enabled models. In the case of the military, these are not optional but mandatory, as the stakes are much higher. The Indian Armed Forces operate in a uniquely complex security environment, across contested borders, multiple domains, dense populations and high escalation intensity. Therefore, ladies and gentlemen, let me clearly state that we in the Defence Forces are fully cognizant that artificial intelligence is fundamentally redefining the modern battle space. Its power in intelligence fusion, surveillance, decision support, maintenance, logistics and a host of other functions is a force multiplier in today’s multi-domain battle space.

In keeping with the vision of technological transformation, the Indian Armed Forces are committed to ensuring that the military is fully equipped with the necessary equipment. The Chief of Army Staff has formally declared this year as the year of networking and data centricity, signaling a deliberate shift towards data-driven operations and AI-enabled capabilities. The evolution is powered by many indigenously built applications: ACOM, AI as a service; Sama Drishti, which is a battlefield situational awareness software; and Shakti and Akash Teer, which are sensor and shooter fusion.

All of these have been built through our collaboration with industry leaders and startups, many of the innovators who have been around at this summit for the last few days. For this self-reliant transformation, we are open to collaboration with many startups and innovators to build it further. However, we are fully cognizant that this needs to be a responsible development of AI. Allow me to reflect on four points in this regard. Firstly, decisions that must not be delegated to AI must always remain human. Human control has to be institutionalized into law and moral accountability. Accountability cannot lie with the machine. If a machine recommends a decision with 90% accuracy and the commander goes with it and it is a wrong decision, it gives the commander a moral buffer.

But is that correct? Secondly, AI-enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as software, and they must be evaluated and tested in contested field conditions. Remember that the battlefield is a chaotic data environment. Sensors get obscured by dust, smoke, deception and many other things. A system that performs well in controlled conditions but fails in battlefield conditions is not a force multiplier; it’s a liability. Thirdly, trust and sovereignty must be built into the system. The commander taking a decision based on an AI-enabled system must know what data is being used and how the system has been trained. The black box of data must become a glass box.

And fourthly, commanders and staff of today need to be trained for this fast-evolving battlefield. As I told you about the operational scenario, as it was 30 years ago and as it is today in a war game, we need to be able to integrate algorithms, to command systems, and to know how to go forward. The Indian Army is taking steps in training our commanders and staff in this direction. The next thing that I’d like to say is that, in sum, the nature of war may change, but our conscience must not. It is important to recognize that these concerns about AI safety and governance are not confined to the military domain alone; they are increasingly shaping national policy. The launch of the India AI Governance Guidelines and the daily declaration during the summit, which just happened during this summit, is a path-breaking step in this direction. This framework defines AI systems as being generative and therefore having unintended consequences, and this has lessons for us as military planners. At this stage, I would also like to remind ourselves of a historical truth. I do believe in the wisdom of humanity: whenever faced with a new crisis, we have found ways to face it. The rules governing the use of NBC weapons, the Geneva Convention on Treatment of Prisoners of War, the Convention on Use of Landmines and other such frameworks have stood the test of time and, with few exceptions, have been followed during conflicts also.

In a similar manner, a set of governance frameworks and legal provisions needs to be evolved for the use of AI-based systems and autonomous weapons. Already, under the framework of the United Nations, discussions are underway around meaningful human control and accountability. His Excellency the UN Secretary General also talked about various such initiatives just yesterday. While consensus remains complex, the debate itself reflects a shared concern that autonomy without restraint would undermine strategic stability. India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint, understanding that Shakti, that is force, and Dharma, that is righteousness, must go hand in hand, has both the capacity and the credibility to lead this conversation.

The clear and all-encompassing Manav Vision for AI, enunciated by the Honorable Prime Minister in this hall yesterday, emphasizing moral and ethical systems as well as

Related Resources: knowledge base sources related to the discussion topics (40)
Factual Notes: claims verified against the Diplo knowledge base (5)
Confirmed (high confidence)

“The senior Indian Army officer began his keynote representing the Indian Armed Forces.”

The knowledge base identifies Lt Gen Vipul Shinghal as a senior Indian Army officer representing the Indian Armed Forces as a keynote speaker [S10].

Confirmed (medium confidence)

“He has 35 years of military service, recalling his first war‑game thirty‑five years ago.”

S10 notes that the speaker has 35 years of service in the Indian Army, confirming the timeframe referenced in the report [S10].

Confirmed (high confidence)

“AI can inform, accelerate and recommend actions, but only humans are capable of exercising moral judgement and bearing responsibility for lethal decisions.”

The source states that AI can inform, accelerate and recommend decisions, but emphasizes the need for human judgement and responsibility [S10].

Additional Context (medium confidence)

“Human oversight is essential to ensure moral judgement and accountability in AI‑driven military operations.”

S105 highlights that maintaining humans-in-the-loop is crucial for oversight in AI-enabled targeting and decision-support systems, adding nuance to the report’s claim about moral judgement [S105].

Additional Context (low confidence)

“AI‑enabled systems must be treated as weapons and evaluated in contested field conditions because the battlefield is a chaotic data environment.”

S16 explains that AI-enabled systems designed to cause harm should be treated as weapons and tested under contested, data-chaotic conditions, providing additional context to the report’s discussion of AI risks and guardrails [S16].

External Sources (113)
S1
Keynote – Martin Schroeter — Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S2
Responsible AI for Children Safe Playful and Empowering Learning — Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S3
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S4
Keynote: 2030 – The Rise of an AI Storytelling Civilization | India AI Impact Summit — Speaker 1’s presentation represents a masterful progression from current state analysis to future vision, punctuated by …
S5
Using AI to tackle our planet’s most urgent problems — ## Community-Driven Mapping and Success Stories 1. **The Earth Layer**: Changes occurring over decades, representing fu…
S6
Workshop 1: AI &amp; non-discrimination in digital spaces: from prevention to redress — Ayça Dibekoğlu: Please object now, or until the 25th of May, until when we can finalize our messages. Okay, I see no ob…
S7
AI for Safer Workplaces &amp; Smarter Industries Transforming Risk into Real-Time Intelligence — Naveen GV: out a long, lengthy form of information for that to be processed much later by another human…
S8
Open Forum: A Primer on AI — Artificial Intelligence is advancing at a rapid pace
S9
Keynotes — Oleksandr Bornyakov: Dear ladies and gentlemen, I’m honored to represent Ukraine today here in Strasbourg in the heart o…
S10
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S11
Conversation: 01 — Artificial intelligence
S12
WS #110 AI Innovation Responsible Development Ethical Imperatives — Guilherme Canela de Souza Godoi: Thank you very much. First and foremost, thank you so much for the invitation to be her…
S13
Opening Ceremony — **Lucio Adrian Ruiz**, Secretary for the Dicastery for Communication from the Holy See, provided a philosophical perspec…
S14
Ancient history can bring clarity to AI regulation and digital diplomacy — In his op-ed, From Hammurabi to ChatGPT, Jovan Kurbalija draws on the ancient Code of Hammurabi to argue for a principle …
S15
Open Forum #79 Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — – Vint Cerf – Olga Cavalli – Gerald Folkvord — Human rights principles | Cyberconflict and warfare — Importance of human con…
S16
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S17
9821st meeting — At the same time, as we’ve heard, if it’s misused, AI can pose tremendous threats to the international peace and securit…
S18
UNSC meeting: Artificial intelligence, peace and security — Yi Zeng: My name is Yi Zeng and I would like to take this opportunity to share with distinguished representatives my pers…
S19
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S20
WS #123 Responsible AI in Security Governance Risks and Innovation — Alexi Drew: Thank you, I’ll run through these nice and quickly in the interest of giving people their time. I’d like to …
S21
Artificial intelligence (AI) – UN Security Council — Another session highlighted the need for transparency and accountability in AI algorithms. The speakers advocated for AI…
S22
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — -Professor Suresh: From Amrita Vishwa Vidyapetam – participated in the report launch ceremony -Speaker 1: Event moderat…
S23
Bridging the AI innovation gap — ## Call for Partnerships ### Innovation Factory and Acceleration Programme LJ Rich: to invite our opening keynote. It’…
S24
Open Forum #33 Building an International AI Cooperation Ecosystem — Dai Wei: Distinguished guests, ladies and gentlemen, good day to you all. I’m delighted to join you in this United Natio…
S25
Hard power of AI — In conclusion, the analysis provides insights into the dynamic relationship between technology, politics, and AI. It hig…
S26
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the imp…
S27
Keynote-Mukesh Dhirubhai Ambani — “First, AI for India’s deep tech and advanced manufacturing leadership.”[9]. “Second, world leading multilingual AI capa…
S28
The Global Power Shift India’s Rise in AI &amp; Semiconductors — How do you make sure that. there is enough packaging verification and many of that ecosystem getting developed. So all o…
S29
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S30
The AI soldier and the ethics of war — For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, tec…
S31
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Development | Legal and regulatory Effective capacity building requires training…
S32
WS #184 AI in Warfare – Role of AI in upholding International Law — Reference to existing legal principles such as command responsibility and state responsibility. Shaigan discusses the c…
S33
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent…
S34
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — But is that correct? Secondly, AI -enabled systems are designed to cause harm. Therefore they must be treated as a weapo…
S35
Policymaker’s Guide to International AI Safety Coordination — Translating scientific knowledge into effective policy requires extensive testing, simulations, and understanding of rea…
S36
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S37
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S38
Artificial intelligence (AI) – UN Security Council — The discussions on structuring capacity-building initiatives in AI to maximize their impact, especially in regions with …
S39
Panel Discussion Summary: AI Governance Implementation and Capacity Building in Government — All speakers acknowledge that having strategies and frameworks is insufficient without proper implementation mechanisms,…
S40
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — It underscores the need for capacity building, affordability, accessibility, inclusivity, and responsible governance to …
S41
Military AI: Operational dangers and the regulatory void — For the first time, in 2023, the UN Security Council discussed the implications of AI on world peace and security confir…
S42
UNSC meeting: Scientific developments, peace and security — Malta: Thank you, President. I begin by thanking the Swiss Presidency for organizing today’s briefing on this important a…
S43
What is it about AI that we need to regulate? — What is it about AI that we need to regulate? The discussions across the Internet Governance Forum 2025 sessions revealed…
S44
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S45
UNSC meeting: Artificial intelligence, peace and security — Brazil: Thank you, Mr. President, Mr. President, dear colleagues. I thank the Secretary General for his briefing today an…
S46
How to make AI governance fit for purpose? — Given that AI technologies are inherently global, effective governance requires international engagement and cooperation…
S47
Catalyzing Global Investment in AI for Health: WHO Strategic Roundtable — Verified AI extends beyond accuracy to encompass complete transparency in decision-making processes. Brey advocated for …
S48
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S49
Laying the foundations for AI governance — – The need for collaboration between industry and regulators
S50
Scaling Trusted AI: How France and India Are Building Industrial &amp; Innovation Bridges — Unexpectedly, these speakers represent different philosophies toward AI development. Sheth emphasizes building indigenou…
S51
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — The panel articulated a sophisticated approach to AI sovereignty that goes beyond technological nationalism. Success req…
S52
Why will AI enhance, not replace, human diplomacy? — AI can support crisis response by running simulations, analysing data in real-time, and suggesting contingency plans. Ho…
S53
WS #184 AI in Warfare – Role of AI in upholding International Law — Accountability and Responsibility Sheikh-Ali maintains that human responsibility and accountability are ultimately nece…
S54
Artificial intelligence — Within the UN System, the High Contracting Parties to the Convention on Certain Conventional Weapons (CCW) established a…
S55
Keynote-Jeet Adani — This comment reframes potential criticism of nationalist AI policy as strategic wisdom rather than protectionism. It pro…
S56
The transformative role of ai in modern warfare: a detailed analysis — In late 2021, the Royal Navy’s collaboration with major tech companies, including Microsoft and Amazon Web Services, res…
S57
Pre 10: Regulation of Autonomous Weapon Systems: Navigating the Legal and Ethical Imperative — This comment elevated the discussion from tactical considerations to strategic and philosophical implications. It forced…
S58
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Lt Gen Vipul Shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S59
The transformative role of ai in modern warfare: a detailed analysis — In late 2021, the Royal Navy’s collaboration with major tech companies, including Microsoft and Amazon Web Services, res…
S60
AI in Action: When technology serves humanity — Principles, however, remain abstract until seen in practice. This week turns to concrete examples of AI amplifying human…
S61
AI in practice across the UN system: UN 2.0 AI Expo — The UN 2.0 Data & Digital Community AI Expo examined how AI is currently embedded within the operational, analytical and i…
S62
https://dig.watch/event/india-ai-impact-summit-2026/building-trusted-ai-at-scale-cities-startups-digital-sovereignty-keynote-lt-gen-vipul-shinghal — All of these have been built through our collaboration with industry, leaders and startups. Many of the innovators who h…
S63
AI and EDTs in Warfare: Ethics, Challenges, Trends | IGF 2023 WS #409 — The study cautions against completely relinquishing the final decision-making power to AI systems. It emphasises the imp…
S64
Ethics and AI | Part 5 — Concerned that certain activities within the lifecycle of artificial intelligence systems may undermine human dignity an…
S65
Designing India’s Digital Future AI at the Core 6G at the Edge — The strong consensus among government, industry, and technical experts on the need for indigenous capabilities, balanced…
S66
From KW to GW Scaling the Infrastructure of the Global AI Economy — “So it is basically a collaboration between the Indian startups and the global technology strength of a global company”[…
S67
India boosts military AI efforts amid China rivalry — India is ramping up itseffortsin the field of AI, not only for commercial purposes but also for military applications, a…
S68
The Global Power Shift India’s Rise in AI &amp; Semiconductors — How do you make sure that. there is enough packaging verification and many of that ecosystem getting developed. So all o…
S69
AI and international peace and security: Key issues and relevance for Geneva — Capacity-Building Initiatives: Capacity-building initiatives are vital for equipping states with the knowledge and skill…
S70
Open Forum #3 Cyberdefense and AI in Developing Economies — Capacity Building and Human Resources Effective capacity building requires training at multiple levels – technical trai…
S71
The AI soldier and the ethics of war — For decades, Western militaries have led technological revolutions on the battlefield. From bows to tanks to drones, tec…
S72
UNSC meeting: Scientific developments, peace and security — 2. Regulatory Frameworks and Governance- China: Supported UN as a platform for global technology governance and called f…
S73
Dedicated stakeholder session — Given the transformative impact of such technologies, there’s a critical need for robust legal guidelines to ensure ethi…
S74
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — I mean, with the code of practice, we try to build a culture of restraint in the functioning of systems that can prevent…
S75
Open Mic &amp; Closing Ceremony — The overall tone was formal yet appreciative. There was a sense of accomplishment and gratitude expressed throughout, wi…
S76
Opening Remarks (50th IFDT) — The overall tone was formal yet warm and celebratory. Speakers expressed pride in the IFDT’s accomplishments and gratitu…
S77
World Economic Forum Annual Meeting Closing Remarks: Summary — The tone is consistently positive, celebratory, and grateful throughout the discussion. It begins with formal appreciati…
S78
Building Future Leaders – Competency Driven Succession Planning — This comment provides an insightful definition of leadership that goes beyond formal positions, emphasizing personal qua…
S79
Opening Ceremony — The tone is consistently formal, diplomatic, and optimistic yet cautionary. Speakers maintain a celebratory atmosphere a…
S80
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S81
Laying the foundations for AI governance — The tone was collaborative and constructive throughout, with panelists building on each other’s points rather than disag…
S82
AI Meets Cybersecurity Trust Governance &amp; Global Security — “Move fast, break things.”[113]”And the motto there is move deliberately and maintain things.”[114]”How to be able to ge…
S83
AI for Humanity: AI based on Human Rights (WorldBank) — Satola also highlights the interconnected nature of AI with other emerging technologies such as 5G and quantum computing…
S84
Legal Notice: — In this scenario, responsibility again creates few problems, at least as far as attribution goes. State A is under a…
S85
WS #179 Navigating Online Safety for Children and Youth — 1. Tech Companies: The role of corporations in proactively ensuring child safety was debated, with some calling for grea…
S86
Subject matter — 1. Member States shall ensure that the supervisory or enforcement measures imposed on essential entities in respect of t…
S87
PREAMBLE — unless such measures are provided for in its laws and regulations and are administered in a reasonable, objective and im…
S88
Democratizing AI: Open foundations and shared resources for global impact — The tone was consistently collaborative, optimistic, and forward-looking throughout the discussion. Speakers maintained …
S89
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — A conscientious request for clarity and specificity was also apparent, underlining the need for concrete, actionable pla…
S90
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S91
Building the Future STPI Global Partnerships &amp; Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S92
[Tentative Translation] — –  In order to promote the creation of needs-pull innovation by the government, the government will promote the new Jap…
S93
Keynote-Rishi Sunak — The tone was consistently optimistic and inspirational throughout. Sunak maintained an enthusiastic, forward-looking per…
S94
Ad Hoc Consultation: Friday 2nd February, Afternoon session — In summation, India’s advocacy for methodical international governance reform highlights its commitment to terminologica…
S95
Ad Hoc Consultation: Friday 9th February, Morning session — Although negative feelings are held toward Article 57 as it stands, the positive sentiment associated with backing Iran’…
S96
https://dig.watch/event/india-ai-impact-summit-2026/leaders-plenary-global-vision-for-ai-impact-and-governance-morning-session-part-1 — My email ID is ttopgay at cabinet .gov .pt. Your Excellencies, the AI revolution will not wait for us. It will continue …
S97
FOREWORD — the desire of, leaders to wield some influence over the external images of the places they rule are, of course, as old a…
S98
Building Trusted AI at Scale Cities Startups &amp; Digital Sovereignty – Keynote Amb Thomas Schneider — Schneider began by thanking the Indian government for bringing together leaders, innovators, researchers, and civil soci…
S99
Keynote Adresses at India AI Impact Summit 2026 — Gore reinforced this assessment, noting that “India’s entry into Pax Silica isn’t just symbolic, it’s strategic, it’s es…
S100
Knowledge and Diplomacy — In the last quarter of this fading century a technological revolution, centred around information, has transformed …
S101
The waning of mind maps — In order to survive, a hunter-gatherer of yore (or his contemporaries today) needed a mind map with information on game, w…
S102
Most transformative decade begins as Kurzweil’s AI vision unfolds — AI no longer belongs to speculative fiction or distant possibility. In many ways, it has arrived. From machine translati…
S103
EU Artificial Intelligence Act — (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as a…
S104
EU AI Act (Commission proposal) — (8) The notion of remote biometric identification system as used in this Regulation should be defined functionally, as a…
S105
Diplomacy in beta: From Geneva principles to Abu Dhabi deliberations in the age of algorithms — AI in conflict is a central concern, with risks extending far beyond LAWS. AI is integrated into target identification, …
S106
Interim Report: — 27. Other risks are more a product of humans than AI. Deep fakes and hostile information campaigns are merely the lates…
S107
Building Trustworthy AI Foundations and Practical Pathways — And it says, I have technically, correctly satisfied your query. Everything you said, I have done. And so when we give a…
S108
https://dig.watch/event/india-ai-impact-summit-2026/ai-for-democracy_-reimagining-governance-in-the-age-of-intelligence — For the first time, a technology may reach a stage at which individuals can no longer reliably determine whether what th…
S109
Table of Contents — 3. Stakeholder Risks: lack of support, management failure, organizational structure. 4. Regulatory Risks: Noncompliance…
S110
SEARCHING FOR MEANINGFUL HUMAN CONTROL — Despite these daunting challenges, some states point out that LAWS could have military, and even humanitarian, ad…
S111
By the Same Author — After that first referendum, intense political activity had continued in Sikkim, leading to some disturbances. O…
S112
AI and the moral compass: What we can do vs what we should do — We sometimes speak of ‘ethical AI’, but ethics is not a property of code. Algorithms can simulate empathy, but they canno…
S113
Is the world ready for AI to rule justice? — AI is creeping into almost every corner of our lives, and it seems the justice system’s turn has finally come. As techno…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
15 arguments · 177 words per minute · 1445 words · 489 seconds
Argument 1
Historical shift from paper maps to AI‑fused digital battle‑space (Speaker 1)
EXPLANATION
The speaker describes how military information processing has transitioned from manual paper maps to AI‑driven digital systems. This shift reflects a broader transformation in operational tempo and decision‑making.
EVIDENCE
He recounts joining the Army 35 years ago, using large paper maps, handwritten notes and telephone updates to build a picture of the battlefield, and contrasts that with today’s operation rooms that feature massive digital displays and AI-fused sensor data creating a living, dynamic picture of the battlespace [6-9].
MAJOR DISCUSSION POINT
Evolution of AI in Military Operations
AGREED WITH
Other eminent speakers (referenced)
Argument 2
Modern AI provides real‑time, multi‑sensor situational awareness and acts as a force multiplier (Speaker 1)
EXPLANATION
Modern AI delivers instantaneous fusion of data from multiple sensors, giving commanders a comprehensive, real‑time view of the battlefield. This capability multiplies force effectiveness across domains.
EVIDENCE
The speaker describes a massive digital display that continuously ingests data from many sensors, with AI instantly fusing and analysing it to present a living picture of the battlespace, and later labels AI as a force multiplier and a key element of data-centric transformation [9][31-32].
MAJOR DISCUSSION POINT
Evolution of AI in Military Operations
AGREED WITH
Other eminent speakers (referenced)
Argument 3
Commander’s pause despite high‑confidence AI recommendation saved civilian lives (Speaker 1)
EXPLANATION
In a high‑tempo operation, a commander halted a machine‑generated strike recommendation despite a high confidence score, asking what the system did not know. This pause uncovered an ongoing civilian evacuation, preventing civilian casualties while still achieving the mission.
EVIDENCE
The transcript details that the commander paused, asked “What does the machine not know?”, discovered that a civilian evacuation had just begun and was not reflected in the data, and consequently delayed the strike, sparing innocent lives and still achieving the mission [13-24].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
Argument 4
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
EXPLANATION
The speaker asserts that while AI can inform and recommend, ultimate moral and legal accountability for decisions must stay with humans. Machines cannot bear responsibility, and delegating it would create an inappropriate moral buffer.
EVIDENCE
He states that AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility, and further emphasizes that accountability cannot be placed on the machine [25][41-45].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Several sources stress that AI cannot bear moral or legal accountability and that responsibility must stay with humans, e.g., the speaker’s own remarks about accountability not residing with the machine [S10], the philosophical view that AI is not a subject and thus cannot be responsible [S13], and UN-level discussions on human control over lethal AI systems [S15].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 5
Institutionalizing human control in law is essential (Speaker 1)
EXPLANATION
The speaker calls for codifying human oversight of AI systems into law, ensuring that humans retain institutional control and moral responsibility over AI‑driven actions. Such a legal framework would prevent undue reliance on autonomous decisions.
EVIDENCE
He explicitly says that human control must be institutionalized through law and moral accountability, and that accountability cannot rest with the machine [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need to codify human oversight into law is highlighted in the keynote where the speaker says human control must be institutionalized [S10], reinforced by UN-focused analyses on legal frameworks for autonomous weapons [S15] and historical analogues of legal accountability [S14].
MAJOR DISCUSSION POINT
Human Judgment and Accountability
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 6
AI‑enabled systems are weapons and must be tested in contested battlefield conditions (Speaker 1)
EXPLANATION
AI‑enabled military systems are fundamentally weapons and must be evaluated under realistic, contested battlefield conditions. Testing only in controlled environments is insufficient because battlefield data can be obscured or deceptive.
EVIDENCE
The speaker notes that AI-enabled systems are designed to cause harm, must be treated as weapons, and therefore need to be evaluated and tested in contested field conditions where sensors may be obscured by dust, smoke, or deception [46-51].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker argues that AI-enabled systems are weapons that require testing in realistic, contested environments, a point echoed in the same keynote [S10] and in broader UN-level discussions on autonomous weapon regulation [S15].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
Argument 7
Certain critical decisions must never be delegated to AI (Speaker 1)
EXPLANATION
High‑stakes decisions should remain under human authority and never be handed over to AI, as delegating such decisions erodes accountability. The speaker stresses that some decisions must always stay human‑centric.
EVIDENCE
He asks which decisions must never be delegated to AI and must always remain human, and reinforces that accountability cannot rest with the machine, highlighting the need for human-only decision making [41-44].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The argument that some high-stakes decisions must remain human-centric is supported by the speaker’s insistence that certain decisions must never be delegated to AI and must always remain human [S10], and by UN-centric policy papers emphasizing human control over lethal AI functions [S15].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
AGREED WITH
Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Argument 8
Transparency: AI “black box” must become a “glass box” showing data sources and training (Speaker 1)
EXPLANATION
Transparency is essential; the opaque “black box” of AI must become a “glass box” where users can see the data sources and training methods. This builds trust and ensures informed use of AI systems.
EVIDENCE
He stresses that commanders must know what data is being used and how the system was trained, calling for the black box to become a glass box to foster trust and sovereignty [52-55].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for turning AI’s opaque black box into a transparent glass box are made in the keynote [S10] and are reinforced by broader governance recommendations for algorithmic transparency and traceability [S21].
MAJOR DISCUSSION POINT
Principles for Responsible AI Deployment
AGREED WITH
UN Secretary‑General (referenced)
Argument 9
Indigenous applications (ACOM AI, Sama Drishti, Shakti, Akash Teer) illustrate self‑reliant AI capability (Speaker 1)
EXPLANATION
India has developed several indigenous AI applications—ACOM AI, Sama Drishti, Shakti, and Akash Teer—that demonstrate self‑reliant capabilities in battlefield awareness and sensor‑shooter fusion. These showcase domestic innovation and technological independence.
EVIDENCE
He lists the indigenously built applications ACOM AI, Sama Drishti, Shakti and Akash Teer as examples of battlefield situational awareness and sensor-shooter fusion, noting they were created through collaboration with industry and startups [35-36].
MAJOR DISCUSSION POINT
Indigenous Development and Industry Collaboration
AGREED WITH
Industry leaders (referenced)
Argument 10
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
EXPLANATION
The armed forces are actively seeking partnerships with startups and innovators to further advance AI integration, emphasizing openness to collaboration for a self‑reliant transformation. Such partnerships aim to accelerate development and deployment of AI capabilities.
EVIDENCE
He mentions collaboration with industry, leaders and startups throughout the summit and states openness to further collaboration with many startups and innovators to build AI further [36-38].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The speaker highlights openness to industry and startup partnerships for AI development, a point reiterated in the keynote and related commentary on collaborative AI ecosystems [S10] and additional remarks about ongoing collaboration initiatives [S16].
MAJOR DISCUSSION POINT
Indigenous Development and Industry Collaboration
AGREED WITH
Industry leaders (referenced)
Argument 11
Command staff need education on algorithms, AI‑enabled systems, and rapid decision‑making (Speaker 1)
EXPLANATION
Modern commanders must be trained to understand algorithms, AI‑enabled systems, and the rapid decision cycles they create. This education is crucial for effective integration of AI into military operations.
EVIDENCE
He notes that commanders and staff need to be trained for fast-evolving battlefields, to integrate algorithms and command systems, and to know how to move forward, adding that the Indian Army is taking steps to train its command staff in this direction [55-56].
MAJOR DISCUSSION POINT
Training and Capacity Building for Commanders
AGREED WITH
Defence training authorities (referenced)
Argument 12
Indian Army is implementing training programmes for AI‑driven operations (Speaker 1)
EXPLANATION
The Indian Army is instituting specific training programmes to equip its personnel with the skills needed for AI‑driven operational environments. These programmes aim to build competence in using AI tools for decision support.
EVIDENCE
The same passage on training staff about algorithms and AI-enabled systems indicates that the Indian Army is taking steps to train its command staff for AI-driven operations [55-56].
MAJOR DISCUSSION POINT
Training and Capacity Building for Commanders
AGREED WITH
Defence training authorities (referenced)
Argument 13
India’s AI governance guidelines address generative AI risks and set safety standards (Speaker 1)
EXPLANATION
India has introduced AI governance guidelines that specifically address the risks of generative AI and establish safety standards for AI deployment. These guidelines aim to ensure responsible development and use of AI technologies.
EVIDENCE
He references the launch of the India AI governance guidelines, which define generative AI systems and their unintended consequences, providing a framework for safety and responsible use [57].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Argument 14
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
EXPLANATION
The United Nations is currently debating frameworks for “meaningful human control” over autonomous weapons and mechanisms for accountability, reflecting global concern over AI militarization. These discussions aim to shape international norms and safeguards.
EVIDENCE
He notes that under the UN framework discussions are underway around meaningful human control and accountability, and that the UN Secretary-General highlighted these initiatives recently [58-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The existence of UN-level debates on meaningful human control and accountability for autonomous weapons is documented in analyses of autonomous weapon regulation and legal-ethical imperatives [S15].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Argument 15
New legal conventions, akin to Geneva and land‑mine treaties, are required for AI weapons (Speaker 1)
EXPLANATION
Just as treaties like the Geneva Convention regulate conventional weapons, new international legal instruments are needed to govern AI‑based weapons, ensuring ethical use and strategic stability. Such conventions would embed accountability and human control into AI weapon systems.
EVIDENCE
He draws a parallel with historical weapons treaties (Geneva, land-mine, NBC weapons) and argues that similar governance frameworks and legal provisions are needed for AI-based systems, referencing ongoing UN discussions and the need for meaningful human control [56][57-59].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The call for new international legal instruments governing AI weapons parallels existing treaties and is supported by UN-focused discussions on autonomous weapon regulation [S15] as well as historical perspectives on legal accountability drawn from ancient codes [S14].
MAJOR DISCUSSION POINT
AI Governance and International Legal Frameworks
AGREED WITH
UN Secretary‑General (referenced)
Agreements
Agreement Points
Human oversight and moral/legal accountability must remain with humans for AI‑enabled military decisions
Speakers: Speaker 1, Honorable Prime Minister (referenced), UN Secretary‑General (referenced)
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
Institutionalizing human control in law is essential (Speaker 1)
Certain critical decisions must never be delegated to AI (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
Speaker 1 stresses that AI can only inform and recommend; ultimate judgment, along with moral and legal responsibility, must stay with human commanders and be codified in law, a view echoed by the Prime Minister’s call for guardrails and the UN Secretary-General’s emphasis on meaningful human control [25][41-45][26][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
This stance reflects International Humanitarian Law principles and the UN’s emphasis on human control over lethal autonomous systems, as highlighted by the CCW experts and UN Security Council discussions on AI in warfare [S53][S57][S54][S52].
AI is a force multiplier that fundamentally reshapes the modern battlespace
Speakers: Speaker 1, Other eminent speakers (referenced)
Historical shift from paper maps to AI‑fused digital battle‑space (Speaker 1)
Modern AI provides real‑time, multi‑sensor situational awareness and acts as a force multiplier (Speaker 1)
Speaker 1 describes the evolution from manual paper maps to AI-driven digital displays that fuse sensor data instantly, positioning AI as a decisive force multiplier in today’s multi-domain operations [6-9][31-32].
POLICY CONTEXT (KNOWLEDGE BASE)
UN Security Council briefings have noted AI’s transformative impact on combat operations, describing it as a force multiplier that changes the nature of war, exemplified by projects like the Royal Navy’s “StormCloud” integration effort [S41][S56][S45].
Transparency of AI systems (turning the “black box” into a “glass box”) is required for trust and sovereignty
Speakers: Speaker 1, UN Secretary‑General (referenced)
Transparency: AI “black box” must become a “glass box” showing data sources and training (Speaker 1)
UN discussions on meaningful human control and accountability (Speaker 1)
Speaker 1 calls for commanders to know the data and training behind AI outputs, urging a shift from opaque black-box models to transparent “glass-box” systems, a demand that aligns with UN calls for accountability and oversight [52-55][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
UNESCO’s AI ethics recommendations call for traceable, explainable AI (“glass-box”) to ensure trust and sovereign decision-making, echoing verification challenges discussed in AI governance forums [S44][S47][S36][S48].
AI‑enabled weapon systems must be tested under realistic, contested battlefield conditions
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
Speaker 1 argues that because AI systems are designed to cause harm, they should be treated as weapons and evaluated in real combat environments where sensors can be obscured, a stance mirrored in UN deliberations on autonomous weapons [46-51][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Lt Gen Vipul Shinghal emphasized that AI weapons must be evaluated in contested field conditions, and policy guides stress extensive testing and simulations before deployment [S34][S35].
Collaboration with industry, startups and indigenous development is essential for a self‑reliant AI capability
Speakers: Speaker 1, Industry leaders (referenced)
Indigenous applications (ACOM AI, Sama Drishti, Shakti, Akash Teer) illustrate self‑reliant AI capability (Speaker 1)
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
Speaker 1 highlights home-grown AI tools such as ACOM AI and Shakti and stresses openness to partnerships with startups and industry to deepen India’s self-reliant AI transformation [35-38].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy papers underline the need for industry-government partnerships and indigenous AI development to achieve strategic autonomy, as seen in France-India collaborations and sovereign AI initiatives [S49][S50][S51][S55].
Dedicated training and capacity building for commanders on AI‑driven operations is required
Speakers: Speaker 1, Defence training authorities (referenced)
Command staff need education on algorithms, AI‑enabled systems, and rapid decision‑making (Speaker 1)
Indian Army is implementing training programmes for AI‑driven operations (Speaker 1)
Speaker 1 notes that today’s commanders must understand algorithms and AI-supported decision cycles, and that the Indian Army is already launching training programmes to build this capability [55-56].
POLICY CONTEXT (KNOWLEDGE BASE)
UN and multilateral capacity-building programs advocate dedicated training for military leaders to responsibly integrate AI, highlighting the gap between policy and implementation [S37][S38][S39][S40].
National and international governance frameworks, akin to existing weapons treaties, are needed for AI weapons
Speakers: Speaker 1, UN Secretary‑General (referenced)
India’s AI governance guidelines address generative AI risks and set safety standards (Speaker 1)
UN discussions on “meaningful human control” and accountability for autonomous weapons (Speaker 1)
New legal conventions, akin to Geneva and land‑mine treaties, are required for AI weapons (Speaker 1)
Speaker 1 points to India’s AI governance guidelines and calls for new international conventions comparable to the Geneva Convention, echoing UN efforts on meaningful human control and accountability [57][58-60][56-59].
POLICY CONTEXT (KNOWLEDGE BASE)
The CCW’s Group of Governmental Experts and UN Security Council resolutions call for treaty-like governance structures for lethal autonomous weapons, mirroring existing arms control regimes [S41][S43][S46][S54][S57].
Similar Viewpoints
Both emphasize that AI governance must be anchored in law to ensure human accountability for lethal decisions [41-45][26].
Speakers: Speaker 1, Honorable Prime Minister (referenced)
Moral and legal responsibility must remain with humans, not machines (Speaker 1)
Institutionalizing human control in law is essential (Speaker 1)
Both stress that autonomous weapon systems require rigorous testing and human oversight to avoid unintended harm [46-51][58-60].
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on meaningful human control and accountability for autonomous weapons (Speaker 1)
Both advocate for a collaborative ecosystem that leverages domestic innovation and private‑sector partnerships to build AI capacity [35-38].
Speakers: Speaker 1, Industry leaders (referenced)
Indigenous applications illustrate self‑reliant AI capability (Speaker 1)
Open collaboration with startups and innovators to accelerate AI integration (Speaker 1)
Unexpected Consensus
The military’s explicit framing of AI systems as weapons that must be regulated mirrors civilian UN concerns about autonomous weapons
Speakers: Speaker 1, UN Secretary‑General (referenced)
AI‑enabled systems are weapons and must be tested in contested field conditions (Speaker 1)
UN discussions on meaningful human control and accountability for autonomous weapons (Speaker 1)
It is notable that a senior defence officer treats AI as a weapon requiring battlefield testing, aligning closely with UN diplomatic discourse that typically originates from civilian human-rights perspectives, indicating cross-sector convergence on regulation [46-51][58-60].
POLICY CONTEXT (KNOWLEDGE BASE)
Civilian UN forums have framed autonomous AI as weapons requiring regulation, a view echoed by military stakeholders and reflected in CCW deliberations [S41][S45][S53][S54].
Overall Assessment

Across the keynote and referenced remarks, there is strong convergence on six core themes: (1) human oversight and legal accountability for AI‑driven lethal decisions; (2) AI as a decisive force multiplier; (3) transparency of AI models; (4) treating AI systems as weapons that need realistic testing; (5) fostering indigenous development through industry/start‑up collaboration; (6) building dedicated training and governance frameworks, both national and international.

The consensus is high – the speaker’s positions are repeatedly reinforced by the Prime Minister’s guard‑rail call and UN Secretary‑General’s meaningful‑human‑control agenda. This broad alignment suggests that policy formulation on military AI in India is likely to proceed within a well‑defined legal‑ethical framework, facilitating coordinated national‑level implementation and international cooperation.

Differences
Different Viewpoints
Unexpected Differences
Overall Assessment

The transcript contains only a single speaker (Speaker 1). All arguments presented are from the same perspective, and no contrasting viewpoints or counter‑arguments from other participants are recorded. Consequently, there are no identifiable points of disagreement, partial agreement, or unexpected disagreement within the provided material.

None – the discussion reflects a unified stance by Speaker 1 on AI in the military, its benefits, risks, and governance. The absence of dissent means there are no implications for negotiation or policy compromise within this excerpt.

Takeaways
Key takeaways
AI has transformed military operations from paper‑map, manual processes to real‑time, multi‑sensor, AI‑fused digital battle spaces, acting as a force multiplier.
Human judgment and moral/legal accountability must remain with commanders; AI can recommend but cannot replace decision‑making, especially in life‑critical contexts.
Responsible AI deployment requires: (a) institutionalized human control codified in law, (b) treating AI‑enabled systems as weapons that must be rigorously tested in contested conditions, (c) transparency of data and models (turning the “black box” into a “glass box”), and (d) clear limits on which decisions may never be delegated to AI.
India is pursuing an indigenous, self‑reliant AI ecosystem (e.g., ACOM AI, Sama Drishti, Shakti, Akash Teer) and is actively seeking collaboration with industry and startups.
Training and capacity‑building for commanders and staff on algorithms, AI‑enabled tools, and rapid decision cycles are being implemented.
National AI governance guidelines and emerging international discussions (UN “meaningful human control”, potential new legal conventions) are shaping the policy environment for military AI.
Resolutions and action items
The Indian Armed Forces will equip the military with AI‑enabled, data‑centric capabilities as part of the declared “year of networking and data centricity”.
Formal commitment to collaborate with startups, industry partners, and academic innovators to further develop indigenous AI applications.
Institutionalize human‑in‑the‑loop control in law and doctrine, ensuring accountability remains with humans.
Develop and execute training programmes for command staff on AI algorithms, decision support, and rapid OODA cycles.
Implement testing regimes for AI systems under contested battlefield conditions to verify reliability before field deployment.
Advance India’s AI governance framework to address generative AI risks and promote transparency (glass‑box models).
Engage in international forums (UN, etc.) to advocate for legal conventions governing autonomous weapons and meaningful human control.
Unresolved issues
Specific criteria or a definitive list of decisions that must never be delegated to AI have not been detailed.
Concrete standards and metrics for transforming AI “black boxes” into “glass boxes” (e.g., data provenance, model explainability) remain undefined.
The timeline and process for establishing new international legal conventions on autonomous weapons are still uncertain.
How to harmonize rapid AI‑driven decision cycles with existing command‑and‑control procedures and rules of engagement needs further clarification.
Mechanisms for verifying AI performance in chaotic, sensor‑degraded environments have not been fully specified.
Suggested compromises
Allow AI to provide high‑confidence recommendations while requiring a mandatory human pause or verification step before lethal action.
Treat AI‑enabled systems as weapons subject to rigorous testing, yet permit limited autonomous functions under strict human oversight.
Encourage open collaboration with private innovators while imposing responsible‑development safeguards and transparency requirements.
Thought Provoking Comments
He contrasts his first war game 35 years ago, which relied on paper maps, slow information flow, and deliberate decision‑making, with today’s operations rooms dominated by massive digital displays, real‑time sensor streams, and AI‑fused intelligence that compresses decision windows to seconds.
This comparison vividly illustrates how technology has transformed the tempo of warfare, shifting the core challenge from gathering information to making ultra‑rapid decisions, thereby setting the stage for the ethical and operational dilemmas that follow.
It serves as a turning point that moves the speech from a historical anecdote to a discussion of present‑day pressures, prompting the audience to reconsider the implications of speed and AI on command authority.
Speaker: Speaker 1
In the operational scenario, he recounts how the commander, faced with a high‑confidence AI recommendation to strike, pauses and asks, “What does the machine not know?”, discovering a civilian evacuation not yet reflected in the data.
The question encapsulates the core tension between algorithmic confidence and human situational awareness, highlighting that AI’s blind spots can have life‑or‑death consequences.
This anecdote pivots the conversation from technology’s capabilities to its limitations, reinforcing the need for human judgment and setting up the subsequent argument for mandatory guardrails.
Speaker: Speaker 1
“AI can inform, accelerate and recommend decisions, but only humans can exercise judgment and bear responsibility for them.”
It crystallises the ethical premise of the entire address: responsibility cannot be delegated to machines, no matter how accurate they appear.
This statement deepens the analysis by framing AI as a tool rather than an autonomous decision‑maker, influencing the audience to view subsequent policy recommendations through a lens of human accountability.
Speaker: Speaker 1
First of four governance points: “Decisions that must not be delegated to AI must always remain with humans. Human control has to be institutionalized in law, with moral accountability.”
It moves the discussion from anecdotal illustration to concrete policy, challenging any assumption that technological progress alone can solve ethical concerns.
Introduces a new topic—legal institutionalisation of human‑in‑the‑loop—prompting listeners to think about legislative frameworks rather than just technical safeguards.
Speaker: Speaker 1
Second point: “AI‑enabled systems are designed to cause harm. Therefore they must be treated as a weapon and not as software. They must be evaluated and tested in contested field conditions.”
Re‑characterising AI systems as weapons reframes the risk assessment paradigm, emphasizing that performance in controlled labs is insufficient for battlefield deployment.
Shifts the tone from abstract safety to concrete operational testing, urging defense R&D and procurement to adopt rigorous, realistic validation processes.
Speaker: Speaker 1
Third point: “The black box of data must become a glass box – commanders must know what data is used and how it has been trained.”
Calls for transparency in AI models, directly confronting the opacity problem that hampers trust and accountability.
Introduces the concept of explainability as a non‑negotiable requirement, steering the conversation toward technical standards for auditability.
Speaker: Speaker 1
Fourth point: “Commanders and staff need to be trained about this fast‑evolving battlefield and be able to integrate algorithms, command systems, and know how to go forward.”
Highlights the human capacity gap, suggesting that technology adoption without corresponding skill development is futile.
Expands the discussion to education and doctrine, indicating that AI integration is as much a cultural shift as a technological one.
Speaker: Speaker 1
He links military AI concerns to national policy: “These concerns about AI safety and governance are not confined to the military domain alone… the launch of the India AI Governance Guidelines… defines AI systems as generative and therefore having unintended consequences.”
Broadens the scope from defense to civilian governance, showing that the same ethical principles apply across sectors and that India is already shaping a regulatory framework.
Creates a turning point that moves the audience from a purely defense‑focused mindset to a holistic view of AI governance, encouraging cross‑sector collaboration.
Speaker: Speaker 1
Historical analogy: “The rules governing the use of NBC weapons, the Geneva Convention, the Convention on Landmines… have stood the test of time. In a similar manner, a set of governance frameworks and legal provisions need to be evolved about use of AI‑based systems and autonomous weapons.”
Draws a parallel between established international humanitarian law and emerging AI weapon norms, challenging the belief that AI is a wholly new ethical frontier.
Strengthens the argument for immediate international dialogue, positioning AI governance as a continuation of existing legal traditions rather than an unprecedented challenge.
Speaker: Speaker 1
Closing claim: “India, as a major military power, a growing AI hub and a civilization deeply rooted in ethical restraint… has both the capacity and the credibility to lead this conversation.”
Positions India not just as a consumer of AI technology but as a moral leader, inviting other nations to look to India for guidance on responsible AI in warfare.
Ends the speech on a strategic note, shaping the overall narrative that India’s experience and values can influence global AI policy, thereby reinforcing the earlier calls for governance and collaboration.
Speaker: Speaker 1
Overall Assessment

The keynote’s most impactful moments arise from a series of deliberate pivots: from a nostalgic recount of analog war‑gaming to a vivid illustration of AI‑driven decision pressure; from a concrete battlefield vignette that exposes AI’s blind spots to a principled declaration that only humans can bear moral responsibility; and finally from technical safeguards to broader legal and geopolitical frameworks. Each of these comments introduced a fresh layer of analysis—historical, operational, ethical, technical, and strategic—forcing the audience to continually re‑evaluate the role of AI in warfare. Collectively, they transformed a simple status‑update into a compelling call for transparent, human‑centric, and internationally coordinated AI governance, positioning India as both a practitioner and a potential global standard‑setter.

Follow-up Questions
What does the machine not know?
Highlights the need for human judgment to identify missing or contextual information that AI may overlook, crucial for preventing civilian casualties.
Speaker: Senior commander
Which decisions must never be delegated to AI and should always remain under human control?
Defines the boundaries of AI use in military operations to ensure accountability and moral responsibility.
Speaker: Speaker 1
How can AI‑enabled weapon systems be evaluated and tested under contested battlefield conditions to ensure reliability?
Ensures that systems perform robustly in real‑world chaotic environments rather than only in controlled labs, reducing the risk of failures and the liability they would incur.
Speaker: Speaker 1
How can transparency be built into AI systems so that the ‘black box’ becomes a ‘glass box’, revealing data sources and training methods?
Promotes trust, sovereignty, and accountability by making the data and algorithms understandable to commanders.
Speaker: Speaker 1
What training programs and curricula are needed to equip commanders and staff with the skills to integrate and oversee AI algorithms in operations?
Prepares military personnel to effectively use, interpret, and supervise AI‑driven decision support tools.
Speaker: Speaker 1
What governance frameworks and legal provisions are required for the use of AI‑based autonomous weapons, both nationally and internationally?
Addresses ethical, legal, and strategic stability concerns by establishing clear rules and accountability mechanisms.
Speaker: Speaker 1
How can meaningful human control be defined, measured, and enforced in AI‑enabled military systems?
Ensures that humans retain decisive authority, preventing unintended autonomous actions that could destabilize conflicts.
Speaker: Speaker 1
What guardrails and safety mechanisms should be built into AI‑enabled models to prevent unintended consequences?
Mitigates risks associated with generative AI and other advanced models, aligning with national AI safety priorities.
Speaker: Speaker 1
How can the Indian Armed Forces effectively collaborate with startups and innovators while ensuring responsible AI development?
Leverages private‑sector innovation while maintaining security, ethical standards, and control over critical technologies.
Speaker: Speaker 1
What are the specific performance metrics and validation protocols for AI applications such as ACOM AI as a Service, Sama Drishti, Shakti, and Akash Teer in operational settings?
Provides measurable criteria to assess effectiveness and reliability of indigenous AI tools before deployment.
Speaker: Speaker 1

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.