How AI Drives Innovation and Economic Growth

20 Feb 2026 15:00h - 16:00h


Session at a glance: summary, keypoints, and speakers overview

Summary

The panel examined how artificial intelligence can both accelerate development and deepen inequalities, focusing on emerging economies [6-7]. Johannes highlighted AI’s capacity to boost productivity in sectors such as agriculture, health, and finance, noting that 15-16% of South Asian jobs show strong AI complementarity [12-14][15-19]. He warned that automation may eliminate entry-level, knowledge-based positions and that many low-income countries lack basic infrastructure like reliable electricity and internet [22-24][26-30]. To address these gaps, the World Bank promotes “small AI”: affordable, locally relevant applications that function with limited connectivity and skills [34-36].


India was presented as a leading example, with its digital identity system, AI-enabled farmer tools, and AI-generated weather forecasts that reached 38 million farmers and improved planting decisions [39-41][133-154]. The Bank’s role is advisory, helping governments create sandbox environments and ensuring data reliability, while private firms develop the actual apps [49-53]. Ufuk distinguished a high-barrier foundational layer (compute, data, talent) that tends toward concentration from a low-barrier application layer that fuels creative destruction [86-94][95-98]; without a competitive foundational layer, the benefits of AI applications may be limited, especially in developing economies [97-100].


Anu emphasized the difficulty of regulating AI, citing the EU’s rights-based AI Act as a model but noting that India can adapt such frameworks to its own priorities [168-176][177]. Michael stressed that AI for public goods, such as AI-driven weather forecasts and digital IDs, requires government and multilateral investment because the private sector lacks incentives [129-132][145-152]. He proposed evidence-based innovation funds and a four-stage evaluation framework (model performance, user impact, scalability, continuous improvement) to guide effective deployment [266-274][284-292].


Panelists agreed that hype must be tempered; Iqbal warned that trust gaps and institutional inertia can prevent promising tools from scaling, as seen in a GST fraud-detection pilot that was halted [309-318]. Both Ufuk and Iqbal highlighted rising market concentration and the migration of talent from academia to large incumbents as a systemic risk to inclusive innovation [325-332][342-348]. The discussion concluded that AI offers transformative potential in health, education, and agriculture, but realizing these gains will depend on proactive policy, robust regulation, and safeguards against job loss and concentration [383-386][394-398]. Overall, the panel underscored that coordinated public-private effort and careful governance are essential to ensure AI narrows rather than widens development gaps [57][414-416].


Keypoints

Major discussion points


AI as a development catalyst for emerging markets – The speakers highlighted AI’s capacity to “fundamentally reshape… economies and societies” and to “leapfrog longstanding development challenges” by complementing 15-16% of jobs in South Asia and helping farmers, nurses, and financial institutions [6-12][15-20][34-38].


Infrastructure, skills, and job-displacement risks – While AI offers gains, the panel warned that many developing countries lack “reliable electricity,” “internet backbone,” and basic “literacy and numeracy” to use it, and that “entry-level… knowledge-based” jobs are already being reduced [21-30].


Coordinated policy and multilateral support – The World Bank’s focus on “small AI,” advisory work, and sandbox environments, together with government-backed digital ID and AI-driven weather forecasts, were presented as concrete policy levers; Michael Kremer added that “innovation funds” and evidence-based financing can bridge gaps where the private sector will not [49-53][124-132][266-274].


Governance, regulation, and AI sovereignty – Anu Bradford stressed the need for the Global South to craft its own AI rules, noting the EU’s “rights-driven” approach and the broader geopolitical contest between the US, China, and other powers that shapes who sets the rules [165-176][357-363].


Market concentration and labor-market implications – Ufuk Akcigit warned that the “foundational layer” of AI is “compute-heavy… talent-heavy,” fostering concentration, while Iqbal Dhaliwal presented evidence of rising market concentration, incumbent dominance in innovation, and the migration of talent from academia to industry, all of which could exacerbate inequality [75-100][322-330][342-350].


Overall purpose / goal of the discussion


The panel was convened to examine whether AI will narrow or widen the development gap between advanced and emerging economies, to share concrete use-case experiences (e.g., agriculture, health, education), and to identify the policy, regulatory, and institutional actions needed to harness AI’s benefits while mitigating its risks for inclusive growth.


Overall tone and its evolution


– The conversation opened with an optimistic, forward-looking tone, emphasizing AI’s transformative promise.


– It then shifted to a cautious, problem-focused tone, acknowledging infrastructure deficits, job losses, and governance challenges.


– Mid-discussion the tone became pragmatic and solution-oriented, detailing concrete policy tools, public-private collaborations, and multilateral initiatives.


– Towards the end, the tone grew more critical and reflective, stressing concentration risks, labor market threats, and the need for careful regulation.


– The final rapid-fire segment blended hopeful optimism about sectoral gains (health, education) with warnings about concentration and governance failures, ending on a balanced but vigilant note.


Speakers

Jeanette Rodrigues


– Role/Title: Moderator / Host of the panel discussion


– Areas of Expertise: AI policy, development economics, panel facilitation


Johannes Zutt


– Role/Title: World Bank representative (referred to as “John” in the discussion), Regional Vice President of the World Bank Group


– Areas of Expertise: AI applications in development, agriculture, health, and finance; emerging market impacts of AI


Anu Bradford


– Role/Title: Policy researcher/analyst focusing on AI regulation and governance (affiliated with research institutions on AI policy)


– Areas of Expertise: AI regulatory frameworks, AI sovereignty, comparative analysis of EU, US, and Indian AI policies


Ufuk Akcigit


– Role/Title: Macroeconomist (academic researcher)


– Areas of Expertise: Creative destruction, AI’s impact on economic growth, foundational vs. application layers of AI, concentration in AI markets


Iqbal Dhaliwal


– Role/Title: Global Director, J‑PAL at MIT


– Areas of Expertise: Impact evaluation, education technology, AI interventions in public services, evidence‑based policy


Michael Kremer


– Role/Title: Economist, Nobel laureate, senior researcher in development economics (affiliated with the University of Chicago and the World Bank)


– Areas of Expertise: Development economics, AI for public goods, AI in agriculture, health, education, and policy design


Additional speakers:


– None identified beyond the listed speakers.


Full session report: comprehensive analysis and detailed insights

The panel opened with Jeanette Rodrigues introducing the session and handing the floor to Johannes Zutt, who described artificial intelligence (AI) as a technology that is “fundamentally reshaping our world” and driving a “structural transformation with profound implications for economies and societies”. He called AI a “game-changer” for emerging markets, offering a chance to “leapfrog longstanding development challenges”. Zutt cited recent World Bank work in South Asia showing that roughly 15-16% of jobs have strong complementarity with AI, meaning AI can boost workers’ skills and productivity. He illustrated this with sector-specific examples: AI helps farmers detect pests and diseases, assists nurses in diagnosing unfamiliar ailments, and enables financial institutions to better assess borrowers’ creditworthiness.


Ufuk Akcigit then presented a structural lens, distinguishing a foundational layer (compute-heavy, data-heavy, talent-heavy) from an application layer where entry barriers are low. He warned that the foundational layer is highly concentration-prone and that its dynamics will spill over to the application layer, potentially limiting AI benefits for developing economies. Akcigit called for early-indicator monitoring to anticipate how creative destruction will unfold in both advanced and emerging markets.


Michael Kremer followed with development-oriented use cases. He highlighted India’s digital identity programme and digital-payments platform as a solid foundation for AI deployment. He described AI-enhanced weather forecasts that reached 38 million Indian farmers, improving planting decisions and demonstrating measurable uptake. Kremer added two further concrete examples: (i) automated traffic-camera systems that improve road safety, and (ii) the “HAB” AI-driven driver-license testing programme (Microsoft Research India) that reduced unsafe-driver ratings by 20-30%. He warned that existing public-sector procurement systems can create lock-in risk, limiting competition and stifling innovation.


Anu Bradford then turned to governance and sovereignty. She argued that the Global South must pursue its own AI sovereignty, crafting rights-based regulatory frameworks inspired by the EU’s AI Act to protect fundamental rights and broaden the distribution of AI benefits. Bradford listed four structural reasons the EU lags behind the US: (a) the absence of a digital single market, (b) a weak capital-markets union, (c) a risk-averse legal and cultural environment, and (d) challenges in attracting talent. She cautioned, however, that full sovereignty is constrained by the global AI supply chain: high-end semiconductors are designed in the United States, manufactured in Taiwan, equipped by Dutch firms, and rely on raw materials from China, so any techno-nationalist approach must balance interdependence.


Iqbal Dhaliwal presented a “small-AI” school example and a GST fraud-detection pilot. He explained that an AI model raised the detection rate of bogus firms from 38% to 55%, but the government refused to scale it because the model removed human discretionary power, highlighting the “power” and “institutional alignment” dimensions of AI deployment. Dhaliwal also shared upstream evidence of market concentration: (i) rising US market concentration since 1980, (ii) an increasing share of innovative resources residing in firms with more than 1,000 employees, and (iii) soaring earnings of the top 1% of AI scientists in academia and industry, with break-points in 2012 and 2017.


Returning to the discussion, Michael Kremer advocated evidence-based innovation funds such as Development Innovation Ventures, which provide tiered financing: small grants for pilots, larger grants for rigorous testing, and further funding for scaling successful solutions. Kremer outlined a four-stage evaluation framework (model performance, user impact, scalability, and continuous improvement) to ensure AI interventions deliver real-world benefits and can be iteratively refined. He emphasized the need for public-sector investment in AI-driven public goods that the private market will not fund, citing again the weather-forecast example that reached millions of farmers.


In the subsequent discussion, Iqbal warned that unchecked market concentration could become a regrettable legacy if not addressed, and cautioned that over-reliance on generative models might make humanity “dumber” by outsourcing critical thinking. Ufuk Akcigit highlighted labour-market shocks, especially the rapid erosion of entry-level coding jobs that underpin India’s tech hubs, and called for competition-friendly regulation, access to finance, and entrepreneurship-supporting policies. Johannes Zutt concluded with a rapid-fire remark that “we may have the tools to target poverty reduction on individuals”, underscoring the need for coordinated policy.


Points of consensus emerged across the panel: (i) AI can be a powerful catalyst for development in agriculture, health, finance, education, and other public-good services; (ii) “small AI”, meaning affordable, locally relevant applications that can operate despite limited connectivity, data, and device capability, is essential for low-resource settings; (iii) coordinated public-private collaboration and robust governance are required to prevent misuse when targeting individuals for poverty reduction.


Key disagreements centered on three axes: (1) the primary mechanism for harnessing AI: Zutt championed technical deployment of small AI and sandbox support, Akcigit argued that broader business-environment reforms are needed, and Kremer advocated staged, evidence-based funding; (2) AI sovereignty versus interdependence: Bradford stressed the limits imposed by global supply chains, while Zutt reported that India’s governments have an “AI for all” objective; (3) regulatory approach: Zutt advocated pragmatic standards and sandbox environments, whereas Bradford called for a comprehensive, rights-based framework modeled on the EU AI Act.


Overall, the panel concluded that AI offers a historic opportunity to accelerate development, but real-world impact hinges on proactive policy, robust rights-based regulation, evidence-based scaling mechanisms, and measures to mitigate concentration and labour displacement. A multi-track, balanced approach will be needed to turn AI’s promise into inclusive, sustainable development for emerging economies: affordable small-AI pilots, advisory standards and sandboxes, staged funding, rights-based regulatory frameworks, and policies that preserve competition in the foundational AI layer while guarding against geopolitical supply-chain vulnerabilities.


Session transcript: complete transcript of the session
Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them.

It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the number – the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that that layer, sort of at the bottom of the professional classes inside the bank group, there’s just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies, developing markets, harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind in their local language tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states – Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana – on these different aspects, working with governments on the foundational elements: interoperability, making sure that accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves; that’s not really our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation in AI sandboxes, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, AI, innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with has attended the summits from the first AI summit to today – this is, I think, the fourth AI summit being held; the first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh, my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how we can balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk: how do you think about AI? And especially, where do you see areas of creative destruction to foster the innovation that we need?

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT [general-purpose technology]. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? How does the future look in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers: one, the foundational layer, and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application layer. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.

When we look at the foundation layer, the entry barrier is really, really high: it’s very compute-heavy, it’s very data-heavy, it’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because this is the upstream to the application layer, which is downstream to the foundation layer. So that’s why whatever will happen at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why, when we looked at the firm’s life cycle, for instance, was it not up or out?

Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging economies or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting jumping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the question of which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI for public goods, for example, that will not attract commercial investment commensurate with their needs.

And that’s an area where I think governments and multilateral development banks can play an important role. And I think some of this very much echoes what you were saying about small models, but also I’ll mention the link between the two. So an obvious example where I think India has been a leader for the world is in the development of digital identity. You know, this will enable, as Ufuk was saying – this enables a lot of work by individual entrepreneurs, a lot of other applications. So that’s a huge success, and I think multilateral development banks together with India can help bring that to many other countries. Let me take another example, one that’s not as well-known, but picks up on your comment about farmers.

So one thing that’s critical for farmers: they have to make a bunch of decisions that are weather-dependent. You know, when do you plant, for example? What varieties do you use? A drought-resistant variety, another variety. Most farmers don’t have access to state-of-the-art weather forecasts around the world. I’m not talking about one country. In low- and middle-income countries, they don’t have access to that. Now, there’s a huge advance. We tend to think of large language models, but obviously AI is pushing science forward, and that includes in weather forecasting. There’s really a revolution driven by AI. But weather forecasts are non-rival. They’re largely non-excludable. They’re the classic definition of a public good.

So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Again here, India is a leader. India in particular: the Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence from this particular case suggests that farmers are responding. I’ll say a little bit about last year’s monsoon: it came early in Kerala and southern India, but then there was an unexpected delay in the progression. The AI forecasts got that right; that was the only source of information that reached farmers with that. In the areas above that line, we did a survey, and farmers are responding: they transplant more, they use hybrid seeds more.

Evidence from around the world is consistent with this. Farmers respond to these AI weather forecasts. So I think that’s one example, but many others, and happy to discuss them in education and traffic enforcement and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision. It’s something India could do unilaterally. When it comes to the large language models, that’s not the reality. The large language models are concentrated in the US, and in China now with DeepSeek. Anu, in a world where you largely have the rules being set by the two large powers, the US and China, arguably – there’s of course the EU as well, and you’ve done a lot of work on that – who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

So I think the Global South has the same kind of incentive for its own AI sovereignty, including regulatory sovereignty: to design the rules that work better for their economies, their societies, and what the public interest in these jurisdictions calls for. But regulating AI is really difficult, even for very established bureaucracies. You need to make sure the framework is innovation-friendly, and yet at the same time you need to manage the risks to individuals and societies carefully. Even very established regulators like the European Union found coming up with the AI Act one of their most challenging tasks. So there’s probably something to be learned from the jurisdictions that have gone ahead and done the kind of thinking that resulted in the regulatory frameworks we now have in place.

So if you think about the choices that India has when it looks around, one of them is to ask: how does the EU go about this? The EU follows what I would call a rights-driven approach to regulation. What really characterizes the AI Act, the first horizontal, binding, economy-wide regulation that the Europeans enacted, is that it seeks to protect the fundamental rights of individuals and the democratic structures of society, and it also seeks to ensure a greater distribution of the benefits of the AI revolution. The European approach is very conscious that it wants to share those benefits, so they don’t all go to the large developers of these models, but individual users, society at large, and smaller companies benefit from AI as well.

So there’s something I think the Europeans can teach in terms of that regulatory approach, in addition to some details of how that regulation was in the end constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into its economy as such. I think India is in a very good position to take the lessons that serve its needs, yet make the kind of local modifications and variations that reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow-up question to you. As India makes its own rules, where does the trade-off lie between regulation and innovation?

Anu Bradford

So this is very interesting, because I am based in the U.S. but originally from Europe, and these two jurisdictions are often described as: the U.S. develops technologies and the Europeans regulate those technologies. In many ways the question is: does India want the innovation path or the regulation path? And I think many would vote for innovation. But I really would like to debunk this myth, because to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not the GDPR, the General Data Protection Regulation. It’s not the AI Act. The perceived innovation gap between the United States and Europe comes down, I think, to four things.

So first, there is no digital single market in Europe. It’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union: 5% of global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking. I wouldn’t encourage you to replicate the European ones, because it’s very hard to innovate on the frontier of technology; sometimes you fail, and you need to be given a second chance.

And the fourth, I think, foundational pillar of the robust U.S. tech ecosystem is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists and engineers who think the U.S. is the place where they can start their companies, scale their companies, and fund their companies, and U.S. universities can attract them. So before assuming that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation, we need to understand better what actually drives technological innovation.

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You’re working in an area of the world, South Asia, where, when it comes to regulation and enforcement, at the risk of sounding like a provocateur, it’s a little bit the Wild West. And therefore we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and targeted use cases?

Iqbal Dhaliwal

vulnerable public schools, all the way from 11th place to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying, let’s unpack this: what was this thing doing? The first thing that comes up is that a lot of people ask, does this mean I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that. And now the teacher can sit with you in the freed-up time and say, how did you set up the structure of this essay?

Did you think about this analytically or not? And that’s the first insight that comes from evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few interventions do the opposite: free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner. The second thing that was really important here is that this was demand-driven. There was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.

So I think that’s a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal more seriously than you normally would.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where it’s not going to move quickly, and it’s going to be very important for governments, for multilateral development banks, and for philanthropy to move. I think there are a number of approaches to this. One is encouraging innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. So I’ll give you one example of something that I’m involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding: initially very small grants to pilot new ideas, then somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, funds to help transition them to scale.

Why is that important? Well, if we’re thinking about public services, and there are other sectors where this is needed, there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but if they have to sell them to the government, they’re facing a monopsonistic buyer; they’re probably not going to get rich doing that. So some support to generate more entrants in that market, I think, is very important.

It’ll also mean that prices will go down and quality will go up when the government does that. Let me give another example of the potential here; we tend to focus on certain examples time after time, so let me give one that I doubt many people are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. Traffic is a real problem, interfering with the urbanization that may drive growth; there are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Automated traffic cameras have the opportunity to improve traffic outcomes but also to improve people’s perception of fairness in government, and India is moving on this. Let me mention another thing within traffic safety. Microsoft Research India developed a program called HAMS for driver’s licenses, which uses AI to automatically test drivers until they actually pass their exams. It’s been introduced, I believe, at 56 sites across India, and hundreds of thousands of people have taken tests this way. We took a leaf from Iqbal’s book and followed up: we got information from Ola on ratings, and the number of drivers who were rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that’s something that was developed not by Microsoft’s main business but by Microsoft Research. We can create support for more ideas like that to be developed and rigorously tested, which can benefit India and the whole world.

Jeanette Rodrigues

We are running out of time. This is probably one place in India where time is really respected, and we have to end on time.

So I had a list of wonderful questions, but if I could now move to a space where we are really giving shorter, quicker answers, and the deeply, deeply interesting ones about who’s winning and who’s losing. Michael, if I could start with you, actually. We’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use?

Michael Kremer

Okay. First, model evaluation. AI companies typically do that part: how good is the model output for specific tasks? Forecasting the weather, does it do a good job? Does it handle your local language well?

Second, user impact. Here I think there’s a role for initial pilots akin to a medical efficacy trial: if you put in the work of trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale; that’s more like an effectiveness trial in medicine. It’s important to think not just about the tech but also about the human systems: are the teachers actually going to use the product, and how can you get them to use it? And the fourth area is continuous improvement: you want a system that improves the underlying models. So in procurement we might want to think about requiring continuous A/B tests, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.
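The continuous A/B testing Michael proposes for procurement can be sketched as a standard two-proportion z-test comparing a control cohort with an AI-assisted cohort. The cohort sizes, pass counts, and the pass/fail outcome itself are hypothetical, purely for illustration:

```python
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference in proportions between arms A and B."""
    p_a, p_b = success_a / n_a, success_b / n_b
    # Pooled proportion under the null hypothesis of no difference.
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return p_b - p_a, z, p_value

# Hypothetical weekly cohort: did students pass the assessment?
lift, z, p = two_proportion_ztest(success_a=420, n_a=1000,   # control classrooms
                                  success_b=465, n_b=1000)   # AI-assisted classrooms
print(f"lift={lift:.3f}, z={z:.2f}, p={p:.4f}")
```

Run continuously on each deployment cohort, this is the kind of evidence a procurement contract could require vendors to publish.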

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?

Iqbal Dhaliwal

I think hype is natural, because the technology is exciting. It’s a general-purpose technology. It’s evolving so quickly. The marginal cost of deployment for the next user is very low. It’s multimodal: today you are doing it in text, tomorrow in video, the day after tomorrow in audio. Everybody who has a smartphone has it. So I can understand where the hype is coming from. But I think what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for. My job at J-PAL, sitting at the top, is not to worry about one professor’s evaluation or one researcher’s evaluation, but to ask: when I connect all these dots, what am I seeing?

And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. And when you try to unpack that, the technology worked even better than the human in the lab; some of these diagnostics have better predictive power in the lab. But in the field, not only is their efficiency lower, they lower the efficiency of the doctors, because we have not trained them enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No, you have to adapt the rest of the system as well. This example comes from India, where, with one particular state government, we tried to improve the collection of value-added taxes, called GST in India. There is a whole worry about bogus firms that are created to game the GST, or value-added tax, system. A machine learning algorithm is able to increase the probability of correctly predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused, because, think about it: you have taken away the discretion of the human to decide whether they should raid Michael’s firm or Iqbal’s firm. That is power. And if you haven’t thought through that point, what is the point of the technology?
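The shift in discretion Iqbal describes can be made concrete with a minimal sketch. The firm names, scores, and threshold below are all hypothetical; the point is that once the model’s score is thresholded, the audit decision is made mechanically rather than by an official:

```python
# Hypothetical bogus-likelihood scores from an ML model (higher = more suspicious).
scores = {"firm_A": 0.92, "firm_B": 0.71, "firm_C": 0.34, "firm_D": 0.08}

def flag_for_audit(scores, threshold=0.5):
    """Return firms whose score exceeds the threshold, sorted by name.
    The threshold, not a human official, now decides who gets audited."""
    return sorted(firm for firm, s in scores.items() if s > threshold)

print(flag_for_audit(scores))
```

Raising or lowering `threshold` changes who is raided, which is exactly the discretion, and the power, that the deployment debate is really about.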

Jeanette Rodrigues

I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out bad actors, talking about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm. In the earlier model that I highlighted, I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First of all, when we look at dynamism or market concentration in the U.S., market concentration has been increasing since 1980, but in an accelerating way after 2000. That’s the first set of evidence. The second set comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.

And when I say incumbents, I mean firms with more than 1,000 employees. Around 2000, 50% of these inventors worked for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence, from a study we are going to release next week: we looked at how AI is impacting universities, looking at AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; now it’s up to $2 million. And there have been two breakpoints: one in 2012, the other in 2017.

Image processing in 2012, of course, and then the foundational-model revolution in 2017. The more worrying part, which brings me back to the foundational-model side of things, is that this created a massive out-migration from academia to industry.

Ufuk Akcigit

And after 2017 especially, when compute and infrastructure became so important and we saw the rise of AI, the target or destination is large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part also is that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion of this, and we need it before it gets too late. Because once you fasten the first button wrong, the rest will follow wrong as well. That’s why I think we have to have this frank conversation early in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if the finance minister of a developing country, let’s say India, comes to you and asks, Anu, how should I think, what would you tell her?

Anu Bradford

So today, if you think about how much political power, but also geopolitical power, is shaping our conversations around AI, each country is now pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a universal goal. But I would remind you that in today’s world nobody, not even players like the United States and China, will be completely sovereign in the AI space. Let me take one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI; we want to have these powerful foundation models.

That means you need a lot of compute. You can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that is dependent on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies that ultimately make us all poorer, and that is what I would tell the finance minister of India to keep in mind when approaching the other middle powers and the great powers.

Jeanette Rodrigues

Easier said than done. Our final section is, of course, the rapid-fire round; we all love this in this room. In one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. We’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now?

Ufuk Akcigit

I guess you already know my second answer. The future of market concentration is something that we should be concerned about, and we might regret not having discussed it sufficiently in 10 years. On what will change in a positive direction: clearly health care and education, I think. It’s a no-brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

So first of all, it’s so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited, especially about what happens in the education space, but also in the health space. In terms of the risks, I think one thing that we are not paying attention to, and what I would even call a systemic risk, is this: many worry about AI getting almost too smart, but I am more worried about us, as humanity, getting dumber. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, when I think about how I will teach my students to use generative AI to enhance, but not substitute, their capabilities: we would make a tremendous mistake if we forwent that hard work, that beautiful moment of thinking through hard problems, of creating and investing in our own capabilities. All of that just cannot be outsourced, because otherwise we won’t even know what kinds of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education. I think we’ll see big improvements there, but the risk is that the public sector won’t adopt these tools, and therefore the poor won’t have access to them. That’s because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on certain problems other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and then we just don’t wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer’s gone off, but I’ll take a risk and quickly run through the others.

Iqbal Dhaliwal

Yes. I think I am much more optimistic about the government actually adopting these things. When you call 100, your call is going to get answered very quickly. The PCR van is going to be at your house much faster. Hospitals are going to be able to link your health records. So I think government-sector productivity is going to improve by leaps and bounds. The biggest risk, I think, is definitely the labor market. If there was a dial, I would slow down adoption and give the labor market time to catch up; that’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.

It’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to be running out of jobs very, very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we are piling on and making it harder and harder to hire labor when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor-market regulations, for hiring labor. And that, for me, is the biggest risk, actually.

Johannes Zutt

So I think that, for the first time in human history, we may actually have the tools available to enable us to target poverty-reduction and poverty-elimination initiatives on individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or that we won’t be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

Related Resources: Knowledge base sources related to the discussion topics (39)
Factual Notes: Claims verified against the Diplo knowledge base (4)
Confirmed (high)

“AI can be a game‑changer for emerging markets, offering a unique opportunity to leap‑frog longstanding development challenges.”

The knowledge base explicitly states that AI is a game changer for emerging markets and can help leapfrog longstanding development challenges [S6].

Confirmed (high)

“The structural lens distinguishes a foundational layer (compute‑heavy, data‑heavy, talent‑heavy) from an application layer where entry barriers are low.”

A similar description of a foundational layer versus an application layer is provided in the source, confirming the terminology and distinction [S30].

Additional Context (medium)

“AI helps farmers detect pests and diseases.”

The source on AI for Good in food and agriculture outlines precision-agriculture applications, including pest and disease detection, offering concrete examples that add nuance to the claim [S33] and a related agrotech talk also mentions AI-driven farming tools [S105].

Additional Context (low)

“Roughly 15‑16 % of jobs have strong complementarity with AI, meaning AI can boost workers’ skills and productivity.”

While the exact percentage is not given, the knowledge base reports productivity gains from AI in specific sectors such as call centers and software development, providing broader context on AI’s complementarity with work [S15].

External Sources (106)
S1
How AI Drives Innovation and Economic Growth — -Jeanette Rodrigues: Moderator/Host of the panel discussion This comprehensive discussion at the Bharat Mandapam, moder…
S2
Extreme poverty and human rights * — 16 Jeanette Rodrigues, ‘India ID program wins World Bank praise despite ‘Big Brother’ fears’, Bloomberg, 16 March 201…
S3
AI Meets Agriculture Building Food Security and Climate Resilien — -Johannes Zutt- Regional Vice President, World Bank
S4
How AI Drives Innovation and Economic Growth — -Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
S5
Keynotes — Michael O’Flaherty: EuroDIG, dear friends. Last Saturday, we watched as the newly elected Pope explained why he had ch…
S6
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford
S7
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Ufuk Akcigit- Anu Bradford – Ufuk Akcigit- Johannes Zutt
S8
How AI Drives Innovation and Economic Growth — – Ufuk Akcigit- Johannes Zutt
S9
New Development Actors for the 21st Century / DAVOS 2025 — – Iqbal Dhaliwal – Global Director of J-PAL at MIT
S10
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — – Iqbal Dhaliwal- Ronnie Chatterji – Iqbal Dhaliwal- Sanjiv Bikhchandani
S11
DIGITAL DIVIDENDS — – Cantijoch, Marta, Silvia Galandini, and Rachel Gobson. 2014. ‘Civic Websites and Community Engagement: A Mixed Metho…
S12
Rights and Permissions — – Aboud, Frances E., and Kamal Hossain. 2011. ‘The Impact of Preprimary School on Primary School Achievement in Banglade…
S13
How AI Drives Innovation and Economic Growth — – Johannes Zutt- Michael Kremer – Michael Kremer- Iqbal Dhaliwal
S14
A Digital Future for All (afternoon sessions) — AI is enabling economic progress and entrepreneurship, especially in emerging markets. It can boost productivity across …
S15
Comprehensive Report: Preventing Jobless Growth in the Age of AI — Higher productivity potential exists in agriculture, manufacturing, healthcare, and construction sectors
S16
Turbocharging Digital Transformation in Emerging Markets: Unleashing the Power of AI in Agritech (ITC) — By addressing these challenges and improving data management and accessibility within the agricultural sector, the overa…
S17
Artificial Intelligence & Emerging Tech — The analysis explores multiple aspects of the relationship between artificial intelligence (AI) and developing countries…
S18
AI sandboxes pave path for responsible innovation in developing countries — At theInternet Governance Forum 2025in Lillestrøm, Norway, experts from around the worldgatheredto examine how AI sandbo…
S19
AI Safety at the Global Level Insights from Digital Ministers Of — “Things like regulatory sandboxes or like policy lab type things where you can try limited pilot approaches seem to be g…
S20
Advancing Scientific AI with Safety Ethics and Responsibility — “So for example, we are going to launch a global south network for trustworthy AI and we are going to launch a global so…
S21
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S22
Multistakeholder Partnerships for Thriving AI Ecosystems — For instance, the National Skilling Mission, the skilling mission that is undertaken by NASCOM, which is the IT industry…
S23
Building Inclusive Societies with AI — Public-private collaboration is crucial for national growth and inclusive technology adoption
S24
9821st meeting — Robust accountability frameworks and national policies aligned with human rights standards are essential to prevent pote…
S25
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Such instances underline the importance of robust regulation to prevent future abuses and protect individual rights. Fur…
S26
Building Scalable AI Through Global South Partnerships — It’s like a case management system for tuberculosis patients. We’ve integrated everything. We developed algorithms into …
S27
Digital Health at the crossroads of human rights, AI governance, and e-trade (SouthCentre) — Technological innovation has led to a significant transformation in health systems, particularly through advancements in…
S28
Can we test for trust? The verification challenge in AI — Anja Kaspersen: Massively so. So let me, I’m just gonna rewind a little bit to our title of this session if you allow me…
S29
Building Sovereign and Responsible AI Beyond Proof of Concepts — “The second is around governance failures.”[65]. “And then there’s also a failure around misalignment.”[66]. “So I put h…
S30
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-drives-innovation-and-economic-growth — And I’m seeing two patterns. One is about trust in technology, and the second part is about the reality of the policy wo…
S31
Scaling AI Beyond Pilots: A World Economic Forum Panel Discussion — Amin Nasser emphasizes that successful AI scaling requires establishing clear operational models, including processes fo…
S32
AI for agriculture Scaling Intelligence for food and climate resilience — “We will move from pilots to platforms, from fragmented data to interoperable systems, from experimentation to execution…
S33
AI for Good – food and agriculture — – Creation of predictive models for planting and harvesting decisions – Use of remote sensing and geospatial platforms …
S34
AI for Social Good Using Technology to Create Real-World Impact — Nilekani provided compelling examples of how open networks enable rapid capability integration and new economic opportun…
S35
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Ashwini Vaishnaw, Khalid Al-Falih | Economic | Development | Infrastructure — IMF Index of Preparedness evaluates physic…
S36
Trade regulations in the digital environment: Is there a gender component? (UNCTAD) — Emerging technologies and artificial intelligence can be true enablers of economic growth and social well-being. The an…
S37
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S38
What policy levers can bridge the AI divide? — **Lithuania** emphasized leveraging small country advantages through flexibility and reduced bureaucracy, proposing regu…
S39
WS #82 A Global South perspective on AI governance — An audience member points out that Global South countries often regulate companies and organizations outside their juris…
S40
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S41
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — Brunner summarizes Trump’s AI approach as: American AI is number one and must remain the leader, compete with China, the…
S42
How AI Is Transforming India’s Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S43
How AI Drives Innovation and Economic Growth — Artificial intelligence | Social and economic development | Human rights and the ethical dimensions of the information s…
S44
AI for Social Empowerment: Driving Change and Inclusion — He asks how governments and institutions can govern AI responsibly to minimise labour market disruption and ensure a smo…
S45
Keeping AI in check — Societies should not be forgetful of the fact that technology is a product of the human mind and that the most intellige…
S46
Generative AI: Steam Engine of the Fourth Industrial Revolution? — Technology is moving at an incredibly fast pace, and this rapid advancement is seen in various sectors such as AI, semic…
S47
Comprehensive Report: AI’s Impact on the Future of Work – Davos 2026 Panel Discussion — Bhan argues that AI’s impact on jobs cannot be viewed in isolation but must be considered alongside broader economic dis…
S48
Labour market stability persists despite the rise of AI — Public fears of AI rapidly displacing workers have not yet materialised in the US labour market. A new study finds that th…
S49
The UK labour market feels a sharper impact from AI use — Companies are reporting net job losses linked to AI adoption, with research showing a sharper impact than in other major…
S50
The digital economy in the age of AI: Implications for developing countries (UNCTAD) — The development of these technologies is highly concentrated in few countries and firms
S51
How AI Is Transforming India’s Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S52
Secure Finance Risk-Based AI Policy for the Banking Sector — India’s regulatory thinking reflects this balance, encouraging experimentation while reinforcing institutional responsib…
S53
How AI Drives Innovation and Economic Growth — And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to ad…
S54
Harnessing the potential of artificial intelligence in developing countries — The Economist argues that there are three main reasons for optimism about AI and development: First, the technology is imp…
S55
The mismatch between public fear of AI and its measured impact — Artificial intelligence has become one of the loudest topics in public discourse. Headlines speak of mass job displacemen…
S56
From Innovation to Impact: Bringing AI to the Public — Whilst maintaining an optimistic outlook, the discussion acknowledges important limitations and risks. Sharma emphasises…
S57
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — This brings me to the international dimension. AI is a truly global challenge whose effects transcend national borders. …
S58
Sandboxes for Data Governance: Global Responsible Innovation | IGF 2023 WS #279 — Advocates for a harmonised approach to regulation and policy-making believe that this method can yield positive outcomes…
S59
Keynotes — The main areas of disagreement center on regulatory philosophy (soft vs. comprehensive regulation) and the role of crisi…
S60
WS #35 Unlocking sandboxes for people and the planet — The level of disagreement among speakers was moderate. While there were clear differences in approaches and perspectives…
S61
AI Automation in Telecom: Ensuring Accountability and Public Trust (India AI Impact Summit 2026) — I mean, if a mobile operator arbitrarily starts turning off SIM cards because they think maybe that traffic looks a bit …
S62
The Foundation of AI Democratizing Compute Data Infrastructure — The emphasis on community participation, data sovereignty, and alternative technical architectures suggests AI developme…
S63
Building Inclusive Societies with AI — Public-private collaboration is crucial for national growth and inclusive technology adoption
S64
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S65
AI and Global Power Dynamics: A Comprehensive Analysis of Economic Transformation and Geopolitical Implications — Economic | Development | Infrastructure IMF Index of Preparedness evaluates physical infrastructure, labor skills capab…
S66
How AI Drives Innovation and Economic Growth — “So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer…
S67
Emerging Markets: Resilience, Innovation, and the Future of Global Development — Economic | Infrastructure | Development Technology enables leapfrogging development opportunities for emerging markets
S68
Harnessing Collective AI for India’s Social and Economic Development — Kushe Bahl believes that AI will fundamentally reshape jobs rather than just replacing them outright. He suggests this t…
S69
Is AI a catalyst for development? — The Economist argues that AI has the potential to revolutionise developing countries by transforming their economies and…
S70
How AI Drives Innovation and Economic Growth — Countries may not have reliable electricity, sufficient internet backbone, basic literacy and numeracy skills, or may ne…
S71
Artificial Intelligence & Emerging Tech — Jennifer Chung: Thank you, Nazar. I actually do see two more questions from the Bangladesh Remote Hub. This is good. This…
S72
AI/Gen AI for the Global Goals — Christopher Lu points out that many areas, particularly in developing countries, lack basic infrastructure such as inter…
S73
What policy levers can bridge the AI divide? — **Lithuania** emphasized leveraging small country advantages through flexibility and reduced bureaucracy, proposing regu…
S74
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Call for individuals in government and private companies to actively bridge the research-policy gap in their work
S75
WS #82 A Global South perspective on AI governance — AUDIENCE: Thank you for the wonderful thought provoking conversation. I wanted to ask, I only attended half of the ses…
S76
Who Watches the Watchers Building Trust in AI Governance — So there is no end to the story of how regulators should design the regulations. That is the main question. All countrie…
S77
Building Trusted AI at Scale – Keynote Anne Bouverot — This comment shifts the discussion from acknowledging competition to actively proposing strategic alliances. It introduc…
S78
Global South’s role in AI governance explored at IGF 2024 — The inclusion of the Global South, particularly the MENA region, in AI governance emerged as a key focus in a recent panel…
S79
How AI Is Transforming India’s Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S80
Better governance for fairer digital markets: unlocking the innovation potential and leveling the playing field (UNCTAD) — Overall, the analysis highlights the pressing need for stronger governance in the digital economy. It provides evidence …
S81
AI for Social Empowerment: Driving Change and Inclusion — She warns that AI is exacerbating inequality by increasing capital concentration while labour’s share of income shrinks.…
S82
Trusted Connections_ Ethical AI in Telecom & 6G Networks — The discussion maintained a consistently optimistic and forward-looking tone throughout. Speakers expressed confidence i…
S83
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S84
AI 2.0 The Future of Learning in India — The tone was consistently optimistic and forward-looking throughout the conversation. Speakers maintained an enthusiasti…
S85
Science AI & Innovation_ India–Japan Collaboration Showcase — The tone was consistently optimistic and forward-looking throughout the conversation. The panelists demonstrated genuine…
S86
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S87
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — The tone begins confrontational and personal as Hunter-Torricke distances himself from his tech industry past, then shif…
S88
WS #236 Ensuring Human Rights and Inclusion: An Algorithmic Strategy — The tone of the discussion was largely serious and concerned, given the gravity of the issues being discussed. However, …
S89
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S90
WS #187 Bridging Internet AI Governance From Theory to Practice — The discussion maintained a thoughtful but increasingly cautious tone throughout. It began optimistically, with speakers…
S91
Shaping the Future AI Strategies for Jobs and Economic Development — The discussion maintained an optimistic yet pragmatic tone throughout. While acknowledging significant challenges around…
S92
Swiss AI Initiatives and Policy Implementation Discussion — The discussion maintained a professional, collaborative tone throughout, with speakers presenting both opportunities and…
S93
Law, Tech, Humanity, and Trust — The discussion maintained a consistently professional, collaborative, and optimistic tone throughout. The speakers demon…
S94
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S95
Revamping Decision-Making in Digital Governance and the WSIS Framework — The discussion maintained a constructive and collaborative tone throughout, with speakers building upon each other’s poi…
S96
Rewriting Development / Davos 2025 — The tone was largely serious and analytical, with speakers offering critical assessments of current development models. …
S97
Afternoon session — The discussion began with a collaborative and appreciative tone as various stakeholders shared their visions and commitm…
S98
WS #25 Multistakeholder cooperation for online child protection — The tone of the discussion was serious and concerned, reflecting the gravity of the issues being discussed. However, it …
S99
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S100
Meeting REPORT — In summation, the meeting chaired by Stefano Belfiore exemplified focused and action-oriented governance and delivered e…
S101
WS #283 AI Agents: Ensuring Responsible Deployment — The discussion maintained a balanced, thoughtful tone throughout, combining cautious optimism with realistic concern. Pa…
S102
High-Level Track Facilitators Summary and Certificates — The discussion maintained a consistently positive and celebratory tone throughout, characterized by gratitude, accomplis…
S103
Deepfakes for good or bad? — The tone was thoughtful and pragmatic throughout, balancing concern with cautious optimism. The panelists acknowledged s…
S104
High Level Session 3: AI & the Future of Work — Jennifer Bacchus: Thank you very much indeed. Let me move on to our next question now, question two. So what guiding pri…
S105
Lightning Talk #173 Artificial Intelligence in Agrotech and Foodtech — Alina Ustinova: Hello, everyone. My name is Alina. I represent the Center for Global IT Cooperation, and today I want to…
S106
https://dig.watch/event/india-ai-impact-summit-2026/founders-adda-raw-conversations-with-indias-top-ai-pioneers — So for example, anything and everything that is required we are basically making the entire suite of the… automation l…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
J
Johannes Zutt
6 arguments · 141 words per minute · 1450 words · 612 seconds
Argument 1
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
EXPLANATION
Johannes argues that artificial intelligence offers a unique opportunity for emerging and developing economies to leapfrog longstanding development challenges. By enhancing productivity in sectors like farming, healthcare, and financial services, AI can drive inclusive growth.
EVIDENCE
He notes that AI complements about 15-16% of jobs in South Asia, enabling workers to expand skills and effectiveness (e.g., farmers identifying pests and diseases, nurses diagnosing unfamiliar ailments, and financial institutions better assessing borrower risk) and highlights the broad potential for AI to fill skill gaps in education, health, and public resource allocation [6-10][12-20].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The transformative potential of AI for productivity in agriculture, health and finance in emerging economies is highlighted in S14 and S15, while sector‑specific AI applications for farming are documented in S33 and S34.
MAJOR DISCUSSION POINT
AI as a development catalyst
AGREED WITH
Michael Kremer, Ufuk Akcigit
Argument 2
AI may displace entry‑level, routine jobs and many developing countries lack basic infrastructure (electricity, connectivity, literacy) to harness it (Johannes Zutt)
EXPLANATION
He warns that AI automation will likely eliminate certain low‑skill, document‑based jobs, especially at the bottom of professional hierarchies. Moreover, many emerging economies lack the foundational infrastructure needed to deploy AI effectively.
EVIDENCE
Johannes cites observed reductions in entry-level job advertisements within the World Bank Group and points to basic constraints such as unreliable electricity, weak internet backbones, low literacy and numeracy, and reliance on very basic devices in many developing countries [22-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Infrastructure constraints such as unreliable electricity and limited internet are described in S1 and S17, and the risk of jobless growth despite productivity gains is discussed in S15.
MAJOR DISCUSSION POINT
Risks of job displacement and infrastructure gaps
Argument 3
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
EXPLANATION
He emphasizes the need for public‑facing efforts that establish standards, interoperability, and sandbox environments, alongside private‑sector‑facing initiatives that foster application development. This dual approach is essential for trustworthy AI uptake.
EVIDENCE
He describes the World Bank’s focus on “small AI” that works under limited connectivity and data conditions, and outlines the Bank’s role in advising on data reliability, creating sandbox spaces for experimentation, and coordinating public-private efforts [33-36][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of regulatory sandboxes and collaborative policy labs for responsible AI innovation is covered in S18, S19 and S20, while multistakeholder partnership models are outlined in S22.
MAJOR DISCUSSION POINT
Institutional frameworks for AI
AGREED WITH
Michael Kremer, Iqbal Dhaliwal
DISAGREED WITH
Anu Bradford
Argument 4
The World Bank focuses on “small AI”: affordable, locally relevant solutions, and provides advisory and sandbox environments (Johannes Zutt)
EXPLANATION
Johannes explains that the Bank concentrates on practical, low‑cost AI applications that can operate with limited connectivity, data, and skills. The Bank’s comparative advantage lies in advisory work, ensuring trustworthy data and facilitating sandbox environments for local innovators.
EVIDENCE
He details work in South Asia where the Bank supports state governments (e.g., Uttar Pradesh, Maharashtra, Kerala) to build offline-capable applications, partners with private investors, and avoids direct app development, focusing instead on advisory and standards work [34-38][39-46][49-53].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Examples of low‑cost, locally‑tailored AI solutions and partnership‑driven deployments are provided in S21 and S26, and the sandbox approach is reinforced in S18 and S19.
MAJOR DISCUSSION POINT
World Bank’s small‑AI strategy
AGREED WITH
Iqbal Dhaliwal, Michael Kremer
Argument 5
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Johannes Zutt; Michael Kremer)
EXPLANATION
Both speakers stress that effective AI deployment requires close collaboration between governments, multilateral institutions, and private sector innovators. Such coordination ensures that AI solutions are both demand‑driven and scalable.
EVIDENCE
Johannes mentions working with governments and private-sector investors to develop locally relevant AI applications and create sandbox environments, while Michael describes the Development Innovation Ventures fund that bridges public-private collaboration through staged financing and evidence-based pilots [45-49][266-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public‑private collaboration frameworks for AI ecosystems are discussed in S22, S23 and the need for clear operational pathways for scaling pilots is highlighted in S31.
MAJOR DISCUSSION POINT
Public‑private coordination
Argument 6
AI enables precise, individual‑level poverty targeting, but robust governance is essential to prevent abuse
EXPLANATION
Zutt suggests that AI tools can allow governments and development agencies to direct anti‑poverty interventions at the individual level, a transformative capability that must be paired with strong governance safeguards.
EVIDENCE
He remarks that “for the first time in human history, we may actually have the tools available to enable us to target poverty reduction, poverty elimination initiatives on individuals” and follows with “I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses” [414-416].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Robust accountability and human‑rights‑aligned AI governance needed to avoid abuses are emphasized in S24 and S25, with additional insights on governance challenges in digital health (S27) and trust verification (S28).
MAJOR DISCUSSION POINT
Governance of AI‑driven poverty targeting
I
Iqbal Dhaliwal
3 arguments · 183 words per minute · 1151 words · 375 seconds
Argument 1
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
EXPLANATION
Iqbal illustrates how AI can automate routine tasks such as spelling correction, allowing teachers to focus on higher‑order learning activities. Similar time‑saving benefits apply to health frontline workers.
EVIDENCE
He describes AI taking over low-level tasks like correcting spelling and punctuation, thereby freeing teachers to engage students in analytical thinking, and notes comparable gains for nurses and Anganwadi workers [241-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Productivity gains from AI in education and health settings are noted in S14, and efficiency improvements through AI‑driven agritech illustrate similar time‑saving effects in S16.
MAJOR DISCUSSION POINT
Productivity gains through task automation
AGREED WITH
Johannes Zutt, Michael Kremer
Argument 2
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
EXPLANATION
Iqbal points out that even when AI tools outperform humans in controlled settings, lack of trust and misalignment with existing workflows can undermine adoption. Successful pilots must consider system‑level adjustments.
EVIDENCE
He cites studies where AI diagnostic tools, despite higher lab accuracy, reduced doctor efficiency due to insufficient training, and gives an Indian GST fraud-detection example where the government rejected scaling because the AI removed human discretion, highlighting trust and governance issues [309-318].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies on trust deficits in AI deployments for health workers (S30), governance and misalignment failures (S29), and the importance of trust for scaling (S31) provide supporting context.
MAJOR DISCUSSION POINT
Implementation challenges due to trust and system fit
AGREED WITH
Johannes Zutt, Michael Kremer, Anu Bradford
Argument 3
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
EXPLANATION
He argues that pilots should arise from genuine demand by end‑users—students, teachers, and school districts—and should show measurable improvements before scaling.
EVIDENCE
Iqbal notes that the AI education tool responded to demand from students for better essays, teachers for reduced grading workload, and districts for demonstrable progress, illustrating a successful demand-driven rollout [250-254].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Guidance on moving from pilots to full deployment through demand‑driven evidence and trust considerations is outlined in S31 and S32.
MAJOR DISCUSSION POINT
Importance of demand‑driven pilots
AGREED WITH
Michael Kremer, Ufuk Akcigit
M
Michael Kremer
5 arguments · 160 words per minute · 1592 words · 593 seconds
Argument 1
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
EXPLANATION
Michael highlights AI‑driven weather forecasting as a non‑rival public good that can improve agricultural decisions for millions of smallholder farmers, thereby increasing productivity and resilience.
EVIDENCE
He references India’s AI weather-forecast service that reached 38 million farmers, noting that accurate forecasts led to earlier transplanting and greater hybrid-seed use, and that similar positive responses have been observed globally [133-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI‑enhanced weather services for millions of farmers are described in S34, with complementary examples of predictive agricultural models in S33 and broader productivity impacts in S14.
MAJOR DISCUSSION POINT
AI for public‑good agriculture
AGREED WITH
Johannes Zutt, Iqbal Dhaliwal
Argument 2
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
EXPLANATION
Michael warns that without proactive government action and well‑designed procurement processes, AI innovations may remain confined to the private sector, excluding the most vulnerable populations.
EVIDENCE
He describes the need for governments and multilateral banks to address market failures, cites examples where private firms lack incentives for public-good AI, and stresses that procurement structures can create monopsony power, limiting competition and scaling [263-277][284-292].
MAJOR DISCUSSION POINT
Risks of delayed public adoption
AGREED WITH
Johannes Zutt, Anu Bradford, Iqbal Dhaliwal
Argument 3
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
EXPLANATION
Michael outlines a tiered funding model that starts with small grants for pilots, moves to larger grants for rigorous testing, and culminates in scale‑up financing, thereby de‑risking AI projects and encouraging evidence‑based scaling.
EVIDENCE
He details the Development Innovation Ventures structure, noting its origins in the U.S. government, its independent relaunch, and its three-stage grant system that supports pilots, rigorous evaluation, and eventual scaling [266-271].
MAJOR DISCUSSION POINT
Staged funding for AI innovation
AGREED WITH
Ufuk Akcigit, Iqbal Dhaliwal
DISAGREED WITH
Johannes Zutt, Ufuk Akcigit
Argument 4
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Johannes Zutt; Michael Kremer)
EXPLANATION
Michael reinforces the need for joint effort between governments, multilateral institutions, and private firms to ensure AI solutions are both locally relevant and scalable.
EVIDENCE
He references the Development Innovation Ventures fund as an example of public-private collaboration that supports evidence-based pilots, complementing Johannes’s description of the World Bank’s advisory role and sandbox creation for small-AI projects [266-271][45-49].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Public‑private collaboration frameworks for AI ecosystems are discussed in S22, S23 and the need for clear operational pathways for scaling pilots is highlighted in S31.
MAJOR DISCUSSION POINT
Public‑private partnership for AI
Argument 5
AI can improve traffic safety and enforcement through automated cameras and AI‑driven driver‑licensing tests, enhancing fairness and reducing accidents
EXPLANATION
Kremer highlights AI applications that automate traffic monitoring and driver assessment, which can lower accident rates and increase perceived fairness in enforcement.
EVIDENCE
He describes “automated traffic cameras that have the opportunity to improve traffic outcomes” and cites the “India Research Program” where AI-based driver-license testing (HAB) led to a “20 to 30 percent” reduction in unsafe drivers across 56 sites [276-278].
MAJOR DISCUSSION POINT
AI for public‑good safety applications
U
Ufuk Akcigit
6 arguments · 163 words per minute · 1041 words · 382 seconds
Argument 1
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
EXPLANATION
Ufuk stresses that AI’s potential can be realized only when underlying business conditions—such as competition, regulatory clarity, and entrepreneurial ecosystems—are conducive. Without these, AI tools may not translate into growth.
EVIDENCE
He questions why, historically, firm size in emerging economies correlated with family size rather than competition, arguing that without a business-friendly environment AI cannot deliver its promised gains [104-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The link between AI‑driven entrepreneurship and supportive ecosystems is highlighted in S14, while multistakeholder partnership and skilling initiatives that foster such environments are detailed in S22 and S23.
MAJOR DISCUSSION POINT
Business climate as prerequisite for AI benefits
AGREED WITH
Johannes Zutt, Michael Kremer
DISAGREED WITH
Johannes Zutt, Michael Kremer
Argument 2
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration and threatening labor markets (Ufuk Akcigit)
EXPLANATION
He differentiates between the application layer (low entry barriers) and the foundational layer (high barriers due to heavy requirements for compute, data, and skilled talent). This concentration risks limiting competition and exacerbating labor market disruptions.
EVIDENCE
Ufuk notes that the foundational layer demands substantial compute, data, and talent, making it prone to concentration, and that outcomes at this layer will spill over to the application layer, influencing overall innovation dynamics [93-100].
MAJOR DISCUSSION POINT
Concentration in foundational AI
AGREED WITH
Anu Bradford, Michael Kremer
Argument 3
Strengthening the overall business climate is essential for entrepreneurs to exploit AI’s potential (Ufuk Akcigit)
EXPLANATION
He reiterates that a supportive entrepreneurial environment—characterized by competition, clear regulations, and access to finance—is vital for firms to harness AI technologies effectively.
EVIDENCE
His earlier remarks about the need to fix business environments, citing the lack of competition-friendly conditions and the historical reliance on family size for firm growth, underscore this point [104-115].
MAJOR DISCUSSION POINT
Need for pro‑entrepreneurial policies
Argument 4
The application layer lowers entry barriers for startups, while the foundational layer remains highly concentrated, shaping future innovation dynamics (Ufuk Akcigit)
EXPLANATION
Ufuk observes that AI applications can be built by small firms due to low entry costs, but the underlying models and infrastructure remain dominated by a few large players, influencing the trajectory of innovation.
EVIDENCE
He describes the application layer as having low entry barriers, enabling small businesses to perform tasks previously reserved for large firms, contrasted with the foundational layer’s high barriers and concentration risk [87-93].
MAJOR DISCUSSION POINT
Dual‑layer structure of AI markets
Argument 5
Preserving strong university research capacity is vital to keep the foundational AI layer contestable and prevent excessive concentration (Ufuk Akcigit)
EXPLANATION
Ufuk argues that universities are essential for maintaining a competitive foundational AI ecosystem; weakening academic research would tilt power toward large incumbents.
EVIDENCE
He states that keeping the foundational layer contestable requires healthy universities, warning that without them the sector could become overly concentrated among large incumbent information companies [349-351].
MAJOR DISCUSSION POINT
Role of academia in AI foundations
Argument 6
AI‑driven creative destruction is a key engine of long‑term economic growth, but early indicators must be monitored to manage risks
EXPLANATION
Ufuk argues that creative destruction, accelerated by AI, fuels sustained growth, while stressing the importance of tracking early signals to avoid adverse side effects.
EVIDENCE
He states “creative destruction is an important driver of economic growth in the long run” and later adds “we need to look at early indicators” when discussing AI’s impact [76-78][101-102].
MAJOR DISCUSSION POINT
Creative destruction and AI
Anu Bradford
4 arguments, 199 words per minute, 1374 words, 412 seconds
Argument 1
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
EXPLANATION
Anu contends that countries in the Global South should develop AI regulations that protect fundamental rights and ensure equitable benefit distribution, while adapting successful elements from the EU’s rights‑driven approach to fit local contexts.
EVIDENCE
She references the EU’s AI Act as a rights-based, economy-wide framework that protects individual rights and promotes broader benefit sharing, suggesting India can learn from this while customizing its own rules [167-176].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for rights‑based, accountable AI regulation and safeguards against abuse are made in S24 and S25, and the establishment of trustworthy AI networks for the Global South is discussed in S20.
MAJOR DISCUSSION POINT
AI sovereignty through rights‑based regulation
AGREED WITH
Johannes Zutt, Michael Kremer, Iqbal Dhaliwal
DISAGREED WITH
Johannes Zutt
Argument 2
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
EXPLANATION
Anu highlights that AI development relies on a globally interlinked supply chain for compute, semiconductors, and raw materials, making absolute sovereignty unrealistic. She warns against weaponising these interdependencies.
EVIDENCE
She outlines the AI stack’s reliance on U.S. semiconductor design, Taiwanese manufacturing, Dutch equipment (ASML), Japanese chemicals, and Chinese raw materials, noting that attempts to weaponise these choke points can backfire and harm all parties [357-371].
MAJOR DISCUSSION POINT
Limits of AI sovereignty due to supply chains
AGREED WITH
Ufuk Akcigit, Michael Kremer
DISAGREED WITH
Jeanette Rodrigues
Argument 3
Over‑regulation, weak governance, and geopolitical weaponisation of AI pose systemic risks (Anu Bradford)
EXPLANATION
Anu warns that overly stringent or poorly designed regulations can stifle innovation, while geopolitical competition may lead to the weaponisation of AI components, creating systemic vulnerabilities.
EVIDENCE
She notes the difficulty even established regulators like the EU face in crafting AI legislation, and later expands on how geopolitical rivalries over compute, semiconductors, and raw materials could be weaponised, undermining global stability [166-176][357-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Systemic risks from poorly designed regulations and geopolitical competition are examined in S24, S25 and S29, with additional emphasis on trust and verification challenges in S28.
MAJOR DISCUSSION POINT
Systemic risks from regulation and geopolitics
Argument 4
AI regulation should be adapted to local contexts; countries like India must tailor lessons from the EU rather than copy‑paste frameworks (Anu Bradford)
EXPLANATION
Bradford argues that while the EU provides a rights‑based regulatory template, each nation should modify it to reflect its unique priorities and institutional realities.
EVIDENCE
She says “India is a formidable economy that doesn’t need to take a template and plug it into the economy as such. I think India is in a very good position to take the lessons that serves its needs yet make the kind of local modification and variations that are more reflecting the distinct priorities of this country” [177-178].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for context‑specific, rights‑based AI policy frameworks is reinforced in S24 and S25, which stress adapting global standards to national realities.
MAJOR DISCUSSION POINT
Context‑specific AI regulation
Jeanette Rodrigues
3 arguments, 174 words per minute, 1039 words, 356 seconds
Argument 1
AI innovation does not disperse equally, risking a widening of development gaps (Jeanette Rodrigues)
EXPLANATION
Rodrigues observes that AI and related innovations fail to spread uniformly across societies, which can exacerbate existing inequalities rather than close them.
EVIDENCE
She explicitly states that “AI, innovation never disperses and never diffuses equally” [61].
MAJOR DISCUSSION POINT
Inequitable diffusion of AI
Argument 2
Policymakers need a balanced, pragmatic, policy‑first approach that reconciles hope and concern about AI (Jeanette Rodrigues)
EXPLANATION
She argues that the discussion should aim to manage both the optimism surrounding AI’s benefits and the fears about its risks by adopting practical, policy‑driven solutions for real‑world implementation.
EVIDENCE
Rodrigues says “we need to balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world” [71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Policy‑lab approaches and collaborative sandbox models that balance innovation with risk management are described in S19 and S20.
MAJOR DISCUSSION POINT
Balanced, policy‑first AI governance
Argument 3
Governance of large language models is dominated by the US and China, limiting Global South sovereignty; the Global South must develop its own AI governance frameworks (Jeanette Rodrigues)
EXPLANATION
She points out that the concentration of foundational AI models in a few major powers raises questions about who sets the rules for developing economies, implying the need for independent AI sovereignty.
EVIDENCE
Rodrigues notes that “large language models are concentrated in the US, in China now with DeepSeek… Who sets the AI rules for the Global South?” [163-166].
MAJOR DISCUSSION POINT
AI governance concentration and Global South sovereignty
Agreements
Agreement Points
AI is a powerful catalyst for development in emerging and developing economies, offering productivity gains in agriculture, health, finance and other sectors.
Speakers: Johannes Zutt, Michael Kremer, Ufuk Akcigit
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
All three speakers highlight that AI can substantially improve productivity and outcomes in key sectors for low-income countries, from farm-level pest detection and health diagnostics to AI-driven weather forecasts, provided the right ecosystem exists [6-10][12-20][133-155][104-106].
POLICY CONTEXT (KNOWLEDGE BASE)
This view aligns with analyses of AI-driven economic growth and social development in the literature, such as the discussion of AI as a catalyst for productivity in agriculture, health and finance in S43, and the distinction between foundational and application layers for emerging economies in S53 and S54.
Effective AI deployment requires coordinated public‑private collaboration and institutional mechanisms such as advisory support, sandboxes and staged funding.
Speakers: Johannes Zutt, Michael Kremer, Iqbal Dhaliwal
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Michael Kremer)
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
Johannes stresses standards and sandbox environments, Michael describes the Development Innovation Ventures fund that bridges public and private effort, and Iqbal points to demand-driven pilots as a practical way to align stakeholders, all underscoring the need for coordinated ecosystems [45-53][49-53][266-271][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Public‑private partnerships and sandbox approaches are repeatedly advocated as essential for responsible AI rollout, e.g., the sandboxes for data governance framework in S58, the IGF discussion on sandbox value in S60, and broader PPP recommendations in S63 and S64. Risk‑based policy guidance for experimentation versus oversight is outlined in S52.
AI also brings significant risks, notably job displacement for low‑skill workers, infrastructure gaps, and the danger of slow public‑sector adoption that could leave the poor behind.
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer, Iqbal Dhaliwal
AI may displace entry‑level, routine jobs and many developing countries lack basic infrastructure (Johannes Zutt)
The biggest risk is the labor market – entry‑level jobs may disappear faster than workers can adapt (Ufuk Akcigit)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Johannes warns of automation-driven job loss and weak electricity/internet, Ufuk flags rapid erosion of entry-level employment, Michael cautions that governments may lag in adopting AI for public goods, and Iqbal notes that lack of trust and system alignment can stall deployment-all pointing to structural and adoption risks [22-31][405-412][263-277][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Labour‑market disruption concerns are central to policy debates, as seen in the call for responsible governance to minimise displacement in S44, the Davos 2026 panel linking AI impact to broader economic shocks in S47, and empirical studies showing limited displacement in the US (S48) versus sharper job losses in the UK (S49).
Localized, low‑cost “small AI” solutions are essential for contexts with limited connectivity, data and skill resources.
Speakers: Johannes Zutt, Iqbal Dhaliwal, Michael Kremer
The World Bank focuses on “small AI”: affordable, locally relevant solutions, and provides advisory and sandbox environments (Johannes Zutt)
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Johannes describes the Bank’s “small AI” strategy, Iqbal gives concrete classroom examples where AI automates routine tasks, and Michael cites AI weather services that operate at scale with minimal user cost, collectively emphasizing the importance of affordable, context-aware AI [34-38][39-46][241-247][133-155].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for affordable, context‑adapted AI is highlighted in discussions of democratizing compute and data infrastructure in S62, and the cost barriers to training large models that may limit diffusion to low‑resource settings in S54.
Robust governance, regulation and standards are needed to prevent misuse of AI, especially when targeting individuals for poverty reduction or public services.
Speakers: Johannes Zutt, Michael Kremer, Anu Bradford, Iqbal Dhaliwal
AI enables precise, individual‑level poverty targeting, but robust governance is essential to prevent abuse (Johannes Zutt)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Johannes highlights the need for governance when using AI for poverty targeting, Michael stresses that without proper public mechanisms the poor may be excluded, Anu advocates rights-based, locally adapted regulation, and Iqbal points to trust and system-fit as governance challenges, all converging on the necessity of strong, context-sensitive oversight [414-416][263-277][167-176][309-318].
POLICY CONTEXT (KNOWLEDGE BASE)
Governance imperatives are underscored in S45’s emphasis on human responsibility, the call for guardrails in S46, the caution against full AI control over sensitive services in S56, and the broader regulatory philosophy debate captured in S59.
The foundational AI layer is highly concentrated in a few countries and firms, creating systemic risks for competition and innovation.
Speakers: Ufuk Akcigit, Anu Bradford, Michael Kremer
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration and threatening labor markets (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Big AI can, through computational power, generate new knowledge that can help us do things much better, but small AI translation is also important (Michael Kremer)
Ufuk describes the concentration risk of the compute-heavy foundational layer, Anu maps the global supply chain that ties AI capability to a few powerhouses, and Michael notes the distinction between “big AI” and “small AI” while acknowledging the dominance of large players, together underscoring systemic concentration concerns [93-100][357-371][55-57].
POLICY CONTEXT (KNOWLEDGE BASE)
Concentration risks are documented in the UNCTAD report on the digital economy in S50, the India workforce case noting data and compute concentration in S51, and the foundational vs. application layer analysis in S53.
Evidence‑based evaluation and early indicators are crucial for guiding AI policy and scaling interventions.
Speakers: Michael Kremer, Ufuk Akcigit, Iqbal Dhaliwal
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
AI‑driven creative destruction is a key engine of long‑term growth, but early indicators must be monitored to manage risks (Ufuk Akcigit)
Demand‑driven pilots that demonstrate clear impact (e.g., education tools) are key for scaling successful AI projects (Iqbal Dhaliwal)
Michael outlines a tiered, evidence-based funding model, Ufuk stresses monitoring early signals of creative destruction, and Iqbal emphasizes demand-driven pilots with measurable outcomes, all advocating data-driven decision-making for AI deployment [266-271][101-103][250-254].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy frameworks stress evidence‑based assessment, as reflected in the responsible AI governance recommendations in S44 and the discussion of the mismatch between public fear and measured impact in S55.
Similar Viewpoints
Both stress that public institutions need to provide frameworks, standards and collaborative platforms (e.g., sandboxes, advisory roles) to ensure AI solutions are safe, trustworthy and scalable [45-53][266-271].
Speakers: Johannes Zutt, Michael Kremer
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
Coordination between public agencies and private innovators is crucial to build a vibrant AI ecosystem (Michael Kremer)
Both recognize that AI capabilities are concentrated in a few geopolitical actors due to the underlying hardware and talent ecosystem, limiting true sovereignty for any single country [93-100][357-371].
Speakers: Ufuk Akcigit, Anu Bradford
The foundational AI layer is compute‑, data‑, and talent‑intensive, creating high concentration (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Both highlight institutional and trust barriers that can prevent AI benefits from reaching intended users, emphasizing the need for careful implementation and procurement design [309-318][263-277].
Speakers: Iqbal Dhaliwal, Michael Kremer
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Slow public‑sector adoption and procurement lock‑ins risk leaving the poor without access to AI benefits (Michael Kremer)
Unexpected Consensus
Agreement that AI innovation does not disperse equally and may widen development gaps, despite varied professional backgrounds.
Speakers: Jeanette Rodrigues, Johannes Zutt, Michael Kremer, Ufuk Akcigit, Anu Bradford
AI innovation does not disperse equally, risking a widening of development gaps (Jeanette Rodrigues)
AI may displace entry‑level jobs and many developing countries lack basic infrastructure (Johannes Zutt)
Slow public‑sector adoption risks leaving the poor without access (Michael Kremer)
The foundational AI layer is concentration‑prone, threatening competition (Ufuk Akcigit)
Full AI sovereignty is limited by global supply‑chain interdependence (Anu Bradford)
While each speaker approached the issue from different angles (technology diffusion, job loss, public-sector lag, concentration, supply-chain dependence), they all converge on the unexpected consensus that without deliberate policy action AI could exacerbate existing inequalities rather than close them [61][22-31][263-277][93-100][357-371].
POLICY CONTEXT (KNOWLEDGE BASE)
Unequal diffusion and widening gaps are highlighted in the UNCTAD analysis of concentration (S50), the India case on competitive imbalances (S51), and the optimism‑caution balance in S54.
Overall Assessment

The panel shows strong convergence on the dual nature of AI: it is a powerful development tool but also poses systemic risks related to job displacement, concentration, and governance. Consensus is high on the need for public‑private collaboration, localized small‑AI solutions, robust rights‑based regulation, and evidence‑based scaling mechanisms.

High consensus across speakers on both opportunities and risks, implying that policy agendas should simultaneously promote inclusive, small‑AI pilots, strengthen institutional frameworks, and address concentration and governance challenges to ensure AI narrows rather than widens development gaps.

Differences
Different Viewpoints
What is the primary mechanism to harness AI for development in emerging economies?
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer
Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited (Johannes Zutt)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
Innovation funds such as Development Innovation Ventures provide staged, evidence‑based financing to pilot, test, and scale AI interventions (Michael Kremer)
Johannes advocates focusing on “small AI” and advisory work with sandboxes to enable low-cost, locally relevant solutions [34-38][49-53]. Ufuk stresses that without a supportive business climate-competition, clear regulation, access to finance-AI cannot deliver its promises [104-115]. Michael proposes a staged public-private financing mechanism (Development Innovation Ventures) to de-risk pilots and scale successful projects [266-271]. The three speakers agree AI can help development but disagree on whether the main lever should be technical advisory/small-AI deployment, broader entrepreneurship-friendly reforms, or dedicated innovation funding.
POLICY CONTEXT (KNOWLEDGE BASE)
Stakeholders debate mechanisms such as sandboxes, advisory bodies and PPPs; the sandboxes for data governance approach (S58), the IGF discussion on sandbox value (S60), and the emphasis on public‑private collaboration in S63 and S64 provide relevant context.
Who should set AI rules for the Global South and how feasible is AI sovereignty?
Speakers: Jeanette Rodrigues, Anu Bradford
Who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty? (Jeanette Rodrigues)
Full AI sovereignty is limited by global supply‑chain interdependence; techno‑nationalism must be balanced against cooperation (Anu Bradford)
Jeanette raises the question of who will define AI governance for developing countries, implying that a sovereign framework could be established [163-166]. Anu counters that true sovereignty is unrealistic because AI development depends on a globally linked supply chain for compute, semiconductors, equipment, and raw materials, making any unilateral rule-making vulnerable to geopolitical pressures [357-371]. The disagreement centers on the feasibility and locus of AI governance for the Global South.
POLICY CONTEXT (KNOWLEDGE BASE)
The feasibility of AI sovereignty and rule‑setting is examined in the UNCTAD report on digital economy concentration (S50), the data‑sovereignty perspective in S62, and broader calls for human‑centric responsibility in AI governance in S45.
Regulatory approach: sandbox/advisory versus rights‑based comprehensive regulation
Speakers: Johannes Zutt, Anu Bradford
Governments and multilateral bodies must create standards, sandboxes, and advisory support to enable safe AI deployment (Johannes Zutt)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
Johannes emphasizes a pragmatic, standards-based sandbox model that the World Bank can help build, focusing on technical interoperability and low-cost applications [33-36][49-53]. Anu argues for a broader, rights-driven regulatory regime modeled on the EU AI Act, adapted to local contexts, suggesting a more comprehensive legal framework [167-176][177-178]. The two approaches differ in scope and emphasis-technical facilitation versus rights-based regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between soft, sandbox‑based regulation and rights‑based comprehensive frameworks is a recurring theme, seen in the sandboxes advocacy in S58, the regulatory philosophy debate summarized in S59, and the contrasting views on guardrails versus flexible experimentation in S45 and S46.
Severity and handling of AI‑induced labor market disruptions
Speakers: Johannes Zutt, Ufuk Akcigit
One of them is there will be some job losses, particularly sort of entry‑level jobs that are very much knowledge or document‑based, performing relatively rote work that can be taken over by automation (Johannes Zutt)
The biggest risk, I think, is definitely the labor market. If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry (Ufuk Akcigit)
Johannes notes that AI will eliminate certain low-skill, document-based positions, citing reductions in entry-level job ads within the World Bank Group [22-24]. Ufuk stresses that the broader labor market-especially entry-level coding jobs that fuel urban tech hubs-could be rapidly displaced, and he wishes for a mechanism to slow AI adoption to protect workers [405-411]. While both acknowledge job displacement, they differ on the perceived magnitude and the policy response needed.
POLICY CONTEXT (KNOWLEDGE BASE)
Assessments of labour‑market impact range from cautionary (S44) to empirical findings of limited displacement (S48) and sharper job losses in specific economies (S49), with broader contextualisation of AI’s impact alongside geopolitical factors in S47.
Unexpected Differences
Real‑world impact of AI pilots versus optimism about AI as a public good
Speakers: Iqbal Dhaliwal, Michael Kremer
Trust gaps and mismatches between technology performance and real‑world systems can cause pilots to fail or be rejected (Iqbal Dhaliwal)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Iqbal highlights that even technically superior AI tools may be rejected if users distrust them or if existing workflows are not adapted, citing failed GST fraud-detection scaling due to loss of human discretion [309-318]. Michael, by contrast, presents concrete success stories (AI weather forecasts reaching 38 million Indian farmers and improving planting decisions) as evidence that AI public goods can be rapidly adopted and generate impact [133-155]. The tension between skepticism about adoption barriers and optimism about transformative outcomes was not anticipated given the overall pro-AI tone of the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The gap between pilot optimism and measured outcomes is discussed in the mismatch analysis of public fear versus actual impact in S55, complemented by empirical stability evidence in S48 and the UK job‑loss case in S49.
Overall Assessment

The panel broadly concurs that AI holds significant promise for emerging economies, especially through small‑scale, locally relevant applications and public‑good services. Nevertheless, substantial disagreement exists on the optimal policy lever—whether to prioritize technical sandboxes, rights‑based regulation, business‑environment reforms, or staged innovation financing. Additional contention surrounds the feasibility of AI sovereignty for the Global South and the severity of labor‑market disruptions. These divergences suggest that coordinated, multi‑track strategies will be required to balance rapid AI deployment with governance, regulatory, and labor considerations.

Moderate to high: while the overarching goal of inclusive AI‑driven development is shared, the panelists propose markedly different pathways and express contrasting views on governance feasibility and labor impacts, indicating that consensus on concrete policy actions remains limited.

Partial Agreements
All speakers agree that AI has the potential to drive inclusive development and improve livelihoods in emerging economies. However, they diverge on the means: Johannes stresses small, advisory‑driven deployments; Ufuk calls for macro‑level business‑environment reforms; Michael highlights public‑good applications supported by innovation funding; Anu focuses on rights‑based regulatory sovereignty. The shared goal of inclusive AI‑driven growth is clear, but the pathways differ.
Speakers: Johannes Zutt, Ufuk Akcigit, Michael Kremer, Anu Bradford
AI can be a game‑changer for emerging markets, boosting productivity across sectors such as agriculture, health, and finance (Johannes Zutt)
AI offers sizable opportunities for developing economies, but only if the broader business environment supports entrepreneurship (Ufuk Akcigit)
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and low‑income users (Michael Kremer)
The Global South needs AI sovereignty and rights‑based regulatory frameworks, drawing lessons from the EU while tailoring to local priorities (Anu Bradford)
All three agree that AI can improve service delivery in education, health, and agriculture. Yet Michael stresses the need for government procurement and public‑sector adoption to reach the poor, Iqbal emphasizes demand‑driven pilots and trust building, and Johannes focuses on technical complementarity and skill expansion. The consensus on benefits is matched with differing implementation strategies.
Speakers: Michael Kremer, Iqbal Dhaliwal, Johannes Zutt
Public‑good AI tools—e.g., AI‑enhanced weather forecasts—can dramatically help farmers and other low‑income users (Michael Kremer)
Small‑scale AI applications free up teachers’ and health workers’ time, improving education and healthcare outcomes (Iqbal Dhaliwal)
AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide (Johannes Zutt)
Takeaways
Key takeaways
AI can be a transformative development tool for emerging economies, improving productivity in agriculture, health, education, finance and public‑good services such as weather forecasting.
‘Small AI’ – affordable, locally relevant, low‑connectivity solutions – is critical for contexts with limited electricity, internet, literacy and data infrastructure.
The foundational AI layer (large models, compute, data, talent) is highly concentrated, creating risks of market power, talent drain from academia, and uneven innovation dynamics.
AI will displace certain entry‑level, routine jobs, especially in document‑based tasks, creating labor‑market pressures in developing countries.
Effective AI deployment requires both public‑sector actions (standards, sandboxes, advisory support, procurement reforms) and private‑sector innovation; coordination between them is essential.
Regulatory frameworks must balance innovation‑friendliness with protection of rights and public‑good outcomes; the EU rights‑based approach offers lessons but must be adapted locally.
Multilateral institutions can accelerate impact by providing evidence‑based funding pipelines (pilot → test → scale) and by helping governments create enabling environments.
Trust gaps and misalignment between technology performance and real‑world systems can cause pilots to fail or be rejected; implementation design matters as much as the model itself.
Geopolitical interdependence in the AI supply chain limits full AI sovereignty; techno‑nationalism should be tempered with cooperation.
Resolutions and action items
World Bank to continue focusing on ‘small AI’ projects, providing advisory support, standards and sandbox environments for local innovators.
Development Innovation Ventures (and similar funds) to be used as a staged, evidence‑based financing mechanism for AI pilots, rigorous testing, and scaling.
Governments (e.g., India) to develop policies that ensure AI accessibility offline, promote digital identity integration, and support private‑sector AI startups.
Encourage creation of demand‑driven AI pilots that demonstrably free up public‑sector worker time (e.g., education grading tools, health diagnostics).
Maintain and strengthen university research capacity to keep the foundational AI layer contestable and to mitigate concentration of talent in industry.
Design procurement processes that require continuous evaluation, A/B testing, and open‑access provisions to avoid lock‑ins.
Unresolved issues
How to design AI regulations that protect rights without stifling innovation, especially in the Global South where regulatory capacity is limited.
Concrete strategies to mitigate labor displacement for entry‑level workers and to reskill affected populations.
Mechanisms to ensure that AI‑driven public‑good tools (e.g., weather forecasts, tax fraud detection) are adopted and scaled by governments despite political or bureaucratic resistance.
Addressing the growing market concentration and ensuring competitive dynamics in both the application and foundational AI layers.
How to manage geopolitical risks and supply‑chain dependencies (semiconductors, rare earths) while pursuing AI sovereignty.
Establishing trustworthy governance frameworks that balance power between AI developers, regulators, and end‑users.
Suggested compromises
Adopt a rights‑based regulatory approach (EU model) but tailor it to local priorities, avoiding a binary choice between regulation and innovation.
Combine public‑facing standards and sandbox creation with private‑sector‑driven application development to leverage strengths of both sectors.
Support both small‑AI deployments for immediate impact and invest in foundational AI research to keep the ecosystem competitive.
Implement staged funding (pilot → test → scale) that allows early learning while still providing pathways for rapid scaling of successful solutions.
Encourage incremental AI adoption in government services (e.g., AI‑assisted call centers, health record linking) to improve productivity without abrupt labor shocks.
Thought Provoking Comments
AI can be a game‑changer for emerging markets, but the real bottleneck is basic infrastructure – reliable electricity, internet, literacy – leading us to focus on "small AI" that works on limited devices and connectivity.
He reframed the AI debate from a high‑tech, global perspective to the concrete, on‑the‑ground constraints of developing economies, introducing the practical concept of “small AI” as a solution.
Shifted the conversation from abstract potential to actionable implementation, prompting later speakers (e.g., Michael and Iqbal) to discuss concrete pilots (weather forecasts, teacher‑assistant tools) and setting the stage for the policy‑focused parts of the panel.
Speaker: Johannes Zutt
The AI ecosystem has two layers: a foundational layer (compute‑heavy, data‑heavy, talent‑heavy) that is highly concentrated, and an application layer where entry barriers are low. The concentration at the foundation will spill over to the application side.
He introduced a clear analytical framework that separates structural constraints from market opportunities, highlighting why concentration matters for creative destruction.
Prompted a deeper discussion on market concentration and the role of incumbents, leading Anu and Iqbal to raise concerns about regulatory sovereignty and the risk of power consolidation in AI deployment.
Speaker: Ufuk Akcigit
Public‑good AI applications (e.g., AI‑driven weather forecasts for 38 million Indian farmers) will not attract private investment, so governments and multilateral development banks must step in.
He linked a tangible AI use case to a market‑failure argument, showing how public sector action can unlock benefits for the poor that the private sector ignores.
Steered the dialogue toward funding mechanisms and the need for evidence‑based innovation funds, which Michael later expanded on, and reinforced the panel’s focus on policy levers rather than just technology.
Speaker: Michael Kremer
Framing regulation as the enemy of innovation is a false choice; the EU’s rights‑driven AI Act shows that a balanced, rights‑focused framework can coexist with vibrant AI ecosystems, and India can adapt such lessons without copying them wholesale.
She challenged the common narrative that regulation stifles innovation, offering a nuanced view that regulation can be both protective and enabling.
Opened the floor to a debate on AI sovereignty and regulatory design, influencing subsequent remarks from Ufuk about concentration and from Iqbal about power dynamics in scaling AI tools.
Speaker: Anu Bradford
AI should free up frontline workers’ time (e.g., teachers no longer correcting spelling) rather than replace them; demand‑driven pilots that improve productivity for teachers, nurses, and Anganwadi workers are the most successful.
He provided a concrete, demand‑side example that reframed AI from a job‑loss narrative to a productivity‑enhancement story, emphasizing user‑centric design.
Reinforced the “small AI” theme, influenced Michael’s later points about evidence‑based pilots, and set up his later caution about trust and policy mismatches.
Speaker: Iqbal Dhaliwal
Even when AI tools work better than humans in the lab (e.g., diagnostic AI), they can fail in the field because users aren’t trained and because the surrounding system isn’t adapted; scaling the GST fraud‑detection model was blocked because it removed human discretion—a power issue.
He highlighted the gap between technical performance and real‑world adoption, introducing the concept of “power” as a barrier to scaling AI, which is rarely discussed in tech‑centric panels.
Created a turning point that moved the discussion from optimism about AI capabilities to a critical examination of institutional inertia and political economy, prompting Anu and Ufuk to discuss sovereignty and concentration.
Speaker: Iqbal Dhaliwal
The biggest systemic risk is not super‑intelligent AI but that humanity becomes dumber by outsourcing thinking to models; education must teach students to augment, not replace, their cognition.
She shifted the risk narrative from external threats to internal cognitive decline, a perspective that broadens the ethical debate beyond regulation and market concentration.
Prompted Michael to echo concerns about public‑sector adoption and the need for careful procurement, and reinforced the panel’s concluding focus on long‑term societal impacts.
Speaker: Anu Bradford
AI could enable poverty‑targeted interventions at the individual level for the first time, but we risk failing to build robust governance to prevent abuses.
He distilled the panel’s core promise—precision poverty alleviation—while simultaneously flagging the governance challenge, encapsulating the central tension of the discussion.
Served as a concise summary that reinforced earlier points about governance, prompting the final round of reflections on both upside (health, education) and downside (concentration, governance failures).
Speaker: Johannes Zutt (rapid‑fire round)
Overall Assessment

The discussion was shaped by a series of pivot points that moved the conversation from high‑level optimism about AI’s potential to a nuanced, policy‑oriented debate about infrastructure, market concentration, regulatory design, and power dynamics. Johannes’s “small AI” framing grounded the talk in practical constraints, Ufuk’s two‑layer model introduced a structural lens that exposed concentration risks, and Iqbal’s field examples highlighted the human and institutional frictions that can derail even technically superior solutions. Anu’s challenge to the regulation‑vs‑innovation myth and her warning about cognitive atrophy broadened the scope to societal values. Michael’s public‑good examples and funding‑mechanism suggestions offered concrete pathways for action. Together, these comments redirected the panel from abstract hype to concrete, actionable insights, ensuring that the dialogue remained balanced between opportunity and risk.

Follow-up Questions
What early indicators can signal how AI will affect creative destruction in both advanced and emerging economies?
Understanding early signals is crucial for anticipating economic impacts and guiding policy.
Speaker: Ufuk Akcigit
Why have emerging economies historically lacked entrepreneurship and dynamism, and what business‑environment reforms are needed for AI to foster entrepreneurship?
Identifying structural barriers is essential to ensure AI translates into genuine economic dynamism.
Speaker: Ufuk Akcigit
How can developing countries overcome basic infrastructure constraints such as unreliable electricity, weak internet backbones, and low literacy to enable effective AI use?
Infrastructure gaps limit AI adoption; research is needed on feasible solutions for low‑resource settings.
Speaker: Johannes Zutt
What models and strategies are most effective for scaling ‘small AI’ applications in environments with limited connectivity and data?
Small AI can deliver high impact in low‑resource contexts, but evidence on best practices for scaling is limited.
Speaker: Johannes Zutt
What evaluation frameworks and metrics should policymakers use to assess AI interventions, ensuring continuous improvement and real‑world impact?
Robust evaluation is needed to determine which AI solutions deliver measurable benefits and to guide scaling decisions.
Speaker: Michael Kremer
Which regulatory approaches can balance innovation with protection of fundamental rights in the Global South, drawing lessons from the EU AI Act?
Adapting rights‑driven regulation could help emerging economies harness AI while safeguarding citizens.
Speaker: Anu Bradford
How can India tailor the EU’s rights‑driven AI regulatory model to its own priorities without stifling innovation?
India needs a customized regulatory framework that supports local innovation while ensuring safeguards.
Speaker: Anu Bradford
What are the implications of high concentration in the foundational AI layer (compute, data, talent) for market competition and downstream application development?
Concentration could limit access for smaller players and affect the diffusion of AI benefits.
Speaker: Ufuk Akcigit
How does the migration of AI talent from academia to industry affect open science, knowledge spillovers, and overall innovation?
Shifts toward proprietary research may reduce collaborative advances and public‑good outcomes.
Speaker: Ufuk Akcigit
What governance mechanisms are needed to ensure AI tools (e.g., GST fraud detection) are adopted at scale without undermining necessary human discretion?
Understanding the balance between algorithmic decision‑making and human oversight is vital for effective policy implementation.
Speaker: Iqbal Dhaliwal
How can trust in AI technologies be built among frontline workers such as doctors and teachers to ensure effective adoption and impact?
Lack of trust can negate technical performance; research on training, incentives, and system integration is required.
Speaker: Iqbal Dhaliwal
What procurement designs (e.g., evidence‑based innovation funds, A/B testing, open‑access requirements) can accelerate AI adoption in the public sector while ensuring competition and quality?
Innovative procurement can overcome market failures and speed up delivery of AI‑enabled public services.
Speaker: Michael Kremer
How can vulnerabilities in the AI supply chain (semiconductors, equipment, raw materials) be mitigated to reduce geopolitical weaponization risks?
Supply‑chain resilience is critical for sustainable AI development across nations.
Speaker: Anu Bradford
What policies can mitigate labor‑market risks from AI‑driven automation of entry‑level jobs in emerging economies, ensuring a just transition for workers?
Protecting vulnerable workers while fostering AI adoption requires targeted labor and social policies.
Speaker: Ufuk Akcigit
How can AI be leveraged to target poverty reduction at the individual level while establishing robust governance to prevent abuses?
Precision poverty targeting promises gains but raises governance and ethical concerns that need further study.
Speaker: Johannes Zutt
What detailed sector‑specific data are needed to better understand AI’s impact on jobs and productivity in South Asia?
More granular evidence would inform policy design and investment priorities.
Speaker: Johannes Zutt

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.