How AI Drives Innovation and Economic Growth

20 Feb 2026 15:00h - 16:00h

How AI Drives Innovation and Economic Growth

Session at a glance
Summary, key points, and speakers overview

Summary

The panel convened to examine how artificial intelligence can either narrow or widen development gaps in emerging economies [60-65]. Johannes Zutt highlighted AI’s capacity to boost productivity in sectors such as agriculture, health care and finance, noting that 15-16% of South Asian jobs show strong complementarity with AI [12-20]. He also warned that AI may displace entry-level, knowledge-based jobs and that many low-income countries lack the basic infrastructure (reliable electricity, broadband, and literacy) needed to deploy it effectively [21-31]. To address these constraints, the World Bank promotes “small AI”: affordable, locally relevant applications that operate with limited connectivity and data, citing India’s digital identity system and farmer-focused phone tools as exemplars [34-40][45-46].


Ufuk Akcigit argued that while the application layer of AI lowers entry barriers and encourages creative destruction, the foundational layer remains compute-, data- and talent-intensive, creating concentration risks that could spill over to downstream markets [86-98][99-101]. He stressed that without improving the business environment (for example, reducing firms’ reliance on family size for growth), AI alone will not generate entrepreneurship in developing economies [111-115]. Anu Bradford emphasized the need for AI sovereignty and a rights-based regulatory approach, noting that Europe’s AI Act illustrates how regulation can protect public interests while still fostering innovation, and that India must adapt such frameworks to its own priorities [167-176][181-190].


Michael Kremer argued that targeted public-good AI, such as AI-generated weather forecasts for 38 million Indian farmers and AI-assisted traffic safety tools, can substantially reduce poverty if supported by evidence-based innovation funds and rigorous impact evaluation [133-158][263-292]. Iqbal Dhaliwal provided a concrete small-AI case in Indian schools where AI automated spelling checks, freeing teachers to focus on higher-order learning, and warned that hype must be separated from reality because technology often fails without complementary system changes [236-247][294-318].


Across the discussion, participants agreed that AI’s promise hinges on coordinated public-private effort, robust infrastructure, and policies that mitigate job displacement and market concentration [34-38][86-98][263-270]. They also concurred that neglecting governance (whether through weak regulation, insufficient public-sector adoption, or power dynamics that block scaling of successful pilots) poses a major risk to equitable outcomes [414-416][321-328]. The panel concluded that while AI can drive transformative gains in health, education and agriculture, realizing these benefits requires proactive policy, inclusive regulation, and safeguards against concentration and labor-market shocks [382-388][394-398]. Overall, the discussion underscored AI’s dual potential to accelerate development and exacerbate inequality, urging immediate action to shape its trajectory responsibly [414-416].


Key points


Major discussion points


AI as a catalyst for development in emerging markets - Johannes Zutt emphasizes “small AI” that works with limited connectivity, data and skills, citing examples such as pest-identification for farmers, AI-assisted nursing, and credit scoring [10-21]. He highlights India’s digital identity and payment infrastructure as foundations for scaling such tools [39-42]. Michael Kremer adds concrete public-good cases, notably AI-driven weather forecasts that reached 38 million Indian farmers and improved planting decisions [136-151].


Structural challenges and concentration risks - The panel notes that AI can displace entry-level, knowledge-based jobs and that the World Bank itself sees fewer such positions advertised [22-32]. Ufuk Akcigit distinguishes a high-barrier “foundational layer” (compute, data, talent) that tends toward concentration, warning that this may spill over to the application layer [85-100]. Anu Bradford and later speakers point to the global concentration of large-model development in the US and China, and to rising market concentration that could lock in incumbents [185-190][342-347].


Policy, regulatory and governance imperatives - Both speakers stress the need for AI sovereignty and rights-based regulation. Anu Bradford describes the EU’s AI Act as a “rights-driven” approach and suggests India can adapt such lessons while preserving local priorities [166-176]. Michael Kremer argues that governments and multilateral development banks must fill gaps where private profit motives fall short, e.g., funding AI for public goods like digital IDs and weather forecasts [128-133]. He also proposes evidence-based innovation funds to accelerate responsible deployment [266-277].


Evidence-based implementation and evaluation - Iqbal Dhaliwal shares a school-AI pilot that freed teachers from routine grading, allowing them to focus on higher-order learning, illustrating the demand-driven benefits of “small AI” [236-247]. Michael Kremer outlines a four-stage evaluation framework (model performance, user impact, scalability, and continuous improvement) to ensure AI interventions deliver real outcomes [284-291]. The discussion repeatedly stresses the need to adapt institutional systems (e.g., teacher training, regulatory processes) to realize technology’s promise [308-315].


Overall purpose / goal of the discussion


The panel was convened to examine whether artificial intelligence will narrow or widen development gaps and to identify practical policy levers for emerging economies. Participants shared concrete use-cases, highlighted systemic risks, and debated how multilateral institutions, national governments, and the private sector can jointly shape an AI ecosystem that delivers inclusive growth.


Tone of the discussion


Opening (0-10 min): Optimistic and celebratory, emphasizing AI’s transformative potential.


Middle (10-30 min): Becomes more measured as speakers acknowledge infrastructure deficits, job displacement, and concentration of power, shifting to a cautious, problem-solving tone.


Later (30-48 min): Pragmatic and solution-focused, with concrete examples, policy recommendations, and calls for evidence-based pilots.


Closing (48-51 min): Balanced rapid-fire reflections, acknowledging both “big wins” (health, education) and “big risks” (market concentration, regulatory lag), ending on a sober yet hopeful note.


Overall, the conversation moves from enthusiastic optimism to critical realism, ending with a constructive, forward-looking outlook.


Speakers

Jeanette Rodrigues – Moderator/host of the panel discussion [S1]


Michael Kremer – Economist, Nobel laureate (mentioned in the transcript)


Johannes Zutt – World Bank representative (referred to as “John” in the discussion) [S8]


Iqbal Dhaliwal – Global Director of J-PAL at MIT [S9]


Ufuk Akcigit – Macroeconomist (provides analysis on creative destruction and AI’s impact on economies) [S11]


Anu Bradford – Expert on AI governance and regulation (contributes perspectives on AI sovereignty and regulatory approaches)


Additional speakers:


None (all speakers in the transcript are covered by the list above).


Full session report
Comprehensive analysis and detailed insights

Jeanette Rodrigues: The session opened with Jeanette thanking the participants and stating the panel’s aim: to explore whether artificial intelligence (AI) will narrow or widen development gaps in emerging economies and to identify the policy levers that should guide real-world implementation [2][3][60-65][71]. She noted that this was the fourth AI summit (the first having been held in the UK) and that participants repeatedly described the first session as “full of fear” about AI, framing the debate as a balance between hope and fear [2-3].


Johannes Zutt: Johannes described AI as a structural transformation already reshaping economies worldwide [6-7]. He clarified that the World Bank does not develop AI applications itself; its comparative advantage lies in advisory work-ensuring data reliability and helping governments create “AI sandbox” environments for experimentation [45-46]. He highlighted basic constraints in many low-income countries-unreliable electricity, weak broadband, low literacy, and reliance on very simple devices [26-31]. To address these gaps he introduced the Bank’s “small AI” agenda: affordable, locally relevant tools that function with limited connectivity, data and skills [34-36], and noted that the Bank is assisting governments in setting up AI sandboxes for pilots [49-52]. Examples included India’s digital identity programme and farmer-focused phone applications, which illustrate how small AI can be deployed at scale when supported by government standards and private-sector innovation [39-42][45-46][49-52]. He also pointed to AI-enabled services in education and health that can fill skill gaps for teachers and frontline workers [21-33].


Michael Kremer: Michael presented concrete public-good AI interventions. He explained that the Indian government’s AI-generated weather forecasts are a non-rival, public-good resource, justifying public investment [263-267]. The forecasts correctly predicted an early monsoon in Kerala and a later-than-expected progression elsewhere, becoming the only source of information for millions of farmers [263-267]. He also described AI-enabled traffic-safety tools, including automated traffic cameras and the HAB (AI-based driver-licence testing) program, which have reduced unsafe driving by 20-30% in pilot sites [263-267][268-277]. Emphasising the role of multilateral development banks, he argued that market failures leave critical public-good AI under-invested and proposed evidence-based innovation funds that follow a four-step evaluation framework: 1) model performance; 2) user impact; 3) scalability; 4) continuous improvement [284-291].


Ufuk Akcigit: Ufuk offered a macro-economic perspective on AI-driven creative destruction. He distinguished a “foundational layer” (compute-, data- and talent-intensive) with high entry barriers that tends to concentrate power, from an “application layer” where low barriers enable small firms to compete with incumbents [85-93][94-98]. He warned that concentration at the foundational level can spill over into downstream markets, limiting inclusive benefits [99-101][324-340]. He also questioned why entrepreneurship has historically been weak in emerging economies-citing family size and gendered labour dynamics as key determinants of firm growth-and argued that without reforms to the business environment, AI alone will not spark the desired dynamism [111-115][84-85].


Anu Bradford: Anu focused on governance and AI sovereignty. She described the European Union’s AI Act as a “rights-driven” framework that seeks to protect fundamental rights while distributing AI benefits more broadly [173-176]. She argued that the Global South must develop its own regulatory sovereignty, adapting lessons from the EU without merely copying them, to ensure AI serves local public-interest goals [167-176][181-190]. She also warned of the geopolitical concentration of AI capabilities in the United States and China, noting that supply-chain choke points in semiconductors and raw materials create strategic vulnerabilities for developing nations [357-371].


Iqbal Dhaliwal: Iqbal illustrated the impact of “small AI” on the ground. In a pilot in Indian public schools, AI automated routine spelling checks, freeing teachers to focus on higher-order learning and thereby improving educational outcomes [236-247]. He stressed that the success was demand-driven (teachers, students and districts all asked for the tool) and that similar time-saving AI could benefit frontline health workers [248-250]. He identified two recurring patterns: (a) trust: highly accurate AI diagnostics can fail in practice if users lack trust or proper training [294-318]; and (b) institutional adaptation: the GST-fraud detection model was not scaled because it threatened existing discretionary power, highlighting the need to adapt processes alongside technology [309-322].


Points of Consensus:


– All speakers concurred that AI’s transformative potential is contingent on basic infrastructure (electricity, connectivity, literacy) [6-7][61-71].


– The panel uniformly endorsed the “small AI” approach as a pragmatic pathway for low-resource settings [34-36][236-247].


– There was consensus that robust, rights-based yet locally adaptable regulation is essential to prevent misuse and manage risks such as job displacement and market concentration [21-33][173-176].


– Participants agreed that public-sector investment and evidence-based innovation funds are needed to develop AI public goods that the private market will not provide on its own [133-158][263-292][266-271].


Key Disagreements:


– Ufuk warned that the compute-heavy foundational layer will entrench concentration, whereas Johannes’s emphasis on deploying small AI did not directly address this structural bottleneck [94-98][34-36].


– On regulatory sovereignty, Anu advocated for a rights-driven, locally tailored framework, while Jeanette highlighted the dominance of US and Chinese AI developers and questioned whether true sovereignty is achievable [162-166][167-176].


– Johannes identified job losses as a challenge, whereas Ufuk called for a deliberate slowdown of AI adoption to give workers time to adjust [22-24][405-412].


– Johannes stressed formal governance mechanisms, whereas Iqbal argued that trust, training and system-level adaptation are equally critical for successful deployment [21-33][309-322].


Key Takeaways:


1. AI can be a powerful catalyst for productivity in agriculture, health, finance and education, but its impact is limited by infrastructural deficits [6-7][84-85][61-71].


2. The World Bank’s “small AI” strategy (affordable, offline-capable tools co-designed with governments and private innovators) offers a viable model for low-connectivity contexts [34-36][236-247].


3. High entry barriers of the foundational AI layer risk increasing market concentration and talent migration from academia to incumbents, threatening inclusive growth [94-98][324-340][342-347].


4. AI sovereignty requires rights-based regulation that can be customised to national priorities while learning from the EU’s approach [167-176][173-176].


5. Rigorous evaluation, covering model performance, user impact, scalability and continuous improvement, should guide AI pilots, as outlined by Michael [284-291].


Concrete Actions:


– The World Bank pledged to expand small-AI sandboxes across South Asian states, collaborating with governments to ensure interoperability and offline functionality [45-46][49-52].


– Multilateral development banks were urged to scale evidence-based innovation funds such as Development Innovation Ventures, providing tiered financing for pilots, rigorous testing and eventual scale-up [266-271].


– India’s digital identity and payment infrastructure were highlighted as foundational assets that other developing nations could emulate [39-42].


– Private-sector developers were encouraged to create demand-driven applications that free frontline workers’ time, while regulators were asked to adopt a rights-driven framework that balances innovation with safeguards [173-176][236-247].


Unresolved Issues:


– How can policy prevent the foundational AI layer from cementing incumbent dominance?


– What mechanisms will align AI talent pipelines with local ecosystems to avoid excessive brain-drain?


– How should finance ministers balance AI sovereignty with geopolitical dependencies in the semiconductor supply chain?


– How can public-sector procurement be reformed to avoid monopsonistic lock-in while ensuring rapid adoption of proven tools?


These questions point to a need for further research on competition-friendly policies, talent development programmes and procurement reforms [342-347][357-371][397-398].


Rapid-fire Closing:


– Iqbal warned that unchecked market concentration could become a regrettable legacy [382-384][324-340].


– Anu cautioned that over-reliance on generative AI might make humanity “dumber” if critical thinking is outsourced [387-392].


– Michael echoed the risk that public-sector inertia could deny the poor access to AI-driven services [394-398].


– Ufuk highlighted the labour-market risk of rapid AI adoption outpacing job creation [405-412].


Overall Assessment: The discussion moved from an initial optimism about AI’s transformative power to a nuanced, evidence-based appraisal of the structural, regulatory and societal challenges that must be addressed. The consensus calls for immediate, collaborative action to build the enabling environment, fund public-good AI, and design governance that safeguards inclusive growth while preserving the innovative dynamism essential for emerging economies to thrive in the AI era [414-416][382-388].


Session transcript
Complete transcript of the session
Jeanette Rodrigues

all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.

Johannes Zutt

So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.

It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them.

It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.

Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the number – the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that that layer, sort of at the bottom of the professional classes inside the bank group, there’s just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.

At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.

And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies, developing markets, harness the potential of AI and avoid the pitfalls? And for us in the World Bank group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to expand their, to grow their productivity if they have timely access to information of the right kind in their local language tailored to their specific circumstances.

So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke… applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.

There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.

We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements: interoperability, making sure that accessibility is possible, that programs can run offline, as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves. That’s not really in our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.

But we are helping governments to create the space that enables experimentation, in AI sandboxes, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.

We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.

Jeanette Rodrigues

Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, AI, innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.

To a man, to a woman, everybody I spoke with who’s attended the first AI summit to today, this is, I think, the fourth AI summit being held. The first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh, my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how can we balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.

So if I could start with you, Ufuk, how do you think about AI? And especially, where do you see areas of creative destruction? To foster the innovation that we need.

Ufuk Akcigit

Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to the creative destruction in the future? How does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s eye view.

And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers. One, the foundational layer. and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.

When we look at the foundation layer, the entry barrier is really, really high, and, you know, it’s very compute-heavy. It’s very data-heavy. It’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out because this is the upstream to the application layer, which is downstream to the foundation layer. So that’s why whatever will happen at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.

Maybe we can come back in the second layer. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was, you know, when we looked at the firm’s life cycle, for instance, why was it not up or out?

Why was it not, you know, very competition friendly? Why was it that the best predictor of firm size in emerging economies or developing economies was the size of the family and/or the number of male children? These are still lingering issues, and AI, you know, will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.

Jeanette Rodrigues

Ufuk, that’s a very interesting leaping of point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?

Michael Kremer

Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the… which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI for public goods, for example, that will not attract commercial investment commensurate with their needs.

And that's an area where I think governments and multilateral development banks can play an important role. Some of this very much echoes what you were saying about small models, and I'll also mention the link between the two. An obvious example where I think India has been a leader for the world is the development of digital identity. As Ufuk was saying, this enables a lot of work by individual entrepreneurs and a lot of other applications. So that's a huge success, and I think multilateral development banks, together with India, can help bring that to many other countries. Let me take another example, one that's not as well known, which picks up on your comment about farmers.

One thing that's critical for farmers: they have to make a bunch of decisions that are weather-dependent. When do you plant, for example? What varieties do you use: a drought-resistant variety, or another variety? Most farmers around the world don't have access to state-of-the-art weather forecasts. I'm not talking about one country; in low- and middle-income countries, they don't have access to that. Now, there's been a huge advance. We tend to think of large language models, but AI is also pushing science forward, and that includes weather forecasting; there's really a revolution driven by AI. But weather forecasts are non-rival and largely non-excludable. They're the classic definition of a public good.

So there's a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Here again, India is a leader. The Indian government distributed AI weather forecasts to 38 million farmers last year, and the evidence from this particular case suggests farmers respond. Last year's monsoon came early in Kerala and southern India, but then there was an unexpected delay in its progression. The AI forecasts got that right, and they were the only source of information that reached farmers with it. We did a survey in the areas above that line, and farmers are responding: they transplant more, they use hybrid seeds more.

Evidence from around the world is consistent with this: farmers respond to these AI weather forecasts. So that's one example, but there are many others, and I'm happy to discuss them in education, traffic enforcement, and elsewhere.

Jeanette Rodrigues

Michael, your answer should be: read the book. Okay. We've spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision; it's something India could do unilaterally. When it comes to large language models, that's not the reality. The large language models are concentrated in the US, and in China now with DeepSeek. Anu, in a world where the rules are largely being set by the two large powers, the US and China, arguably, and of course there's the EU as well, and you've done a lot of work on that: who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?

Anu Bradford

So I think the Global South has the same kind of incentive for its own AI sovereignty, including regulatory sovereignty: to design the rules that work better for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to make sure that it is innovation-friendly, yet at the same time you need to be careful in managing the risks for individuals and societies. So even very established regulators like the European Union have found it one of the most challenging tasks to come up with the AI Act. There's probably something to be learned from these jurisdictions that have gone ahead and done the kind of thinking that has resulted in some of the regulatory frameworks we now have in place.

So if you think about the choices that India has when it looks around, one of them is to think about how the EU goes about this. The EU follows what I would call a rights-driven approach to regulation. What really characterizes the AI Act, the first horizontal, binding, economy-wide regulation that the Europeans enacted, is that it seeks to protect the fundamental rights of individuals and the democratic structures of society, and it also seeks to ensure a greater distribution of the benefits of the AI revolution. The European approach is very conscious that it wants to share some of the benefits, so they don't all go to the large developers of these models, but individuals, society at large, and smaller companies benefit from AI as well.

So there's something I think the Europeans can teach in terms of that regulatory approach, in addition to some details of how that regulation was in the end constructed. But just one word: India is a formidable economy that doesn't need to take a template and plug it into its economy as such. I think India is in a very good position to take the lessons that serve its needs, yet make the kind of local modifications and variations that reflect the distinct priorities of this country.

Jeanette Rodrigues

Anu, before I turn to Iqbal, a quick follow-up question to you. As India makes its own rules, where does the trade-off lie between regulation and innovation?

Anu Bradford

So this is very interesting. I am based in the U.S., but I'm originally from Europe, and these two jurisdictions are often described this way: the U.S. develops technologies and the Europeans regulate those technologies. So does India want the innovation path or the regulation path? I think there are many votes that would go for innovation. But I really would like to debunk this myth; to me it's a false choice. The reason we don't see these large language models being developed in Europe is not the GDPR, the General Data Protection Regulation. It's not the AI Act. The perceived innovation gap between the United States and Europe comes, I think, from four things.

First, there is no digital single market in Europe. It's very hard for these AI companies to scale across 27 distinct markets. Second, there's no deep, robust capital markets union: 5% of global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking. I wouldn't encourage you to replicate the European ones, because it's very hard to innovate on the frontier of technological innovation; sometimes you fail, but you need to then be given a second chance.

And the fourth, I think, foundational pillar of the robust U.S. tech ecosystem is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists and engineers who think that the U.S. is the place where they can start their companies, scale their companies, fund their companies, and where U.S. universities can attract them. So before concluding that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation, we need to understand better what actually drives technological innovation, and whether regulation is really the culprit.

Jeanette Rodrigues

Thank you, Anu. Iqbal, turning to you. You're working in a part of the world, South Asia, where, at the risk of sounding like a provocateur, regulation and enforcement can be a bit of a Wild West. And therefore we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and targeted use cases?

Iqbal Dhaliwal

vulnerable public schools, going all the way from 11th place to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying: let's unpack this. What was this thing doing? The first thing, because a lot of people ask, is: does this mean that I don't need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that, and now the teacher can sit with you in the freed-up time and ask: how did you set up the structure of this essay?

Did you think about this analytically or not? And that's the first insight that comes from evaluation: it frees up the teacher's time. Almost everything that we do in the field ends up adding to the teacher's time, adding to the nurse's time, adding to the Anganwadi worker's time. Very few things free up time. So if your AI application can free up the time of frontline health workers, first of all, that's a winner. The second thing that was really important here is that this was demand-driven. There was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.

So I think this is a great example of how everything comes together if you think about it ahead of time.

Jeanette Rodrigues

Ladies and gentlemen, a topper of India's notoriously difficult civil services exam. So take Iqbal more seriously than you would just a normal panelist.

Iqbal Dhaliwal

Thank you. I thought that was history now.

Jeanette Rodrigues

It's never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?

Michael Kremer

I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where it's not going to move quickly, and it's going to be very important for governments, for multilateral development banks, and for philanthropy to move. There are a number of approaches to this. One is to encourage innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. I'll give you one example of something that I'm involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding: initially very small grants to pilot new ideas; then somewhat larger grants to rigorously test them, as Iqbal emphasized; and then, for those that are most successful, funds to help transition them to scale. Why is that important? Because if we're thinking about public services, and there are other sectors where this is needed too, there's probably going to be insufficient competition. Private developers are going to come up with innovations, but if they then have to sell them to the government, they're facing a monopsonistic buyer; they're probably not going to get rich doing that. So some support to generate more entrants in that market is, I think, very important.

It'll also mean that prices will go down and quality will go up when the government does procure. Let me give another example of the potential here, something that I doubt many people are thinking of when they think of AI: traffic safety. We've all been exposed to traffic in the past few days. Traffic is a real problem interfering with urbanization, which may drive growth; there are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Well, you can have automated traffic cameras that have the opportunity to improve traffic outcomes but also to improve people's perception of fairness in government, and India is moving on this. Let me mention another thing within traffic safety. Microsoft Research India developed a program called HAMS for driver's licenses, which uses AI to automatically test whether drivers actually pass their exams. It's been introduced, I believe, in 56 sites across India, and hundreds of thousands of people have taken tests this way. We followed up with information from Ola on driver ratings, and the number of drivers rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that's something that was developed not by Microsoft's main business but by Microsoft Research. We can create support for more ideas like that to be developed and rigorously tested, and that can benefit India and can benefit the whole world.

Jeanette Rodrigues

We are running out of time; this is probably the one place in India where time is really respected, and we have to end on time. So I had a list of wonderful questions, but if I could now move to a space where we are really giving shorter, quicker answers, and the deeply, deeply interesting ones about who's winning and who's losing. Michael, if I could start with you, actually. We've seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use?

Michael Kremer

Okay. First, model evaluation. AI companies typically do that part: how good is the model output for specific tasks? Forecasting the weather, does it do a good job? Does it match your local language well?

Second, user impact. Here I think there's a role for initial pilots, akin to a medical efficacy trial: if you put the work into trying it, does it lead to improvements in outcomes for the users?

Third, scalability and usage at scale. That's more like an effectiveness trial in medicine. It's important to think not just about the tech but also about the human systems. Are the teachers actually going to use the product? How can you get the teachers to use the product?

And the fourth area is continuous improvement. You want a system that improves the underlying models. So in procurement we might want to think about requiring continuous A/B tests, publicity about what the usage and impact is, and perhaps even requiring open access as part of the procurement package.

Jeanette Rodrigues

Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don't think will play out?

Iqbal Dhaliwal

I think hype is natural because the technology is exciting. It's a general-purpose technology. It's evolving so quickly. The marginal cost of deployment for the next user is very low. It's multimodal: today you are doing it in text, tomorrow you're doing it in video, the day after tomorrow you're doing it in audio. Everybody who has a smartphone has it. So I can understand where the hype is coming from. But what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for. My job at J-PAL, sitting at the top, is not to worry about one professor's or one researcher's evaluation, but to ask: when I connect all these dots, what am I seeing?

And I'm seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn't lead to an improvement in results. And when you try to unpack that, the technology worked even better than the human in the lab; some of these diagnostic tools have better predictive power in the lab. But in the field, not only is their efficiency lower, they lower the efficiency of the doctors, because we have not trained them enough.

And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No: you have to adapt the system to the rest of the world. A quick example from India: working with one particular state government, we tried to improve the collection of value-added taxes, called GST in India. There is a whole worry about bogus firms that are created to game the GST, or value-added tax, system. A machine learning algorithm was able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.

When it came time to scale up this program, the government refused, because, think about it: you have taken away the discretion of the human to decide whether to raid Michael's firm or Iqbal's firm. That is power. And if you haven't thought through that point, what is the point of the technology?

Jeanette Rodrigues

I won't terrify anyone in the room by asking why they didn't want to scale up this tech. But talking about weeding out bad actors, talking about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?

Ufuk Akcigit

So just going back quickly to the question of the firm. In the layered model that I highlighted earlier, I think it's important to understand what's happening upstream, so that we can understand where things will be going in the future. And the evidence there, the early signs, is a bit worrying. First of all, when we look at dynamism or market concentration in the U.S., market concentration has been increasing since 1980, and at an accelerating pace after 2000. That's the first set of evidence. The second set of evidence comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there has been a massive shift towards market incumbents.

And when I say incumbents, I mean firms that have more than 1,000 employees. Around 2000, about 50% of these inventors worked for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence, from a study we are going to release next week: we looked at how AI is impacting universities, at AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; that went up to $2 million. And there have been two breakpoints: one in 2012, the other in 2017.

2012, of course, was the image-processing revolution, and 2017 the foundational-model revolution. The more worrying part about this, which brings me back to the foundational-model side of things, is that it created a massive out-migration from academia to industry.


And after 2017 especially, when compute and infrastructure became so important and we saw the rise of AI, the target or destination is large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part, too, is that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. So if we want to keep the foundational layer contestable, I think the fundamental players there will be universities.

And keeping universities healthy is extremely important, but there is very little discussion of this, which I think we need to have before it gets too late. Because once you button the first button wrong, the rest will follow wrong as well. That's why I think we have to have this frank conversation early in the game; otherwise it might be too late.

Jeanette Rodrigues

Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if I could ask you: if the finance minister of a developing country, let's say India, comes to you and asks, "Anu, how should I think about this?", what would you tell her?

Anu Bradford

So if you think about how much political power, but also geopolitical power, is shaping our conversations around AI today, each country is being pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a universal goal. But I would remind her that even players like the United States and China will not be completely sovereign in today's AI space. Let me take just one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, that we want to have these powerful foundation models.

That means you need a lot of compute. And you can't have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there; it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the equipment needed for high-end manufacturing. But that is dependent on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not a sustainable strategy.

Even President Trump had to walk back some of the export controls on China, because the Chinese were saying: okay, then the raw materials are not coming your way. So there are ways to weaponize these interdependencies that ultimately make us all poorer. As a finance minister of India, when approaching the other middle powers and the great powers, I would keep those interdependencies in mind.

Jeanette Rodrigues

Easier said than done. Our final, final section is, of course, the rapid-fire round. We all love this in this room. In one sentence, if I could ask all of you, and Johannes, you're not getting away easily, you're going to answer this as well. So, in one sentence: we're sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we'll regret not addressing now? I guess you already know my second answer.

Ufuk Akcigit

I think the future of market concentration is something that we should be concerned about, and in 10 years we might regret not having discussed it sufficiently. On what will change in a positive direction: clearly health care and education, I think. It's a no-brainer.

Jeanette Rodrigues

Anu?

Anu Bradford

So first of all, it's so inspiring to hear all the use-case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it's a really good reminder. I'm personally very excited especially about what happens in the education space, but also in the health space. In terms of the risks, one thing that we are not paying attention to, and what I would even call a systemic risk: many worry about AI getting almost too smart, but I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.

And as an educator, I think about how I will teach my students to use generative AI to enhance but not substitute their capabilities. We would make a tremendous mistake if we forwent that hard work, that beautiful moment of thinking through hard problems, of creating and investing in our own capabilities. All of that just cannot be outsourced, because otherwise we won't even know what kind of questions we should be asking the AI going forward.

Jeanette Rodrigues

Michael.

Michael Kremer

I agree that there is huge potential in health and education. I think we'll see big improvements there, but the risk is that the public sector won't adopt these tools, and therefore the poor won't have access to them. That's because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on certain problems that other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don't get sufficient competition, we get lock-in, and we just don't wind up with good quality.

Jeanette Rodrigues

Thank you, Michael. The buzzer's down, but I'll take a risk and quickly run through the others.

Iqbal Dhaliwal

Yes. I think I am much more optimistic about the government actually adopting this. When you call 100, your call is going to get answered much more quickly. The PCR van is going to be at your house much faster. The hospitals are going to be able to link your health records. So I think government-sector productivity is going to improve by leaps and bounds. The biggest risk, I think, is definitely the labor market. If there were a dial where I could slow down the adoption and give the labor market time to catch up, I would use it. You talked about entry-level jobs. An entry-level coding job might be just an entry-level job in the United States.

Here, it's the aspirational job that created the Gurgaons and Noidas and Mohalis of this country, and those people are going to be running out of jobs very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we are piling on and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor-market regulations, for hiring labor. And that, for me, is the biggest risk, actually.

Johannes Zutt

So I think that for the first time in human history, we may actually have the tools available to target poverty-reduction, even poverty-elimination, initiatives on individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or that we won't be able to make that governance sufficiently robust to prevent abuses.

Jeanette Rodrigues

Thank you very much to all of our panelists, and to you for your time and attention once again. I had the very rare fortune of being able to peek at Michael's screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.

S21
AI for agriculture Scaling Intelegence for food and climate resiliance — And there’s still a lot of mechanization which is absent completely. It is all still very much done using traditional me…
S22
How Small AI Solutions Are Creating Big Social Change — Critical for transition of emerging economies to advanced economies, requires ecosystem development and business process…
S23
Education meets AI — The speakers also highlighted the importance of integrating AI into the education system. They argued that such integrat…
S24
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion — And productivity, which will translate essentially also in them being able to have more income and getting out of povert…
S25
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S26
How the EU’s GPAI Code Shapes Safe and Trustworthy AI Governance India AI Impact Summit 2026 — Summary:All speakers agree that while some level of AI governance is necessary, excessive or premature regulation can st…
S27
Responsible AI for Children Safe Playful and Empowering Learning — “Education systems are facing massive learning challenges for which governments are seeking equitable, scalable and evid…
S28
AI & Child Rights: Implementing UNICEF Policy Guidance | IGF 2023 WS #469 — Furthermore, it explores the potential of AI evaluation in ensuring fairness in education while cautioning about the nee…
S29
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S30
Scaling Innovation Building a Robust AI Startup Ecosystem — Overall Tone:The tone was consistently celebratory, appreciative, and inspirational throughout. It began formally with t…
S31
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S32
Media Briefing: Unlocking the North Star for AI Adoption, Scaling and Global Impact / DAVOS 2025 — The overall tone was optimistic and forward-looking. Panelists expressed excitement about AI’s capabilities and potentia…
S33
AI for Bharat’s Health_ Addressing a Billion Clinical Realities — Policy frameworks and public vs private sector dynamics
S34
Responsible AI for Shared Prosperity — Disagreement level:Very low disagreement level. All speakers aligned on core issues: the need for multilingual AI, the i…
S35
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Evidence-Based Policymaking and Research Integration Legal and regulatory | Economic The role of policy researchers is…
S36
How AI Drives Innovation and Economic Growth — I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but th…
S37
AI as critical infrastructure for continuity in public services — These key comments fundamentally shifted the discussion from a technical and regulatory focus to a human-centered perspe…
S38
WS #97 Interoperability of AI Governance: Scope and Mechanism — Yik Chan Chin: Thank you, Olga. So, I speak on behalf of the PNAI because I’m the co-leader of the subgroup on the inte…
S39
Laying the foundations for AI governance — – The four fundamental obstacles identified by the moderator: time, uncertainty, geopolitics, and power concentration A…
S40
Safe and Responsible AI at Scale Practical Pathways — The panel revealed that making data AI-ready is fundamentally a governance challenge rather than merely technical. The a…
S41
Discussion Report: Sovereign AI in Defence and National Security — Faisal responds to concerns about competing global AI policies by arguing that the sovereign AI framework is adaptable t…
S42
Global AI Policy Framework: International Cooperation and Historical Perspectives — Mirlesse outlines practical steps for implementing open sovereignty, emphasizing domestic AI deployment in key sectors w…
S43
AI Algorithms and the Future of Global Diplomacy — I just want to kind of contextualize the sovereignty thing as well.
S44
How Trust and Safety Drive Innovation and Sustainable Growth — Explanation:Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was une…
S45
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Overall, the analysis highlights the contrasting perspectives and approaches to regulation, specifically the comparison …
S46
Risks and opportunities of a new UN cybercrime treaty | IGF 2023 WS #225 — A rights-based approach is crucial in designing regulation policies. It is essential to ensure that the rights of childr…
S47
How Trust and Safety Drive Innovation and Sustainable Growth — Despite representing different perspectives (UK regulator, Singapore regulator, and industry), there was unexpected cons…
S48
WSIS at 20: successes, failures and future expectations | IGF 2023 Open Forum #100 — The analysis recognises that public investment is vital to foster innovation, particularly in areas where the private se…
S49
Open Forum #33 Building an International AI Cooperation Ecosystem — Development | Economic | Capacity development Innovation Ecosystems and Practical Implementation The speaker argues th…
S50
How AI Drives Innovation and Economic Growth — Private firms develop profitable applications, but public goods applications need government and multilateral support
S51
Hard power of AI — In the context of AI governance, the World Economic Forum emphasizes the significance of looking beyond regulation alone…
S52
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — While recognizing the positive impacts of AI, Shamira Ahmed cautioned against the risks that might contribute to inequal…
S53
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S54
AI for Social Empowerment_ Driving Change and Inclusion — She argues that immediate policy action is required across competition, tax, labour and social protection to mitigate AI…
S55
AI for Social Empowerment_ Driving Change and Inclusion — Effective governance of AI’s labor market effects requires robust institutional infrastructure including regulatory bodi…
S56
Labour market stability persists despite the rise of AI — Public fears of AI rapidly displacing workers have not yet materialised in the US labour market. A new study finds that th…
S57
How to make AI governance fit for purpose? — Focus needed on job disruption mitigation through training, skilling, and upskilling programs Legal and regulatory | Ec…
S58
S59
How AI Drives Innovation and Economic Growth — Zutt advocates for a focus on ‘small AI’ rather than large-scale AI solutions, emphasizing practical applications that c…
S60
How AI Drives Innovation and Economic Growth — Appropriate technology solutions for developing countries Zutt advocates for a focus on ‘small AI’ rather than large-sc…
S61
AI Meets Agriculture Building Food Security and Climate Resilien — Thank you for that additional question. I mean, obviously, India is in a great position to lead the development of AI, p…
S63
How AI Drives Innovation and Economic Growth — Speakers:Johannes Zutt, Michael Kremer Speakers:Johannes Zutt, Michael Kremer, Iqbal Singh Dhaliwal
S64
How AI Is Transforming Indias Workforce for Global Competitivene — “Because kind of when we have a small set of institutions or companies or talent pools pull ahead disproportionately bec…
S65
Powering AI _ Global Leaders Session _ AI Impact Summit India Part 2 — -Policy and Regulatory Framework Challenges: Speakers identified the need for better coordination between central and st…
S66
How AI Drives Innovation and Economic Growth — “First, model evaluation.”[124]. “Second, user impact.”[134]. “Second… scalability and usage at scale that’s more like…
S67
How nonprofits are using AI-based innovations to scale their impact — Four-level evaluation framework includes user experience, user behavior, user evaluation, and impact evaluation
S68
Governments, Rewired / Davos 2025 — The overall tone was optimistic and forward-looking, with speakers highlighting the transformative potential of technolo…
S69
AI for food systems — The tone throughout the discussion was consistently formal, optimistic, and collaborative. It maintained a ceremonial qu…
S70
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — The tone is consistently optimistic, collaborative, and forward-looking throughout the discussion. Speakers emphasize “l…
S71
Comprehensive Discussion Report: AI’s Transformative Potential for Global Economic Growth — The conversation maintains a consistently optimistic and enthusiastic tone throughout. Both speakers demonstrate genuine…
S72
Building the AI-Ready Future From Infrastructure to Skills — The tone was consistently optimistic and collaborative throughout, with speakers expressing excitement about AI’s potent…
S73
Agenda item 6: other matters/OEWG 2025 — The overall tone was constructive and diplomatic, with most delegations expressing willingness to compromise and find co…
S74
AI and Digital Developments Forecast for 2026 — The tone begins as analytical and educational but becomes increasingly cautionary and urgent throughout the conversation…
S75
Engineering Accountable AI Agents in a Global Arms Race: A Panel Discussion Report — High level of consensus on implementation approach and timeline, with moderate consensus on regulatory strategies. The a…
S76
Transforming Health Systems with AI From Lab to Last Mile — The discussion maintained a cautiously optimistic and collaborative tone throughout. It began with enthusiasm about AI’s…
S77
WS #144 Bridging the Digital Divide Language Inclusion As a Pillar — This shifted the conversation’s tone from problem-solving to crisis response, and subsequent speakers began incorporatin…
S78
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — The discussion maintained a collaborative and constructive tone throughout, characterized by academic rigor combined wit…
S79
Panel 4 – Resilient Subsea Infrastructure for Underserved Regions  — The discussion maintained a professional, collaborative tone throughout, with panelists building on each other’s insight…
S80
WS #462 Bridging the Compute Divide a Global Alliance for AI — The discussion maintained a constructive and collaborative tone throughout, with participants building on each other’s i…
S81
Open Forum #47 Demystifying WSis+20 — Focus on concrete, evidence-based examples of what works rather than abstract declarations
S82
Opening of the session — Focusing on practical, action-oriented measures that can benefit both developed and developing countries
S83
Any other business /Adoption of the report/ Closure of the session — Significant steps taken towards consensus The country had hoped for a different ending to the session but acknowledges …
S84
Keynote-Nikesh Arora — Overall Tone:The tone begins optimistically, celebrating AI’s rapid progress and potential, then shifts to a more cautio…
S85
Closing Ceremony — The discussion maintains a consistently positive and collaborative tone throughout, characterized by gratitude, celebrat…
S86
Keynote-Dario Amodei — Overall Tone:The tone is consistently optimistic yet measured throughout. Amodei maintains an enthusiastic and respectfu…
S87
AI-driven Cyber Defense: Empowering Developing Nations | IGF 2023 — There is a lot of fear-mongering going on as well
S88
Global Digital Compact: AI solutions for a digital economy inclusive and beneficial for all — Ciyong Zou: Thank you. Thank you very much, moderator. Distinguished representatives, ladies and gentlemen, good afterno…
S89
Comprehensive Summary: AI Governance and Societal Transformation – A Keynote Discussion — In the last 20 years, we all got access to supercomputers in our pockets, billions of devices that became fundamental to…
S90
AI sandboxes pave path for responsible innovation in developing countries — At theInternet Governance Forum 2025in Lillestrøm, Norway, experts from around the worldgatheredto examine how AI sandbo…
S91
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — He positioned sandboxes as “one of the tools that brings the capacity of dialogue, particularly when the discussions are…
S92
WSIS High-Level Dialogue: Multistakeholder Partnerships Driving Digital Transformation — Some low-income countries have limited internet access
S93
Internet Governance Forum 2024 — To this end,speakers highlighted significant infrastructure challengesfacing many African countries, including unreliabl…
S94
Conversational AI in low income & resource settings | IGF 2023 — Additionally, the potential of AI and chatbots in low-resource settings is acknowledged. The analysis suggests that thes…
S95
Building the Workforce_ AI for Viksit Bharat 2047 — We know we have 5 .8 million professionals. For example, the Tata AI Saki Immersion Programme is empowering rural women …
S96
AI for agriculture Scaling Intelegence for food and climate resiliance — to be here today. So we’re on the cusp of a major revolution in how support to farmers and agriculture is happening. I a…
S97
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Abhishek Singh:Thank you, thank you Inma. I must straightaway mention that one key value that we get as being part of th…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Johannes Zutt
3 arguments · 141 words per minute · 1450 words · 612 seconds
Argument 1
AI can be a game‑changer for emerging markets, offering productivity gains in agriculture, health, and finance, yet faces basic constraints such as unreliable electricity, weak internet, and low literacy – (Johannes Zutt)
EXPLANATION
Johannes argues that AI holds transformative potential for emerging economies by improving productivity in key sectors like agriculture, health, and finance. However, he cautions that without reliable electricity, robust internet infrastructure, and basic literacy, these benefits cannot be fully realized.
EVIDENCE
He notes that AI can be a game-changer for all countries, especially emerging markets, providing opportunities to leapfrog development challenges and enhance growth and productivity [7-10]. He then lists fundamental constraints: unreliable electricity [26], weak internet backbone [28], low literacy and numeracy skills [29], and reliance on very basic devices rather than smartphones [30-31].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Constraints such as unreliable electricity, weak internet backbone, and low literacy are documented as major barriers to AI adoption in low-income settings [S1][S15][S16].
MAJOR DISCUSSION POINT
Infrastructure constraints limiting AI impact
AGREED WITH
Ufuk Akcigit, Anu Bradford, Jeanette Rodrigues
Argument 2
The World Bank promotes “small AI”: affordable, locally relevant solutions that operate with limited connectivity, requiring joint effort from governments and private innovators – (Johannes Zutt)
EXPLANATION
Johannes describes the World Bank’s focus on “small AI”: practical, low‑cost applications designed for environments with limited data, connectivity, and skills. Successful deployment requires collaboration between governments, which provide the necessary infrastructure, and private innovators, who develop the applications.
EVIDENCE
He defines small AI as practical, affordable, locally relevant AI that works where connectivity, data, skills, and infrastructure are limited [34-36]. He cites examples in India, where the Bank works with multiple states and private sector investors to develop such tools, emphasizing the need for both public-facing standards and private-sector innovation [39-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The concept of “small AI” – practical, low-cost applications designed for limited connectivity, data, and skill environments – is described in several studies of AI for development [S15][S17][S22].
MAJOR DISCUSSION POINT
Public‑private collaboration for small AI
DISAGREED WITH
Ufuk Akcigit
Argument 3
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt)
EXPLANATION
Johannes stresses that alongside the opportunities AI presents, there are significant governance challenges that must be addressed to avoid harmful outcomes. Effective regulatory safeguards are needed, particularly for applications with large societal impact.
EVIDENCE
He acknowledges that AI creates challenges such as job losses and infrastructure gaps, and adds that governance and regulatory safeguards are crucial considerations [21-33].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Recent AI governance reports stress the need for strong safeguards, human oversight, and rights-protective frameworks for high-impact AI systems [S18][S19][S20].
MAJOR DISCUSSION POINT
Need for strong AI governance
AGREED WITH
Anu Bradford, Jeanette Rodrigues, Michael Kremer
DISAGREED WITH
Iqbal Dhaliwal
Michael Kremer
3 arguments · 160 words per minute · 1592 words · 593 seconds
Argument 1
Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer)
EXPLANATION
Michael highlights AI‑driven weather forecasting as a public good that can help farmers make better planting decisions, leading to higher yields. He argues that governments, possibly supported by multilateral development banks, should invest in producing and disseminating such forecasts.
EVIDENCE
He cites India’s AI weather forecasts reaching 38 million farmers, noting that the forecasts correctly predicted an early monsoon in Kerala and helped farmers adjust planting and seed choices, with survey evidence showing increased transplanting and hybrid seed use [133-155].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
AI-driven weather forecasting for millions of farmers is highlighted as a public-good application that improves planting decisions and yields [S15][S21].
MAJOR DISCUSSION POINT
Public investment in AI for agriculture
AGREED WITH
Johannes Zutt, Iqbal Dhaliwal
Argument 2
Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
EXPLANATION
Michael proposes that institutions like the World Bank set up tiered, evidence‑based innovation funds to support AI pilots, rigorous testing, and scaling. Such funds would address market failures where private firms lack incentives to develop public‑good AI solutions.
EVIDENCE
He describes Development Innovation Ventures, which provides small grants for pilots, larger grants for rigorous testing, and further funding for scaling successful projects, emphasizing the need for evidence-based approaches [266-271].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Development Innovation Ventures model, with tiered, evidence-based grants for pilots, testing, and scaling, is presented as a template for such funds [S12][S15].
MAJOR DISCUSSION POINT
Evidence‑based AI funding mechanisms
DISAGREED WITH
Johannes Zutt
Argument 3
AI projects should be evaluated like medical trials: assess model accuracy, user impact, scalability, and establish mechanisms for continuous improvement and transparent reporting – (Michael Kremer)
EXPLANATION
Michael suggests a four‑stage evaluation framework for AI interventions, analogous to clinical trials: model performance, user impact, scalability/effectiveness, and continuous improvement with transparent reporting. This approach ensures that AI tools deliver real benefits at scale.
EVIDENCE
He outlines the steps: model evaluation for task performance, assessing user impact through pilot-like trials, testing scalability akin to effectiveness trials, and requiring continuous improvement and open reporting in procurement contracts [284-292].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
A four-stage, trial-like evaluation framework for AI interventions (model performance, user impact, scalability, continuous improvement) is outlined in the evidence-based evaluation guidelines [S12].
MAJOR DISCUSSION POINT
Rigorous evaluation of AI interventions
Ufuk Akcigit
2 arguments · 163 words per minute · 1041 words · 382 seconds
Argument 1
Realizing AI’s benefits requires fixing fundamental business‑environment issues (e.g., firm size determinants, entrepreneurship climate) in developing economies – (Ufuk Akcigit)
EXPLANATION
Ufuk argues that AI alone cannot spur entrepreneurship in emerging economies unless underlying business‑environment problems—such as the influence of family size on firm size—are addressed. A supportive environment is essential for AI to translate into genuine dynamism.
EVIDENCE
He questions why, before AI, firm size in emerging economies was determined by family size or number of male children, and stresses that without fixing the business environment, AI will not magically create entrepreneurship [111-115].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Analyses of AI-driven development stress that ecosystem development and business-process reforms are prerequisites for AI impact in emerging economies [S22].
MAJOR DISCUSSION POINT
Business environment as prerequisite for AI benefits
AGREED WITH
Johannes Zutt, Anu Bradford, Jeanette Rodrigues
Argument 2
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit)
EXPLANATION
Ufuk distinguishes between the foundation layer of AI, which requires heavy compute, data, and talent and thus favors concentration, and the application layer, where entry barriers are low and creative destruction can thrive. He warns that concentration at the foundational level may spill over to downstream applications.
EVIDENCE
He notes that the foundation layer is compute-heavy, data-heavy, and talent-heavy, making it prone to concentration, whereas the application layer has low entry barriers and encourages creative destruction [94-98].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Research distinguishes a compute-heavy, data-intensive foundation layer that tends toward concentration from a low-barrier application layer that enables creative destruction [S17][S1].
MAJOR DISCUSSION POINT
Layered AI structure and concentration risk
AGREED WITH
Iqbal Dhaliwal, Michael Kremer
DISAGREED WITH
Johannes Zutt
Iqbal Dhaliwal
3 arguments · 183 words per minute · 1151 words · 375 seconds
Argument 1
Targeted “small AI” tools can free teachers’ time and enhance education outcomes when they are demand‑driven and integrated into existing workflows – (Iqbal Dhaliwal)
EXPLANATION
Iqbal explains that small AI applications can automate routine tasks such as spelling correction, allowing teachers to focus on higher‑order instruction. When these tools are driven by demand from students, teachers, and districts, they can improve educational performance.
EVIDENCE
He describes an AI system that takes over routine correction tasks, freeing teachers to engage with students on essay structure, and notes that the demand came from students, teachers, and school districts seeking progress, leading to measurable improvements [236-247].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Studies of AI in education report demand-driven tools such as automated spelling correction that free teachers for higher-order instruction [S23][S15].
MAJOR DISCUSSION POINT
Demand‑driven AI to augment teaching
AGREED WITH
Johannes Zutt
Argument 2
AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain from academia to industry, raising concerns about unequal benefit distribution – (Iqbal Dhaliwal)
EXPLANATION
Iqbal points out that AI is concentrating innovation within large incumbent firms and drawing talent away from universities, which may reduce competition and widen inequality. He presents evidence of increasing market concentration and higher earnings for AI scientists in industry.
EVIDENCE
He cites rising market concentration in the U.S. since 1980, a shift of innovative resources toward firms with over 1,000 employees, and data showing AI scientists’ earnings rising from $300K to $390K in academia and from $550K to $2M in industry, along with a talent migration from academia to industry [324-340].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Empirical work documents rising market concentration in AI and a migration of talent from academia to industry, especially toward large incumbent firms [S1][S17].
MAJOR DISCUSSION POINT
Concentration and talent migration
AGREED WITH
Ufuk Akcigit, Michael Kremer
Argument 3
Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
EXPLANATION
Iqbal stresses that the effectiveness of AI depends on user trust and the surrounding institutional context. Without proper training and system redesign, even superior AI diagnostics may not improve outcomes and can even reduce efficiency.
EVIDENCE
He references studies where AI diagnostic tools performed better than humans in labs but did not improve field outcomes due to insufficient training of health workers [309-315], and an example where an AI system for detecting bogus firms in India was not scaled because it removed human discretion, highlighting the need for system adaptation [316-322].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Implementation studies highlight that user trust, adequate training, and alignment of institutional processes are essential for AI tools to achieve intended outcomes [S24][S18].
MAJOR DISCUSSION POINT
Importance of trust and system alignment
DISAGREED WITH
Johannes Zutt
Anu Bradford
2 arguments · 199 words per minute · 1374 words · 412 seconds
Argument 1
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford)
EXPLANATION
Anu argues that AI regulation must protect fundamental rights while being flexible enough for local contexts. She suggests learning from the EU’s rights‑based approach but adapting it to national priorities rather than adopting it wholesale.
EVIDENCE
She describes the EU’s rights-driven regulation that protects individual rights, democratic structures, and seeks broader benefit distribution, and recommends that India take lessons from this approach while customizing it to its own needs [172-176].
MAJOR DISCUSSION POINT
Rights‑based yet locally adapted AI regulation
AGREED WITH
Johannes Zutt, Jeanette Rodrigues, Michael Kremer
DISAGREED WITH
Jeanette Rodrigues
Argument 2
The Global South must develop its own AI regulatory sovereignty, drawing lessons from the EU’s rights‑based approach but customizing rules to national contexts – (Anu Bradford)
EXPLANATION
Anu emphasizes the need for the Global South to assert AI regulatory sovereignty, creating rules that suit their economies and societies. While acknowledging the difficulty of regulation, she advocates for tailored frameworks rather than reliance on external models.
EVIDENCE
She states that the Global South has incentives for AI sovereignty, including regulatory sovereignty, and that they should design rules fitting their economies and public interests, while learning from jurisdictions like the EU [167-172].
MAJOR DISCUSSION POINT
AI regulatory sovereignty for the Global South
Jeanette Rodrigues
2 arguments · 174 words per minute · 1039 words · 356 seconds
Argument 1
Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues)
EXPLANATION
Jeanette calls for policymakers to navigate between the optimism surrounding AI’s potential and the fears of job loss and inequality. She stresses that policies should aim to ensure AI reduces, not widens, development disparities.
EVIDENCE
She notes that AI innovation does not diffuse equally, and the panel’s purpose is to explore what determines whether AI narrows or widens the development gap, emphasizing the need to balance hope and concern [61-71].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Literature notes the natural hype surrounding AI and stresses the need for balanced policy that mitigates risks while leveraging benefits, to avoid widening inequality [S15][S24].
MAJOR DISCUSSION POINT
Balancing optimism and risk in AI policy
AGREED WITH
Johannes Zutt, Ufuk Akcigit, Anu Bradford
Argument 2
There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
EXPLANATION
Jeanette raises concerns that large language models are concentrated in the United States and China, which may allow these powers to set global AI rules. She asks who will set AI regulations for the Global South and whether sovereign policy is possible.
EVIDENCE
She points out that large language models are concentrated in the US and China, mentions the EU as another player, and asks who sets AI rules for the Global South and if sovereignty is possible [162-166].
MAJOR DISCUSSION POINT
AI governance dominance and sovereignty concerns
AGREED WITH
Johannes Zutt, Anu Bradford, Michael Kremer
DISAGREED WITH
Anu Bradford
Agreements
Agreement Points
AI can be a transformative game‑changer for emerging markets but requires basic infrastructure such as reliable electricity, strong internet, and basic literacy to realise its potential.
Speakers: Johannes Zutt, Ufuk Akcigit, Anu Bradford, Jeanette Rodrigues
AI can be a game‑changer for emerging markets, offering productivity gains in agriculture, health, and finance, yet faces basic constraints such as unreliable electricity, weak internet, and low literacy – (Johannes Zutt) Realizing AI’s benefits requires fixing fundamental business‑environment issues (e.g., firm size determinants, entrepreneurship climate) in developing economies – (Ufuk Akcigit) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues)
All four speakers stress that while AI holds great promise for emerging economies, its impact will be limited unless foundational infrastructure and a supportive business environment are put in place, and policies are crafted to balance optimism with realistic constraints. [7-10][26-31][84-85][111-115][167-176][61-71]
POLICY CONTEXT (KNOWLEDGE BASE)
This view mirrors policy emphasis on foundational digital infrastructure for AI development in low-income settings, as highlighted in discussions on computing infrastructure needs and multilateral support for emerging economies [S34][S36].
Promotion of “small AI” – affordable, locally relevant AI solutions that operate with limited connectivity and data – is essential for developing contexts.
Speakers: Johannes Zutt, Iqbal Dhaliwal
For us in the World Bank group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited – (Johannes Zutt) Targeted “small AI” tools can free teachers’ time and enhance education outcomes when they are demand‑driven and integrated into existing workflows – (Iqbal Dhaliwal)
Both speakers advocate for low-cost, context-specific AI applications that can function despite weak connectivity or limited data, highlighting education and agriculture as key sectors. [34-36][236-247]
POLICY CONTEXT (KNOWLEDGE BASE)
The call for affordable, locally-tailored AI aligns with calls for multilingual, low-resource AI solutions and a human-centered implementation focus in development forums [S34][S37].
Strong governance and regulatory frameworks are crucial to ensure responsible AI deployment and to mitigate risks such as job losses, misuse, and concentration of power.
Speakers: Johannes Zutt, Anu Bradford, Jeanette Rodrigues, Michael Kremer
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues) I think there is huge potential in health and education, but the risk is that the public sector won’t adopt these, and that procurement systems may lock‑in and limit competition – (Michael Kremer)
All four emphasize the need for robust, rights-based, and locally adapted governance structures to manage AI’s societal impacts, prevent concentration, and ensure public-sector uptake. [21-33][167-176][162-166][397-398]
POLICY CONTEXT (KNOWLEDGE BASE)
Multiple policy papers stress robust AI governance as a primary safeguard, citing governance challenges, labor-impact regulation, and broader AI governance obstacles [S40][S54][S55][S39].
Public sector investment and evidence‑based innovation funds are needed to develop AI public goods (e.g., weather forecasts, health and education tools) that the private sector will not provide on its own.
Speakers: Michael Kremer, Johannes Zutt, Iqbal Dhaliwal
Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer) We’re doing a little bit on bigger AI… Small AI will also be very, very important for uptake – (Johannes Zutt) Free up time. So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner – (Iqbal Dhaliwal)
The speakers agree that governments and multilateral institutions must fund and pilot AI solutions that serve public-good functions, such as weather forecasting for farmers or tools that free health and education workers, because market incentives are insufficient. [133-155][34-36][236-247]
POLICY CONTEXT (KNOWLEDGE BASE)
Evidence-based AI policy roadmaps and public-investment analyses underscore the need for government-funded AI public-goods programmes where market incentives fall short [S35][S48][S50].
AI development is leading to increasing market concentration and talent migration toward large incumbents, raising concerns about unequal benefit distribution and the need to keep foundational AI layers contestable.
Speakers: Ufuk Akcigit, Iqbal Dhaliwal, Michael Kremer
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit) AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain from academia to industry, raising concerns about unequal benefit distribution – (Iqbal Dhaliwal) There is a risk that the public sector won’t adopt these, and that procurement systems may lock‑in and limit competition – (Michael Kremer)
All three highlight that AI’s compute-intensive foundation favors a few large players, causing concentration of innovation and talent, and warn that without careful policy (e.g., competition-friendly procurement) the benefits may be unevenly shared. [94-98][324-340][397-398]
POLICY CONTEXT (KNOWLEDGE BASE)
Recent assessments flag rising market concentration, power concentration, and wealth inequality in AI as key risks requiring contestable foundational layers [S58][S39][S52].
Similar Viewpoints
Both stress the importance of public‑sector‑led, evidence‑based pilots and funding mechanisms to develop and scale small, locally relevant AI solutions, recognizing that private firms alone will not fill the public‑good gap. [34-36][266-271]
Speakers: Johannes Zutt, Michael Kremer
For us in the World Bank group, we’ve been very, very focused recently on basically small AI… – (Johannes Zutt) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Both highlight that AI is creating concentration at the foundational level, concentrating innovation and talent in large incumbents, which threatens broader inclusive growth. [94-98][324-340]
Speakers: Ufuk Akcigit, Iqbal Dhaliwal
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration… – (Ufuk Akcigit) AI is accelerating market concentration, shifting innovative resources to large incumbents and prompting a talent drain… – (Iqbal Dhaliwal)
Both argue that the Global South needs to assert AI regulatory sovereignty and craft rights‑based, locally adapted frameworks rather than simply follow US/China or EU models. [167-172][162-166]
Speakers: Anu Bradford, Jeanette Rodrigues
The Global South must develop its own AI regulatory sovereignty, drawing lessons from the EU’s rights‑based approach but customizing rules to national contexts – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
Unexpected Consensus
Recognition by both a World Bank official and a field practitioner that system‑level trust, user training, and institutional adaptation are as critical as the technology itself for AI success.
Speakers: Johannes Zutt, Iqbal Dhaliwal
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
While Johannes focuses on governance from a high-level perspective, Iqbal emphasizes on-the-ground trust and training. Their convergence on the necessity of aligning institutions and users with AI tools is unexpected given their different roles. [21-33][309-322]
POLICY CONTEXT (KNOWLEDGE BASE)
Human-centered AI implementation studies highlight system-level trust, training, and institutional adaptation as pivotal alongside technical deployment [S37][S44][S40].
Overall Assessment

The panel shows strong convergence on four main themes: (1) AI’s transformative potential is contingent on basic infrastructure and a supportive business environment; (2) “small AI” solutions that are affordable and locally relevant are vital; (3) robust, rights‑based governance and regulatory sovereignty are needed to manage risks and prevent concentration; (4) public‑sector investment and evidence‑based funding mechanisms are essential to deliver AI public goods and avoid lock‑in. Concerns about market concentration and talent migration are also widely shared.

High consensus across speakers, indicating a shared understanding that policy, infrastructure, and governance must accompany technological advances to ensure AI narrows rather than widens development gaps. This consensus suggests that future initiatives should prioritize coordinated public‑private funding, rights‑focused regulation, and capacity‑building to harness AI for inclusive development.

Differences
Different Viewpoints
Foundational AI layer concentration vs focus on small AI applications
Speakers: Ufuk Akcigit, Johannes Zutt
The foundational AI layer has high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction – (Ufuk Akcigit) The World Bank promotes “small AI”: affordable, locally relevant solutions that operate with limited connectivity, requiring joint effort from governments and private innovators – (Johannes Zutt)
Ufuk warns that the compute-, data- and talent-intensive foundation layer creates concentration that can spill over to downstream applications, suggesting the need to address these structural barriers [94-98]. Johannes, by contrast, concentrates on deploying “small AI” that works despite limited connectivity and infrastructure, emphasizing public-private collaboration without addressing the foundational layer’s concentration risk [34-36][39-52].
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between concentrating power in large AI models and promoting low-resource, small-AI solutions is reflected in market-concentration analyses and calls for affordable AI for developing contexts [S58][S34].
Approach to AI regulatory sovereignty and feasibility
Speakers: Anu Bradford, Jeanette Rodrigues
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) There is anxiety that AI rule‑making is dominated by the US and China, prompting questions about how developing countries can assert sovereign policy control – (Jeanette Rodrigues)
Anu advocates a rights-based, locally adapted regulatory framework, learning from the EU but customizing to national needs [172-176]. Jeanette highlights the dominance of the US and China in large-model development and questions whether the Global South can achieve true AI sovereignty [162-166]. The two differ on the feasibility and emphasis of sovereign regulation.
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on sovereign AI frameworks, open-sovereignty strategies, and AI’s role in diplomacy illustrate divergent views on national regulatory autonomy and feasibility [S41][S42][S43].
Primary mechanism for scaling AI in emerging economies – public‑private collaboration vs evidence‑based funding
Speakers: Johannes Zutt, Michael Kremer
The World Bank promotes “small AI” meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited – (Johannes Zutt) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Johannes focuses on delivering small-AI solutions through joint government standards and private-sector innovators, stressing practicality in low-resource settings [34-36][39-52]. Michael proposes tiered, evidence-based innovation funds (small grants, larger testing grants, scaling grants) to address market failures and ensure rigorous evaluation before scaling [266-271]. They share the goal of AI diffusion but differ on the primary driver: collaboration versus structured funding mechanisms.
POLICY CONTEXT (KNOWLEDGE BASE)
Policy discussions stress both the necessity of public-private partnerships for scaling AI and the importance of evidence-based funding mechanisms to target impact effectively [S49][S48][S35].
How to mitigate AI‑induced labor market disruptions
Speakers: Johannes Zutt, Ufuk Akcigit
One of them is there will be some job losses, particularly sort of entry‑level jobs that are very much knowledge or document‑based, performing relatively rote work that can be taken over by automation – (Johannes Zutt) The biggest risk, I think, is definitely the labor market. If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry – (Ufuk Akcigit)
Johannes acknowledges AI will displace entry-level, knowledge-based jobs and notes this trend within the World Bank’s own hiring [22-24]. Ufuk stresses the broader macro-economic risk, calling for a slower AI rollout to give labor markets time to adjust, especially for workers aspiring to entry-level coding jobs [405-412]. The disagreement lies in the emphasis: Johannes notes the problem, while Ufuk proposes a policy lever (slowing adoption) as a solution.
POLICY CONTEXT (KNOWLEDGE BASE)
Recommendations include targeted training, social-protection measures, and dedicated governance bodies to monitor AI’s labor impacts, as outlined in recent policy briefs on AI-driven disruption [S54][S55][S57].
Primary barrier to successful AI deployment – governance frameworks vs trust and system adaptation
Speakers: Johannes Zutt, Iqbal Dhaliwal
Robust governance is essential to prevent misuse of AI and to ensure responsible deployment, especially in high‑impact domains – (Johannes Zutt) Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal)
Johannes stresses the need for governance and regulatory safeguards to avoid harmful outcomes [21-33]. Iqbal argues that beyond governance, user trust, adequate training, and alignment of institutional processes are essential for AI effectiveness, citing failures of AI diagnostics and GST fraud detection when systems were not adapted [309-315]. The two focus on different levers: formal governance versus practical trust and system integration.
POLICY CONTEXT (KNOWLEDGE BASE)
Analyses identify governance coordination challenges as a barrier, while other strands argue that building trust and adapting institutional processes are equally decisive [S40][S37].
Unexpected Differences
Trust and system adaptation versus rights‑based regulatory approach
Speakers: Iqbal Dhaliwal, Anu Bradford
Trust in technology and system adaptation are critical; even highly accurate AI tools can fail to deliver benefits if users are not trained or institutional processes are not adjusted – (Iqbal Dhaliwal) Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford)
While both discuss how to ensure AI works for societies, Iqbal stresses on‑the‑ground trust, training, and institutional redesign as the main hurdle, whereas Anu emphasizes a legal‑regulatory, rights‑based framework. The divergence is unexpected because both are usually aligned on the need for supportive environments, yet they prioritize very different levers (social trust vs legal rights).
POLICY CONTEXT (KNOWLEDGE BASE)
The literature contrasts rights-based regulatory models with trust-and-safety-focused interventions, highlighting the policy trade-off between protecting fundamental rights and fostering user confidence [S45][S46][S44].
Overall Assessment

The panel shows broad consensus that AI can help narrow development gaps, but disagreements arise around how to manage structural concentration, the balance between rights‑based regulation and sovereign policy, the primary mechanisms for scaling AI (public‑private collaboration vs evidence‑based funding), and the most effective way to protect labor markets and build trust. These divergences reflect differing priorities among economists, development practitioners, and policy experts.

Moderate to high – while there is shared optimism, the participants differ substantially on the pathways and institutional levers needed, implying that coordinated policy design will require reconciling these perspectives to avoid fragmented or counter‑productive AI strategies.

Partial Agreements
Both agree that AI should be used to narrow development gaps and benefit the poor. Jeanette emphasizes a balanced policy approach to manage optimism and fear [61-71], while Michael points to concrete public‑sector investment (AI weather forecasts) as a way to achieve that goal [133-155]. They share the same objective but propose different pathways—policy balance versus targeted public investment.
Speakers: Jeanette Rodrigues, Michael Kremer
Policymakers must balance hope and fear, ensuring AI narrows rather than widens development gaps – (Jeanette Rodrigues) Government‑backed AI weather forecasts can dramatically improve farmers’ planting decisions and yields, illustrating the need for public investment in AI public goods – (Michael Kremer)
Both seek mechanisms that ensure AI benefits are widely shared and risks are managed. Anu focuses on a rights‑based, adaptable regulatory framework, while Michael proposes evidence‑based funding and procurement safeguards. They agree on the need for structured, protective measures but differ on whether regulation or funding is the primary tool.
Speakers: Anu Bradford, Michael Kremer
Effective AI regulation should be rights‑driven yet adaptable to local priorities, allowing India and other Global South nations to tailor frameworks without merely copying external models – (Anu Bradford) Multilateral development banks should create evidence‑based innovation funds that pilot, rigorously test, and scale AI applications to overcome market failures and accelerate adoption – (Michael Kremer)
Takeaways
Key takeaways
AI can be a powerful development catalyst for emerging markets, offering productivity gains in agriculture, health, finance and education, but its impact is limited by basic infrastructure gaps such as unreliable electricity, weak internet connectivity, and low literacy. The World Bank’s “small AI” approach—affordable, locally‑relevant tools that work offline or with limited data—highlights the need for public‑private collaboration to create AI applications that match on‑the‑ground needs. Foundational AI models have high entry barriers (compute, data, talent) leading to market concentration, while the application layer remains low‑barrier and more conducive to creative destruction; this concentration raises concerns about unequal benefit distribution and talent drain from academia to industry. Effective AI governance requires a rights‑driven yet locally adaptable regulatory framework; the Global South must develop its own AI sovereignty rather than simply copying US, China or EU models. Evaluation of AI interventions should follow a rigorous, multi‑stage process (model accuracy, user impact, scalability, continuous improvement) similar to medical trials, and must address trust, user training, and system‑level adaptation.
Resolutions and action items
World Bank to continue promoting and scaling “small AI” solutions in South Asia, including AI sandboxes for experimentation with governments. Creation/expansion of evidence‑based innovation funds (e.g., Development Innovation Ventures) to pilot, rigorously test, and scale AI applications for public‑good outcomes. Governments (e.g., India) to invest in AI‑generated public goods such as weather forecasts and to integrate them into farmer decision‑making processes. Encourage private‑sector developers to build demand‑driven AI tools that free up frontline worker time (e.g., teachers, health workers) and align with local language and offline capabilities. Policy makers to adopt a rights‑based regulatory approach that can be customized to national priorities, drawing lessons from the EU AI Act while avoiding a one‑size‑fits‑all model.
Unresolved issues
How to prevent AI‑driven market concentration from entrenching incumbent firms and limiting opportunities for new entrants in developing economies. Specific mechanisms for aligning AI talent pipelines with local innovation ecosystems to avoid excessive migration from academia to industry. The optimal balance between regulation and innovation for the Global South, especially given geopolitical pressures from the US and China. Ways to ensure public‑sector adoption of AI tools at scale without creating monopsonistic procurement bottlenecks. How to design AI governance structures that protect against misuse while remaining flexible enough for rapid technological change.
Suggested compromises
Combine a public‑facing effort on standards, interoperability and offline capability with a private‑sector‑driven push for rapid application development. Adopt a rights‑driven regulatory framework that is adapted locally, allowing countries like India to tailor rules without fully replicating EU or US models. Use tiered innovation funding (small grants for pilots, larger grants for rigorous testing, and scale‑up financing) to balance speed of innovation with evidence‑based risk mitigation. Encourage AI sandboxes that permit controlled experimentation while maintaining oversight, thereby reconciling the need for rapid development with governance concerns.
Thought Provoking Comments
AI can be a game changer… but at the same time, AI also creates a number of challenges. … many developing countries lack reliable electricity, internet backbone, basic literacy and numeracy, and may need to use very basic devices. We need to focus on "small AI" – practical, affordable, locally relevant AI that works where connectivity, data, skills, infrastructure are limited.
He simultaneously highlighted AI’s transformative potential and the concrete infrastructural and governance constraints in emerging economies, introducing the concept of “small AI” as a pragmatic solution.
Set the agenda for the panel by framing the discussion around both opportunities and systemic barriers, prompting other speakers to address feasibility, policy, and implementation challenges specific to developing contexts.
Speaker: Johannes Zutt
When we look at the application layer, entry barriers are low and small businesses can do what only large businesses could do before. But the foundational layer has very high entry barriers – compute‑heavy, data‑heavy, talent‑heavy – leading to concentration.
He provided a clear two‑tier framework (foundational vs. application) that explains why AI could both democratize entrepreneurship and simultaneously reinforce market concentration.
Shifted the conversation toward structural market dynamics, influencing later remarks on concentration, incumbency, and the need to keep the foundational layer contestable (referenced by Iqbal and later by Ufuk himself).
Speaker: Ufuk Akcigit
AI weather forecasts are a public good – non‑rival and non‑excludable. India’s AI‑generated forecasts reached 38 million farmers last year, leading to better planting decisions and higher adoption of hybrid seeds.
He gave a concrete, data‑driven example of AI delivering public‑good benefits at scale, illustrating how multilateral institutions can catalyze such interventions.
Introduced a tangible success story that anchored the abstract discussion, prompting further dialogue on scaling, government involvement, and the risk of slow adoption by public sectors.
Speaker: Michael Kremer
The myth that regulation kills innovation is false. Europe’s slower AI rollout is due to lack of a digital single market, a shallow capital‑markets union, risk‑averse culture, and talent pipelines, not because of the AI Act or GDPR.
She challenged a common narrative that stringent regulation hampers AI development, providing a nuanced analysis of structural factors behind regional innovation gaps.
Redirected the debate on regulatory design, encouraging participants to consider how policy can be crafted without sacrificing innovation, and influencing the later discussion on AI sovereignty and regulatory balance.
Speaker: Anu Bradford
AI can free teachers from routine tasks like correcting spelling, allowing them to focus on deeper learning. The key is demand‑driven design: teachers, students, and districts all asked for it, and it delivered measurable gains.
He linked AI deployment to real‑world educational outcomes, emphasizing the importance of freeing human capacity and aligning technology with user demand.
Grounded the conversation in field experience, reinforcing the theme of human‑AI collaboration and prompting others to discuss evaluation metrics and scalability.
Speaker: Iqbal Dhaliwal
Even when AI diagnostics outperform humans in the lab, they can reduce doctors’ efficiency in the field because the surrounding system isn’t adapted. Example: a GST fraud‑detection model was not scaled because it removed human discretion, a source of power.
He highlighted the sociopolitical dimension of AI adoption—trust, power, and institutional inertia—showing that technical superiority alone doesn’t guarantee deployment.
Introduced a cautionary perspective on governance and power dynamics, steering the panel toward discussing regulatory safeguards, stakeholder buy‑in, and the risk of technology being blocked by non‑technical considerations.
Speaker: Iqbal Dhaliwal
Evidence shows market concentration is rising, innovative resources are shifting to incumbents, and top AI scientists are moving from academia to industry, reducing open science. This could undermine creative destruction.
He supplied empirical data on concentration trends and the migration of talent, warning that the foundational AI layer may become increasingly closed and monopolized.
Deepened the earlier foundational‑layer argument, prompting the panel to consider policies that preserve competition, support universities, and maintain open research ecosystems.
Speaker: Ufuk Akcigit
The biggest systemic risk is that humanity becomes dumber by outsourcing thinking to AI. As educators, we must teach students to use generative AI to augment—not replace—their own reasoning.
She raised a philosophical and societal risk that goes beyond economics or regulation, questioning the long‑term cognitive effects of pervasive AI assistance.
Expanded the scope of the discussion to include human capital and education quality, influencing the rapid‑fire round and reinforcing the need for thoughtful AI integration.
Speaker: Anu Bradford
For the first time we may have tools to target poverty reduction at the individual level, but we risk not having robust governance to prevent abuses.
He combined optimism about AI’s precision in poverty alleviation with a sober warning about governance gaps, encapsulating the panel’s central tension.
Served as a concise summary of the panel’s dual narrative, prompting final reflections on both the transformative promise and the regulatory/ethical challenges.
Speaker: Johannes Zutt
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a broad, hopeful overview of AI’s potential to a nuanced examination of structural, institutional, and societal constraints. Johannes’s framing of “small AI” and the infrastructural gaps set the stage, while Ufuk’s two‑layer model introduced a structural lens that underpinned later concerns about concentration. Michael’s concrete public‑good example and Iqbal’s field‑level successes grounded the debate in real impact, whereas Anu’s deconstruction of the regulation‑innovation myth and her warning about cognitive atrophy broadened the policy conversation. The recurring theme of power—whether in the GST model or in market concentration—highlighted governance as a decisive factor. Collectively, these comments redirected the panel toward concrete policy levers (e.g., evidence‑based innovation funds, regulatory design, support for universities) and underscored the need to balance AI‑driven productivity gains with safeguards against inequality, concentration, and loss of human agency.

Follow-up Questions
How can emerging economies and developing markets harness the potential of AI and avoid the pitfalls?
Identifies the core challenge of translating AI opportunities into real benefits while addressing infrastructure, skills, and governance gaps in low‑resource settings.
Speaker: Johannes Zutt
Why was there historically low entrepreneurship and dynamism in emerging economies before AI, and what business‑environment reforms are needed to enable AI‑driven entrepreneurship?
Seeks to uncover structural constraints (e.g., family‑based firm size, regulatory environment) that limit firm dynamism, a prerequisite for AI to generate inclusive growth.
Speaker: Ufuk Akcigit
What are the likely impacts of AI on entry‑level job losses in developing countries, and what policies can mitigate labor‑market disruption?
Highlights the need for data‑driven analysis and policy design to protect vulnerable workers as automation spreads.
Speaker: Johannes Zutt
How effective are small‑AI applications in low‑connectivity, low‑literacy environments, and what design features (offline mode, local‑language support) are essential?
Calls for empirical research on the usability and impact of lightweight AI tools for farmers, nurses, teachers, etc., where infrastructure is limited.
Speaker: Johannes Zutt
What evaluation frameworks and metrics should be used to assess AI interventions (model performance, user impact, scalability, continuous improvement)?
Emphasizes the need for rigorous, evidence‑based assessment methods to ensure AI projects deliver real‑world benefits and can be iteratively improved.
Speaker: Michael Kremer
What role should multilateral development banks play in financing AI for public‑good applications (e.g., AI‑driven weather forecasts), and how can they accelerate adoption?
Points to a research gap on institutional mechanisms that can fund and scale AI solutions that lack private‑sector profit incentives.
Speaker: Michael Kremer
Where does the trade‑off lie between AI regulation and innovation, particularly for India’s emerging AI ecosystem?
Seeks guidance on balancing safeguards with a vibrant innovation climate, a key policy dilemma for many developing economies.
Speaker: Anu Bradford
How will concentration in the foundational AI layer affect downstream application markets, and what policies can keep the foundational layer contestable?
Raises concerns that high barriers to compute, data, and talent may entrench a few incumbents, limiting competition and creative destruction.
Speaker: Ufuk Akcigit
How can trust in AI technologies be built among frontline workers (doctors, teachers, health workers), and what training or system‑design interventions are needed?
Identifies a gap between technical performance and real‑world adoption, requiring research on user trust, workflow integration, and capacity building.
Speaker: Iqbal Dhaliwal
What governance and institutional barriers impede scaling of AI solutions (e.g., GST fraud detection), and how can policy address power dynamics that resist automation?
Highlights the need to study why governments may reject effective AI tools due to concerns over discretionary power, informing design of acceptable implementation pathways.
Speaker: Iqbal Dhaliwal
What are the systemic risks of AI causing human cognitive atrophy (outsourcing thinking), and how should education systems adapt to ensure AI augments rather than replaces critical thinking?
Calls for research on long‑term societal impacts of over‑reliance on generative AI and curriculum reforms to preserve human creativity.
Speaker: Anu Bradford
What should finance ministers of developing countries consider regarding AI sovereignty, supply‑chain dependencies, and geopolitical risks in the AI stack?
Seeks a strategic framework for policymakers to navigate techno‑nationalism, semiconductor dependencies, and potential weaponization of AI supply chains.
Speaker: Anu Bradford
How can evidence‑based innovation funds with tiered financing (pilot grants, rigorous testing, scale‑up funding) be structured to close the speed gap between fast‑moving private AI developers and slower public‑sector adoption?
Proposes a research agenda on financing mechanisms that de‑risk AI pilots and promote competitive, high‑quality solutions for public services.
Speaker: Michael Kremer
How can universities remain healthy custodians of foundational AI research to prevent a shift toward closed, industry‑driven science, and what policies support open science in the AI era?
Identifies a need to study institutional policies that keep foundational AI research contestable and publicly accessible, preserving spillovers.
Speaker: Ufuk Akcigit
What are the scalable impacts of AI on health and education outcomes in the public sector of low‑ and middle‑income countries, and what implementation research is needed to realize these gains?
Calls for systematic evaluation of AI‑driven interventions in health and education to determine effectiveness, equity, and pathways for large‑scale rollout.
Speaker: Michael Kremer

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.