How AI Drives Innovation and Economic Growth
20 Feb 2026 13:00h - 14:00h
Session at a glance
Summary
This discussion, moderated by Jeanette Rodrigues at the Bharat Mandapam, focused on how artificial intelligence can either narrow or widen development gaps between countries, particularly examining opportunities and challenges for emerging economies like India. Johannes Zutt from the World Bank opened by highlighting AI’s potential as a game-changer for developing nations, noting that 15-16% of jobs in South Asia show strong complementarity with AI, enabling workers to enhance their skills and effectiveness across sectors like agriculture, healthcare, and finance.
The panelists explored the concept of “small AI” – practical, affordable, locally relevant applications that work with limited infrastructure – as opposed to large foundational models concentrated in the US and China. Michael Kremer emphasized AI’s potential to provide public goods like weather forecasting and digital identity systems, citing India’s success in distributing AI weather forecasts to 38 million farmers. Anu Bradford discussed regulatory approaches, comparing the EU’s rights-driven framework with other models, while debunking the myth that regulation necessarily stifles innovation.
Ufuk Akcigit raised concerns about market concentration in AI’s foundational layer, noting worrying trends of talent migration from academia to large tech companies and the shift from open to protected science. Iqbal Dhaliwal stressed the importance of evidence-based evaluation of AI interventions, highlighting examples where promising technologies failed due to trust issues or inadequate adaptation of existing systems.
The discussion revealed both optimism about AI’s transformative potential in healthcare, education, and government services, and significant concerns about labor market disruption, market concentration, and the risk of humans becoming overly dependent on AI systems. The panelists concluded that realizing AI’s benefits while mitigating risks requires careful policy design, robust governance frameworks, and continued investment in human capabilities alongside technological advancement.
Keypoints
Major Discussion Points:
– AI’s Dual Potential for Development: The discussion centered on how AI could either narrow or widen development gaps, with particular focus on “small AI” – practical, affordable, locally relevant applications that work in environments with limited connectivity and infrastructure, versus large foundational models that require significant resources.
– Market Concentration vs. Democratization: A key tension emerged between AI’s democratizing potential at the application layer (where small businesses can access previously unavailable tools) and concerning concentration trends at the foundational layer, where high barriers to entry in compute, data, and talent are creating oligopolistic conditions.
– Real-World Implementation Challenges: Panelists emphasized that successful AI deployment requires addressing fundamental systemic issues – from basic infrastructure (electricity, internet) to business environments, regulatory frameworks, and human adaptation. Technology alone cannot solve problems without proper institutional support.
– Regulatory Sovereignty and Global Power Dynamics: The discussion explored how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the US and China, examining different regulatory approaches (US innovation-focused vs. EU rights-driven) and their implications for emerging economies.
– Evidence-Based Evaluation and Scaling: Strong emphasis on rigorous testing of AI interventions, moving beyond technological capability to measure actual user impact, scalability, and continuous improvement, with multiple examples of promising pilots that failed to scale due to political economy factors.
Overall Purpose:
The discussion aimed to provide policymakers in developing countries with practical guidance on harnessing AI’s benefits while mitigating risks, moving beyond both utopian and dystopian narratives to focus on real-world implementation challenges and opportunities.
Overall Tone:
The tone was notably optimistic yet pragmatic, described as representing “hope” rather than the “fear” that characterized earlier AI summits. While panelists acknowledged significant risks around market concentration, job displacement, and governance challenges, they maintained a constructive focus on actionable solutions. The conversation remained consistently grounded in empirical evidence and real-world examples, avoiding both technological determinism and excessive pessimism.
Speakers
Speakers from the provided list:
– Jeanette Rodrigues: Moderator/Host of the panel discussion
– Johannes Zutt: World Bank representative (referred to as “John” in the discussion)
– Ufuk Akcigit: Macroeconomist, working on World Development Report 2026 on AI and development with the World Bank
– Michael Kremer: Nobel Prize winner, involved with Development Innovation Ventures and various AI development initiatives
– Anu Bradford: Legal scholar/academic based in the U.S., originally from Europe, specializing in AI regulation and policy
– Iqbal Dhaliwal: Works at J-PAL (Abdul Latif Jameel Poverty Action Lab), former civil services exam topper in India, focuses on evidence-based policy interventions
Additional speakers:
None – all speakers mentioned in the transcript are included in the provided speaker names list.
Full session report
This comprehensive discussion at the Bharat Mandapam, moderated by Jeanette Rodrigues, brought together leading experts to examine one of the most pressing questions in international development: whether artificial intelligence will narrow or widen the development gap between nations. The panel featured Johannes Zutt from the World Bank, Nobel laureate economist Michael Kremer, macroeconomist Ufuk Akcigit, legal scholar Anu Bradford (originally from Europe but based in the US), and development practitioner Iqbal Dhaliwal (a civil services exam topper turned researcher), each offering distinct perspectives on AI’s transformative potential and inherent risks.
Rodrigues noted this represented the fourth AI summit, following previous gatherings including the first in the UK, and observed a notable shift from fear-based discussions in earlier summits to the hope-focused approach evident in India’s “AI for all” objective.
AI’s Transformative Potential for Development
Johannes Zutt opened the discussion by positioning AI as a potential game-changer for emerging markets and developing economies, presenting evidence from the World Bank’s recent research in South Asia. The findings revealed that approximately 15-16% of jobs in the region demonstrate strong complementarity with AI, enabling workers to expand their skills and effectiveness rather than being displaced. This statistic challenges the common narrative of AI as primarily a job destroyer, instead highlighting its potential as a productivity enhancer.
Zutt described practical applications that illustrate AI’s democratising potential: farmers using AI to identify crop diseases and pests, nurses leveraging AI for diagnostic support in unfamiliar cases, and financial institutions employing AI to better assess borrower creditworthiness. These examples demonstrate how AI can fill critical skill gaps in healthcare, education, and financial services.
However, Zutt acknowledged significant challenges facing developing countries in harnessing AI’s potential. Basic infrastructure deficits—unreliable electricity, weak internet connectivity, limited digital literacy—create fundamental barriers to AI adoption. Many users may need to rely on voice-based interactions with basic devices rather than sophisticated smartphones.
The Small AI Revolution
Central to Zutt’s analysis was the concept of “small AI”—practical, affordable, locally relevant applications that address specific problems whilst working within constraints of limited connectivity, data availability, skills, and infrastructure. This approach contrasts with large foundational models that require massive computational resources.
Zutt emphasised that small AI represents the most promising pathway for developing countries, requiring bespoke solutions that help users conduct basic investigations using their phones, identify problems, find solutions, and connect with local resources. India emerged as a compelling example, with the world’s third-largest digital universe after the United States and China, built on strong foundations through digital identity programmes and payment platforms.
Market Concentration and Creative Destruction
Ufuk Akcigit introduced a crucial analytical framework distinguishing between AI’s foundational layer and application layer. At the application layer, AI democratises capabilities previously available only to large businesses, enabling small enterprises to access sophisticated tools. However, the foundational layer presents extraordinarily high entry barriers due to compute-intensive requirements and massive data needs, creating conditions prone to market concentration.
Akcigit presented empirical evidence of troubling trends: market concentration in the United States has been increasing since 1980, accelerating after 2000, with innovative resources increasingly shifting towards large incumbent firms. He highlighted a significant brain drain from academia to industry, with dramatic salary increases in industry accelerating after breakthrough moments in 2012 (image processing) and 2017 (foundational models). When researchers move to industry, their publication output drops significantly whilst patenting increases dramatically, representing a shift from open science to protected intellectual property.
Akcigit’s most provocative insight challenged AI’s development premise, questioning why entrepreneurship and dynamism were absent in emerging economies before AI’s arrival. He noted that firm size in developing countries was often best predicted by family size rather than competitive performance, suggesting AI alone cannot overcome deep-seated institutional barriers.
Public Goods and Government Investment
Michael Kremer provided an analysis of market failures and government roles, arguing that whilst private firms develop profitable AI applications, critical public goods applications require government and multilateral support. He cited AI-powered weather forecasting as an exemplar: India’s distribution of AI weather forecasts to 38 million farmers demonstrated both the scale of impact and the public-good nature of such services.
During an unpredictable monsoon season, AI forecasts accurately predicted early arrival in Kerala and southern India followed by unexpected delays—information that reached farmers when other sources failed. Survey evidence showed farmers responding by adjusting transplanting schedules and seed varieties.
Kremer also highlighted India’s digital identity system as a powerful example of government investment in AI-enabled public goods creating platforms for broader innovation. He referenced Microsoft Research India’s HAB program for driver’s licenses as another example of AI applications in traffic safety.
However, Kremer expressed concern about public sector adoption challenges, noting that government systems may resist AI technologies, potentially excluding the poor from benefits in public services.
Regulatory Sovereignty and Global Power Dynamics
Anu Bradford addressed how developing countries can maintain AI sovereignty when foundational technologies are concentrated in the United States and China, with DeepSeek representing China’s position in large language models. She argued that the Global South has the same incentives for regulatory sovereignty as developed nations but faces extraordinary implementation challenges.
Bradford’s analysis of the European Union’s rights-driven regulatory framework offers lessons for countries seeking to balance innovation with protection. Crucially, she challenged the conventional wisdom that regulation stifles innovation, calling this a “false choice.” Her analysis of Europe’s innovation gap identified four structural factors: lack of a digital single market across 27 jurisdictions, absence of robust capital markets (with only 5% of global venture capital compared to over 50% in the United States), legal frameworks discouraging risk-taking, and failure to harness global talent effectively.
This reframing suggests developing countries can pursue protective regulation without sacrificing innovation, provided they address underlying structural factors that drive technological development.
Implementation Challenges and Real-World Constraints
Iqbal Dhaliwal brought crucial field experience, emphasising that successful AI applications must be demand-driven and free up time for frontline workers rather than adding burden. His example of AI-powered essay feedback in public schools illustrated this principle—the technology eliminated routine tasks like correcting spelling errors, freeing teachers for higher-value activities like analytical thinking instruction.
However, Dhaliwal’s research reveals systematic implementation failures even when AI demonstrates superior laboratory performance. His most revealing example involved machine learning for tax collection in India: despite successfully increasing identification of fraudulent firms from 38% to 55% at low cost, officials refused to scale the programme because it threatened existing power structures by removing human discretion in enforcement decisions.
Evidence-Based Evaluation and Scaling
Both Kremer and Dhaliwal emphasised rigorous evaluation methodologies. Kremer outlined a four-stage framework: model evaluation (technical performance), user impact assessment (efficacy trials), scalability testing (effectiveness at scale), and continuous improvement systems. He referenced Development Innovation Ventures as an example of tiered funding approaches—small grants for pilots, larger grants for rigorous testing, and substantial funding for successful scale-up.
Future Risks and Opportunities: 2035 Predictions
In rapid-fire predictions for 2035, panellists identified both opportunities and risks:
Ufuk Akcigit expressed optimism about government productivity improvements but concern about labour market disruption, particularly for entry-level jobs that represent aspirational opportunities in developing countries. He highlighted a policy contradiction where governments incentivise AI adoption whilst taxing human employment through provident fund contributions and labour regulations.
Anu Bradford showed excitement about education and health improvements but worried about humans “getting dumber” by outsourcing thinking to AI systems. As an educator, she emphasised using AI to enhance rather than substitute human capabilities.
Michael Kremer was optimistic about health and education advances but concerned about public sector adoption failures that could exclude the poor from AI benefits in public services.
Iqbal Dhaliwal shared optimism about healthcare and education whilst worrying about market concentration preventing broad benefit distribution.
Johannes Zutt expressed excitement about targeted poverty reduction through AI-enabled individual-level interventions but warned that inadequate governance frameworks could enable serious abuses.
Balancing Hope and Pragmatism
The discussion successfully balanced optimistic potential with realistic assessment of implementation challenges. Unlike earlier AI summits dominated by fear about job displacement, this conversation maintained constructive focus on actionable solutions whilst acknowledging genuine risks.
The panellists’ diverse backgrounds provided complementary perspectives, with convergence on key issues like evidence-based evaluation, locally relevant solutions, and market concentration concerns suggesting robust foundations for policy development.
Conclusion and Policy Implications
The discussion revealed that AI’s impact on development gaps depends critically on policy choices made today. The technology offers genuine opportunities to leapfrog development challenges, particularly through small AI applications working within existing constraints rather than requiring wholesale infrastructure transformation.
However, realising benefits requires addressing structural issues predating AI: market concentration in foundational development, inadequate governance frameworks, institutional resistance to change, and policy contradictions favouring capital over labour.
The conversation suggests developing countries need not choose between innovation and regulation but must address structural factors driving technological development: market access, capital availability, talent retention, and risk-taking culture. Success requires coordinated action across infrastructure investment, regulatory frameworks, education systems, and labour market policies.
As Rodrigues observed in closing, noting the “messy human notes” visible on panellists’ screens, the experts weren’t outsourcing their thinking to AI—embodying the principle that AI should enhance rather than replace human capabilities. The choice between AI narrowing or widening development gaps remains open, contingent on the wisdom and effectiveness of policy responses implemented today.
Session transcript
all around the Bharat Mandapam. So once again, thank you very much for your time this afternoon and for choosing us to have a conversation with. To start off, I would like to introduce John, who will make some opening comments for the World Bank.
So thank you very much, Jeanette. It’s a great pleasure to be here speaking to all of you this afternoon. Over the past week, we’ve heard from a lot of world leaders, tech leaders, experts from across many, many countries about how AI is fundamentally reshaping our world, presenting not just a technological shift but a structural transformation with profound implications for economies and societies everywhere. For emerging markets and developing economies, as for all economies, AI could be a game changer. So sorry, that probably helps. I thought the mics were on. So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.
It offers clear opportunities to enhance growth and productivity. We recently did some work in South Asia at the World Bank Group to see what sort of impact AI was having on jobs in the region, and we found that approximately 15 or 16 percent of jobs here have strong complementarity with AI. AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy. It helps farmers to identify pests on their crops, diseases in their crops, and also how to address them. It helps nurses to identify the ailments and illnesses that their patients may be suffering, particularly the ones that they’re not very familiar with, but that they can research using appropriate AI applications. It helps financial institutions to understand better the ability of borrowers to take on loans, which, of course, expands the ability of the borrower to expand his or her business. So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.
Of course, at the same time, on the flip side, AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based, performing relatively rote work that can be taken over by automation. And we’re actually seeing this in the World Bank Group. We went and looked at the types of jobs that we are advertising these days compared to a couple of years ago, and what we found is that in that layer, sort of at the bottom of the professional classes inside the bank group, there are just fewer of those types of jobs being advertised in the World Bank Group today than there were a few years ago.
At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use. They may not have reliable electricity. We can start with that very basic one. They may not have an internet backbone that’s sufficiently strong. People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher-end devices. They may need to use very, very basic devices, not even smartphones, and rely on voice communication, asking a question and hearing a response. So there may be struggles of that kind in developing countries and emerging markets.
And I’m not even talking about all the governance and regulatory safeguards that can also come into play. So the question, of course, is how can emerging economies, developing markets, harness the potential of AI and avoid the pitfalls? And for us in the World Bank Group, we’ve been very, very focused recently on basically small AI. Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited. And this is extremely important in countries like India where all of those conditions can apply. And yet there’s tremendous potential for people to grow their productivity if they have timely access to information of the right kind in their local language tailored to their specific circumstances.
So that’s what we are trying to do in South Asia today, and across the globe actually. And this is really about some of the examples that I mentioned earlier: having bespoke applications that help farmers to do very basic investigation of the types of issues that they’re facing, using their phone to analyze what’s going on, to identify it, to find out how to address it, even to find out who within their local area, in their market space, can help them by providing the tools or the products that are necessary to address whatever they’re running into. So India, of course, is a very strong example of what’s possible. India has been a leading country in digital innovation for quite some time. After the United States and China, it has the largest, if you like, digital universe in the world today. It’s got some very good foundations: there’s the digital identity program as well as the digital payment platform that currently exists.
There are lots of Indian firms that are innovating in AI, including in the small AI applications that I’ve been talking about. And the governments of India have an objective of ensuring that there is AI for all. So they are very, very aware of the challenges that need to be overcome to make AI accessible to a very, very broad spectrum of the population and not just the very rich that, to some extent, need assistance the least, right? It’s the poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with and have not been using that much in the past. So we’re working in India.
We’re working in a lot of different states, Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana, on these different aspects, working with governments on the foundational elements, interoperability, making sure that accessibility is possible, that programs can run offline as it were, so that people who aren’t able to get online all the time can benefit, and so on. And then we’re also working with private sector investors who are developing apps. I mean, we’re not actually developing many apps ourselves. That’s not really in our comparative advantage. Our comparative advantage as the World Bank Group is to do the more advisory work, make sure that the backbone information that’s embedded in the application is reliable and trustworthy, because of course that’s critical for ensuring successful uptake.
But we are helping governments to create the space that enables experimentation in an AI sandbox, to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive. So I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public-facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private-sector-facing effort, because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.
We’re doing a little bit on bigger AI. There’s obviously a connection between the two. Big AI can, through computational power, generate new knowledge that can help us to do things that we haven’t done so well in the past much, much better. But for countries like India, translating that into small AI will also be very, very important for uptake. So I’m looking forward to hearing from all the distinguished speakers in this panel about their thoughts on what’s happening today in this sector. So thank you very much.
Thank you very much, John. John spoke about, of course, the use cases for AI, and on the other side of the spectrum we have the large language models, we have the foundational AI. But no matter where you sit on the spectrum, no matter where your interests lie, AI innovation never disperses and never diffuses equally. Today on this panel, I hope to unpack what determines whether AI narrows the development gap or whether it widens the development gap. Especially, we are looking to talk about the real world. What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI? Before I start, just setting the stage.
To a man, to a woman, everybody I spoke with who’s attended from the first AI summit to today, and this is, I think, the fourth AI summit being held. The first one was held in the UK. And without exception, all of them made it a point to tell me how the first session was full of fear. It was, oh, my God, AI is this terrible technology which is going to steal all our jobs, make us redundant. And when they come to India, they see the hope that technology and AI brings. And that’s the spirit of the discussion this afternoon, to figure out how we can balance both of those extremes, hope and concern, and go ahead in a pragmatic, policy-first way to prepare for the real world.
So if I could start with you, Ufuk, how do you think about AI? And especially, where do you see areas of creative destruction? To foster the innovation that we need.
Thank you very much. And so, of course, creative destruction is an important driver of economic growth in the long run. So that’s why, you know, it’s an interesting question how AI will affect creative destruction in general. Of course, we are at a very early phase of AI, and it’s a GPT, a general-purpose technology. And typically, you know, when GPTs are emerging, there’s a huge surge of new businesses. And this should not be misleading. I think the main question we should be asking ourselves is what will happen to creative destruction in the future? What does the future look like in terms of creative destruction? And I’m a macroeconomist, so that’s why I like to look at this with a, you know, bird’s-eye view.
And I would like to, you know, separate advanced economies from emerging or developing economies. So when it comes to advanced economies, there, again, we need to split the issue into two layers. One, the foundational layer. and the other one is the application layer. When we look at the application layer, it’s great. You know, the entry barriers are low. Small businesses can do what only large businesses could do in the past, and, you know, they can do their accounting, marketing. You know, there are so many opportunities now. The entry barrier is low. As a result, this suggests that, you know, this is going to be more, you know, friendly for creative destruction on the application. But then there’s also the foundation layer, and I think that’s exactly where the bottleneck is.
When we look at the foundation layer, the entry barrier is really, really high. You know, it’s very compute-heavy. It’s very data-heavy. It’s very talent-heavy. So as a result, you know, this market, at least this layer, is very concentration-prone. Of course, it’s very early. But, you know, normally we have to be concerned about the foundational layer and how things will pan out, because this is the upstream to the application layer, which is downstream to the foundation layer. So that’s why whatever will happen at the foundational layer will potentially spill over to the application layer, too. So that’s why I think we need to look at early indicators. But, you know, in the interest of time, I don’t want to go into the empirical evidence yet.
Maybe we can come back to it in the second round. When we look at the developing countries, so I think, you know, I agree with Johannes. You know, I think AI is creating fantastic opportunities. So that’s why I think it’s really important to understand the opportunities as well as the risks for developing countries. And together with the World Bank, we are working on the World Development Report 2026, which is going to be on AI and development. And these are exactly the issues that we are focusing on. But I think before we go into those details, we should ask ourselves one major question. Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was, you know, when we looked at the firm’s life cycle, for instance, why was it not up or out?
Why was it not, you know, very competition-friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family, or the number of male children? These are still lingering issues, and AI will not bring magic unless we understand and fix the business environment in these economies. You know, AI will just create new tools. But at the end of the day, we need to make sure that the business-friendly environment is there for entrepreneurs to come and exercise their ideas.
Ufuk, that’s a very interesting leaping-off point, the real world. And the intention of this panel is to get exactly there. So if I may turn to you, quite literally turn to you, Michael, and ask you about the real world. You’re obviously doing a lot of work on the ground. Where do you see the potential for AI to spur gains? And are there any really transformative breakthrough areas that you’re looking at right now?
Yes. Thank you. Thanks very much. You know, I don’t want to minimize the existence of forces that may widen gaps. I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps. And, you know, I think the question of which policy actions to take can be informed by thinking through relevant market failures and relevant government failures. Let me give a concrete example or two. So private firms have incentives to develop and improve applications of AI that can generate profits. But there are some very important applications of AI, for public goods, for example, that will not attract commercial investment commensurate with their needs.
And that’s an area where I think governments and multilateral development banks can play an important role. Some of this very much echoes what you were saying about small models, but I’ll also mention the link between the two. An obvious example where I think India has been a leader for the world is the development of digital identity. This, as Ufuk was saying, enables a lot of work by individual entrepreneurs and a lot of other applications. So that’s a huge success, and I think multilateral development banks, together with India, can help bring that to many other countries. Let me take another example, one that’s not as well known, but that picks up on your comment about farmers.
So one thing that’s critical for farmers: they have to make a bunch of decisions that are weather-dependent. When do you plant, for example? What varieties do you use: a drought-resistant variety, or another variety? But most farmers around the world don’t have access to state-of-the-art weather forecasts. I’m not talking about one country; in low- and middle-income countries, they don’t have access to that. Now, there’s been a huge advance. We tend to think of large language models, but AI is also pushing science forward, and that includes weather forecasting; there’s really a revolution driven by AI. But weather forecasts are non-rival and largely non-excludable. They’re the classic definition of a public good.
So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts. Again, here India is a leader: the Indian government distributed AI weather forecasts to 38 million farmers last year. And the evidence from this particular case suggests farmers respond. Let me say a little bit about last year’s monsoon: it came early in Kerala and southern India, but then there was an unexpected delay in its progression. The AI forecasts got that right, and they were the only source of information that reached farmers with that. We did a survey in the areas above that line, and farmers are responding: they transplant more, they use hybrid seeds more.
Evidence from around the world is consistent with this: farmers respond to these AI weather forecasts. So that’s one example, but there are many others, and I’m happy to discuss them, in education, in traffic enforcement, and elsewhere.
Michael, your answer should be: read the book. Okay. We’ve spoken about the use cases of India, but setting up digital IDs, of course, is a sovereign decision; it’s something India could do unilaterally. When it comes to the large language models, that’s not the reality. The large language models are concentrated in the US, and in China now with DeepSeek. Anu, in a world where you largely have the rules being set by the two large powers, the US and China, arguably, and of course the EU as well, and you’ve done a lot of work on that: who sets the AI rules for the Global South? Is there even the possibility for the Global South to talk about sovereignty?
So I think the Global South has the same kind of incentive for its own AI sovereignty, including regulatory sovereignty: to design the rules that work better for their economies, for their societies, for what the public interest in these jurisdictions calls for. But regulating AI is really difficult even for very established bureaucracies. You need to make sure that it is innovation-friendly, and yet at the same time you need to be careful in managing the risks for individuals and societies. Even very established regulators like the European Union have found it one of their most challenging tasks to come up with the AI Act. So there’s probably something to be learned from these jurisdictions that have gone ahead and done the kind of thinking that has then resulted in some of the regulatory frameworks that we now have in place.
So if you think about the choices that India has when it looks around, one of them is to ask: how does the EU go about this? The EU follows what I would call a rights-driven approach to regulation. What really characterizes the AI Act, the first horizontal, binding, economy-wide regulation that the Europeans enacted, is that it seeks to protect the fundamental rights of individuals and the democratic structures of society, and that it also seeks to ensure a greater distribution of the benefits of the AI revolution. The European approach is very conscious that it wants to share the benefits, so they don’t all go to the large developers of these models; individual users, society at large, and smaller companies benefit from AI as well. So there’s something I think the Europeans can teach in terms of that regulatory approach, in addition to some details of how that regulation in the end was constructed. But just one word: India is a formidable economy that doesn’t need to take a template and plug it into its economy as such. I think India is in a very good position to take the lessons that serve its needs, yet make the kind of local modifications and variations that reflect the distinct priorities of this country.
Anu, before I turn to Iqbal, a quick follow -up question to you. As India makes its own rules, where does the trade -off lie between regulation and innovation?
So this is very interesting, because I am based in the U.S., but I’m originally from Europe, and these two jurisdictions are described as: the U.S. develops technologies and the Europeans regulate those technologies. So in a way, does India want the innovation path or the regulation path? And I think there are many votes who would go for innovation. But I really would like to debunk this myth; to me it’s a false choice. The reason we don’t see these large language models being developed in Europe is not because there’s the GDPR, the General Data Protection Regulation. It’s not because there is the AI Act. The perceived innovation gap between the United States and Europe comes down to, I think, four things.
So first, there is no digital single market in Europe; it’s very hard for these AI companies to scale across 27 distinct markets. Second, there’s no deep, robust capital markets union: 5% of global venture capital is in Europe, over 50% in the United States. That explains why the U.S. has been able to take much greater steps in developing AI technologies. Third, there are legal frameworks and cultural attitudes to risk-taking. I wouldn’t encourage you to replicate the European ones, because it’s very hard to innovate on the frontier of technological innovation; sometimes you fail, but you need to then be given a second chance.
And the fourth, I think, foundational pillar of the robust U.S. tech ecosystem is that the U.S. has been spectacularly successful in harnessing the global talent that has chosen to come to the U.S., including many Indian data scientists and engineers, who think that the U.S. is the place where they can start their companies, scale their companies, fund their companies, and where U.S. universities can attract them. So before accepting the idea that choosing to follow or imitate aspects of the European rights-protective regulation would come at the cost of innovation, we need to understand better what actually drives technological innovation, and whether regulation is really what is holding it back.
Thank you, Anu. Iqbal, turning to you. You’re working in a part of the world, South Asia, where, well, what is regulation? What is enforcement? At the risk of sounding like a provocateur, it’s a little bit the Wild West. And therefore we talk a lot in our part of the world about small AI, about targeted AI. My question to you is: what should policymakers keep in mind when designing AI-enabled interventions, especially when it comes to small AI and targeted use cases?
vulnerable public schools all the way from 11th place to becoming the second-best-performing state in just a matter of two or three years. Phenomenal results, right? But then you start saying, let’s unpack this. What was this thing doing? The first thing they find out: a lot of people ask, oh, does this mean that I don’t need teachers anymore? No, you still need the teachers. What it replaces is the rote task of the teacher having to correct spelling mistakes, calling you to the room and saying, hey, you forgot your comma, you forgot to capitalize. Instead, AI takes care of all of that, and now the teacher can sit with you in the freed-up time and say, how did you set up the structure of this essay?
Did you think about this analytically or not? And that’s the first insight that comes from evaluation: it frees up the teacher’s time. Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few interventions do the opposite and free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner. The second thing that was really important here was that this was demand-driven, right? There was a demand by the kids to improve their essays. There was a demand by the teachers to free up their time. But most importantly, there was a demand by the school districts to show progress.
So I think that’s a great example of how everything comes together if you think about it ahead of time.
Ladies and gentlemen, a topper of India’s notoriously difficult civil services exam. So take Iqbal even more seriously than you would just a normal panelist.
Thank you. I thought that was history now.
It’s never history in India, Iqbal. Michael, turning to you, almost equal in accomplishment, having won a Nobel. What risks should multilaterals like the World Bank keep in mind? Or let me rephrase that, actually: is there a risk that multilaterals are moving too slowly relative to the technology?
I think there certainly is. As I noted before, there are certain areas where the private sector is going to move, but there are other areas where it’s not going to move quickly, and it’s going to be very important for governments, for multilateral development banks, and for philanthropy to move. I think there are a number of approaches to this. One way is to encourage innovation by setting up institutions like innovation funds, particularly, to echo Iqbal, evidence-based innovation funds. I’ll give you one example of something that I’m involved in: Development Innovation Ventures, which was initially set up in the U.S. government but has now been relaunched independently. It has tiered funding, so there are initially very small grants to pilot new ideas.
Then there are somewhat larger grants to rigorously test them, as Iqbal emphasized, and then, for those that are most successful, there are funds to help transition them to scale. Why is that important? Because if we’re thinking about public services (and there are other sectors where this is needed), there’s probably going to be insufficient competition. Private developers are going to come up with innovations, but if they have to sell them to the government, they’re facing a monopsonistic buyer; they’re probably not going to get rich doing that. So some support to generate more entrants in that market is, I think, very important.
It will also mean that prices will go down and quality will go up when the government does that. Let me give another example of the potential, something that I doubt many people here are thinking of when they think of AI: traffic safety. We’ve all been exposed to traffic in the past few days. Traffic is a real problem interfering with urbanization, which may drive growth; there are a lot of deaths from traffic, and a lot of citizens around the world have very difficult and painful experiences with traffic enforcement. Well, you can have automated traffic cameras, which have the opportunity to improve traffic outcomes but also to improve people’s perception of fairness in government, and India is moving on this. Let me mention another thing being done within traffic safety. Microsoft Research India developed a program called HAMS for driver’s licenses, which uses AI to automatically test drivers in their exams so that they only pass when they can actually drive. It has been introduced, I believe, in 56 sites across India, and hundreds of thousands of people have taken tests this way. We followed up: we got information from Ola on ratings, and the number of drivers rated as driving unsafely went down 20 to 30 percent where HAMS had been installed. So that’s something that was developed not by Microsoft’s main business but by Microsoft Research. We can create some support for more ideas like that to be developed and rigorously tested, which can benefit India and the whole world.

We are running out of time. This is probably the one place in India where time is really respected, and we have to end on time.
So I had a list of wonderful questions, but if I could now move to a space where we are really giving shorter, quicker answers, and the deeply, deeply interesting ones about who’s winning and who’s losing. Michael, if I could start with you, actually. We’ve seen many promising technologies fail to live up to their promise. How should we think when we are evaluating AI interventions? What should be the metrics that we use? Okay. First, model evaluation. AI companies typically do that part: how good is the model output for specific tasks? Forecasting the weather, does it do a good job? Does it match your local language well?
Second, user impact. Here I think there’s a role for initial pilots akin to a medical efficacy trial: if you put the work into trying it, does it lead to improvements in outcomes for the users? Third, scalability and usage at scale; that’s more like an effectiveness trial in medicine. It’s important to think not just about the tech but also about the human systems: are the teachers actually going to use the product, and how can you get the teachers to use it? And the fourth area is continuous improvement. You want a system that improves the underlying models. So in procurement we might want to think about requiring continuous A/B tests, publicity about what the usage and impact are, and perhaps even requiring open access as part of the procurement package.
Thank you, Michael. Iqbal, I want to flip that question to you: where do you see hype in the promises of AI that you don’t think will play out?
I think hype is natural because the technology is exciting. It’s a general-purpose technology. It’s evolving very quickly. The marginal cost of deployment for the next user is very low. It’s multimodal: today you are doing it in text, tomorrow in video, the day after tomorrow in audio. Everybody who has a smartphone has it. So I can understand where the hype is coming from. But what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the impact that we are hoping for. My job at J-PAL, sitting at the top, is to not worry about one professor’s evaluation or one researcher’s evaluation, but to ask: when I connect all these dots, what am I seeing?
And I’m seeing two patterns. One is about trust in technology, and the second is about the reality of the policy world. Let me elaborate quickly on both. Trust in technology: there are studies which found that even if you give doctors and frontline health care workers access to AI-enabled diagnostic tools, including radiology tools that predict diseases, oftentimes it doesn’t lead to an improvement in results. And when you try to unpack that: even though some of these diagnostic tools have better predictive power than humans in the lab, in the field not only is their efficiency lower, they lower the efficiency of the doctors, because we have not trained them enough.
And the second thing is the enabling mechanism, the world around us. We just assume that because the technology works, even if it works in the field, the rest of the system will adapt to it. No, you have to adapt the system to the rest of the world. This example comes from India, where, working with one particular state government, we tried to improve the collection of value-added taxes (called GST in India); there is a whole worry about bogus firms that are created to game the GST. The machine learning algorithm is able to increase the probability of predicting a bogus firm from 38% to 55% in one shot, at a very, very low cost.
When it came time to scale up this program, the government refused, because, think about it, you have taken away the discretion of the human to decide whether they should raid Michael’s firm or Iqbal’s firm. That is power. And if you haven’t thought through that, what is the point of the technology?
I won’t terrify anyone in the room by asking why they didn’t want to scale up this tech. But talking about weeding out bad actors and about firm-level decisions, moving on to Ufuk: does the firm-level evidence show productivity gains diffusing evenly across firms?
So just going back quickly to the question of the firm. In the model that I highlighted earlier, I think it’s important to understand what’s happening upstream, so that we can then understand where things will be going in the future. And the early evidence there is a bit worrying. First of all, when we look at dynamism and market concentration in the U.S., market concentration has been increasing since 1980, but in an accelerating way after 2000. That’s the first set of evidence. The second set of evidence comes from how innovative resources are allocated across firms. When we look at the inventors who are creating the creative destruction and the technologies, there’s a massive shift towards market incumbents.
And when I say incumbents, I mean firms with more than 1,000 employees. Around 2000, about 50% of inventors worked for incumbent firms; in just 10 years, that shifted to more than 60%. A massive reallocation of innovative resources. And the final piece of evidence (we are going to release this study next week): we looked at how AI is impacting universities, focusing on AI-publishing scientists. The top 1% of AI-publishing scientists in academia used to make around $300,000 in 2000; that went up to $390,000 over two decades. Similar people in industry used to make around $550,000; now it is up to $2 million.
And there have been two breakpoints: one in 2012, the other in 2017. Of course, the image-processing breakthrough, and then the foundational-model revolution in 2017. The more worrying part about this, which brings me back to the foundational-model side of things, is that this created a massive out-migration from academia to industry.
And after 2017 especially, when compute and infrastructure became so important, the target or destination became large incumbent information companies, which again highlights where things are going in terms of concentration. The worrying part is also that when people move from academia to industry, their publication record goes down by 50%, and they patent 600% more after they move, which means we are moving from open science to more protected science. Now, spillovers are extremely important for creative destruction, for the future of innovation. That’s why, if we are to keep the foundational layer contestable, I think the fundamental players there will be universities.
And keeping universities healthy is extremely important, but there is very little discussion of this, which I think we need before it gets too late. Because once you button the first button wrong, the rest will follow wrong as well. That’s why we have to have this frank conversation early in the game; otherwise it might be too late.
Ufuk, what you spoke about boils down to something Iqbal mentioned as well: power. Because power still makes decisions in this world today. So Anu, before I move to the final section of this panel, if I could ask you: if the finance minister of a developing country, let’s say India, comes to you and asks, Anu, how should I think about this, what would you tell her?
So today, if you think about how much political power, but also geopolitical power, is shaping our conversations around AI, each country is now pushed towards greater techno-nationalism and techno-protectionism; AI sovereignty has become almost a uniform goal for everyone. But I would remind everyone, even when encountering players like the United States and China, that nobody in today’s world will be completely sovereign in the AI space. Let me take just one layer of the AI stack as an example. What is now driving a lot of the global AI race is this idea that we want to do frontier AI, that we want to have these powerful foundation models.
That means you need a lot of compute. You can’t have a lot of compute unless you have access to high-end semiconductors. The U.S. is well positioned there: it hosts companies like NVIDIA and leads in the design of semiconductors. But who is manufacturing them? We really need to think about the role of Taiwan there. Then the Europeans have ASML in the Netherlands, which leads in the high-end equipment needed for manufacturing. But that is dependent on chemicals, where Japan is leading. And the entire supply chain relies on raw materials from China. So ultimately, all these choke points can in principle be weaponized, but that is not a sustainable strategy.
Even President Trump had to walk back some of the export controls on China, because the Chinese were saying, okay, then the raw materials are not coming your way. So there are potential ways to weaponize these interdependencies, but they ultimately make us all poorer. And that is worth remembering as a finance minister of India when approaching the other middle powers and the great powers.
Easier said than done. Our final section is, of course, the rapid-fire round. We all love this in this room. In one sentence, if I could ask all of you, and Johannes, you’re not getting away easily, you’re going to answer this as well. So, in one sentence: we’re sitting in New Delhi in 2035. Could you predict one development outcome that will have dramatically improved with the use of AI, and one risk we’ll regret not addressing now? I guess you already know my second answer.
I think the future of market concentration is something that we should be concerned about, and in 10 years we might regret not having discussed it sufficiently. On what will change in a positive direction: clearly health care and education, I think. It’s a no-brainer.
Anu?
So first of all, it’s so inspiring to hear all the use case examples, whether we talk about traffic or agriculture or education, because I often talk about the risks and the downsides, so it’s a really good reminder. I’m personally very excited, especially what happens in the education space but also in the health space. In terms of the risks, I think one thing that we are not paying attention to, and what I would even call a systemic risk, is the idea that many worry about AI getting almost too smart. But I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models.
And as an educator, when I think about how I will teach my students to use generative AI to enhance, but not substitute for, their capabilities: we would make a tremendous mistake if we forwent that hard work, that beautiful moment of thinking through hard problems and of creating and investing in our own capabilities. All of that cannot simply be outsourced, because otherwise we won’t even know what kind of questions we should be asking the AI going forward.
Michael.
I agree that there is huge potential in health and education; I think we’ll see big improvements there. But the risk is that the public sector won’t adopt these tools, and therefore the poor won’t have access to them, because, as Iqbal indicated, government systems and government workers may not adapt to use them. There are also risks of copycat regulation that is over-focused on problems other countries may be worrying about but that might not be relevant for emerging economies. And the final risk is that procurement systems are set up in such a way that we don’t get sufficient competition, we get lock-in, and we just don’t wind up with good quality.
Thank you, Michael. The buzzer has gone, but I’ll take a risk and quickly run through the others.
Yes. I am much more optimistic about the government actually adopting these tools. Whether it is that when you call 100, your call gets answered very quickly, the PCR van gets to your house much faster, or the hospitals are able to link your health records, I think government-sector productivity is going to leapfrog. The biggest risk, I think, is definitely the labor market. If there were a dial where I could slow down adoption and give the labor market time to catch up, I would use it; that’s my biggest worry. You talked about entry-level jobs. An entry-level coding job might be an entry-level job in the United States.
It’s the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to run out of jobs very quickly. And in the labor market, whether it is ESI, Provident Fund, or Gratuity, we keep piling on regulations and making it harder and harder to hire labor, when, on the other hand, capital is not taxed. We are giving people incentives to use AI, and we are taxing them, through provident fund and labor market regulations, for hiring labor. And that, for me, is the biggest risk.
So I think that for the first time in human history, we may actually have the tools available to target poverty reduction and poverty elimination initiatives at individuals. And that could be tremendously transformative. But at the same time, I do worry that we will not get the governance right, or that we won’t be able to make that governance sufficiently robust to prevent abuses.
Thank you very much to all of our panelists and to you for your time and attention once again. I had the very rare fortune of being able to peek at Michael’s screen while he was speaking, and I saw all the messy human notes. Our panelists are definitely not outsourcing their thinking anytime soon, and thank God for that. Thank you, ladies and gentlemen.
Johannes Zutt
Speech speed
141 words per minute
Speech length
1450 words
Speech time
612 seconds
AI as a leap‑frog tool for productivity and growth
Explanation
Johannes argues that AI offers a unique chance for emerging economies to bypass long‑standing development hurdles and boost growth and productivity. He sees AI as a game‑changer that can accelerate progress across sectors.
Evidence
“So, you know, for all countries, but especially for emerging markets and developing economies, AI can be a game changer, a unique opportunity to leapfrog longstanding development challenges.” [5]. “It offers clear opportunities to enhance growth and productivity.” [1].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Artificial intelligence | Social and economic development | The enabling environment for digital development
Infrastructure and skill gaps limit AI uptake
Explanation
He warns that many developing countries lack basic foundations such as reliable internet, electricity, and basic literacy, which hampers effective AI adoption. These constraints must be addressed before AI benefits can be realized.
Evidence
“At the same time, you know, particularly for developing economies and emerging markets, many of them are going to struggle to harness the potential that AI offers because of very basic issues around the foundations for effective AI use.” [19]. “They may not have an internet backbone that’s sufficiently strong.” [23]. “People in these countries may not have very, very basic skills of literacy and numeracy that enable them to work effectively with higher end devices.” [24].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Capacity development | Closing all digital divides | The enabling environment for digital development
AI can fill skill gaps in agriculture, health, finance
Explanation
Johannes highlights AI’s potential to address shortages of skilled personnel by providing pattern detection, forecasting, and resource allocation tools in sectors like education, health care, and agriculture.
Evidence
“So there’s clearly enormous potential for AI to fill skill gaps in the areas that I mentioned, also in education, in health care services, to detect patterns, to generate forecasts, to guide the allocation of public resources, and so on.” [14].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Artificial intelligence | Social and economic development | Capacity development
Risk of job losses in entry‑level, knowledge‑based roles
Explanation
He notes that AI automation may displace entry‑level, routine knowledge jobs, creating labor‑market challenges that require policy attention.
Evidence
“One of them is there will be some job losses, particularly sort of entry -level jobs that are very much knowledge or document -based, performing relatively rote work that can be taken over by automation.” [32].
Major discussion point
Risks of AI widening inequality and labor market disruption
Topics
The digital economy | Human rights and the ethical dimensions of the information society | Capacity development
Small AI: affordable, offline, locally relevant solutions
Explanation
Johannes defines “small AI” as practical, low‑cost applications that work with limited connectivity, data, and infrastructure, making them suitable for low‑resource settings.
Evidence
“Small AI meaning practical, affordable, locally relevant AI that addresses specific problems and also works where connectivity, data, skills, infrastructure are fairly limited.” [82].
Major discussion point
Small AI vs. foundational AI and market concentration
Topics
Artificial intelligence | Closing all digital divides | The enabling environment for digital development
Governments should create AI sandboxes and standards for safe experimentation
Explanation
He stresses the need for both public‑facing standards and private‑sector engagement, including sandbox environments, to enable safe and innovative AI deployment.
Evidence
“I think it’s important to recognize that if we’re going to make effective use of this tool, we need both a public‑facing effort to address the standards and the other issues, the interoperability and so on that I mentioned before, but also a private‑sector‑facing effort because it’s the private sector that’s actually generating, creating most of these applications that are working, particularly in the small AI area.” [59]. “We are helping governments to create the space that enables experimentation in an AI sandbox to develop the different applications that people in this incredibly creative country are coming up with to help people get on with their work and become more productive.” [169].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Artificial intelligence | The enabling environment for digital development | Monitoring and measurement
Governance failures could enable abuses and power concentration
Explanation
Johannes warns that without robust governance, AI could be misused, leading to concentration of power and potential abuses.
Evidence
“I do worry that we will not get the governance right or we won’t be able to make that governance sufficiently robust to prevent abuses.” [163].
Major discussion point
Risks of AI widening inequality and labor market disruption
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | The enabling environment for digital development
Ufuk Akcigit
Speech speed
163 words per minute
Speech length
1041 words
Speech time
382 seconds
Creative destruction differs between advanced and emerging markets
Explanation
Ufuk points out that the dynamics of creative destruction will vary, with emerging economies facing distinct challenges compared to advanced economies.
Evidence
“I would like to, you know, separate advanced economies from emerging or developing economies.” [51]. “Now, spillover is extremely important for creative destruction, for the future of innovation.” [44].
Major discussion point
AI as a development catalyst for emerging economies
Topics
The digital economy | Artificial intelligence | Social and economic development
Need for a business‑friendly environment to realise AI benefits
Explanation
He emphasizes that a supportive business climate is essential for entrepreneurs to develop and deploy AI solutions.
Evidence
“But at the end of the day, we need to make sure that the business friendly environment is there for entrepreneurs to come and exercise their ideas” [58].
Major discussion point
AI as a development catalyst for emerging economies
Topics
The enabling environment for digital development | The digital economy
Foundational layer is compute‑, data‑, talent‑intensive, leading to concentration
Explanation
He describes the foundational AI layer as having high entry barriers due to heavy compute, data, and talent requirements, which fosters market concentration.
Evidence
“When we look at the foundation layer, the entry barrier is really, really high, and, you know, the compute is very compute‑heavy.” [46]. “It’s very talent‑heavy.” [92]. “It’s very data‑heavy.” [93].
Major discussion point
Small AI vs. foundational AI and market concentration
Topics
Artificial intelligence | The digital economy
Concentration risk: incumbents dominate foundational AI market
Explanation
He notes that the foundational AI market is prone to concentration, with large incumbent information firms likely to capture most of the value.
Evidence
“So as a result, you know, this market, at least this layer, is very concentration‑prone.” [91]. “The target or the destination is large incumbent information companies, which again highlights where things are going in terms of the concentration.” [99].
Major discussion point
Small AI vs. foundational AI and market concentration
Topics
Artificial intelligence | The digital economy
Keeping the foundational layer contestable; universities as key players
Explanation
He argues that to prevent excessive concentration, the foundational layer should remain contestable, with universities playing a central role.
Evidence
“So that’s why if we will keep the foundational layer contestable, I think that the fundamental players there will be universities.” [95].
Major discussion point
Small AI vs. foundational AI and market concentration
Topics
Artificial intelligence | Capacity development
Labor‑market risk: rapid loss of entry‑level jobs
Explanation
Ufuk identifies the biggest risk as labor‑market disruption, especially the swift disappearance of entry‑level positions without adequate safety nets.
Evidence
“The biggest risk, I think, is definitely the labor market.” [35]. “If there was a dial where I could slow down the adaptation and give time to the labor market to catch up, that’s my biggest worry.” [41].
Major discussion point
Risks of AI widening inequality and labor market disruption
Topics
The digital economy | Human rights and the ethical dimensions of the information society
Michael Kremer
Speech speed
160 words per minute
Speech length
1592 words
Speech time
593 seconds
Multilateral policy actions can narrow development gaps
Explanation
Michael contends that coordinated actions by national governments and multilateral development banks can harness AI to reduce existing development disparities.
Evidence
“I think that if policymakers, primarily at the national level, but also in multilateral development banks, take appropriate actions and make appropriate investments, then I think AI has the potential to substantially narrow some of the gaps.” [12].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Financial mechanisms | The enabling environment for digital development
AI‑driven weather forecasts improve farmer decisions
Explanation
He provides evidence that AI weather forecasts are being used by millions of farmers, enhancing agricultural decision‑making.
Evidence
“Farmers respond to these AI weather forecasts.” [30]. “So there’s a strong rationale for national governments, in some cases supported by multilateral development banks, to make investments in producing and disseminating AI weather forecasts.” [68]. “The AI forecasts got that right, that was the only source of information that reached farmers with that.” [73]. “But weather forecasts are non‑rival.” [75].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Artificial intelligence | Social and economic development | Agricultural development
Multilateral institutions risk moving too slowly; need faster action
Explanation
He acknowledges concerns that multilateral bodies may lag behind rapid AI advances, urging more agile responses.
Evidence
“Is there a risk that multilaterals are moving too slowly relative to the technology?” [66]. “There are certain areas where the private sector is going to move, but there are other areas where they’re not going to move quickly, and it’s going to be very important for governments and for multilateral development banks and for philanthropy to move.” [71].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Financial mechanisms | The enabling environment for digital development
Evidence‑based innovation funds and staged financing to scale AI solutions
Explanation
He proposes the creation of evidence‑based innovation funds that provide tiered grants, supporting pilots and scaling successful AI applications.
Evidence
“One way is by encouraging innovation by setting up institutions like innovation funds, particularly evidence‑based, to echo Iqbal, I think evidence‑based innovation funds.” [116]. “It has tiered funding, so there’s initially very small… grants to pilot new ideas.” [168]. “Then there’s somewhat larger grants to rigorously test them as Iqbal emphasized and then for those that are most successful there’s funds to help transition them to scale up.” [167].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Financial mechanisms | Monitoring and measurement
Rigorous evaluation framework: model performance, user impact, scalability, continuous improvement
Explanation
Michael outlines a multi‑dimensional evaluation approach covering technical performance, user outcomes, scalability, and ongoing model refinement.
Evidence
“First, model evaluation.” [124]. “Second, user impact.” [134]. “Third… scalability and usage at scale, that’s more like an effectiveness trial in medicine… and the fourth area is continuous improvement: you want a system that improves the underlying models…” [189].
Major discussion point
Implementation challenges, trust, and evaluation
Topics
Monitoring and measurement | Artificial intelligence
Public sector may lag in adopting AI, leaving the poor without access
Explanation
He warns that if governments do not adopt AI tools, the benefits will not reach low‑income populations who rely on public services.
Evidence
“the risk is that the public sector won’t adopt these, and therefore the poor won’t have access to them.” [185].
Major discussion point
Risks of AI widening inequality and labor market disruption
Topics
Social and economic development | Human rights and the ethical dimensions of the information society
Anu Bradford
Speech speed
199 words per minute
Speech length
1374 words
Speech time
412 seconds
EU rights‑driven, innovation‑friendly regulation as a reference
Explanation
Anu points to the European Union’s rights‑based regulatory approach as a model that balances protection of fundamental rights with innovation.
Evidence
“The EU follows what I would call a rights‑driven approach to regulation.” [122]. “So the idea that choosing to follow… or imitate aspects of the European rights‑protective regulation would come at the cost of innovation, we need to understand better what drives the technological innovation and whether regulation should…” [123].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
India can adapt global lessons while crafting sovereign AI rules
Explanation
She argues that India is well‑positioned to incorporate international best practices into locally‑tailored AI regulations, preserving sovereignty.
Evidence
“I think India is in a very good position to take the lessons that serves its needs yet make the kind of local modification and variations that are more reflecting the distinct priorities of this country.” [135].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Artificial intelligence | The enabling environment for digital development
Myth that regulation necessarily stifles innovation; need to understand drivers
Explanation
Anu seeks to debunk the belief that regulation hampers AI progress, emphasizing that well‑designed rules can coexist with innovation.
Evidence
“But I really would like to debunk this myth that to me it’s a false choice to say that the reason we don’t see these large language models being developed in Europe is not because there’s a GDPR…” [144].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society
Global South must assert regulatory sovereignty amid US/China dominance
Explanation
She stresses that countries of the Global South should develop their own AI regulatory frameworks to avoid dependence on the major powers.
Evidence
“the Global South has the same kind of incentive for their own AI sovereignty, including then regulatory sovereignty, to design the rules that better work for their economies, for their societies…” [137]. “But I would remind even when encountering players like the United States and China that nobody in today’s world will be completely sovereign when it comes to AI space.” [140].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The enabling environment for digital development
Iqbal Dhaliwal
Speech speed
183 words per minute
Speech length
1151 words
Speech time
375 seconds
Small AI frees teachers’ and health workers’ time, improving outcomes
Explanation
Iqbal notes that AI applications that automate routine tasks free frontline staff, leading to better service delivery in health and education.
Evidence
“So if your AI application can free up the time of the health frontline workers, first of all, that’s a winner.” [28]. “It frees up the teacher time.” [81]. “There was a demand by the teachers to free up their time.” [83].
Major discussion point
AI as a development catalyst for emerging economies
Topics
Artificial intelligence | Social and economic development | Capacity development
Laboratory‑proven AI may fail in the field without proper training and system adaptation
Explanation
He cautions that AI tools that perform well in controlled settings can underperform in real‑world deployments if users are not adequately trained or systems are not adapted.
Evidence
“So some of these diagnostic things can work, have better predictability in the lab, but in the field, they end up decreasing, not only is their efficiency lower, but it lowers the efficiency of the doctors, because we have not trained them enough.” [172]. “We just assume that just because the technology works, even if it works in the field, the rest of the system will adapt to it.” [173].
Major discussion point
Implementation challenges, trust, and evaluation
Topics
Artificial intelligence | Capacity development | Monitoring and measurement
GST fraud‑detection algorithm not scaled due to concerns over human discretion and power
Explanation
He describes a case where a successful AI model for detecting bogus GST firms was not rolled out because authorities feared loss of human decision‑making power.
Evidence
“When it came time to scale up this program by the government, they refused to scale it up because you think about it, you have taken away the discretion of the human to decide whether they should raid Michael’s firm or they should raid Iqbal’s firm.” [178]. “The machine learning algorithm is able to increase the probability of predicting a bogus firm from 38% to 55% in one shot at a very, very low cost.” [181].
Major discussion point
Implementation challenges, trust, and evaluation
Topics
Artificial intelligence | Governance | The digital economy
Demand‑driven design that frees frontline workers’ time is crucial for adoption
Explanation
He argues that AI solutions should be built around clear demand signals and should free up staff time to ensure uptake and impact.
Evidence
“The second thing that is really important here was that this is a demand‑driven thing, right?” [186]. “But most importantly, there was a demand by the school districts to show progress.” [187]. “Free up time.” [188].
Major discussion point
Implementation challenges, trust, and evaluation
Topics
Artificial intelligence | Capacity development | Social and economic development
Shift of innovative resources to large incumbents and industry migration from academia
Explanation
He highlights a massive reallocation of talent and innovation from academic settings to large incumbent firms, raising concerns about concentration.
Evidence
“The more worrying part about this, which brings me back to the foundational model side of things, is that this created a massive out‑migration from academia to industry.” [110]. “A massive reallocation of innovative resources.” [109].
Major discussion point
Small AI vs. foundational AI and market concentration
Topics
Artificial intelligence | The digital economy | Capacity development
Concentration could limit diffusion of AI gains to poorer populations
Explanation
He points out that increasing market concentration may prevent AI benefits from reaching low‑income groups and regions.
Evidence
“In low‑ and middle‑income countries, they don’t have access to that.” [196]. “The poorer parts of the country that benefit the most because they will be leveraging a tool that they are not very familiar with…” [195].
Major discussion point
Risks of AI widening inequality and labor market disruption
Topics
Artificial intelligence | Social and economic development | The digital economy
Jeanette Rodrigues
Speech speed
174 words per minute
Speech length
1039 words
Speech time
356 seconds
Policymakers need to keep AI‑enabled interventions in mind
Explanation
Jeanette asks what policymakers should prioritize when designing AI‑enabled programs, emphasizing the need for clear guidance and focus on impact.
Evidence
“My question to you is that what should policymakers keep in mind when designing AI‑enabled interventions, especially when it comes to small AI and the targeted use cases?” [61]. “What should policymakers in the real world think about and keep at the top of their mind as they go ahead preparing policies considering AI?” [170].
Major discussion point
Policy, regulation, and AI sovereignty
Topics
The enabling environment for digital development | Artificial intelligence | Policy design
Agreements
Agreement points
AI has transformative potential for healthcare and education sectors
Speakers
– Johannes Zutt
– Ufuk Akcigit
– Anu Bradford
Arguments
AI enables people in those jobs to expand their skills and their effectiveness in delivering the products and services that they are trying to provide. It also helps, you know, very, very diverse groups of people in many, many different sectors of the economy
Healthcare and education will see dramatic improvements through AI applications
I’m personally very excited, especially what happens in the education space but also in the health space
Summary
All speakers agree that healthcare and education represent the most promising sectors for AI transformation, with potential for significant positive outcomes
Topics
Artificial intelligence | Social and economic development
Market concentration in AI is a significant concern requiring attention
Speakers
– Ufuk Akcigit
– Johannes Zutt
Arguments
Market concentration has been increasing since 1980, accelerating after 2000, with innovative resources shifting to large incumbent firms
I think the concentration, the future of market concentration is something that we should be concerned about and we might regret not having discussed this sufficiently in 10 years
Summary
Both speakers express concern about increasing market concentration in AI development and its potential negative implications for competition and innovation
Topics
Artificial intelligence | The digital economy | The enabling environment for digital development
Public sector adoption challenges pose risks to equitable AI access
Speakers
– Michael Kremer
– Iqbal Dhaliwal
Arguments
Government systems and workers may not adapt to use AI technologies, limiting access for the poor
Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems
Summary
Both speakers identify significant challenges in public sector AI adoption, including resistance to change and failure to adapt systems, which could prevent the poor from accessing AI benefits
Topics
Artificial intelligence | Capacity development | Social and economic development
Small AI and locally relevant applications are crucial for developing countries
Speakers
– Johannes Zutt
– Michael Kremer
Arguments
Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure
Private firms develop profitable applications, but public goods applications need government and multilateral support
Summary
Both speakers emphasize the importance of practical, locally-relevant AI solutions that can work within the constraints of developing country infrastructure and address specific local needs
Topics
Artificial intelligence | Information and communication technologies for development | Closing all digital divides
Similar viewpoints
AI presents significant opportunities for development if implemented thoughtfully with appropriate support systems and policy frameworks
Speakers
– Johannes Zutt
– Michael Kremer
– Iqbal Dhaliwal
Arguments
AI offers opportunities to leapfrog development challenges, with 15-16% of jobs in South Asia showing strong complementarity with AI
AI has potential to substantially narrow development gaps if appropriate policy actions are taken
AI applications should free up time for frontline workers rather than adding to their burden
Topics
Artificial intelligence | Social and economic development | Information and communication technologies for development
Structural factors like market access, capital availability, and talent retention are more important for innovation than regulatory constraints
Speakers
– Anu Bradford
– Ufuk Akcigit
Arguments
Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation
There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Successful AI implementation requires addressing institutional and governance challenges, not just technical capabilities
Speakers
– Iqbal Dhaliwal
– Johannes Zutt
Arguments
Technology deployment requires addressing power dynamics and institutional resistance to change
Governance and regulatory safeguards are critical challenges, especially for developing countries
Topics
Artificial intelligence | The enabling environment for digital development | Capacity development
Unexpected consensus
Labor market disruption as primary AI risk
Speakers
– Ufuk Akcigit
– Johannes Zutt
Arguments
Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development
AI also creates a number of challenges. One of them is there will be some job losses, particularly sort of entry-level jobs that are very much knowledge or document-based
Explanation
Despite their different backgrounds (academic economist vs. World Bank practitioner), both speakers converge on labor displacement as the most significant risk, particularly for developing countries where entry-level jobs represent crucial economic opportunities
Topics
Artificial intelligence | The digital economy | Social and economic development
Need for evidence-based AI evaluation
Speakers
– Michael Kremer
– Iqbal Dhaliwal
Arguments
First, model evaluation. So AI companies typically do that part. How good is the model output for specific tasks? You know, forecasting the weather. Does it do a good job? Does it match your local language well? Second, user impact
I think what we really need to do is separate the hype from the reality on the ground. And the reality on the ground is that many of these technologies are not having the final impact that we are hoping for
Explanation
Both the Nobel laureate economist and the development practitioner strongly emphasize rigorous evaluation methodologies, showing unexpected alignment between academic and field perspectives on the importance of evidence-based assessment
Topics
Artificial intelligence | Monitoring and measurement | Social and economic development
Human capability preservation in AI era
Speakers
– Anu Bradford
– Iqbal Dhaliwal
Arguments
Risk of humans becoming overly dependent on AI and losing critical thinking capabilities
Demand-driven AI solutions that address real needs of users, teachers, and institutions are most successful
Explanation
The legal scholar and development practitioner unexpectedly converge on concerns about maintaining human agency and capabilities, emphasizing that AI should enhance rather than replace human thinking and decision-making
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Overall assessment
Summary
The speakers demonstrate strong consensus on AI’s transformative potential for healthcare and education, the importance of addressing market concentration concerns, challenges in public sector adoption, and the need for locally-relevant AI solutions. There is also significant agreement on the importance of evidence-based evaluation and addressing institutional barriers to implementation.
Consensus level
High level of consensus across diverse perspectives (World Bank, academia, legal scholarship, development practice) suggests robust foundation for policy development. The alignment between theoretical concerns and practical implementation challenges indicates that policy frameworks addressing these shared concerns could gain broad support across different stakeholder communities.
Differences
Different viewpoints
Speed of AI adoption and labor market adaptation
Speakers
– Ufuk Akcigit
– Iqbal Dhaliwal
Arguments
Labor market disruption is the biggest concern, especially for entry-level jobs that drive economic development
AI applications should free up time for frontline workers rather than adding to their burden
Summary
Akcigit is deeply concerned about rapid AI adoption displacing workers faster than the labor market can adapt, particularly entry-level coding jobs that built India’s tech hubs. Dhaliwal focuses on designing AI to complement rather than replace workers, emphasizing applications that free up time for higher-value tasks.
Topics
Artificial intelligence | The digital economy | Social and economic development
Public sector AI adoption capability
Speakers
– Michael Kremer
– Ufuk Akcigit
Arguments
Government systems and workers may not adapt to use AI technologies, limiting access for the poor
Public sector productivity will improve significantly through AI adoption in government services
Summary
Kremer is pessimistic about public sector adaptation to AI, viewing it as a major risk that could exclude the poor from AI benefits. Akcigit is optimistic about government AI adoption, predicting dramatic improvements in service delivery and response times.
Topics
Artificial intelligence | Social and economic development | Capacity development
Primary AI risks for humanity
Speakers
– Anu Bradford
– Ufuk Akcigit
Arguments
Risk of humans becoming overly dependent on AI and losing critical thinking capabilities
There’s concerning migration of AI talent from academia to industry, reducing open science and increasing protected patents
Summary
Bradford focuses on the risk of human intellectual degradation from AI dependency, while Akcigit is concerned about structural changes in the innovation ecosystem, particularly the brain drain from academia to industry affecting open science.
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Capacity development
Unexpected differences
Regulation versus innovation trade-off
Speakers
– Anu Bradford
– Implicit assumption by others
Arguments
Innovation vs. regulation is a false choice – Europe’s innovation gap stems from market fragmentation, capital constraints, and talent issues, not regulation
Explanation
Bradford’s strong rejection of the regulation-innovation trade-off is unexpected given the common assumption that regulation stifles innovation. Her detailed evidence about Europe’s structural issues rather than regulatory burden challenges conventional wisdom about AI governance.
Topics
Artificial intelligence | The enabling environment for digital development | The digital economy
Trust in AI technology implementation
Speakers
– Iqbal Dhaliwal
– Other speakers
Arguments
Many promising AI technologies fail due to trust issues and inadequate adaptation of surrounding systems
Explanation
While other speakers focus on technical capabilities and policy frameworks, Dhaliwal’s emphasis on trust and human factors as primary barriers to AI success is unexpected. His examples of doctors not using superior AI diagnostic tools due to trust issues reveals a different dimension of implementation challenges.
Topics
Artificial intelligence | Capacity development | Social and economic development
Overall assessment
Summary
The speakers show broad agreement on AI’s transformative potential for development but significant disagreements on implementation approaches, risk priorities, and institutional capabilities. Key tensions exist between optimistic and cautious views of public sector adaptation, different prioritization of risks (labor displacement vs. human dependency vs. market concentration), and varying emphasis on technical solutions versus institutional reform.
Disagreement level
Moderate disagreement with high implications – while speakers share common goals of harnessing AI for development, their different approaches to risk management, implementation strategies, and institutional capabilities could lead to very different policy recommendations and outcomes for developing countries.
Partial agreements
All speakers agree AI has tremendous potential for developing countries, but disagree on implementation approaches. Zutt emphasizes small AI solutions, Kremer focuses on public goods applications needing government support, while Akcigit stresses the need to fix underlying business environments first.
Speakers
– Johannes Zutt
– Michael Kremer
– Ufuk Akcigit
Arguments
Focus should be on ‘small AI’ – practical, affordable, locally relevant AI that works with limited infrastructure
Private firms develop profitable applications, but public goods applications need government and multilateral support
AI creates fantastic opportunities for developing countries but requires fixing underlying business environment issues
Topics
Artificial intelligence | Social and economic development | The enabling environment for digital development
Both speakers recognize the importance of governance and institutional factors in AI implementation, but Bradford focuses on regulatory sovereignty challenges while Dhaliwal emphasizes power dynamics and resistance within existing institutions.
Speakers
– Anu Bradford
– Iqbal Dhaliwal
Arguments
Global South has incentive for AI sovereignty but regulating AI is difficult even for established bureaucracies
Technology deployment requires addressing power dynamics and institutional resistance to change
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
Takeaways
Key takeaways
AI offers significant potential for developing countries to leapfrog development challenges, particularly through ‘small AI’ applications that are practical, affordable, and locally relevant
Success requires addressing foundational issues like infrastructure, digital literacy, and business environment rather than just deploying technology
Market concentration in AI’s foundational layer poses risks, with innovative resources increasingly shifting to large incumbent firms and away from open science
Effective AI implementation depends on demand-driven solutions that free up time for frontline workers and integrate well with existing systems
The choice between innovation and regulation is false – successful AI adoption requires both appropriate governance frameworks and supportive business environments
Public sector applications of AI (weather forecasting, digital identity, traffic management) require government and multilateral support as they won’t attract sufficient private investment
Trust in technology and adaptation of surrounding systems are critical factors that often cause promising AI applications to fail in real-world deployment
Resolutions and action items
World Bank Group to continue focus on ‘small AI’ applications working with governments across Indian states (Uttar Pradesh, Maharashtra, Kerala, Haryana, Telangana)
Need for evidence-based innovation funds with tiered funding structure for piloting, testing, and scaling AI applications
Governments and multilateral development banks should invest in AI applications for public goods like weather forecasting and digital identity systems
Requirement for continuous A/B testing and impact evaluation in AI procurement processes
Development of AI regulatory frameworks that balance innovation with rights protection, adapted to local contexts rather than copying templates
Unresolved issues
How to address the fundamental tension between AI-driven job displacement and the need for economic development in emerging markets
Who will ultimately set AI rules for the Global South given concentration of power in US and China
How to prevent the migration of AI talent from academia to industry and maintain open science
How to ensure public sector adoption of AI technologies when government systems resist change
How to balance AI sovereignty aspirations with the reality of global interdependence in AI supply chains
How to address labor market regulations that incentivize AI adoption over human employment
How to maintain human cognitive capabilities while leveraging AI tools in education and decision-making
Suggested compromises
Focus on AI applications that complement rather than replace human workers, particularly in freeing up time for higher-value tasks
Develop regulatory approaches that learn from established frameworks (like EU’s rights-driven approach) while adapting to local priorities and contexts
Balance between foundational AI development and application-layer innovation, recognizing different entry barriers and concentration risks
Create procurement systems that encourage competition while ensuring quality and avoiding vendor lock-in
Pursue AI sovereignty goals while acknowledging interdependence and avoiding counterproductive techno-nationalism
Implement AI solutions gradually with proper training and system adaptation to address trust and adoption challenges
Thought provoking comments
Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies? Why was it not up or out? Why was it not very competition friendly? Why was the best predictor of firm size in emerging or developing economies the size of the family and/or the number of male children? These are still lingering issues, and AI will not bring magic unless we understand and fix the business environment in these economies.
Speaker
Ufuk Akcigit
Reason
This comment cuts through the AI hype to address fundamental structural issues. It challenges the assumption that AI will automatically solve development problems and forces the discussion to confront deeper institutional and cultural barriers to economic growth.
Impact
This shifted the conversation from optimistic AI use cases to a more sobering examination of underlying constraints. It prompted Jeanette to pivot toward ‘real world’ considerations and influenced subsequent speakers to address implementation challenges rather than just technological possibilities.
I really would like to debunk this myth; to me it is a false choice. The reason we don’t see these large language models being developed in Europe is not the GDPR… It’s not the AI Act. The reason there is a perceived innovation gap between the United States and Europe comes down to four things: no digital single market; no deep, robust capital markets union (Europe attracts 5% of global venture capital versus 50% for the US); legal frameworks and cultural attitudes toward risk-taking; and success in harnessing global talent.
Speaker
Anu Bradford
Reason
This systematically dismantles a widely held belief about the regulation-innovation tradeoff, providing concrete evidence that structural economic factors, not regulation, drive innovation gaps. It reframes the entire debate about how developing countries should approach AI governance.
Impact
This fundamentally changed the framing of the regulation vs innovation debate. It gave policymakers permission to think about protective regulation without fearing innovation loss, and shifted focus to the real drivers of technological development – capital markets, talent, and risk culture.
Everything that we do in the field ends up adding to the teacher’s time, adding to the nurse’s time, adding to the Anganwadi worker’s time. Very few technologies do that: free up time. So if your AI application can free up the time of frontline health workers, first of all, that’s a winner.
Speaker
Iqbal Dhaliwal
Reason
This provides a practical, field-tested criterion for evaluating AI interventions that cuts through technological complexity to focus on human impact. It offers a simple but powerful framework for policymakers to assess AI projects.
Impact
This introduced a concrete evaluation framework that other panelists could build upon. It grounded the abstract discussion in practical implementation reality and provided a memorable heuristic for the audience to apply in their own contexts.
When people move from academia to industry, their publication record goes down by 50%. They start patenting 600% more after they move, which means that we are moving from open science to more protected science. Now, spillover is extremely important for creative destruction, for the future of innovation.
Speaker
Ufuk Akcigit
Reason
This reveals a hidden but critical consequence of AI development – the shift from open knowledge sharing to proprietary research. It connects talent migration to long-term innovation capacity in a way that’s not immediately obvious but has profound implications.
Impact
This introduced a completely new dimension to the discussion about AI’s impact on innovation ecosystems. It elevated the conversation from immediate applications to systemic effects on knowledge production and sharing, influencing how other panelists thought about long-term consequences.
I am more worried about us getting dumber as a humanity. There is a temptation to start skipping steps, outsourcing your thinking and your creativity to these models… we will make a tremendous mistake if we forgo that hard work, that beautiful moment of thinking through hard problems and creating and investing in our own capabilities.
Speaker
Anu Bradford
Reason
This shifts focus from AI becoming too powerful to humans becoming too dependent, introducing a philosophical dimension about human agency and capability development that’s often overlooked in technical discussions.
Impact
This comment introduced a deeply humanistic perspective that balanced the technical and economic focus of the discussion. It prompted reflection on education and human development strategies, adding emotional resonance to the policy considerations.
An entry-level coding job might be just an entry-level job in the United States. It is the aspirational job that created the Gurgaons and Noidas and Mohalis of this country. And those people are going to be running out of jobs very quickly… we are giving incentives to people to use AI, and we are taxing them, through provident fund and labor market regulations, to hire labor.
Speaker
Ufuk Akcigit
Reason
This powerfully illustrates how AI’s impact varies dramatically by economic context – what’s entry-level displacement in one country represents the destruction of an entire economic development model in another. It also connects AI adoption to specific policy contradictions.
Impact
This comment brought urgent specificity to abstract discussions about job displacement, making the stakes tangible for the Indian audience. It connected AI policy to broader economic development strategy and highlighted policy inconsistencies that needed immediate attention.
Overall assessment
These key comments fundamentally shaped the discussion by consistently challenging surface-level optimism about AI and forcing deeper examination of structural, institutional, and human factors. Ufuk Akcigit’s interventions were particularly influential in grounding the conversation in economic realities and long-term systemic effects. Anu Bradford’s contributions reframed conventional wisdom about regulation and introduced philosophical dimensions about human agency. Iqbal Dhaliwal provided practical frameworks that made abstract concepts actionable. Together, these comments transformed what could have been a typical ‘AI will solve everything’ discussion into a nuanced examination of how technology interacts with existing power structures, institutions, and human capabilities. The conversation evolved from optimistic use cases to structural constraints, from technical possibilities to implementation realities, and from immediate benefits to long-term systemic risks – creating a much more sophisticated and policy-relevant dialogue.
Follow-up questions
What will happen to creative destruction in the future with AI, particularly in the foundational layer versus application layer?
Speaker
Ufuk Akcigit
Explanation
This is critical for understanding long-term economic impacts and market concentration risks in AI development
Why was there no entrepreneurship and dynamism before the AI revolution in emerging economies?
Speaker
Ufuk Akcigit
Explanation
Understanding underlying structural issues is essential before AI can effectively transform business environments in developing countries
How can we ensure the foundational layer of AI remains contestable and doesn’t become overly concentrated?
Speaker
Ufuk Akcigit
Explanation
Market concentration in foundational AI could limit innovation and competition, affecting downstream applications
How can we keep universities healthy in the AI ecosystem to maintain open science and spillovers?
Speaker
Ufuk Akcigit
Explanation
The migration of AI talent from academia to industry is reducing open science and could harm future innovation
How can we design procurement systems to ensure sufficient competition and avoid lock-in with AI technologies?
Speaker
Michael Kremer
Explanation
Poor procurement could lead to monopolistic situations and reduced quality in AI services for governments
How can we adapt government systems and workers to effectively use AI technologies?
Speaker
Michael Kremer
Explanation
Government adoption challenges could prevent the poor from accessing AI benefits in public services
How can we train healthcare workers and other professionals to effectively use AI diagnostic tools?
Speaker
Iqbal Dhaliwal
Explanation
Studies show that even superior AI tools can reduce efficiency if users aren’t properly trained to trust and use them
How can we adapt existing power structures and systems to accommodate AI-driven decision making?
Speaker
Iqbal Dhaliwal
Explanation
Resistance to scaling AI solutions often stems from concerns about losing human discretion and power
How can we reform labor market regulations to balance AI adoption with employment protection?
Speaker
Ufuk Akcigit
Explanation
Current regulations may incentivize AI adoption over human hiring, potentially accelerating job displacement
How can we develop robust governance frameworks to prevent abuses in AI-powered poverty targeting?
Speaker
Johannes Zutt
Explanation
While AI could enable precise poverty interventions, inadequate governance could lead to misuse or discrimination
How can we ensure humans don’t become overly dependent on AI and lose critical thinking capabilities?
Speaker
Anu Bradford
Explanation
There’s a systemic risk that outsourcing thinking to AI could diminish human cognitive abilities and creativity
What are the early indicators of market concentration in AI and how should we monitor them?
Speaker
Ufuk Akcigit
Explanation
Understanding concentration trends is crucial for policy interventions before market dominance becomes entrenched
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.