Panel Discussion: Inclusion, Innovation & the Future of AI
20 Feb 2026 16:00h - 17:00h
Summary
The panel opened by framing AI’s growing role in daily life and the need to balance excellence with inclusion in policy and practice [4-9][14-16]. Dean argued that AI should first be governed through existing legal frameworks such as liability and product regulations, rather than creating a single new AI law, and that the presumption should be that current law is sufficient unless proven otherwise [24-32][41-44]. He identified “tail events” – low-probability but high-impact risks – as the area where proactive governance is justified, and he has advocated for transparency legislation to address such threats [34-40]. Dean also emphasized that AI compute infrastructure, especially data centers powering frontier models, ought to be treated as critical national infrastructure comparable to ports or railroads [111-114].
Gabriela stressed that AI development requires a whole ecosystem of government-backed investments, incentives, institutions and infrastructure, citing publicly funded U.S. precedents such as DARPA and the early Internet [48-53][54-58]. She described AI technologies as natural monopolies that create market distortions, and argued that public-private partnerships are needed to repair the broken “diffusion machine” and ensure broader economic inclusion [133-146].
Ivana (representing Wipro) explained that AI governance must go beyond compliance, embedding privacy, security, and resilience into products and adopting a techno-legal approach that translates law into technical tools [66-74][75-84]. She highlighted the importance of continuous monitoring of AI systems in production, designing trust mechanisms for agentic AI, and involving employees in governance to shift from pure risk-management to “AI for good” [80-87][94-107].
When asked whether inclusion is an ethical imperative or a competitive strategy, Gabriela replied that the two are inseparable, asserting that inclusive policies boost market competitiveness by preventing concentration and fostering a level playing field [133-146]. Dean identified a blind spot in current discourse: the tendency to dismiss frontier models as unnecessary, despite massive public investment and their potential to create capabilities beyond today’s imagination, especially for the Global South [156-169].
Dean reiterated that building AI readiness will require new institutions and infrastructure, while managing both everyday harms and future catastrophic risks, and warned that concentration of AI power is a key political-economic challenge [200-214]. The moderator concluded that the discussion highlighted trade-offs, potential solutions, and the urgency of preparing national AI capabilities for competitive advantage and inclusive growth [216-217].
Keypoints
Major discussion points
– Defining AI inclusion beyond data representation – inclusion is framed as access to compute, standards, policy frameworks and regulatory clarity, not just equitable datasets [4-13].
– Governance strategy: use existing law, intervene for tail-risk events, and treat AI infrastructure as critical – Dean argues that AI should first be governed through current liability and product regulations, with proactive rules only for low-probability, high-impact “tail” events, and that AI data centers should be classified as critical national infrastructure [24-33][34-42][111-124].
– Public-private ecosystem and market concentration – Gabriela stresses that government-funded research (e.g., DARPA, Internet) is essential, that AI markets behave as natural monopolies/oligopolies requiring policy to curb distortions, and that inclusive policies must coexist with competitiveness [48-55][140-151].
– Organizational AI governance as a strategic capability – Ivana outlines the shift from compliance to a broader “trust stack” that embeds privacy, security, and ethical design, requires continuous monitoring, and involves employees in the governance loop [65-84][94-107].
– Blind spots and future challenges – Panelists identify (i) the tendency to dismiss frontier models on the assumption that cheaper models always suffice, (ii) the lack of education and skill pipelines to prepare societies for AI, and (iii) the absence of global consensus on “red-line” prohibitions [156-168][171-182][189-198].
Overall purpose / goal of the discussion
The panel was convened to explore how AI can be made inclusive and beneficial while preserving national competitiveness. Participants examined policy trade-offs, governance mechanisms, ecosystem investments, and practical implementation steps needed to build AI readiness at both governmental and organizational levels, and to surface gaps that must be addressed for responsible, equitable AI deployment worldwide.
Overall tone and its evolution
The conversation began with an upbeat, collaborative tone (“such a pleasure to be here…”) and a forward-looking optimism about AI’s benefits. As the dialogue progressed, speakers introduced more cautionary notes, highlighting regulatory gaps, tail-risk threats, market monopolies, and the need for rigorous governance, shifting the tone toward a balanced, problem-solving stance. By the closing remarks, the tone became reflective yet hopeful, acknowledging significant challenges (education gaps, lack of global red lines) while reaffirming confidence that coordinated policy and ecosystem action can steer AI toward inclusive, competitive outcomes.
Speakers
– Ivana Bartoletti – AI governance, privacy, security, and responsible AI implementation; virtual panelist [S1]
– Gabriela – AI policy, public-private partnerships, inclusion and market dynamics; (no specific title provided)
– Dean – AI policy and governance expert with the Foundation for American Innovation and former member of the White House Office of Science and Technology Policy [S7]
– Speaker 1 – Moderator/host of the panel discussion [S10]
Additional speakers: None
Full session report

The session opened with the moderator welcoming the panel and framing the week-long focus on “who truly benefits from artificial intelligence and under what rules” [1-4]. She emphasized that AI has moved from a niche enterprise tool to a pervasive part of daily life, spanning work, entertainment, healthcare, hiring and many other domains [5-9], and argued that “inclusion in AI” goes far beyond equitable datasets to include access to compute, common standards, supportive policy frameworks and clear cross-border regulations [10-13]. She also presented the discussion as a trade-off between “excellence” and “inclusion” [14-16].
Dean’s opening remarks
Dean argued that AI should first be governed through the existing web of liability, product-safety and other statutes rather than a stand-alone AI law [24-33]. He urged governments to map current legal tools, such as the United States’ liability doctrine, to AI use cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise [40-44]. He identified “tail events” (low-probability, high-impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified [34-40]. Dean said data centers that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure; he also referenced the U.S. policy to subsidize AI data-center development in the Global South [111-124]. He noted his prior role in the Trump administration’s White House Office of Science and Technology Policy, where he helped shape the AI Action Plan and the AI Export Program [125-130]. Finally, Dean warned that dismissing frontier models as unnecessary would be a serious oversight, pointing out that the United States is allocating roughly $1 trillion this year to AI development, which will enable capabilities we cannot yet name and could offer particular opportunities for the Global South [156-169].
Gabriela’s response
Gabriela broadened the conversation to the ecosystem needed for responsible AI development. She cited historic government-funded programmes such as DARPA and the early Internet as seeds for breakthrough technologies and called for similar public investment today to nurture an AI ecosystem of incentives, institutions and infrastructure [48-55]. Describing AI technologies as “natural monopolies” that tend toward oligopolistic concentration, she warned that this breaks the “diffusion machine” that spreads innovation [133-146]. To counteract market distortions she advocated public-private partnerships, open-research models and policies that promote economic inclusion, reduce concentration and ensure a level playing field [140-151]. Gabriela highlighted India’s digital identification system as an example of large-scale public investment, noting how the government financed a registry capable of handling 100 million people per month [150-155]. She repeatedly linked inclusion to both ethical duty and competitive advantage, arguing that inclusive policies boost market competitiveness by preventing concentration and fostering a robust diffusion of AI benefits [131-138][140-146]. She also stressed the urgent need to overhaul school curricula, reduce teachers’ administrative burdens and invest in teacher training so the future workforce can effectively engage with AI tools [171-183].
Ivana’s contribution
Ivana positioned AI governance as a strategic organisational capability rather than a mere compliance checklist. She explained that governance must embed privacy, security, legal safeguards and resilience into AI products from design through deployment, requiring investment in privacy-enhancing technologies and a “techno-legal” translation of law into technical tools [65-74][75-84]. She highlighted that many early AI-governance initiatives were led by privacy professionals because a large share of AI harms are privacy-related [70-73]. Continuous post-deployment monitoring, mechanisms for human override of agentic AI, and protection against model drift and hallucinations were presented as essential components of a “trust stack” [80-87]. Ivana also referenced a recent World Economic Forum article she authored on designing trust for agentic AI [115-118]. While acknowledging AI’s benefits, she warned against naïvely ignoring risks such as disinformation, deepfakes and the reinforcement of existing inequalities, urging a shift from pure risk-management to an “AI-for-good” mindset that engineers fairness and inclusivity into systems [94-107].
Moderator follow-up and “mindset” framing
The moderator returned to the inclusion question, asking whether it should be framed primarily as an ethical duty or a competitive strategy. She reiterated that inclusion requires building the necessary mindset, skill sets and tool sets, emphasizing AI literacy among teachers, students and employees [165-167][84-88][126-130]. Gabriela echoed the need to upgrade education systems and up-skill teachers, noting that no major education system has yet been refreshed to teach AI concepts or equip teachers with the required tools [171-182].
Blind-spot round
– Dean warned that overlooking frontier models is a serious blind spot, stressing the scale of U.S. investment and the unknown capabilities these models will unlock [156-169].
– Gabriela identified chronic under-investment in education and skills development as a fundamental barrier to AI readiness [171-182].
– Ivana pointed out the lack of a global consensus on “red lines” for AI, clear ethical boundaries that all nations agree not to cross, leaving a gap in international governance [189-198].
Areas of agreement
Both the moderator and Ivana agreed that inclusion must go beyond data representation to include compute access, standards and regulatory clarity [9-13][104-106]. Dean and the moderator concurred that AI compute facilities should be classified as critical infrastructure and that governments should partner with the private sector to develop them [111-124]. Gabriela and the moderator shared the view that AI policy should be built as an ecosystem of investments, incentives and institutions rather than relying solely on regulation [48-52][15].
Points of disagreement
Dean maintained that existing legal frameworks are generally sufficient and that the burden of proof lies with regulators proposing new rules [24-33][40-44], whereas Gabriela argued that the rapid evolution of AI demands a broader ecosystem of public investment and possibly new regulatory tools to prevent market distortions [48-55][53-55]. Dean’s focus on proactive governance for tail-risk events contrasted with Ivana’s call for a fairness-centric, techno-legal approach that embeds ethical design throughout the AI lifecycle [34-38][102-107].
Key take-aways
1. Existing law is presumed adequate for most AI applications; new rules are needed only for demonstrated tail-event gaps [24-33][40-44].
2. Proactive governance is required for low-probability, high-impact AI risks [34-38].
3. AI governance is a strategic capability integrating privacy, security and techno-legal tools [65-74][75-84].
4. Inclusion is both an ethical imperative and a competitive advantage [131-138].
5. Public-private partnerships and government investment are essential to curb market concentration and nurture open research [48-55][140-151].
6. AI compute facilities should be treated as critical national infrastructure [111-124].
7. AI’s natural-monopoly tendencies require policy interventions to prevent concentration [133-146][208-214].
8. Urgent reform of education systems and up-skilling of teachers and workers are needed for AI readiness [171-183].
9. Blind spots include under-estimating frontier models and the absence of global “red-lines” [156-169][189-198].
Closing
The moderator summarized the trade-offs discussed, the potential solutions offered, and the imperative to build AI readiness for national competitiveness [215-217]. Dean closed by reminding the audience of the massive institutional and infrastructural challenges ahead, the need to manage both everyday harms and future catastrophic risks, and the importance of preventing concentration of AI power through coordinated policy and competitive dynamics [200-214].
Overall, the panel expressed optimism tempered by acknowledgement of significant challenges, leaving the audience with a clear roadmap: treat compute as critical infrastructure, leverage existing legal tools while targeting tail-risk gaps, invest in public-private ecosystems, embed fairness and trust into AI systems, and urgently reform education to prepare the next generation for an AI-augmented world.
Full transcript

And it’s such a pleasure to be here with such lovely panelists and an audience who’s possibly going to skip some of the lunchtime to join us today in our discussions. Let me get started by really talking about, you know, we are towards the end of the week. It’s been a fantastic week, lots of conversations. And one thing which I reflect back on most of the conversations has been what is the most defining question of our time, which is who all is artificial intelligence really benefiting and with what rules? If I look at it, AI’s enterprise infrastructure, AI’s public sector capability, AI’s even geopolitical leverage is what we’ve seen across all these days. But more importantly, AI has become a part and parcel of our daily lives.
It stretches from everything from making our work life easier, to making sure that we get our entertainment as and when and how we require it. And more importantly, from healthcare to hiring to anything you can possibly imagine. When we really focus on inclusion in AI, one thing which has kind of stayed as a thought for the last five days is inclusion in AI is way beyond equitable representation in data sets. It’s, you know, it’s everything. It’s about access to compute. It’s about standards. It’s about having a right policy framework, which encourages everyone, everywhere. And more important, it’s also getting clarity on regulations, which are there across countries, to see how it can really be beneficial.
Now, to take the discussion ahead, today’s conversation is going to be really about trade-offs. Excellence and inclusion. It’s been interesting on how to navigate both these terminologies whenever you think of any policy or a framework. So I’m going to start with my first question to Dean. So Dean, you know, you’ve been working at the frontier of AI policy. You’ve been at the institutional design through the Foundation for American Innovation. There is a lot of growing debate between self-regulation and innovation-first approaches. Where should policymakers really draw the line without really undermining national competitiveness?
So I think, first of all, thanks for being here. And thank you for having me. It’s an honor to be here. The way I think about this is that, you know, we will govern AI through a very large intersecting web of different things, right? It’s not just going to be one day one bill is going to get passed and that’s going to be the AI bill and then AI is regulated, right? AI is currently regulated today. It’s regulated by many different things. It’s regulated in the United States by things like liability doctrine and a lot of existing product regulations and things like that. So I think step number one for government is let’s take the existing bodies of law, you know, many of which, just as in India and the United States, we’re quite proud of.
Many countries, you know, are very proud of their regulatory and legal traditions. We have a common law tradition in the United States that we are proud of. So let’s take those things and let’s figure out how to apply them to AI. And then, you know, the companies, I think, thus far, the major AI labs have been, I think, responsible stewards when it comes to the major risks. Now, the area where you might need proactive governance first, at least in my view, is really this domain of tail events, potential events that could be very serious, have very serious consequences, that are relatively unlikely. So, you know, pandemic is an example of a tail event.
And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an area where some proactive governance, I think, is needed. And I’ve written supportively about transparency laws in the United States along those lines. So I think that’s where when we have a clear and demonstrated threat model and we have a, you know, clear evidence that existing law is not sufficient. I think one area, one aspect of AI governance that I often push back on and that I often dispute is there’s this kind of assumption baked in whenever we talk about AI regulation that the existing law is insufficient and that the current status quo is that AI is unregulated in some way.
And I think that should actually be, we should have the opposite presumption. We should presume that existing law is sufficient and that there is some sort of good solution. And then, yeah, the burden of proof should be on the person who wants the regulation to show why existing law doesn’t work.
Thank you, Dean. That’s very interesting that we go with an assumption. And with that, Gabriela, let me move on to you. So should, you know, how can governments foster open innovation, assuming to whatever Dean said, while minimizing the risk of market distortions?
Well, I think that it’s a very nice segue because I completely agree with Dean that there is a very broad portfolio of policy interventions that has not only to do with regulations. Regulation is looking at the way the technology is developed. But we need to think about this as an ecosystem that needs to be nurtured, that needs investments, that needs incentives, that needs institutions and that needs infrastructure. And therefore it’s not only the technological conversation about what do we do with AI, but what kind of an economy we want that is really productive, that delivers for people with AI, and for that you need government intervention. And let me tell you, what is very interesting is we usually tend to think that the private sector is an innovative force and the government is a brake.
In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now, were financed at some point by basic research that was paid for by the government of the U.S. And many countries fill that space, and that’s why it’s so important that we invest in research, because it cannot be that the research is being done only by the private sector. And then it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table and then it needs to be shared, which is not always the case when you have private sector innovation. So I will also contest this way of framing the issues in terms of the government only creating market distortions, because at the end it’s about how the government can be effective in addressing the market distortions that we see many times emerge. In this case, I like to see the AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly and now it’s an oligopoly. At the end it’s very concentrated, so there are market distortions now that need to be addressed by government policies. Again, there is a wide range of things that needs to be done to ensure that the main distortion that can occur nationally and globally, which is that this is a story of a lucky few, is prevented.
Thanks, Gabriela. And you know, it’s interesting you mention that because at least in India, whenever we speak about public-private partnerships, it’s all about how we are moving from a culture of competition to cooperation, to really working together so that the markets stay healthy. With that, we move over to you, Amanda. So, Amanda, you know… Eva. Sorry. Yeah, Amanda is missed. So, Eva, let’s talk about the global AI governance strategy at Wipro, right? Many organizations are developing a responsible AI framework. How do we move beyond policy statements through measurable accountability, and specifically when we have to do that at scale?
Thank you very much and it’s great to be here and thanks to all of you for joining. So I have what I say often, I have the best job in the world, which is basically to translate a lot of the things that we’ve been discussed over the last few days into practice. So basically means we’ve heard democratisation, we’ve heard inclusion, we’ve heard how it’s important that AI is inclusive and by inclusivity, it’s not just about access, as was said, but it’s also making sure that many get the opportunity to participate in the design of this technology, but also in the decisions around what we are producing and who is going to be benefiting from that.
We, I think, in a lot of our work… What happened over the last few years when generative AI came about is that a lot of organizations had to face something quite dramatic, if you think about it, because before then AI was very much for engineers, for scientists to work with, if you think about machine learning, people who knew about AI. Then what happened a few years ago is that generative AI came and everybody got access to it. And do you remember how companies started to scramble with who’s got access? Do we leave people, our employees, to access these systems? Do we create our own private instance? How do we navigate the fact that we want people to play with these tools with the fact that we have to be safe and secure as an enterprise? And then things evolved, and, if you know how the debate around governance started, a lot of organizations started to set up governance boards and they started to set up ethics boards and all of it. And I think we realized at some point, and I took on the challenge of AI governance from a privacy standpoint, and many people in organizations took on AI governance from a privacy standpoint, not only because a lot of AI harms are actually privacy harms, but also because privacy professionals knew about risk management. And then we realized that actually governance of AI is much more than that. It’s much more than risk management, it’s much more than compliance. We realized, and I think this summit shows that really clearly, that AI governance is really about a strategic capability that an organization must have to create long-term value. What does that mean?
It means that you have to do two things. First, you have to look at what you want to deploy or develop, and that is where you need to embed privacy, security, legal protections, resilience into the products that you’re working on. That is not an easy one. It’s not an easy one. It requires knowledge. It requires investment in privacy-enhancing and security-enhancing technologies. It requires what, for example, India is promoting, which is a techno-legal approach. It’s not just about the law but it’s also about how you translate the law into technical tools. So you have to do all of that, and then you have to look at what happens once the product is in production.
So how do you monitor it once it’s out in production? How do you make sure that if, for example, you’re using AI to hire and fire, as sometimes it happens, you have tools to pull the trigger if something goes wrong? Now we are into the realm of agentic AI. If you’re interested in this, I’ve just published an article on the World Economic Forum on a subject I’m really fascinated by, which is what is the design for trust in agentic AI? So, for example, governance means that you do design these agents, but you give people, according to security standards, but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.
And then you make sure that you protect from cascading hallucinations, from model drifting, all of that. So governance, to me, is very much about the capability that organizations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product. And one element which to me is very important is to make sure that companies bring their employees with them. That is a very crucial part of governance because the work is going to change. People are going to change the way that they work. And, importantly, the people who are going to know best how to use AI are the people working in a company.
This is why I’ve seen successful companies developing a lot of use cases based on their activity and asking their employees, how should we innovate this? This is a fundamental part of governance, I believe, because it brings people with us. So, a very encompassing approach to governance. I think we are evolving and changing how we see it, but certainly I think it’s become very clear over the last few years, and especially with things like this summit talking about impact, that it’s way beyond compliance and it’s way…
Thanks, Eva, and that makes me curious enough to ask you a very quick question. So do you still feel we’re underestimating the risks? Because you spoke about AI trust.
No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful stuff, fantastic. Every day there is a piece of news that makes us hopeful that we can improve our well-being and we can feel better in the world we live in. But at the same time we’ve seen the risks too, and we’ve got to be honest that looking at the success without looking at the risks is very naive. We can’t, because we’re not going to be able to deploy AI successfully if we don’t look at the risks. We’ve seen disinformation. We see deepfakes. We have seen AI softwareizing existing inequalities into decision-making around people, future, rights, and livelihood.
That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control. We have to shift our approach and do AI for good and change the way that we look into this. So we have to engineer fairness into the systems that we create. We have to engineer inclusivity into the systems that we create. And, of course, we have to manage the risk. But the mindset has to really shift.
Thanks. And that gets me back to Dean. So my question to you is, inclusion at the national level often intersects with compute access and research infrastructure. You spoke about public-private partnerships, spoke about trust in… emerging technologies like artificial intelligence and maybe even quantum. Going ahead, should governments treat compute as critical infrastructure?
Yeah, I think they should. The data centers that power frontier AI systems are going to be a part, you know, like ports or railroads. They’re going to be critical infrastructure of the future. I believe that’s true. Prior to my current role, I worked in the Trump administration in the White House Office of Science and Technology Policy. And in particular, I was one of the people that shaped the administration’s AI action plan and AI export program, which my former boss, Michael Kratios, was just here talking about and announcing some next steps on. I was really excited to see that. One of the key messages of that that I feel was I feel this is maybe a communications failure on our part.
But, you know, the United States government has publicly said, the president has come out and made it a flagship of his AI policy, that we intend to subsidize the development of AI data centers in the global south. That is a policy of the United States under this administration. And we don’t have the interest in exercising control over the technology in the way that I think the prior administration did in some ways. We don’t want to control other countries’ use in the same way that the prior administration did. So I do think you should think of it as critical infrastructure. And I think that you should think of the United States as a partner in the construction of that.
And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.
Thank you. So, you know, it’s been interesting because whenever we speak about AI at scale, when we talk about taking AI to every single person… across the planet, there are always three vectors we look at. So that can be mindset, skill sets, and tool sets. You just spoke about tool sets, which is extremely relevant. And that takes my question to you, Gabriela. When we talk about mindset, should inclusion be framed primarily as an ethical imperative or a competitive strategy or even both? What’s your take on that?
I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness. Who from the public believes that? Can we have a show of hands? That being inclusive might hinder competitiveness? That investing in competitiveness might go against inclusiveness? I’m an economist, and I think in this area we really need to think. We need to think about economic inclusiveness, because if we just think about social policies that might be needed when some people are left behind, and therefore we need to invest in communities or in infrastructure or in people, kids that are in deep need of education, those things are very important.
But more importantly, we need to consider how we foster market economies that are inclusive, and that’s the core issue here. And I can tell you, because I have been looking at the question of inequalities. Actually, I’m now co-chairing the task force on inequality-related financial disclosures. And what we have seen is that when you have market concentration, productivity flattens. And what happened here, we saw it in the OECD report that we did some years ago: when you have concentration at the top, as the one we are seeing now, with companies having the whole concentration of compute capacities, the capacity to sort out skills and attract the skills, the capacity of having the financial means to invest.
What happens is that the diffusion machine, which is this very important element that trickles down the innovative developments into a broader set of users and benefits, is broken. Now the diffusion machine is broken, and therefore we need to see how we ensure that the diffusion is faster. And to do that, of course, I agree with Ivana, the question is how we ensure that we create the capacities of people and economies that are lagging behind. But we also need to see how we diminish market dominance. And I know that there are many other considerations: geopolitics, competition matters, trade secrets matter, all these things matter. But for me, competitiveness, inclusiveness, has to do with creating the highest well-being for people, and that’s the outcome, and that’s where ethics, competitiveness, all of these narratives collide together, because at the end, what are we looking at?
That we have wealth well distributed? In many countries, 70% of wealth, 60% of wealth, 50% of wealth is owned by just the top 10% income groups, but that’s not sustainable. I get into Europe and Mexico, and I was asking, where do I put my children? Because I need good schools. And they told me, choose the right neighborhood. That’s not possible. And therefore I feel that there needs to be this set of policies. And who is there to ensure a level playing field? Who is the one that needs to be using the tax systems or the incentive systems or the investment systems to ensure that people are not left behind, or the anti-competition or the non-competitive practices? Who is there to…? I pay my taxes so that the governments deliver on their promises. So I think this is super important. And I feel, for example, that what India has managed, this question of the digital registry… I was with Mr.
Murthy when he presented his plan so many years ago. I never could believe that you were going to be doing registry for 100 million people every month. It was just like, you’re crazy, that will never happen, who finances it? The government. And now you have all India with the digital identification. It’s just amazing. And then you go with the financial thing. So I feel this is…
Thanks, Gabriela. Since with this vision that the world of tomorrow with AI would certainly be a better world and hopefully be a better world than what it is today, I have a common question for all the panelists. And the question is, what do you see as the most significant blind spot in recent times AI discourse, keeping in all the conversations you’ve possibly had this week and even prior to that? And maybe what we can do is, Dean, we can go with you first.
Yeah. So in terms of blind spots, I think maybe the most important thing I could possibly say here would be one thing I’ve heard repeated a lot in the conversations I’ve had this week is this notion that the frontier models, the best AI systems are not… necessary. You can find good enough models that can, you know, that are cheaper to run. And in some cases, I think that will be true. But I would point to the very significant blind spot there that, you know, I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor. That is a very serious goal. The United States is currently spending, like, it’s not a joke, right?
That’s not a joke. That’s not hype. That’s not crypto. We’re spending a trillion dollars this year on that. That’s the plan. We’re going to do it. It’s going to happen, right? And so the capabilities of those systems, and the way that that will change the way the world works… I think ambitious people will be able to do an unbelievably broad range of things. And I think this could really be an incredible opportunity for countries in the global south, and really everyone in the world, to participate in building the future together. And this is not about rejecting frontier models out of some sort of belief that that preserves sovereignty, because there are existing use cases you can think of that can be done, you know, with cheaper models. I would think of frontier AI as being useful for stuff that we don’t even have words for today, right? Concepts that you will invent, and, you know, that we will all invent together. That’s the future that we’re building. And I think it’s an easy thing to miss, and I think missing it is basically missing the ball game.
That’s very interesting, thank you for sharing that. Gabriela, what about you?
I would say a traditional one: education, education, education. Okay, that really takes away my sleep. Why aren’t we upgrading massively the education pedagogy? Why aren’t we changing the way we go to school? Why don’t we invest in our teachers for them to understand how these technologies can help improve student outcomes and at the same time make their life easier? I see a lot of teachers complaining about all the administrative work that they need to do, that doesn’t leave any space for them to invest in quality changes with their students. And I’m not seeing that happening. And we need that pipeline. If the future that Dean is projecting is going to happen, is going to arrive, we need people to be very well equipped.
And where do we get that equipment? I’m fine to invest in the workers in the market. That’s very important. And I think that we need to upgrade that too, the skills of the people in the market. But the school system needs to be upgraded. And actually, I haven’t seen it really happening anywhere. This is a challenge. This is a challenge for North, South, East, West, and I invite all to confront this challenge.
Okay, I love the fact that you brought education and skilling as a part of it, because building AI readiness has become so essential to ensure national competitiveness no matter which market we are talking about. And just to share an instance, India was one of the few countries where AI was introduced as a school subject way back in 2019, even before the COVID era. So students could learn AI as they would possibly learn a biology or a physics. But yes, that’s a major challenge which we’re trying to work on. Ivana, over to you.
I was very impressed yesterday when your prime minister spoke. And I was very impressed by one thing that he said. He said, you know, develop here and serve humanity. And that, to me, made a point that has been very strong here, and it speaks to something that has been missing so far. He said something very, very important: that AI needs to be used for inclusion, for economic well-being. Inclusion, as we said, as access, but also as participation for many; as reduction of the gap between areas of society and geographies across India. Inclusion also as creating models that respect your languages and your dialects and the ethical norms that bind this country together, because the AI that we have now is often not reflective of the diversity of the world. One thing, following on this, that has been good has been to see many leaders coming from all over the world.
One thing that I’ve always thought, and I’ve always supported this, is how we haven’t aligned on what the red lines of AI are. Are there things that us as a society, or as a world, we are never going to do, or we don’t want to do, regardless? And we’ve seen appeals coming over recent years. We’ve had massive debates around the ethics of AI in different ways, whether it’s the US, whether it’s Europe, everyone in different ways. But I think when it comes to something which is far more than technology, because AI is far more than technology, AI’s power is geopolitics, is earth, cables, sea, so much.
I think one of the things that probably we are overlooking is how, and if, we as a world will be able to come together and have some red lines and say, well actually, we’re not going to go…
Thank you. So I just want to take a moment to thank the panelists and maybe I can ask Dean for you to sum it up.
Well, I think there’s a lot of different things. Unfortunately, the subject of AI governance is so difficult because it’s so capacious, right? It’s such an enormous topic. But look, I think we have a very real infrastructure development sort of challenge ahead of us. We have a huge complex of new types of institutions and old institutions that are going to change and evolve in various ways. And there’s all sorts of interlocking work to do on things like that that are going to be critical for the governance of AI for both everyday types of harms and also sort of catastrophic things that feel futuristic. But I think that are going to be real parts of our lives in the pretty near future.
And then I think, you know, another thing I would kind of double-click on is this need for competitiveness, which I agree with. And one of the things that I think is exciting about AI is that the price per token of models does drop quite quickly. And so there are a lot of good competitive dynamics here. There are also centralizing tendencies. And so I think working together to figure out how to prevent those tendencies, I think that’s going to be extremely important. The concentration of power in AI and that issue in the long term is going to be, I think, one of the most important parts of the political economy of this topic.
So yeah, I think that’s how I see it.
Thank you. That’s fantastic. We spoke about trade -offs, we spoke about potential solutions, and we spoke about building AI readiness for national competitiveness. Thank you so much to all the panelists, and it was lovely having a conversation.
Thanks to the moderator. Thank you.
Claims and supporting evidence

“The session opened with the moderator framing the focus on who truly benefits from artificial intelligence and under what rules.”
The knowledge base notes that the main session on AI needs to consider who is using and who is benefiting from it [S13].
“Dean argued that AI should first be governed through the existing web of liability, product‑safety and other statutes rather than a stand‑alone AI law.”
Discussion summaries of US AI governance under the Trump administration highlight a liability-based approach instead of new AI-specific regulation [S108].
“Dean urged governments to map current legal tools—such as the United States’ liability doctrine—to AI use‑cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise.”
Panel discussions emphasize that existing legal frameworks should be presumed sufficient until proven otherwise, placing the burden of proof on advocates of new rules [S22].
“Dean identified “tail events” (low‑probability, high‑impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified.”
Workshop notes refer to low-probability, high-risk scenarios as a focus for risk-mitigation strategies [S113].
“Dean said data‑centres that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure.”
A speaker explicitly compared AI-powering data centres to ports and railroads, calling them future critical infrastructure [S114].
“Dean noted his prior role in the Trump administration’s White House Office of Science and Technology Policy.”
The same source that mentions the data-centre analogy also confirms his previous work in the Trump administration’s White House [S114].
“Dean suggested that existing regulations should be used as a foundation and complemented rather than replaced with entirely new AI statutes.”
Commentary from WS #162 advises countries to complement existing regulations instead of creating wholly new AI laws, adding nuance to Dean’s stance [S37].
“Historical precedent shows that legal principles can adapt to new technologies without needing separate legislation.”
Analysis of past technology regulation (e.g., the internet) argues that existing frameworks can be extended to cover emerging tech, supporting the view that AI may be governed by current law [S60].
“Transparency legislation is important for managing tail‑event risks.”
Experts stress that treating algorithms as black boxes limits transparency and can perpetuate disparities, underscoring the relevance of transparency measures [S105].
The panel shows strong convergence on four themes: a broad, systemic view of inclusion; the classification of AI compute as critical infrastructure; the urgent need to overhaul education and up‑skill the workforce; and the requirement for an ecosystem‑based policy approach that blends public investment with regulatory clarity.
High consensus across speakers, indicating a shared understanding that AI governance cannot rely solely on narrow regulation but must integrate infrastructure, education, and inclusive design. This consensus suggests that future policy initiatives should prioritize public‑private partnerships, critical‑infrastructure designation for compute resources, and large‑scale capacity‑building programmes to achieve equitable AI benefits.
The panel displayed moderate disagreement centered on the adequacy of existing legal regimes versus the need for new, ecosystem‑wide policy measures, and on the emphasis of risk‑centric regulation versus fairness‑centric governance. While all participants agreed on the overarching goal of inclusive, trustworthy AI, they diverged on the mechanisms—legal presumption, proactive tail‑risk rules, public investment, techno‑legal embedding, and the strategic classification of compute resources.
The level of disagreement is moderate but consequential: differing assumptions about legal sufficiency and the role of government could shape whether AI governance leans toward incremental adaptation of current law or toward a more transformative, investment‑driven framework. These divergences will affect policy design, allocation of resources, and the speed at which inclusive AI ecosystems can be built.
The discussion was shaped by a series of pivotal insights that moved it from a generic talk about AI policy to a nuanced exploration of regulation, market structure, infrastructure, governance practice, and human capital. Dean’s challenge to the presumption of regulatory gaps and his framing of compute as critical infrastructure set a legal‑and‑strategic foundation. Gabriela’s emphasis on natural monopolies and the dual nature of inclusion as ethical and competitive introduced economic depth and highlighted concentration risks. Ivana’s articulation of governance as a strategic, techno‑legal capability and the need for global red lines broadened the conversation to operational and geopolitical layers. Collectively, these comments redirected the dialogue toward concrete policy levers, long‑term investment in frontier models, and the essential role of education, thereby deepening the analysis and outlining a comprehensive roadmap for inclusive, competitive, and responsibly governed AI.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.