Panel Discussion Inclusion Innovation & the Future of AI

20 Feb 2026 16:00h - 17:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel opened by framing AI’s growing role in daily life and the need to balance excellence with inclusion in policy and practice [4-9][14-16]. Dean argued that AI should first be governed through existing legal frameworks such as liability and product regulations, rather than creating a single new AI law, and that the presumption should be that current law is sufficient unless proven otherwise [24-32][41-44]. He identified “tail events” – low-probability but high-impact risks – as the area where proactive governance is justified, and he has advocated for transparency legislation to address such threats [34-40]. Dean also emphasized that AI compute infrastructure, especially data centers powering frontier models, ought to be treated as critical national infrastructure comparable to ports or railroads [111-114].


Gabriela stressed that AI development requires a whole ecosystem of government-backed investments, incentives, institutions and infrastructure, noting historic U.S. examples such as DARPA and the Internet that were publicly funded [48-53][54-58]. She described AI technologies as natural monopolies that create market distortions, and argued that public-private partnerships are needed to repair the broken “diffusion machine” and ensure broader economic inclusion [133-146].


Ivana (representing Wipro) explained that AI governance must go beyond compliance, embedding privacy, security, and resilience into products and adopting a techno-legal approach that translates law into technical tools [66-74][75-84]. She highlighted the importance of continuous monitoring of AI systems in production, designing trust mechanisms for agentic AI, and involving employees in governance to shift from pure risk-management to “AI for good” [80-87][94-107].


When asked whether inclusion is an ethical imperative or a competitive strategy, Gabriela replied that the two are inseparable, asserting that inclusive policies boost market competitiveness by preventing concentration and fostering a level-playing field [133-146]. Dean identified a blind spot in current discourse: the tendency to dismiss frontier models as unnecessary, despite massive public investment and their potential to create capabilities beyond today’s imagination, especially for the Global South [156-169].


He reiterated that building AI readiness will require new institutions and infrastructure, while managing both everyday harms and future catastrophic risks, and warned that concentration of AI power is a key political-economic challenge [200-214]. The moderator concluded that the discussion highlighted trade-offs, potential solutions, and the urgency of preparing national AI capabilities for competitive advantage and inclusive growth [216-217].


Keypoints

Major discussion points


Defining AI inclusion beyond data representation – inclusion is framed as access to compute, standards, policy frameworks and regulatory clarity, not just equitable datasets [4-13].


Governance strategy: use existing law, intervene for tail-risk events, and treat AI infrastructure as critical – Dean argues that AI should first be governed through current liability and product regulations, with proactive rules only for low-probability, high-impact “tail” events, and that AI data centers should be classified as critical national infrastructure [24-33][34-42][111-124].


Public-private ecosystem and market concentration – Gabriela stresses that government-funded research (e.g., DARPA, Internet) is essential, that AI markets behave as natural monopolies/oligopolies requiring policy to curb distortions, and that inclusive policies must coexist with competitiveness [48-55][140-151].


Organizational AI governance as a strategic capability – Ivana outlines the shift from compliance to a broader “trust stack” that embeds privacy, security, and ethical design, requires continuous monitoring, and involves employees in the governance loop [65-84][94-107].


Blind spots and future challenges – Panelists identify (i) the tendency to dismiss frontier models as unnecessary despite massive investment and their unknown future capabilities, (ii) the lack of education and skill pipelines to prepare societies for AI, and (iii) the absence of global consensus on “red-line” prohibitions [156-168][171-182][189-198].


Overall purpose / goal of the discussion


The panel was convened to explore how AI can be made inclusive and beneficial while preserving national competitiveness. Participants examined policy trade-offs, governance mechanisms, ecosystem investments, and practical implementation steps needed to build AI readiness at both governmental and organizational levels, and to surface gaps that must be addressed for responsible, equitable AI deployment worldwide.


Overall tone and its evolution


The conversation began with an upbeat, collaborative tone (“such a pleasure to be here…”) and a forward-looking optimism about AI’s benefits. As the dialogue progressed, speakers introduced more cautionary notes, highlighting regulatory gaps, tail-risk threats, market monopolies, and the need for rigorous governance, shifting the tone toward a balanced, problem-solving stance. By the closing remarks, the tone became reflective yet hopeful, acknowledging significant challenges (education gaps, lack of global red lines) while reaffirming confidence that coordinated policy and ecosystem action can steer AI toward inclusive, competitive outcomes.


Speakers

Ivana Bartoletti – AI governance, privacy, security, and responsible AI implementation; Virtual panelist (panelist) [S1]


Gabriela – AI policy, public-private partnerships, inclusion and market dynamics; (no specific title provided)


Dean – AI policy and governance expert; formerly of the Foundation for American Innovation and the White House Office of Science and Technology Policy [S7]


Speaker 1 – Moderator/host of the panel discussion [S10]


Additional speakers: None


Full session report: comprehensive analysis and detailed insights

The session opened with the moderator welcoming the panel and framing the week-long focus on “who truly benefits from artificial intelligence and under what rules” [1-4]. She emphasized that AI has moved from a niche enterprise tool to a pervasive part of daily life (work, entertainment, healthcare, hiring and many other domains) [5-9], and argued that “inclusion in AI” goes far beyond equitable datasets to include access to compute, common standards, supportive policy frameworks and clear cross-border regulations [10-13]. She also presented the discussion as a trade-off between “excellence” and “inclusion” [14-16].


Dean’s opening remarks


Dean argued that AI should first be governed through the existing web of liability, product-safety and other statutes rather than a stand-alone AI law [24-33]. He urged governments to map current legal tools, such as the United States’ liability doctrine, to AI use cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise [40-44]. He identified “tail events” (low-probability, high-impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified [34-40]. Dean said data centers that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure; he also referenced the U.S. policy to subsidise AI data-center development in the Global South [111-124]. He noted his prior role in the Trump administration’s White House Office of Science and Technology Policy, where he helped shape the AI Action Plan and the AI Export Program [125-130]. Finally, Dean warned that dismissing frontier models as unnecessary would be a serious oversight, pointing out that the United States is allocating roughly $1 trillion this year to AI development, which will enable capabilities we cannot yet name and could offer particular opportunities for the Global South [156-169].


Gabriela’s response


Gabriela broadened the conversation to the ecosystem needed for responsible AI development. She cited historic government-funded programmes such as DARPA and the early Internet as seeds for breakthrough technologies and called for similar public investment today to nurture an AI ecosystem of incentives, institutions and infrastructure [48-55]. Describing AI technologies as “natural monopolies” that tend toward oligopolistic concentration, she warned that this breaks the “diffusion machine” that spreads innovation [133-146]. To counteract market distortions she advocated public-private partnerships, open-research models and policies that promote economic inclusion, reduce concentration and ensure a level playing field [140-151]. Gabriela highlighted India’s digital identification system as an example of large-scale public investment, noting how the government financed a registry capable of handling 100 million people per month [150-155]. She repeatedly linked inclusion to both ethical duty and competitive advantage, arguing that inclusive policies boost market competitiveness by preventing concentration and fostering a robust diffusion of AI benefits [131-138][140-146]. She also stressed the urgent need to overhaul school curricula, reduce teachers’ administrative burdens and invest in teacher training so the future workforce can effectively engage with AI tools [171-183].


Ivana’s contribution


Ivana positioned AI governance as a strategic organisational capability rather than a mere compliance checklist. She explained that governance must embed privacy, security, legal safeguards and resilience into AI products from design through deployment, requiring investment in privacy-enhancing technologies and a “techno-legal” translation of law into technical tools [65-74][75-84]. She highlighted that many early AI-governance initiatives were led by privacy professionals because a large share of AI harms are privacy-related [70-73]. Continuous post-deployment monitoring, mechanisms for human override of agentic AI, and protection against model drift and hallucinations were presented as essential components of a “trust stack” [80-87]. Ivana also referenced a recent World Economic Forum article she authored on designing trust for agentic AI [115-118]. While acknowledging AI’s benefits, she warned against naïvely ignoring risks such as disinformation, deep-fakes and the reinforcement of existing inequalities, urging a shift from pure risk-management to an “AI-for-good” mindset that engineers fairness and inclusivity into systems [94-107][102-107].


Moderator follow-up and “mindset” framing


The moderator returned to the inclusion question, asking whether it should be framed primarily as an ethical duty or a competitive strategy. She reiterated that inclusion requires building the necessary mindset, skill sets and tool sets, emphasizing AI literacy among teachers, students and employees [165-167][84-88][126-130]. Gabriela echoed the need to upgrade education systems and up-skill teachers, noting that no major education system has yet been refreshed to teach AI concepts or equip teachers with the required tools [171-182].


Blind-spot round


– Dean warned that overlooking frontier models is a serious blind spot, stressing the scale of U.S. investment and the unknown capabilities these models will unlock [156-169].


– Gabriela identified chronic under-investment in education and skills development as a fundamental barrier to AI readiness [171-182].


– Ivana pointed out the lack of a global consensus on “red lines” for AI: clear ethical boundaries that all nations agree not to cross, leaving a gap in international governance [189-198].


Areas of agreement


Both the moderator and Ivana agreed that inclusion must go beyond data representation to include compute access, standards and regulatory clarity [9-13][104-106]. Dean and the moderator concurred that AI compute facilities should be classified as critical infrastructure and that governments should partner with the private sector to develop them [111-124]. Gabriela and the moderator shared the view that AI policy should be built as an ecosystem of investments, incentives and institutions rather than relying solely on regulation [48-52][15].


Points of disagreement


Dean maintained that existing legal frameworks are generally sufficient and that the burden of proof lies with regulators proposing new rules [24-33][40-44], whereas Gabriela argued that the rapid evolution of AI demands a broader ecosystem of public investment and possibly new regulatory tools to prevent market distortions [48-55][53-55]. Dean’s focus on proactive governance for tail-risk events contrasted with Ivana’s call for a fairness-centric, techno-legal approach that embeds ethical design throughout the AI lifecycle [34-38][102-107].


Key take-aways


1. Existing law is presumed adequate for most AI applications; new rules are needed only for demonstrated tail-event gaps [24-33][40-44].


2. Proactive governance is required for low-probability, high-impact AI risks [34-38].


3. AI governance is a strategic capability integrating privacy, security and techno-legal tools [65-74][75-84].


4. Inclusion is both an ethical imperative and a competitive advantage [131-138].


5. Public-private partnerships and government investment are essential to curb market concentration and nurture open research [48-55][140-151].


6. AI compute facilities should be treated as critical national infrastructure [111-124].


7. AI’s natural-monopoly tendencies require policy interventions to prevent concentration [133-146][208-214].


8. Urgent reform of education systems and up-skilling of teachers and workers are needed for AI readiness [171-183].


9. Blind spots include under-estimating frontier models and the absence of global “red-lines” [156-169][189-198].


Closing


The moderator summarized the trade-offs discussed, the potential solutions offered, and the imperative to build AI readiness for national competitiveness [215-217]. Dean closed by reminding the audience of the massive institutional and infrastructural challenges ahead, the need to manage both everyday harms and future catastrophic risks, and the importance of preventing concentration of AI power through coordinated policy and competitive dynamics [200-214].


Overall, the panel expressed optimism tempered by acknowledgement of significant challenges, leaving the audience with a clear roadmap: treat compute as critical infrastructure, leverage existing legal tools while targeting tail-risk gaps, invest in public-private ecosystems, embed fairness and trust into AI systems, and urgently reform education to prepare the next generation for an AI-augmented world.


Session transcript: complete transcript of the session
Speaker 1

And it’s such a pleasure to be here with such lovely panelists and an audience who’s possibly going to skip some of the lunchtime to join us today in our discussions. Let me get started by really talking about, you know, we are towards the end of the week. It’s been a fantastic week, lots of conversations. And one thing which I reflect back on most of the conversations has been what is the most defining question of our time, which is who all is artificial intelligence really benefiting and with what rules? If I look at it, AI’s enterprise infrastructure, AI’s public sector capability, AI’s even geopolitical leverage is what we’ve seen across all these days. But more importantly, AI has become a part and parcel of our daily lives.

It stretches from everything from making our work life easier. to making sure that we get our entertainment as and when and how we require it. And more importantly, from healthcare to hiring to anything you can possibly imagine. When we really focus on inclusion in AI, one thing which has kind of stayed as a thought for the last five days is inclusion in AI is way beyond equitable representation in data sets. It’s, you know, it’s everything. It’s about access to compute. It’s about standards. It’s about having a right policy framework, which encourages everyone, everywhere. And more important, it’s also getting clarity on regulations, which are there across countries, to see how it can really be beneficial.

Now, to take the discussion ahead, today’s conversation is going to be really about trade-offs. Excellence and inclusion. It’s been interesting on how to navigate both these terminologies whenever you think of any policy or a framework. So I’m going to start with my first question to Dean. So Dean, you know, you’ve been working at the frontier of AI policy. You’ve been at the institutional design through the Foundation for American Innovation. There is a lot of growing debate between self-regulation and innovation-first approaches. Where should policymakers really draw the line without really undermining national competitiveness?

Dean

So I think it’s a, first of all, thanks for being here. And thank you for having me. It’s an honor to be here. The way I think about this is that, you know, we will govern AI through a very large intersecting web of different things, right? It’s not just going to be one day one bill is going to get passed and that’s going to be the AI bill and then AI is regulated, right? AI is currently regulated today. It’s regulated by many different things. It’s regulated in the United States by things like liability doctrine and a lot of existing product regulations and things like that. So I think step number one for government is let’s take the existing bodies of law, you know, many of which just as, you know, as in India and the United States, we’re quite proud of.

Many countries, you know, are very proud of their regulatory and legal traditions. We have a common law tradition in the United States that we are proud of. So let’s take those things and let’s figure out how to apply them to AI. And then, you know, the companies, I think, thus far, the major AI labs have been, I think, responsible stewards when it comes to the major risks. Now, I think the area where you might need proactive governance first is, at least in my view, is really this domain of tail events, potential events that could be very serious, have very serious consequences that are relatively unlikely. So, you know, pandemic is an example of a tail event.

And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an area where some proactive governance, I think, is needed. And I’ve written supportively about transparency laws in the United States along those lines. So I think that’s where when we have a clear and demonstrated threat model and we have a, you know, clear evidence that existing law is not sufficient. I think one area, one aspect of AI governance that I often push back on and that I often dispute is there’s this kind of assumption baked in whenever we talk about AI regulation that the existing law is insufficient and that the current status quo is that AI is unregulated in some way.

And I think that should actually be, we should have the opposite presumption. We should presume that existing law is sufficient and that there is some sort of good solution. And then. Yeah. It should be, the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.

Speaker 1

Thank you, Dean. That’s very interesting that we go with an assumption. And with that, Gabriela, let me move on to you. So should, you know, how can governments foster open innovation, assuming to whatever Dean said, while minimizing the risk of market distortions?

Gabriela

Well, I think that it’s a very nice segue because I completely agree with Dean that there is a very broad portfolio of policy interventions that has not only to be with regulations. Regulations is looking at the way the technology is developed. But we need to think about this as an ecosystem that needs to be nurtured, that needs investments, that needs incentives, that needs institutions and that needs infrastructure. And therefore it’s not only the technological conversation about what do we do with AI, but what kind of an economy we want that is really productive, that delivers for people with AI, and for that you need government intervention. And let me tell you, what is very interesting is we usually tend to think that the private sector is an innovative force and the government is a brake.

In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now were financed at some point by basic research that was paid by the government of the U.S. And many countries fill that space, and that’s why it’s so important that we invest in research, because it cannot be that the research is being done only by the private sector. And then it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table, and then it needs to be shared, which is not always the case when you have a private sector innovation. So I will contest this also way of framing the issues in terms of the governments only creating market distortions, because at the end it’s about how the government can be effective to address the market distortions that we see many times emerge. In this case, I like to see the AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly, and now it’s oligopoly. At the end it’s very concentrated, so there are market distortions now that need to be addressed by government policies. Again, there is a wide gap of things that needs to be done to ensure that the main distortion that can occur nationally and globally, which is that this is a story of a lucky few, is prevented.

Speaker 1

Thanks, Gabriela. And you know, it’s interesting you mention that because at least in India, whenever we speak about public-private partnerships, it’s all about how we are moving from a culture of competition to cooperation, to really working together so that the markets stay healthy. With that, we move over to you, Amanda. So, Amanda, you know… Eva. Sorry. Yeah, Amanda is missed. So, Eva, let’s talk about the global AI governance strategy at Wipro, right? Many organizations are developing a responsible AI framework. How do we move beyond policy statements through measurable accountability, and specifically when we have to do that at scale?

Ivana Bartoletti

Thank you very much and it’s great to be here and thanks to all of you for joining. So I have what I say often, I have the best job in the world, which is basically to translate a lot of the things that have been discussed over the last few days into practice. So basically it means we’ve heard democratisation, we’ve heard inclusion, we’ve heard how it’s important that AI is inclusive and by inclusivity, it’s not just about access, as was said, but it’s also making sure that many get the opportunity to participate in the design of this technology, but also in the decisions around what we are producing and who is going to be benefiting from that.

We, I think in a lot of our work, a lot of organisations, what happened over the last few years when generative AI came about: a lot of organizations, we had to face something quite dramatic, if you think about it, because before then AI was very much for engineers, for scientists to work with, if they think about machine learning, people who knew about AI. Then what happened a few years ago is that generative AI came and everybody got access to it. Did you remember, and do you remember, how companies started to scramble with who’s got access? Do we leave people, our employees, to access these systems? Do we create our own private instance? How do we navigate the fact that we want people to play with these tools with the fact that we have to be safe and secure as an enterprise? And then things evolved, and a lot of organizations, if you know how the debate around governance started, you know, a lot of organizations started to set up governance boards, and they started to set up ethics boards and all of it. And I think we realized at some point, and I took on the challenge of AI governance from a privacy standpoint, the reason for this, and many people in organizations took on AI governance from a privacy standpoint, not only because a lot of AI harms are actually privacy harms but also because privacy professionals knew about risk management. And then we realized that actually governance of AI is much more than that. It’s much more than risk management, it’s much more than compliance. We realized, and I think this summit shows that really clearly, that AI governance is really about a strategic capability that an organization must have to create long-term value. What does that mean?

It means that you have to do two things. First, you have to look at what you want to deploy or develop and that is where you need to embed privacy, security, legal protections, resilience into the products that you’re working on. That is not an easy one. It’s not an easy one. It requires knowledge. It requires investment in privacy enhancing and security enhancing technologies. It requires what, for example, India is promoting, which is a techno-legal approach. It’s not just about the law but it’s also about how you translate the law into technical tools. So you have to do all of that and then you have to look at what happens once the product is in production.

So how do you monitor it once it’s out in production? How do you make sure that if, for example, you’re using AI to hire and fire, as sometimes it happens, you have tools to pull the trigger if something goes wrong? Now we are into the realm of agentic AI. If you’re interested in this, I’ve just published an article on the World Economic Forum of a subject I’m really fascinated in, which is what is the design for trust in agentic AI? So, for example, governance means that you do design these agents, but you give people, according to security standards, but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.

And then you make sure that you protect from cascading hallucinations, from model drifting, all of that. So governance, to me, is very much about the capability that organizations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product. And one element which to me is very important is to make sure that companies bring their employees with them. That is a very crucial part of governance because the work is going to change. People are going to change the way that they work. And it’s important, the people who are going to know best how to use AI are the people working in a company.

This is why I’ve seen successful companies developing a lot of use cases based on their activity and asking their employees, how should we innovate this? This is a fundamental part of governance, I believe, because it brings people with us. So very encompassing approach to governance. I think we are evolving and changing how we see it but certainly I think it’s become very clear over the last sort of few years and especially with things like this summit talking about impact that it’s way beyond compliance and it’s way

Speaker 1

Thanks Eva and that makes me very curious enough to ask you a very quick question. So do you still feel you’re underestimating the risks because you spoke about AI trust?

Ivana Bartoletti

No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful stuff, fantastic. Every day there is a piece of news that makes us hopeful that we can improve our well-being and we can feel better in the world we live in. But at the same time we’ve seen the risks too and we’ve got to be honest that looking at the success without looking at the risks is very naive. We can’t, because we’re not going to be able to deploy AI successfully if we don’t look at the risks. We’ve seen disinformation. We see deepfakes. We have seen AI softwareizing existing inequalities into decision-making around people, future, rights, and livelihood.

That’s not okay. So we’re not underestimating the risks. But we can’t approach governance from a risk management control. We have to shift our approach and do AI for good and change the way that we look into this. So we have to engineer fairness into the systems that we create. We have to engineer inclusivity into the systems that we create. And, of course, we have to manage the risk. But the mindset has to really shift.

Speaker 1

Thanks. And that gets me back to Dean. So my question to you is, inclusion at the national level often intersects with compute access and research infrastructure. You spoke about public-private partnerships, spoke about trust in… emerging technologies like artificial intelligence and maybe even quantum going ahead, should governments treat compute as critical infrastructure?

Dean

Yeah, I think they should. The data centers that power frontier AI systems are going to be a part, you know, like ports or railroads. They’re going to be critical infrastructure of the future. I believe that’s true. Prior to my current role, I worked in the Trump administration in the White House Office of Science and Technology Policy. And in particular, I was one of the people that shaped the administration’s AI action plan and AI export program, which my former boss, Michael Kratios, was just here talking about and announcing some next steps on. I was really excited to see that. One of the key messages of that that I feel was I feel this is maybe a communications failure on our part.

But, you know, the United States government has publicly said, the president has come out and made as a flagship of his AI policy that we intend to subsidize the development of AI data centers in the global south. That is a policy of the United States under this administration. And we don’t have the interest in exercising control over the technology in the way that I think the prior administration did in some ways. We don’t want to control other countries’ use in the same way that the prior administration did. So I do think you should think of it as critical infrastructure. And I think that you should think of the United States as a partner in the construction of that.

And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.

Speaker 1

Thank you. So, you know, it’s been interesting because whenever we speak about AI at scale, when we talk about taking AI to every single person… across the planet, there are always three vectors we look at. So that can be mindset, skill sets, and tool sets. You just spoke about tool sets, which is extremely relevant. And that takes my question to you, Gabriela. When we talk about mindset, should inclusion be framed primarily as an ethical imperative or a competitive strategy or even both? What’s your take on that?

Gabriela

I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness. Who from the public believes that? Can we have a show of hands? That being inclusive might hinder competitiveness? Investing in competitiveness might go against inclusiveness? I really think I’m an economist, and I think in this area we really need to think. We need to think about economic inclusiveness. because if we just think about social policies that might be needed when some people are left behind and therefore we need to invest in communities or in infrastructure or in people, kids that are in deep need of education, those things are very important.

But more importantly, we need to consider how we foster market economies that are inclusive, and that's the core issue here. And I can tell you, because I have been looking at the question of inequalities. Actually, I'm now co-chairing the task force on inequality-related financial disclosures. And what we have seen is that when you have market concentration, productivity flattens. We saw it in an OECD report we did some years ago: when you have concentration at the top like the one we are seeing now, with companies concentrating compute capacity, the capacity to attract skills, and the financial means to invest.

What happens is that the diffusion machine, the very important mechanism that trickles innovative developments down to a broader set of users and benefits, is broken. And therefore we need to see how we ensure that diffusion is faster. To do that, of course, I agree with Ivana: the question is how we build the capacities of people and economies that are lagging behind. But we also need to see how we diminish market dominance. And I know there are many other considerations: geopolitics matters, competition matters, trade secrets matter. But for me, competitiveness and inclusiveness have to do with creating the highest well-being for people. That's the outcome, and that's where ethics, competitiveness, all of these narratives come together. Because at the end, what are we looking at?

In many countries, 70% of wealth, 60% of wealth, 50% of wealth is owned by just the top 10% of income groups, and that's not sustainable. I arrive in Europe, or in Mexico, and I ask where do I put my children, because I need good schools, and they tell me: choose the right neighborhood. That's not acceptable. And therefore I feel there needs to be this set of policies. Who is there to ensure a level playing field? Who needs to use the tax systems, the incentive systems, or the investment systems to ensure that people are not left behind, or to tackle anti-competitive practices? I pay my taxes so that governments deliver on their promises. So I think this is super important. And I feel, for example, that what India has managed with the digital registry is remarkable.

I was with Mr. Murthy when he presented his plan so many years ago. I could never believe that you were going to be doing registration for 100 million people every month; it was just like, you're crazy, that will never happen, who finances it? The government. And now you have all of India with digital identification, it's just amazing, and then you go further with the financial side.

Speaker 1

Thanks, Gabriela. With this vision that the world of tomorrow with AI would certainly, and hopefully, be a better world than it is today, I have a common question for all the panelists. What do you see as the most significant blind spot in the recent AI discourse, keeping in mind all the conversations you've had this week and even prior to that? Maybe, Dean, we can go with you first.

Dean

Yeah. So in terms of blind spots, maybe the most important thing I could say here is this: one thing I've heard repeated a lot in the conversations I've had this week is the notion that the frontier models, the best AI systems, are not necessary; that you can find good-enough models that are cheaper to run. And in some cases, I think that will be true. But I would point to a very significant blind spot there. I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor. That is a very serious goal. The United States is currently spending, like, it's not a joke, right?

That's not a joke. That's not hype. That's not crypto. We're spending a trillion dollars this year on that. That's the plan. We're going to do it. It's going to happen, right? And the capabilities of those systems, and the way they will change how the world works, mean that ambitious people will be able to do an unbelievably broad range of things. I think this could be an incredible opportunity for countries in the Global South, and really everyone in the world, to participate in building the future together. And this is not about rejecting frontier models out of some belief that doing so preserves sovereignty, because yes, there are existing use cases you can think of that can be done with cheaper models. But I would think of frontier AI as being useful for stuff we don't even have words for today, right? Concepts that you will invent, that we will all invent together. That's the future we're building. It's an easy thing to miss, and I think missing it is basically missing the ball game.

Speaker 1

That's very interesting, thank you for sharing that. Gabriela, what about you?

Gabriela

I would say education, education, education. That is what takes away my sleep. Why aren't we massively upgrading education pedagogy? Why aren't we changing the way we do school? Why don't we invest in our teachers so they can understand how these technologies can help improve student outcomes and, at the same time, make their lives easier? I see a lot of teachers complaining about all the administrative work they need to do, which doesn't leave any space for them to invest in quality changes with their students. And I'm not seeing that happening. We need that pipeline. If the future that Dean is projecting is going to arrive, we need people to be very well equipped.

And where do we get that equipment? I'm fine with investing in workers already in the market; that's very important, and we need to upgrade their skills too. But the school system needs to be upgraded, and actually I haven't seen it really happening anywhere. This is a challenge for North, South, East, and West, and I invite everyone to confront it.

Speaker 1

Okay, I love the fact that you brought up education and skilling, because building AI readiness has become essential to ensuring national competitiveness, no matter which market we are talking about. And just to share an instance: India was one of the few countries where AI was introduced as a school subject way back in 2019, even before the COVID era, so students could learn AI just as they would learn biology or physics. But yes, that's a major challenge we're trying to work on. Ivana, your take?

Ivana Bartoletti

I was very impressed yesterday when your prime minister spoke, and by one thing in particular that he said: develop here and serve humanity. To me that was very strong, and it points to something that has been missing so far. He said something very, very important: that AI needs to be used for inclusion, for economic well-being. Inclusion, as we said, as access, but also as participation for the many; as reduction of the gap between areas of society and geographies across India; inclusion also as creating models that respect your languages, your dialects, and the ethical norms that bind this country together, because the AI that we have now often does not reflect the diversity of the world. Following on from this, one thing that has been good has been to see so many leaders coming from all over the world.

One thing I've always thought about is how we haven't aligned on what the red lines of AI are. Are there things that we, as a society or as a world, are never going to do, or don't want to do? We've seen appeals over recent years; we've had massive debates around the ethics of AI, in the US, in Europe, everywhere in different ways. But when it comes to something which is far more than technology, because AI is far more than technology, AI's power is geopolitics, earth, cables, sea, so much.

I think one of the things we are probably overlooking is whether we, as a world, will be able to come together, set some red lines, and say: well, actually, we're not going to go there.

Speaker 1

Thank you. I just want to take a moment to thank the panelists, and maybe I can ask you, Dean, to sum it up.

Dean

Well, there are a lot of different things. Unfortunately, the subject of AI governance is so difficult because it's so capacious, right? It's such an enormous topic. But look, I think we have a very real infrastructure-development challenge ahead of us. We have a huge complex of new types of institutions, and old institutions that are going to change and evolve in various ways, and there's all sorts of interlocking work to do that will be critical for the governance of AI, both for everyday types of harms and for catastrophic things that feel futuristic but that, I think, are going to be real parts of our lives in the pretty near future.

And then another thing I would kind of double-click on is this need for competitiveness, which I agree with. One of the things that I think is exciting about AI is that the price per token of models drops quite quickly, so there are a lot of good competitive dynamics here. There are also centralizing tendencies, and working together to figure out how to counter those tendencies is going to be extremely important. The concentration of power in AI, and that issue in the long term, is going to be, I think, one of the most important parts of the political economy of this topic.

So yeah, I think that’s how I see it.

Speaker 1

Thank you. That's fantastic. We spoke about trade-offs, about potential solutions, and about building AI readiness for national competitiveness. Thank you so much to all the panelists; it was lovely having this conversation.

Gabriela

Thanks to the moderator. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (21)
Factual Notes: Claims verified against the Diplo knowledge base (9)
Confirmed (high)

“The session opened with the moderator framing the focus on who truly benefits from artificial intelligence and under what rules.”

The knowledge base notes that the main session on AI needs to consider who is using and who is benefiting from it [S13].

Confirmed (high)

“Dean argued that AI should first be governed through the existing web of liability, product‑safety and other statutes rather than a stand‑alone AI law.”

Discussion summaries of US AI governance under the Trump administration highlight a liability-based approach instead of new AI-specific regulation [S108].

Confirmed (high)

“Dean urged governments to map current legal tools—such as the United States’ liability doctrine—to AI use‑cases, presuming existing law is sufficient unless a clear threat model demonstrates otherwise.”

Panel discussions emphasize that existing legal frameworks should be presumed sufficient until proven otherwise, placing the burden of proof on advocates of new rules [S22].

Confirmed (medium)

“Dean identified “tail events” (low‑probability, high‑impact scenarios such as pandemics) as domains where proactive governance, including transparency legislation, is justified.”

Workshop notes refer to low-probability, high-risk scenarios as a focus for risk-mitigation strategies [S113].

Confirmed (high)

“Dean said data‑centres that power frontier AI are akin to ports or railroads and should be regarded as critical national infrastructure.”

A speaker explicitly compared AI-powering data centres to ports and railroads, calling them future critical infrastructure [S114].

Confirmed (medium)

“Dean noted his prior role in the Trump administration’s White House Office of Science and Technology Policy.”

The same source that mentions the data-centre analogy also confirms his previous work in the Trump administration’s White House [S114].

Additional Context (medium)

“Dean suggested that existing regulations should be used as a foundation and complemented rather than replaced with entirely new AI statutes.”

Commentary from WS #162 advises countries to complement existing regulations instead of creating wholly new AI laws, adding nuance to Dean’s stance [S37].

Additional Context (low)

“Historical precedent shows that legal principles can adapt to new technologies without needing separate legislation.”

Analysis of past technology regulation (e.g., the internet) argues that existing frameworks can be extended to cover emerging tech, supporting the view that AI may be governed by current law [S60].

Additional Context (medium)

“Transparency legislation is important for managing tail‑event risks.”

Experts stress that treating algorithms as black boxes limits transparency and can perpetuate disparities, underscoring the relevance of transparency measures [S105].

External Sources (117)
S1
Global AI Governance: Reimagining IGF’s Role & Impact — – **Ivana Bartoletti** – Virtual panelist (specific role/title not mentioned in transcript) Elizabeth Orembo: Thanks, I…
S2
Shaping AI to ensure Respect for Human Rights and Democracy | IGF 2023 Day 0 Event #51 — Moderator:you very much, Björn. And you said it, the key of us being together here is to learn from each other, which me…
S3
Lightning Talk #245 Advancing Equality and Inclusion in AI — – **Ivana Bartoletti**: Was scheduled to speak but was unable to attend due to unfortunate circumstances. Role/expertise…
S4
DC-Sustainability Data, Access & Transparency: A Trifecta for Sustainable News | IGF 2023 — Audience:OK, that’s a nice clarification. Hello, everyone. I’m Alice Lenna from Brazil. I’m also a consultant for GRI, t…
S5
https://dig.watch/event/india-ai-impact-summit-2026/how-ai-is-transforming-diplomacy-and-conflict-management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S6
How AI Is Transforming Diplomacy and Conflict Management — And the MOVE 37 initiative that we’re here to talk to you about today. is a part of that program. As you can imagine, in…
S7
Laying the foundations for AI governance — – **Lan Xue**: Dean (Dean Xue Lan), expertise in governance and policy Robert Trager: Good. We can finally end this her…
S8
Legal Notice: — Chief of International Law Studies. He has previously served as Dean of the George C. Marshall Center in Germany and Gen…
S9
https://dig.watch/event/india-ai-impact-summit-2026/how-multilingual-ai-bridges-the-gap-to-inclusive-access — It’s wonderful. At NTU Singapore, we’re the newest members of ICAIN, but it’s fantastic that the… And I’ve only been a…
S10
Keynote-Martin Schroeter — -Speaker 1: Role/Title: Not specified, Area of expertise: Not specified (appears to be an event moderator or host introd…
S11
Responsible AI for Children Safe Playful and Empowering Learning — -Speaker 1: Role/title not specified – appears to be a student or child participant in educational videos/demonstrations…
S12
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Keynote Vijay Shekar Sharma Paytm — -Speaker 1: Role/Title: Not mentioned, Area of expertise: Not mentioned (appears to be an event host or moderator introd…
S13
Main Session on Artificial Intelligence | IGF 2023 — Needs to consider who is using, who is benefiting from it, and who has the risk
S14
WS #31 Cybersecurity in AI: balancing innovation and risks — Charbel Shbir: Hello. Yes, it is. Hello, my name is Charbel Shbir. I’m president of Lebanese ISOC. Regarding your q…
S15
AI Technology-a source of empowerment in consumer protection | IGF 2023 Open Forum #82 — It is crucial to strike the right balance between regulation and innovation to ensure fairness and responsible consumpti…
S16
AI Governance Dialogue: Steering the future of AI — #### Pillar 1: Inclusion Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance…
S17
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Alison Gilwald: and Melinda Ngo. Thank you very much. I’m going to, of course, leave Melinda to speak to the specifics o…
S18
WS #254 The Human Rights Impact of Underrepresented Languages in AI — Gustavo Fonseca Ribeiro: Yeah, of course. So what can we do at the international level, and international organizatio…
S19
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Anne Flanagan: Hello, apologies that I’m not there in person today. I’m in transit at the moment, hence my picture on yo…
S20
Bridging the AI innovation gap — LJ Rich: to invite our opening keynote. It’s a pleasure to invite to the stage the director of the Telecommunications St…
S21
Policy Network on Artificial Intelligence | IGF 2023 — It is believed that allowing academics and technical groups the space to explore and experiment is crucial for advancing…
S22
Panel Discussion Inclusion Innovation & the Future of AI — Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating…
S23
Ethical principles for the use of AI in cybersecurity | IGF 2023 WS #33 — Legal measures are acknowledged as an important consideration in the context of AI. However, it is argued that relying s…
S24
From principles to practice: Governing advanced AI in action — Brian Tse: right now? First of all, it’s a great honor to be on this panel today. To ensure that AI could be used as a f…
S25
Sovereign AI for India – Building Indigenous Capabilities for National and Global Impact — India possesses many essential ingredients for AI success: a robust software services industry, thriving startup ecosyst…
S26
Main Session | Policy Network on Artificial Intelligence — Brando Benifei: Thank you. Thank you very much. First of all, I’m really happy to be able to talk in this very impor…
S27
Democratizing AI Building Trustworthy Systems for Everyone — “of course see there would be a number of challenges but i think as i mentioned that one doesn’t need to really control …
S28
Science as a Growth Engine: Navigating the Funding and Translation Challenge — So I would just say that it’s something which the private sector can play a part, because as you say, you cross borders….
S29
WS #294 AI Sandboxes Responsible Innovation in Developing Countries — Cohen emphasised that sandboxes “require significant governance resources, clear eligibility criteria, testing framework…
S30
Nepal Engagement Session — The conversation highlighted two major technological breakthroughs. First, the integration of Bhashini (India’s language…
S31
Education meets AI — In conclusion, the integration of AI and digital tools in education is reshaping the job market and requires individuals…
S32
[Parliamentary Session 3] Researching at the frontier: Insights from the private sector in developing large-scale AI systems — Ivana Bartoletti: Yeah. So thank you. Excellent questions. So I wanted to just start with a provocation. I mean, yo…
S33
Why science metters in global AI governance — Bengio advocated for high-level principles that avoid technical details since “the details are going to change,” while o…
S34
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S35
Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies — Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructu…
S36
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Lack of infrastructure, skills, compute access, and data access hinder policy effectiveness
S37
WS #162 Overregulation: Balance Policy and Innovation in Technology — Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance …
S38
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S39
Swiss AI Initiatives and Policy Implementation Discussion — This comment challenges the prevailing ‘checkbox compliance’ approach to AI governance by proposing a fundamental reorie…
S40
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Balancing government-funded projects with maintaining market competitiveness
S41
Digital Public Infrastructure, Policy Harmonisation, and Digital Cooperation – AI, Data Governance,and Innovation for Development — An audience member emphasizes the importance of research and continuous stakeholder engagement in policy formulation. Th…
S42
Main Session | Dynamic Coalitions — June Paris: Can you hear me? Yes. Okay. Please, go ahead, we’re looking forward to hearing you talking about bridging di…
S43
AI ethics shifts from principles to governance frameworks — AI now influences decisions in healthcare, finance, hiring, and public administration, pushing AI ethics into thecentre …
S44
AI Meets Cybersecurity Trust Governance & Global Security — “AI governance now faces very similar tensions.”[27]”AI may shape the balance of power, but it is the governance or AI t…
S45
Aligning AI Governance Across the Tech Stack ITI C-Suite Panel — The speakers demonstrated remarkable consensus across several key areas: the need to balance governance with innovation,…
S46
Smart Regulation Rightsizing Governance for the AI Revolution — Bella Wilkinson from Chatham House provided a realistic assessment of the current geopolitical landscape, arguing that g…
S47
AI and the future of digital global supply chains (UNCTAD) — However, the adoption of AI in trade faces major barriers. These include the lack of expertise, high costs, absence of g…
S48
Technology Regulation and AI Governance Panel Discussion — Competition Policy and Market Structure Legal and regulatory | Economic Most restrictions to competition actually come…
S49
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S50
Building fair markets in the algorithmic age (The Dialogue) — Furthermore, the analysis highlights another unintended consequence of AI in the competition arena. It suggests that dif…
S51
Panel Discussion Inclusion Innovation & the Future of AI — Ball advocates for minimal new regulation, preferring existing legal frameworks with burden of proof on those wanting ne…
S52
WS #205 Contextualising Fairness: AI Governance in Asia — Tejaswita Kharel: in global conversations around AI bias at the moment? Every speaker strictly has five minutes and we…
S53
Inclusive AI For A Better World, Through Cross-Cultural And Multi-Generational Dialogue — Fairness, accountability, and transparency must be evaluated in a relevant way Importance of hearing various perspectiv…
S54
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S55
Dedicated stakeholder session (in accordance with agreedmodalities for the participation of stakeholders of 22 April 2022) — Diplo Foundation: Mr. Chair, distinguished delegates, colleagues, my name is Vladimir Adunovic. I represent Diplo Fou…
S56
Open Forum #45 Advancing Cyber Resilience of Critical Infrastructure — Ms. Timea Suto: Thanks, Marie. I’ll try to be brief. Really, for business protecting critical infrastructure today, It i…
S57
WS #103 Aligning strategies, protecting critical infrastructure — Ms Robyn Greene argues that policies must consider the broader technological landscape and its impacts on critical infra…
S58
INTERNATIONAL CIIP HANDBOOK 2008 / 2009 — Critical infrastructures extend across many sectors of the economy and key government services.’ 2 In the first section…
S59
Digital Democracy Leveraging the Bhashini Stack in the Parliamen — These key comments fundamentally shaped the discussion by establishing three critical paradigm shifts: (1) from standard…
S60
Do we really need specialised AI regulation? — History demonstrates the resilience of legal principles in adapting to new technologies. For example, when the internet …
S61
AI Impact Summit 2026: Global Ministerial Discussions on Inclusive AI Development — “First, people must be at the center of AI strategy, as we heard all along today”[107]. “Investment in skills, lifelong …
S62
YCIG & DTC: Future of Education and Work with advancing tech & internet — Pajaro points out that the rapid advancement of AI and other technologies is changing the skills required in the modern …
S63
Artificial intelligence (AI) and cyber diplomacy — The conversation expanded to highlight the universal need for digital literacy and capacity building in AI, urging gover…
S64
AI Transformation in Practice_ Insights from India’s Consulting Leaders — Future skills requirements emphasise working with technology rather than coding, with increasing importance placed on ps…
S65
Vers un indice de vulnérabilité numérique (OIF) — Another noteworthy observation is the shift in focus from a punitive approach to a preventive approach in terms of regul…
S66
Informal Stakeholder Consultation Session — Moving from Reactive Regulation to a Proactive Vision:Called for moving beyond reactive regulation that only limits harm…
S67
Unveiling Trade Secrets: Exploring the Implications of trade agreements for AI Regulation in the Global South — Overall, the analysis highlights the contrasting perspectives and approaches to regulation, specifically the comparison …
S68
Open Forum #82 Catalyzing Equitable AI Impact the Role of International Cooperation — The discussion showed remarkable consensus on identifying problems (infrastructure gaps, skills shortages, data availabi…
S69
The Global Power Shift India’s Rise in AI & Semiconductors — So the goal of Genesis Project is to really, one, align public and private partnership, two, invest government resources…
S70
The Foundation of AI Democratizing Compute Data Infrastructure — High level of consensus across diverse stakeholders (academic, government, civil society, private sector, international …
S71
WS #462 Bridging the Compute Divide a Global Alliance for AI — Ivy describes Stargate as a $500 billion infrastructure project over four years that requires different types of partner…
S72
Critical infrastructure — AI plays a pivotal role in safeguarding critical infrastructure systems. AI can strengthen the security of critical infr…
S73
AI as critical infrastructure for continuity in public services — The discussion revealed relatively low levels of direct disagreement, with most speakers focusing on different aspects o…
S74
OpenAI’s push to establish AI as critical infrastructure — In a recent interview,Chris Lehane, the newly appointed vice president of public works at OpenAI, underscores AI’s role …
S75
Session — – The need for inclusion of diverse views, not just representation
S76
Open Forum #71 Advancing Rights-Respecting AI Governance and Digital Inclusion through G7 and G20 — Gilwald contends that current digital inclusion challenges are primarily demand-side issues rather than infrastructure p…
S77
Building Public Interest AI Catalytic Funding for Equitable Compute Access — Need to develop concrete public interest frameworks covering models, talent, and data sharing beyond just compute
S78
Opening address of the co-chairs of the AI Governance Dialogue — While this transcript captures only the opening remarks of the AI Governance Dialogue, the key comments identified estab…
S79
AI Governance Dialogue: Steering the future of AI — Doreen Bogdan Martin: Thank you. And we now have a chance together to reflect on AI governance with someone who has a un…
S80
Panel Discussion Inclusion Innovation & the Future of AI — However, Ball acknowledged that proactive governance may be necessary for addressing “tail events” – low-probability, hi…
S81
Panel 3 – Legal and Regulatory Tools to Reduce Risks and Strengthen Resilience  — Submarine cables should be classified and treated as critical infrastructure
S82
Dynamic Coalition Collaborative Session — Legal and regulatory | Cybersecurity | Development The speaker outlines a comprehensive framework for AI governance tha…
S83
Building Sovereign and Responsible AI Beyond Proof of Concepts — Theresa describes emerging UK regulations targeting high‑risk AI, including transparency, explainability and third‑party…
S84
GPAI: A Multistakeholder Initiative on Trustworthy AI | IGF 2023 Open Forum #111 — Alan Paic:Yes, it was not about further countries joining. Well, I can also mention that. So we do have a membership pro…
S85
Networking Session #74 Digital Innovations Forum- Solutions for the Offline People — Balancing government-funded projects with maintaining market competitiveness
S86
Main Session | Dynamic Coalitions — June Paris: Can you hear me? Yes. Okay. Please, go ahead, we’re looking forward to hearing you talking about bridging di…
S87
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Speaker 1
3 arguments · 158 words per minute · 938 words · 355 seconds
Argument 1
AI must be evaluated on who it benefits and under what regulatory rules.
EXPLANATION
Speaker 1 frames the central question of the summit as determining the beneficiaries of artificial intelligence and establishing appropriate governance frameworks. This sets the agenda for discussing equity, policy, and impact.
EVIDENCE
Speaker 1 states that the most defining question of our time is who AI really benefits and with what rules, framing the need to consider beneficiaries and regulatory frameworks. [4]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The IGF Main Session on Artificial Intelligence stresses that AI governance must consider who is using, who benefits, and who bears the risks, directly supporting this framing [S13].
MAJOR DISCUSSION POINT
Defining AI beneficiaries and governance
Argument 2
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity.
EXPLANATION
Speaker 1 argues that equitable AI requires more than diverse datasets; it also needs universal access to computational resources, common standards, supportive policies, and clear cross‑border regulations. This broader view of inclusion links technical and regulatory dimensions.
EVIDENCE
Speaker 1 explains that inclusion in AI extends beyond equitable data sets to include access to compute, standards, policy frameworks, and regulatory clarity across countries. [9-13]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The IGF inclusion pillar highlights a broad definition of AI inclusion that covers standards, policy frameworks and cross-border regulations beyond data sets [S16]; discussions on under-represented languages underline the need for linguistic and compute inclusion [S18]; the Nepal session shows language-AI platforms improving access, illustrating inclusion beyond data [S30]; and the Democratizing AI dialogue notes challenges of sharing compute resources, reinforcing the broader inclusion view [S27].
MAJOR DISCUSSION POINT
Broad definition of AI inclusion
AGREED WITH
Ivana Bartoletti
Argument 3
Policy must balance the trade‑offs between excellence and inclusion in AI development.
EXPLANATION
Speaker 1 highlights that achieving high performance (excellence) can conflict with equitable access (inclusion), and that policymakers need to navigate these competing priorities. This sets up the panel’s focus on navigating trade‑offs.
EVIDENCE
Speaker 1 notes that the conversation will focus on trade-offs between excellence and inclusion, highlighting the challenge of balancing high performance with equitable access in policy design. [15]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The consumer-protection forum stresses striking the right balance between regulation (excellence) and innovation (inclusion) for fairness [S15]; the IGF Policy Network notes the importance of balancing regulation with fostering innovation [S21]; and the cybersecurity-in-AI session discusses balancing innovation and risk, echoing the trade-off theme [S14].
MAJOR DISCUSSION POINT
Balancing excellence and inclusion
Dean
3 arguments · 169 words per minute · 1310 words · 464 seconds
Argument 1
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy.
EXPLANATION
Dean contends that many AI risks can be addressed through current liability, product, and other regulations, and that new AI‑specific laws should only be introduced when clear gaps are demonstrated. He flips the usual presumption, placing the onus on proponents of regulation.
EVIDENCE
Dean argues that existing bodies of law, such as liability doctrines and product regulations, should be applied to AI, and that the presumption should be that current law is sufficient, placing the burden of proof on those seeking new regulation. [24-33] He also states that we should presume existing law is sufficient and that the burden of proof should be on the person who wants regulation to show why existing law does not work. [40-44]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The Panel Discussion on Inclusion, Innovation & the Future of AI explicitly states that existing legal frameworks should be presumed sufficient until proven otherwise, placing the burden of proof on those seeking new rules [S22]; the cybersecurity-in-AI session also highlights reliance on existing liability doctrines for AI software [S14].
MAJOR DISCUSSION POINT
Presumption of adequacy of existing law
DISAGREED WITH
Gabriela
Argument 2
Proactive governance is required for low‑probability, high‑impact tail events associated with AI.
EXPLANATION
Dean identifies catastrophic scenarios, likening AI risks to pandemics, and argues that such tail events justify anticipatory regulatory measures, including transparency laws, before clear threats materialize.
EVIDENCE
Dean identifies tail events, low-probability but high-impact scenarios such as pandemics, as areas where proactive AI governance is needed, and he has advocated for transparency laws to address such risks. [34-38]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same panel discussion calls for anticipatory governance of low-probability, high-impact tail events and mentions support for transparency laws [S22]; the ‘From principles to practice’ session discusses managing global-scale AI challenges, aligning with proactive tail-risk governance [S24].
MAJOR DISCUSSION POINT
Governance for AI tail risks
DISAGREED WITH
Ivana Bartoletti
Argument 3
AI compute infrastructure should be classified as critical national infrastructure.
EXPLANATION
Dean compares data centers powering frontier AI to ports and railroads, arguing they are essential to national security and economic competitiveness, and should be treated as critical infrastructure with public‑private partnership support.
EVIDENCE
Dean states that data centers powering frontier AI systems are comparable to ports or railroads and should be classified as critical infrastructure, citing his role in shaping the U.S. AI action plan and export program. [111-115]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Panel participants argue that AI compute facilities should be treated as critical national infrastructure, akin to ports or railways [S22]; the Sovereign AI for India briefing identifies compute as a strategic bottleneck, underscoring its critical status [S25]; and the Democratizing AI dialogue notes foundational compute resources as a major challenge, supporting the critical-infrastructure framing [S27].
MAJOR DISCUSSION POINT
AI compute as critical infrastructure
AGREED WITH
Speaker 1
DISAGREED WITH
Speaker 1, Gabriela
Gabriela
3 arguments · 143 words per minute · 1299 words · 544 seconds
Argument 1
AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
EXPLANATION
Gabriela emphasizes that effective AI governance requires coordinated funding, incentive mechanisms, institutional support, and robust infrastructure, forming a holistic ecosystem that nurtures innovation.
EVIDENCE
Gabriela says that AI policy must be an ecosystem comprising investments, incentives, institutions, and infrastructure, not merely regulation, to nurture the technology. [48-52]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The AI Policy Research Roadmap emphasizes an ecosystem approach combining investments, incentives, institutions and infrastructure for evidence-based policy [S19]; the ‘Science as a Growth Engine’ discussion highlights the role of government funding and institutional support in translating research into impact, reinforcing the ecosystem view [S28]; AI sandboxes are cited as requiring governance resources, clear eligibility and institutional frameworks, part of such an ecosystem [S29]; the panel also presents a counterpoint from Dean favoring minimal new regulation, illustrating the debate [S22].
MAJOR DISCUSSION POINT
Ecosystem approach to AI policy
AGREED WITH
Speaker 1
DISAGREED WITH
Dean
Argument 2
Government has historically driven major technological breakthroughs and should continue to fund AI research to prevent market distortions.
EXPLANATION
Gabriela points to DARPA and the creation of the Internet as examples of public‑sector innovation, arguing that similar public investment is needed to avoid reliance on private‑sector monopolies and to ensure open, inclusive AI development.
EVIDENCE
Gabriela notes that the U.S. government funded DARPA and the Internet, arguing that similar public investment is essential to avoid market distortions given AI's natural-monopoly tendencies. [53-55]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Science as a Growth Engine’ session notes that government funding of basic research has historically avoided market distortions and spurred breakthroughs, supporting continued public investment in AI [S28]; the Sovereign AI for India briefing references historic public-sector successes like DARPA and the Internet as models for AI investment [S25]; and the panel discussion contrasts Dean’s minimal-regulation stance with calls for comprehensive government intervention (Gabriela) [S22].
MAJOR DISCUSSION POINT
Public sector role in AI innovation
Argument 3
Education systems must be overhauled to provide AI literacy and skills for teachers and students.
EXPLANATION
Gabriela calls for modernizing curricula, investing in teacher training, and reducing administrative burdens so educators can integrate AI tools effectively, arguing that without such reforms the AI workforce pipeline will be insufficient.
EVIDENCE
Gabriela criticizes the lack of modernization in education, calling for upgraded pedagogy, teacher training, and skill development to prepare people for AI-driven futures. [171-183]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Education meets AI’ report stresses the need for AI literacy, digital skills, and curriculum reform to keep pace with AI-driven job markets [S31]; the Policy Network notes that old educational systems need to change to support AI competencies [S21]; and the Nepal engagement session demonstrates how AI language platforms can empower local officials, illustrating the benefits of AI-enabled education [S30].
MAJOR DISCUSSION POINT
AI education reform
AGREED WITH
Speaker 1, Ivana Bartoletti
Ivana Bartoletti
3 arguments · 143 words per minute · 1426 words · 594 seconds
Argument 1
AI governance should embed privacy, security, and legal safeguards from design through deployment, using a techno‑legal approach.
EXPLANATION
Ivana stresses that products need built‑in privacy and security controls, and that translating legal requirements into technical tools—exemplified by India’s techno‑legal model—is essential for responsible AI.
EVIDENCE
Ivana explains that effective AI governance requires embedding privacy, security, and legal protections into products from the design stage and continuing monitoring in production, using a techno-legal approach as exemplified by India’s initiatives. [70-76]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The ‘Why science matters in global AI governance’ briefing discusses techno-legal approaches that embed privacy, security and legal requirements into AI products from design to production [S33]; the cybersecurity ethics session argues that legal measures alone are insufficient, underscoring the need for integrated technical safeguards [S23]; and Ivana’s parliamentary remarks highlight the role of lawmakers in shaping such techno-legal frameworks [S32].
MAJOR DISCUSSION POINT
Techno‑legal embedding of safeguards
Argument 2
Trust in agentic AI requires mechanisms that let users intervene in autonomous decisions to prevent cascading harms.
EXPLANATION
Ivana argues that users should have the ability to stop or override AI agents when outcomes are undesirable, addressing risks like hallucinations and model drift, thereby building a trust stack.
EVIDENCE
Ivana describes the need for trust in agentic AI by giving users the right to intervene in autonomous decisions, preventing cascading hallucinations and model drift, as outlined in her recent World Economic Forum article. [80-83]
MAJOR DISCUSSION POINT
User intervention for trustworthy AI
Argument 3
Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
EXPLANATION
Ivana acknowledges AI’s benefits and risks, urging a move beyond risk‑control frameworks toward proactive design of fair and inclusive systems, integrating ethical considerations into AI development.
EVIDENCE
Ivana acknowledges both the benefits and the risks of AI, arguing that governance should move beyond pure risk management toward engineering fairness and inclusivity while still managing hazards. [94-107]
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The same ‘Why science matters in global AI governance’ source calls for moving beyond risk-control frameworks toward proactive engineering of fairness and inclusivity in AI systems [S33]; the cybersecurity ethics discussion also emphasizes fairness and ethical dimensions alongside risk management [S23].
MAJOR DISCUSSION POINT
Fairness and inclusivity in AI governance
DISAGREED WITH
Dean
Agreements
Agreement Points
Inclusion in AI must be understood broadly, covering not only data representation but also access to compute, standards, policy frameworks and the need to embed fairness and inclusivity throughout AI systems.
Speakers: Speaker 1, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. Fairness and inclusivity in AI governance
Both Speaker 1 and Ivana stress that AI inclusion is more than equitable datasets; it requires universal compute access, standards, supportive policies and the engineering of fairness and inclusivity into AI products [9-13][104-106].
POLICY CONTEXT (KNOWLEDGE BASE)
This aligns with the inclusive AI agenda highlighted in the AI Governance Dialogue emphasizing unprecedented participation and inclusion-by-design, and with calls to address infrastructure, data and skill gaps as barriers to inclusive policy [S53][S54][S59][S68].
AI compute infrastructure should be treated as critical national infrastructure and receive public‑private partnership support.
Speakers: Dean, Speaker 1
AI compute infrastructure should be classified as critical national infrastructure.
Dean argues that data centers powering frontier AI are akin to ports or railways and must be regarded as critical infrastructure, a view echoed by Speaker 1’s question about treating compute as critical infrastructure for national inclusion [111-114][108-110].
POLICY CONTEXT (KNOWLEDGE BASE)
Several policy discussions treat AI compute as critical infrastructure, citing the need for public-private partnerships and security considerations, e.g., the International CIIP Handbook definition of critical sectors [S58], the OpenAI positioning of AI as critical infrastructure [S74], and large-scale compute alliance initiatives [S71][S69][S72].
Education systems and workforce skills must be upgraded to provide AI literacy for teachers, students and employees.
Speakers: Gabriela, Speaker 1, Ivana Bartoletti
Education systems must be overhauled to provide AI literacy and skills for teachers and students.
Gabriela calls for modernizing curricula, teacher training and reducing administrative burdens; Speaker 1 highlights the need for mindset, skill sets and tool sets; Ivana stresses bringing employees along and upskilling them, all pointing to a shared demand for AI-focused education and capacity building [171-183][126-130][84-88].
POLICY CONTEXT (KNOWLEDGE BASE)
The importance of AI literacy and upskilling is reflected in the AI Impact Summit 2026 emphasis on skills investment [S61], and multiple analyses urging curriculum adaptation and digital literacy programs [S62][S63][S64].
Effective AI policy requires an ecosystem of investments, incentives, institutions and infrastructure rather than relying solely on regulation.
Speakers: Gabriela, Speaker 1
AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Gabriela describes AI policy as an ecosystem of funding, incentives and institutions, while Speaker 1 frames the policy challenge as balancing excellence and inclusion, both moving beyond a narrow regulatory focus [48-52][15].
POLICY CONTEXT (KNOWLEDGE BASE)
Panel discussions contrast minimal new regulation with comprehensive government-driven investment and institutional frameworks, supporting an ecosystem approach [S51][S69][S70][S68].
Similar Viewpoints
Both see a strong role for government partnership in building and governing AI infrastructure, whether as critical national assets or as part of a broader innovation ecosystem [111-114][48-52].
Speakers: Dean, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Both stress that AI governance must go beyond compliance and data‑set equity to embed fairness, inclusivity and broader systemic access [9-13][104-106].
Speakers: Speaker 1, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. Fairness and inclusivity in AI governance
Both highlight the necessity of up‑skilling people—teachers, employees and the broader workforce—to ensure responsible AI use and governance [171-183][84-88].
Speakers: Gabriela, Ivana Bartoletti
Education systems must be overhauled to provide AI literacy and skills for teachers and students. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Unexpected Consensus
Government as a partner for AI compute infrastructure
Speakers: Dean, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Dean frames compute centers as critical infrastructure needing public partnership, while Gabriela, typically focused on broader ecosystem issues, also stresses direct government investment to avoid market distortions, an alignment not explicitly anticipated given their different primary emphases [111-114][48-52].
POLICY CONTEXT (KNOWLEDGE BASE)
Public-private partnership models for compute infrastructure are advocated in the Genesis Project and the Global Alliance for AI, positioning governments as key partners [S69][S71][S51].
Recognition of AI’s transformative power and the need for proactive governance
Speakers: Dean, Ivana Bartoletti
AI compute infrastructure should be classified as critical national infrastructure. Fairness and inclusivity in AI governance
Dean warns that AI systems will become smarter than humans across all cognitive labor, while Ivana stresses trust in agentic AI and the necessity of embedding safeguards; both converge on the view that AI’s future impact is profound and requires forward-looking governance, a point not overtly shared earlier in the discussion [159-161][80-83].
POLICY CONTEXT (KNOWLEDGE BASE)
Speakers repeatedly stress AI’s transformative potential and the need for proactive, forward-looking governance, as in the AI Governance Dialogue opening remarks and OpenAI’s New Deal analogy [S54][S66][S74][S61].
Overall Assessment

The panel shows strong convergence on four themes: a broad, systemic view of inclusion; the classification of AI compute as critical infrastructure; the urgent need to overhaul education and up‑skill the workforce; and the requirement for an ecosystem‑based policy approach that blends public investment with regulatory clarity.

Consensus across speakers is high, indicating a shared understanding that AI governance cannot rely solely on narrow regulation but must integrate infrastructure, education, and inclusive design. This consensus suggests that future policy initiatives should prioritize public-private partnerships, critical-infrastructure designation for compute resources, and large-scale capacity-building programs to achieve equitable AI benefits.

Differences
Different Viewpoints
Presumption of adequacy of existing legal frameworks versus the need for new, broader AI policy interventions
Speakers: Dean, Gabriela
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation.
Dean argues that current liability, product, and other statutes are enough for AI governance and that any new regulation must first prove a gap in existing law [24-33][40-44]. Gabriela counters that effective AI governance requires a holistic ecosystem of public investment, incentives and possibly new regulatory tools, indicating that existing frameworks are insufficient on their own [48-52][53-55].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates at the Inclusion Innovation panel show one side favoring existing frameworks with burden of proof [S51], while others argue for new policy tools, echoing broader discussions on whether existing competition law suffices [S48][S60].
Focus on risk‑centric, tail‑event proactive regulation versus a shift toward fairness and inclusivity beyond pure risk management
Speakers: Dean, Ivana Bartoletti
Proactive governance is required for low‑probability, high‑impact tail events associated with AI. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Dean emphasizes anticipatory regulation for catastrophic, low-probability AI scenarios (e.g., pandemics) and supports transparency laws as a pre-emptive measure [34-38]. Ivana argues that AI governance should move beyond a narrow risk-control lens to embed fairness and inclusivity throughout design and deployment, suggesting a broader ethical-technical approach [102-107].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent literature contrasts risk-based, tail-event regulation with rights-based, fairness-oriented approaches, highlighting a shift toward preventive, inclusive governance [S65][S66][S67][S68][S50].
Role of compute infrastructure: critical national infrastructure versus a component of broader inclusion without explicit critical-infrastructure status
Speakers: Dean, Speaker 1, Gabriela
AI compute infrastructure should be classified as critical national infrastructure. Inclusion in AI includes access to compute as one of many elements needed for equitable AI. AI policy must be an ecosystem that includes investments and infrastructure, not limited to regulation.
Dean asserts that data centers powering frontier AI are akin to ports or railroads and should be treated as critical infrastructure, warranting public-private partnership and subsidies [111-115]. Speaker 1 and Gabriela treat compute access as part of a broader inclusion and ecosystem agenda, without explicitly framing it as critical infrastructure, focusing instead on investment and policy support [9-13][48-52].
POLICY CONTEXT (KNOWLEDGE BASE)
While some sources label AI compute as critical infrastructure requiring protection [S56][S57][S58][S72][S73][S74], other discussions frame it as one element of broader inclusion without explicit critical status [S53][S59].
Unexpected Differences
Government’s role in preventing market distortions versus reliance on existing competition law
Speakers: Dean, Gabriela
Existing legal frameworks should be presumed sufficient for AI, with the burden of proof on regulators. Government has historically driven major technological breakthroughs and should continue to fund AI research to prevent market distortions.
Dean’s stance that existing law already handles competition and that new regulation should only be introduced with clear evidence of a gap [24-33][40-44] contrasts sharply with Gabriela’s claim that active government investment and policy are needed to avoid monopolistic market distortions and to nurture innovation [53-55]. This divergence is unexpected given both speakers operate within policy circles but adopt opposite views on the necessity of new governmental intervention.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between government intervention to avoid market distortions and reliance on competition law is discussed in competition policy analyses and panel debates [S48][S49][S51][S68].
Treating AI compute as a strategic critical infrastructure versus viewing it as one element of broader inclusion without explicit critical‑infrastructure framing
Speakers: Dean, Speaker 1
AI compute infrastructure should be classified as critical national infrastructure. Inclusion in AI includes access to compute as part of a wider set of inclusion measures.
Dean explicitly categorizes AI data centers as future critical infrastructure comparable to ports or railroads [111-115], while Speaker 1 mentions compute access only as one facet of inclusion without assigning it critical-infrastructure status [9-13]. The mismatch in framing (strategic asset versus inclusion component) is unexpected given the shared focus on inclusion.
POLICY CONTEXT (KNOWLEDGE BASE)
Similar to the previous point, the strategic framing of AI compute as critical infrastructure is supported by policy handbooks and industry statements, while inclusion-by-design narratives treat it as part of a wider ecosystem [S58][S59][S71][S74].
Overall Assessment

The panel displayed moderate disagreement centered on the adequacy of existing legal regimes versus the need for new, ecosystem‑wide policy measures, and on the emphasis of risk‑centric regulation versus fairness‑centric governance. While all participants agreed on the overarching goal of inclusive, trustworthy AI, they diverged on the mechanisms—legal presumption, proactive tail‑risk rules, public investment, techno‑legal embedding, and the strategic classification of compute resources.

The level of disagreement is moderate but consequential: differing assumptions about legal sufficiency and the role of government could shape whether AI governance leans toward incremental adaptation of current law or toward a more transformative, investment‑driven framework. These divergences will affect policy design, allocation of resources, and the speed at which inclusive AI ecosystems can be built.

Partial Agreements
All three speakers share the goal of making AI inclusive and equitable, but differ on the primary means: Speaker 1 emphasizes a broad definition of inclusion covering technical and regulatory dimensions [9-13]; Gabriela stresses an ecosystem of public investment and institutional support [48-52]; Ivana focuses on embedding fairness and inclusivity through techno‑legal design and governance practices [102-107].
Speakers: Speaker 1, Gabriela, Ivana Bartoletti
Inclusion in AI goes beyond data representation to include compute access, standards, policy frameworks, and regulatory clarity. AI policy must be built as an ecosystem of investments, incentives, institutions, and infrastructure, not limited to regulation. Governance should shift from pure risk management to engineering fairness and inclusivity while still addressing AI hazards.
Both agree that law and regulation play a role in AI governance, yet Dean leans on the sufficiency of current statutes, whereas Ivana advocates for translating legal requirements into technical safeguards, indicating a shared recognition of legal relevance but divergent implementation pathways ([24-33][40-44] versus [70-76]).
Speakers: Dean, Ivana Bartoletti
Existing legal frameworks should be presumed sufficient for AI, with regulators bearing the burden of proof to show inadequacy. AI governance should embed privacy, security, and legal safeguards from design through deployment, using a techno‑legal approach.
Takeaways
Key takeaways
Existing legal frameworks are generally sufficient for AI; regulators must bear the burden of proof to show otherwise (Dean).
Proactive governance is needed for low-probability, high-impact tail events such as catastrophic AI failures (Dean).
AI governance should be a strategic capability that embeds privacy, security, and techno-legal tools rather than merely a compliance checklist (Ivana).
Inclusion must be pursued both as an ethical imperative and as a driver of economic competitiveness; it extends beyond data representation to access to compute, standards, and policy frameworks (Gabriela).
Public-private partnerships and government investment are essential to nurture the AI ecosystem, address market distortions, and support open research (Gabriela).
AI compute facilities (data centers) should be classified as critical national infrastructure, with governments acting as partners in their development (Dean).
AI technologies exhibit natural-monopoly tendencies; policy must curb concentration to preserve competition while leveraging falling token costs that foster competitive dynamics (Dean, Gabriela).
Education systems and teacher training are major blind spots; curricula need to be upgraded to prepare students and workers for AI-augmented futures (Gabriela).
Trust in agentic AI requires design mechanisms for human intervention, safeguards against hallucinations, and continuous monitoring in production (Ivana).
Key blind spots identified: under-estimating the strategic importance of frontier models and the lack of global consensus on AI “red lines” or ethical boundaries (Dean, Ivana).
Resolutions and action items
- Treat AI compute infrastructure as critical national infrastructure and explore government‑partnered investment, especially in the Global South (Dean).
- Encourage governments to apply existing liability and product regulations to AI, only adding new rules for demonstrated tail‑event risks (Dean).
- Develop AI governance capabilities within organizations that integrate privacy, security, and techno‑legal tools throughout the product lifecycle (Ivana).
- Promote public‑private partnership models to fund research, open innovation, and address market distortions caused by AI concentration (Gabriela).
- Launch initiatives to upgrade school curricula and provide teacher training on AI tools and pedagogy (Gabriela).
- Create mechanisms for human override and monitoring of agentic AI systems to ensure trust and mitigate model drift (Ivana).
Unresolved issues
- How to concretely balance self‑regulation with innovation‑first approaches without harming national competitiveness.
- Specific policy instruments needed to curb AI market concentration and prevent oligopolistic dominance.
- Mechanisms for achieving global consensus on AI “red lines” and ethical boundaries.
- Funding models and governance structures for scaling compute infrastructure in developing regions.
- Detailed implementation plans for integrating AI education and upskilling across diverse education systems.
- Operational guidelines for applying existing legal frameworks to emerging AI use‑cases.
Suggested compromises
- Presume existing law is sufficient for most AI applications, but allow targeted, proactive regulation for identified tail‑event risks (Dean).
- Frame inclusion as both an ethical duty and a competitive advantage, aligning social goals with economic incentives (Gabriela).
- Adopt public‑private partnership approaches that combine government funding with private sector innovation to mitigate market distortions (Gabriela).
- Recognize AI compute as critical infrastructure while avoiding overly restrictive control, positioning governments as partners rather than controllers (Dean).
- Leverage falling token prices to foster competition while implementing policies to counteract centralizing tendencies (Dean).
Thought Provoking Comments
We should presume that existing law is sufficient and place the burden of proof on those who want new regulation to show why current law doesn’t work.
Challenges the common assumption that AI is a regulatory vacuum and flips the default stance, prompting a more evidence‑based approach to new legislation.
Set the regulatory framing for the discussion, leading other panelists to consider how existing legal tools can be leveraged rather than defaulting to new, possibly heavy‑handed regulations.
Speaker: Dean
AI technologies are natural monopolies; we need government intervention to address market concentration and prevent a ‘lucky few’ scenario.
Introduces the economic concept of natural monopoly into AI policy, linking market structure to inclusion and highlighting the risk of oligopolistic control.
Shifted the conversation from pure regulation to ecosystem design, prompting later discussion on competition, diffusion of technology, and the role of public‑private partnerships.
Speaker: Gabriela
AI governance is a strategic capability that goes beyond compliance—it requires embedding privacy, security, and resilience into products, monitoring them in production, and giving people the right to intervene with agentic AI.
Expands the notion of governance from a checklist to an ongoing, technical‑legal practice, introducing concepts like trust stacks and agentic AI control.
Deepened the technical dimension of the debate, influencing subsequent remarks about techno‑legal approaches and the need for measurable accountability at scale.
Speaker: Ivana Bartoletti
Data centers that power frontier AI should be treated as critical infrastructure, like ports or railroads.
Reframes compute resources as essential public assets, moving the policy conversation toward infrastructure investment and sovereignty concerns.
Prompted a discussion on national strategies for compute, linking it to earlier points about inclusion, access, and the role of governments in building AI capacity.
Speaker: Dean
Inclusion should be seen both as an ethical imperative and a competitive strategy; market concentration breaks the ‘diffusion machine’ that spreads innovation broadly.
Synthesizes ethical and economic arguments, highlighting how concentration hampers the spread of AI benefits and calling for policies that boost diffusion.
Guided the panel toward concrete policy levers—tax, incentives, anti‑trust—to ensure equitable AI diffusion, and reinforced the earlier monopoly discussion.
Speaker: Gabriela
Rejecting frontier models is a blind spot; the most powerful future uses will come from capabilities we can’t yet name, and investing in them opens opportunities for the Global South.
Challenges the notion that cheaper, smaller models are sufficient, emphasizing the strategic importance of cutting‑edge AI for future innovation and inclusion.
Reoriented the conversation toward long‑term investment in high‑performance AI, influencing later remarks about education, skills, and the need to prepare societies for advanced models.
Speaker: Dean
Our education system has not been upgraded to teach AI concepts or equip teachers; without this pipeline, the promised AI future cannot be realized.
Identifies a systemic blind spot—human capital development—that underpins all other policy discussions about AI readiness.
Added a concrete, actionable focus on curriculum reform and teacher training, linking back to earlier points on inclusion, skills, and national competitiveness.
Speaker: Gabriela
We lack globally agreed ‘red lines’ for AI; without shared boundaries, we risk divergent ethical standards and geopolitical friction.
Raises the geopolitical dimension of AI governance, pointing out the absence of international consensus on unacceptable uses.
Expanded the scope of the debate to include global coordination, influencing the final reflections on political economy and the need for shared norms.
Speaker: Ivana Bartoletti
Overall Assessment

The discussion was shaped by a series of pivotal insights that moved it from a generic talk about AI policy to a nuanced exploration of regulation, market structure, infrastructure, governance practice, and human capital. Dean’s challenge to the presumption of regulatory gaps and his framing of compute as critical infrastructure set a legal‑and‑strategic foundation. Gabriela’s emphasis on natural monopolies and the dual nature of inclusion as ethical and competitive introduced economic depth and highlighted concentration risks. Ivana’s articulation of governance as a strategic, techno‑legal capability and the need for global red lines broadened the conversation to operational and geopolitical layers. Collectively, these comments redirected the dialogue toward concrete policy levers, long‑term investment in frontier models, and the essential role of education, thereby deepening the analysis and outlining a comprehensive roadmap for inclusive, competitive, and responsibly governed AI.

Follow-up Questions
How can governments proactively govern tail events (low‑probability, high‑impact AI risks) that could have catastrophic consequences?
Dean highlighted the need for proactive governance of tail events, indicating that existing law may be insufficient and that specific policy mechanisms are required.
Speaker: Dean
What specific policy tools are needed to address market distortions caused by AI natural monopolies and concentration of power?
Gabriela described AI technologies as natural monopolies leading to market distortions and called for policies to mitigate these effects.
Speaker: Gabriela
How can organizations effectively monitor AI systems in production and intervene when harms arise?
Ivana emphasized the challenge of post‑deployment monitoring and the need for tools that allow timely intervention to prevent or mitigate AI‑related harms.
Speaker: Ivana Bartoletti
What design principles should constitute a "trust stack" for agentic AI, enabling users to intervene or override autonomous decisions?
Ivana referenced her work on designing trust for agentic AI and the importance of giving users control over autonomous systems.
Speaker: Ivana Bartoletti
Should compute infrastructure (e.g., AI data centers) be classified as critical national infrastructure, and what regulatory regime should apply?
Dean argued that AI compute facilities are akin to ports or railroads and should be treated as critical infrastructure, prompting further policy development.
Speaker: Dean
Should inclusion be framed primarily as an ethical imperative, a competitive strategy, or both, and how can policies balance these perspectives?
Gabriela questioned the framing of inclusion and its relationship to competitiveness, suggesting a need for integrated policy approaches.
Speaker: Gabriela
What are the blind spots regarding reliance on frontier AI models versus cheaper, less powerful alternatives?
Dean identified a blind spot in assuming cheaper models are sufficient, stressing the importance of understanding the unique value of frontier models.
Speaker: Dean
How can education systems and teacher training be upgraded to prepare students and educators for AI readiness?
Gabriela pointed out the lack of pedagogical reform and teacher support for AI integration, indicating a need for systemic educational research and investment.
Speaker: Gabriela
How can the global community agree on AI "red lines"—unacceptable uses—and enforce them across jurisdictions?
Ivana noted the absence of worldwide consensus on AI red lines, highlighting a gap in international governance research.
Speaker: Ivana Bartoletti
What mechanisms can prevent concentration of power in AI and promote competitive dynamics while avoiding centralizing tendencies?
Dean warned about centralizing tendencies and the concentration of AI power, calling for anti‑trust and competition‑focused research.
Speaker: Dean
How can the diffusion of AI innovations be accelerated to reach lagging economies and communities?
Gabriela described the broken diffusion machine caused by market concentration and called for research into ways to speed up equitable diffusion of AI benefits.
Speaker: Gabriela

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.