Panel Discussion Inclusion Innovation & the Future of AI
20 Feb 2026 16:00h - 17:00h
Session at a glance
Summary
This panel discussion focused on navigating the trade-offs between excellence and inclusion in AI governance, examining how policymakers can foster innovation while ensuring equitable access and benefits. The conversation explored the tension between self-regulation and innovation-first approaches in AI policy development.
Dean W. Ball argued that existing legal frameworks should be the starting point for AI governance, with the burden of proof on those advocating for new regulations to demonstrate why current laws are insufficient. He emphasized that AI is already regulated through various existing mechanisms and suggested that proactive governance should focus primarily on potential catastrophic risks or “tail events.” Ball also advocated for treating AI compute infrastructure as critical infrastructure, similar to ports or railroads.
Gabriela Ramos challenged the assumption that government intervention creates market distortions, pointing out that foundational AI technologies were built on government-funded research. She argued that current AI markets show natural monopoly tendencies that require government intervention to prevent concentration of power among a few players. Ramos emphasized that inclusion and competitiveness are not opposing forces but complementary elements necessary for sustainable economic growth.
Ivana Bartoletti discussed the practical implementation of AI governance in organizations, describing it as a strategic capability that goes beyond risk management and compliance. She stressed the importance of embedding privacy, security, and fairness into AI systems while bringing employees along in the transformation process. The panelists identified significant blind spots in current AI discourse, including underestimating the transformative potential of frontier models, insufficient investment in education system upgrades, and the lack of global alignment on ethical red lines for AI development. The discussion concluded that effective AI governance requires a comprehensive approach combining infrastructure development, competitive dynamics, and institutional evolution.
Key points
Major Discussion Points:
– AI Governance Framework and Regulatory Approach: The panel debated whether to rely on existing legal frameworks versus creating new AI-specific regulations, with Dean advocating for applying current laws first and only creating new regulations when clear gaps are demonstrated, while others emphasized the need for proactive governance.
– Government’s Role in AI Innovation and Market Dynamics: Discussion centered on balancing government intervention with private sector innovation, addressing market concentration concerns, and the government’s historical role in foundational AI research (like DARPA and the internet), while preventing market distortions and monopolistic tendencies.
– Practical AI Governance Implementation: Ivana Bartoletti detailed how organizations must move beyond policy statements to measurable accountability through comprehensive governance frameworks that include risk management, employee engagement, technical safeguards, and strategic value creation rather than just compliance.
– Infrastructure and Access as Critical Components: The conversation addressed treating AI compute infrastructure as critical national infrastructure, with Dean highlighting U.S. commitments to subsidize AI data centers in the global south, and the broader challenges of ensuring equitable access to AI capabilities.
– Education and Skills Development: Gabriela emphasized education as a major blind spot, calling for massive upgrades to educational systems and teacher training to prepare people for an AI-driven future, while the moderator noted India’s early adoption of AI as a school subject.
Overall Purpose:
The discussion aimed to explore the complex trade-offs between achieving AI excellence and ensuring AI inclusion, examining how policymakers can navigate regulatory frameworks, market dynamics, and implementation strategies to make AI beneficial for broader populations while maintaining national competitiveness.
Overall Tone:
The discussion maintained a constructive and collaborative tone throughout, with panelists building on each other’s points rather than engaging in adversarial debate. The conversation was forward-looking and solution-oriented, with participants sharing practical experiences and policy recommendations. The tone remained optimistic about AI’s potential while acknowledging serious challenges, and there was a notable spirit of international cooperation and shared responsibility for addressing global AI governance challenges.
Speakers
Speakers from the provided list:
– Moderator: Role as discussion facilitator and host of the panel session
– Dean W. Ball: Works at the frontier of AI policy through the Foundation for American Innovation; formerly worked in the Trump administration at the White House Office of Science and Technology Policy; helped shape the administration’s AI action plan and AI export program
– Gabriela Ramos: Economist; co-chairing the task force on inequalities financial disclosure; expertise in economic policy and inequality issues
– Ivana Bartoletti: Global AI governance strategy professional at Wipro; specializes in translating AI policy into practice; expertise in AI governance, privacy, and responsible AI frameworks; recently published on agentic AI design for trust
Additional speakers:
– No additional speakers were identified beyond those in the provided speakers names list
Full session report
This panel discussion examined the complex relationship between achieving AI excellence and ensuring AI inclusion, bringing together diverse perspectives on regulatory frameworks, market dynamics, and implementation strategies. The conversation revealed fundamental tensions between different approaches to AI governance and the role of government intervention in emerging AI markets.
Regulatory Philosophy and Governance Frameworks
Dean W. Ball presented a distinctive perspective on AI regulation, arguing that existing legal frameworks should serve as the foundation for AI governance rather than assuming new regulations are automatically necessary. Ball emphasized that “the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.” He noted that AI is already regulated through various existing mechanisms, including liability doctrine and product regulations, and highlighted the United States’ common law tradition as a robust foundation for addressing AI-related issues.
However, Ball acknowledged that proactive governance may be necessary for addressing “tail events” – low-probability, high-impact scenarios. He mentioned writing “supportively about transparency laws” while maintaining his general preference for existing legal frameworks over new regulatory approaches.
Gabriela Ramos offered a contrasting perspective that challenged the binary framing of government intervention versus market freedom. Drawing on historical examples, she highlighted how foundational AI technologies emerged from government-funded research, particularly citing DARPA’s role in creating the internet. This analysis reframed government intervention from a potential impediment to innovation to a proven catalyst for technological advancement.
Ramos argued that current AI markets exhibit characteristics that require government intervention to prevent harmful concentration of power. She described AI technologies as functioning like “natural monopolies,” where early developers gain overwhelming market advantages, leading to oligopolistic market structures.
Market Dynamics and Access Challenges
Ramos introduced the concept of a “broken diffusion machine” in AI innovation, arguing that when market concentration occurs at the top levels of AI development, the traditional mechanism by which innovations create widespread benefits becomes dysfunctional. She emphasized that a few entities controlling compute capacity, talent acquisition, and financial resources prevents broader distribution of AI benefits.
Ball focused on the importance of frontier AI models, warning against dismissing advanced capabilities in favor of “good enough” alternatives. He argued that frontier AI represents the development of systems that will be “smarter than humans at all cognitive labour,” opening possibilities for applications and “concepts that you will invent” that don’t yet exist. Ball noted that the US plans to spend “a trillion dollars this year” on AI development, highlighting the scale of investment in advanced capabilities.
The discussion revealed different theories about how AI benefits should diffuse through society, with Ball emphasizing access to cutting-edge capabilities and Ramos focusing on addressing market concentration to enable broader participation.
Organizational Implementation and Governance
Ivana Bartoletti, from Wipro, provided insights into translating AI governance principles into organizational practice. She described how companies initially scrambled to address employee access to AI tools following the widespread adoption of generative AI, leading to the establishment of governance boards and ethics committees.
Bartoletti emphasized a shift from traditional risk management approaches to “AI for good,” focusing on engineering fairness and inclusivity directly into systems rather than treating these as afterthoughts. She stressed the importance of bringing employees along in AI transformation, leveraging their expertise to develop practical use cases and ensuring workforce preparation for changing work patterns.
She also highlighted the need for design approaches that maintain human agency while enabling the benefits of autonomous systems, particularly as AI systems become more sophisticated and capable of independent decision-making.
Infrastructure and Global Considerations
Ball advocated for treating AI data centers as critical infrastructure comparable to ports or railways. He highlighted the US government’s stated policy to “subsidize the development of AI data centers in the global south,” representing a significant shift towards international cooperation in AI infrastructure development.
The moderator framed AI inclusion as extending beyond equitable representation in datasets to encompass access to compute resources, technical standards, supportive policy frameworks, and clear regulatory guidance. This comprehensive view recognizes that meaningful participation in the AI economy requires addressing multiple layers of access and capability.
Education and Capacity Building
Ramos identified education reform as a critical blind spot in current AI discourse, highlighting a fundamental mismatch between the transformative AI future being developed and educational institutions’ preparedness. She called for massive upgrades to educational pedagogy and teacher training, using the example of choosing neighborhoods based on good schools to illustrate how existing inequalities could be amplified in an AI-driven economy.
The moderator noted India’s early adoption of AI as a school subject in 2019, demonstrating how national education policies can proactively address AI readiness. This example illustrated the possibility of systematic educational reform while acknowledging the global nature of the challenge.
Global Coordination and Ethical Boundaries
Bartoletti raised questions about global coordination in AI governance, particularly regarding the establishment of ethical boundaries for AI development. She noted the absence of international alignment on what AI applications should never be pursued, regardless of technical feasibility.
The discussion touched on the need for international cooperation that addresses fundamental questions about values and principles guiding AI development, including ensuring that AI systems respect diverse languages, dialects, and cultural norms rather than imposing homogeneous approaches.
Key Tensions and Implications
The panel revealed several unresolved tensions that will likely shape future AI governance debates. The fundamental disagreement between Ball’s preference for existing legal frameworks and Ramos’s call for comprehensive government intervention reflects deeper philosophical differences about the appropriate role of government in emerging technology markets.
The tension between Ball’s emphasis on frontier AI capabilities and Ramos’s focus on addressing market concentration represents different theories about how technological benefits diffuse through society. These competing perspectives have significant implications for policy development.
The conversation highlighted the challenge of balancing national competitiveness with international cooperation, while speakers agreed on the importance of global participation in AI development. The panel’s emphasis on comprehensive governance approaches, government investment in infrastructure and research, and education reform suggests potential foundations for effective AI governance frameworks that move beyond simple regulatory approaches to encompass broader ecosystem development.
Session transcript
And it’s such a pleasure to be here with such lovely panelists and an audience who’s possibly going to skip some of the lunchtime to join us today in our discussions. Let me get started by really talking about, you know, we are towards the end of the week. It’s been a fantastic week, lots of conversations. And one thing which I reflect back on most of the conversations has been what is the most defining question of our time, which is who all is artificial intelligence really benefiting and with what rules? If I look at it, AI as enterprise infrastructure, AI as public sector capability, AI even as geopolitical leverage is what we’ve seen across all these days. But more importantly, AI has become a part and parcel of our daily lives.
It stretches everything from making our work life easier to making sure that we get our entertainment as and when and how we require it. And more importantly, from healthcare to hiring to anything you can possibly imagine. When we really focus on inclusion in AI, one thing which has kind of stayed as a thought for the last five days is inclusion in AI is way beyond equitable representation in data sets. It’s, you know, it’s everything. It’s about access to compute. It’s about standards. It’s about having a right policy framework, which encourages everyone, everywhere. And more important, it’s also getting clarity on regulations, which are there across countries, to see how it can really be beneficial.
Now, to take the discussion ahead, today’s conversation is going to be really about trade-offs. Excellence and inclusion. It’s been interesting on how to navigate both these terminologies whenever you think of any policy or a framework. So I’m going to start with my first question to Dean. So Dean, you know, you’ve been working at the frontier of AI policy. You’ve been at the institutional design through the Foundation for American Innovation. There is a lot of growing debate between self-regulation and innovation-first approaches. Where should policymakers really draw the line without really undermining national competitiveness?
So I think it’s a, first of all, thanks for being here. And thank you for having me. It’s an honor to be here. The way I think about this is that, you know, we will govern AI through a very large intersecting web of different things, right? It’s not just going to be one day one bill is going to get passed and that’s going to be the AI bill and then AI is regulated, right? AI is currently regulated today. It’s regulated by many different things. It’s regulated in the United States by things like liability doctrine and a lot of existing product regulations and things like that. So I think step number one for government is let’s take the existing bodies of law, many of which, you know, as in India and the United States, we’re quite proud of.
Many countries, you know, are very proud of their regulatory and legal traditions. We have a common law tradition in the United States that we are proud of. So let’s take those things and let’s figure out how to apply them to AI. And then, you know, the companies, I think, thus far, the major AI labs have been, I think, responsible stewards when it comes to the major risks. Now, I think the area where you might need proactive governance first is, at least in my view, is really this domain of tail events, potential events that could be very serious, have very serious consequences that are relatively unlikely. So, you know, pandemic is an example of a tail event.
And I think AI might have some tail, you know, sort of catastrophic type risks associated with it. And so this is an area where some proactive governance, I think, is needed. And I’ve written supportively about transparency laws in the United States along those lines. So I think that’s where regulation is warranted: when we have a clear and demonstrated threat model and we have, you know, clear evidence that existing law is not sufficient. I think one area, one aspect of AI governance that I often push back on and that I often dispute is there’s this kind of assumption baked in whenever we talk about AI regulation that the existing law is insufficient and that the current status quo is that AI is unregulated in some way.
And I think that should actually be, we should have the opposite presumption. We should presume that existing law is sufficient and that there is some sort of good solution. And then, yeah, it should be: the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.
Thank you, Dean. That’s very interesting that we go with an assumption. And with that, Gabriela, let me move on to you. So, you know, how can governments foster open innovation, going by what Dean said, while minimizing the risk of market distortions?
Well, I think it’s a very nice segue, because I completely agree with Dean that there is a very broad portfolio of policy interventions that is not only about regulation. Regulation is looking at the way the technology is developed. But we need to think about this as an ecosystem that needs to be nurtured, that needs investments, that needs incentives, that needs institutions, and that needs infrastructure. And therefore it’s not only the technological conversation about what do we do with AI, but what kind of an economy we want, one that is really productive, that delivers for people with AI, and for that you need government intervention. And let me tell you what is very interesting: we usually tend to think that the private sector is an innovative force and the government is a brake.
In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation, in DARPA, in the creation of the Internet, all the foundational technologies that we are seeing now, was financed at some point by basic research paid for by the government of the U.S. And many countries fill that space, and that’s why it’s so important that we invest in research, because it cannot be that the research is done only by the private sector. And it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table, and then it needs to be shared, which is not always the case when you have private sector innovation. So I will also contest this way of framing the issues in terms of the government only creating market distortions, because at the end it’s about how the government can be effective in addressing the market distortions that we see emerge many times. In this case, I like to see AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly, and now it’s an oligopoly. At the end it’s very concentrated. So there are market distortions now that need to be addressed by government policies. Again, there is a wide range of things that need to be done to ensure that the main distortion that can occur nationally and globally, which is that this becomes a story of a lucky few, is prevented.
Thanks, Gabriela. And you know, it’s interesting you mention that, because at least in India, whenever we speak about public-private partnerships, it’s all about how we are moving from a culture of competition to cooperation, to really working together so that the markets stay healthy. With that, we move over to you, Amanda. So, Amanda, you know… Eva. Sorry, my mistake. So, Eva, let’s talk about the global AI governance strategy at Wipro, right? Many organizations are developing a responsible AI framework. How do we move beyond policy statements to measurable accountability, specifically when we have to do that at scale?
Thank you very much, and it’s great to be here, and thanks to all of you for joining. So, as I say often, I have the best job in the world, which is basically to translate a lot of the things that have been discussed over the last few days into practice. So that basically means: we’ve heard democratisation, we’ve heard inclusion, we’ve heard how it’s important that AI is inclusive, and by inclusivity it’s not just about access, as was said, but it’s also making sure that many get the opportunity to participate in the design of this technology, but also in the decisions around what we are producing and who is going to be benefiting from that.
I think, in a lot of our work, what happened over the last few years when generative AI came about is that a lot of organisations had to face something quite dramatic, if you think about it. Because before then, AI was very much for engineers and scientists to work with, people who knew about machine learning, people who knew about AI. Then what happened a few years ago is that generative AI came and everybody got access to it. Do you remember how companies started to scramble? Who’s got access? Do we let our employees access these systems? Do we create our own private instance? How do we navigate the fact that we want people to play with these tools against the fact that we have to be safe and secure as an enterprise? And then things evolved, and, if you know how the debate around governance started, a lot of organisations started to set up governance boards, and they started to set up ethics boards and all of it. And I think we realised at some point, and I took on the challenge of AI governance from a privacy standpoint, as many people in organisations did, not only because a lot of AI harms are actually privacy harms but also because privacy professionals knew about risk management. And then we realised that actually governance of AI is much more than that. It’s much more than risk management, it’s much more than compliance. We realised, and I think this summit shows that really clearly, that AI governance is really about a strategic capability that an organisation must have to create long-term value. What does that mean?
It means that you have to do two things. First, you have to look at what you want to deploy or develop, and that is where you need to embed privacy, security, legal protections, resilience into the products that you’re working on. That is not an easy one. It requires knowledge. It requires investment in privacy-enhancing and security-enhancing technologies. It requires what, for example, India is promoting, which is a techno-legal approach. It’s not just about the law but it’s also about how you translate the law into technical tools. So you have to do all of that, and then you have to look at what happens once the product is in production.
So how do you monitor it once it’s out in production? How do you make sure that if, for example, you’re using AI to hire and fire, as sometimes happens, you have tools to pull the plug if something goes wrong? Now we are into the realm of agentic AI. If you’re interested in this, I’ve just published an article on the World Economic Forum on a subject I’m really fascinated by, which is what is the design for trust in agentic AI? So, for example, governance means that you do design these agents, but you give people, according to security standards, but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.
And then you make sure that you protect from cascading hallucinations, from model drifting, all of that. So governance, to me, is very much about the capability that organizations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product. And one element which to me is very important is to make sure that companies bring their employees with them. That is a very crucial part of governance, because the work is going to change. People are going to change the way that they work. And, importantly, the people who are going to know best how to use AI are the people working in a company.
This is why I’ve seen successful companies developing a lot of use cases based on their activity and asking their employees, how should we innovate this? This is a fundamental part of governance, I believe, because it brings people with us. So, a very encompassing approach to governance. I think we are evolving and changing how we see it, but certainly it’s become very clear over the last few years, and especially with things like this summit talking about impact, that it’s way beyond compliance and it’s way more than risk management.
Thanks, Eva. And that makes me curious enough to ask you a very quick question. So do you still feel we’re underestimating the risks? Because you spoke about AI trust.
No, I think let’s be honest here. Over the last few years we’ve seen amazing benefits coming from AI, right? Beautiful stuff, fantastic. Every day there is a piece of news that makes us hopeful that we can improve our well-being and we can feel better in the world we live in. But at the same time we’ve seen the risks too, and we’ve got to be honest that looking at the success without looking at the risks is very naive. We can’t, because we’re not going to be able to deploy AI successfully if we don’t look at the risks. We’ve seen disinformation. We see deepfakes. We have seen AI hard-wiring existing inequalities into decision-making around people’s futures, rights, and livelihoods.
That’s not okay. So we’re not underestimating the risks. But we can’t approach governance purely as risk management and control. We have to shift our approach and do AI for good and change the way that we look at this. So we have to engineer fairness into the systems that we create. We have to engineer inclusivity into the systems that we create. And, of course, we have to manage the risk. But the mindset has to really shift.
Thanks. And that gets me back to Dean. So my question to you is, inclusion at the national level often intersects with compute access and research infrastructure. You spoke about public-private partnerships, you spoke about trust in emerging technologies like artificial intelligence and maybe even quantum going ahead. Should governments treat compute as critical infrastructure?
Yeah, I think they should. The data centers that power frontier AI systems are going to be, you know, like ports or railroads. They’re going to be critical infrastructure of the future. I believe that’s true. Prior to my current role, I worked in the Trump administration in the White House Office of Science and Technology Policy. And in particular, I was one of the people that shaped the administration’s AI action plan and AI export program, which my former boss, Michael Kratios, was just here talking about and announcing some next steps on. I was really excited to see that. One of the key messages of that, and I feel this is maybe a communications failure on our part, is this.
You know, the United States government has publicly said, the president has come out and made it a flagship of his AI policy, that we intend to subsidize the development of AI data centers in the global south. That is a policy of the United States under this administration. And we don’t have the interest in exercising control over the technology in the way that I think the prior administration did in some ways. We don’t want to control other countries’ use in the same way that the prior administration did. So I do think you should think of it as critical infrastructure. And I think that you should think of the United States as a partner in the construction of that.
And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.
Thank you. So, you know, it’s been interesting because whenever we speak about AI at scale, when we talk about taking AI to every single person across the planet, there are always three vectors we look at: mindset, skill sets, and tool sets. You just spoke about tool sets, which is extremely relevant. And that takes my question to you, Gabriela. When we talk about mindset, should inclusion be framed primarily as an ethical imperative or a competitive strategy or even both? What’s your take on that?
I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness. Who from the public believes that? Can we have a show of hands? That being inclusive might hinder competitiveness? Investing in competitiveness might go against inclusiveness? I’m an economist, and I think in this area we really need to think. We need to think about economic inclusiveness, because if we just think about social policies that might be needed when some people are left behind, and therefore we need to invest in communities or in infrastructure or in people, kids that are in deep need of education, those things are very important.
But more importantly, we need to consider how we foster market economies that are inclusive, and that’s the core issue here. And I can tell you, because I have been looking at the question of inequalities. Actually, I’m now co-chairing the task force on inequalities financial disclosure. And what we have seen is that when you have market concentration, productivity flattens. And what happened here, and we saw it in the OECD report that we did some years ago, is that you have concentration at the top like the one we are seeing now, which is companies having the whole concentration of compute capacities, the capacity to sort out skills and attract the skills, the capacity of having the financial means to invest.
What happens is that the diffusion machine, which is this very important element that trickles down the innovative developments into a broader set of users and benefits, is broken. Now the diffusion machine is broken. And therefore we need to see how we ensure that the diffusion is faster. And to do that, of course, I agree with Ivana. The question is how do we ensure that we create the capacities of people and economies that are lagging behind. But we also need to see how we diminish market dominance. And I know that there are many other considerations: geopolitics, competition matters, trade secrets matter, all these things matter. But for me, competitiveness and inclusiveness have to do with creating the highest well-being for people, and that’s the outcome, and that’s where ethics, competitiveness, all of these narratives come together, because at the end, what are we looking at?
What we see is that in many countries 50%, 60%, even 70% of wealth is owned by just the top 10% of income groups, and that is not sustainable. In Europe and in Mexico, I was asking where to put my children, because I need good schools, and they told me: choose the right neighborhood. That is not possible. So I feel there needs to be a set of policies, and someone to ensure a level playing field: who uses the tax system, the incentive systems, or the investment systems to ensure that people are not left behind, and who addresses anti-competitive practices? I pay my taxes so that governments deliver on their promises. So I think this is super important. And I feel, for example, that what India has managed with the digital registry is remarkable. I was with Mr. Murthy when he presented his plan so many years ago. I could never believe you were going to register 100 million people every month; it was just like, you're crazy, that will never happen. And now all of India has digital identification, it's just amazing, and then you build the financial layer on top of it.
Thanks, Gabriela. With this vision that the world of tomorrow with AI would certainly, and hopefully, be a better world than it is today, I have a common question for all the panelists. What do you see as the most significant blind spot in recent AI discourse, keeping in mind all the conversations you have had this week and even before that? Dean, maybe we can go with you first.
Yeah. So in terms of blind spots, maybe the most important thing I could say here concerns a notion I've heard repeated a lot in the conversations I've had this week: that the frontier models, the best AI systems, are not necessary, and that you can find good-enough models that are cheaper to run. In some cases that will be true. But I would point to the very significant blind spot there. I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor. That is a very serious goal, and the United States is currently spending on it. It's not a joke, right?
That’s not a joke. That’s not hype. That’s not crypto. We’re spending a trillion dollars this year on that. That’s the plan. We’re going to do it. It’s going to happen, right? And so the capabilities of those systems, and the way that will change how the world works, mean that ambitious people will be able to do an unbelievably broad range of things. I think this could be an incredible opportunity for countries in the Global South, and really for everyone in the world, to participate in building the future together. And this is not about rejecting frontier models out of some belief that that preserves sovereignty, because there are existing use cases that can be done with cheaper models. Think of frontier AI as being useful for things we don’t even have words for today, concepts that you will invent, that we will all invent together. That’s the future we’re building. It’s an easy thing to miss, and I think missing it is basically missing the ball game.
That’s very interesting, thank you for sharing that. Gabriela, what about you?
I would say education, education, education. That is really what keeps me awake at night. Why aren’t we massively upgrading education pedagogy? Why aren’t we changing the way we do school? Why don’t we invest in our teachers, so they understand how these technologies can help improve student outcomes and at the same time make their lives easier? I see a lot of teachers complaining about all the administrative work they have to do, which leaves no space for them to invest in quality changes with their students. I’m not seeing that happening, and we need that pipeline. If the future that Dean is projecting is going to arrive, we need people to be very well equipped.
And where do we get that equipment? I am fine with investing in workers already in the labor market; that is very important, and we need to upgrade their skills too. But the school system needs to be upgraded, and I haven’t seen that really happening anywhere. This is a challenge for North, South, East, and West, and I invite everyone to confront it.
Okay, I love the fact that you brought up education and skilling, because building AI readiness has become essential for national competitiveness, no matter which market we are talking about. And just to share an instance: India was one of the few countries where AI was introduced as a school subject back in 2019, even before the COVID era, so students could learn AI as they would learn biology or physics. But yes, that is a major challenge we are trying to work on. Ivana, over to you.
I was very impressed yesterday when your prime minister spoke, and by one thing he said in particular: develop here and serve humanity. That, to me, made a very strong point, and it speaks to something that has been missing so far. He said something very important: that AI needs to be used for inclusion, for economic well-being; inclusion, as we said, as access, but also as participation for many; as reduction of the gaps between areas of society and geographies across India; and inclusion as creating models that respect your languages, your dialects, and the ethical norms that bind this country together, because the AI that we have now often does not reflect the diversity of the world. Following on from this, one thing that has been good has been to see many leaders coming from all over the world.
One thing I have always raised is how we haven’t aligned on what the red lines of AI are. Are there things that we as a society, or as a world, are never going to do, or don’t want to do? We have seen appeals over recent years, and we have had massive debates around the ethics of AI, in different ways in the US, in Europe, everywhere. But AI is far more than technology: AI’s power is geopolitics, it is earth, cables, sea, so much more.
So I think one of the things we are probably overlooking is whether we, as a world, will be able to come together, set some red lines, and say: well, actually, we’re not going to go there.
Thank you. So I just want to take a moment to thank the panelists and maybe I can ask Dean for you to sum it up.
Well, I think there are a lot of different things. The subject of AI governance is so difficult because it is so capacious, right? It is such an enormous topic. But look, I think we have a very real infrastructure development challenge ahead of us. We have a huge complex of new types of institutions, and of old institutions that are going to change and evolve in various ways. And there is all sorts of interlocking work to do that is going to be critical for the governance of AI, both for everyday types of harms and for catastrophic things that feel futuristic but that I think are going to be real parts of our lives in the pretty near future.
And then another thing I would double-click on is this need for competition, which I agree with. One of the things I find exciting about AI is that the price per token of models drops quite quickly, so there are a lot of good competitive dynamics here. There are also centralizing tendencies, and working together to figure out how to counter those tendencies is going to be extremely important. The concentration of power in AI is, in the long term, going to be one of the most important parts of the political economy of this topic.
So yeah, I think that’s how I see it.
Thank you, that’s fantastic. We spoke about trade-offs, about potential solutions, and about building AI readiness for national competitiveness. Thank you so much to all the panelists; it was lovely having this conversation.
Thanks to the moderator. Thank you.
Dean W. Ball
Speech speed
169 words per minute
Speech length
1310 words
Speech time
464 seconds
Presumption of existing law sufficiency
Explanation
Dean argues that the default stance should be that current legal frameworks are adequate to regulate AI, and that anyone seeking new regulation must demonstrate why existing law fails. This places the burden of proof on regulators rather than on innovators.
Evidence
“We should presume that existing law is sufficient and that there is some sort of good solution.” [2]. “It should be, the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.” [1].
Major discussion point
Regulatory approach to AI
Topics
The enabling environment for digital development | Artificial intelligence
Proactive governance for tail events
Explanation
Dean cautions that rare but high‑impact AI failures—so‑called tail events—require forward‑looking governance measures because they can have catastrophic consequences despite low probability.
Evidence
“Now, I think the area where you might need proactive governance first is, at least in my view, is really this domain of tail events, potential events that could be very serious, have very serious consequences that are relatively unlikely.” [16]. “And I think AI might have some tail, you know, sort of catastrophic type risks associated with it.” [18].
Major discussion point
Regulatory approach to AI
Topics
Artificial intelligence | Building confidence and security in the use of ICTs
Compute as critical national infrastructure
Explanation
Dean proposes that AI data centers be treated like ports or railroads—critical infrastructure that governments can own or partner on, with the United States acting as a subsidiser for deployments in the Global South.
Evidence
“So I do think you should think of it as critical infrastructure.” [37]. “And I think that owning infrastructure of this kind is an asset that states and regions can use for years to come.” [38]. “The data centers that power frontier AI systems are going to be a part, you know, like ports or railroads.” [71]. “And I think that you should think of the United States as a partner in the construction of that.” [72]. “has come out and made as a flagship of his AI policy that we intend to subsidize the development of AI data centers in the global south.” [45].
Major discussion point
Compute as critical infrastructure
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | Information and communication technologies for development
Blind spot: underestimating frontier AI models
Explanation
Dean warns that overlooking the strategic importance of frontier AI models—those that may enable capabilities we cannot yet imagine—represents a major blind spot in current AI discourse.
Evidence
“And this is not about rejecting frontier models out of some belief that that preserves sovereignty, because there are existing use cases that can be done with cheaper models. Think of frontier AI as being useful for things we don’t even have words for today, concepts that we will all invent together. That’s the future we’re building. It’s an easy thing to miss, and missing it is basically missing the ball game.” [81]. “So in terms of blind spots, I think maybe the most important thing I could possibly say here would be one thing I’ve heard repeated a lot in the conversations I’ve had this week is this notion that the frontier models, the best AI systems are not… necessary.” [82].
Major discussion point
Blind spots in current AI discourse
Topics
Artificial intelligence
Gabriela Ramos
Speech speed
143 words per minute
Speech length
1299 words
Speech time
544 seconds
Government should fund and nurture AI research ecosystem
Explanation
Gabriela stresses that public investment is essential to keep AI research open, shared, and inclusive, arguing that government‑funded basic research has historically driven major digital breakthroughs.
Evidence
“And then it’s also true that when the government gets into the research, it’s open research, because it needs to bring everybody around the table, and then it needs to be shared, which is not always the case when you have private sector innovation.” [33]. “The U.S. was the place where the massive investment in innovation in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now were financed at some point by basic research that was paid by the government of the U.S.” [34]. “But we need to think about this as an ecosystem that needs to be nurtured, that needs investments.” [41].
Major discussion point
Government role in AI innovation ecosystem & market concentration
Topics
The enabling environment for digital development | Artificial intelligence | Social and economic development
AI as a natural monopoly; curb market concentration
Explanation
Gabriela characterises AI technologies as natural monopolies that tend toward oligopolies, calling for policies that address market distortions, prevent concentration, and ensure broader diffusion of benefits.
Evidence
“I like to see the AI technologies as natural monopolies: somebody invented something, somebody laid the whole network to operate it, and then it was a monopoly, and now it’s an oligopoly. At the end it’s very concentrated, so there are market distortions now that need to be addressed by government policies.” [33]. “The concentration of power in AI and that issue in the long term is going to be, I think, one of the most important parts of the political economy of this topic.” [43]. “And what we have seen is that when you have market concentration, productivity flattens.” [48]. “But we also need to see how do we diminish market dominance.” [50].
Major discussion point
Government role in AI innovation ecosystem & market concentration
Topics
The digital economy | Artificial intelligence | The enabling environment for digital development
Inclusion as ethical imperative and competitive strategy
Explanation
Gabriela argues that fostering economic inclusiveness and reducing concentration are both moral duties and drivers of sustainable competitiveness, linking inclusive policies to broader well‑being.
Evidence
“We need to think about economic inclusiveness.” [39]. “I really like the way this question is framed, my dear, because I’m sure that people think that going for inclusive policies might hinder competitiveness.” [53]. “But more importantly, we need to consider how do we foster market economies that are inclusive and that’s the core issue here.” [76]. “But for me, competitiveness, inclusiveness has to do with creating the highest well‑being for people and that’s the outcome and that’s where ethics competitiveness, all of these narratives collide together because at the end what are we looking at?” [79].
Major discussion point
Inclusion versus competitiveness
Topics
Closing all digital divides | The digital economy | Social and economic development
Blind spot: education system modernization
Explanation
Gabriela highlights the failure to upgrade school curricula, train teachers, and build AI skill pipelines as a critical blind spot that threatens inclusive AI adoption.
Evidence
“But the school system needs to be upgraded.” [40]. “Why don’t we invest in our teachers for them to understand how these technologies can help improve student outcomes and at the same time make their life easier?” [87]. “I would say education, education, education. That is really what keeps me awake at night. Why aren’t we massively upgrading education pedagogy? Why aren’t we changing the way we do school?” [88]. “I see a lot of teachers complaining about all the administrative work that they need to do, which leaves no space for them to invest in quality changes with their students.” [89].
Major discussion point
Blind spots in current AI discourse
Topics
Capacity development | Social and economic development | Closing all digital divides
Ivana Bartoletti
Speech speed
143 words per minute
Speech length
1426 words
Speech time
594 seconds
AI governance as a strategic capability
Explanation
Ivana describes AI governance as a core organizational capability that goes beyond compliance, embedding privacy, security, and techno‑legal tools while fostering trust among employees and users.
Evidence
“AI governance is really about a strategic capability that an organization must have to create long‑term value. What does that mean?” [30]. “the capability that organizations have to think laterally about AI, which means impact, design choices, the trust stack that enables people and employees to trust the product.” [57]. “governance means that you do design these agents, but you give people, according to security standards, but also according to their own preferences, the right to intervene when they don’t want the machine to make a decision in an autonomous way.” [56].
Major discussion point
Organizational AI governance and trust
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Building confidence and security in the use of ICTs
Shift from risk management to AI for good
Explanation
Ivana calls for moving beyond pure risk‑management frameworks toward an “AI for good” approach that deliberately engineers fairness and inclusivity into AI systems.
Evidence
“We have to shift our approach and do AI for good and change the way that we look into this.” [44]. “We have to engineer inclusivity into the systems that we create.” [64]. “So we have to engineer fairness into the systems that we create.” [65].
Major discussion point
Organizational AI governance and trust
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | Closing all digital divides
Blind spot: lack of global AI red‑line consensus
Explanation
Ivana points out that the AI community has not yet agreed on common “red lines,” a missing global consensus that hampers coordinated governance and ethical standards.
Evidence
“One thing that I’ve always supported is how we haven’t aligned, I’ve always thought this, how we haven’t aligned on what the red lines of AI are.” [93]. “I think one of the things that probably we are overlooking is how we, and if we will be, as a world able to come together and have some red lines and say, well actually, we’re not going to go” [94].
Major discussion point
Blind spots in current AI discourse
Topics
Human rights and the ethical dimensions of the information society | Artificial intelligence | Building confidence and security in the use of ICTs
Moderator
Speech speed
158 words per minute
Speech length
938 words
Speech time
355 seconds
Policy frameworks to foster open innovation while limiting market distortion
Explanation
The moderator emphasizes the need for policy designs that encourage open AI innovation and public‑private partnerships, but also guard against market distortions that could arise from unchecked concentration.
Evidence
“So should, you know, how can governments foster open innovation, assuming to whatever Dean said, while minimizing the risk of market distortions?” [32]. “should governments treat compute as critical infrastructure?” [35].
Major discussion point
Facilitating discussion on AI governance
Topics
The enabling environment for digital development | The digital economy
Agreements
Agreement points
AI governance requires comprehensive approaches beyond just regulation
Speakers
– Dean W. Ball
– Gabriela Ramos
– Ivana Bartoletti
Arguments
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
Summary
All speakers agree that AI governance cannot be addressed through regulation alone but requires a multifaceted approach involving existing legal frameworks, strategic investments, institutions, and comprehensive organizational capabilities
Topics
Artificial intelligence | The enabling environment for digital development | Human rights and the ethical dimensions of the information society
Government has a crucial role in AI infrastructure and research development
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Summary
Both speakers recognize the essential role of government in supporting AI infrastructure development and research, with Ball advocating for treating AI data centers as critical infrastructure and Ramos emphasizing government’s historical success in foundational research
Topics
Artificial intelligence | The enabling environment for digital development | Information and communication technologies for development
AI must be developed to serve humanity inclusively while respecting diversity
Speakers
– Gabriela Ramos
– Ivana Bartoletti
– Moderator
Arguments
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
AI development should focus on ‘develop here and serve humanity’ with models that respect diverse languages, dialects and ethical norms
AI inclusion extends far beyond equitable representation in datasets to encompass access to compute, standards, policy frameworks, and regulatory clarity
Summary
All three agree that AI development must prioritize inclusive approaches that serve humanity broadly, respect cultural diversity, and ensure equitable access rather than concentrating benefits among a few
Topics
Artificial intelligence | Closing all digital divides | Human rights and the ethical dimensions of the information society
Education and capacity building are critical for AI readiness
Speakers
– Gabriela Ramos
– Ivana Bartoletti
– Moderator
Arguments
Massive upgrading of education pedagogy and teacher training for AI integration is critically needed but not happening anywhere
Successful AI governance must bring employees along in the transformation and leverage their expertise for developing use cases
Building AI readiness requires addressing three critical vectors: mindset, skill sets, and tool sets
Summary
All speakers emphasize the urgent need for comprehensive education reform and capacity building to prepare people for an AI-driven future, from school systems to workplace transformation
Topics
Capacity development | Social and economic development | Artificial intelligence
Similar viewpoints
Both speakers recognize that current AI development creates problematic concentrations of power and that governance must actively address inequalities rather than just managing risks
Speakers
– Gabriela Ramos
– Ivana Bartoletti
Arguments
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
Topics
Artificial intelligence | The digital economy | Human rights and the ethical dimensions of the information society
Both speakers believe in the importance of ambitious AI development and government support for advancing frontier capabilities, though from different perspectives
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Topics
Artificial intelligence | The enabling environment for digital development | Information and communication technologies for development
Unexpected consensus
Government’s positive role in AI development
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Explanation
Despite Ball’s generally pro-market stance and emphasis on existing legal frameworks, both he and Ramos agree on the essential role of government in AI infrastructure and research, showing unexpected alignment between different ideological approaches
Topics
Artificial intelligence | The enabling environment for digital development | Financial mechanisms
Need for proactive AI governance beyond pure market solutions
Speakers
– Dean W. Ball
– Ivana Bartoletti
Arguments
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
Explanation
Despite Ball’s preference for existing legal frameworks, both speakers acknowledge the need for proactive governance approaches, suggesting convergence between regulatory skepticism and practical governance needs
Topics
Artificial intelligence | Building confidence and security in the use of ICTs | The enabling environment for digital development
Overall assessment
Summary
The speakers demonstrated remarkable consensus on key AI governance principles despite coming from different backgrounds and perspectives. Main areas of agreement include: the need for comprehensive governance approaches beyond regulation alone, government’s crucial role in AI infrastructure and research, the importance of inclusive AI development that serves humanity broadly, and the critical need for education and capacity building.
Consensus level
High level of consensus with significant implications for AI policy development. The agreement across diverse perspectives suggests these principles could form the foundation for effective AI governance frameworks that balance innovation, inclusion, and responsible development. The consensus particularly strengthens the case for government investment in AI infrastructure and education while maintaining focus on inclusive development approaches.
Differences
Different viewpoints
Role of government in AI regulation and market intervention
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
Summary
Ball advocates for minimal new regulation, preferring existing legal frameworks with burden of proof on those wanting new rules. Ramos argues for comprehensive government intervention including investments, incentives, and institutions, challenging the view that government creates market distortions.
Topics
Artificial intelligence | The enabling environment for digital development
Assessment of current AI market structure and need for intervention
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
Summary
While Ball sees AI infrastructure as critical that should be supported (mentioning US subsidies for global south), Ramos views current AI market concentration as problematic monopolistic behavior requiring active government intervention to prevent market distortions.
Topics
Artificial intelligence | The digital economy | The enabling environment for digital development
Importance and accessibility of frontier AI models
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
Summary
Ball emphasizes the critical importance of frontier AI models and warns against dismissing them for cheaper alternatives, viewing this as missing transformative opportunities. Ramos focuses on how market concentration around these advanced capabilities breaks the diffusion mechanism that would spread benefits more broadly.
Topics
Artificial intelligence | Closing all digital divides | The digital economy
Unexpected differences
Prioritization of frontier AI models versus broader access
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
Explanation
This disagreement is unexpected because both speakers seem to want AI benefits to reach more people globally, but Ball argues this requires access to the most advanced models while Ramos argues the concentration around these models is precisely what prevents broader benefits. They have fundamentally different theories about how AI benefits diffuse through society.
Topics
Artificial intelligence | Closing all digital divides | The digital economy
Government’s historical and future role in AI innovation
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Existing legal frameworks should be presumed sufficient until proven otherwise, with burden of proof on those advocating for new regulations
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Explanation
Despite Ball’s background in government AI policy and Ramos’s economist perspective, they have opposing views on government’s role. Ball advocates minimal intervention despite working on government AI initiatives, while Ramos argues for extensive government involvement based on historical success stories like DARPA and the Internet.
Topics
Artificial intelligence | The enabling environment for digital development | Financial mechanisms
Overall assessment
Summary
The main disagreements center on the appropriate level of government intervention in AI development and markets, the prioritization of frontier AI capabilities versus broader access, and different theories about how AI benefits should diffuse through society. While all speakers agree AI governance needs comprehensive approaches, they fundamentally disagree on whether existing frameworks are sufficient or whether extensive new interventions are needed.
Disagreement level
Moderate to high disagreement on fundamental approaches to AI governance, with significant implications for policy direction. The disagreements reflect deeper philosophical differences about market dynamics, government roles, and pathways to inclusive AI development that could lead to very different policy outcomes.
Partial agreements
All speakers agree that AI governance requires comprehensive approaches beyond simple regulation, but they disagree on the extent of government intervention needed. Ball prefers existing legal frameworks with minimal new regulation, Ramos wants extensive government investment and intervention, while Bartoletti focuses on organizational strategic capabilities.
Speakers
– Dean W. Ball
– Gabriela Ramos
– Ivana Bartoletti
Arguments
Data centers powering frontier AI systems should be treated as critical infrastructure like ports or railroads
AI governance requires a broad portfolio of policy interventions beyond just regulations, including investments, incentives, institutions and infrastructure
AI governance is a strategic capability for creating long-term value, requiring embedding of privacy, security and legal protections into products
Topics
Artificial intelligence | The enabling environment for digital development
Both agree that inclusion should be actively engineered into AI systems rather than treated as secondary to competitiveness, but Ramos focuses on market-level interventions to prevent concentration while Bartoletti emphasizes organizational-level design choices to embed fairness.
Speakers
– Gabriela Ramos
– Ivana Bartoletti
Arguments
Inclusion and competitiveness are not opposing forces – market concentration actually flattens productivity and breaks the innovation diffusion machine
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
Topics
Artificial intelligence | Human rights and the ethical dimensions of the information society | The digital economy
Both recognize education and capacity building as critical for AI readiness, but Ramos emphasizes the urgent need for systemic education reform that isn’t happening globally, while the Moderator presents a framework approach and highlights India’s proactive educational initiatives.
Speakers
– Gabriela Ramos
– Moderator
Arguments
Massive upgrading of education pedagogy and teacher training for AI integration is critically needed but not happening anywhere
Building AI readiness requires addressing three critical vectors: mindset, skill sets, and tool sets
Topics
Capacity development | Artificial intelligence | Social and economic development
Similar viewpoints
Both speakers recognize that current AI development creates problematic concentrations of power and that governance must actively address inequalities rather than just managing risks
Speakers
– Gabriela Ramos
– Ivana Bartoletti
Arguments
Current AI market shows concentration resembling natural monopolies that create distortions requiring government intervention
AI governance must shift from pure risk management to engineering fairness and inclusivity into systems while managing risks
Topics
Artificial intelligence | The digital economy | Human rights and the ethical dimensions of the information society
Both speakers believe in the importance of ambitious AI development and government support for advancing frontier capabilities, though from different perspectives
Speakers
– Dean W. Ball
– Gabriela Ramos
Arguments
Rejecting frontier models in favor of ‘good enough’ cheaper models is a significant blind spot that misses transformative potential
Government has historically been crucial for foundational AI research and should continue investing in open research that benefits everyone
Topics
Artificial intelligence | The enabling environment for digital development | Information and communication technologies for development
Takeaways
Key takeaways
AI governance requires a comprehensive ecosystem approach involving regulations, investments, incentives, institutions, and infrastructure rather than just regulatory frameworks
Existing legal frameworks should be presumed sufficient until proven otherwise, with new regulations needed primarily for tail events and catastrophic risks
Inclusion and competitiveness are complementary rather than opposing forces – market concentration actually reduces productivity and breaks innovation diffusion
AI governance must evolve from pure risk management to strategic capability building that engineers fairness and inclusivity into systems
Data centers powering frontier AI systems should be treated as critical infrastructure similar to ports or railroads
Government investment in AI research and infrastructure is crucial, as historically demonstrated by foundational technologies like the internet
Frontier AI capabilities represent building systems smarter than humans at cognitive labor, opening transformative possibilities beyond current imagination
Massive education system upgrades and teacher training for AI integration are critically needed but not happening globally
AI development should focus on serving humanity while respecting diverse languages, dialects, and ethical norms
Resolutions and action items
Treat compute infrastructure as critical national infrastructure requiring government investment and protection
Shift AI governance approach from risk management to strategic capability building that creates long-term value
Invest heavily in upgrading education pedagogy and teacher training for AI integration
Develop transparency laws for AI systems that pose potential catastrophic risks
Engineer fairness and inclusivity directly into AI systems during development rather than addressing issues post-deployment
Leverage employee expertise to develop practical AI use cases within organizations
Focus on faster innovation diffusion to prevent market concentration from stifling productivity
Unresolved issues
How to establish global red lines for AI development that all nations can agree upon
Specific mechanisms for preventing AI market concentration while maintaining innovation incentives
Concrete strategies for upgrading education systems globally to prepare for AI transformation
How to balance national competitiveness with international cooperation in AI development
Methods for ensuring AI models respect diverse cultural values and languages at scale
Practical implementation of techno-legal approaches that translate legal requirements into technical tools
How to manage the transition for workers whose jobs will be transformed by AI
Specific governance frameworks for agentic AI systems that can make autonomous decisions
Suggested compromises
Balance proactive governance for catastrophic AI risks while relying on existing legal frameworks for routine applications
Combine private sector innovation with government investment in open research that benefits everyone
Develop AI governance that manages risks while engineering positive outcomes rather than purely controlling negative ones
Create public-private partnerships that move from competition to cooperation while maintaining healthy markets
Allow frontier AI development while ensuring broader access through infrastructure investment and capability building
Implement transparency requirements for high-risk AI systems while avoiding over-regulation of beneficial applications
Thought provoking comments
We should presume that existing law is sufficient and that there is some sort of good solution. And then the burden of proof should be on the person who wants the regulation to show this is why existing law doesn’t work.
Speaker
Dean W. Ball
Reason
This comment fundamentally challenges the prevailing assumption in AI governance discussions that new regulations are automatically needed. It flips the burden of proof and suggests a more conservative, evidence-based approach to regulation that builds on existing legal frameworks rather than creating entirely new ones.
Impact
This comment set the tone for the entire discussion by establishing a counterintuitive starting point. It influenced subsequent speakers to address the role of government intervention more thoughtfully, with Gabriela directly building on this by discussing the ‘broad portfolio of policy interventions’ beyond just regulation.
In the U.S. that was not the case. The U.S. was the place where the massive investment in innovation in DARPA, in the creation of the Internet, all the foundational issues that we are seeing now were financed at some point by basic research that was paid by the government… I like to see the AI technologies as natural monopolies… there are market distortions now that need to be addressed by government policies
Speaker
Gabriela Ramos
Reason
This comment powerfully reframes the government’s role from a potential impediment to innovation to a historical catalyst for it. By citing concrete examples like DARPA and the Internet, and characterizing AI as creating ‘natural monopolies,’ she challenges the binary thinking about public vs. private sector roles.
Impact
This comment shifted the discussion from whether government should intervene to how it should intervene effectively. It introduced historical context that grounded the theoretical debate in practical examples, and introduced the critical concept of market concentration as a key challenge requiring government response.
AI governance is really about a strategic capability that an organization must have to create long-term value… governance of AI is much more than risk management, it’s much more than compliance… we realized that AI governance is really about a strategic capability
Speaker
Ivana Bartoletti
Reason
This comment fundamentally redefines AI governance from a defensive, compliance-focused activity to a proactive, value-creating strategic function. It moves beyond the typical risk-mitigation framing to position governance as essential for business success and innovation.
Impact
This redefinition elevated the entire conversation about governance from a necessary burden to a competitive advantage. It influenced the moderator to ask about underestimating risks, leading to a more nuanced discussion about balancing innovation with responsibility.
When you have market concentration, productivity flattens… the diffusion machine, which is this very important element that trickles down the innovative developments into a broader set of users and benefits, is broken.
Speaker
Gabriela Ramos
Reason
This comment introduces a sophisticated economic concept – the ‘diffusion machine’ – that explains why concentration in AI isn’t just a fairness issue but an economic efficiency problem. It provides a compelling economic rationale for inclusion beyond moral arguments.
Impact
This comment provided the economic foundation for arguing that inclusion and competitiveness are complementary rather than competing goals. It influenced the moderator’s follow-up question about framing inclusion as ethical imperative vs. competitive strategy, leading to a deeper exploration of this relationship.
I believe that what we are doing is building systems that are going to be smarter than humans at all cognitive labor… We’re spending a trillion dollars this year on that. That’s the plan… think of frontier AI as being useful for stuff that we don’t even have words for today, right? Concepts that you will invent
Speaker
Dean W. Ball
Reason
This comment cuts through incremental thinking about AI improvements to articulate the truly transformative nature of what’s being built. The concrete figure of a trillion dollars and the concept of capabilities ‘we don’t have words for’ makes the abstract future tangible and urgent.
Impact
This comment served as a wake-up call that shifted the discussion from managing current AI applications to preparing for fundamentally different future capabilities. It influenced Gabriela’s subsequent emphasis on education reform as an urgent priority, recognizing that current educational systems are inadequate for this future.
Why aren’t we upgrading massively the education pedagogy? Why aren’t we changing the way we go to school? Why don’t we invest in our teachers… If the future that Dean is projecting is going to happen, is going to arrive, we need people to be very well equipped.
Speaker
Gabriela Ramos
Reason
This comment identifies a critical gap between the transformative AI future being discussed and the fundamental institutions (education) that need to prepare people for it. It’s particularly insightful because it connects the high-level AI governance discussion to practical, immediate policy needs.
Impact
This comment grounded the futuristic AI discussion in immediate, actionable policy needs. It prompted the moderator to share India’s experience with AI education, connecting the global discussion to specific national initiatives and demonstrating how abstract principles translate into concrete policies.
Overall assessment
These key comments fundamentally shaped the discussion by challenging conventional assumptions and reframing core concepts. Dean’s initial comment about presuming existing law sufficiency set a contrarian tone that encouraged deeper thinking throughout. Gabriela’s historical perspective on government’s role in innovation and her economic analysis of the ‘broken diffusion machine’ provided sophisticated frameworks for understanding the relationship between inclusion and competitiveness. Ivana’s redefinition of governance as strategic capability elevated the conversation beyond compliance thinking. The interplay between Dean’s vision of transformative AI capabilities and Gabriela’s urgent call for educational reform created a productive tension between future possibilities and present institutional needs. Together, these comments moved the discussion from surface-level policy debates to fundamental questions about the role of institutions, the nature of innovation, and the relationship between technological advancement and social equity.
Follow-up questions
How can existing bodies of law be effectively applied to AI governance?
Speaker
Dean W. Ball
Explanation
Dean emphasized the need to figure out how to apply existing legal frameworks like common law traditions to AI, rather than assuming new regulation is needed
What constitutes clear and demonstrated threat models for AI that would justify proactive governance?
Speaker
Dean W. Ball
Explanation
Dean mentioned the need for proactive governance in areas with clear threat models, particularly around catastrophic risks, but the specific criteria for what constitutes such threats needs clarification
How can governments effectively address market distortions created by AI technology concentration?
Speaker
Gabriela Ramos
Explanation
Gabriela highlighted that AI technologies function as natural monopolies leading to oligopolies, creating market distortions that need government intervention
How can the ‘diffusion machine’ for AI innovation be repaired to ensure broader benefits?
Speaker
Gabriela Ramos
Explanation
Gabriela identified that market concentration has broken the mechanism by which innovative developments trickle down to broader users, requiring research into solutions
What is the design for trust in agentic AI?
Speaker
Ivana Bartoletti
Explanation
Ivana mentioned publishing an article on this topic and highlighted the need to understand how to design autonomous AI agents that people can trust and intervene with when needed
How can organizations protect against cascading hallucinations and model drifting in production AI systems?
Speaker
Ivana Bartoletti
Explanation
These were mentioned as critical governance challenges that require further research and development of monitoring and intervention tools
How should education systems be massively upgraded to incorporate AI pedagogy?
Speaker
Gabriela Ramos
Explanation
Gabriela identified education reform as a critical blind spot, emphasizing the urgent need to upgrade teaching methods and teacher training for AI integration
What should be the global red lines for AI development and deployment?
Speaker
Ivana Bartoletti
Explanation
Ivana highlighted the lack of global alignment on what AI applications should never be pursued, suggesting need for international coordination on ethical boundaries
How can competitive dynamics in AI be maintained while preventing harmful concentration of power?
Speaker
Dean W. Ball
Explanation
Dean noted both competitive dynamics (dropping token prices) and centralizing tendencies in AI, requiring research into preventing excessive concentration
How can frontier AI capabilities be leveraged for applications we don’t yet have words for?
Speaker
Dean W. Ball
Explanation
Dean suggested that the most important applications of advanced AI systems may be for concepts and uses not yet invented, requiring exploration of these unknown possibilities
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.