Global Perspectives on Openness and Trust in AI

20 Feb 2026 15:00h - 16:00h


Session at a glance

Summary

This panel discussion at an AI summit explored the concept of “openness” in artificial intelligence development and governance, examining how the term extends beyond technical considerations to encompass broader values of democratization, participation, and sovereignty. The conversation featured experts from government, academia, and civil society discussing the political economy of AI and questions of power distribution in the technology sector.


Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, argued that true openness should be understood as a spectrum rather than a binary, emphasizing socio-technical characteristics that include accountability, transparency, and democratic participation. She noted that while the current U.S. administration claims to be deregulatory, it actually employs heavy-handed approaches through industrial policy, trade measures, and immigration controls that lack democratic input processes.


Anne Bouverot discussed the shifting geopolitical landscape, highlighting how middle powers like France, Canada, and India are forming coalitions to compete with the U.S.-China AI duopoly. She emphasized that open source serves as a tool for challengers to catch up and for countries to develop digital sovereignty. Astha Kapoor warned that for Global South countries, openness framed primarily as adoption could be dangerous, as it risks these nations becoming testing grounds for technologies developed elsewhere rather than addressing their structural needs.


Competition Commission of India Chairperson Ravneet Kaur outlined how competition policy can serve as a sovereignty tool, emphasizing the importance of access to data, compute infrastructure, and skills. She highlighted concerns about concentration in the AI value chain and the need for transparency and accountability in AI systems.


Karen Hao presented examples of truly participatory AI development, including community-driven projects that embody broader definitions of openness through inclusive governance and value-sharing. She challenged the Silicon Valley notion of scale, arguing for community-specific AI solutions rather than monopolistic distribution models. The discussion concluded with recognition that meaningful AI governance requires genuine community participation and resistance to corporate co-optation of inclusive language.


Key points

Major Discussion Points:

Redefining “Openness” in AI Beyond Technical Specifications: The panel emphasized that openness should encompass socio-technical characteristics including democracy, transparency, accountability, and community participation, rather than just technical aspects like open model weights. This broader definition connects to historical open source movements focused on shifting power and creating shared infrastructure.


Geopolitical Shifts and the Rise of “Middle Powers”: Discussion centered on how the traditional “open vs. closed” binary between democratic and authoritarian nations has evolved, with middle powers (countries like Canada, France, Germany, India, Japan) forming coalitions to compete against AI superpowers like the US and China through collaborative approaches and open source strategies.


AI Governance Through Non-Traditional Policy Tools: The conversation highlighted how AI policy is increasingly implemented through industrial policy, trade measures, export controls, and immigration rather than traditional regulation, raising concerns about reduced democratic input and public accountability in policymaking processes.


Global South Perspectives on AI Adoption vs. Sovereignty: Panelists discussed the tension between using openness to drive AI adoption in developing countries versus maintaining agency and control over AI development, warning against treating large markets merely as testing grounds for technologies built elsewhere.


Competition and Market Concentration Concerns: Focus on anti-competitive practices in AI markets including self-preferencing, exclusive agreements, and ecosystem lock-in, with emphasis on ensuring access to data, compute infrastructure, and distribution channels to prevent monopolization.


Overall Purpose:

The discussion aimed to broaden the understanding of “openness” in AI beyond technical definitions to encompass democratic participation, community engagement, and equitable value distribution. The panel sought to examine how different countries and regions can maintain sovereignty and agency in AI development while addressing power imbalances in the global AI ecosystem.


Overall Tone:

The discussion maintained a thoughtful, critical, and collaborative tone throughout. While panelists raised serious concerns about corporate concentration, democratic deficits, and global inequities in AI development, the conversation remained constructive and solution-oriented. The tone was notably inclusive and community-focused, with panelists building on each other’s insights and emphasizing the importance of grassroots participation in AI governance discussions.


Speakers

Speakers from the provided list:


Amba Kak – Moderator, AI Now Institute


Alondra Nelson – Former deputy director of the White House Office of Science and Technology Policy under President Biden


Anne Bouverot – French president’s special envoy for the AI Action Summit


Astha Kapoor – AAPTI Institute


Ravneet Kaur – Chairperson of the Competition Commission of India


Karen Hao – Author of “Empire of AI”


Audience member 1


Audience member 2 – Part of a group from Germany


Audience member 3


Audience member 4 – Intellectual property and business lawyer


Audience member 5


Audience member 6


Additional speakers:


None – all speakers mentioned in the transcript are included in the provided speaker list.


Full session report

This panel discussion at an AI summit brought together leading experts to examine the concept of “openness” in artificial intelligence development and governance. The conversation, moderated by Amba Kak of the AI Now Institute, featured perspectives from government, academia, and civil society. Kak noted this was “the only female-only panel at this symposium,” acknowledging this as something to work on for future iterations.


Redefining Openness: Beyond Technical Specifications

Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s foundational framework by arguing for understanding openness as a spectrum rather than a binary. Drawing from the historical open source movement, Nelson emphasized that true openness encompasses socio-technical characteristics including power redistribution, accountability, transparency, and democratic participation—not merely technical aspects like sharing model weights.


Nelson praised this conference as “the first AI conference I’ve been to that included the community in any considerable way,” highlighting how typical AI discussions exclude community voices. She challenged the current discourse where geopolitical stakes are used to justify abandoning the democratic dimensions of openness, noting that while some applications may legitimately require restrictions, the debate has been artificially polarized to treat all open source uses as equally dangerous.


The distinction between technical and socio-technical openness proved crucial throughout the discussion, with panelists examining how corporate actors have co-opted the language of openness while maintaining fundamentally closed development processes.


Geopolitical Dynamics and Middle Power Coalitions

Anne Bouverot, serving as the French president’s special envoy for the AI Action Summit, described how the traditional “open versus closed” binary has given way to more complex dynamics where middle powers are forming coalitions to compete against the US-China AI duopoly.


Bouverot highlighted how China’s strategic use of open source technologies, exemplified by DeepSeek’s emergence, demonstrates how openness can serve as a competitive tool for challengers. For middle powers—including countries like Canada, France, Germany, India, Japan, and Australia—she advocated for “coalitions of the willing” that could pool resources without requiring each nation to build complete AI stacks independently.


Global South Perspectives and the Adoption Trap

Astha Kapoor from the AAPTI Institute provided a crucial Global South perspective, warning that framing openness primarily as driving adoption risks positioning Global South countries as testing grounds for technologies developed elsewhere. She argued this “adoption trap” diverts attention from necessary structural investments toward technological solutions that may not address fundamental issues.


Kapoor referenced India’s experience with digital public infrastructure over the past 12-15 years, where initial innovation was followed by well-funded international actors dominating the market. She called for “openness as dialogue, as distribution of value” rather than mere adoption, emphasizing co-design and equal partnership over technology transfer.


Competition Policy as Democratic Governance

Ravneet Kaur, Chairperson of the Competition Commission of India, positioned competition policy as essential for maintaining sovereignty in the AI age. She outlined how anti-competitive practices from earlier digital markets are emerging in AI: concentration throughout the value chain, ecosystem lock-in, targeted price discrimination, exclusive partnerships, and opaque systems.


Kaur emphasized that access—to data, compute infrastructure, and skill sets—determines future market dynamics. Her framework focuses on transparency throughout the AI system lifecycle, including both technical transparency (understanding what technology does) and governance transparency (understanding how systems are governed). She noted that the Competition Commission released a market study on AI and competition in October 2025, available on their website.


Community-Driven Development Models

Karen Hao, author of “Empire of AI,” provided concrete examples of participatory AI development. She described the BigScience project as large-scale collaborative development involving researchers worldwide, and highlighted the Te Hiku Media speech recognition project in New Zealand, where a Māori radio station approached AI development through extensive community engagement.


The Te Hiku project began by asking the community whether they wanted the AI tool, conducted public education about AI development requirements, collected data with full community consent, and continuously returned to the community to determine applications. Both projects built upon Mozilla Foundation’s DeepSpeech model, creating collaborative development stacks.


Hao challenged Silicon Valley’s definition of scale, arguing that true scale would involve different communities worldwide developing models by and for themselves, creating distributed rather than centralized AI capabilities.


Democratic Deficits in Current Governance

Nelson highlighted a paradox in current US AI governance: while claiming to be “deregulatory,” the administration employs heavy-handed approaches through industrial policy tools including tariffs, export controls, immigration restrictions, and science funding priorities. She noted that H-1B visas now cost around $100,000 per worker, making this approach potentially more controlling than traditional regulation while being less democratic.


This shift toward non-traditional policy levers—trade, immigration, industrial policy—creates challenges because these spheres have historically been less accessible to public participation. Nelson also mentioned community contestation around data centers, where NDAs are being signed “in the dark of night.”


Corporate Co-optation and Inclusion Rhetoric

The discussion revealed how corporate actors have become sophisticated in adopting progressive language around inclusion and empowerment while promoting closed platforms. Hao observed that corporate rhetoric at AI conferences now incorporates social justice language to “make sure that you kind of buy into helping them lock in their closed platforms.”


This co-optation makes it difficult for communities to distinguish between genuine empowerment initiatives and marketing strategies designed to expand market access.


Audience Engagement and Unresolved Questions

The audience Q&A highlighted several ongoing challenges, including concerns about labor exploitation throughout the AI supply chain, intellectual property issues, and moral dilemmas individuals face when required to use AI tools built on exploitative practices.


Questions also addressed the absence of Chinese participants during Lunar New Year and broader inclusion concerns. Kaur noted that competition authorities intervene when intellectual property rights are abused to create unfair market conditions, focusing on preventing the use of innovation to stifle further innovation.


Key Takeaways

The panel demonstrated that AI openness questions are fundamentally about power, democracy, and global equity rather than merely technical specifications. Key insights included the need for broader definitions of openness encompassing socio-technical characteristics, genuine community participation in AI governance, and recognition that competition policy serves democratic governance beyond economic efficiency.


The discussion revealed ongoing tensions between rhetoric and reality in inclusion efforts, the sophistication of corporate co-optation of progressive language, and the challenge of achieving scale while maintaining democratic and participatory characteristics in AI development. As AI systems become increasingly central to economic and social life, these governance challenges require sustained attention to community agency, democratic participation, and equitable value distribution.


Session transcript

Amba Kak

The AI Now Institute and the AAPTI Institute, we are honored and delighted to be co-hosting this panel at the close of what has been an extremely stimulating, some would say over-stimulating week. What brings AAPTI and AI Now together, despite the many kinds of distance between New York and Bangalore, is our focus on the political economy of AI and our insistence that questions of technology are always questions of power. So we have a formidable panel by every standard, leaders in their field advocating for AI in the public interest, traversing several fields of government service, academia, and journalism, sometimes in the same person, as you will know if you read their bios, which I’m going to skip for reasons of expediency, but I’m going to talk through some of their specific advantages in the conversation.

You know, it always pains me a little bit to even bring it up, but I’m going to do it anyway, which is it is exceptional that this is also the only female-only panel at this symposium. Hopefully that’s not something we have to say a lot or something that we have to wear as a badge of honor, but more something to work on for future iterations. So before we begin, I don’t think he’s in the room, but I want to also thank Amlan Mohanty, who’s been a partner in conceptualizing and helping to bring this panel to light, and to our wonderful summit organizing team, Sanjana Mishra and Iksho Virat, for their tireless efforts. I hope you all get good sleep tonight after a very long week.

Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. There have been many discussions about openness at this summit. You’ve probably been in at least one of them. For the most part, these discussions have focused on the kind of technical affordances of open source, open-weight models, open hardware. But what’s clear is that the word open is doing a lot of work in these conversations. It’s a stand-in for many much broader values of democratization, of participation, agency, even sovereignty. So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI.

And I’m going to start with Alondra. Alondra has been the deputy director of the White House Office of Science and Technology Policy under President Biden. And at the time, there was a very heated debate about the geopolitical but also safety implications of open source and what U.S. government policy would be on these issues. And it seems like under this current administration, we’ve landed on a pro-open source overall orientation. But at the same time, it feels as if in many senses, AI governance in the United States is more closed than it has ever been. So I guess I wanted to ask, what do you see as the broader challenges to openness in AI governance today?

Alondra Nelson

Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. So a couple of things. I mean, I would say the Biden administration, I think, took the question of open weight models as a gradient, right? So it was a spectrum. So that open was not a binary. It’s either open or not open. And I think the new administration, the current administration, takes it much more as a binary, that open is a thing that you sort of have achieved and it is now open as opposed to being closed. I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic.

So certainly the questions around open models, AI models, are often around technical things like model weights. Are the model weights shared? Only the model weights shared? Is it also the case that the training data is shared? You know, is the API open to a certain extent or closed to a certain extent? So the technical things are certainly there. But I think if we go back to a sort of broader understanding of openness that comes out of sort of open source software, it was about shifting power. It was about forms of accountability. It was about sort of openness as a kind of practice and openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that could be, you could modify the technology, that you could sort of just use the technology for the sort of purposes of your community or the purposes that you had.

And so that meant that that older, I think, broader definition of open was much more about democracy and transparency and accountability in a way that if you take even, you know, a so-called open source model like Llama 2 or Llama 3, which isn’t really open source, we’re being asked to be content with model weights as open. So I think the, you know, why we want to really push back on that is because, you know, that we are often, I think, using geopolitical stakes as a justification for not doing the socio part of the socio-technical, for not doing the accountability and the transparency and the democracy part because, you know, too dangerous because in the UNESCO context, China, you know, these things just sort of sit in as signs for explanations for, you know, why things can’t be different.

And I think it’s the case that to go, you know, to be reminded of a kind of broader sense of open reminds us that, you know, it’s not this binary and that one can have, you know, there obviously may be places where you don’t want open source. Like, do you want open source nuclear-deployed AI? Probably not, right? But the debate gets carried forward as if every open source use or open weight use is that use, as opposed to the sort of gradient of uses that are much safer and moreover are beneficial to communities, to helping people achieve their goals and sort of certainly much better for public transparency and accountability about what these systems do in the world.

Amba Kak

Can I ask a quick follow-up and then I want to move to Anne, which is that the other defining feature, certainly of U.S. government policy today, is that it’s happening less through the traditional forms of regulation that we’re used to and much more through industrial policy, through trade policy, through immigration. But these are also spheres that have been, I would say, relatively even more immunized from public accountability, harder for the broader public to weigh in on. So just wanted your thoughts on how we…

Alondra Nelson

Yes, I’ve been writing and thinking about this. Thank you for that question. So, you know, we’ve spoken a lot about the new administration, which gets talked about as being deregulatory in regards to AI and being very light and being, quote, unquote, light touch. And I think if we actually pose that as a question as opposed to accepting it as a statement and actually look at what the current administration in the U.S. is doing around AI, it’s actually taking a quite heavy hand to sort of steer AI. So you mentioned some of the levers that they’re using, tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration. So, you know, there are, you know, I think companies are getting out of it and around it depending on their relationship to Washington, but we’re told that an H-1B visa for a high-tech worker is $100,000 per worker, right?

And so that’s, you know, 10x, 20x or whatever times a company, that’s quite a lot of money. And also just the way that science is being funded, to the extent that, you know, the federal government plays a large role in driving the sort of research ecosystem for technology. So all of those things are being very heavily shaped in the current administration in the U.S. And so it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper-regulatory, I think, in a lot of other ways. And I’ll go back to my keyword of the day, the democracy piece, which is the upside of formal rulemaking, even though it can be clunky, it can take a long time, sometimes the pace is too slow for the pace of the technology, all of those things can be true, is that it has democratic input.

So if you’re doing a rulemaking in the context of the U.S. federal government, there will be a public call, there will be a public notice that you’re doing the rulemaking, there will be a public call for input. So even if you don’t agree with the outcome, there are sort of moments of democratic input. When we are doing AI policy by fiat and through executive authority only, even those limited inputs are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.

Amba Kak

Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve been at the heart of a lot of global coordination on AI governance. And there was a time, I would say, the last 10 years have been characterized by open versus closed as a kind of binary or a way of organizing the world into particular camps when it comes to AI, the democratic open world and the rest of the world. But it’s interesting how much that has, you know, the ground beneath us has shifted in the last few years. And it has been particularly interesting to note at this summit that it is middle powers as a frame that is coming through as a kind of new organizing principle.

So I guess I want to say, I mean, do you see that openness still has value in forging multilateral solidarities, and especially in this brave new world we’re in?

Anne Bouverot

Yes, absolutely. I mean, clearly the geopolitical landscape has really shifted. At the AI Action Summit in Paris, it was exactly a year ago in February. It was just after the inauguration in the U.S. It was the first international trip for Vice President Vance, and what a speech that was, just before Munich, the Munich Security Conference. It was a moment where the U.S. announced at the White House the Stargate project. So it was a very strong and loud message from the U.S. saying, we’re here, we’re investing, we’re the world leaders. And at the summit, J.D. Vance said very clearly, we want all of you to be customers of our technology. And at the same time, this is the moment when DeepSeek emerged on the world map and everybody realized that actually China, using open source, which is why I want to come to that, was really saying we have a seat at the table and we’re actually playing that game.

And China using open source is actually very interesting because open source has a number of benefits and also risks. I don’t think it’s the answer to everything, but clearly it’s a way for challengers to catch up. This is how Android came to the world of smartphones. There’s many examples, and this is what China has taken as a lever. To be in that race. But then on to what does it mean for other countries than the U.S. and China. It also means that this is a tool that can be used by other countries. which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others to then just stand on their shoulders and continue to develop technology.

It doesn’t mean that everything should be open source; there are cases where you do want to be careful depending on the use case, but as a way to develop and stimulate competition it is very powerful. It’s not the only tool. You mentioned middle economies, middle powers. There was this fantastic speech by Mark Carney at Davos, and there was a speech by Macron as well that maybe I’ll conclude with, but this idea that middle economies have some resources, not the resources to build their own stack top to bottom and to fund frontier-level AI. But together, by building coalitions of the willing, these middle economies can do a lot of things. I believe that Canada, France, Germany, Switzerland, India, Japan, Australia, I can name a few of them.

And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing. So I believe this is really something that can be useful in the evolution of governance.

Amba Kak

That was a fascinating account, and I think what it also highlights is that actually, whether you’re China or the U.S. or the middle powers or France, there’s a level at which everyone, as we discussed, can in some limited way be pro -open source. So do you think then that the differentiation will be at the layer of governance and our approaches to how we govern? How do we govern these technologies?

Anne Bouverot

I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is really being taken as a tool by startups and scale-ups in Europe and in other countries. I mean, by Mistral, by Cohere, by Sakana AI in Japan, by a number. Is that governance? I don’t know. But clearly, governance and countries and institutions have a role to play in saying, how do we shape those coalitions of the willing? How do we put public funding or access to publicly funded compute or access to data sets that countries can help to put together? How do we put that at use and in which ways? So what are the governance tools that we use to strengthen digital sovereignty and resilience?

Amba Kak

Precisely, yeah, that’s sort of what I was getting at. Okay, Astha, I’ll quickly move to you. Middle powers, as we just discussed, is a very broad term, and what it conceals is that there are many different economic and political aspirations of the countries bundled in that mix. Especially for countries like India or other countries in the Global South, what are the unique forms of both leverage and dependence in this current environment?

Astha Kapoor

Yeah, thanks so much, Amba. I mean, I think that what we’ve been tussling with over the last few days is that we went from Global South to middle powers very quickly, in a matter of days, which changes our form a little bit and our aspirations, and I think that that is what we have to grapple with, which is that as the Global South, our needs are very different: we have structural issues around health, around education, that need to be addressed. We also have, you know, things that we need to do in terms of moving the country forward beyond what is just technologically mediated progress. And I think that what we’ve been hearing over the last five days is that things like, well, open data or multilingual data sets is what is going to be that push.

So, you know, our languages will now be online. But then at the same time, we also have to realize that without having openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the Global South doing the labor to bring people online. So openness as a driver of adoption is actually quite a dangerous frame for Global South countries because it moves attention from where we might need to invest our resources to then thinking that the only way to address our historical problems is via adoption. And we’ve also seen that in the absence of governance, India is not new to the openness discourse, right? We have had a history over the last 12 or 15 years on digital public infrastructure, but we’ve also seen the limits of once adoption occurs and when you have innovation, people with the deepest pockets come to innovate there because this is an enormous market.

So I think that you mentioned Carney: if we are a middle power, we’re definitely on the menu as a market. If we are a Global South country, I think that there’s value in thinking about what that solidarity is, because you’re right, there’s no homogeneity. And I think we’ve missed some of those questions around what we as large markets diversify. We’re not here to do the labor to, you know, test-bed models that are built elsewhere. So I think openness as dialogue, as distribution of value, is what we need to think about.

Amba Kak

So many soundbites that I want to clip out of what you just said; that was incredible, thank you. Chairperson Kaur, firstly, thank you so much for being here. I think what Astha said actually leads in well to the question I wanted to ask you, which is: how does one combat this dependence? As the Chair of the Competition Commission of India, you’re a regulator that has been kind of ahead of the curve in looking at anti-competitive trends in this market. So from your perspective, can you say a little bit both about the key implications of competition in the AI market and also whether you see competition as a lever in the so-called sovereignty toolkit?

Ravneet Kaur

Thank you, Amba. So for us at the Competition Commission of India, we’ve been looking at a lot of developments happening in the internet economy, and these developments have changed the way businesses work, how consumers interact with the markets, and how value is being created. So things are moving very rapidly on the digital front. And as the commission, we have looked at what can be the practices which can be anti-competitive. Apart from the benefits which are coming from a digital economy, we have numerous benefits when it comes to economies of scale, the network effects, the efficiencies which are coming from that. But then there are also these risks which are there. And some of these have already been observed by the commission.

So the key ones which we found in the case of digital markets are the self-preferencing which is happening, tying and bundling in numerous cases, and leveraging. And there are these exclusive agreements where unfair terms are being sought and, you know, parity arrangements are being put in place. So in the Competition Commission, we have looked at this conduct when it comes to search engines. We've looked at it in mobile ecosystems, online intermediation services, whether it is hotel bookings, food ordering, e-commerce, or social media platforms. So across the entire spectrum, the commission has been looking at it. And very interestingly, we then started looking at AI, and what could be the impact of AI.

So we did a market study on AI and competition, and the report has been released recently, in October 2025. It's available on our website. And we found a lot of similarities in the way AI can function as well. AI can bring a lot of benefits. We are seeing a lot of benefits when it comes to healthcare, education, logistics, supply chain management, and agriculture, and I'm seeing a lot of good things happening on that front. But there are also these potential risks, where you could see concentration in the entire AI value chain. There could be ecosystem lock-in. Then there could be targeted price discrimination of people based on location, economic means, et cetera.

And then exclusive partnerships, and the systems being opaque. So those were the things identified in the market study. And as a first step, we thought we needed to make everybody aware, because the important issue is one of access. Who has the access? That is the person who will determine what will happen in the future. So it is access to data. It's access to compute infrastructure. It is access to even skill sets: whether we are able to build up the required skill sets within the country to be able to compete effectively. So those issues have brought us to work towards a framework where we are asking, in the entire life cycle of the AI system, how can we bring in transparency, how can we bring in accountability?

Amba Kak

I think that's so important, too, because we focus a lot on big tech control over infrastructure and inputs, which people are familiar with. But I think what you're pointing to is that it's access to the consumer: the pathways to monetization are happening at the distribution layer. So really paying close attention to making sure that we have free and open competition in that layer, and that firms can't take dominance from one market into another, seems really important. My second, maybe more provocative, question was: do you see competition as a tool for particularly global majority countries to retain and exercise sovereignty in the kind of AI age?

Ravneet Kaur

When we look at AI, we are looking at how far we can develop, how much we can do to make sure that we are able to make the most of the market, and that we are able to develop, deploy, and monitor the AI systems that we are putting in place. And that's where the issue comes up that we need to have the autonomy to be able to deploy the systems as per our economic, strategic, and societal priorities. And that's where we see the very critical thing of how we can ensure that AI does that. And competition is a very important aspect of it. We just can't forget about it, because competition is what is going to ensure that there are no entry barriers, that players who are already there are not using their dominance to foreclose competition, to foreclose the market, and also that consumers are not left locked into a particular system because they can't move their data, and the various benefits they are deriving from the AI systems, to some other applications.

So really, competition is at the heart of it, and I don't see any way we can forget about markets. Markets would need to be contestable, fair, competitive. And for that, you know, that is where I would like to point to our study, where we have clearly brought out that people who are deploying the technology have to have technical transparency. The stakeholders have to be able to understand what's happening, what this technology or this application is being used for. And then there has to be governance transparency: how you are governing that system also needs to be transparent. So once we are able to ensure that the people who are deploying these systems are looking at all these aspects, and the self-audit is happening, then maybe we will be able to safeguard competition, because really at the crux of it all is maintaining competition.


Amba Kak

Thank you so much. Karen, I'm going to move to you. And just from the fact that there was a line of people trying to take a selfie with you before we started, I'm going to assume that many people in the audience are familiar with Karen's incredible book, Empire of AI. Her work has really delved into the global inequities that are embedded in the AI sort of global supply chain. I want to ask you, I mean, your book is full of rich examples, but where do you see that open approaches to developing AI in some ways pose a challenge to this empire model of AI?

Karen Hao

The BigScience project. It was this project that brought together over a thousand researchers from 70 countries, 250 institutions to try and create an open source large language model that not only would allow many different researchers to then interrogate what is actually happening beneath the surface of a large language model, but also to completely rethink what it would take to develop these technologies in a fundamentally more beneficial way: where, for example, there are better data governance practices, where you're actually curating and cleaning the data, making it transparent for people, being able to track which data owners are then contributing to what aspect of value generation within the model. And this kind of goes back to Alondra's point as well, where you were saying that we really need to understand openness with a much broader conception of what openness means.

It's not just technical openness. And this project really embodied that, where they were working together with lots of different cultural institutions, with libraries, historical institutions, to try and figure out better ways of capturing the rich data that they had, but with respect for that institution and with a way to then deliver value back to that institution, so the value chain wasn't going just to the model creators themselves. Another project that I really loved is one that I highlighted in the epilogue of my book, which is the Te Hiku Media AI speech recognition model. Te Hiku Media are a nonprofit radio station in New Zealand, and they broadcast in te reo Māori, or the Māori language, the language of the Indigenous peoples in New Zealand.

A couple years ago, there was this big movement within New Zealand to try and revitalize the Māori language, because it had almost been lost through the process of colonization. And Te Hiku Media thought they had a very unique opportunity with this rich archival audio of te reo Māori to open this up to the community and help facilitate more language learning. They wanted to make it more accessible than simply just allowing people to listen to it, though. They wanted to create an application where you listen to the audio while you see a transcription of the audio. You can click on the transcription to get automatic translation. You can figure out how the language actually works.

But they realized they didn't have enough capacity to transcribe this, because there simply were not enough proficient te reo Māori speakers. So this was the perfect use case where they could leverage building an AI speech recognition tool to do that work for them. But they went about this project in a totally different way. They made it extremely open and participatory for the community, not just in a technical way, but in a social way, where they engaged immediately with the community to ask them: do you want this AI tool? And once the community said yes, they then had a public education campaign where they taught everyone what AI is in the first place, what do we actually need, we need a model, we need data, this is the kind of data that we need, this is the data that we would need from you.

And then once they actually engaged in that process, and they developed so much trust with the community, they were able to collect enough data from the community, with full consent, in just a few days to train a speech recognition model. And then they continued to go back to the community, and they said: now that we have this model, what kinds of applications do you actually want us to develop with this? What kinds of new AI models do you want to develop with this? And all of this was built on another open source project, which was the Mozilla Foundation's DeepSpeech model, which was similarly developed with that kind of broader definition of openness: a model developed purely with consentful data donations as well.

And so the entire stack was built in the spirit of collaboration, with participation from everyone in the community, with an equal exchange of value, where the people who are giving the data have a vote, have a say in how the model ultimately can help support their journey in language learning. So both of those examples I always hold in my head when I'm thinking of what are the visions of AI that we actually want to support, what are the visions of open source AI that we actually want to support.

Amba Kak

So as you were speaking, I was just thinking: apart from being open and participatory in all the ways you said, these examples also provide a contrast to the idea that there is one model to rule them all, this very sort of large language, we're-taking-a-single-bet-on-a-single-technology type of approach. But similarly, one of the, I guess, common retorts to these experiments in some sense is that we can't do that at scale. And so I'm just curious, what do you see as the tension between these kinds of governance structures and scale, and is there a trade-off?

Karen Hao

So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that’s not scale. That’s a monopoly. And what really we would want from scale is different communities all around the world, different industries, different companies, each developing models by and for them at scale. Like, that’s, to me, like a much more appropriate way of thinking about scale. And in fact, what’s so interesting is, like, because of the data imperative for large language models and the compute imperative for large language models as they’re currently being trained by the main company, they’re not going to be able to do that.

There isn't a good ability to diffuse this technology across many different industries or many different communities. Most industries are data-poor industries. They're not like the Internet industries. They don't sit on vast amounts of data. And so if we actually want to diffuse AI to more people around the world and for more use cases around the world, in fact, we need to think of scale from a small AI perspective, a community-driven perspective, an application-specific perspective, and that's how we're going to get scale.

Amba Kak

Okay, we’ve heard, I guess, a range of rich perspectives, and I’m going to take it as a good sign that all our panelists seem to be actively taking notes and sort of engaging with what each other was saying. So I was going to propose as a sort of round two that I might ask, just based on the conversation we’ve just had, Alondra, what is something that’s sort of sticking with you or that you’re working through in response?

Alondra Nelson

Yeah, I think community. So Karen cued that up for me, and the note that I was just writing here was about that. What I was thinking about is how the stack that we are building now is explicitly closed to community. And I was thinking in particular about the data center and cloud layer. So in the U.S. context, there's a lot of contestation, growing contestation, in communities about data centers. What folks might not know is that part of the contestation is because elected officials are asked to sign NDAs, and contracts are being signed to stand up data centers in the dark of night, and communities don't even know. So the lack of openness around the infrastructure, that infrastructural piece of the AI stack, is actually quite profound.

And then I was thinking the opposite. So, my reflection on the time here, which I'm still going to be processing for quite a long time: it's my first time in New Delhi, my first time in India, and it's been an incredible experience. But I've been to a lot of AI conferences, like, you know, NeurIPS and everything, professional ones, not professional ones. A lot. This is the first one I've ever been to that has included the community in any considerable way. And I think it's a revolutionary thing. If we're really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to.

So, you know, who knows what will be the outcome of this week together. But it has been extraordinary and distinctive in the inclusion of lots of, you know, uncles, aunties, college students, and lots in between.

Amba Kak

Astha, closing reflections.

Astha Kapoor

Yeah. First of all, thank you for that reframe. As somebody who was here on the 16th, I was feeling so overwhelmed, and my instinct was, like, there are too many people. But I do appreciate that reframe, on the fact that this is the community that is going to build and question and do the work, I think, that we all keep talking about. And from that, my word is also community, but I think friction: how do we enable some of that, both the coalescing, but also the dialogue, the questions, the where-is-the-value-for-me part of it. And I know an example that was presented yesterday on the Amul Co-op. We've been doing a lot of work with cooperatives, which to me is a nice space, because there is the governance question of one member, one vote, and you can pool things.

So how do they become not just recipients but co-designers in some of the things that we've heard over the last few days. So, yeah.

Amba Kak

Just closing reflections, and maybe even just a takeaway that you're sitting with after this week.

Ravneet Kaur

Yeah, sure. So for me, I think the very important thing which came out from this AI Impact Summit is that governments need to be very active about how they are ensuring that the deployment of AI is happening. And for that, I am very happy with the way we are going, in terms of, you know, we did a great job when it came to digital identity and digital payments. So now we are looking at digital public infrastructure: how you're going to be able to provide compute platforms for startups, for people who don't have the resources, make data available, and then the focus which is there on small language models. Everything doesn't need to be large, especially when we look at things which are very language-specific, very related to our country and to our solutions.

So that's one of the key takeaways that I have. And the other, of course, is that all of us at the Competition Commission will now, you know, be going back with this: that one needs to be very alert as to what kinds of systems are being put in place, and whether they are flexible. Is there transparency? Is there accountability? Those are the key things, because at the end of the day, it is trust. If you can build up trust, if your systems are not opaque, then you will be able to get people on board onto your applications and your systems, and that's where success lies. That's where value is.

Amba Kak

I'll say, ma'am, that one of my key takeaways, and hopefully someone from the Swiss government is listening for next year, is that we also need to see many more voices from the enforcers, those that are going to make sure that the players in this space are accountable to the public and not above the law. So I'm very grateful that you're here, and I hope that future summits see more enforcers at the table. Okay, Karen, you get the last word. And I would say I'm going to open up for questions, so start thinking of them.

Karen Hao

I think my biggest reflection from the summit, which I also shared at an event last night, is that it's so interesting to observe corporate speak in these spaces. And the thing that struck me the most about this summit is that this corporate speak has gotten very sophisticated, in that they have adopted the language of inclusion, diversity, empowering marginalized communities to talk about ultimately selling their technology and making sure that you kind of buy into helping them lock in their closed platforms. And I hope that, because we have more community engagement and there's more openness in a lot of the discussions that are happening alongside this very sophisticated corporate speak, all of you will take away from the summit this broader idea of what it really means to ultimately build a future where AI can empower people.

It does not actually mean the democracy that the companies offer us. It in fact means that we should all be thinking very deeply about what are the problems that we really need to solve, as individuals, within our families, our communities, our companies, our contexts; and then whether or not AI is even the right solution for that problem; and then how to design and develop, from the ground up, AI solutions that truly are empowering and enabling and help tackle those problems and bring everyone along together.

Amba Kak

That was, yeah, what a great note to end on. And honestly, a note of optimism and a note to build towards the futures we want to see. Okay, so does anyone have any questions? Okay, I saw you first. Go ahead.

Audience member 1:

Hi, everyone. And, yeah, I was one of the people in line looking for the signature on the book, so I've read it. It's a reference book. And my question is addressed to you. So all of this, it makes sense, but it makes sense in a more macro way. From a micro perspective, where an individual is exposed to AI at their workplace, and we're expected to use it, and, you know, there's no getting away from it: how do we reconcile the fact that, you know, probably there is a whole lot of exploitation behind the models that we're using, but at the same time, you can't not use it, because it's just, it's every day.

I don’t use it. Yeah. Yeah. So I’d like to know a little bit more about that. How? Yeah.

Karen Hao

No, I actually, I think it's totally possible to not use these tools. But also, I would say that oftentimes our conversations around adopting AI are posed as a binary: either you go completely all in, or you go none at all. And there are actually a million possibilities in between, right? There are so many different ways that you could refrain from using AI in certain contexts, but maybe there are other ways that it helps you: being more intentional about what kinds of AI tools you adopt, from which kinds of companies. Like, we've been talking a lot about openness, so maybe you choose to use more open AI technologies rather than the closed ones. One of the things that I feel is missing right now within the AI ecosystem, and that makes the burden very, very high on consumers, is that we don't really have third-party organizations doing analysis to make clear and easy labels for consumers, to determine what values and what degree of resources are being used to develop different types of AI models, so that they can actually make informed decisions. But we have lots of precedent of this happening in other industries, like the fashion supply chain and food and coffee. And so I hope that someone out there listening will start working on this: develop some kind of third-party labeling system so that consumers can actually start making more informed choices.

The other thing that I would say is, I also don't think we are just consumers. That's not the only way that individuals can push against the inevitability narratives of AI. We've seen amazing protests that have broken out all around the world to push against data centers. We've seen protests from parents who feel that their children are being harmed and that this rapid escalation of AI advancement is getting out of control. We've seen artists and writers using the tools of litigation to counter these companies when they infringe on their intellectual property in ways that they don't stand for. There are many different ways. I think within your life, AI is everywhere, and that also means you, as an individual and within your community, have a thousand different touch points for how you can interact with the AI supply chain. And in each of those touch points, you can choose whether to resist or adopt or be neutral. And so, yeah, I hope that people actually feel significantly more agency than I think people generally feel today.

Amba Kak

Thank you. Okay, I think we should do a couple of questions. So you, you, and you. Okay, let’s go in that order. So we’ll take those three questions and then…

Audience member 2:

Hello, thank you so much. This was, I think, my favorite panel of the whole summit. And also, like, an all-female panel, which I think is nice. It's also kind of connected to a reflection. You know, my question is, like: I feel like in this space, I've realized there are not as many women, by far, as men. And, again, as you said, it's the only all-female panel. And we're here with a group of 15 people from Germany, and, like, half of us are male and half of us are female. Often just our male counterparts get addressed, and somebody's just speaking to them, you know, like, asking them for money or other things, in terms of, like, pitching their business idea, whatever.

But I've also noticed other things. Like, the theme is, right, AI all-inclusive, right? But I'm wondering, like, who does this include, in this specific context? From this summit, who do you understand is included in this vision of all-inclusive? And also, I've realized, I don't know if anybody else has, but I feel like China is quite an important power in the AI governance space, and yet the number of Chinese people I've seen here is very low. It's just something I noticed. So it's still just some reflection, and I wonder how you see this: like, what does this notion of all-inclusive mean for you, or how did you perceive it here?

Amba Kak

Thank you. Those were many important and provocative questions you just asked.

Audience member 3:

I was curious, kind of as a follow-up to our colleague here, about your view on the open source Chinese models, which are clearly the most intelligent in the open source space but clearly have a deep CCP perspective. And so I'm curious, like, how does that come together in this ecosystem, and how can we leverage it appropriately?

Audience member 4:

Hello. Thank you, panel, for the wonderful discussion. I'm an intellectual property and business lawyer, so my question is related to intellectual property, specifically for Ravneet. I just wanted to know how you see the openness of AI in the context of intellectual property, as openness somewhere runs up against the restrictions of intellectual property.

Amba Kak

Why don’t we start with that question?

Ravneet Kaur

Okay, sure. So when you look at intellectual property, you know, there's a lot of research, development and innovation which has gone into the development of that technology, and whatever is put in place, there are these copyright and patent acts which are protecting that. When it comes to the Competition Commission, we come into the picture only if we find that there is an abuse: wherever whatever innovation has been done is being used to ensure that no other players can come into the same market, and it is being used to enforce conditions which are unfair. So that is the only space where we come in.

Otherwise, the purpose of the commission is not to stifle innovation. We are to, in fact, protect innovation because that’s the way to grow. That’s the way markets will grow further. Competition will increase. New players will keep coming in, better technologies, better value for the customer. So consumer welfare is one of the very critical things we look at. That’s how we address these issues.

Amba Kak

I wonder if, Astha, you can speak to the gender question and that broader question on inclusion.

Astha Kapoor

Yeah. Thank you so much for that question. I think it's what we've all been feeling as well. What I have understood, in a very early, overwhelmed sense, is that inclusion, as Karen was saying, is also being used as a word for adoption. And I think that that is the primary framing that I'm taking away from this: democratization is about market access. The working group also says so. And I think the gender perspective will also follow, and we've seen this again in previous iterations of the tech-will-save-us, financial inclusion, digital financial inclusion variety, which is, like, get people online. And then what ends up happening is that when you realize that you're not able to make money off, like, you know, the bottom 80%, then you start to get drop-offs there. So it is at the moment of that hype cycle of getting everybody online, and then whether we're able

Amba Kak

I don’t know, maybe you could take the question on Chinese open source AI and how we feel about it.

Alondra Nelson

I'll try. I mean, one thing I would say: there's been some news reporting about the fact that this week took place during the Lunar New Year, and that probably had some impact on participation, and Ramadan as well. So I think that's not lost, shouldn't be lost, on any of us for this question of inclusion. I mean, I haven't worked with the Chinese models, so I don't know, but if they're open source models, you should be able to tune them so that they don't have, you know, at least as much kind of, you know, CCP ideological control. I don't know if you do that in the training data or at the inference level or where you do it. And it seems that there are a lot of companies that are building on the Chinese models, even in the enterprise space, so that is clearly not a hurdle to some of the enterprise uses and applications that people want to build on them.

Amba Kak

I think we can take two more questions. Okay, so your hand, and I just want to take someone from the middle. You can go. Okay, the alarm just went off. So if you could also make sure that it's a crisp question, that would allow there to also be answers. Yeah.

Audience member 5:

So I am really interested in how AI is going to impact labor. And one of the biggest concerns in this area is the fact that, you know, AI can train on the intellectual labor of so many people without giving credit, without giving compensation. So there are obviously regulatory approaches to this, but I'm more interested in, like, an upcoming area: new research that's happening about protecting publicly available data, be it images, be it websites, be it written content, in a way that, if that data is used directly by AI, it's either useless to it or it's harmful to it. I think there's some research happening around that at the University of Chicago and some other places. So my question here is twofold.

First, is this, like, a good approach to sort of protect intellectual property or data, by creating protection by design? And two, how does it go with the idea of openness? Right, because on the one hand, it's…

Amba Kak

Thank you for the question. I just want to make sure we have time for the others. They’re going to kick us out of this room. That’s the final question and then maybe Karen, you can address the labour question.

Audience member 6:

Hi, I wanted to ask about open washing. We’ve been hearing the term in previous discussions about openness in competition. And I just wanted to ask in terms of enforcement, how should competition authorities assess whether this openness is genuinely lowering entry barriers or whether underlying dependencies still exist essentially. Do we need new analytical tools? Does there need to be a reworking of the frameworks around competition? That’s essentially the question I wanted to ask. Thank you.

Amba Kak

Karen, and then Chairperson Kaur, you will have the last word.

Karen Hao

Sorry, can you remind me the very last part of your question? You were talking about… The labour one. Yes. I agree with everything that you said, basically: yes, this is a huge problem. Labor exploitation is absolutely happening, both with the exploitation of the labor that is being used to produce the data, and also labor exploitation of, like, the data workers who are cleaning the data. And I think that just shows, given that the labor exploitation is happening all through the supply chain, that it is kind of inherent in the logic of how these models are being created, and we need to fundamentally rethink that from the ground up.

Ravneet Kaur

So when we do a competition assessment, we are looking at numerous economic factors that are taken into consideration. It is not based only on, you know, what has been submitted to us; a very detailed analysis is done to understand whether there is any competition harm. And the other aspect which is looked into is what the effects are: is there an appreciable adverse effect? So we have to establish both things, and this is done on a case-to-case basis after doing a very rigorous analysis of both the data which is available in the public domain and the analysis done by our internal teams. Only then are we able to determine whether there's harm to competition.

Amba Kak

Okay, thank you all so much for being here. This is such a rich conversation and thank you all for being part of it. Thank you.


Alondra Nelson

Speech speed

175 words per minute

Speech length

1527 words

Speech time

520 seconds

Redefining Openness in AI Governance

Explanation

Nelson argues that openness should be understood as a socio‑technical gradient rather than a simple binary, encompassing transparency, accountability and democratic control over AI systems.


Evidence

“I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio‑technical characteristic and not just a technical characteristic” [2]. “It was about sort of openness as a kind of practice and openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that could be, you could modify the technology, that you could sort of just use the technology for the sort of purposes of your community or the purposes that you had” [4]. “And so that meant that that older, I think, broader definition of open was much more about democracy and transparency and accountability” [5]. “So that open was not a binary” [9].


Major discussion point

Redefining Openness in AI Governance


Topics

Artificial intelligence | Data governance


U.S. Government Approach and Democratic Accountability

Explanation

Nelson notes that the current U.S. administration relies on industrial, trade and immigration levers rather than formal rulemaking, which reduces democratic input and makes the approach less transparent.


Evidence

“you mentioned some of the levers that they’re using, tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration” [61]. “It’s unfortunately, I think, anti‑democratic relative to the status quo” [64]. “it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper‑regulatory, I think, in a lot of other ways” [72].


Major discussion point

U.S. Government Approach and Democratic Accountability


Topics

The enabling environment for digital development | Artificial intelligence | Human rights and the ethical dimensions of the information society


Community‑Driven Open‑Source AI and Scale

Explanation

Nelson stresses that genuine openness requires community participation and democratic voice in AI conferences, and that the lack of openness around AI infrastructure is a major barrier.


Evidence

“And if we’re really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to” [44]. “So the sort of lack of openness around the infrastructure, that infrastructural piece of the AI stack is actually quite profound” [45].


Major discussion point

Community‑Driven Open‑Source AI and Scale


Topics

Artificial intelligence | Capacity development | Data governance


A

Anne Bouverot

Speech speed

140 words per minute

Speech length

645 words

Speech time

275 seconds

Open‑source as a Strategic Tool (Not Universal)

Explanation

Bouverot argues that open‑source can be a powerful competitive lever but must be applied case‑by‑case, acknowledging both benefits and risks.


Evidence

“which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others to then just stand on their shoulders and continue to develop technology” [24]. “And China using open source is actually very interesting because open source has a number of benefits and also risks” [22]. “There’s a lot of, for example, open source is really being taken as a tool by startups and scale‑ups in Europe and in other countries” [25].


Major discussion point

Redefining Openness in AI Governance


Topics

Artificial intelligence | The digital economy


Middle Powers, Global South, and Multilateral Cooperation

Explanation

She highlights that middle‑power coalitions can use open‑source to stimulate competition and that ad‑hoc “coalitions of the willing” can help build collective AI governance capacity.


Evidence

“it doesn’t mean that everything should be open source there are cases where you do want to be careful depending on the use case but as a way to develop and stimulate competition it is very powerful … middle economies … building coalitions of the willing” [23]. “And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing” [78].


Major discussion point

Middle Powers, Global South, and Multilateral Cooperation


Topics

The digital economy | Artificial intelligence


Public Funding, Compute Access and Data Sharing

Explanation

Bouverot stresses that open‑source governance must be backed by public funding, access to compute and shared data sets to be effective.


Evidence

“How do we put public funding or access to publicly funded compute or access to data sets that countries can help to put together?” [81].


Major discussion point

Community‑Driven Open‑Source AI and Scale


Topics

The enabling environment for digital development | Data governance


A

Astha Kapoor

Speech speed

185 words per minute

Speech length

852 words

Speech time

275 seconds

Openness as Dialogue and Value Distribution

Explanation

Kapoor frames openness as a dialogue that distributes value, warning that treating openness merely as an adoption driver can harm Global South countries.


Evidence

“So I think openness as dialogue, as distribution of value is what we need to think about” [6]. “So openness as a driver of adoption is actually quite a dangerous frame for Global South countries because it moves attention from where we might need to invest our resources” [34]. “But then at the same time, we also have to realize that without having openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the Global South doing the labor to bring people online” [35].


Major discussion point

Redefining Openness in AI Governance


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Co‑operative Models for Governance

Explanation

She points to cooperative structures like the Amul co‑op as examples of one‑member‑one‑vote governance that return value to contributors.


Evidence

“And I know an example that was presented yesterday on the Amul Co‑op, we’ve been doing a lot of work with cooperatives, to me, which is a nice space because it is the governance question of one member, one vote, you can pool things” [129].


Major discussion point

Community‑Driven Open‑Source AI and Scale


Topics

Capacity development | Data governance


R

Ravneet Kaur

Speech speed

168 words per minute

Speech length

1442 words

Speech time

512 seconds

Competition Policy, Market Power, and Sovereignty

Explanation

Kaur outlines anti‑competitive practices in AI markets—self‑preferencing, exclusive agreements, bundling, ecosystem lock‑in—and argues that competition safeguards sovereignty and prevents entry barriers.


Evidence

“the key ones which we found in the case of digital markets is the self‑preferencing which is happening” [105]. “And there are these exclusive agreements where unfair terms are being also sought and, you know, parity agreements, parity arrangements” [106]. “There could be ecosystem lock‑in, which might happen” [107]. “Tying and bundling is occurring in numerous cases” [108].


Major discussion point

Competition Policy, Market Power, and Sovereignty


Topics

The digital economy | Artificial intelligence | Human rights and the ethical dimensions of the information society


Competition as a Lever for Sovereignty

Explanation

She emphasizes that competition ensures contestable AI markets, protects national priorities and prevents dominance that could undermine sovereignty.


Evidence

“we can’t forget about it because competition is what is going to ensure that there are no entry barriers, that players who are already there are not using their dominance to foreclose competition” [56]. “markets would need to be contestable, fair, competitive” [43].


Major discussion point

Competition Policy, Market Power, and Sovereignty


Topics

The digital economy | Artificial intelligence


Commission Intervenes Only on Abusive Practices

Explanation

Kaur clarifies that the Competition Commission steps in only when there is an abuse, aiming to protect innovation and consumer welfare rather than stifle it.


Evidence

“When it comes to the competition commission, we come into the picture only if we find that there is an abuse” [112]. “Only then we are able to determine whether there’s a harm to competition” [113].


Major discussion point

Competition Policy, Market Power, and Sovereignty


Topics

The digital economy | Consumer protection


K

Karen Hao

Speech speed

171 words per minute

Speech length

1765 words

Speech time

618 seconds

Community‑Centric Big‑Science Open‑Source Projects

Explanation

Hao describes large collaborative projects that embed community‑centric data governance, consentful data donations and value‑return to data contributors.


Evidence

“It was this project that brought together over a thousand researchers … to try and create an open source large language model … better data governance practices, … making it transparent for people, … track which data owners are then contributing to what aspect of value generation within the model” [118]. “And this project really embodied that, where they were working together with lots of different cultural institutions … to figure out better ways of capturing the rich data … deliver value back to that institution” [119]. “the entire stack was with the spirit of collaboration … the people who are giving the data have a vote, have a say in then how the model ultimately can help support their journey in language learning” [120].


Major discussion point

Community‑Driven Open‑Source AI and Scale


Topics

Artificial intelligence | Data governance | Capacity development


Corporate Rhetoric vs Genuine Openness; Consumer Agency

Explanation

She calls for third‑party labeling to help consumers assess openness and ethical stance of AI tools, noting the gap between corporate inclusion language and actual closed platforms.


Evidence

“we don’t really have third‑party organizations doing analysis to make clear like clear and easy labels for consumers to determine what values and what degree of resources are being used to develop different types of ai models so that they can actually make informed decisions” [38].


Major discussion point

Corporate Rhetoric vs Genuine Openness; Consumer Agency


Topics

Consumer protection | Human rights and the ethical dimensions of the information society


Labor Exploitation in AI Supply Chains

Explanation

Hao highlights that labor exploitation is inherent in data collection and model training, urging a redesign of AI from the ground up.


Evidence

“labor exploitation is absolutely happening, both with the exploitation of the labor that is being used to produce the data and also labor exploitation of, like, data workers that are cleaning the data” [144].


Major discussion point

Corporate Rhetoric vs Genuine Openness; Consumer Agency


Topics

Human rights and the ethical dimensions of the information society | Social and economic development


A

Amba Kak

Speech speed

131 words per minute

Speech length

1825 words

Speech time

833 seconds

Openness as Binary vs Gradient

Explanation

Kak points out that the past decade framed AI as a binary open/closed divide, and urges a broader, gradient understanding of openness.


Evidence

“the last 10 years have been characterized by open versus closed as a kind of binary or a way of organizing the world into particular camps” [11]. “So that open was not a binary” [9]. “So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI” [20].


Major discussion point

Redefining Openness in AI Governance


Topics

Artificial intelligence | Data governance


Middle Powers and Coalitions of the Willing

Explanation

She notes that middle‑power coalitions can leverage open‑source as a competitive lever and form ad‑hoc “coalitions of the willing” for AI governance.


Evidence

“Middle powers, as we just discussed, it’s a very broad term … especially for countries like India or other countries in the global south” [84]. “And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing” [78].


Major discussion point

Middle Powers, Global South, and Multilateral Cooperation


Topics

The digital economy | Artificial intelligence


Gender and Representation Concerns

Explanation

Kak highlights that the panel is all‑female, underscoring ongoing gender disparities in AI fields.


Evidence

“And also, like, an all‑female panel” [100]. “it’s also the only female‑only panel at this symposium” [153].


Major discussion point

Inclusion, Gender, and Geopolitical Representation


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society


A

Audience member 1

Speech speed

138 words per minute

Speech length

141 words

Speech time

61 seconds

Need for Third‑Party Labels

Explanation

The audience member calls for clear, independent labeling systems so consumers can assess the openness and ethical stance of AI tools.


Evidence

“we don’t really have third‑party organizations doing analysis to make clear like clear and easy labels for consumers to determine what values and what degree of resources are being used to develop different types of ai models so that they can actually make informed decisions” [38].


Major discussion point

Corporate Rhetoric vs Genuine Openness; Consumer Agency


Topics

Consumer protection | Human rights and the ethical dimensions of the information society


A

Audience member 2

Speech speed

190 words per minute

Speech length

256 words

Speech time

80 seconds

Inclusion, Gender, and Geopolitical Representation

Explanation

The audience member points out the low representation of Chinese participants and the gender imbalance at the summit, questioning who is truly included in the “all‑inclusive” AI vision.


Evidence

“I feel like China is quite an important power in the AI governance space but the amount of Chinese people here I’ve seen is very low” [92]. “the theme is, right, AI all‑inclusive, right?” [93]. “who you think is included in this vision for all‑inclusive?” [94]. “And also, like, an all‑female panel” [100]. “it’s also the only female panel” [152]. “half of us is male and half of us is female” [156].


Major discussion point

Inclusion, Gender, and Geopolitical Representation


Topics

Closing all digital divides | Human rights and the ethical dimensions of the information society


A

Audience member 3

Speech speed

193 words per minute

Speech length

59 words

Speech time

18 seconds

No direct quoted contribution captured

Explanation

No specific statement from Audience member 3 appears in the provided transcript excerpts.


A

Audience member 4

Speech speed

140 words per minute

Speech length

57 words

Speech time

24 seconds

Openness and Intellectual Property

Explanation

The audience member asks how openness interacts with intellectual property restrictions.


Evidence

“Just I wanted to know how you see the openness of AI in context of the intellectual property as openness is somewhere giving the restriction in context of the intellectual property” [40].


Major discussion point

Corporate Rhetoric vs Genuine Openness; Consumer Agency


Topics

Intellectual property (within The digital economy) | Artificial intelligence


A

Audience member 5

Speech speed

183 words per minute

Speech length

167 words

Speech time

54 seconds

Labor Exploitation in AI Supply Chains

Explanation

The audience member raises concerns about exploitation embedded in AI model development and its impact on labor.


Evidence

“How do we reconcile the fact that, you know, probably there is a whole lot of exploitation behind the models that we’re using?” [145]. “So I am really interested in how AI is going to impact labor” [149].


Major discussion point

Corporate Rhetoric vs Genuine Openness; Consumer Agency


Topics

Human rights and the ethical dimensions of the information society | Social and economic development


A

Audience member 6

Speech speed

128 words per minute

Speech length

78 words

Speech time

36 seconds

Open‑washing and Need for New Analytical Tools

Explanation

The audience member asks whether new tools are needed for competition authorities to assess genuine openness and to prevent hidden dependencies.


Evidence

“Hi, I wanted to ask about open washing” [48]. “Do we need new analytical tools?” [39]. “how should competition authorities assess whether this openness is genuinely lowering entry barriers or whether underlying dependencies still exist essentially” [47]. “Does there need to be a reworking of the frameworks around competition?” [49].


Major discussion point

Open‑washing; Competition Policy, Market Power, and Sovereignty


Topics

The digital economy | Artificial intelligence | Competition policy (within The digital economy)


Agreements

Agreement points

Openness should encompass broader socio-technical characteristics beyond just technical aspects

Speakers

– Alondra Nelson
– Karen Hao
– Amba Kak

Arguments

Openness should be understood as a spectrum rather than a binary, encompassing socio-technical characteristics beyond just technical aspects like model weights


Successful open AI projects involve community participation, consent-based data collection, and value sharing with data contributors


The word ‘open’ is doing significant work as a stand-in for broader values of democratization, participation, agency, and sovereignty in AI discussions


Summary

All three speakers agree that openness in AI should be understood more broadly than just technical openness, encompassing democratic values, community participation, and socio-technical characteristics that enable genuine empowerment and participation.


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Community participation and inclusion are essential for legitimate AI governance

Speakers

– Alondra Nelson
– Karen Hao
– Astha Kapoor

Arguments

This conference uniquely included community participation unlike typical AI conferences that exclude broader stakeholders


Successful open AI projects involve community participation, consent-based data collection, and value sharing with data contributors


Cooperatives and community governance models offer alternatives to corporate-controlled AI development


Summary

These speakers share the view that meaningful community participation is crucial for AI development and governance, contrasting corporate-controlled approaches with community-driven models that enable genuine participation and co-design.


Topics

Human rights and the ethical dimensions of the information society | Artificial intelligence


Competition and market concentration are critical concerns in AI governance

Speakers

– Ravneet Kaur
– Amba Kak

Arguments

AI markets show similar anti-competitive practices as digital markets including self-preferencing, tying, bundling, and exclusive agreements


Competition policy should focus on preventing dominance transfer from one market to another, particularly at the distribution layer where consumer access and monetization occur


Summary

Both speakers emphasize the importance of competition policy in preventing market concentration and anti-competitive practices in AI markets, particularly focusing on preventing dominance transfer between markets.


Topics

The digital economy | Artificial intelligence


Labor exploitation is inherent in current AI development processes

Speakers

– Karen Hao
– Audience member 1
– Audience member 5

Arguments

AI training exploits intellectual labor without credit or compensation, requiring fundamental rethinking of development approaches


Individuals face a moral dilemma between avoiding exploitative AI systems and workplace requirements to use AI tools


Technical protection methods for publicly available data could help address AI’s exploitation of intellectual labor


Summary

There is consensus that current AI development systematically exploits intellectual labor without proper compensation, creating both systemic and individual ethical dilemmas that require fundamental changes to development approaches.


Topics

Human rights and the ethical dimensions of the information society | The digital economy


Similar viewpoints

Both speakers recognize the challenges facing middle powers and Global South countries in maintaining sovereignty and agency in AI development, though Bouverot focuses more on coalition-building opportunities while Kapoor emphasizes the risks of being relegated to market roles.

Speakers

– Anne Bouverot
– Astha Kapoor

Arguments

Middle powers can form coalitions of the willing to pool resources and compete effectively without building entire AI stacks independently


Global South countries risk being positioned as markets for testing models built elsewhere rather than as sovereign developers


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers critique how current AI development processes systematically exclude communities and use sophisticated rhetoric to mask fundamentally closed and undemocratic approaches to technology development.

Speakers

– Alondra Nelson
– Karen Hao

Arguments

Current AI infrastructure development, particularly data centers, explicitly excludes community input through NDAs and secretive contracts


Corporate speak has become sophisticated in adopting inclusion language while ultimately promoting closed platforms and technology sales


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers emphasize the importance of transparency and accountability in AI systems, with Kaur focusing on regulatory frameworks and Hao providing concrete examples of how these principles can be implemented in practice.

Speakers

– Ravneet Kaur
– Karen Hao

Arguments

Transparency and accountability in AI systems are crucial for building trust and preventing market concentration


Examples like BigScience and Tahiku Media demonstrate how open approaches can create more equitable AI development processes


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Unexpected consensus

Open source as a competitive tool across different geopolitical positions

Speakers

– Anne Bouverot
– Alondra Nelson
– Audience member 3

Arguments

Open source serves as a competitive tool that allows challengers to catch up and enables countries to leverage existing knowledge to develop their own technology


Chinese open source models present challenges due to ideological perspectives but can potentially be tuned to remove unwanted influences


Chinese open source AI models present a challenge due to their embedded ideological perspectives while being technically superior in the open source space


Explanation

Despite representing different geopolitical perspectives (French government, former US administration, and audience concern about Chinese models), there is unexpected consensus that open source serves as a valuable competitive tool that can be leveraged by various actors, even when there are concerns about ideological influences.


Topics

Artificial intelligence | The enabling environment for digital development


The limitations of current inclusion rhetoric in AI spaces

Speakers

– Astha Kapoor
– Karen Hao
– Audience member 2

Arguments

The summit’s inclusion rhetoric often masks adoption and market access goals rather than genuine empowerment


Corporate speak has become sophisticated in adopting inclusion language while ultimately promoting closed platforms and technology sales


The summit demonstrates significant gender imbalance and exclusion despite claims of being ‘AI all-inclusive’


Explanation

There is unexpected consensus across speakers from different backgrounds (policy researcher, academic, and audience member) that current inclusion rhetoric in AI spaces often masks rather than addresses genuine exclusion and power imbalances.


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides


Overall assessment

Summary

The speakers demonstrate strong consensus on several key issues: the need for broader definitions of openness beyond technical aspects, the importance of community participation in AI governance, concerns about market concentration and competition, and recognition that current AI development processes systematically exploit labor. There is also shared critique of corporate rhetoric around inclusion and recognition of the challenges facing Global South countries and middle powers in maintaining AI sovereignty.


Consensus level

High level of consensus on fundamental principles of democratic AI governance, community participation, and the need for systemic changes to current AI development approaches. This consensus suggests potential for building coalitions around these shared values, though implementation strategies may vary based on different actors’ positions and capabilities.


Differences

Different viewpoints

Definition and scope of openness in AI

Speakers

– Alondra Nelson
– Anne Bouverot

Arguments

Openness should be understood as a spectrum rather than a binary, encompassing socio-technical characteristics beyond just technical aspects like model weights


Open source serves as a competitive tool that allows challengers to catch up and enables countries to leverage existing knowledge to develop their own technology


Summary

Nelson emphasizes a broader socio-technical definition of openness focused on democracy, transparency and community empowerment, while Bouverot frames openness primarily as a competitive tool for countries and companies to catch up technologically


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Framing of Global South countries in AI development

Speakers

– Astha Kapoor
– Anne Bouverot

Arguments

Global South countries risk being positioned as markets for testing models built elsewhere rather than as sovereign developers


Middle powers can form coalitions of the willing to pool resources and compete effectively without building entire AI stacks independently


Summary

Kapoor warns against Global South countries being relegated to testing grounds for foreign models and emphasizes sovereignty, while Bouverot sees middle powers (including Global South countries) as potential coalition partners for competitive advantage


Topics

Artificial intelligence | The enabling environment for digital development


Scale and distribution models for AI

Speakers

– Karen Hao
– Anne Bouverot

Arguments

Scale should mean different communities developing models by and for themselves, not monopolistic distribution by single entities


Middle powers can form coalitions of the willing to pool resources and compete effectively without building entire AI stacks independently


Summary

Hao advocates for community-driven, application-specific AI development at scale, while Bouverot supports coalition-based approaches where middle powers pool resources for competitive advantage


Topics

Artificial intelligence | The digital economy


Individual agency in AI adoption

Speakers

– Karen Hao
– Audience member 1

Arguments

Individual resistance to AI adoption is possible through intentional choices about which tools to use and from which companies


Individuals face a moral dilemma between avoiding exploitative AI systems and workplace requirements to use AI tools


Summary

Hao argues individuals have significant agency to resist or selectively engage with AI, while the audience member highlights practical constraints that limit individual choice in workplace contexts


Topics

Human rights and the ethical dimensions of the information society | The digital economy


Unexpected differences

Role of Chinese open source AI models

Speakers

– Alondra Nelson
– Audience member 3

Arguments

Chinese open source models present challenges due to ideological perspectives but can potentially be tuned to remove unwanted influences


Chinese open source AI models present a challenge due to their embedded ideological perspectives while being technically superior in the open source space


Explanation

The disagreement is subtle but significant – Nelson suggests technical solutions (tuning) can address ideological concerns, while the audience member presents this as a fundamental tension between technical superiority and ideological acceptability


Topics

Artificial intelligence | The enabling environment for digital development


Nature of inclusion at the summit

Speakers

– Alondra Nelson
– Astha Kapoor
– Audience member 2

Arguments

This conference uniquely included community participation unlike typical AI conferences that exclude broader stakeholders


The summit’s inclusion rhetoric often masks adoption and market access goals rather than genuine empowerment


The summit demonstrates significant gender imbalance and exclusion despite claims of being ‘AI all-inclusive’


Explanation

Unexpected disagreement on whether the summit actually achieved meaningful inclusion – Nelson praises it as revolutionary, Kapoor critiques inclusion as masking adoption goals, and the audience member points to concrete exclusions like gender imbalance


Topics

Human rights and the ethical dimensions of the information society | Closing all digital divides


Overall assessment

Summary

The main areas of disagreement center on the definition and implementation of openness in AI, the role of different actors (individuals, communities, middle powers) in AI development, and the effectiveness of various governance approaches. There are also tensions between technical and socio-political framings of AI challenges.


Disagreement level

Moderate level of disagreement with significant implications – while speakers generally agree on the importance of democratizing AI and preventing concentration of power, they differ substantially on strategies and priorities. These disagreements reflect deeper tensions between technical, economic, and social justice approaches to AI governance, which could impact the effectiveness of coordinated policy responses.


Partial agreements

Both agree that US AI governance is becoming less democratic and transparent, but they differ on emphasis – Nelson focuses on the anti-democratic nature of bypassing formal rulemaking, while Kak emphasizes the broader closure of governance processes

Speakers

– Alondra Nelson
– Amba Kak

Arguments

Current US AI policy operates through industrial policy, trade, and immigration rather than traditional regulation, reducing democratic input opportunities


AI governance in the United States is becoming more closed despite a pro-open source orientation, particularly through non-traditional regulatory mechanisms


Topics

Artificial intelligence | The enabling environment for digital development


Both advocate for community-driven AI development but differ in approach – Kapoor emphasizes cooperative governance structures while Hao focuses on participatory development processes and consent-based data practices

Speakers

– Astha Kapoor
– Karen Hao

Arguments

Cooperatives and community governance models offer alternatives to corporate-controlled AI development


Successful open AI projects involve community participation, consent-based data collection, and value sharing with data contributors


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both see competition as crucial for AI governance but emphasize different aspects – Kaur focuses on preventing lock-in and ensuring market access, while Kak emphasizes preventing cross-market dominance transfer

Speakers

– Ravneet Kaur
– Amba Kak

Arguments

Competition is essential for ensuring market contestability and preventing consumer lock-in to particular AI systems


Competition policy should focus on preventing dominance transfer from one market to another, particularly at the distribution layer where consumer access and monetization occur


Topics

The digital economy | Artificial intelligence


Similar viewpoints

Both speakers recognize the challenges facing middle powers and Global South countries in maintaining sovereignty and agency in AI development, though Bouverot focuses more on coalition-building opportunities while Kapoor emphasizes the risks of being relegated to market roles.

Speakers

– Anne Bouverot
– Astha Kapoor

Arguments

Middle powers can form coalitions of the willing to pool resources and compete effectively without building entire AI stacks independently


Global South countries risk being positioned as markets for testing models built elsewhere rather than as sovereign developers


Topics

Artificial intelligence | The enabling environment for digital development


Both speakers critique how current AI development processes systematically exclude communities and use sophisticated rhetoric to mask fundamentally closed and undemocratic approaches to technology development.

Speakers

– Alondra Nelson
– Karen Hao

Arguments

Current AI infrastructure development, particularly data centers, explicitly excludes community input through NDAs and secretive contracts


Corporate speak has become sophisticated in adopting inclusion language while ultimately promoting closed platforms and technology sales


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Both speakers emphasize the importance of transparency and accountability in AI systems, with Kaur focusing on regulatory frameworks and Hao providing concrete examples of how these principles can be implemented in practice.

Speakers

– Ravneet Kaur
– Karen Hao

Arguments

Transparency and accountability in AI systems are crucial for building trust and preventing market concentration


Examples like BigScience and Te Hiku Media demonstrate how open approaches can create more equitable AI development processes


Topics

Artificial intelligence | Human rights and the ethical dimensions of the information society


Takeaways

Key takeaways

Openness in AI should be understood as a spectrum encompassing socio-technical characteristics including power distribution, accountability, transparency, and community participation, not just technical aspects like model weights


The geopolitical AI landscape has shifted with middle powers forming coalitions to compete against US-China dominance, while Global South countries risk being positioned as markets rather than sovereign developers


Current AI governance increasingly operates through industrial policy, trade, and immigration rather than traditional regulation, reducing opportunities for democratic input and public accountability


Competition policy is essential for preventing market concentration in AI, with similar anti-competitive practices emerging as seen in digital markets including self-preferencing, bundling, and exclusive agreements


Community-driven AI development models demonstrate viable alternatives to corporate-controlled approaches, emphasizing consent-based data collection, participatory governance, and value sharing with contributors


Corporate rhetoric has become sophisticated in adopting inclusion language while ultimately promoting closed platforms and technology adoption rather than genuine empowerment


Labor exploitation occurs throughout the AI supply chain from data production to cleaning, requiring fundamental rethinking of development approaches rather than just regulatory fixes


True scale in AI means different communities developing models by and for themselves rather than monopolistic distribution by single entities


Resolutions and action items

Need for third-party labeling systems to help consumers make informed choices about AI tools based on development values and resource usage


Future AI conferences should include more community participation and voices from enforcers/regulators, not just corporate and academic perspectives


Competition authorities should maintain vigilance about transparency and accountability in AI systems deployment


Governments need to be more active in ensuring proper AI deployment through digital public infrastructure, compute platforms for startups, and a focus on small language models for specific use cases


Development of better data governance practices with transparent curation, cleaning, and tracking of data contributor value generation


Unresolved issues

How to effectively govern Chinese open source AI models that may contain ideological perspectives while leveraging their technical capabilities


Balancing intellectual property protection with openness requirements and determining when competition authorities should intervene


Addressing the fundamental tension between corporate profit motives and genuine community empowerment in AI development


Resolving the challenge of scale versus community-driven governance in AI development


Determining effective enforcement mechanisms for distinguishing genuine openness from ‘open washing’ by companies


Addressing systemic gender and representation gaps in AI governance spaces


Managing the trade-offs between rapid AI adoption and protecting labor rights and intellectual property


Defining what ‘AI for all’ and ‘inclusion’ actually mean beyond market access and technology adoption


Suggested compromises

Adopting a gradient approach to openness rather than treating it as a binary, allowing for different levels of openness based on use cases and safety considerations


Individuals can make intentional choices about AI adoption – using some tools while refusing others, choosing open source over closed platforms where possible


Middle powers forming ad hoc coalitions of the willing rather than requiring one large bloc to compete with major AI powers


Focusing on application-specific and community-driven AI development rather than pursuing universal ‘one model to rule them all’ approaches


Balancing innovation protection through intellectual property with preventing abuse that creates unfair market conditions


Using existing cooperative and community governance models as frameworks for more equitable AI development


Thought provoking comments

I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic… it was about shifting power. It was about forms of accountability.

Speaker

Alondra Nelson


Reason

This comment reframes the entire discussion by distinguishing between technical openness (sharing model weights) and true openness rooted in democratic values. It challenges the audience to think beyond binary definitions and consider openness as fundamentally about power redistribution and accountability.


Impact

This set the intellectual foundation for the entire panel, with subsequent speakers building on this distinction between technical and socio-political openness. It shifted the conversation from technical specifications to governance and power dynamics.


So it may not be regulatory in the sense of formal rulemaking… but it is certainly hyper-regulatory… When we are doing AI policy by fiat and through executive authority only, those inputs, even those limited inputs, are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.

Speaker

Alondra Nelson


Reason

This insight exposes a critical paradox in current AI governance – that ‘deregulatory’ approaches are actually highly regulatory but bypass democratic processes. It reveals how industrial policy tools (tariffs, immigration, trade) can be more controlling than traditional regulation while being less accountable.


Impact

This comment introduced a new analytical framework that influenced how other panelists discussed sovereignty and governance, particularly Anne Bouverot’s discussion of middle powers and coalitions.


We are not here to do the labor to test bed models that are built elsewhere… openness as dialogue, as distribution of value is what we need to think about.

Speaker

Astha Kapoor


Reason

This powerfully challenges the dominant narrative that positions Global South countries as grateful recipients of AI technology. It reframes the discussion from adoption-focused ‘inclusion’ to value creation and agency, exposing how ‘openness’ can mask exploitative relationships.


Impact

This comment fundamentally shifted the conversation’s perspective on Global South participation in AI, moving from a charity/development model to one of equal partnership and value distribution. It influenced subsequent discussions about community engagement and sovereignty.


So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that’s not scale. That’s a monopoly… what really we would want from scale is different communities all around the world… each developing models by and for them at scale.

Speaker

Karen Hao


Reason

This comment deconstructs one of Silicon Valley’s most fundamental concepts, revealing how ‘scale’ has been redefined to justify centralization rather than true distribution. It offers a radical alternative vision of decentralized, community-driven AI development.


Impact

This reframing influenced the entire panel’s thinking about alternatives to big tech dominance, connecting to Alondra’s points about community inclusion and Astha’s concerns about Global South agency. It provided a concrete alternative vision that other panelists could build upon.


This is the first [AI conference] I’ve ever been to that has included the community in any considerable way… if we’re really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to.

Speaker

Alondra Nelson


Reason

This meta-observation about the conference itself connects the theoretical discussions about openness and democracy to the practical reality of who gets to participate in AI governance conversations. It highlights how exclusionary most AI discourse actually is.


Impact

This comment validated the conference’s approach and reinforced the panel’s emphasis on community participation, while also serving as a critique of the broader AI governance ecosystem’s insularity.


It’s so interesting to observe corporate speak in these spaces… they have adopted the language of inclusion, diversity, empowering marginalized communities to talk about ultimately selling their technology and making sure that you kind of buy into helping them lock in their closed platforms.

Speaker

Karen Hao


Reason

This observation exposes how progressive language around inclusion and empowerment has been co-opted as a marketing strategy, warning the audience to be critical of how terms like ‘democratization’ are being used by corporations to advance their own interests.


Impact

This served as a powerful closing warning that tied together many of the panel’s themes about the gap between rhetoric and reality in AI governance, encouraging critical thinking about corporate participation in these discussions.


Overall assessment

These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sophisticated analysis of power, democracy, and global equity. The panelists built upon each other’s insights to create a layered critique that moved from technical definitions to governance structures to global power dynamics.

Alondra Nelson’s reframing of openness as socio-technical rather than purely technical set the stage for deeper discussions about democracy and accountability. Astha Kapoor’s challenge to Global South positioning as mere adopters rather than co-creators added crucial perspective on global equity. Karen Hao’s redefinition of scale and warning about corporate co-optation of progressive language provided concrete alternatives and critical tools for analysis.

Together, these comments created a conversation that was both intellectually rigorous and practically grounded, offering new frameworks for understanding AI governance that prioritize community agency, democratic participation, and equitable value distribution over technical specifications and corporate interests.


Follow-up questions

How can we ensure democratic input in AI policy when governance is increasingly happening through industrial policy, trade policy, and immigration rather than traditional rulemaking?

Speaker

Alondra Nelson


Explanation

This addresses the concern that non-traditional policy levers lack the democratic input mechanisms that formal rulemaking provides, making AI governance less accountable to the public


How do we put governance tools to use in ways that strengthen digital sovereignty and resilience for middle powers?

Speaker

Anne Bouverot


Explanation

This explores how middle power countries can leverage governance mechanisms, public funding, compute access, and data sets to build coalitions and maintain autonomy in AI development


How can Global South countries avoid being positioned primarily as markets for AI adoption rather than as co-designers and beneficiaries of AI systems?

Speaker

Astha Kapoor


Explanation

This addresses the risk that openness frameworks may exploit Global South populations for labor and adoption without addressing structural issues or providing genuine agency in AI development


How can we ensure technical transparency and governance transparency in AI systems deployment?

Speaker

Ravneet Kaur


Explanation

This focuses on the need for stakeholders to understand what AI technology is being used for and how it’s being governed, which is essential for maintaining competition and preventing market concentration


What is the tension between community-driven, participatory AI governance structures and achieving scale?

Speaker

Amba Kak


Explanation

This explores whether there’s a fundamental trade-off between the kind of inclusive, community-centered AI development described in examples like the Māori language project and scaling AI solutions broadly


How can we develop third-party labeling systems for AI models to help consumers make informed choices about the values and resources embedded in different AI technologies?

Speaker

Karen Hao


Explanation

This addresses the current lack of transparency that makes it difficult for consumers to understand the ethical and resource implications of different AI tools, similar to labeling systems in fashion and food industries


How can competition authorities assess whether claimed ‘openness’ genuinely lowers entry barriers or whether underlying dependencies still exist (open washing)?

Speaker

Audience member 6


Explanation

This addresses the need for new analytical tools and frameworks to evaluate whether companies’ claims of openness actually promote competition or are merely superficial marketing


How can we protect intellectual labor and publicly available data from being exploited by AI systems without compensation or credit?

Speaker

Audience member 5


Explanation

This explores technical approaches like ‘protection by design’ that could make data useless or harmful to AI systems if used without permission, and how this relates to openness principles


How do we reconcile individual use of AI tools with knowledge of the exploitation embedded in their development?

Speaker

Audience member 1


Explanation

This addresses the practical and ethical dilemma individuals face when expected to use AI tools in their work while being aware of the labor exploitation and other harms in the AI supply chain


How can we appropriately leverage Chinese open source AI models that may have embedded ideological perspectives?

Speaker

Audience member 3


Explanation

This explores the technical and governance challenges of using open source models from China while addressing concerns about embedded political perspectives


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.