Global Perspectives on Openness and Trust in AI

20 Feb 2026 15:00h - 16:00h

Session at a glance: summary, keypoints, and speakers overview

Summary

The panel convened by AI Now and AAPTI examined how the concept of “openness” shapes AI governance and its political economy, noting that the term does far more work than a simple technical label [12-17]. Participants argued that “open” functions as a proxy for broader values such as democratization, participation, and sovereignty rather than merely sharing code or model weights [16-17].


Alondra Nelson explained that the current U.S. administration frames openness as a binary outcome (a model is either open or it is not), in contrast with the Biden administration’s approach, which, following the original open-source ethos, treated openness as a socio-technical gradient [27-33][40-42]. She warned that this binary framing allows geopolitical concerns to eclipse accountability, transparency, and democratic control over AI systems [45-47], and added that U.S. AI policy now operates more through industrial levers such as tariffs, export controls, and high-cost H-1B visas, an approach she described as “hyper-regulatory” and less democratic than formal rulemaking [56-63][64-68].


Anne Bouverot highlighted that China’s use of open-source tools has enabled it to catch up technologically, while European countries view open source as a competitive lever for middle-power coalitions [75-84][88-92]. She argued that ad-hoc “coalitions of the willing” among middle powers can harness openness to build digital sovereignty without relying on a single dominant stack [89-92].


Astha Kapoor warned that for Global South nations, openness can become a risky “adoption” narrative that diverts resources from structural challenges and may turn these countries into test-beds for external AI models [111-119][124-126]. Ravneet Kaur described the Competition Commission of India’s study of AI markets, identifying risks such as ecosystem lock-in, price discrimination, and opaque partnerships, and emphasized that ensuring access to data, compute, and skills is essential for fair competition [128-138][148-158]. She argued that competition is a crucial tool for preserving national sovereignty in the AI era, requiring transparent governance and contestable markets [161-170][172-173].


Karen Hao presented two open-source initiatives, the BigScience multilingual LLM project and New Zealand’s Te Hiku Media speech-recognition model, that illustrate participatory, consent-driven openness and return value to data-providing communities [179-202]. She cautioned that scaling such models should not mean monopolistic distribution, but rather a decentralized “small-AI” approach that enables diverse industries and communities to develop their own solutions [207-212].


The discussion concluded that redefining openness as a democratic, community-centered practice, supported by transparent competition policy and inclusive coalitions, is essential for equitable AI development worldwide [40-42][161-170][207-212].


Keypoints


Major discussion points


Re-defining “openness” in AI beyond technical binaries – The panel opened by noting that “open” is a stand-in for broader values such as democratization, participation and sovereignty [12-17]. Alondra emphasized that the current U.S. administration treats openness as a binary rather than a gradient and argued for a socio-technical view that links openness to power-shifting, accountability and community use [30-34][40-46][47-49]. Anne highlighted how open-source can be a strategic lever for middle-power countries while acknowledging its limits [75-89]. Karen illustrated concrete projects (the BigScience open-source LLM effort and the Te Hiku Media Māori speech-recognition model) that embody a participatory, consent-driven notion of openness [179-202].


Governance mechanisms and the politics of U.S. AI policy – Alondra pointed out that, although the current administration appears “light-touch” on formal regulation, it is exercising heavy influence through trade, export controls and immigration policy, which she described as “hyper-regulatory” and “anti-democratic” compared with traditional rule-making that includes public comment [55-66][67-68]. Amba’s follow-up question framed this shift as a move away from transparent, accountable regulation toward less publicly scrutinised levers [50-52].


Competition, market power and digital sovereignty – Ravneet Kaur explained the Competition Commission of India’s focus on anti-competitive practices (self-preferencing, bundling, exclusive agreements) across digital markets and, more recently, AI [128-138][141-152]. She argued that competition is essential for preventing entry barriers, ensuring transparency, and protecting sovereignty, especially for “global-majority” economies [161-170][166-170]. The discussion linked competition policy to broader concerns about data, compute and talent access [154-159].


Inclusion, representation and gender equity – Amba noted that the panel was the only all-female one at the summit, remarking that this should not have to be worn as a “badge of honor” and should be improved on in future iterations [4-5][69-74]. Audience members raised questions about who is truly included in the “all-inclusive” AI vision, pointing to the under-representation of Chinese participants and the need for gender-balanced engagement [298-306][311-317]. Karen later critiqued “corporate speak” that co-opts inclusion language while preserving closed platforms [254-258].


Community agency, labor and ethical risks – Alondra reflected on the lack of community transparency around data-center siting and the importance of community involvement in AI conferences [219-226][232-236]. Karen and later audience participants highlighted labor exploitation in data-collection pipelines and called for third-party labeling, “open-washing” safeguards, and design-by-consent approaches to protect workers and data subjects [277-283][369-381].


Overall purpose / goal of the discussion


The panel was convened to broaden the conversation about “openness” in AI governance, interrogate how power, politics and market structures shape AI development, and explore concrete pathways (policy levers, competition law, community-driven projects, and inclusive representation) to align AI with the public interest across diverse geopolitical contexts (U.S., Europe, India, Global South).


Overall tone and its evolution


Opening (0:00-12:00): Formal, optimistic, and collaborative, with Amba framing the session as a “stimulating” exchange and participants outlining shared values around openness [1-3][12-17].


Middle segment (12:00-28:00): Becomes more critical and analytical; Alondra critiques the binary view of openness and the “anti-democratic” nature of U.S. policy [55-68]; Anne and Astha discuss geopolitical power shifts and the risks of a one-size-fits-all model [75-89][111-126]; Ravneet details concrete anti-competitive concerns [128-158].


Later segment (28:00-41:00): Reflective and hopeful, emphasizing community participation, concrete open-source case studies, and the potential of competition to safeguard sovereignty [179-202][219-236][161-170].


Closing (41:00-end): Cautiously optimistic, acknowledging corporate co-optation of inclusion language while urging deeper democratic engagement and concrete actions for labor justice and broader representation [254-258][369-381][389-391].


Overall, the tone moves from introductory enthusiasm to a nuanced critique of existing power structures, then toward constructive optimism about community-driven solutions and the need for inclusive, democratic AI governance.


Speakers

Amba Kak – Moderator and co‑host of the panel; affiliated with the AI Now Institute and the AAPTI Institute.


Alondra Nelson – Former Deputy Director of the White House Office of Science and Technology Policy (Biden administration); Harold F. Linder Professor, Institute for Advanced Study [S22][S24].


Anne Bouverot – French President’s Special Envoy for the AI Action Summit; former Director General of the GSMA [S27].


Astha Kapoor – Representative of the AAPTI Institute / Civil Society, Asia‑Pacific Group; policy researcher on data stewardship [S7][S9].


Ravneet Kaur – Chairperson, Competition Commission of India [S1].


Karen Hao – Journalist and author of Empire of AI, covering AI policy and ethics.


Audience members


Audience member 1 – Founder of Corral Inc. [S10].


Audience member 2 – Participant from a German delegation (part of a group from Germany) [S29].


Audience member 3 – Student (asked about open‑source Chinese models) [S13].


Audience member 4 – Intellectual property and business lawyer [S17].


Audience member 5 – Audience participant (question on AI’s impact on labor); no specific role identified.


Audience member 6 – Audience participant (question on “open‑washing”); no specific role identified.


Additional speakers:


None (all speakers appearing in the transcript are listed above).


Full session report: comprehensive analysis and detailed insights

The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate. Amba Kak opened by noting the “political economy of AI” as the common thread that links New York and Bangalore and highlighted the panel’s composition of senior figures from government, academia and journalism [1-3][4-5]. She also drew attention to the fact that this was the only all-female panel at the summit, framing it both as a point of pride and a reminder of the work still needed to normalise gender-balanced representation [4-5]. Kak also thanked Amlan Mohanty for co-conceptualising the panel and the summit organising team, Sanjana Mishra and Iksho Virat, for their logistical work [1-5].


A central theme introduced early on was the contested meaning of “openness”. Kak observed that discussions of openness have largely focused on technical affordances such as open-source code, model weights or hardware, yet the term is being used as a proxy for much broader values, including democratisation, participation, agency and even sovereignty [12-17].


Alondra Nelson (former Deputy Director of the White House Office of Science and Technology Policy) explained that the Biden administration treated openness as a gradient, reflecting the original open-source ethos of shifting power and fostering accountability, whereas the current administration frames it as a binary outcome: either a model is “open” or it is not [27-33][40-43]. She warned that this binary framing allows geopolitical concerns to eclipse the socio-technical dimensions of openness, such as transparency and democratic control, and that merely releasing model weights without accompanying data, APIs or governance mechanisms is insufficient [44-49].


Nelson explained that U.S. AI policy is increasingly pursued through industrial levers (tariffs, export controls, semiconductor restrictions and costly H-1B visas) rather than through traditional rule-making processes that invite public comment. She called the reliance on industrial levers a “hyper-regulatory” strategy and argued that, because it sidesteps formal rule-making, it is comparatively anti-democratic [55-63][64-68].


Anne Bouverot, France’s special envoy for the AI Action Summit, contextualised the geopolitical shift by recalling the U.S. announcement of the “Stargate” project and Vice-President Vance’s call for global customers [75-81]. She highlighted how China has leveraged open-source tools to catch up technologically, using open-source as a lever to gain a seat at the table [82-84]. For Europe and other middle-power nations, Bouverot argued that open-source can serve as a competitive instrument that enables “coalitions of the willing” to build digital sovereignty without having to develop an entire stack from scratch [88-92].


Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies. She explained that framing openness merely as a driver of data or multilingual datasets risks turning Global South countries into test-beds for external AI models, diverting attention from structural challenges in health, education and broader development [111-119][124-126].


Ravneet Kaur, Chair of the Competition Commission of India, presented the commission’s recent market study on AI, which identified anti-competitive practices such as self-preferencing, bundling, tying, exclusive agreements and ecosystem lock-in across digital markets [128-138][141-152]. She stressed that access to data, compute infrastructure and skilled talent is pivotal for fair competition, and that transparency and accountability throughout the AI lifecycle are essential to safeguard consumer welfare and national sovereignty [153-158][161-170][166-170]. Kaur positioned competition policy as a concrete tool to prevent market foreclosure and to ensure that AI systems remain contestable and transparent [161-166].


Karen Hao illustrated concrete realisations of a broader, participatory notion of openness. She described the BigScience multilingual large-language-model project, which brought together over a thousand researchers from 70 countries to create an open-source model with transparent data curation, shared governance and value-return mechanisms for contributing cultural institutions [179-182]. Hao also recounted the Te Hiku Media Māori speech-recognition initiative in New Zealand, where the community was consulted from the outset, consent was obtained for data use, and the resulting model was co-designed to serve language revitalisation goals [183-202].


Continuing the discussion on scale, Hao argued that the Silicon Valley conception of “scale”, a single model distributed to everyone by a monopolistic provider, is misleading. She proposed that true scale should be understood as many communities developing their own, application-specific models, thereby avoiding the concentration of power inherent in monolithic large-scale systems [207-212].


Nelson reflected that, unlike many prior conferences, this summit actively included a broad cross-section of participants-students, “aunties” and other community members-making the event “revolutionary” in its inclusivity [232-236]. She also highlighted the lack of transparency around data-centre siting, where local officials are often bound by NDAs, underscoring a gap in community oversight of critical infrastructure [219-226].


The audience-question segment broadened the conversation. On individual agency, labour concerns and the risk of “open-washing”, Hao suggested that consumers can exercise agency by choosing open-source tools aligned with their values and called for third-party labelling schemes, similar to those used in fashion or food supply chains, to make the provenance and resource usage of AI models clear [277-283]. She warned that corporate rhetoric about inclusion often masks a strategy of locking users into closed platforms [254-258]. In response to a question about gender balance and Chinese participation, Astha Kapoor noted that democratisation is largely about market access and that true inclusion must go beyond token representation, urging more gender-balanced participation [300-306]. When asked about Chinese open-source models, Alondra observed that, although she has not worked directly with them, they can be fine-tuned to remove overt ideological bias and are already being leveraged by enterprises [312-318]. Finally, regarding an IP-focused query, Ravneet Kaur clarified that the Competition Commission’s remit is limited to curbing anti-competitive abuse and does not extend to adjudicating intellectual-property rights [322-328].


In closing, Kak thanked the participants and noted the richness of the dialogue, emphasizing the consensus that openness is a socio-technical, democratic practice that must be coupled with transparent competition policy and genuine community participation [389-391]. She also urged future summits to include more regulator and enforcement voices so that AI actors are held accountable to the public [389-391].


Overall, the discussion highlighted that openness must be understood as a socio-technical, democratic practice, that competition policy can serve as a tool for digital sovereignty, and that genuine community participation, across gender, geography and sector, is essential for an AI future that serves the public interest [389-391].


Session transcript: complete transcript of the session
Amba Kak

The AI Now Institute and the AAPTI Institute, we are honored and delighted to be co-hosting this panel at the close of what has been an extremely stimulating, some would say over-stimulating week. What brings AAPTI and AI Now together, despite the many kinds of distance between New York and Bangalore, is our focus on the political economy of AI and our insistence that questions of technology are always questions of power. So we have a formidable panel by every standard, leaders in their field advocating for AI in the public interest, traversing several fields of government service, academia, and journalism, sometimes in the same person, as you will know if you read their bios, which I’m going to skip for reasons of expediency, but I’m going to talk through some of their specific advantages in the conversation.

You know, it always pains me a little bit to even bring it up, but I’m going to do it anyway, which is it is exceptional that this is also the only female-only panel at this symposium. Hopefully that’s not something we have to say a lot or something that we have to wear as a badge of honor, but more something to work on for future iterations. So before we begin, I don’t think he’s in the room, but I want to also thank Amlan Mohanty, who’s been a partner in conceptualizing and helping to bring this panel to light, and to our wonderful summit organizing team, Sanjana Mishra and Iksho Virat, for their tireless efforts. I hope you all get good sleep tonight after a very long week.

Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. So let’s get into it. Okay, let’s get into it. There have been many discussions about openness at this summit. You’ve probably been in at least one of them. For the most part, these discussions have focused on the kind of technical affordances of open source, open-weighted models, open hardware. But what’s clear is that the word open is doing a lot of work in these conversations. It’s a stand-in for many much broader values of democratization, of participation, agency, even sovereignty. So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI.

And I’m going to start with Alondra. Alondra has been the deputy director of the White House Office of Science and Technology Policy under President Biden. And at the time, there was a very heated debate about the geopolitical but also safety implications of open source and what U.S. government policy would be on these issues. And it seems like under this current administration, we’ve landed on a pro-open source overall orientation. But at the same time, it feels as if in many senses, AI governance in the United States is more closed than it has ever been. So I guess I wanted to ask, what do you see as the broader challenges to openness in AI governance today?

Alondra Nelson

Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. So a couple of things. I mean, I would say the Biden administration, I think, took the question of open weight models as a gradient, right? So it was a spectrum. So that open was not a binary. It’s either open or not open. And I think the new administration, the current administration, takes it much more as a binary, that open is a thing that you sort of have achieved and it is now open as opposed to being closed. I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic.

So certainly the questions around open models, AI models, are often around technical things like model weights. Are the model weights shared? Only the model weights shared? Is it also the case that the training data is shared? You know, is the API open to a certain extent or closed to a certain extent? So the technical things are certainly there. But I think if we go back to a sort of broader understanding of openness that comes out of sort of open source software, it was about shifting power. It was about forms of accountability. It was about sort of openness as a kind of practice and openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that could be, you could modify the technology, that you could sort of just use the technology for the sort of purposes of your community or the purposes that you had.

And so that meant that that older, I think, broader definition of open was much more about democracy and transparency and accountability in a way that if you take even, you know, a so-called open source model like Llama 2 or Llama 3, which isn’t really open source, we’re being asked to be content with model weights as open. So I think the, you know, why we want to really push back on that is because, you know, that we are often, I think, using geopolitical stakes as a justification for not doing the socio part of the socio-technical, for not doing the accountability and the transparency and the democracy part because, you know, too dangerous because in the UNESCO context, China, you know, these things just sort of sit in as signs for explanations for, you know, why things can’t be different.

And I think it’s the case that to go, you know, to be reminded of a kind of broader sense of open reminds us that, you know, it’s not this binary and that one can have, you know, there obviously may be places where you don’t want open source. Like, do you want open source, like, nuclear-deployed AI? Like, probably not, right? But the debate gets carried forward as if, like, every open source use or open weight use is that use, as opposed to the sort of gradient of uses that are much safer and moreover are beneficial to communities, to helping people achieve their goals and sort of certainly much better for public transparency and accountability about what these systems do in the world.

Amba Kak

Can I ask a quick follow-up and then I want to move to Anne, which is that the other sort of defining feature of certainly of U.S. government policy today is that it’s happening less through traditional, you know, the traditional forms of regulation that we’re used to and much more through industrial policy, through trade policy, through immigration. But these are also spheres that have been, I would say, relatively even more immunized from public accountability or harder to, you know, harder for the broader public to weigh in on. So just wanted your thoughts on how we…

Alondra Nelson

Yes, I’ve been writing and thinking about this. Thank you for that question. So, you know, we’ve spoken a lot about the new administration, and it gets talked about as being deregulatory in regards to AI and being very light and being, quote, unquote, light touch. And I think if we actually pose that as a question as opposed to accepting it as a statement and actually look at what the current administration in the U.S. is doing around AI, it’s actually taking a quite very heavy hand to sort of steer AI. So you mentioned some of the levers that they’re using: tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration. So, you know, there are, you know, I think companies are getting out of it and around it depending on their relationship to Washington, but we’re told that an H-1B visa for a high-tech worker is $100,000 per worker, right?

And so that’s, you know, 10x, 20x or whatever times a company, that’s quite a lot of money. And also just… The way that science is being funded to the extent that, you know, the federal government plays a large role in driving the sort of research ecosystem for technology. So all of those things are being very heavily shaped in the current administration in the U.S. And so… So it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper-regulatory, I think, in a lot of other ways. And I’ll go back to my keyword of the day, the democracy piece, which is the upside of formal rulemaking, even though it can be clunky, it can take a long time, sometimes the pace is too slow for the pace of the technology, all of those things can be true, is that it has democratic input.

So if you’re doing a rulemaking in the context of the U.S. federal government, there will be a public call, there will be a public notice that you’re doing the rulemaking, there will be a public call for input. So even if you don’t agree with the outcome, there are sort of moments of sort of democratic input. When we are doing AI policy by fiat and through executive authority only, even those limited inputs are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.

Amba Kak

Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve been at the heart of a lot of global coordination on AI governance. And there was a time, I would say, the last 10 years have been characterized by open versus closed as a kind of binary or a way of organizing the world into particular camps when it comes to AI, the democratic open world and the rest of the world. But it’s interesting how much that has, you know, the ground beneath us has shifted in the last few years. And it has been particularly interesting to note at this summit that it is middle powers as a frame that is coming through as a kind of new organizing principle.

So I guess I want to say, I mean, do you see that openness still has value in forging multilateral solidarities, and especially in this brave new world we’re in?

Anne Bouverot

Yes, absolutely. I mean, clearly the geopolitical landscape has really shifted. At the AI Action Summit in Paris, it was exactly a year ago in February. It was just after the inauguration in the U.S. It was the first international trip for Vice President Vance, and what a speech that was, just before Munich, the Munich Security Conference. It was a moment where the U.S. announced at the White House the Stargate project. So it was a very strong and loud message from the U.S. saying, we’re here, we’re investing, we’re the world leaders. And at the summit, J.D. Vance said very clearly, we want all of you to be customers of our technology. And at the same time, this is the moment when DeepSeek emerged on the world map and everybody realized that actually China, using open source, which is why I want to come to that, was really saying we have a seat at the table and we’re actually playing that game.

And China using open source is actually very interesting because open source has a number of benefits and also risks. I don’t think it’s the answer to everything, but clearly it’s a way for challengers to catch up. This is how Android came to the world of smartphones. There’s many examples, and this is what China has taken as a lever to be in that race. But then, what does it mean for other countries than the U.S. and China? It also means that this is a tool that can be used by other countries, which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others, to then just stand on their shoulders and continue to develop technology.

It doesn’t mean that everything should be open source; there are cases where you do want to be careful depending on the use case. But as a way to develop and stimulate competition, it is very powerful. It’s not the only tool. You mentioned middle economies, middle powers. There was this fantastic speech by Mark Carney at Davos, and there was a speech by Macron as well that maybe I’ll conclude with. But this idea that middle economies that have some resources, not the resources to build their own stack top to bottom and to fund frontier-level AI, but together, by building coalitions of the willing, these middle economies can do a lot of things. I believe that Canada, France, Germany, Switzerland, India, Japan, Australia, I can name a few of them.

And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing. So I believe this is really something that can be useful in the evolution of governance.

Amba Kak

That was a fascinating account, and I think what it also highlights is that actually, whether you’re China or the U.S. or the middle powers or France, there’s a level at which everyone, as we discussed, can in some limited way be pro-open source. So do you think then that the differentiation will be at the layer of governance and our approaches to how we govern? How do we govern these technologies?

Anne Bouverot

I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is really being taken as a tool by startups and scale-ups in Europe and in other countries. I mean, by Mistral, by Cohere, by Sakana AI in Japan, by a number. Is that governance? I don’t know. But clearly, governance and countries and institutions have a role to play in saying, how do we shape those coalitions of the willing? How do we put public funding or access to publicly funded compute or access to data sets that countries can help to put together? How do we put that to use and in which ways? So what are the governance tools that we use to strengthen digital sovereignty and resilience?

Amba Kak

Precisely, yeah, that’s sort of what I was getting at. Okay, Astha, I’ll quickly move to you. Middle powers, as we just discussed, is a very broad term, and what it conceals is that there are many different economic and political aspirations of the countries that are bundled in that mix. And especially for countries like India or other countries in the Global South, what are the unique kind of forms of both leverage and dependence in this current environment?

Astha Kapoor

Yeah, thanks so much, Amba. I mean, I think that what we’ve been tussling with over the last few days is that we went from global south to middle powers very quickly in a matter of days, which changes our form a little bit and our aspirations, and I think that that is what we have to grapple with. As global south, our needs are very different: we have structural issues around health, around education that need to be addressed. We also have, you know, things that we need to do in terms of moving the country forward beyond what is just technologically mediated progress. And I think that what we’ve been hearing over the last five days is that things like, well, open data or multilingual data sets is what is going to be that push.

So, you know, our languages will now be online. But then at the same time, we also have to realize that without having openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the Global South doing the labor to bring people online. So openness as a driver of adoption is actually quite a dangerous frame for Global South countries, because it moves attention from where we might need to invest our resources to thinking that the only answer to our historical problems is via adoption. And we’ve also seen that in the absence of governance, India is not new to the openness discourse, right? We have had a history over the last 12 years or 15 years on digital public infrastructure, but we’ve also seen the limits: once adoption occurs and when you have innovation, people with the deepest pockets come to innovate there because this is an enormous market.

So I think, as you mentioned, if we are a middle power, we're definitely on the menu as a market. If we are a Global South country, I think there's value in thinking about what that solidarity is, because you're right, there's no homogeneity. And I think we've missed some of those questions around how we as large markets diversify. We're not here to do the labor to, you know, test-bed models that are built elsewhere. So I think openness as dialogue, as distribution of value, is what we need to think about.

Amba Kak

So many soundbites that I want to clip out of what you just said; that was incredible, thank you. Chairperson Kaur, firstly, thank you so much for being here. I think what Astha said actually leads in well to the question I wanted to ask you, which is: how does one combat this dependence? As the Chair of the Competition Commission of India, you're a regulator that has been somewhat ahead of the curve in looking at anti-competitive trends in this market. So from your perspective, can you say a little about both the key implications of competition in the AI market, and also whether you see competition as a lever in the so-called sovereignty toolkit?

Ravneet Kaur

Thank you, Amba. For us at the Competition Commission of India, we've been looking at a lot of developments happening in the internet economy, and these developments have changed the way businesses work, how consumers interact with markets, and how value is being created. So things are moving very rapidly on the digital front. And as the commission, we have looked at what practices can be anti-competitive. Apart from the benefits which come from a digital economy, and there are numerous benefits in terms of economies of scale, network effects, and the efficiencies flowing from them, there are also risks. And some of these have already been observed by the commission.

So the key ones we found in the case of digital markets are self-preferencing; tying and bundling, which is occurring in numerous cases; leveraging; exclusive agreements where unfair terms are being sought; and parity arrangements being put in place. At the Competition Commission, we have looked at this conduct when it comes to search engines, mobile ecosystems, online intermediation services, whether it is hotel bookings, food ordering, or e-commerce, and social media platforms. So across the entire spectrum, the commission has been looking at it. And very interestingly, we then started looking at AI, and at what the impact of AI could be.

So we did a market study on AI and competition, and the report was released recently, in October 2025. It's available on our website. And we found a lot of similarities in the way AI can function as well. AI can bring a lot of benefits; we are seeing many when it comes to healthcare, education, logistics, supply chain management, and agriculture, and I'm seeing a lot of good things happening on that front. But there are also potential risks: you could see concentration across the entire AI value chain; there could be ecosystem lock-in; and there could be targeted price discrimination against people based on location, economic means, et cetera.

And then exclusive partnerships, and systems being opaque. Those were the things identified in the market study. And as a first step, we thought we needed to make everybody aware, because the important issue is one of access. Who has the access? Whoever does will determine what happens in the future. So it is access to data, access to compute infrastructure, and access even to skill sets: whether we are able to build up the required skill sets within the country to compete effectively. Those issues have brought us to work towards a framework where we ask, across the entire life cycle of the AI system, how can we bring in transparency, and how can we bring in accountability?

Amba Kak

I think that's so important, too, because we focus a lot on big tech control over infrastructure and inputs, which people are familiar with. But I think what you're pointing to is that it's access to the consumer: the pathways to monetization are happening at the distribution layer. So paying close attention to making sure that we have free and open competition in that layer, and that firms can't carry dominance from one market into another, seems really important. My second, maybe more provocative, question was: do you see competition as a tool for global majority countries in particular to retain and exercise sovereignty in the AI age?

Ravneet Kaur

When we look at AI, we are looking at how far we can develop, how much we can do to make the most of the market, and how much we can do to make sure that we are able to deploy and monitor the AI systems that we are putting in place. And that's where the issue comes up: we need the autonomy to deploy these systems as per our economic, strategic, and societal priorities. The very critical thing is how we can ensure that AI does that, and competition is a very important aspect of it. We just can't forget about it, because competition is what is going to ensure that there are no entry barriers; that players who are already there are not using their dominance to foreclose competition, to foreclose the market; and also that consumers are not left locked into a particular system because they can't move their data, and the various benefits they derive from the AI systems, to other applications.

So really, competition is at the heart of it, and I don't see any way we can forget about markets. Markets would need to be contestable, fair, and competitive. And for that, I would like to point to our study, where we have clearly brought out that people who are deploying the technology have to have technical transparency. The stakeholders have to be able to understand what's happening: what this technology or this application is being used for. And then there has to be governance transparency, that is, how you are governing that system. That also needs to be transparent. So once we are able to ensure that the people deploying these systems are looking at all these aspects, and the self-audit is happening, then maybe we will be able to safeguard competition, because at the crux of it all is maintaining competition.


Amba Kak

Thank you so much. Karen, I'm going to move to you. And just from the fact that there was a line of people trying to take a selfie with you before we started, I'm going to assume that many people in the audience are familiar with Karen's incredible book, Empire of AI. Her work has really delved into the global inequities embedded in the global AI supply chain. I want to ask you: your book is full of rich examples, but where do you see open approaches to developing AI posing a challenge, in some ways, to this empire model of AI?

Karen Hao

One example is the BigScience project. It was this project that brought together over a thousand researchers from 70 countries and 250 institutions to try to create an open source large language model that would not only allow many different researchers to interrogate what is actually happening beneath the surface of a large language model, but also completely rethink what it would take to develop these technologies in a fundamentally more beneficial way: where, for example, there are better data governance practices; where you're actually curating and cleaning the data and making it transparent for people; where you can track which data owners are contributing to what aspect of value generation within the model. And this kind of goes back to Alondra's point as well, where you were saying…

that we really need to understand openness with a much broader conception of what openness means. It's not just technical openness. And this project really embodied that: they were working together with lots of different cultural institutions, with libraries and historical institutions, to try to figure out better ways of capturing the rich data those institutions had, but with respect for each institution, and with a way to deliver value back to it, so the value chain wasn't going just to the model creators themselves. Another project that I really loved is one I highlighted in the epilogue of my book, which is the Te Hiku Media speech recognition model. Te Hiku Media is a nonprofit radio station in New Zealand, and they broadcast in te reo Māori, the Māori language, the language of the indigenous peoples of New Zealand.

A couple of years ago, there was this big movement within New Zealand to try to revitalize the Māori language, because it had almost been lost through the process of colonization. And Te Hiku Media thought they had a very unique opportunity with their rich archival audio of te reo Māori: to open it up to the community and help facilitate more language learning. They wanted to make it more accessible than simply allowing people to listen to it, though. They wanted to create an application where you listen to the audio while you see a transcription of it; you can click on the transcription to get an automatic translation; you can figure out how the language actually works.

But they realized they didn't have enough capacity to transcribe all of this, because there simply were not enough proficient te reo Māori speakers. So this was the perfect use case where they could build an AI speech recognition tool to do that work for them. But they went about the project in a totally different way. They made it extremely open and participatory for the community, not just in a technical way but in a social way. They engaged immediately with the community to ask them: do you want this AI tool? And once the community said yes, they ran a public education campaign where they taught everyone what AI is in the first place, what is actually needed (we need a model, we need data), what kind of data is needed, and what data they would need from the community.

And once they actually engaged in that process and developed so much trust with the community, they were able to collect enough data, with full consent, in just a few days to train a speech recognition model. And then they continued to go back to the community and said: now that we have this model, what kinds of applications do you actually want us to develop with it? What kinds of new AI models do you want to develop with it? And all of this was built on another open source project, the Mozilla Foundation's DeepSpeech model, which was similarly developed with that kind of broader definition of openness: a model developed purely with consentful data donations.

And so the entire stack was built in a spirit of collaboration, with participation from everyone in the community, with an equal exchange of value where the people who give the data have a vote, have a say, in how the model ultimately supports their journey in language learning. So I always hold both of those examples in my head when I'm thinking about what visions of AI we actually want to support, and what visions of open source AI we actually want to support.

Amba Kak

So as you were speaking, I was just thinking: apart from being open and participatory in all the ways you said, these examples also provide a contrast to the idea that there is one model to rule them all, this sort of single-bet-on-a-single-technology, large language model type of approach. But similarly, one of the common retorts to these experiments, in some sense, is that we can't do that at scale. So I'm curious: what do you see as the tension between these kinds of governance structures and scale, and is there a trade-off?

Karen Hao

So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that's not scale; that's a monopoly. What we would really want from scale is different communities all around the world, different industries, different companies, each developing models by and for themselves, at scale. That's, to me, a much more appropriate way of thinking about scale. And in fact, what's so interesting is that because of the data imperative and the compute imperative for large language models, as they're currently being trained by the main companies, they're not going to be able to do that.

There isn't a good ability to diffuse this technology across many different industries or many different communities. Most industries are data-poor industries. They're not like the internet industries; they don't sit on vast amounts of data. And so if we actually want to diffuse AI to more people around the world, and for more use cases around the world, we in fact need to think of scale from a small-AI perspective, a community-driven perspective, an application-specific perspective, and that's how we're going to get scale.

Amba Kak

Okay, we've heard a range of rich perspectives, and I'm going to take it as a good sign that all our panelists seem to be actively taking notes and engaging with what the others were saying. So I was going to propose, as a sort of round two, that I ask, based on the conversation we've just had: Alondra, what is something that's sticking with you, or that you're working through, in response?

Alondra Nelson

Yeah, I think community. So Karen cued that up for me, and the note I was just writing here was about that: how the stack we are building now is explicitly closed to community. And I was thinking in particular about the data center and cloud layer. In the U.S. context, there's a lot of contestation, growing contestation, in communities about data centers. What folks might not know is that part of the contestation is because elected officials are asked to sign NDAs, and contracts are being signed to stand up data centers in the dark of night, and communities don't even know. So the lack of openness around the infrastructure, that infrastructural piece of the AI stack, is actually quite profound.

And then I was thinking the opposite. So, my reflection on the time here, which I'm still going to be processing for quite a long time: it's my first time in New Delhi, my first time in India, and it's been an incredible experience. But I've been to a lot of AI conferences, you know, like NeurIPS and everything, professional ones, non-professional ones. A lot. And this is the first one I've ever been to that has included the community in any considerable way. I mean, I think it's a revolutionary thing. And if we're really serious about having democracy and community and voice, AI conferences need to look much more like this one than like the ones we spend a lot of our time going to.

So, you know, who knows what will be the outcome of this week together. But it has been extraordinary and distinctive in its inclusion of lots of, you know, uncles, aunties, college students, and lots in between.

Amba Kak

Astha, closing reflections.

Astha Kapoor

Yeah. First of all, thank you for that reframe. As somebody who was here on the 16th, I was feeling so overwhelmed, and my instinct was: there are too many people. But I do appreciate that reframe, the fact that this is the community that is going to build and question and do the work that we all keep talking about. And from that, my word is also community, but also friction: how do we enable some of that, both the coalescing, but also the dialogue, the questions, the where-is-the-value-for-me part of it. An example was presented yesterday on the Amul co-op; we've been doing a lot of work with cooperatives, which to me is a nice space because of the governance question of one member, one vote, and because you can pool things.

So how do they become not just recipients but co-designers in some of the things that we've heard over the last few days. So, yeah.

Amba Kak

Just closing reflections, and maybe even just a takeaway that you're sitting with after this week.

Ravneet Kaur

Yeah, sure. So for me, the very important thing which came out of this AI Impact Summit is that governments need to be very active about how they are ensuring that the deployment of AI is happening. And on that front, I am very happy with the way we are going: we did a great job when it came to digital identity and digital payments, and now we are looking at digital public infrastructure, at how we're going to be able to provide compute platforms for startups and for people who don't have the resources, make data available, and at the focus on small language models. Everything doesn't need to be large, especially when we look at things which are very language-specific, very related to our country and to our solutions.

So that's one of the key takeaways that I have. And the other, of course, is that all of us at the Competition Commission will now be going back with this: one needs to be very alert as to what kind of systems are being put in place, and whether they are flexible. Is there transparency? Is there accountability? Those are the key things, because at the end of the day it is trust. If you can build up trust, if your systems are not opaque, then you will be able to get people on board onto your applications and your systems, and that's where success lies; that's where value is.

Amba Kak

I'll say, ma'am, that one of my key takeaways, and hopefully someone from the Swiss government is listening for next year, is that we also need to hear many more voices from the enforcers: those who are going to make sure that the players in this space are accountable to the public and not above the law. So I'm very grateful that you're here, and I hope that future summits see more enforcers at the table. Okay, Karen, you get the last word, and then I'm going to open up for questions, so start thinking of yours.

Karen Hao

I think my biggest reflection from the summit, which I also shared at an event last night, is that it's so interesting to observe corporate speak in these spaces. And the thing that struck me the most about this summit is that this corporate speak has gotten very sophisticated: it has adopted the language of inclusion, diversity, and empowering marginalized communities to talk about ultimately selling their technology, and making sure that you buy into helping them lock in their closed platforms. And I hope that, because we have more community engagement and more openness in a lot of the discussions happening alongside this very sophisticated corporate speak, all of you will take away from the summit this broader idea of what it really means to build a future where AI can empower people.

It does not actually mean the democracy that the companies offer us. It in fact means that we should all be thinking very deeply about what problems we really need to solve, as individuals, within our families, our communities, our companies, our contexts; then whether or not AI is even the right solution for those problems; and then how to design and develop, from the ground up, AI solutions that truly are empowering and enabling and help tackle those problems and bring everyone along together.

Amba Kak

That was, yeah, what a great note to end on. And honestly, a note of optimism and a note to build towards the futures we want to see. Okay, so does anyone have any questions? Okay, I saw you first. Go ahead.

Audience member 1

Hi, everyone. And, yeah, I was one of the people in line looking for the signature on the book. So I've read it, Karen; it's a reference book. And my question is addressed to you. All of this makes sense, but it makes sense in a more macro way. From a micro perspective, where an individual is exposed to AI at their workplace and is expected to use it, and there's no getting away from it: how do we reconcile the fact that there is probably a whole lot of exploitation behind the models we're using, but at the same time you can't not use it, because it's just there, every day?

I don't use it. Yeah. So I'd like to know a little bit more about that. How?

Karen Hao

No, I actually think it's totally possible to not use these tools. But also, I would say that oftentimes our conversations around adopting AI are posed as a binary: either you go completely all in, or you go none at all. And there are actually a million possibilities in between. There are so many different ways you could refrain from using AI in certain contexts, while maybe there are other ways in which it helps you; being more intentional about what kinds of AI tools you adopt, and from which kinds of companies. We've been talking a lot about openness, so maybe you choose to use more open AI technologies rather than closed ones. One of the things I feel is missing right now within the AI ecosystem, and that makes the burden very high on consumers, is that we don't really have third-party organizations doing analysis to create clear and easy labels, so consumers can determine what values, and what degree of resources, are being used to develop different types of AI models, and actually make informed decisions. But we have lots of precedent for this happening in other industries, like the fashion supply chain, food, and coffee. So I hope that someone out there listening will start working on this: develop some kind of third-party labeling system so that consumers can actually start making more informed choices.

The other thing I would say is that individuals are not just consumers; that's not the only way individuals can push against the inevitability narratives of AI. We've seen amazing protests break out all around the world to push against data centers. We've seen protests from parents who feel their children are being harmed and that this rapid escalation of AI advancement is getting out of control. We've seen artists and writers use the tools of litigation to counter these companies when they infringe on their intellectual property in ways they don't stand for. There are many different ways: AI is everywhere in your life, and that also means you as an individual, and within your community, have a thousand different touch points for how you can interact with the AI supply chain. At each of those touch points you can choose whether to resist, or adopt, or be neutral. So, yeah, I hope that people actually feel significantly more agency than I think people generally feel today.

Amba Kak

Thank you. Okay, I think we should do a couple of questions. So you, you, and you. Okay, let’s go in that order. So we’ll take those three questions and then…

Audience member 2

Hello, thank you so much. This was, I think, my favorite panel of the whole summit, and also an all-female panel, which I think is nice. My question is also connected to a reflection. I feel like in this space, I've realized, there are not nearly as many women as men. And, as you said, this is the only all-female panel. We're here with a group of 15 people from Germany, and half of us are male and half of us are female, but often just our male counterparts get addressed, with somebody speaking only to them, for example when it comes to pitching a business idea or asking for money.

But I've also noticed other things. The theme is AI all-inclusive, right? But I'm wondering: who does this include, in this specific context? From this summit, who do you think is included in this vision of all-inclusive? And also, I've realized, I don't know if anybody else has, that China is quite an important power in the AI governance space, but the number of Chinese people I've seen here is very low. It's just something I noticed, so it's still just a reflection, and I wonder how you see this: what does this notion of all-inclusive mean for you, or how have you perceived it here?

Amba Kak

Thank you; those were many important and provocative questions you just asked.

Audience member 3

I was curious, kind of as a follow-up to our colleague here, about your views on the open source Chinese models, which are clearly the most intelligent in the open source space but also clearly have a deep CCP perspective. So I'm curious: how does that come together in this ecosystem, and how can we leverage it appropriately?

Audience member 4

Hello. Thank you, panel, for the wonderful discussion. I'm an intellectual property and business lawyer, so my question is related to intellectual property, and is specifically for Ravneet. I wanted to know how you see the openness of AI in the context of intellectual property, since openness and intellectual property somewhere sit in tension with each other.

Amba Kak

Why don’t we start with that question?

Ravneet Kaur

Okay, sure. So when you look at intellectual property, there's a lot of research, development, and innovation which has gone into the development of that technology, and there are copyrights and patent acts protecting it. When it comes to the Competition Commission, we come into the picture only if we find that there is an abuse: wherever the innovation that has been done is being used to ensure that no other players can come into the same market, or is being used to enforce conditions which are unfair. That is the only space where we come in.

Otherwise, the purpose of the commission is not to stifle innovation. We are, in fact, there to protect innovation, because that's the way to grow. That's the way markets will grow further, competition will increase, and new players will keep coming in, with better technologies and better value for the customer. So consumer welfare is one of the very critical things we look at, and that's how we address these issues.

Amba Kak

I wonder if, Astha, you can speak to the gender question and that broader question on inclusion.

Astha Kapoor

Yeah. Thank you so much for that question. I think it's what we've all been feeling as well. Based on what I have understood, in a very early, overwhelmed sense, inclusion, as Karen was saying, is being used as another word for adoption, and I think that is the primary framing I'm taking away from this.

Democratization here is about market access; the working group also says so. And I think the gender perspective will follow what we've seen in previous iterations of the tech-will-save-us, digital financial inclusion variety, which is: get people online. And then what ends up happening is that when you realize you're not able to make money off, like, you know, the bottom 80%, you start to get drop-offs there. So it is at that moment of the hype cycle of getting everybody online, and then whether we're able…

Amba Kak

I don’t know, maybe you could take the question on Chinese open source AI and how we feel about it.

Alondra Nelson

I'll try. I mean, one thing I would say: there's been some news reporting about the fact that this week took place during the Lunar New Year, and that probably had some impact on participation, and Ramadan as well. That shouldn't be lost on any of us for this question of inclusion. I haven't worked with the Chinese models, so I don't know, but if they're open source models, you should be able to tune them so that they don't have, at least, as much of that kind of CCP ideological control. I don't know whether you do that in the training data, or at the inference level, or where you do it. And there are a lot of companies building on the Chinese models, even in the enterprise space, so that is clearly not a hurdle to some of the enterprise uses and applications that people want to build on them.

Amba Kak

I think we can take two more questions. Okay, so your hand, and I just want to take someone from the middle. You can go. Okay. The alarm just went off, so if you could also make sure it's a crisp question, that would allow there to also be answers. Yeah.

Audience member 5

So I am really interested in how AI is going to impact labor. And one of the biggest concerns in this area is the fact that AI can train on the intellectual labor of so many people without giving credit and without giving compensation. There are obviously regulatory approaches to this, but I'm more interested in the new research that's happening on protecting publicly available data, be it images, websites, or written content, in a way that, if that data is used directly by AI, it's either useless to it or harmful to it. I think there's some research happening around that at the University of Chicago and some other places. So my question here is twofold.

First, is this a good approach to protecting intellectual property or data, by creating protection by design? And two, how does it square with the idea of openness? Because on the one hand, it's…

Amba Kak

Thank you for the question. I just want to make sure we have time for the others; they're going to kick us out of this room. That's the final question, and then maybe, Karen, you can address the labour question.

Audience member 6

Hi, I wanted to ask about open-washing. We've been hearing the term in previous discussions about openness and competition. And I wanted to ask, in terms of enforcement: how should competition authorities assess whether this openness is genuinely lowering entry barriers, or whether underlying dependencies still exist? Do we need new analytical tools? Does there need to be a reworking of competition frameworks? That's essentially the question I wanted to ask. Thank you.

Amba Kak

Karen, and then Chairperson Kaur, you will have the last word.

Karen Hao

Sorry, can you remind me of the very last part of your question? You were talking about… The labour one. Yes. I agree with everything that you said, basically: yes, this is a huge problem. Labor exploitation is absolutely happening, both in the exploitation of the labor being used to produce the data, and in the exploitation of the data workers who are cleaning the data. And given that labor exploitation is happening all through the supply chain, I think that shows it is somewhat inherent in the logic of how these models are being created, and we need to fundamentally rethink that from the ground up.

Ravneet Kaur

So when we do a competition assessment, we are looking at numerous economic factors; it is not based only on what has been submitted to us. A very detailed analysis is done to understand whether there is any competition harm. The other aspect we look into is the effects: is there an appreciable adverse effect? We have to establish both, and this is done on a case-to-case basis after a very rigorous analysis of both the data available in the public domain and the analysis done by our internal teams. Only then are we able to determine whether there is harm to competition.

Amba Kak

Okay, thank you all so much for being here. This is such a rich conversation and thank you all for being part of it. Thank you.

Related Resources: Knowledge base sources related to the discussion topics (32)
Factual Notes: Claims verified against the Diplo knowledge base (5)
Additional Context (medium confidence)

“The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate.”

The knowledge base confirms that the AI Now Institute was a convening organization for a panel on openness, but does not mention the AAPTI Institute, so the AI Now involvement is corroborated while the joint role of AAPTI is not documented.

Confirmed (high confidence)

“A central theme introduced early on was the contested meaning of “openness”.”

The panel discussion explicitly examined the concept of openness in AI, as recorded in the knowledge base entry on the Global Perspectives on Openness and Trust in AI panel [S1] and the European AI Governance Strategy discussion that foregrounded openness [S114].

Additional Context (low confidence)

“The panel was the only all‑female panel at the summit, highlighting gender‑balanced representation needs.”

While the report’s claim about an all-female panel is not directly confirmed, the knowledge base contains entries discussing gender parity and the importance of diverse representation in AI forums [S108] and broader gender-equality initiatives [S40], providing contextual background.

Additional Context (medium confidence)

“Open‑source can serve as a competitive instrument for Europe and other middle‑power nations, enabling “coalitions of the willing” to build digital sovereignty without developing an entire stack from scratch.”

The knowledge base links openness to digital sovereignty and strategic positioning, noting that open-source tools are discussed as means for nations to achieve technological autonomy and collaborative coalitions [S114] and in broader digital sovereignty debates [S117].

Confirmed (high confidence)

“Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies.”

Astha (Aastha) Kapoor’s participation in a Global South-focused session on digital governance is recorded in the knowledge base, confirming her role as a speaker representing Global South concerns [S9].

External Sources (117)
S1
Global Perspectives on Openness and Trust in AI — -Ravneet Kaur- Chairperson of the Competition Commission of India
S2
Capacity Building in Digital Health — -Dr. Sarvjeet Kaur: Secretary of the Indian Nursing Council, represents 2.2 million nurses, regulatory role in nursing e…
S4
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S5
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao- Amba Kak – Ravneet Kaur- Amba Kak
S6
Less experienced, low-income users prefer an open, unlimited internet, a recent study reports — A recent Master's thesis at Oxford University contributes to a heated debate about the pros and cons of the zero-rated…
S7
Dare to Share: Rebuilding Trust Through Data Stewardship | IGF 2023 Town Hall #91 — Astha Kapoor:Yeah. Thank you for this and thank you for the audience too for coming at this very early hour. I guess to …
S8
Global Perspectives on Openness and Trust in AI — These key comments fundamentally transformed what could have been a technical discussion about open-source AI into a sop…
S9
Global South Solidarities for Global Digital Governance | IGF 2023 Networking Session #110 — Astha Kapoor, Aapti Institute, Civil Society, Asia-Pacific Group
S10
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 1- Founder of Corral Inc -Audience member 6- Role/title not mentioned
S11
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S12
Building the Workforce_ AI for Viksit Bharat 2047 — -Audience- Role/Title: Professor Charu from Indian Institute of Public Administration (one identified audience member), …
S13
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Audience member 3
S14
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 3- Student -Audience member 6- Role/title not mentioned
S15
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Absolutely. Audience member 3: Namaste sir. I am a student. So my question is that what should be the effective strateg…
S16
AI Transformation in Practice_ Insights from India’s Consulting Leaders — -Audience member 4- Geeta, from GCC (Global Capability Center) background -Audience member 6- Role/title not mentioned
S17
Global Perspectives on Openness and Trust in AI — -Audience member 4- Intellectual property and business lawyer
S18
https://dig.watch/event/india-ai-impact-summit-2026/ai-transformation-in-practice_-insights-from-indias-consulting-leaders — Sorry, we have a lot of people who’ve raised their hands. I think we can just probably take a couple of questions. I thi…
S20
Harnessing Collective AI for India’s Social and Economic Development — – Professor Manjunath- Audience Member 5
S21
Global Perspectives on Openness and Trust in AI — – Karen Hao- Audience member 1- Audience member 5
S22
AI Safety at the Global Level Insights from Digital Ministers Of — -Alondra Nelson: Professor who holds the Harold F. Linder Chair and leads science, technology, and social values lab at …
S23
A Digital Future for All (afternoon sessions) — – Alondra Nelson – Harold F. Linder Professor, Institute for Advanced Study Alondra Nelson: I do. I do. I mean, I thin…
S24
Global Perspectives on Openness and Trust in AI — -Alondra Nelson- Former deputy director of the White House Office of Science and Technology under President Biden
S25
Digital Technologies and the Environment: a Synergy for the Future — 17. Sengupta, Rajid, 2021. World needs to rethink internet use post-COVID-19 . Retrieved 30 November 2021 from: https://…
S26
Global Perspectives on Openness and Trust in AI — – Alondra Nelson- Karen Hao – Ravneet Kaur- Karen Hao
S27
Building Trusted AI at Scale – Keynote Anne Bouverot — -Anne Bouverot: Special Envoy for Artificial Intelligence, France; Diplomat and technologist; Former Director General of…
S28
How to make AI governance fit for purpose? — – Anne Bouverot- Chuen Hong Lew – Jennifer Bachus- Anne Bouverot
S29
Global Perspectives on Openness and Trust in AI — -Audience member 2- Part of a group from Germany
S30
Day 0 Event #82 Inclusive multistakeholderism: tackling Internet shutdowns — – Nikki Muscati: Audience member who asked questions (role/affiliation not specified)
S31
The Arc of Progress in the 21st Century / DAVOS 2025 — – Paula Escobar Chavez: Audience member asking a question (specific role/title not mentioned)
S32
Study highlights inaccuracy of AI chatbots in providing election information — A recent study by the AI Democracy Projects, a collaboration between Proof News and the Science, Technology and Social Value…
S33
WS #466 AI at a Crossroads Between Sovereignty and Sustainability — – Pedro Ivo Ferraz da Silva Environmental Impact and Climate Justice Valdivia criticizes the lack of democratic partic…
S34
Main Session on Artificial Intelligence | IGF 2023 — Canales Lobel also highlights the significance of effective global processes in AI governance, advocating for seamless c…
S35
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — I think there’s some research happening in University of Chicago around that and some other places. So my question here …
S36
Principles for governing the Internet — – The pillar on ‘human rights’ is clearly related to the objective of freedom of expression (information, communication…
S37
From Technical Safety to Societal Impact Rethinking AI Governanc — “And I do think that the political level, while we need technical inputs, the only force in the world”[93]. “How can you…
S38
Leaders TalkX: When policy meets progress: paving the way for a fit for future digital world — Dr. Aminu Maida from Nigeria described their shift from traditional command-and-control regulation to data-driven approa…
S39
Taming Competition in Low and High Orbit — Similarly, national competition in the space sector is seen as a positive factor, fostering security collaboration and s…
S40
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — In addition to research partnerships, the chapter demonstrates a firm stance on gender equality by planning to offer sch…
S41
Internet standards and human rights | IGF 2023 WS #460 — In conclusion, the lack of diversity in internet standards bodies, such as the IETF, is a significant concern. The under…
S42
Opening remarks — Rodrigo de la Parra:Thank you, Professor Glaser, thank you for the invitation. Your Excellency, Minister Luciana Santos,…
S43
From summer disillusionment to autumn clarity: Ten lessons for AI — We must approach this with a clear understanding. Trading our knowledge for AI services is not inherently bad – in fact,…
S44
Global AI Policy Framework: International Cooperation and Historical Perspectives — This comment provided a conceptual resolution to many tensions discussed throughout the panel. It offered a concrete pol…
S45
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S46
WS #19 Satellites, Data, Action: Transforming Tomorrow with Digital — There is competition in the LEO satellite market between private companies and government-backed initiatives. This compe…
S47
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — By engaging users and technical communities, policymakers can gain valuable insights and perspectives, ultimately leadin…
S48
Digital Policy Perspectives — Sulyna Abdullah:Thank you, Leona. First of all, I’d like to apologize for the misrepresentation of my photograph on scre…
S49
WS #323 New Data Governance Models for African Nlp Ecosystems — Melissa Omino: Thanks, Mark. I think that in order to have real equity, we need, we are required to think about communit…
S50
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — This data collection occurred without clear information or consent from the individuals, leading to ethical concerns, es…
S51
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — **Prevention measures:** Audience responses supported proactive approaches including impact assessments, community invol…
S52
Open Forum #37 Her Data,Her Policies:Towards a Gender Inclusive Data Future — Challenges in Policy Implementation There is a need for transparency and collaboration in communicating policies to the…
S53
Global Perspectives on Openness and Trust in AI — “Hi, I wanted to ask about open washing”[48]. “Do we need new analytical tools?”[39]. “how should competition authoritie…
S54
Digital Ecosystems and Competition Law: Ecological Approach (HSE University) — Competition authorities have recognised the need for ecosystem-level assessments. Competition authorities in developing…
S55
https://dig.watch/event/india-ai-impact-summit-2026/global-perspectives-on-openness-and-trust-in-ai — Thank you for the question. I just want to make sure we have time for the others. They’re going to kick us out of this r…
S56
EU Report calls for new antitrust rules for tech giants — The European Commission has published a report titled ‘Competition Policy for the Digital Era’examining the EU antitrust…
S57
Interdisciplinary approaches — AI-related issues are being discussed in various international spaces. In addition to the EU, OECD, and UNESCO, organisa…
S58
Artificial Intelligence & Emerging Tech — In conclusion, the meeting underscored the importance of AI in societal development and how it can address various chall…
S59
WS #288 An AI Policy Research Roadmap for Evidence-Based AI Policy — Virginia Dignam: Thank you very much, Isadora. No pressure, I see. You want me to say all kinds of things. I hope that i…
S60
360° on AI Regulations — In conclusion, the analysis reveals that AI regulation is guided by existing laws, and there is a complementary nature b…
S61
Responsible AI for Shared Prosperity — The balance between open-source development and community sovereignty presents ongoing challenges. While open-source app…
S62
How Small AI Solutions Are Creating Big Social Change — African languages. And we just released a data set of 21 now, 27 voice languages, given that Africa has 2 ,000 or so lan…
S63
Driving Social Good with AI_ Evaluation and Open Source at Scale — The conversation then shifted to the growing problem of AI-generated code submissions to open source projects. Sanket Ve…
S64
AI that serves communities, not the other way round — At theWSIS+20 High-Level Eventin Geneva, a vivid discussion unfolded around how countries in the Global South can build …
S65
Host Country Open Stage — Collaborative approaches are essential for addressing complex societal challenges in small populations Nordhaug argues …
S66
United Nations Office for Digital and Emerging Technologies — In hisRoadmap for Digital Cooperation,the UN Secretary-General recognised the critical role of open source solutions in …
S67
Connecting open code with policymakers to development | IGF 2023 WS #500 — Henri Verdier:Thank you for your very precise and important questions. First, as you said, most people of power has quit…
S68
WS #208 Democratising Access to AI with Open Source LLMs — Abraham Fifi Selby: All right, thank you very much for the session, and I’m very happy to join this panel. I’m from th…
S69
Searching for Standards: The Global Competition to Govern AI | IGF 2023 — In conclusion, the UNESCO recommendation on AI ethics provides crucial guidance for global AI governance. By grounding A…
S70
Discussion Report: Sovereign AI in Defence and National Security — Policy and Regulatory Considerations Regulatory frameworks can be adapted to different national contexts The moderator…
S71
Building Indias Digital and Industrial Future with AI — Deepak Maheshwari from the Centre for Social and Economic Progress provided historical context, tracing India’s digital …
S72
Impact & the Role of AI How Artificial Intelligence Is Changing Everything — That is why we must frame this not simply as technology policy, but as democratic governance. The choices made today abo…
S73
Global Perspectives on Openness and Trust in AI — Alondra Nelson, former deputy director of the White House Office of Science and Technology Policy, provided the panel’s …
S75
Democratizing AI: Open foundations and shared resources for global impact — This comment elevated the technical sophistication of the discussion and established credibility for Switzerland’s democ…
S76
How to make AI governance fit for purpose? — Anne Bouverot described Europe’s evolution from regulation-focused approaches toward innovation and practical outcomes. …
S77
Regulating Artificial Intelligence: U.S. and International Approaches and Considerations for Congress — During the Biden Administration, E.O. 14110 directed over 50 federal agencies to engage in more than 100 specific action…
S78
Competition law and regulations for digital markets: What are the best policy options for developing countries? (UNCTAD) — Competition policy and advocacy play an important role, especially in developing countries, where competition authoritie…
S79
Partnering on American AI Exports Powering the Future India AI Impact Summit 2026 — This comment demonstrates sophisticated understanding that ‘AI sovereignty’ isn’t a monolithic concept but represents di…
S80
Leveraging AI to Support Gender Inclusivity | IGF 2023 WS #235 — Lucia Russo:OK. Well, thank you. So I’ve never done an analysis of all of the principles that exist, so I don’t know to …
S81
Charting New Horizons: Gender Equality in Supply Chains – Challenges and Opportunities — Nevertheless, collaboration with UN Women has amplified the registration of women-owned vendors, driving the figures fro…
S82
International Cooperation for AI & Digital Governance | IGF 2023 Networking Session #109 — Atsushi Yamanaka:Well, thank you so much, actually, it’s a very, very interesting questions. And then I have a few, actu…
S83
Workshop 1: AI & non-discrimination in digital spaces: from prevention to redress — **Prevention measures:** Audience responses supported proactive approaches including impact assessments, community invol…
S84
WS #323 New Data Governance Models for African Nlp Ecosystems — Melissa Omino: Thanks, Mark. I think that in order to have real equity, we need, we are required to think about communit…
S85
Rethinking Africa’s digital trade: Entrepreneurship, innovation, & value creation in the age of Generative AI (depHub) — This data collection occurred without clear information or consent from the individuals, leading to ethical concerns, es…
S86
Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Di — Participants stressed the importance of involving women, girls, persons with disabilities, and other marginalized groups…
S87
Toward Collective Action_ Roundtable on Safe & Trusted AI — Professor Jonathan Shock warned against the “Silicon Valley approach of move fast and break things” when dealing with go…
S88
Opening of the session — The tone began very positively and constructively, with the Chair commending delegations for focused, specific intervent…
S89
How Multilingual AI Bridges the Gap to Inclusive Access — The tone was consistently collaborative, optimistic, and mission-driven throughout the conversation. Speakers demonstrat…
S90
Summit Opening Session — The tone throughout is consistently formal, diplomatic, and collaborative. Speakers maintain an optimistic and forward-l…
S91
Building the Future STPI Global Partnerships & Startup Felicitation 2026 — The tone was consistently optimistic, collaborative, and forward-looking throughout the session. It maintained a formal …
S92
Opening address of the co-chairs of the AI Governance Dialogue — The tone is consistently formal, diplomatic, and optimistic throughout. It maintains a ceremonial quality appropriate fo…
S93
Open Forum #75 Shaping Global AI Governance Through Multistakeholder Action — Ernst Noorman: Thank you very much, Zach, and thank you, Rasmus, for your words. While leaders at this moment gather in …
S94
Open Forum #78 Shaping the Future with Multistakeholder Foresight — 2. **Complete systemic collapse** – Featuring internet fragmentation and breakdown of current governance structures Anr…
S95
WS #193 Cybersecurity Odyssey Securing Digital Sovereignty Trust — The discussion maintained a consistently collaborative and constructive tone throughout. Speakers demonstrated mutual re…
S96
Workshop 2: The Interplay Between Digital Sovereignty and Development — Sofie Schönborn: the context for our interactive discussion. Thank you. Thank you so very much. It’s a pleasure to be he…
S97
Digital Cooperation and Empowerment: Insights and Best Practices for Strengthening Multistakeholder and Inclusive Participation — Tripti Sinha: Oh, thank you, Theresa. Thank you, Theresa. As you just said, I am very familiar with ICANN. So I’m gonna …
S98
BOOK LAUNCH: The law and politics of Global Competition — Competition laws are shaped by the unique history, culture, and values of each jurisdiction, which means that rules and …
S99
IN CONVERSATION WITH MITCHELL BAKER — Mozilla’s emphasis on open source technology and community building is another noteworthy aspect. They believe that open…
S100
Webinar session — The discussion maintained a diplomatic and constructive tone throughout, with participants demonstrating nuanced thinkin…
S101
Strengthening Corporate Accountability on Inclusive, Trustworthy, and Rights-based Approach to Ethical Digital Transformation — The discussion maintained a professional, collaborative tone throughout, with speakers demonstrating expertise while ack…
S102
What policy levers can bridge the AI divide? — The discussion maintained a collaborative and optimistic tone throughout, with participants sharing experiences construc…
S103
Opening & Plenary segment: Summit of the Future – General Assembly, 3rd plenary meeting, 79th session — While the summit was seen as a step towards revitalizing multilateralism, some speakers noted the challenges in translat…
S104
Safeguarding Children with Responsible AI — The discussion maintained a tone of “measured optimism” throughout. It began with urgency and concern (particularly in B…
S105
AI Infrastructure and Future Development: A Panel Discussion — -Cost Reduction and Efficiency Breakthroughs: The discussion addressed dramatic cost reductions in AI (from $33 to $0.09…
S106
ICF 2023: Digital Commons for Digital Sovereignty | IGF 2023 Day 0 Event #82 — Audience:know who you are, and then we can proceed. Yes, definitely. I am Alexandre Costa Barboza. I’m a fellow at the W…
S107
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Sidharth Madaan — High level of consensus with complementary rather than conflicting perspectives. The agreement spans technical experts, …
S108
Towards Parity in Power / DAVOS 2025 — The discussion also addressed the need for diversity within gender representation, acknowledging intersecting identities…
S109
The WSIS welcome Part I: Meet the Movers Behind It — Noteworthy observations from the session included an acknowledgment of the gender imbalance on the panel, which was reco…
S110
https://dig.watch/event/india-ai-impact-summit-2026/press-briefing-by-hmit-ashwani-vaishnav-on-ai-impact-summit-2026-l-day-5 — I would also like to thank all the team members. All the stakeholders, right from media, from the organizers, from ITPO,…
S111
Building Scalable AI Through Global South Partnerships — He reflects that dealing with traffic and other logistical issues during the summit taught the team patience.
S112
Building Trusted AI at Scale Cities Startups & Digital Sovereignty – Panel Discussion Moderator Amitabh Kant NITI — Moderator: With a big round of applause, kindly welcome the panelists of this last panel of AI Impact S…
S113
Panel Discussion AI & Cybersecurity _ India AI Impact Summit — And I want to acknowledge the countries that came forward to really put this initiative together, starting first, of cou…
S114
Panel Discussion: Europe’s AI Governance Strategy in the Face of Global Competition — This response elevated the discussion from a binary choice between ‘might vs. values’ to a more nuanced exploration of h…
S115
© 2019, United Nations — In Africa, only some hubs have become ‘buzzing’ places, brimming with entrepreneurial activity (e.g. BongoHive in Zambia…
S116
To share or not to share: the dilemma of open source vs. proprietary Large Language Models — Despite these multifaceted benefits, there remains a discernible concern regarding the underappreciation of open source …
S117
Policy Network on Internet Fragmentation (PNIF) — Marilia Maciel: Thank you Bruna. I can take a couple of questions. Let me just say a few words about digital sovereignty…
Speakers Analysis
Detailed breakdown of each speaker’s arguments and positions
Alondra Nelson
4 arguments · 175 words per minute · 1527 words · 520 seconds
Argument 1
Openness as socio‑technical, non‑binary (Alondra Nelson)
EXPLANATION
Nelson argues that openness should be understood as a spectrum rather than a simple yes‑or‑no condition, and that it encompasses socio‑technical dimensions such as accountability, democracy and shared infrastructure, not merely the release of model weights or code.
EVIDENCE
She notes that the Biden administration originally treated openness as a gradient, but the current administration tends to view it as a binary state, prompting a return to the broader, socio-technical conception of openness rooted in the open-source movement, which includes shifting power, accountability, and shared resources for communities [30-33][40-43].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Nelson’s view of openness as a spectrum with socio-technical dimensions is articulated in [S1].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Amba Kak, Karen Hao
Argument 2
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
EXPLANATION
Nelson points out that U.S. AI governance increasingly uses policy tools such as tariffs, export controls, trade policy and costly immigration visas instead of traditional regulatory rulemaking, thereby reducing opportunities for democratic input.
EVIDENCE
She references the use of tariffs, trade policy, export controls on semiconductors, and the high cost of H-1B visas for high-tech workers as examples of the levers the administration employs, and observes that this approach is “hyper-regulatory” but lacks the public participation inherent in formal rulemaking processes [57-60][55-60].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She notes the shift to industrial, trade and immigration levers over formal rulemaking in [S1].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Amba Kak
Argument 3
Opacity of data‑center siting undermines community participation and democratic oversight
EXPLANATION
Nelson points out that the physical infrastructure of AI, such as data centres, is often built without informing or involving the local communities, which contradicts the broader notion of openness that includes democratic accountability.
EVIDENCE
She explains that elected officials are asked to sign NDAs and contracts for data-centre construction are signed covertly at night, leaving communities unaware of the installations and their impacts [224-226].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The lack of democratic participation in AI infrastructure siting is highlighted in [S33].
MAJOR DISCUSSION POINT
Openness and democratic governance of AI infrastructure
Argument 4
AI conferences should prioritize community inclusion to foster democratic legitimacy
EXPLANATION
Nelson argues that AI events need to move beyond traditional professional gatherings and actively involve a diverse community of participants to ensure that AI development reflects democratic values.
EVIDENCE
She reflects that this summit was the first she attended that included a broad community of students, aunties, and other non-expert participants, describing it as a revolutionary and distinctive experience that could set a new standard for future conferences [232-236].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for active stakeholder involvement alongside openness is discussed in [S41].
MAJOR DISCUSSION POINT
Community participation in AI discourse
Amba Kak
4 arguments · 131 words per minute · 1825 words · 833 seconds
Argument 1
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
EXPLANATION
Kak describes the term “open” as a shorthand for broader values such as democratization, citizen participation, agency and even national sovereignty in the AI context.
EVIDENCE
She explicitly states that the word open is a stand-in for many broader values of democratization, participation, agency, and sovereignty [16-17].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Kak’s framing aligns with Nelson’s spectrum view of openness encompassing democratization and sovereignty [S1].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Alondra Nelson, Karen Hao
Argument 2
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
EXPLANATION
Kak observes that U.S. AI policy is moving away from conventional regulatory mechanisms toward industrial, trade and immigration policies, which are less transparent and harder for the public to influence.
EVIDENCE
She notes that AI governance is happening less through traditional regulation and more through industrial policy, trade policy, and immigration policy, channels that are relatively more insulated from public accountability [50-52].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
She points to the same policy-lever shift described by Nelson in [S1].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Alondra Nelson
Argument 3
Question on using competition to safeguard sovereignty (Amba Kak)
EXPLANATION
Kak asks whether competition policy can serve as a tool for global‑majority countries to retain and exercise sovereignty in the AI age.
EVIDENCE
She poses the question directly, asking if competition can be a lever in the sovereignty toolkit for AI-driven economies [160-162].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The role of competition in preserving national sovereignty is examined in [S39].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 4
Female‑only panel highlights gender imbalance; need broader inclusion (Amba Kak)
EXPLANATION
Kak points out that the panel is the only all‑female one at the summit, underscoring the broader gender imbalance in AI fields and the need for more inclusive representation.
EVIDENCE
She remarks that it is “exceptional that this is also the only female-only panel” and hopes this will improve in future iterations rather than being a badge of honor [4-5].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Gender representation concerns are echoed in studies on gender equality and diversity in tech governance [S40][S41].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
AGREED WITH
Alondra Nelson, Karen Hao
Karen Hao
6 arguments · 171 words per minute · 1765 words · 618 seconds
Argument 1
Openness must embed community participation, not just technical release (Karen Hao)
EXPLANATION
Hao stresses that true openness goes beyond releasing code or model weights; it requires active community involvement, consent, and shared value creation throughout the AI development pipeline.
EVIDENCE
She describes the Te Hiku Media project, where the community was consulted, educated, gave consent for data collection, and co-designed applications, illustrating a socially embedded openness beyond technical sharing [194-202]; she also references the BigScience initiative, which involved many institutions and cultural partners to ensure transparent data governance [179-182].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The principle that openness requires community engagement is reinforced in [S41].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Alondra Nelson, Amba Kak
Argument 2
Corporate “open” language may mask lock‑in, requiring scrutiny (Karen Hao)
EXPLANATION
Hao observes that corporations often adopt inclusive and “open” rhetoric while actually promoting closed platforms that lock in users and profit from the narrative of openness.
EVIDENCE
She notes that corporate speak has become sophisticated, using language of inclusion and empowerment to sell technology while ultimately locking users into closed platforms [255-256].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Corporate rhetoric versus actual lock-in is critiqued in [S1] and further discussed in [S43].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 3
Community‑driven projects empower marginalized groups (Karen Hao)
EXPLANATION
Hao highlights how community‑focused AI initiatives can empower historically marginalized populations by providing tools that serve their specific linguistic and cultural needs.
EVIDENCE
The Te Hiku Media example shows a Māori-language radio station using an open, consent-based speech-recognition model to support language revitalization, with community members participating in data collection and application design [184-202].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Community-centric AI initiatives and their empowerment effects are noted in [S41].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
Argument 4
BigScience and Te Hiku Media illustrate participatory open‑source AI at scale (Karen Hao)
EXPLANATION
Hao presents two large‑scale, open‑source projects that embody participatory principles, demonstrating that open AI can be pursued at both global research consortium level and local community level.
EVIDENCE
She describes the BigScience project, which coordinated over a thousand researchers from 70 countries to create an open-source LLM with transparent data governance [179-182]; she also details the Te Hiku Media speech-recognition effort that engaged the Māori community and used open-source tools to build a culturally relevant model [184-202].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
Argument 5
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
EXPLANATION
Hao argues that true scale should mean many communities each building models for their own contexts, rather than a single dominant provider distributing a monopoly‑like product.
EVIDENCE
She reframes scale as multiple communities developing their own models, criticizing the Silicon Valley notion of scale as a monopoly and noting that most industries are data-poor, limiting diffusion of large-scale models [207-211].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Competition as a safeguard against monopoly and for diverse model development is highlighted in [S39].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
AGREED WITH
Anne Bouverot
Argument 6
Data‑worker exploitation is inherent; requires rethinking of model creation (Karen Hao)
EXPLANATION
Hao points out that the AI supply chain relies on extensive labor for data collection and cleaning, often under exploitative conditions, and calls for a fundamental redesign of how models are built.
EVIDENCE
She states that labor exploitation occurs both in data generation and data-worker cleaning, and that this exploitation is built into the logic of current model creation, necessitating a ground-up rethink [380-381].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
Anne Bouverot
3 arguments · 140 words per minute · 645 words · 275 seconds
Argument 1
Open source as competitive, sovereign tool (Anne Bouverot)
EXPLANATION
Bouverot argues that open‑source software can serve as a strategic lever for countries to catch up technologically and assert digital sovereignty, providing shared infrastructure and fostering competition.
EVIDENCE
She notes that China’s use of open source gave it a seat at the AI table, that open source offers both benefits and risks, and that Europe sees it as a competitive tool for building on others’ knowledge, standing on their shoulders [82-88].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Open-source as a lever for digital sovereignty is described in [S1] and [S39].
MAJOR DISCUSSION POINT
Defining Openness in AI
AGREED WITH
Karen Hao
DISAGREED WITH
Astha Kapoor
Argument 2
Middle powers can build ad‑hoc coalitions using open source to compete (Anne Bouverot)
EXPLANATION
She highlights that middle‑income nations can form flexible coalitions to collectively develop AI capabilities using open‑source resources, compensating for limited individual resources.
EVIDENCE
She cites remarks by Mark Carney and Emmanuel Macron, then lists countries such as Canada, France, Germany, India, Japan, and Australia that can cooperate in ad-hoc coalitions to advance AI governance and competition [89-92].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The potential of middle-power coalitions leveraging open source is discussed in [S39].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
Argument 3
Open source is not a universal solution; must be applied case‑by‑case (Anne Bouverot)
EXPLANATION
Bouverot cautions that open‑source is not a one‑size‑fits‑all answer; its suitability depends on specific use‑cases, risks, and contexts.
EVIDENCE
She remarks that “it doesn’t mean everything should be open source” and that open source is a tool, not a universal remedy, emphasizing the need for case-by-case assessment [84-86].
MAJOR DISCUSSION POINT
Open‑Source Projects, Scale, and Alternative Models
Astha Kapoor
3 arguments · 185 words per minute · 852 words · 275 seconds
Argument 1
Openness can generate dependence for Global South (Astha Kapoor)
EXPLANATION
Kapoor warns that framing openness merely as a driver of adoption can create dependency for Global South nations, turning them into labor pools for AI development without addressing deeper structural challenges.
EVIDENCE
She explains that openness as a driver of adoption is dangerous because it shifts focus from needed investments to merely using AI, risking labor exploitation and dependence, especially when the entire AI stack lacks local control [118-119][111-118].
MAJOR DISCUSSION POINT
Defining Openness in AI
DISAGREED WITH
Anne Bouverot
Argument 2
Need to distinguish Global South needs from middle‑power aspirations; avoid labor‑only role (Astha Kapoor)
EXPLANATION
Kapoor stresses that Global South countries have distinct structural needs (health, education) and should not be reduced to test‑beds for AI models; their aspirations differ from those of middle powers.
EVIDENCE
She notes that Global South priorities involve structural issues, that they should not be merely labor sources for testing models, and that solidarity must recognize non-homogeneity among large markets [124-126][111-118].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
Argument 3
Cooperatives as one‑vote‑one‑share models for inclusive AI design (Astha Kapoor)
EXPLANATION
Kapoor proposes cooperatives, which operate on a one‑member‑one‑vote principle, as a governance model that can turn users into co‑designers of AI systems, ensuring more equitable participation.
EVIDENCE
She references the Amul cooperative example, highlighting its one-vote-one-share structure and its potential to move participants from mere recipients to co-designers of AI initiatives [243-245].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
Ravneet Kaur
4 arguments · 168 words per minute · 1442 words · 512 seconds
Argument 1
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
EXPLANATION
Kaur emphasizes that the Competition Commission of India prioritizes transparency and accountability throughout the AI lifecycle, viewing these as essential for building public trust and democratic oversight.
EVIDENCE
She outlines the commission’s focus on transparency in governance, access to data, compute, and skill-sets, and stresses that trust depends on non-opaque systems, linking transparency to competition oversight [101-107][153-158].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Transparency and accountability in AI oversight are emphasized in [S38] and [S39].
MAJOR DISCUSSION POINT
Government Policy Mechanisms & Democratic Input
AGREED WITH
Alondra Nelson, Karen Hao
Argument 2
Competition as lever to prevent lock‑in, ensure contestable markets, protect sovereignty (Ravneet Kaur)
EXPLANATION
Kaur argues that robust competition is essential to avoid entry barriers, market foreclosure, and data lock‑in, thereby safeguarding national sovereignty in the AI era.
EVIDENCE
She states that competition ensures no entry barriers, prevents dominance from foreclosing markets, and protects consumers from being locked into particular systems, linking this to autonomy and sovereignty [161-166].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Competition’s role in preventing lock-in and protecting sovereignty is examined in [S39] and [S38].
MAJOR DISCUSSION POINT
Competition, Antitrust, and AI Sovereignty
Argument 3
Competition commission intervenes only on abusive practices, not on IP protection per se (Ravneet Kaur)
EXPLANATION
Kaur clarifies that the commission’s mandate is limited to addressing anti‑competitive abuses; it does not regulate intellectual‑property rights unless they result in market abuse.
EVIDENCE
She explains that the commission steps in only when there is abuse, aiming to protect innovation and consumer welfare, and does not stifle IP-driven innovation [315-324].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The commission’s limited mandate to address anti-competitive abuse, not IP per se, is reflected in [S35].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
Argument 4
Government must proactively build digital public infrastructure to support equitable AI development
EXPLANATION
Kaur emphasizes that state actors should create and provide shared resources such as compute platforms, data sets, and digital identity systems to enable startups and smaller players to develop AI solutions, especially for local language needs.
EVIDENCE
She cites the commission’s work on digital identity, digital payments, and the plan to offer compute platforms and data for startups, stressing the importance of small, language-specific models rather than only large-scale systems [246-250].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
Calls for shared digital infrastructure align with the shared-resource perspective in [S43] and [S1].
MAJOR DISCUSSION POINT
Public infrastructure as an enabling environment for AI
Audience member 2
1 argument · 190 words per minute · 256 words · 80 seconds
Argument 1
Observation of limited Chinese participation raises inclusion concerns (Audience member 2)
EXPLANATION
The audience member points out the low visibility of Chinese participants at the summit and asks how this affects the notion of an all‑inclusive AI vision.
EVIDENCE
She asks why Chinese representation is low and what “all-inclusive” means in this context, highlighting concerns about broader inclusion [298-306].
MAJOR DISCUSSION POINT
Middle Powers, Multilateral Coalitions & Global Governance
Audience member 1
1 argument · 138 words per minute · 141 words · 61 seconds
Argument 1
Individuals need agency and labeling to choose ethical AI tools (Audience member 1)
EXPLANATION
The participant stresses that individuals need clear, third‑party labeling of AI products to make informed, ethical choices, and that they should have agency to adopt, resist, or remain neutral toward AI tools.
EVIDENCE
She calls for third-party organizations to create easy-to-understand labels for AI models, similar to labeling in fashion or food industries, and notes that individuals have many touch-points to decide how to interact with AI [277-282][283-286].
MAJOR DISCUSSION POINT
Community Inclusion, Gender, and Representation
Audience member 5
1 argument · 183 words per minute · 167 words · 54 seconds
Argument 1
Protection‑by‑design for data could safeguard IP but challenges openness (Audience member 5)
EXPLANATION
The audience member asks whether designing AI systems to protect intellectual property and data—by rendering data unusable for models—can be an effective approach, and how it interacts with openness principles.
EVIDENCE
She raises the question about protecting publicly available data through protection-by-design, referencing research at the University of Chicago and elsewhere, and asks how this aligns with openness [354-363].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The question about protection-by-design and its relation to openness is raised in [S35].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
Audience member 6
1 argument · 128 words per minute · 78 words · 36 seconds
Argument 1
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
EXPLANATION
The participant queries how competition authorities should evaluate “open‑washing,” i.e., whether claimed openness truly lowers entry barriers or masks underlying dependencies, and whether new analytical tools are required.
EVIDENCE
She asks whether enforcement needs new tools or a reworking of competition frameworks to assess genuine openness versus hidden dependencies [369-374].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The need for new analytical tools to assess open-washing is discussed in [S35] and [S39].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
DISAGREED WITH
Ravneet Kaur
Audience member 4
1 argument · 140 words per minute · 57 words · 24 seconds
Argument 1
Tension between patents and open models highlighted by IP question (Audience member 4)
EXPLANATION
The audience member seeks clarification on how openness interacts with intellectual‑property regimes, questioning whether openness restricts or conflicts with patent protections.
EVIDENCE
She frames the question by noting her background as an IP and business lawyer and asks how openness of AI relates to intellectual-property restrictions [309-312].
EXTERNAL EVIDENCE (KNOWLEDGE BASE)
The interaction between openness and IP regimes is explored in [S43].
MAJOR DISCUSSION POINT
Labor, Intellectual Property, and Open‑Washing
Audience member 3
1 argument · 193 words per minute · 59 words · 18 seconds
Argument 1
Chinese open‑source AI models may embed CCP perspectives, raising governance concerns; mechanisms are needed to assess and mitigate political influence while leveraging technical strengths
EXPLANATION
The participant questions how to reconcile the technical excellence of Chinese open‑source models with the risk that they carry state‑driven ideological biases, and asks for ways to responsibly incorporate them into the broader AI ecosystem.
EVIDENCE
In their question they note that Chinese open-source models are “clearly the most intelligent in the open-source space but clearly have a deep CCP perspective,” and seek guidance on how to combine them appropriately within the ecosystem [308].
MAJOR DISCUSSION POINT
Geopolitical implications of open‑source AI
Agreements
Agreement Points
Openness should be understood as a socio‑technical, non‑binary concept that goes beyond merely releasing model weights or code and includes democratization, participation, shared infrastructure and sovereignty.
Speakers: Alondra Nelson, Amba Kak, Karen Hao
Openness as socio‑technical, non‑binary (Alondra Nelson)
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
Openness must embed community participation, not just technical release (Karen Hao)
All three speakers stress that ‘open’ is a stand-in for broader democratic values and that true openness involves community engagement, accountability and shared resources, not just technical openness such as releasing weights. Alondra describes the shift from a gradient to a binary view and calls for a broader socio-technical definition [30-33][40-43]; Amba explicitly frames openness as shorthand for democratization, participation and sovereignty [16-17]; Karen illustrates this with the BigScience and Tahiku Media projects that embed community consent and value sharing [179-182][194-202].
POLICY CONTEXT (KNOWLEDGE BASE)
This framing matches the UN Secretary-General’s roadmap that positions open-source solutions as a means to advance the Sustainable Development Goals and strengthen digital sovereignty, and it is echoed in analyses of digital public goods that stress community-driven governance and reduced dependency on proprietary platforms [S66][S65][S61].
U.S. AI governance is increasingly being pursued through industrial, trade and immigration policy levers rather than traditional regulatory rulemaking, reducing opportunities for public democratic input.
Speakers: Alondra Nelson, Amba Kak
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
Both speakers note that the Biden administration is steering AI policy via tariffs, export controls, H-1B visa costs and other industrial tools, bypassing the formal rulemaking process that would allow public comment. Alondra points to tariffs, export controls and costly H-1B visas as examples of the new “hyper-regulatory” approach that lacks democratic input [57-60][55-60]; Amba observes the same shift and its opacity for the broader public [50-52].
Community participation and inclusion are essential for legitimate AI governance and should be embedded in conferences, projects and policy discussions.
Speakers: Alondra Nelson, Karen Hao, Amba Kak
AI conferences should prioritize community inclusion to foster democratic legitimacy (Alondra Nelson)
Openness must embed community participation, not just technical release (Karen Hao)
Female‑only panel highlights gender imbalance; need broader inclusion (Amba Kak)
All three stress that AI discourse must move beyond elite circles to involve diverse community members. Alondra describes this summit as the first with a broad community of students, aunties, etc., calling it revolutionary [232-236]; Karen warns that corporate “open” language can mask lock-in and stresses genuine community engagement, as shown in the Te Hiku Media example [255-256][194-202]; Amba points out the gender imbalance of the panel and the need for broader representation [4-5].
POLICY CONTEXT (KNOWLEDGE BASE)
The need for broad stakeholder involvement is highlighted in interdisciplinary AI governance forums such as the IGF and UNESCO initiatives, and was a central theme of the WSIS+20 High-Level Event on building AI capacity from the ground up in the Global South [S57][S64][S68].
Open‑source software can serve as a strategic lever for countries, especially middle powers, to build digital sovereignty, foster competition and avoid dependence on a single dominant provider.
Speakers: Anne Bouverot, Karen Hao
Open source as competitive, sovereign tool (Anne Bouverot)
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
Both argue that open-source is a tool for nations to catch up technologically and maintain sovereignty. Anne notes China’s use of open-source to gain a seat at the table and Europe’s view of it as a competitive lever, and highlights ad-hoc coalitions of middle powers [82-88][89-92]; Karen reframes scale as many communities building their own models, warning that the Silicon Valley notion of scale creates monopolies [207-211].
POLICY CONTEXT (KNOWLEDGE BASE)
Policy analyses from the UN digital cooperation agenda and the digital public goods discourse argue that open-source enables middle-power states to reduce reliance on dominant vendors and promote competitive ecosystems [S66][S65][S70].
Transparency and accountability throughout the AI lifecycle are essential for building public trust and ensuring fair competition.
Speakers: Ravneet Kaur, Alondra Nelson, Karen Hao
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Openness as a practice includes accountability and shared infrastructure (Alondra Nelson)
BigScience project illustrates transparent data governance (Karen Hao)
All three emphasize that openness must be paired with transparent governance to foster trust. Ravneet outlines the commission’s focus on transparency in data, compute and governance as a condition for competition and trust [101-107][153-158]; Alondra links openness to accountability and shared resources [40-43]; Karen describes the BigScience initiative’s transparent data curation and value-return to contributors [179-182].
POLICY CONTEXT (KNOWLEDGE BASE)
Transparency and accountability are core pillars of the European Commission’s ‘Competition Policy for the Digital Era’ report, which calls for adapting antitrust rules to safeguard fair competition in AI markets [S56][S53].
Similar Viewpoints
Both see the U.S. moving away from conventional regulatory rulemaking toward industrial, trade and immigration tools, which diminishes democratic participation and transparency in AI governance. Alondra cites tariffs, export controls and H‑1B visa costs as examples of this “hyper‑regulatory” approach [57-60][55-60]; Amba notes the same shift and its relative immunisation from public oversight [50-52].
Speakers: Alondra Nelson, Amba Kak
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Shift from traditional regulation to policy levers reduces public accountability (Amba Kak)
Both view open‑source software as a strategic instrument for nations (especially middle powers) to achieve digital sovereignty and avoid concentration of power. Anne highlights open‑source as a lever for competition and coalition‑building among middle powers [82-88][89-92]; Karen argues that true scale means many communities building their own models, countering monopoly dynamics [207-211].
Speakers: Anne Bouverot, Karen Hao
Open source as competitive, sovereign tool (Anne Bouverot)
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
All three stress that openness must be coupled with transparent, accountable governance to build trust and ensure fair competition. Ravneet links transparency to competition and consumer trust [101-107][153-158]; Alondra ties openness to accountability and democratic practice [40-43]; Karen points to the BigScience consortium’s transparent data handling as a model of open governance [179-182].
Speakers: Ravneet Kaur, Alondra Nelson, Karen Hao
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Openness as a practice includes accountability and shared infrastructure (Alondra Nelson)
BigScience project illustrates transparent data governance (Karen Hao)
Unexpected Consensus
Recognition that open‑source projects can be scaled through community‑driven, small‑AI approaches rather than monolithic large‑scale models.
Speakers: Karen Hao, Anne Bouverot
Open source enables diverse communities to develop their own models rather than a monopoly (Karen Hao)
Open source as competitive, sovereign tool (Anne Bouverot)
While Anne frames open-source primarily as a geopolitical and competitive lever for nations, Karen extends the argument to a technical-scale perspective, asserting that true scale is achieved by many small, community-specific models rather than a single dominant one. This convergence of geopolitical and technical scaling arguments was not explicitly anticipated. Anne’s discussion of middle-power coalitions using open-source [89-92] aligns with Karen’s reframing of scale as distributed community development [207-211].
POLICY CONTEXT (KNOWLEDGE BASE)
Recent initiatives such as African small-AI language datasets and concerns about low-quality AI contributions to open-source repositories illustrate a shift toward lightweight, community-led models instead of monolithic systems [S62][S63][S68].
Agreement that competition policy is a crucial tool for protecting national sovereignty in the AI era.
Speakers: Amba Kak, Ravneet Kaur
Question on using competition to safeguard sovereignty (Amba Kak)
Competition as lever to prevent lock‑in, ensure contestable markets, protect sovereignty (Ravneet Kaur)
Amba explicitly asks whether competition can be part of a sovereignty toolkit [160-162]; Ravneet later affirms that competition prevents market foreclosure and protects autonomy, linking it directly to sovereignty [161-166]. The alignment of a moderator’s probing question with a regulator’s policy stance was not foreseen.
POLICY CONTEXT (KNOWLEDGE BASE)
Competition authorities are increasingly invoking competition law to protect national sovereignty in AI, as reflected in ecosystem-level competition assessments and EU antitrust reforms targeting digital platforms [S53][S54][S56].
Overall Assessment

The panel displayed substantial convergence around three core themes: (1) openness must be understood as a socio‑technical, democratic principle rather than a simple technical release; (2) U.S. AI policy is shifting toward industrial and trade levers, limiting formal democratic rulemaking; (3) competition and transparent governance are essential to prevent lock‑in, protect sovereignty and build public trust. These shared viewpoints cut across speakers from academia, government, and civil society, indicating a strong consensus on the need for broader, inclusive, and accountable AI governance frameworks.

High consensus on the definition of openness, the importance of community participation, and the role of competition and transparency. The agreement spans multiple domains (AI, data governance, digital economy, human rights), suggesting that future policy initiatives are likely to incorporate multi‑stakeholder, open‑source and competition‑focused mechanisms to address power asymmetries in AI.

Differences
Different Viewpoints
Openness may create dependence for Global South versus being a strategic lever for sovereign development
Speakers: Astha Kapoor, Anne Bouverot
Openness can generate dependence for Global South (Astha Kapoor)
Open source as competitive, sovereign tool (Anne Bouverot)
Astha warns that framing openness merely as a driver of adoption can turn Global South countries into labor pools and increase dependence, emphasizing the need to address structural challenges first [111-119]. Anne counters that open-source software is a strategic lever that allows countries, including middle-power and Global-South states, to catch up technologically and assert digital sovereignty, viewing it as a competitive tool rather than a source of dependence [75-88].
POLICY CONTEXT (KNOWLEDGE BASE)
Debates on whether openness leads to new dependencies for the Global South appear in UN digital cooperation literature and UNESCO-linked discussions on equitable access, highlighting both empowerment potential and risk of reliance on external codebases [S66][S61][S64][S68].
Whether existing competition assessment tools are sufficient to detect open‑washing or new analytical frameworks are needed
Speakers: Audience member 6, Ravneet Kaur
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
Competition assessment uses rigorous case‑by‑case analysis (Ravneet Kaur)
The audience member asks if competition authorities require new tools or a reworking of frameworks to evaluate whether claimed openness truly lowers entry barriers or masks hidden dependencies [369-374]. Ravneet responds that the commission already conducts detailed, case-by-case economic and competition analyses, relying on existing data and internal expertise without indicating a need for new methodologies [382-388].
POLICY CONTEXT (KNOWLEDGE BASE)
The adequacy of current competition tools to identify ‘open-washing’ is questioned in IGF panels and EU competition reports, prompting calls for novel analytical frameworks to better capture hidden dependencies [S53][S54][S56].
Unexpected Differences
Open‑source as a sovereign tool versus risk of dependency for Global South
Speakers: Astha Kapoor, Anne Bouverot
Openness can generate dependence for Global South (Astha Kapoor)
Open source as competitive, sovereign tool (Anne Bouverot)
While both discuss openness, Astha’s focus on the Global South’s structural needs leads her to view openness as potentially exploitative, whereas Anne treats open‑source as a universally beneficial lever for middle‑power and sovereign development. The tension between viewing openness as a risk versus an opportunity for less‑resourced nations was not anticipated given the generally shared pro‑openness framing elsewhere in the panel.
POLICY CONTEXT (KNOWLEDGE BASE)
The tension between open-source as a means of achieving digital sovereignty and the risk of creating new dependencies is discussed in UN and UNESCO policy briefs on digital public goods and community sovereignty [S66][S61][S64].
Need for new analytical tools to assess open‑washing versus reliance on existing competition analysis
Speakers: Audience member 6, Ravneet Kaur
Open‑washing assessment needs new competition tools and frameworks (Audience member 6)
Competition assessment uses rigorous case‑by‑case analysis (Ravneet Kaur)
The audience’s call for novel frameworks to detect open‑washing was not mirrored by the competition authority’s confidence in its current methodology, revealing an unexpected split between external stakeholder expectations and regulator self‑assessment.
POLICY CONTEXT (KNOWLEDGE BASE)
Recent IGF discussions and competition-law scholarship argue that existing antitrust metrics may be insufficient to detect open-washing, urging the development of dedicated analytical tools [S53][S54].
Overall Assessment

The panel largely converged on the principle that openness should be understood broadly and linked to democratic participation, community involvement, and sovereign capacity. Divergences emerged around the practical implications of openness for Global South countries and the adequacy of existing competition tools to police open‑washing claims. These disagreements highlight the challenge of translating shared normative goals into concrete policy instruments that satisfy both equity concerns and regulatory capacities.

Moderate – while there is strong consensus on the value of openness, the panel split on how openness should be leveraged for development versus sovereignty, and on whether current competition frameworks are sufficient to address emerging open‑washing practices. The implications are that future AI governance discussions will need to reconcile these perspectives, possibly by designing differentiated openness strategies for Global South contexts and by evaluating the need for new competition‑law tools.

Partial Agreements
All three agree that openness should go beyond mere technical release of model weights and serve democratic, participatory goals. Alondra stresses a spectrum and socio‑technical dimensions [40-43]; Karen illustrates this with community‑driven projects that involve consent and co‑design [194-202]; Amba frames openness as shorthand for democratization and sovereignty [16-17]. However, they differ on emphasis: Alondra focuses on policy and power‑shifting, Karen on concrete community engagement practices, and Amba on the symbolic meaning of the term.
Speakers: Alondra Nelson, Karen Hao, Amba Kak
Openness as socio‑technical, non‑binary (Alondra Nelson)
Openness must embed community participation, not just technical release (Karen Hao)
Openness as proxy for democratization, participation, sovereignty (Amba Kak)
Both see the need for democratic oversight in AI governance. Alondra points out that the U.S. is shifting to industrial levers that bypass public rulemaking, reducing democratic input [57-60]. Ravneet emphasizes that competition oversight must ensure transparency and accountability throughout the AI lifecycle to build public trust [101-107][153-158]. They share the goal of democratic oversight but differ on the institutional mechanism: Alondra critiques the current U.S. approach, while Ravneet proposes competition policy as the corrective mechanism.
Speakers: Alondra Nelson, Ravneet Kaur
US relies on industrial, trade, immigration levers, limiting formal rulemaking (Alondra Nelson)
Competition authority stresses transparency, accountability, democratic oversight (Ravneet Kaur)
Takeaways
Key takeaways
Openness in AI should be understood as a socio‑technical spectrum rather than a binary technical release, encompassing democracy, accountability, participation, and sovereignty.
US AI governance is shifting from formal rulemaking to industrial, trade, and immigration policy levers, which reduces direct public accountability.
Middle‑power countries can leverage open‑source tools and ad‑hoc coalitions to compete with the US and China, but their needs differ from those of Global South nations.
Competition policy is crucial for AI sovereignty: it must prevent ecosystem lock‑in, ensure contestable markets, and enforce transparency and accountability throughout the AI lifecycle.
Gender imbalance and broader inclusion remain significant challenges; community‑driven projects (e.g., Te Hiku Media, cooperatives) demonstrate how participatory openness can empower marginalized groups.
Large‑scale open‑source projects (e.g., BigScience) show that openness can be combined with consent‑based data practices, but scaling such models requires rethinking ‘scale’ as many community‑specific solutions rather than a single monopoly.
Labor exploitation and IP concerns are embedded in current AI supply chains; addressing them requires new governance mechanisms and possibly protection‑by‑design approaches.
Corporate “open” language can mask lock‑in; third‑party labeling and scrutiny of open‑washing are needed to give users real agency.
Resolutions and action items
Develop publicly funded compute and data‑sharing infrastructure to support startups and smaller players (suggested by Ravneet Kaur).
Create transparent, community‑focused labeling schemes for AI models and services to help consumers make informed choices (suggested by Karen Hao).
Incorporate more enforcement voices (competition authorities, regulators) into future AI governance summits (suggested by Amba Kak).
Encourage middle‑power coalitions (e.g., France, Canada, Germany, India, Japan, Australia) to co‑design open‑source AI initiatives (suggested by Anne Bouverot).
Promote community‑driven AI projects that involve consent, co‑design, and benefit‑sharing with data contributors (highlighted by Karen Hao).
Commission a study on open‑washing and develop analytical tools for competition authorities to assess true entry‑barrier reduction (raised by Audience member 6).
Unresolved issues
How to balance openness with safety concerns for high‑risk AI applications (e.g., nuclear‑related AI).
Mechanisms to ensure democratic input in US AI policy when reliance is on executive levers rather than formal rulemaking.
Concrete strategies for integrating Chinese open‑source models while mitigating ideological bias.
Effective protection‑by‑design methods for copyrighted data that reconcile IP rights with openness goals.
Scalable models for community‑driven AI that can operate beyond niche projects without sacrificing impact.
Specific metrics or frameworks to evaluate whether open‑source initiatives truly reduce dependence for Global South countries.
How to systematically address data‑worker and broader labor exploitation throughout the AI value chain.
Suggested compromises
Treat openness as a gradient: allow closed development for clearly high‑risk domains while keeping other layers (data, APIs, governance) open and participatory.
Adopt a hybrid governance approach that combines formal rulemaking (for democratic legitimacy) with targeted industrial/trade policies (for speed).
Encourage middle powers to cooperate in ad‑hoc coalitions rather than forming a single monolithic bloc, preserving flexibility for diverse national interests.
Accept that not all AI models need to be large‑scale; promote small, domain‑specific open models alongside flagship models.
Thought Provoking Comments
Openness should be understood as a socio‑technical characteristic, not just a binary technical decision about model weights. It is about shifting power, accountability, shared infrastructure, and democratic participation.
She reframes the dominant narrative that treats ‘open’ as merely releasing code or weights, highlighting the deeper political and democratic dimensions of openness.
This set the analytical foundation for the whole panel, prompting others (e.g., Anne Bouverot and Karen Hao) to discuss openness beyond technology and to consider its role in governance, competition, and community empowerment.
Speaker: Alondra Nelson
U.S. AI policy is not deregulative; it is ‘hyper‑regulatory’ through trade policy, export controls, immigration fees, and funding decisions, which bypasses the democratic input that formal rulemaking would provide.
She challenges the common perception that the current administration is hands‑off, exposing a hidden layer of state power that shapes the AI ecosystem without public oversight.
Shifted the conversation from a surface‑level discussion of openness to a critique of the governance mechanisms themselves, leading Amba to ask about the role of competition as a sovereignty tool and prompting Ravneet Kaur to elaborate on competition as a democratic lever.
Speaker: Alondra Nelson
Middle powers can leverage open‑source as a competitive tool by forming ‘coalitions of the willing’, allowing them to punch above their weight without building a full stack from scratch.
She introduces a new geopolitical framing that moves beyond the U.S.–China binary, showing how a broader set of countries can shape AI governance through collaborative openness.
Opened a new line of discussion about multilateralism and the strategic use of openness, which Astha Kapoor later linked to the specific challenges of Global South nations and the need for agency rather than mere adoption.
Speaker: Anne Bouverot
Openness as a driver of adoption can be dangerous for Global South countries because it diverts attention from necessary investments and makes them a test‑bed for external innovators, turning them into labor providers rather than co‑designers.
She critiques the simplistic narrative that open data or models automatically benefit developing regions, highlighting structural inequities and the risk of dependency.
Prompted a deeper examination of sovereignty and competition (Ravneet Kaur) and reinforced the panel’s focus on community‑centric models, influencing Karen Hao’s examples of truly participatory projects.
Speaker: Astha Kapoor
Competition is a crucial lever for sovereignty: it ensures contestable markets, prevents ecosystem lock‑in, and requires transparency and accountability throughout the AI lifecycle.
She connects competition policy directly to democratic control and national sovereignty, positioning it as a concrete tool rather than an abstract principle.
Steered the discussion toward concrete policy mechanisms, leading to follow‑up questions about enforcement, open‑washing, and the need for new analytical tools in competition law.
Speaker: Ravneet Kaur
The Te Hiku Media project shows that openness can be social as well as technical: community consent, co‑design, and value‑return to the data providers create a model where AI truly serves marginalized groups.
Provides a concrete, ground‑level illustration of the kind of openness Alondra described, moving the conversation from theory to practice.
Illustrated the feasibility of community‑driven AI, influencing her own later remarks about scaling and reinforcing the panel’s call for diverse, locally tailored AI solutions.
Speaker: Karen Hao
Scale should be re‑thought: true scale is not a single monopoly distributing to everyone, but many communities developing their own models for specific contexts; large‑scale monolithic models are a monopoly, not scale.
Challenges the Silicon Valley assumption that ‘scale = reach’, proposing a decentralized vision that aligns with the panel’s emphasis on sovereignty and community empowerment.
Prompted participants to reconsider the relationship between openness and market concentration, and set up the final reflections on how to build AI ecosystems that are both inclusive and competitive.
Speaker: Karen Hao
Corporate language of inclusion and diversity is often a veneer that masks the goal of locking users into closed platforms; genuine openness requires community engagement beyond marketing rhetoric.
Calls out performative inclusion, urging critical scrutiny of corporate narratives that claim openness while maintaining control.
Served as a concluding critique that tied together earlier points about democratic participation, competition, and the need for transparent, community‑led AI development.
Speaker: Karen Hao
Overall Assessment

The discussion was shaped by a series of pivotal interventions that repeatedly shifted the focus from abstract notions of ‘open’ to concrete political, economic, and community dimensions. Alondra Nelson’s reframing of openness as socio‑technical and her exposé of hidden regulatory levers set the analytical tone. Anne Bouverot’s middle‑power coalition concept broadened the geopolitical frame, while Astha Kapoor’s critique of openness as a potentially exploitative adoption model grounded the debate in Global South realities. Ravneet Kaur linked these ideas to competition law, presenting it as a tangible sovereignty tool. Karen Hao’s vivid case studies and her deconstruction of corporate ‘open’ rhetoric provided practical illustrations and a critical lens that tied the conversation together. Collectively, these comments redirected the panel from a surface‑level discussion of technical openness to a nuanced exploration of power, governance, and community agency, ultimately shaping a richer, more actionable dialogue.

Follow-up Questions
How can a third‑party labeling system be developed to clearly indicate the values, resource usage, and openness of AI models so consumers can make informed choices?
Consumers currently lack easy, standardized information about the provenance and openness of AI tools, hindering responsible adoption and accountability.
Speaker: Karen Hao
What analytical tools or revised competition frameworks are needed to detect and assess “open‑washing” where firms claim openness but maintain hidden barriers to entry?
Ensuring that openness claims genuinely lower entry barriers is crucial for effective competition enforcement and preventing anti‑competitive practices.
Speaker: Audience member 6
Can protection‑by‑design techniques that render publicly available data unusable for AI training be effective, and how do they align with broader openness goals?
Explores a technical‑legal approach to safeguard intellectual labor and data rights while balancing the principle of openness.
Speaker: Audience member 5
Who is truly included in the “all‑inclusive” AI vision, particularly regarding gender representation and the participation of countries like China?
Clarifying inclusion criteria is essential to ensure that AI governance frameworks do not marginalize key stakeholders or regions.
Speaker: Audience member 2
How should the AI community handle open‑source Chinese models that may embed CCP ideological controls, and what methods exist to mitigate such influences?
Open‑source models from geopolitically sensitive contexts raise concerns about hidden political bias and require strategies for safe adaptation.
Speaker: Audience member 3
How can transparency and community oversight be increased for data‑center and cloud‑infrastructure decisions that are currently made behind NDAs and without public input?
The physical layer of the AI stack is critical to democratic control; lack of openness undermines community trust and accountability.
Speaker: Alondra Nelson
What role can cooperatives (e.g., the Amul co‑op) play as co‑designers and governance structures for AI development in the Global South?
Cooperatives could offer a democratic, one‑member‑one‑vote model for pooling resources and shaping AI applications to local needs.
Speaker: Astha Kapoor
How can regulator and enforcement voices be more systematically included in AI governance discussions and summit panels?
Involving enforcers ensures that policy recommendations are grounded in enforceable legal frameworks and protect public interest.
Speaker: Amba Kak
What are the challenges and possible solutions for achieving scale with community‑driven, small‑AI models versus the monopoly‑style scaling of large tech firms?
Understanding how to diffuse AI capabilities without concentrating power is key to a more equitable AI ecosystem.
Speaker: Karen Hao
What research is needed to understand and mitigate labor exploitation throughout the AI supply chain, from data annotation to model deployment?
Labor concerns span the entire AI pipeline; systematic study can inform policies that protect workers and ensure ethical AI development.
Speaker: Karen Hao
How can democratic input be incorporated into AI policy mechanisms that rely on industrial, trade, and immigration levers rather than formal rulemaking?
Current “hyper‑regulatory” approaches bypass traditional public comment periods, raising questions about legitimacy and participation.
Speaker: Alondra Nelson

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.