Global Perspectives on Openness and Trust in AI
20 Feb 2026 15:00h - 16:00h
Summary
The panel convened by AI Now and AAPTI examined how the concept of “openness” shapes AI governance and its political economy, noting that the term does far more work than a simple technical label [12-17]. Participants argued that “open” functions as a proxy for broader values such as democratization, participation, and sovereignty rather than merely sharing code or model weights [16-17].
Alondra Nelson explained that whereas the Biden administration treated openness as a socio-technical gradient, in keeping with the original open-source ethos, the current administration frames it as a binary outcome: a model is either open or it is not [27-33][40-42]. She warned that this binary framing allows geopolitical concerns to eclipse accountability, transparency, and democratic control over AI systems [45-47], and added that U.S. AI policy now operates more through industrial levers such as tariffs, export controls, and high-cost H-1B visas, which she described as “hyper-regulatory” and less democratic than formal rulemaking [56-63][64-68].
Anne Bouverot highlighted that China’s use of open-source tools has enabled it to catch up technologically, while European countries view open source as a competitive lever for middle-power coalitions [75-84][88-92]. She argued that ad-hoc “coalitions of the willing” among middle powers can harness openness to build digital sovereignty without relying on a single dominant stack [89-92].
Astha Kapoor warned that for Global South nations, openness can become a risky “adoption” narrative that diverts resources from structural challenges and may turn these countries into test-beds for external AI models [111-119][124-126]. Ravneet Kaur described the Competition Commission of India’s study of AI markets, identifying risks such as ecosystem lock-in, price discrimination, and opaque partnerships, and emphasized that ensuring access to data, compute, and skills is essential for fair competition [128-138][148-158]. She argued that competition is a crucial tool for preserving national sovereignty in the AI era, requiring transparent governance and contestable markets [161-170][172-173].
Karen Hao presented two open-source initiatives, the BigScience multilingual LLM project and New Zealand’s Te Hiku Media speech-recognition model, that illustrate participatory, consent-driven openness and return value to data-providing communities [179-202]. She cautioned that scaling such models should not mean monopolistic distribution, but rather a decentralized “small-AI” approach that enables diverse industries and communities to develop their own solutions [207-212].
The discussion concluded that redefining openness as a democratic, community-centered practice, supported by transparent competition policy and inclusive coalitions, is essential for equitable AI development worldwide [40-42][161-170][207-212].
Key points
Major discussion points
– Re-defining “openness” in AI beyond technical binaries – The panel opened by noting that “open” is a stand-in for broader values such as democratization, participation and sovereignty [12-17]. Alondra emphasized that the current U.S. administration treats openness as a binary rather than a gradient and argued for a socio-technical view that links openness to power-shifting, accountability and community use [30-34][40-46][47-49]. Anne highlighted how open-source can be a strategic lever for middle-power countries while acknowledging its limits [75-89]. Karen illustrated concrete projects (the BigScience open-source LLM effort and the Te Hiku Media Māori speech-recognition model) that embody a participatory, consent-driven notion of openness [179-202].
– Governance mechanisms and the politics of U.S. AI policy – Alondra pointed out that, although the current administration appears “light-touch” on formal regulation, it is exercising heavy influence through trade, export controls and immigration policy, which she described as “hyper-regulatory” and “anti-democratic” compared with traditional rule-making that includes public comment [55-66][67-68]. Amba’s follow-up question framed this shift as a move away from transparent, accountable regulation toward less publicly scrutinised levers [50-52].
– Competition, market power and digital sovereignty – Ravneet Kaur explained the Competition Commission of India’s focus on anti-competitive practices (self-preferencing, bundling, exclusive agreements) across digital markets and, more recently, AI [128-138][141-152]. She argued that competition is essential for preventing entry barriers, ensuring transparency, and protecting sovereignty, especially for “global-majority” economies [161-170][166-170]. The discussion linked competition policy to broader concerns about data, compute and talent access [154-159].
– Inclusion, representation and gender equity – Amba noted that the panel was the only all-female one at the summit and called it a “badge of honor” that should be improved in future iterations [4-5][69-74]. Audience members raised questions about who is truly included in the “all-inclusive” AI vision, pointing to the under-representation of Chinese participants and the need for gender-balanced engagement [298-306][311-317]. Karen later critiqued “corporate speak” that co-opts inclusion language while preserving closed platforms [254-258].
– Community agency, labor and ethical risks – Alondra reflected on the lack of community transparency around data-center siting and the importance of community involvement in AI conferences [219-226][232-236]. Karen and later audience participants highlighted labor exploitation in data-collection pipelines and called for third-party labeling, “open-washing” safeguards, and design-by-consent approaches to protect workers and data subjects [277-283][369-381].
Overall purpose / goal of the discussion
The panel was convened to broaden the conversation about “openness” in AI governance, interrogate how power, politics and market structures shape AI development, and explore concrete pathways-through policy levers, competition law, community-driven projects, and inclusive representation-to align AI with the public interest across diverse geopolitical contexts (U.S., Europe, India, Global South).
Overall tone and its evolution
– Opening (0:00-12:00): Formal, optimistic, and collaborative, with Amba framing the session as a “stimulating” exchange and participants outlining shared values around openness [1-3][12-17].
– Middle segment (12:00-28:00): Becomes more critical and analytical; Alondra critiques the binary view of openness and the “anti-democratic” nature of U.S. policy [55-68]; Anne and Astha discuss geopolitical power shifts and the risks of a one-size-fits-all model [75-89][111-126]; Ravneet details concrete anti-competitive concerns [128-158].
– Later segment (28:00-41:00): Reflective and hopeful, emphasizing community participation, concrete open-source case studies, and the potential of competition to safeguard sovereignty [179-202][219-236][161-170].
– Closing (41:00-end): Cautiously optimistic, acknowledging corporate co-optation of inclusion language while urging deeper democratic engagement and concrete actions for labor justice and broader representation [254-258][369-381][389-391].
Overall, the tone moves from introductory enthusiasm to a nuanced critique of existing power structures, then toward constructive optimism about community-driven solutions and the need for inclusive, democratic AI governance.
Speakers
– Amba Kak – Moderator and co‑host of the panel; affiliated with the AI Now Institute and the AAPTI Institute.
– Alondra Nelson – Former Deputy Director of the White House Office of Science and Technology Policy (Biden administration); Harold F. Linder Professor, Institute for Advanced Study [S22][S24].
– Anne Bouverot – French President’s Special Envoy for the AI Action Summit; Special Envoy for Artificial Intelligence, France; former Director General of the GSMA [S27].
– Astha Kapoor – Representative of the AAPTI Institute / Civil Society, Asia‑Pacific Group; policy researcher on data stewardship [S7][S9].
– Ravneet Kaur – Chairperson, Competition Commission of India [S1].
– Karen Hao – Journalist and author of Empire of AI, covering AI policy and ethics.
Audience members
– Audience member 1 – Founder of Corral Inc. [S10].
– Audience member 2 – Participant from a German delegation (part of a group from Germany) [S29].
– Audience member 3 – Student (asked about open‑source Chinese models) [S13].
– Audience member 4 – Intellectual property and business lawyer [S17].
– Audience member 5 – Audience participant (question on AI’s impact on labor); no specific role identified.
– Audience member 6 – Audience participant (question on “open‑washing”); no specific role identified.
Additional speakers:
– None (all speakers appearing in the transcript are listed above).
The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate. Amba Kak opened by noting the “political economy of AI” as the common thread that links New York and Bangalore and highlighted the panel’s composition of senior figures from government, academia and journalism [1-3][4-5]. She also drew attention to the fact that this was the only all-female panel at the summit, framing it both as a point of pride and a reminder of the work still needed to normalise gender-balanced representation [4-5]. Kak also thanked Amlan Mohanty for co-conceptualising the panel and the summit organising team, Sanjana Mishra and Iksho Virat, for their logistical work [1-5].
A central theme introduced early on was the contested meaning of “openness”. Kak observed that discussions of openness have largely focused on technical affordances such as open-source code, model weights or hardware, yet the term is being used as a proxy for much broader values-including democratisation, participation, agency and even sovereignty [12-17].
Alondra Nelson (former Deputy Director of the White House Office of Science and Technology Policy) argued that the Biden administration treated openness as a gradient reflecting the original open-source ethos of shifting power and fostering accountability, whereas the current administration has reframed it as a binary outcome: a model is either “open” or it is not [27-33][40-43]. She warned that this binary framing allows geopolitical concerns to eclipse the socio-technical dimensions of openness, such as transparency and democratic control, and that merely releasing model weights without accompanying data, APIs or governance mechanisms is insufficient [44-49].
Nelson explained that U.S. AI policy under the current administration is increasingly pursued through industrial levers (tariffs, export controls, semiconductor restrictions and costly H-1B visas) rather than through traditional rule-making processes that invite public comment. She called the reliance on industrial levers a “hyper-regulatory” strategy and argued that, because it sidesteps formal rule-making, it is comparatively anti-democratic [55-63][64-68].
Anne Bouverot, France’s special envoy for the AI Action Summit, contextualised the geopolitical shift by recalling the U.S. announcement of the “Stargate” project and Vice-President Vance’s call for global customers [75-81]. She highlighted how China has leveraged open-source tools to catch up technologically, using open-source as a lever to gain a seat at the table [82-84]. For Europe and other middle-power nations, Bouverot argued that open-source can serve as a competitive instrument that enables “coalitions of the willing” to build digital sovereignty without having to develop an entire stack from scratch [88-92].
Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies. She explained that framing openness merely as a driver of data or multilingual datasets risks turning Global South countries into test-beds for external AI models, diverting attention from structural challenges in health, education and broader development [111-119][124-126].
Ravneet Kaur, Chair of the Competition Commission of India, presented the commission’s recent market study on AI, which identified anti-competitive practices such as self-preferencing, bundling, tying, exclusive agreements and ecosystem lock-in across digital markets [128-138][141-152]. She stressed that access to data, compute infrastructure and skilled talent is pivotal for fair competition, and that transparency and accountability throughout the AI lifecycle are essential to safeguard consumer welfare and national sovereignty [153-158][161-170][166-170]. Kaur positioned competition policy as a concrete tool to prevent market foreclosure and to ensure that AI systems remain contestable and transparent [161-166].
Karen Hao illustrated concrete realisations of a broader, participatory notion of openness. She described the BigScience multilingual large-language-model project, which brought together over a thousand researchers from 70 countries to create an open-source model with transparent data curation, shared governance and value-return mechanisms for contributing cultural institutions [179-182]. Hao also recounted the Te Hiku Media Māori speech-recognition initiative in New Zealand, where the community was consulted from the outset, consent was obtained for data use, and the resulting model was co-designed to serve language revitalisation goals [183-202].
Continuing the discussion on scale, Hao argued that the Silicon-Valley conception of “scale”-a single model distributed to everyone by a monopolistic provider-is misleading. She proposed that true scale should be understood as many communities developing their own, application-specific models, thereby avoiding the concentration of power inherent in monolithic large-scale systems [207-212].
Nelson reflected that, unlike many prior conferences, this summit actively included a broad cross-section of participants-students, “aunties” and other community members-making the event “revolutionary” in its inclusivity [232-236]. She also highlighted the lack of transparency around data-centre siting, where local officials are often bound by NDAs, underscoring a gap in community oversight of critical infrastructure [219-226].
The audience-question segment broadened the conversation. On individual agency, labour concerns and the risk of “open-washing”, Hao suggested that consumers can exercise agency by choosing open-source tools aligned with their values and called for third-party labelling schemes-similar to those used in fashion or food supply chains-to make the provenance and resource usage of AI models clear [277-283]. She warned that corporate rhetoric about inclusion often masks a strategy of locking users into closed platforms [254-258]. In response to a question about gender balance and Chinese participation, Astha Kapoor noted that democratisation is largely about market access and that true inclusion must go beyond token representation, urging more gender-balanced participation [300-306]. When asked about Chinese open-source models, Alondra observed that, although she has not worked directly with them, they can be fine-tuned to remove overt ideological bias and are already being leveraged by enterprises [312-318]. Finally, regarding an IP-focused query, Ravneet Kaur clarified that the Competition Commission’s remit is limited to curbing anti-competitive abuse and does not extend to adjudicating intellectual-property rights [322-328].
In closing, Kak thanked the participants and noted the richness of the dialogue, emphasizing that the consensus underscored openness as a socio-technical, democratic practice that must be coupled with transparent competition policy and genuine community participation [389-391]. She also urged future summits to include more regulator and enforcement voices so that AI actors are held accountable to the public [389-391].
Overall, the discussion highlighted that openness must be understood as a socio-technical, democratic practice, that competition policy can serve as a tool for digital sovereignty, and that genuine community participation-across gender, geography and sector-is essential for an AI future that serves the public interest [389-391].
The AI Now Institute and the AAPTI Institute, we are honored and delighted to be co-hosting this panel at the close of what has been an extremely stimulating, some would say over-stimulating week. What brings AAPTI and AI Now together, despite the many kinds of distance between New York and Bangalore, is our focus on the political economy of AI and our insistence that questions of technology are always questions of power. So we have a formidable panel by every standard, leaders in their field advocating for AI in the public interest, traversing several fields of government service, academia, and journalism, sometimes in the same person, as you will know if you read their bios, which I’m going to skip for reasons of expediency, but I’m going to talk through some of their specific advantages in the conversation.
You know, it always pains me a little bit to even bring it up, but I’m going to do it anyway, which is it is exceptional that this is also the only female-only panel at this symposium. Hopefully that’s not something we have to say a lot or something that we have to wear as a badge of honor, but more something to work on for future iterations. So before we begin, I don’t think he’s in the room, but I want to also thank Amlan Mohanty, who’s been a partner in conceptualizing and helping to bring this panel to light, and to our wonderful summit organizing team, Sanjana Mishra and Iksho Virat, for their tireless efforts. I hope you all get good sleep tonight after a very long week.
Okay, so let’s get into it. I’m going to moderate this panel, so I’ll take a seat. Thank you. There have been many discussions about openness at this summit. You’ve probably been in at least one of them. For the most part, these discussions have focused on the kind of technical affordances of open source, open-weight models, open hardware. But what’s clear is that the word open is doing a lot of work in these conversations. It’s a stand-in for many much broader values of democratization, of participation, agency, even sovereignty. So in today’s panel, we’re going to kind of widen our understanding of what openness could mean in this conversation about AI.
And I’m going to start with Alondra. Alondra has been the deputy director of the White House Office of Science and Technology Policy under President Biden. And at the time, there was a very heated debate about the geopolitical but also safety implications of open source and what U.S. government policy would be on these issues. And it seems like under this current administration, we’ve landed on a pro-open-source overall orientation. But at the same time, it feels as if in many senses, AI governance in the United States is more closed than it has ever been. So I guess I wanted to ask, what do you see as the broader challenges to openness in AI governance today?
Thank you for organizing this, colleagues. And good to be here and good to close out this exciting summit with you all. So a couple of things. I mean, I would say the Biden administration took the question of open-weight models as a gradient, right? So it was a spectrum, so that open was not a binary, either open or not open. And I think the new administration, the current administration, takes it much more as a binary, that open is a thing that you sort of have achieved and it is now open as opposed to being closed. I think the difference is that, to your point from the opening, Amba, is that I think part of what we were trying to do in the Biden administration was really go back to a kind of foundational sense of openness that comes out of an open source movement that really thinks about openness as a kind of socio-technical characteristic and not just a technical characteristic.
So certainly the questions around open models, AI models, are often around technical things like model weights. Are the model weights shared? Only the model weights shared? Is it also the case that the training data is shared? You know, is the API open to a certain extent or closed to a certain extent? So the technical things are certainly there. But I think if we go back to a sort of broader understanding of openness that comes out of sort of open source software, it was about shifting power. It was about forms of accountability. It was about sort of openness as a kind of practice and openness as shared infrastructure, openness as resources that could be used by lots of different communities, things that could be, you could modify the technology, that you could sort of just use the technology for the sort of purposes of your community or the purposes that you had.
And so that meant that that older, I think, broader definition of open was much more about democracy and transparency and accountability in a way that if you take even, you know, a so-called open source model like Llama 2 or Llama 3, which isn’t really open source, we’re being asked to be content with model weights as open. So I think the, you know, why we want to really push back on that is because, you know, that we are often, I think, using geopolitical stakes as a justification for not doing the socio part of the socio-technical, for not doing the accountability and the transparency and the democracy part because, you know, too dangerous because in the UNESCO context, China, you know, these things just sort of sit in as signs for explanations for, you know, why things can’t be different.
And I think it’s the case that to go, you know, to be reminded of a kind of broader sense of open reminds us that, you know, it’s not this binary and that one can have, you know, there obviously may be places where you don’t want open source. Like, do you want open source, like nuclear-deployed AI? Like, probably not, right? But the debate gets carried forward as if, like, every open source use or open weight use is that use, as opposed to the sort of gradient of uses that are much safer and moreover are beneficial to communities, to helping people achieve their goals and sort of certainly much better for public transparency and accountability about what these systems do in the world.
Can I ask a quick follow-up and then I want to move to Anne, which is that the other sort of defining feature of certainly of U.S. government policy today is that it’s happening less through traditional, you know, the traditional forms of regulation that we’re used to and much more through industrial policy, through trade policy, through immigration. But these are also spheres that have been, I would say, relatively even more immunized from public accountability or harder to, you know, harder for the broader public to weigh in on. So just wanted your thoughts on how we…
Yes, I’ve been writing and thinking about this. Thank you for that question. So, you know, we’ve spoken a lot about the new administration, and it gets talked about as being deregulatory in regards to AI and being very light and being, quote, unquote, light touch. And I think if we actually pose that as a question as opposed to accepting it as a statement and actually look at what the current administration in the U.S. is doing around AI, it’s actually taking a quite very heavy hand to sort of steer AI. So you mentioned some of the levers that they’re using, tariffs, trade policy, export controls of semiconductor chips, in the U.S. context even immigration. So, you know, there are, you know, I think companies are getting out of it and around it depending on their relationship to Washington, but we’re told that an H-1B visa for a high-tech worker is $100,000 per worker, right?
And so that’s, you know, 10x, 20x or whatever times a company, that’s quite a lot of money. And also just the way that science is being funded, to the extent that, you know, the federal government plays a large role in driving the sort of research ecosystem for technology. So all of those things are being very heavily shaped in the current administration in the U.S. And so it may not be regulatory in the sense of formal rulemaking as it happens in the United States context, but it is certainly hyper-regulatory, I think, in a lot of other ways. And I’ll go back to my keyword of the day, the democracy piece, which is the upside of formal rulemaking, even though it can be clunky, it can take a long time, sometimes the pace is too slow for the pace of the technology, all of those things can be true, is that it has democratic input.
So if you’re doing a rulemaking in the context of the U.S. federal government, there will be a public call, there will be a public notice that you’re doing the rulemaking, there will be a public call for input. So even if you don’t agree with the outcome, there are sort of moments of democratic input. When we are doing AI policy by fiat and through executive authority only, even those limited inputs are gone. So it’s not only, I think, quite heavy-handed. It’s unfortunately, I think, anti-democratic relative to the status quo.
Yeah, exactly. Anne, I want to move to you. As the French president’s special envoy for the AI Action Summit, you’ve been at the heart of a lot of global coordination on AI governance. And there was a time, I would say, the last 10 years have been characterized by open versus closed as a kind of binary or a way of organizing the world into particular camps when it comes to AI, the democratic open world and the rest of the world. But it’s interesting how much that has, you know, the ground beneath us has shifted in the last few years. And it has been particularly interesting to note at this summit that it is middle powers as a frame that is coming through as a kind of new organizing principle.
So I guess I want to say, I mean, do you see that openness still has value in forging multilateral solidarities, and especially in this brave new world we’re in?
Yes, absolutely. I mean, clearly the geopolitical landscape has really shifted. At the AI Action Summit in Paris, it was exactly a year ago in February. It was just after the inauguration in the U.S. It was the first international trip for Vice President Vance, and what a speech that was, just before Munich, the Munich Security Conference. It was a moment where the U.S. announced at the White House the Stargate project. So it was a very strong and loud message from the U.S. saying, we’re here, we’re investing, we’re the world leaders. And at the summit, J.D. Vance said very clearly, we want all of you to be customers of our technology. And at the same time, this is the moment when DeepSeek emerged on the world map and everybody realized that actually China, using open source, which is why I want to come to that, was really saying we have a seat at the table and we’re actually playing that game.
And China using open source is actually very interesting because open source has a number of benefits and also risks. I don’t think it’s the answer to everything, but clearly it’s a way for challengers to catch up. This is how Android came to the world of smartphones. There’s many examples, and this is what China has taken as a lever to be in that race. But then on to what it means for other countries than the U.S. and China. It also means that this is a tool that can be used by other countries, which is why in France and in Europe we’re very much in favor of open source as a competitive tool and as a way to leverage the knowledge and the findings of others, to stand on their shoulders and continue to develop technology.
It doesn’t mean that everything should be open source. There are cases where you do want to be careful, depending on the use case, but as a way to develop and stimulate competition it is very powerful. It’s not the only tool. You mentioned middle economies, middle powers. There was this fantastic speech by Mark Carney at Davos, and there was a speech by Macron as well that maybe I’ll conclude with, but this idea that middle economies have some resources, not the resources to build their own stack top to bottom and to fund frontier-level AI. But together, by building coalitions of the willing, these middle economies can do a lot of things. I believe that Canada, France, Germany, Switzerland, India, Japan, Australia, I can name a few of them.
And it doesn’t have to be one big block of these middle powers, but ad hoc coalitions of the willing. So I believe this is really something that can be useful in the evolution of governance.
That was a fascinating account, and I think what it also highlights is that actually, whether you’re China or the U.S. or the middle powers or France, there’s a level at which everyone, as we discussed, can in some limited way be pro-open source. So do you think then that the differentiation will be at the layer of governance and our approaches to how we govern? How do we govern these technologies?
I don’t know, is really the answer. Governance is such a broad word. There’s a lot of, for example, open source is really being taken as a tool by startups and scale-ups in Europe and in other countries. I mean, by Mistral, by Cohere, by Sakana AI in Japan, by a number. Is that governance? I don’t know. But clearly, governance and countries and institutions have a role to play in saying, how do we shape those coalitions of the willing? How do we put public funding or access to publicly funded compute or access to data sets that countries can help to put together? How do we put that to use and in which ways? So what are the governance tools that we use to strengthen digital sovereignty and resilience?
Precisely, yeah, that’s sort of what I was getting at. Okay, Astha, I’ll quickly move to you. Middle powers, as we just discussed, is a very broad term, and what it conceals is that there are many different economic and political aspirations of the countries that are bundled in that mix, and especially for countries like India or other countries in the Global South, what are the unique kind of forms of both leverage and dependence in this current environment?
Yeah, thanks so much, Amba. I mean, I think that what we’ve been tussling with over the last few days is that we went from global south to middle powers very quickly in a matter of days, which changes our form a little bit and our aspirations, and I think that that is what we have to grapple with, which is that as global south, our needs are very different in terms of we have structural issues around health, around education that need to be addressed. We also have, you know, things that we need to do in terms of moving the country forward beyond what is just technologically mediated progress. And I think that what we’ve been hearing over the last five days is that things like, well, open data or multilingual data sets is what is going to be that push.
So, you know, our languages will now be online. But then at the same time, we also have to realize that without having openness or control or agency or frictions across that entire AI stack, we are basically risking our populations in the Global South doing the labor to bring people online. So openness as a driver of adoption is actually quite a dangerous frame for Global South countries because it moves attention from where we might need to invest our resources to thinking that the only way to solve our historical problems is via adoption. And we’ve also seen that in the absence of governance, India is not new to the openness discourse, right? We have had a history over the last 12 years or 15 years on digital public infrastructure, but we’ve also seen the limits of once adoption occurs and when you have innovation, people with the deepest pockets come to innovate there because this is an enormous market.
So I think, as you mentioned, Karen, if we are a middle power, we’re definitely on the menu as a market. If we are a Global South country, I think there’s value in thinking about what that solidarity is, because you’re right, there’s no homogeneity. And I think we’ve missed some of those questions around how we as large markets diversify. We’re not here to do the labor to, you know, test-bed models that are built elsewhere. So I think openness as dialogue, as distribution of value, is what we need to think about.
So many soundbites that I want to clip out of what you just said; that was incredible, thank you. Chairperson Kaur, firstly, thank you so much for being here. I think what Astha said actually leads in well to the question I wanted to ask you, which is: how does one combat this dependence? As the Chair of the Competition Commission of India, you’re a regulator that has been ahead of the curve in looking at anti-competitive trends in this market. So from your perspective, can you say a little bit both about the key implications of competition in the AI market, and also whether you see competition as a lever in the so-called sovereignty toolkit?
Thank you, Amba. So for us at the Competition Commission of India, we’ve been looking at a lot of developments happening in the internet economy, and these developments have changed the way businesses work, how consumers interact with the markets, and how value is being created. So things are moving very rapidly on the digital front. And as the commission, we have looked at what can be the practices which can be anti-competitive. Apart from the benefits which are coming from a digital economy, and we have numerous benefits when it comes to economies of scale, the network effects, the efficiencies which are coming from that, there are also these risks. And some of these have already been observed by the commission.
So the key ones which we found in the case of digital markets are the self-preferencing which is happening; tying and bundling occurring in numerous cases; leveraging being done; exclusive agreements where unfair terms are also being sought; and, you know, parity arrangements being put in place. So in the Competition Commission, we have looked at this conduct when it comes to search engines. We’ve looked at it in mobile ecosystems, online intermediation services, whether it is hotel bookings, food ordering, e-commerce, or social media platforms. So across the entire spectrum, the commission has been looking at it. And very interestingly, we then started looking at AI: what could be the impact of AI?
So we did a market study on AI and competition, and the report was released recently, in October 2025. It’s available on our website. And we found a lot of similarities in the way AI can function as well. AI can bring a lot of benefits. We are seeing a lot of benefits when it comes to healthcare, education, logistics, supply chain management, and agriculture, and I’m seeing a lot of good things happening on that front. But there are also these potential risks: you could see concentration in the entire AI value chain; there could be ecosystem lock-in; there could be targeted price discrimination against people based on location, economic means, et cetera.
And then exclusive partnerships, and the systems being opaque. So those were the things identified in the market study. And as a first step, we thought we need to make everybody aware, because the important issue is one of access. Who has the access? That is who will determine what will happen in future. So it is access to data, access to compute infrastructure, access to even skill sets: whether we are able to build up the required skill sets within the country to be able to compete effectively. So those issues have brought us to work towards a framework where we are asking: in the entire life cycle of the AI system, how can we bring in transparency, and how can we bring in accountability?
I think that’s so important, too, because we focus a lot on big tech control over infrastructure, the inputs people are familiar with. But I think what you’re pointing to is that it’s access to the consumer; the pathways to monetization are happening at the distribution layer. So really paying close attention to making sure that we have free and open competition in that layer, and that firms can’t carry dominance from one market into another, seems really important. My second, maybe more provocative, question was: do you see competition as a tool for particularly global majority countries to retain and exercise sovereignty in the AI age?
When we look at AI, we are looking at how far we can develop and how much we can do to make sure that we are able to make the most of the market, and that we are able to develop, deploy, and monitor the AI systems that we are putting in place. And that’s where the issue comes up that we need to have the autonomy to be able to deploy the systems as per our economic, strategic, and societal priorities. And that’s where we see the very critical thing of how we can ensure that AI does that. And competition is a very important aspect of it. We just can’t forget about it, because competition is what is going to ensure that there are no entry barriers, that players who are already there are not using their dominance to foreclose competition, to foreclose the market, and also that consumers are not left locked into a particular system because they can’t move their data and the various benefits that they are deriving from the AI systems to some other applications.
So really, competition is at the heart of it, and I don’t see any way we can forget about markets. Markets would need to be contestable, fair, competitive. And for that, you know, that is where I would like to point to our study, where we have clearly brought out that people who are deploying the technology have to have technical transparency. The stakeholders have to be able to understand what’s happening, what this technology or this application is being used for. And then there has to be governance transparency: how you are governing that system also needs to be transparent. So once we are able to ensure that the people who are deploying these systems are looking at all these aspects, and the self-audit is happening, then maybe we would be able to safeguard competition, because at the crux of it all is maintaining competition.
Thank you so much. Karen, I’m going to move to you. And just from the fact that there was a line of people trying to take a selfie with you before we started, I’m going to assume that many people in the audience are familiar with Karen’s incredible book, Empire of AI. Her work has really delved into the global inequities that are embedded in the global AI supply chain. I want to ask you: your book is full of rich examples, but where do you see that open approaches to developing AI in some ways pose a challenge to this empire model of AI?
One example is the BigScience project. It was this project that brought together over a thousand researchers from 70 countries and 250 institutions to try and create an open-source large language model that would not only allow many different researchers to then interrogate what is actually happening beneath the surface of a large language model, but also completely rethink what it would take to develop these technologies in a fundamentally more beneficial way, where, for example, there are better data governance practices, where you’re actually curating and cleaning the data, making it transparent for people, being able to track which data owners are contributing to what aspect of value generation within the model. And this kind of goes back to Alondra’s point as well, where you were saying…
that we really need to understand openness with a much broader conception of what openness means. It’s not just technical openness. And this project really embodied that, where they were working together with lots of different cultural institutions, with libraries, historical institutions, to try and figure out better ways of capturing the rich data that they had, but with respect for that institution and with a way to then deliver value back to that institution, so the value chain wasn’t going just to the model creators themselves. Another project that I really loved is one that I highlighted in the epilogue of my book, which is the Te Hiku Media speech recognition model. So Te Hiku Media are a nonprofit radio station in New Zealand, and they broadcast in te reo Māori, the Māori language, the language of the indigenous peoples of New Zealand.
A couple of years ago, there was this big movement within New Zealand to try and revitalize the Māori language, because it has been a huge challenge for them; the language has almost been lost through the process of colonization. And Te Hiku Media thought they had a very unique opportunity with this rich archival audio of te reo Māori to open it up to the community and help facilitate more language learning. They wanted to make it more accessible than simply allowing people to listen to it, though. They wanted to create an application where you listen to the audio while you see a transcription of the audio. You can click on the transcription to get automatic translation. You can figure out how the language actually works.
But they realized they didn’t have enough capacity to transcribe this, because there simply were not enough proficient te reo Māori speakers. So this was the perfect use case where they could build an AI speech recognition tool to do that work for them. But they went about this project in a totally different way. They made it extremely open and participatory for the community, not just in a technical way but in a social way, where they engaged immediately with the community to ask them: do you want this AI tool? And once the community said yes, they then had a public education campaign where they taught everyone what AI is in the first place and what is actually needed: we need a model, we need data, this is the kind of data that we need, this is the data that we would need from you.
And then, once they actually engaged in that process and developed so much trust with the community, they were able to collect enough data from the community, with full consent, in just a few days to train a speech recognition model. And then they continued to go back to the community, and they said, now that we have this model, what kinds of applications do you actually want us to develop with this? What kinds of new AI models do you want to develop with this? And all of this was built on another open-source project, which was the Mozilla Foundation’s DeepSpeech model, which was similarly developed with that kind of broader definition of openness; it was a model developed with similarly consentful data donations.
And so the entire stack was built in the spirit of collaboration, with participation from everyone in the community, with an equal exchange of value, where the people who are giving the data have a vote, have a say in how the model can ultimately help support their journey in language learning. So both of those examples I always hold in my head when I’m thinking of what visions of AI we actually want to support, what visions of open-source AI we actually want to support.
As you were speaking, I was just thinking: apart from being open and participatory in all the ways you said, these examples also provide a contrast to the idea that there is one model to rule them all, this very large-language-model, single-bet-on-a-single-technology type of approach. But similarly, one of the, I guess, common retorts to these experiments is that we can’t do that at scale. And so I’m just curious: what do you see as the tension between these kinds of governance structures and scale, and is there a trade-off?
So I would reframe what we mean by scale, because what we are taught by Silicon Valley is that scale means they distribute to everyone, but they are the sole distributor. And to me, that’s not scale. That’s a monopoly. What we would really want from scale is different communities all around the world, different industries, different companies, each developing models by and for them, at scale. That’s, to me, a much more appropriate way of thinking about scale. And in fact, what’s so interesting is that, because of the data imperative and the compute imperative for large language models as they’re currently being trained by the main companies, they’re not going to be able to do that.
There isn’t a good ability to diffuse this technology across many different industries or many different communities. Most industries are data-poor industries. They’re not like the Internet industries; they don’t sit on vast amounts of data. And so if we actually want to diffuse AI to more people around the world and for more use cases around the world, we in fact need to think of scale from a small-AI perspective, a community-driven perspective, an application-specific perspective, and that’s how we’re going to get scale.
Okay, we’ve heard, I guess, a range of rich perspectives, and I’m going to take it as a good sign that all our panelists seem to be actively taking notes and sort of engaging with what each other was saying. So I was going to propose as a sort of round two that I might ask, just based on the conversation we’ve just had, Alondra, what is something that’s sort of sticking with you or that you’re working through in response?
Yeah, I think community. So Karen teed that up for me, and the note that I was just writing here was about that. I was thinking about how the stack that we are building now is explicitly closed to community. And I was thinking in particular about the data center and cloud layer. So in the U.S. context, there’s a lot of contestation, growing contestation, in communities about data centers. What folks might not know is that part of the contestation is because elected officials are asked to sign NDAs, and contracts to stand up data centers are being signed in the dark of night, and communities don’t even know. So the lack of openness around the infrastructure, that infrastructural piece of the AI stack, is actually quite profound.
And then I was thinking the opposite. So, my reflection on the time here, which I’m still going to be processing for quite a long time: it’s my first time in New Delhi, my first time in India. It’s been an incredible experience. But I’ve been to a lot of AI conferences, like, you know, NeurIPS and everything, professional ones and not-professional ones. A lot. This is the first one I’ve ever been to that has included the community in any considerable way. And I mean, I think it’s a revolutionary thing. And if we’re really serious about having democracy and community and voice, AI conferences need to look much more like this one than the ones that we spend a lot of our time going to.
So, you know, who knows what will be the outcome of this week together. But it has been extraordinary and distinctive in the inclusion of lots of, you know, uncles, aunties, college students, and lots in between.
Astha, closing reflections.
Yeah. First of all, thank you for that reframe. As somebody who was here on the 16th, I was feeling so overwhelmed, and my instinct was, like, there are too many people. But I do appreciate that reframe: that this is the community that is going to build and question and do the work that we all keep talking about. And from that, my word is also community, but also friction: how do we enable some of that, both the coalescing but also the dialogue, the questions, the where-is-the-value-for-me part of it. And I know an example was presented yesterday on the Amul co-op. We’ve been doing a lot of work with cooperatives, which to me is a nice space, because there is the governance question of one member, one vote, and you can pool things.
So how do they become not just recipients but co-designers in some of the things that we’ve heard over the last few days. So, yeah.

Just closing reflections, and maybe even just a takeaway that you’re sitting with after this week.
Yeah, sure. So for me, I think the very important thing which came out from this AI Impact Summit is that governments need to be very active about how they are ensuring that the deployment of AI is happening. And for that, I am very happy with the way we are going, in terms of, you know, we did a great job when it came to digital identity and digital payments. So now we are looking at digital public infrastructure: how you’re going to be able to provide compute platforms for startups, for people who don’t have the resources, make data available, and then the focus which is there on small language models. Everything doesn’t need to be large, especially when we look at things which are very language-specific, very related to our country and to our solutions.
So that’s one of the key takeaways that I have. And the other, of course, is that all of us at the Competition Commission are now, you know, going back with this: that one needs to be very alert as to what kind of systems are being put in place and whether they are flexible. Is there transparency? Is there accountability? Those are the key things, because at the end of the day, it is trust. If you can build up trust, if your systems are not opaque, then you will be able to get people on board onto your applications and your systems. And that’s where success lies; that’s where value is.
I’ll say, ma’am, that one of my key takeaways, and hopefully someone from the Swiss government is listening for next year, is that we also need to see many more voices from the enforcers, those that are going to make sure that the players in this space are accountable to the public and not above the law. So I’m very grateful that you’re here, and I hope that future summits see more enforcers at the table. Okay, Karen, you get the last word, and then I’m going to open up for questions, so start thinking of your questions.
I think my biggest reflection from the summit, which I also shared at an event last night, is that it’s so interesting to observe corporate speak in these spaces. And the thing that struck me the most about this summit is that this corporate speak has gotten very sophisticated, in that they have adopted the language of inclusion, diversity, empowering marginalized communities to talk about ultimately selling their technology and making sure that you kind of buy into helping them lock in their closed platforms. And I hope that, because we have more community engagement and there’s more openness in a lot of the discussions happening alongside this very sophisticated corporate speak, all of you will take away from the summit this broader idea of what it really means to ultimately build a future where AI can empower people.
It does not actually mean the democracy that the companies offer us. It in fact means that we should all be thinking very deeply about what are the problems that we really need to solve, as individuals, within our families, our communities, our companies, our contexts, and then whether or not AI is even the right solution for that problem, and then how to design and develop, from the ground up, AI solutions that truly are empowering and enabling and help tackle those problems and bring everyone along together.
That was, yeah, what a great note to end on. And honestly, a note of optimism and a note to build towards the futures we want to see. Okay, so does anyone have any questions? Okay, I saw you first. Go ahead.
Hi, everyone. And, yeah, I was one of the people in line looking for a signature on the book. So I’ve read it, Karen; it’s a reference book. And my question is addressed to you. So all of this, it makes sense, but it makes sense in a more macro way. From a micro perspective, where an individual is exposed to AI at their workplace and is expected to use it, and, you know, there’s no getting away from it: how do we reconcile the fact that, you know, there is probably a whole lot of exploitation behind the models that we’re using, but at the same time you can’t not use it, because it’s just everyday?
I don’t use it.

Yeah. Yeah. So I’d like to know a little bit more about that. How?
No, I actually think it’s totally possible to not use these tools. But also, I would say that oftentimes our conversations around adopting AI are posed as a binary: either you go completely all in, or you go none at all. And there are actually a million possibilities in between. There are so many different ways that you could refrain from using AI in certain contexts, but maybe there are other ways that it helps you, being more intentional about what kinds of AI tools you adopt and from which kinds of companies. Like, we’ve been talking a lot about openness, so maybe you choose to use more open AI technologies rather than closed ones. One of the things that I feel is missing right now within the AI ecosystem, and that makes the burden very, very high on consumers, is that we don’t really have third-party organizations doing analysis to make clear and easy labels for consumers to determine what values and what degree of resources are being used to develop different types of AI models, so that they can actually make informed decisions. But we have lots of precedent for this happening in other industries, like the fashion supply chain and food and coffee. And so I hope that someone out there listening will start working on this: develop some kind of third-party labeling system so that consumers can actually start making more informed choices.
The other thing that I would say is, we aren’t just consumers. That’s not the only way that individuals can push against the inevitability narratives of AI. We’ve seen amazing protests that have broken out all around the world to push against data centers. We’ve seen protests from parents who feel that their children are being harmed and that this rapid escalation of AI advancement is getting out of control. We’ve seen artists and writers using the tools of litigation to counter these companies when they infringe on their intellectual property in ways that they don’t stand for. There are many different ways. I think within your life, AI is everywhere, and that also means you as an individual, and within your community, have a thousand different touch points for how you can interact with the AI supply chain. And at each of those touch points, you can choose whether to resist or adopt or be neutral. So, yeah, I hope that people actually feel significantly more agency than I think people generally feel today.
Thank you. Okay, I think we should do a couple of questions. So you, you, and you. Okay, let’s go in that order. So we’ll take those three questions and then…
Hello, thank you so much. This was, I think, my favorite panel of the whole summit. And also, like, an all-female panel, which I think is nice. My question is also kind of connected to a reflection. You know, I feel like in this space, I’ve realized there are not nearly as many women as men. And, again, as you said, it’s the only all-female panel. We’re here with a group of 15 people from Germany, and half of us are male and half of us are female. Often just our male counterparts get addressed, with somebody just speaking to them, you know, whether asking them for money or, like, in terms of pitching their business idea, whatever.
But I’ve also noticed other things. Like, the theme is, right, AI all-inclusive. But I’m wondering, who does this include, in this specific context? From this summit, who do you understand is included in this vision of all-inclusive? And also, I don’t know if anybody else has realized this, but I feel like China is quite an important power in the AI governance space, yet the number of Chinese people I’ve seen here is very low. It’s just something I noticed, so it’s still just a reflection, and I wonder how you see this: what does this notion of all-inclusive mean for you, or how have you perceived it here?
Thank you, those were many important and provocative questions you just asked.
I was curious, kind of as a follow-up to our colleague here, about your view on the open-source Chinese models, which are clearly the most intelligent in the open-source space but also clearly have a deep CCP perspective. And so I’m curious: how does that come together in this ecosystem, and how can we leverage it appropriately?
Hello. Thank you, panel, for the wonderful discussion. I’m an intellectual property and business lawyer. So my question is related to intellectual property, specifically to Ravneet. I just wanted to know how you see the openness of AI in the context of intellectual property, as openness is somewhat in tension with intellectual property protections.
Why don’t we start with that question?
Okay, sure. So when you look at intellectual property, you know, there’s a lot of research, development, and innovation which has gone into the development of that technology, and whatever is put in place, there are these copyrights, there are these patent acts which are protecting that. When it comes to the Competition Commission, we come into the picture only if we find that there is an abuse: wherever whatever innovation has been done is being used to ensure that no other players can come into the same market, or to enforce conditions which are unfair. That is the only space where we come in.
Otherwise, the purpose of the commission is not to stifle innovation. We are there, in fact, to protect innovation, because that’s the way to grow. That’s the way markets will grow further: competition will increase, new players will keep coming in, better technologies, better value for the customer. So consumer welfare is one of the very critical things we look at. That’s how we address these issues.
I wonder if, Astha, you can talk to the gender question and that broader question on inclusion.
Yeah. Thank you so much for that question. I think it’s what we’ve all been feeling as well. What I have understood, in a very early, overwhelmed sense, is that inclusion, as Karen was saying, is also being used as a word for adoption. And I think that that is the primary framing that I’m taking away from this: democratization is about market access. The working group also says so. And I think the gender perspective will also go the same way; we’ve seen this again in previous iterations of the tech-will-save-us, financial inclusion, digital financial inclusion variety, which is, like, get people online. And then what ends up happening is that when you realize you’re not able to make money off, like, you know, the bottom 80%, then you start to get drop-offs there. So it is at the moment of that hype cycle of getting everybody online, and then whether we’re able…
I don’t know, maybe you could take the question on Chinese open source AI and how we feel about it.
I’ll try. I mean, one thing I would say: there’s been some news reporting about the fact that this week took place during the Lunar New Year, and during Ramadan as well, and that probably had some impact on participation. So I think that shouldn’t be lost on any of us for this question of inclusion. I mean, I haven’t worked with the Chinese models, so I don’t know, but if they’re open-source models, you should be able to tune them so that they don’t have, you know, at least as much kind of, you know, CCP ideological control. I don’t know if you do that in the training data or at the inference level or where you do it. And there are a lot of companies that are building on the Chinese models, so it seems like, even in the enterprise space, that is clearly not a hurdle to some of the enterprise uses and applications that people want to build on them.
I think we can take two more questions. Okay, so your hand, and I just want to take someone from the middle. You can go. Okay. The alarm just went off. So if you could also make sure that it’s a crisp question that would allow there to also be answers. Yeah.
So I am really interested in how AI is going to impact labor. And one of the biggest concerns in this area is the fact that, you know, AI can train on the intellectual labor of so many people without giving credit, without giving compensation. So there are obviously regulatory approaches to this, but I’m more interested in, like, an upstream approach: new research that’s happening about protecting publicly available data, be it images, be it websites, be it written content, in a way that that data, if it’s used directly by AI, is either useless to it or harmful to it. I think there’s some research happening at the University of Chicago around that, and some other places. So my question here is twofold.
First, is this, like, a good approach to protect intellectual property or data, by creating protection by design? And two, how does it sit with the idea of openness? Because on the one hand, it’s…
Thank you for the question. I just want to make sure we have time for the others. They’re going to kick us out of this room. That’s the final question and then maybe Karen, you can address the labour question.
Hi, I wanted to ask about open washing. We’ve been hearing the term in previous discussions about openness in competition. And I just wanted to ask in terms of enforcement, how should competition authorities assess whether this openness is genuinely lowering entry barriers or whether underlying dependencies still exist essentially. Do we need new analytical tools? Does there need to be a reworking of the frameworks around competition? That’s essentially the question I wanted to ask. Thank you.
Karen, and then Chairperson Kaur, you will have the last word.
Sorry, can you remind me of the very last part of your question? You were talking about…

The labour one.

Yes. I agree with everything that you said, basically: yes, this is a huge problem. Labor exploitation is absolutely happening, both in the exploitation of the labor that is being used to produce the data, and also the exploitation of, like, data workers that are cleaning the data. And I think that just shows, given that the labor exploitation is happening all through the supply chain, that it is kind of inherent in the logic of how these models are being created, and we need to fundamentally rethink that from the ground up.
So when we do a competition assessment, we are looking at numerous economic factors that are taken into consideration. It is not based only on, you know, what has been submitted to us; a very detailed analysis is done to understand whether there is any competition harm. And the other aspect which is looked into is what the effects are: is there an appreciable adverse effect? So we have to establish both things, and this is done on a case-to-case basis, after doing a very rigorous analysis of both the data which is available in the public domain and the analysis done by our internal teams. Only then are we able to determine whether there’s harm to competition.
Okay, thank you all so much for being here. This is such a rich conversation and thank you all for being part of it. Thank you.
“The panel was jointly convened by the AI Now Institute and the AAPTI Institute as a capstone to an intensive week of debate.”
The knowledge base confirms that the AI Now Institute was a convening organization for a panel on openness, but does not mention the AAPTI Institute, so the AI Now involvement is corroborated while the joint role of AAPTI is not documented.
“A central theme introduced early on was the contested meaning of ‘openness’.”
The panel discussion explicitly examined the concept of openness in AI, as recorded in the knowledge base entry on the Global Perspectives on Openness and Trust in AI panel [S1] and the European AI Governance Strategy discussion that foregrounded openness [S114].
“The panel was the only all‑female panel at the summit, highlighting gender‑balanced representation needs.”
While the report’s claim about an all-female panel is not directly confirmed, the knowledge base contains entries discussing gender parity and the importance of diverse representation in AI forums [S108] and broader gender-equality initiatives [S40], providing contextual background.
“Open‑source can serve as a competitive instrument for Europe and other middle‑power nations, enabling “coalitions of the willing” to build digital sovereignty without developing an entire stack from scratch.”
The knowledge base links openness to digital sovereignty and strategic positioning, noting that open-source tools are discussed as means for nations to achieve technological autonomy and collaborative coalitions [S114] and in broader digital sovereignty debates [S117].
“Astha Kapoor, representing Global South perspectives, cautioned that the prevailing narrative of openness as a catalyst for adoption can be hazardous for developing economies.”
Astha (Aastha) Kapoor’s participation in a Global South-focused session on digital governance is recorded in the knowledge base, confirming her role as a speaker representing Global South concerns [S9].
The panel displayed substantial convergence around three core themes: (1) openness must be understood as a socio‑technical, democratic principle rather than a simple technical release; (2) U.S. AI policy is shifting toward industrial and trade levers, limiting formal democratic rulemaking; (3) competition and transparent governance are essential to prevent lock‑in, protect sovereignty and build public trust. These shared viewpoints cut across speakers from academia, government, and civil society, indicating a strong consensus on the need for broader, inclusive, and accountable AI governance frameworks.
High consensus on the definition of openness, the importance of community participation, and the role of competition and transparency. The agreement spans multiple domains (AI, data governance, digital economy, human rights), suggesting that future policy initiatives are likely to incorporate multi‑stakeholder, open‑source and competition‑focused mechanisms to address power asymmetries in AI.
The panel largely converged on the principle that openness should be understood broadly and linked to democratic participation, community involvement, and sovereign capacity. Divergences emerged around the practical implications of openness for Global South countries and the adequacy of existing competition tools to police open‑washing claims. These disagreements highlight the challenge of translating shared normative goals into concrete policy instruments that satisfy both equity concerns and regulatory capacities.
Moderate – while there is strong consensus on the value of openness, the panel split on how openness should be leveraged for development versus sovereignty, and on whether current competition frameworks are sufficient to address emerging open‑washing practices. The implications are that future AI governance discussions will need to reconcile these perspectives, possibly by designing differentiated openness strategies for Global South contexts and by evaluating the need for new competition‑law tools.
The discussion was shaped by a series of pivotal interventions that repeatedly shifted the focus from abstract notions of ‘open’ to concrete political, economic, and community dimensions. Alondra Nelson’s reframing of openness as socio-technical, and her exposé of hidden regulatory levers, set the analytical tone. Anne Bouverot’s middle-power coalition concept broadened the geopolitical frame, while Astha Kapoor’s critique of openness as a potentially exploitative adoption model grounded the debate in Global South realities. Ravneet Kaur linked these ideas to competition law, presenting it as a tangible sovereignty tool. Karen Hao’s vivid case studies and her deconstruction of corporate ‘open’ rhetoric provided practical illustrations and a critical lens that tied the conversation together. Collectively, these comments redirected the panel from a surface-level discussion of technical openness to a nuanced exploration of power, governance, and community agency, ultimately shaping a richer, more actionable dialogue.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.