Open Forum #64 Local AI Policy Pathways for Sustainable Digital Economies

26 Jun 2025 12:00h - 13:00h


Session at a glance

Summary

This panel discussion examined the case for local artificial intelligence innovation that serves humanity’s benefit, focusing on three key dimensions: inclusivity, indigeneity, and intentionality. The session was moderated by Valeria Betancourt and featured experts from various organizations discussing how to develop contextually grounded AI that contributes to people and planetary well-being.


Anita Gurumurthy from IT4Change framed the conversation by highlighting the tension between the unequal distribution of AI capabilities and the increasing demands imposed by climate and energy impacts. She emphasized that AI-related investment, which doubled from $100 billion to $200 billion between 2022 and 2025, is roughly three times global climate adaptation spending, raising concerns about energy consumption and cultural homogenization through Western-centric AI models. The discussion revealed that local AI development faces significant challenges, including limited access to computing infrastructure, data scarcity in local languages, and skills gaps.


Wai Sit Si Thou from UN Trade and Development presented a framework focusing on infrastructure, data, and skills as key drivers for inclusive AI adoption. The presentation emphasized working with locally available infrastructure, community-led data, and simple interfaces while advocating for worker-centric approaches that complement rather than replace human labor. Ambassador Abhishek Singh from India shared practical examples of democratizing AI access through government-subsidized computing infrastructure, crowd-sourced linguistic datasets, and capacity-building initiatives.


Sarah Nicole from Project Liberty Institute argued that AI amplifies existing centralized digital economy structures rather than disrupting them, advocating for radical infrastructure changes that give users data agency through cooperative models and open protocols. The discussion explored various approaches to data governance, including data cooperatives that enable collective bargaining power rather than individual data monetization.


The panelists concluded that developing local AI requires international cooperation, shared computing infrastructure, open-source models, and new frameworks for intellectual property that protect community interests while fostering innovation for the common good.


Key points

## Major Discussion Points:


– **Infrastructure and Resource Inequality in AI Development**: The discussion highlighted the significant AI divide, with infrastructure, data, and skills concentrated among few actors. Key statistics showed AI investment doubled from $100 billion to $200 billion between 2022 and 2025 (three times global climate adaptation spending), with a single company, NVIDIA, controlling 90% of critical GPU production.


– **Local vs. Global AI Models and Cultural Preservation**: Participants debated the tension between large-scale global AI systems and the need for contextually grounded, local AI that preserves linguistic diversity and cultural knowledge. The conversation emphasized how current AI systems amplify “epistemic injustices” and western cultural homogenization while erasing local ways of thinking.


– **Data Ownership, Intellectual Property, and Commons**: A significant portion focused on rethinking data ownership models, moving from individual data monetization to collective approaches like data cooperatives. Participants discussed how current IP frameworks may not serve public interest and explored alternatives for fair value distribution from AI development.


– **Infrastructure Sharing and Cooperative Models**: Multiple speakers advocated for shared computing infrastructure (referencing models like CERN) and cooperative approaches to make AI development more accessible to smaller actors, developing countries, and local communities. Examples included India’s subsidized compute access and Switzerland’s supercomputer sharing initiatives.


– **Intentionality and Governance for Common Good**: The discussion emphasized the need for deliberate policy choices to steer AI development toward public benefit rather than purely private value creation, including precautionary principles, public procurement policies, and accountability mechanisms.


## Overall Purpose:


The discussion aimed to explore pathways for developing “local artificial intelligence” that serves humanity’s benefit, particularly focusing on how AI innovation can be made more inclusive, contextually relevant, and aligned with the common good rather than concentrated corporate interests. The session sought to identify practical solutions for democratizing AI development and ensuring its benefits reach marginalized communities and developing countries.


## Overall Tone:


The discussion maintained a collaborative and solution-oriented tone throughout, with participants building on each other’s ideas constructively. While speakers acknowledged significant challenges and structural inequalities in current AI development, the tone remained optimistic about possibilities for change. The conversation was academic yet practical, with participants sharing concrete examples and policy recommendations. There was a sense of urgency about addressing these issues, but the overall atmosphere was one of thoughtful problem-solving rather than criticism alone.


Speakers

**Speakers from the provided list:**


– **Valeria Betancourt** – Moderator of the panel session on local artificial intelligence innovation pathways


– **Anita Gurumurthy** – From IT4Change, expert on digital justice and AI democratization


– **Wai Sit Si Thou** – From UN Trade and Development Agency (UNCTAD), participated remotely, expert on inclusive AI for development


– **Abhishek Singh** – Ambassador, Government of India, expert on AI infrastructure and digital governance


– **Sarah Nicole** – From Project Liberty Institute, expert on digital infrastructure and data agency


– **Thomas Schneider** – Ambassador, Government of Switzerland, economist and historian with expertise in digital policy


– **Nandini Chami** – From IT4Change, expert on AI governance and techno-institutional choices


– **Sadhana Sanjay** – Session coordinator managing remote participation and questions


– **Audience** – Various audience members including Dr. Nermin Salim (Secretary General of Creators Union of Arab, expert in intellectual property)


**Additional speakers:**


– **Dr. Nermin Salim** – Secretary General of Creators Union of Arab (consultative status with UN), expert in intellectual property law, specifically AI intellectual property protection


Full session report

# Local Artificial Intelligence Innovation Pathways Panel Discussion


## Introduction and Context


This panel discussion, moderated by Valeria Betancourt, examined pathways for developing local artificial intelligence innovation that serves humanity’s benefit. The session was structured around three key dimensions: inclusivity, indigeneity, and intentionality. Participants included Anita Gurumurthy from IT4Change, Ambassador Abhishek Singh from India, Wai Sit Si Thou from UN Trade and Development, Thomas Schneider (Ambassador from Switzerland), Sarah Nicole from Project Liberty Institute, and Nandini Chami from IT4Change.


The discussion was framed by striking statistics from the UN Digital Economy Report: AI-related investment doubled from $100 billion to $200 billion between 2022 and 2025, representing three times global spending on climate change adaptation. This established a central tension about democratising AI benefits whilst addressing resource constraints and environmental impact.


## Round One: Inclusivity and the AI Divide


### Infrastructure Inequality


Wai Sit Si Thou highlighted the profound inequalities in AI development capabilities, noting that NVIDIA produces 90% of critical GPUs, creating significant infrastructure barriers. This concentration represents what speakers termed the “AI divide,” where computing resources, data, and skills remain concentrated among few actors.


Anita Gurumurthy emphasised that despite improvements in chip efficiency, energy demand from data centres continues creating environmental concerns. She noted that efficiency gains are being used to build larger models rather than reducing overall environmental impact.


### Shared Infrastructure Solutions


Ambassador Singh presented India’s approach to addressing infrastructure inequality through public investment. India created shared compute infrastructure with government subsidising costs to less than a dollar per GPU per hour, making expensive AI computing resources accessible to smaller actors who cannot afford commercial cloud rates.


Thomas Schneider described similar initiatives including Switzerland’s supercomputer network and efforts to share computing power globally. Multiple speakers endorsed a CERN-like model for AI infrastructure sharing, where pooled resources from multiple countries could provide affordable access to computing power for developing countries and smaller organisations.


### Framework for Inclusive Development


Wai Sit Si Thou presented a framework for inclusive AI adoption based on three drivers: infrastructure, data, and skills, with equity as the central focus. This approach emphasised working with locally available infrastructure, community-led data, and simple interfaces to enable broader adoption.


The framework advocated for worker-centric AI development that complements rather than replaces human labour, addressing concerns about technological unemployment. Solutions should work offline to serve populations without reliable internet access and use simple interfaces to overcome technical barriers.


## Round Two: Indigeneity and Cultural Preservation


### Epistemic Justice and Cultural Homogenisation


Anita Gurumurthy highlighted how current AI development amplifies “epistemic injustices,” arguing that Western cultural homogenisation through AI platforms erases cultural histories and multilingual thinking structures. She noted that large language models extensively use Wikipedia, demonstrating how AI systems utilise commons-based resources whilst privatising benefits.


The discussion revealed tension between necessary pluralism for local contexts and generalised models that dominate market development. Gurumurthy posed the critical question: “We reject the unified global system. But the question is, are these smaller autonomous systems even possible?”


### Preserving Linguistic Diversity


Ambassador Singh provided examples of addressing this challenge through crowd-sourcing campaigns for linguistic datasets. India’s approach involved creating portals where people could contribute datasets in their local languages, demonstrating community-led data collection that supports AI development reflecting linguistic diversity.


Wai Sit Si Thou emphasised that AI solutions must work with community-led data and indigenous knowledge for local contexts, advocating for approaches that complement rather than replace local ways of knowing.


## Round Three: Intentionality and Governance


### Beyond “Move Fast and Break Things”


Nandini Chami presented a critique of Silicon Valley’s “move fast and break things” approach, arguing that the precautionary principle should guide AI development given potential for widespread societal harm. She emphasised that private value creation and public value creation in AI are not automatically aligned, requiring deliberate policy interventions.


Chami highlighted how path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries, requiring intentional approaches to ensure public benefit.


### Data Governance and Collective Approaches


Sarah Nicole challenged mainstream thinking about individual data rights, arguing that data gains value when aggregated and contextualised. She advocated for collective approaches through data cooperatives that provide better bargaining power than individual data monetisation schemes.


This contrasted with Ambassador Singh’s examples of marketplace mechanisms where individuals could be compensated for data contributions, citing the Karya company that pays delivery workers for data contribution. Nicole argued that individual data monetisation yields minimal returns and could exploit economically vulnerable populations.


### Democratic Participation


The discussion addressed needs for public participation in AI decision-making beyond addressing harms. Chami argued for meaningful democratic participation in how AI systems are conceptualised, designed, and deployed.


Sarah Nicole supported this through advocating for infrastructure changes that give users voice, choice, and stake in their digital lives through data agency and cooperative ownership models.


## Audience Questions and Intellectual Property


Dr. Nermin Salim raised questions about intellectual property frameworks and platforms for protecting content creators. Timothy asked remotely about IP frameworks and natural legal persons in the context of AI development.


The speakers agreed that current intellectual property frameworks are inadequate for the AI era. Gurumurthy highlighted how trade secrets lock up data needed by public institutions, whilst large language models utilise commons like Wikipedia without fair compensation to contributors.


## Key Areas of Agreement


### Cooperative Models


Speakers demonstrated consensus on the viability of cooperative models for AI governance, with support spanning civil society, government, and international organisations. There was strong agreement on shared infrastructure approaches and resource pooling.


### Community-Led Development


All speakers agreed on the importance of community-led and contextual approaches to AI development, representing a challenge to top-down, technology-driven deployment approaches.


### Need for Reform


Multiple speakers identified problems with existing intellectual property frameworks, agreeing that current regimes inadequately balance private rights with public interest.


## Unresolved Challenges


The discussion left critical questions unresolved, including the fundamental tension between pluralism and generalised models: how can smaller autonomous AI systems be made economically viable against dominant large-language models with scaling advantages?


The complexity of developing concrete metrics for safety, responsibility, and privacy in AI systems beyond “do no harm” principles remains challenging, particularly for establishing accountability across transnational value chains.


## Recommendations


Speakers proposed several concrete actions:


– Establish shared AI infrastructure models pooling resources from multiple countries


– Create global repositories of AI applications in key sectors that can be shared across geographies


– Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritised languages


– Implement public procurement policies steering AI development toward human-centric solutions


– Explore data cooperative models enabling collective bargaining power


## Conclusion


This panel discussion revealed both the urgency and complexity of developing local AI innovation pathways serving humanity’s benefit. The speakers demonstrated consensus on the need for alternative approaches prioritising collective organisation, public accountability, and cultural diversity over purely market-driven solutions.


The conversation highlighted that inclusivity, indigeneity, and intentionality must be addressed simultaneously in AI development. However, significant challenges remain in translating shared principles into practical implementation, particularly the tension between necessary pluralism and economic pressures toward centralisation.


The discussion provides foundation for alternative policy approaches emphasising public interest, collective action, and democratic participation in AI governance, opening space for more deliberate, community-controlled approaches to AI development that could better serve diverse human needs whilst respecting resource constraints.


Session transcript

Valeria Betancourt: Welcome, everybody. Thank you so much for your presence here. This session is going to look at the case for local artificial intelligence and innovation pathways to harness AI for the benefit of humanity. I have the privilege to moderate this panel today. As the Global Digital Compact underscores, there is an urgent imperative for digital cooperation to harness the power of artificial intelligence innovation for the benefit of humanity. Evidence so far produced in several parts of the world, particularly in the context of the Global South, increasingly points to the importance of contextually grounded artificial intelligence innovation for a just and sustainable digital transition. This session is going to look at three dimensions of local artificial intelligence: inclusivity, indigeneity, and intentionality. Our speakers, from their expertise and viewpoints, will help us get a deeper understanding of how these dimensions play out for local AI that is contextual and that contributes to the well-being of people and planet. So I have the pleasure of having Anita Gurumurthy from IT4Change help us frame the conversation that we will have. I will invite Anita then to come and set the ground and the tone for the conversation.


Anita Gurumurthy: Thank you, thank you, Valeria, and it’s an honor to be part of this panel. So I think the starting point when we look at a just and sustainable digital transition is to reconcile two things. On the one hand, you have an unequal distribution of AI capabilities, and on the other, you actually have, you know, an increasing set of demands owing to climate and energy and the impacts of innovation on a planetary scale. And therefore, the question is, how do we democratize innovation and look at ideas of scale afresh, because the models we have today are on planetary scale. Both the production and consumption of AI innovation need to be cognizant of planetary boundaries. Essentially, then, what is this idea of local AI? Is it different from ideas of localizing AI? Is there a concept such as local AI? Will that even work? I just want to place before you some statistics, and we have a colleague online who will speak about this from UN Trade and Development, from the Digital Economy Report that was brought out by the UN, and I want to quote some statistics. Between 2022 and 2025, AI-related investment doubled from $100 to $200 billion. By comparison, this is about three times the global spending on climate change adaptation. So, we’re investing much more on R&D and for AI and much less on what we need to do to, in many ways, look at the energy question and the water question. Supercomputing chips have enabled some energy efficiency, but market trends suggest that this is not going to make way for building or for developing models differently. It’s going to support bigger, more complex large-language models, in turn mitigating the marginal energy savings. And I’m going to talk a little bit about the future of computing and how it’s going to change the way we do things that are possible because chips are becoming more energy efficient. 
So the efficiencies in compute are really not necessarily going to translate into some kind of respite for the kind of climate change impacts. Now, I want to give you, you know, this is just for shock value. Energy demand of data centers. And this is a very, very vital concern. We also know that around the world there have been water disputes, you know, because of this. So there is this big conundrum, you know, we do need and we do want small is beautiful models. But are they plausible? Are they probable? And while there is the strong case for diversified local models, I want to really underscore that there are lots of people already working on this. And we have some people, you know, governments that are investing in this. And there are communities that are investing in this. And these are very important because from an anglocentric perspective, you know, we think everything is working well enough. You know, LLMs are doing great for us. Chat GPT is very useful. And certainly so, you know, to some extent. But what we ignore is that there is a western cultural homogenization and these AI platforms amplify epistemic injustices. So we are certainly doing more than excluding non-English speakers. We are changing the way in which we look at the world. We are erasing cultural histories and ways of thinking. So we need to retain the structures of our multilingual societies so those structures allow us to think differently and decolonize scientific advancement and innovation in AI. So how do we build our own computational grammar? And this is a question I think that’s really important. And we reject the unified global system. But the question is, are these smaller autonomous systems even possible? And we do this for minoritized communities, minoritized languages. And the second question is, many of the efforts in this fragmented set of communities are really not able to come together. And perhaps there is a way to bring them in dialogue and enable them to collaborate. 
So this tension between pluralism that is so necessary and generalized models that seem to be the way, the only way AI models are developing in the market, is this tension is where the sweet spot of investigation actually lies. And with that, I revert back to you.


Valeria Betancourt: Thank you, Anita. Thank you for illustrating also why enabling public accountability is a must in the way in which artificial intelligence is conceptualized, designed, and deployed. Let’s go to the first round of the conversation. I mentioned that we will be digging into three dimensions of local AI: inclusivity, indigeneity, and intentionality. The first round will focus on inclusion. And the question for Wai Sit Si Thou, from the UN Trade and Development Agency, and also Lynette Wambuka from the Global Partnership for Sustainable Development Data, is: what are the pathways to AI innovation that are truly inclusive? And how can local communities be real beneficiaries of AI? So let me invite our panelists to please address this initial question. Can we go with Wai Sit Si Thou? Yes, joining remotely. Welcome. Thank you. Thank you very much.


Wai Sit Si Thou: Just to double-check whether you can see my screen and hear me well. Yes. Yes. Okay, perfect. So my sharing will be based on this UNCTAD flagship publication that was just released two months ago, titled Inclusive AI for Development. So I think it fits into the discussion very well. And to begin with, I would just want to highlight three key drivers of AI development over the past decades. And they are infrastructure, data, and skills. And if we want to look into the question of equity, we need to focus on these three key elements. Because right now we can see a significant AI divide. For example, in terms of infrastructure, one single company, NVIDIA, actually produces 90% of the GPUs, which are a critical component of computing resources. And we witness the same kind of AI divide in data, skills, and also other areas like R&D, patents, scientific publications on AI, etc. So this is the main framework that helps us to dive into the discussion on how to make AI inclusive. And the first message that I have is on the key takeaways to promote inclusive AI adoption. This is featured in our report on many successful AI adoption cases in developing countries. And based on the framework that I just shared, on infrastructure, one very important takeaway is to work around the locally available digital infrastructure. Right now, around the world, we still have one third of the population without access to the Internet. So AI solutions that are able to work offline would be essential for us to promote this adoption. And that is what I meant by working around the locally available infrastructure. And the second point, on data: it is essential to work with community-led data and also indigenous knowledge, so we can really focus on the specific problem, on the issue in the local context. And the third key takeaway is the skills that I mentioned. We should use simple interfaces that help users to use all these AI solutions.
And the last one is on partnership, because from what we investigated, much of this AI adoption happens at the local level. The second message that I have is on the worker-centric approach to AI adoption. From previous technological evolutions, we understand there are four key channels through which AI may impact productivity and the workforce. On the top left, we start with the automation process, where AI could substitute human labor. And then on the top right-hand side, we have AI complementing human labor. And the other two channels are deepening automation and creating new forms of jobs. From previous experience, automation, or technology adoption generally, actually focuses on the left two bubbles, that is, replacing human labor. But if we really want to have an inclusive AI adoption that benefits everyone, we should focus on the right-hand side, on how AI can complement human labor and create meaningful new jobs. And with that, we need to focus on three areas of action. The first one is, of course, empowering the workforce, from basic digital literacy to re-skilling and up-skilling, to help workers adapt to this new AI way of working. And the second very important point is what I also mentioned before, the engagement with the workers. So we work with the community, we work with the workers, on the design and implementation of AI, to make sure that it fits the purpose and also gains the trust needed for this whole AI adoption process. And the last point is about fostering the development of human-centric AI solutions. That would be the major responsibility of the government, through public procurement and other tax and credit incentives that steer AI adoption towards an inclusive and worker-centric approach. And the last thing that I want to highlight is that at the global level, there are also four key areas that we can work on. As Anita mentioned, accountability is key.
What we want to advocate here is a public disclosure accountability mechanism that could reference the ESG reporting framework, which is really mature nowadays in the private sector. An AI equivalent could happen with public disclosure on how the AI works and its potential impact. So this is the accountability piece. And the second one is on digital infrastructure. To provide equitable access to AI infrastructure, a very useful model that we can learn from is the CERN model; CERN is the world’s largest particle physics laboratory, right here in Geneva where I am working. And this model could help pool resources to provide shared infrastructure for every stakeholder. And the third one is on open innovation, including open data and open source, which can really democratize the resources for AI innovation. And what we need is to coordinate all these fragmented resources for better sharing and better standards. And the last point that I want to highlight is on capacity building. We think that an AI-focused centre and network, modeled after the UN Climate Technology Centre and Network, could help in this regard to provide the necessary technology support and capacity building to developing countries. And of course, South-South cooperation could help us address common challenges. For example, in East Africa, Rwanda may not have enough data sources to train AI with the local language of Swahili. But grouping the East African countries together, we can pool this common Swahili language data in the region to have better AI training. So these are some of the recommendations that I have, and I am happy to engage in further discussion.


Valeria Betancourt: Thank you. Thank you. Thank you very much, Jackie. So obviously, a multidimensional approach is needed for the dividends of AI to be distributed equally. With that, I would like to give the floor to Lynette Wamboka, Global Partnership for Sustainable Development Data, to also help us to… She’s not here? Yeah. OK, sorry. So is anyone on the panel willing to contribute to this part of the conversation, in relation to how to bring the benefits of AI to local communities, before we move to the next round? OK, if not, we can check whether there are any reactions from the remote participants, any questions in relation to this point, or from here in the audience. You are also welcome to comment and provide your viewpoint. OK, if not, we can move to the second round, which is going to look at indigeneity. What radical shifts do we need in artificial intelligence infrastructure for an economy and society attentive and accountable to the people? I will invite Ambassador Abhishek Singh, Government of India, to comment, and Sarah Nicole from Project Liberty Institute to also help us address this dimension of local AI. Please, Ambassador. Thank you.


Abhishek Singh: Thank you for convening this and bringing this very, very important subject to the fore: how do we balance wider AI adoption, building models, building applications, vis-a-vis the energy challenges that are there, which hamper in some ways the goals towards sustainable development that we had all agreed to. So, it’s not an easy challenge for governments anywhere, because on one hand we want to take advantage of the benefits that are going to come, and on the other hand we want to limit the risks that are coming on climate change and sustainable development. So, the approach towards local AI seems to be good, but to make that happen there will be several necessary ingredients. Many of them were highlighted by our speaker from UNCTAD very succinctly, but I would like to mention what we observe in India, which in many ways, given the diversity that we have, the linguistic diversity, cultural diversity, contextual diversity, is kind of a microcosm of the whole world. How do we ensure that whatever we build in a country of our size and magnitude applies to all sections of society, so that everybody becomes included? In that, one key challenge of course relates to infrastructure, because AI compute infrastructure is scarce, it’s expensive, it’s not easy to get, and very few companies control it. To democratize access to compute infrastructure, the model that we adopted in India was to create a common central facility, of course provided through private sector providers, through which this compute is available to all researchers, academicians, startups, and industry, those people who are training models or doing inferencing or building applications. And we worked out a mechanism so this compute becomes available at an affordable cost.
We underwrite almost 40% of the compute cost from the side of the government, so the end user gets it at a rate which is less than a dollar per GPU per hour. So, this model has worked, and I do believe in the solution that was proposed earlier, building a CERN for AI. If we can create a global compute infrastructure facility across countries, with several foundations and multilateral bodies joining in and creating this infrastructure, making it available, it can really help. So, we have to make sure that we really solve the access to infrastructure challenge that we have. The second key ingredient for building AI applications and models is, of course, about data. How do we ensure that we have data sets available? It’s okay to desire local AI models, contextual models, but until we have the necessary data sets in all languages and all contexts, all cultures, it will not really happen. In India, we had data sets in English and maybe the major Indian languages, but when it came to minor Indian languages, we had very limited data sets. So we launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could come to a portal and contribute data sets. That has really helped. So, that model can, again, be made global, and that’s what we are trying to do: making data sets available in all languages, contextual and linguistic data sets. That can be, again, an innovative solution towards making the data sets more inclusive and more global. The third key ingredient we need to enable, if we have to push local AI, is capacity-building and skills. AI talent is also rare and scarce; it’s limited.
So we need to provide training to students and to AI entrepreneurs on how to train models, even how to wire up 1,000 GPUs; it requires the necessary skills. If we can take up a capacity-building initiative, driven centrally through a UN body or the Global Partnership on AI, and ensure that those capacity-building initiatives are implemented, training people in doing inferencing, building models and using AI for solving societal problems, it can really, really help. And of course, we should build AI use cases in key sectors, whether it's healthcare, agriculture or education, and create a global repository of AI applications which can be shared across geographies. If we are able to take these three steps, across infrastructure, data sets, training and capacity-building, and a repository of use cases, I think we'll be able to push forward the agenda of adopting AI and building local AI at some stage. Absolutely, Ambassador. Definitely, AI models have to reflect contextually grounded innovation norms and ethics. Then I would like to invite Sarah Nicole from Project Liberty Institute.


Sarah Nicole: Please share your thoughts with us on this issue. Yeah, thank you very much for the invitation to give this short lightning talk, and thank you for the first insights as well. I will be a little bit controversial, and I really appreciate the way the question was framed, particularly the radicality aspect of it. The mainstream view is that AI is a completely disruptive technology, that it changes everything in our societies, our economies, our daily life. But I would argue quite the contrary. AI is essentially a neural network that replicates the way the brain works. It analyzes specific data sets; from those data sets it finds connections, creates patterns, and uses those patterns to respond to certain tasks like prompts, search, and so on. So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know. Necessarily, the current structure, which is highly centralized and strips users' data out of their control, is reinforced by AI, and AI also reinforces the big tech companies and everything we have known for decades. It benefits from the centralization of the digital economy that is necessary to train its models. So AI is very much the result of the digital economy that has been in place for many, many years. And if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we bring to it. At Project Liberty Institute, we believe that every person, user, citizen, call it what you want, deserves to have a voice, a choice and a stake in their digital life. And this starts by giving users data agency, which requires profound changes in infrastructure design. In the digital economy, data is not just a byproduct; it is political, social and economic power that is deeply tied to our identities.
And most of the network infrastructure currently in place has been captured by a few dominant tech platforms, so necessarily everything built on top of it falls under this proprietary realm, sacrificing, of course, user empowerment, transparency, privacy and so on. AI rapidly shapes everything that we do in our lives, so we need to rethink this infrastructure model, because it shapes data agency. And Anita, you were great to launch this report with us in Berlin last month; I'll be happy to share this report, which we wrote specifically to equip policymakers to think through digital infrastructure questions. Infrastructure for agency is really what we're focusing on at the Institute. We are the steward of an open-source protocol called DSNP, which builds directly on top of TCP/IP. DSNP allows users better control of their own data by enabling them to interact with a global, open social graph. What this means is that your social identity on DSNP is not tied to one specific platform, as it is today on most tech platforms, but exists independently, which allows portability of your data, but also interoperability. This is a core piece of infrastructure that represents a radical shift towards an economy and society attentive and accountable to the people. But unfortunately, it would be a little too good to be true if all that was needed was a few lines of code and some specs and protocols. Just as important is the business model, and there's a lot of work to be done here, because to this day the most lucrative business model is the one that scrapes users' data and then uses it for advertising, and we have yet to find a scalable alternative. And in order to build what we call the fair data economy, we are in need of metrics.
We need to be better at articulating what we mean by safety, responsibility and privacy: what exactly do we mean behind these beautiful words? We need qualitative and quantitative metrics to define all this. Likewise, we need to go beyond the do-no-harm principle to really shape a positive vision of technology that is socially and financially benefiting everyone. One of the approaches that we are exploring at the Institute is that of the data cooperative. The cooperative model has a legacy of hundreds of years, and it's actually pretty well fit for the age of AI. There is a recent report on this that I'd be happy to share with those who want it, but let me extract two points from it that I think are interesting for the sake of this discussion. Data cooperatives allow us to rethink the value of data in a collective manner, and I think that's very important, because the debate is very much structured around personal data and individual data, but the issue is so structural that we need to empower users with collective bargaining tools against big tech corporations. The second point is that, in the age of AI, data needs to be of high quality, and data cooperatives provide the right incentive for data contributors to improve the quality of their data, because it contributes to the greater financial sustainability of their own co-op; so it also serves data-pooling purposes. And of course there are many other models that exist, data commons, data trusts, you name it. A radical shift for a better economy will in any case need many tries and many stakeholders to be involved, and we are already seeing this every day in multiple communities across the world. But one last thing that I wanted to mention here today: what I just said, I don't think it should be considered radical at all.
We own our identity in the analog world; we don't accept others making billions on top of our own identity, so why should it be any different in the online world? So all in all, the goal is really to have a voice, a choice, and a stake online, and I don't think this is radical. I think this is pretty much common sense.


Valeria Betancourt: Thanks. Thank you, Sarah. I think you have helped us pave the way very nicely to the next round of conversation, because if we want AI to be meaningful to people, the intention behind it is absolutely crucial. With that, I would like to invite Ambassador Thomas Schneider from the government of Switzerland and Nandini Chami from IT4Change to share their views on how the transition pathways of AI innovation can be steered by the intention of the common good.


Thomas Schneider: Please share your views on that. Ambassador, welcome. Thank you, and thank you for making me part of this discussion, because this is a discussion of fundamental importance, also for a country like mine, maybe not necessarily a poor but definitely a small one, and you've highlighted some of the aspects. How can a small actor cope, survive, call it whatever you want, in a system where, by design, the big ones have the resources and the power? But the question is, does it have to be like this, or would there be alternatives? I think we have already heard a number of elements of how the small ones would need to cooperate in order to benefit from this as well. And of course we know about the risks and all of this, but I think it would be a mistake not to use these technologies, because the potential is huge. Being an economist and a historian, not a lawyer, actually, much of this reminds me of the first industrial revolution, where Switzerland was a country that was lagging behind. They already had trains and railways in the UK, and we were still walking around in the mountains. But then we caught up quite quickly. It wasn't enough just to buy locomotives and coaches from the UK or produce them ourselves. We had to realize that you need to build a whole ecosystem in order to allow you to use this technology and make it your own, and some of this has been mentioned. What struck me lately: I just read an article about the demise of Credit Suisse, the Swiss bank, and it struck me again that this bank was created by the politician and the people who actually brought the railways to Switzerland and built the railway system. So what did they do? They did not just buy coaches and build railways, bridges and tunnels. They also built the ETH Zurich, because they knew we need to have engineers.
We need people who have the skills to actually drive these things and build the infrastructure. So they did not just create the railway; they created the first polytechnical universities. And they knew that, as a small country, we do not have the resources: we need somebody that gives us credit, we need a financial system around it, because you can have nice ideas, but if you do not get the resources for them, nothing happens. It is remarkable that this was all driven basically by one person plus his team in the 1840s and 50s. And I think we need to understand, and we have heard a lot of input here, what we need, each community for itself, in order to be able to create our own ecosystem, and how to cooperate with others that are in the same situation. It can be communities in different countries; it can also be communities at the other end of the world that may actually create a win-win situation with you. So I think this is really important: for the small actors, how can we break this vicious cycle of scaling effects that we cannot deliver on our own? And we have heard some elements that are important for us in Switzerland. The cooperative model is actually something many of our economic success stories are still built on. The biggest supermarket in Switzerland was created 100 years ago as a cooperative. It is still a cooperative, not as much as it used to be, but legally it is a cooperative, and every customer can actually vote. So every few years there's a discussion: should this supermarket be able to sell alcohol or not? They want to, but the people say no. And we have insurances that are cooperatives, and so on; that's one element. And another element is sharing computing power.
In Switzerland we started working with NVIDIA on their chips 10 years ago, and now we have the result: we have one of the 10 biggest supercomputers, apart from the private ones of the big companies, of course, here in Switzerland. We cooperate with Lumi, with the Finns, and we have started to set up a network to share computing power across the world for small actors, universities and so on. This initiative is called ICAIN. So there's lots to do, and I think if we make a nice summary of the elements that we have heard so far, that gives us some guidance for the next steps.


Valeria Betancourt: Thank you, thank you Ambassador Nandini, please help us with your views. It’s a very interesting conversation, and I think we are having this at a very timely moment,


Nandini Chami: when there is a recognition that if we are talking about a just and sustainable digital transition, we need to get out of the dominant AI paradigm and move towards something else. So I'll begin by sharing a couple of thoughts about the challenges we face in steering AI innovation pathways for the common good. These reflections come from the UNDP's Human Development Report of 2025, which focuses on the theme of people and possibilities in the age of AI. The first challenge that we find in this report is that, in shaping the trajectories of AI innovation, private value and public value creation goals are not always necessarily or automatically aligned. To quote from the report: despite AI's potential to accelerate technological progress and scientific discovery, current innovation incentives are geared towards rapid deployment, scale, and automation, often at the expense of transparency, fairness, and social inclusion. So how do we shape these with intentionality, consciously? That is very important. The second insight from this report is that, since development is a path-dependent project, these path dependencies mean that AI adoption does not automatically open up routes to economic diversification. We just heard reflections on ecosystem strengthening, and this report adds a similar lens: the economic structures in many developing countries and LDCs may limit the local economy's potential to absorb productivity spillovers from AI, and there may be fewer and weaker links to high-value-added activities. This actually means that there needs to be complementarity between development roadmaps and AI roadmaps: the objectives of development, a contextual mapping of strengths, weaknesses, opportunities and challenges in terms of where the potential for economic diversification lies, and where we use AI, as a general-purpose technology, for bridge-building.
These become extremely contextually grounded activities, and we need to move beyond an obsession with AI economy roadmap development as just a technological activity and look at it as an ecosystem activity. From this perspective, I would like to share, from our work at IT4Change, three to four reflections on what it would take to make the techno-institutional choices that will shape these innovation trajectories in the directions that we seek. First, we come to the issue of technology foresight; in the panel we were also discussing the do-no-harm principle. Oftentimes in these debates we hear a discourse of inevitability, of AI as a Frankenstein technology that will definitely go out of control, and there is a lot of long-termist alarmism that we will no longer be able to control AI. But this starts distracting from setting limits on AI development in the here and now. Operationalizing and acting on the do-no-harm principle means that, instead of moving fast and breaking things, we probably need to go back to the precautionary principle of the Rio Declaration in thinking about how we shape technologies. And secondly, as the Aarhus Convention specifies in the context of environmental decision-making, we need to be talking about the right of the public to access information and participate in AI decision-making, so that we are not just looking at the rights of affected parties in the AI harms discourse.
The second point is that in AI value chains, which are transnational, very complex, and have multiple actors, system providers, deployers and the subject citizens on whom AI is finally deployed, how do we fix liability for individual, collective, and societal harms, and how do we update our product fault liability regimes so that the burden of proof is no longer on the affected party to prove the causal link between the defect in a particular AI product or service and the harm that was suffered? Given the black-box nature of this technology, thinking this through becomes very important. Thirdly, when we look at technological infrastructure choices, open affordances are of course very important as a starting point, but it is also useful to remember that openness does not automatically remove barriers to innovation and inclusivity: as experiences of building open-source AI on top of existing stacks have shown, it is very much possible for dominant big tech firms to retain control of the primary infrastructure. That is what this research shows. And my last point is about policy support for fostering alternatives, particularly federated AI commons thinking. There are alternative visions, such as community AI, that focus on task-specific models rooted in the experiences of specific communities. At IT4Change, we are exploring the development of such a model with the public school education system in Kerala, for instance.
There have also been proposals in G20 discussions, as part of the T20 dialogues, about how we shape public procurement policies and the direction of public research funding towards the development of shared compute infrastructure, which came up in our discussion, and about how we govern the participation of different market actors on public AI stacks and their use of public AI compute.


Valeria Betancourt: Thank you so much, Nandini. Let me check with Sadhana whether there are remote participants who want to make interventions or have questions, and I invite you all to get ready with your questions, comments and reactions. Thank you, Valeria.


Sadhana Sanjay: I hope everyone can hear me. There is one question in the chat from Timothy, who asks: digital transformation is built upon intellectual property rights frameworks, means of ownership and trade. When considering existing trends, projects and works that are resourced versus those that lack resourcing, how are natural legal persons provided the necessary support to retain legal agency, both for themselves and to support traditional roles such as those of a parental guardian or others? Thank you, Sadhana. Would anyone in the panel like to address that question? I didn't hear the question clearly; this is about intellectual property. If you could repeat the second half: I got the first half, but not the second. If I understand correctly, the question is asking: given that ownership rights are conferred on the developers of AI and on non-natural legal persons such as corporations, how can natural legal persons such as ourselves retain our rights and agency over the building blocks of AI, both individually and through those who might be in charge of us, such as guardians and custodians?


Abhishek Singh: One part is that, the way the technology is evolving, there are IP-driven solutions and there are open-source solutions. What we need to emphasize is promoting open-source solutions to the extent possible, so that more and more developers get access to the APIs and can build applications on top of them. The second part is that, ultimately, somebody has to pay for these solutions; it's not that everything will come for free. And those companies which are known to provide services for free monetize your data. We all know about it; there have been big tech companies indulging in that. So at some point we will have to take a call: if I want to use a service, like the ChatGPT service you mentioned, which helps me improve my efficiency and productivity, either I pay for the service or I contribute my data to their assets. That call, individuals, companies and societies will need to take: what is the cost of convenience, what is the cost of getting a service, and in what form do we pay it. The other thing which can be done, which is very complex, is to work out a marketplace kind of mechanism in which every service is priced. If we are contributing data sets, if I am contributing to building a corpus in a particular language, can we incentivize those who contribute the data sets? In fact, there is a company called Karya in India which is doing that, actually paying people for contributing data sets, which ensures that those who are part of the ecosystem benefit from it. Then there are companies which have started incentivizing food delivery boys and cab drivers, Uber drivers, so that when they drive around they collect details about city amenities, garbage dumps, missing manhole covers, street lights and traffic lights not functioning, sharing that information with the city government,
and in turn they get paid for doing that service. So depending on what a data contributor is contributing, and in what form, there can be models and mechanisms in which a cost and revenue sharing model can be developed. It will require specific approaches for specific use cases, but it's not that it cannot be done.


Valeria Betancourt: Thank you, Ambassador. Maybe if I can add, there are a number of good examples. First of all, property rights are not carved in stone.


Thomas Schneider: This is something that can be, and will need to be, reformed and renegotiated. With what outcome, and how, is another question. Because otherwise, in many ways, property rights no longer work for journalism and media either. So we will have to develop a new approach and ask what the original idea behind property rights was. The idea may be right, but then we need to find a new approach. That's one element, what you may do on the political level, on the market level. The other is to try to find ways to create a fair-share system for benefits. One way is to monetize it, to give every data transaction a value. And the other, and I think we have already heard this, is to think about it not only from the individual, but from society. Switzerland is a liberal country, but there are many things people do not want privatized, because they think they should be in public hands, like waste management or hospitals; it's a very hot issue. So I think we should think about how, as a society, we want to develop our health system, for instance. Health data is super important, super valuable, and of course the industry needs a lot of money to develop new pharmaceutical products. But how can we organize ourselves as a society, because as individuals we are too weak? The whole society can say: OK, we are offering something to businesses so that they can develop things and make money, but we want a fair share of this, because we are in a way your research lab. And if you are a big group, then you also have political weight. Then you need to find creative, concrete ways to actually get this done. So you need to work on the idea and the concept and on defining ways. But it's a super important question.
And if I can build on those two and fully agree with what’s been said.


Sarah Nicole: The question of having a stake in your data has often been framed at a personal level, and actual studies have shown that you would make very, very little if you were to monetize your own data: a couple of hundred euros or dollars a year. And the worst thing is that it could also lead to systems where poor people would spend lots of time online to generate very small revenue from this. So the answer will not be found at an individual level but at a collective one, because it is when the data is aggregated, when the data is in a specific context, that it gains value. And here again, let me bring in the cooperative model. It's true that, theoretically, there is a lot of work on data cooperatives; practically speaking, they have yet to emerge at scale. One of the reasons is that it is not natural for businesses to turn into a cooperative model, because it is perceived as this socialist or communist thing, which it is not, and hundreds of years of legacy have proven that. But there are many data cooperatives that pool specific data with a specific type of expertise, and then allow AI to be trained on this expertise and high-quality data, and we can have better rights and better protections for individuals once the data is aggregated in common. So the mentality really needs to shift from this personal-data framing of the discussion, which I think also benefits a lot of the big tech companies, to a more collective and organizational perspective.


Valeria Betancourt: Thank you. Anita.


Anita Gurumurthy: I don't think there is an easy answer, and I think we need to step up and rethink, as people have said, this entire idea of ownership. Two things I would like to say. First, for developing countries particularly, in our global agreements on trade and intellectual property we oftentimes cede our space to regulate in the public interest back in our countries. Often, transnational companies use the excuse of trade secrets to lock up data that should otherwise be available to public transportation authorities, public hospitals, etc. And perhaps we do need to strongly institute exceptions in IP laws so that society is able to use that threshold of aggregate data that is necessary to keep our societies in order; I'm using that terminology in a very, very broad sense. But that is needed: you just can't lock up that data and say it's not available because it's a trade secret. The second is that the largest source for the large language models, especially ChatGPT, was Wikipedia. So you actually see free riding happening on top of these commons. And therefore that's another imperative, I think, for us to rethink the intellectual property regime: well, we will do open source, but what if my open source, meant for my community, is actually servicing profiteering? So we do need laws to think through those data exchanges; whether it's agricultural data or any other, data commons and public data sets do need to protect society from free riding and also from foul dealing, which is when the exploitation reaches a very, very high threshold. The last point I want to make is that we have been talking about the nudge economy that generated the data sets, but what we see today is an economy of the prompt: on top of the AI models, the way in which you define your prompts as users is perfecting the large language models.
So this is a complexity from nudge to prompt, which means that all of us are feeding the already monopolistic models with the information necessary for them to become more efficient, which effectively means that the small can never survive.


Valeria Betancourt: So what you do for the small to survive is actually a question of societal commons, so that this economy of the prompt, and of profiteering from the prompt, can actually be curtailed. And I think these are future questions for governance and regulation, but essentially also for international cooperation. That's excellent. Okay, let me now invite your comment, please, or your question.


Audience: My name is Dr. Nermin Salim, Secretary General of the Creators Union of Arab, which has consultative status with the UN, and, as it happens, I am an expert in intellectual property. I just want to comment on this question about the intellectual property of AI. In WIPO, the international organization for intellectual property, they have not yet reached an ideal convention for protecting AI, because it is divided between two sections: AI as a tool for sharing content through digital technology, and the content which is generated by AI. For this, we in civil society launched, at the last IGF in Riyadh, a platform for protecting users' content in the digital era. When users want to share their content on social media or the internet, they can submit it to the platform and receive a QR code, verified by blockchain, which goes to the government ministry responsible for registration, so that the authority can verify it and the user can establish personal property in case of a conflict between users. That's just a comment on the question.


Valeria Betancourt: We are a minute away from the end of the session. I would like to invite every one of you on the panel to share some very brief final remarks, like


Nandini Chami: 10 seconds with the highlight that you would like to leave the audience with, please. Let me start with you, Nandini. I think the discussion is showing us that there is a long history to the problem of balancing incentivizing innovation with preserving the common heritage, particularly in knowledge and IP, and AI is a new instantiation of that problem. Yes, Ambassador Schneider. Thank you, I'll just say this was really exciting, and I hope we can follow up on this because it's super important; I really thank you for this discussion. Sarah.


Sarah Nicole: Thank you, that will be the last thing from me as well. Ambassador. Yeah, my takeaway is that the


Abhishek Singh: cooperative model for infrastructure and data sets works, and then maybe for models and applications we need to push forward more for open-source models, without the concerns of IP and other issues. Absolutely. I'm thinking that the public and the local cannot exist without each other. Yeah, absolutely, and thank you so much for your presence; no easy answers, as you said. And oh yes, I'm sorry, Jackie, please, your final remarks. Yes, thank you. I think data is a very strategic and key asset for both AI and the digital economy, and with that I just want to


Audience: share with you that we have recently established a multi-stakeholder working group on data governance, so hopefully that can provide some recommendations on how we can develop a good data governance framework. Thank you. Absolutely. So, no easy answers; some of the responses and solutions


Valeria Betancourt: are coming from the margins, from academia, from social movements and from different groups impacted by digitalization. So yes, let's keep the conversation going, and let's use this space, and hopefully also the WSIS+20 review, to define the grounds for different approaches and a different paradigm for AI for the common good. So, thank you so much for your presence and to all of you for your contributions. Thank you so much. Thank you.



Anita Gurumurthy

Speech speed

149 words per minute

Speech length

1094 words

Speech time

438 seconds

AI investment doubled from $100 billion to $200 billion between 2022 and 2025, three times global climate adaptation spending

Explanation

Gurumurthy highlights the massive financial resources being directed toward AI development compared to climate adaptation efforts. This disparity shows misaligned priorities given the urgent need for climate action and the environmental costs of AI infrastructure.


Evidence

Statistics from the UN Trade and Development Digital Economy Report showing AI investment doubling from $100 billion to $200 billion between 2022 and 2025, which is three times global spending on climate change adaptation


Major discussion point

Resource allocation priorities between AI development and climate adaptation


Topics

Development | Economic


Energy demand of data centers creates water disputes and climate concerns despite chip efficiency improvements

Explanation

Despite technological improvements in chip efficiency, the overall energy and water consumption of AI infrastructure continues to grow. Market trends suggest these efficiencies will support larger, more complex models rather than reducing environmental impact.


Evidence

References to water disputes occurring globally due to data center demands and the trend toward bigger, more complex large language models that offset marginal energy savings


Major discussion point

Environmental sustainability of AI infrastructure


Topics

Development | Infrastructure


Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories

Explanation

Current AI systems, dominated by Western perspectives and the English language, are not just excluding non-English speakers but actively changing worldviews and erasing diverse cultural knowledge systems. This represents a form of digital colonialism that threatens cultural diversity.


Evidence

Discussion of anglocentric perspective in AI development and how LLMs change ways of thinking and erase cultural histories


Major discussion point

Cultural preservation and decolonization in AI development


Topics

Sociocultural | Human rights principles


Need to retain multilingual society structures and decolonize scientific advancement in AI

Explanation

Preserving multilingual societies is essential because different language structures enable different ways of thinking and understanding the world. Decolonizing AI means building computational systems that reflect diverse epistemologies rather than imposing a single worldview.


Evidence

Emphasis on how multilingual structures allow different ways of thinking and the need to build ‘our own computational grammar’


Major discussion point

Decolonization and multilingualism in AI


Topics

Sociocultural | Human rights principles


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Tension exists between necessary pluralism and generalized models dominating market development

Explanation

There’s a fundamental conflict between the need for diverse, culturally specific AI models and the market’s tendency toward unified, generalized systems. This tension represents the key challenge in developing truly inclusive AI that serves different communities.


Evidence

Discussion of the ‘sweet spot of investigation’ lying in the tension between pluralism and generalized models


Major discussion point

Balancing diversity with scalability in AI development


Topics

Sociocultural | Economic


Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities

Explanation

Transnational companies often use intellectual property protections to prevent public institutions from accessing data that would be beneficial for society. This creates barriers to public service delivery and societal functioning.


Evidence

Examples of public transportation authorities and public hospitals being denied access to data due to trade secret claims


Major discussion point

Public interest exceptions in intellectual property law


Topics

Legal and regulatory | Human rights principles


Agreed with

– Thomas Schneider
– Sadhana Sanjay

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation

Explanation

Major AI systems like ChatGPT have been trained extensively on freely available resources like Wikipedia, representing a form of exploitation of digital commons. This highlights the need for legal frameworks to protect community-created resources from commercial exploitation.


Evidence

Specific mention that Wikipedia was the largest source for large language models, especially ChatGPT


Major discussion point

Protecting digital commons from commercial exploitation


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable



Wai Sit Si Thou

Speech speed

140 words per minute

Speech length

951 words

Speech time

407 seconds

AI divide exists with NVIDIA producing 90% of GPUs, creating significant infrastructure inequality

Explanation

The concentration of critical AI infrastructure in the hands of a single company creates massive inequalities in access to AI capabilities. This monopolistic control over essential computing resources represents a fundamental barrier to democratizing AI development.


Evidence

Statistic that NVIDIA produces 90% of GPUs, which are critical components for AI computing resources


Major discussion point

Monopolization of AI infrastructure


Topics

Infrastructure | Economic


Three key drivers for inclusive AI: infrastructure, data, and skills with focus on equity

Explanation

Achieving inclusive AI requires addressing inequalities across three fundamental areas: access to computing infrastructure, availability of diverse datasets, and development of necessary technical skills. These elements must be developed with explicit attention to equity rather than assuming market forces will provide fair access.


Evidence

Framework analysis showing AI divide across infrastructure, data, skills, R&D, patents, and scientific publications


Major discussion point

Foundational requirements for inclusive AI development


Topics

Development | Infrastructure


Worker-centric approach needed focusing on AI complementing rather than replacing human labor

Explanation

Rather than following historical patterns of automation that replace workers, AI development should prioritize applications that enhance human capabilities and create meaningful employment. This requires intentional design choices and policy interventions to steer technology toward complementary rather than substitutional uses.


Evidence

Four-channel framework showing automation vs. complementation paths, with emphasis on right-hand side channels of complementing human labor and creating new jobs


Major discussion point

Human-centered AI development approach


Topics

Economic | Development


AI solutions must work with community-led data and indigenous knowledge for local contexts

Explanation

Effective AI applications for local communities require incorporating community-generated data and traditional knowledge systems rather than relying solely on external datasets. This approach ensures AI solutions address specific local problems and contexts.


Evidence

Emphasis on working with community-led data and indigenous knowledge to focus on specific local problems and issues


Major discussion point

Community-centered AI development


Topics

Sociocultural | Development


Agreed with

– Abhishek Singh
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


AI solutions should work offline to serve populations without internet access

Explanation

Given that one-third of the global population lacks internet access, AI solutions must be designed to function without constant connectivity. This technical requirement is essential for ensuring AI benefits reach underserved communities.


Evidence

Statistic that one-third of global population lacks internet access, making offline AI solutions essential


Major discussion point

Technical accessibility for underserved populations


Topics

Development | Infrastructure


Simple interfaces needed to enable broader user adoption of AI solutions

Explanation

AI systems must be designed with user-friendly interfaces that don’t require technical expertise to operate. This design principle is crucial for democratizing access to AI benefits across different skill levels and educational backgrounds.


Evidence

Emphasis on simple interfaces as key takeaway for promoting inclusive AI adoption


Major discussion point

User experience design for inclusivity


Topics

Development | Sociocultural


CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders

Explanation

The collaborative model used by CERN for particle physics research could be adapted for AI infrastructure, allowing multiple countries and organizations to pool resources for shared computing capabilities. This approach could democratize access to expensive AI infrastructure.


Evidence

Reference to CERN as world’s largest particle physics laboratory in Geneva and its successful resource-pooling model


Major discussion point

International cooperation models for AI infrastructure


Topics

Infrastructure | Development


Agreed with

– Abhishek Singh
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


South-South cooperation can address common challenges like training AI with regional languages

Explanation

Countries in the Global South can collaborate to overcome individual limitations in AI development, such as insufficient data for training models in shared languages. Regional cooperation can achieve what individual countries cannot accomplish alone.


Evidence

Example of East African countries pooling resources to train AI models in Swahili, which Rwanda alone couldn’t achieve


Major discussion point

Regional cooperation for AI development


Topics

Development | Sociocultural


Multi-stakeholder working group on data governance needed to develop good framework recommendations

Explanation

Given the strategic importance of data for both AI and the digital economy, a collaborative approach involving multiple stakeholders is necessary to develop effective governance frameworks. This multi-stakeholder model can provide comprehensive recommendations for data governance.


Evidence

Announcement of recently established multi-stakeholder working group on data governance


Major discussion point

Collaborative governance approaches for data


Topics

Legal and regulatory | Development



Abhishek Singh

Speech speed

177 words per minute

Speech length

1379 words

Speech time

466 seconds

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access

Explanation

India addressed the challenge of expensive and scarce AI computing resources by creating a centralized facility that provides affordable access to researchers, academics, startups, and industry. Government subsidies make GPU access available at less than a dollar per hour, demonstrating a viable model for democratizing AI infrastructure.


Evidence

Specific details of 40% government subsidy and pricing at less than a dollar per GPU per hour for end users


Major discussion point

Government intervention to democratize AI infrastructure access


Topics

Infrastructure | Economic


Agreed with

– Wai Sit Si Thou
– Thomas Schneider

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Disagreed with

– Nandini Chami

Disagreed on

Speed vs. Precaution in AI Development


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access

Explanation

When facing limited datasets for minor Indian languages, India launched crowd-sourcing initiatives that allowed people to contribute linguistic data through online portals. This approach can be scaled globally to address data scarcity for underrepresented languages and cultures.


Evidence

Description of portal-based crowd-sourcing campaign for linguistic data across Indian languages and cultures


Major discussion point

Community participation in AI dataset creation


Topics

Sociocultural | Development


Agreed with

– Wai Sit Si Thou
– Anita Gurumurthy

Agreed on

Community-led and contextual approaches are necessary for meaningful AI development


Global repository of AI applications in healthcare, agriculture, and education should be shareable across geographies

Explanation

Creating a centralized collection of AI use cases in critical sectors like healthcare, agriculture, and education would enable knowledge sharing and prevent duplication of effort across different regions. This repository approach could accelerate AI adoption for social good globally.


Evidence

Emphasis on building use cases in key sectors and creating shareable repositories across geographies


Major discussion point

Knowledge sharing for AI applications in social sectors


Topics

Development | Sociocultural


Capacity building initiatives needed for training on model development and GPU management skills

Explanation

The scarcity of AI talent requires systematic capacity building efforts to train people in technical skills like model training and managing large-scale computing resources. This skills development is essential for enabling local AI development capabilities.


Evidence

Mention of training needs for wiring up 1,000 GPUs and other technical AI development skills


Major discussion point

Technical skills development for AI


Topics

Development | Infrastructure


Marketplace mechanisms could incentivize data contributors through revenue sharing models

Explanation

Rather than having companies monetize user data without compensation, marketplace systems could be developed where data contributors receive payment for their contributions. This approach recognizes the value of data and provides fair compensation to those who generate it.


Evidence

Examples of Karya company paying people for contributing datasets and incentivizing delivery workers to share city information with governments


Major discussion point

Fair compensation for data contribution


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Thomas Schneider

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Disagreed with

– Sarah Nicole

Disagreed on

Individual vs. Collective Data Monetization Approaches



Sarah Nicole

Speech speed

164 words per minute

Speech length

1326 words

Speech time

484 seconds

AI is automation tool that amplifies existing centralized structures rather than disrupting them

Explanation

Contrary to mainstream narratives about AI being completely disruptive, it actually functions as a neural network that analyzes data and finds patterns, essentially automating and accelerating existing processes. This means AI reinforces current power structures and centralization rather than fundamentally changing them.


Evidence

Technical explanation of AI as neural networks that replicate brain functions and analysis of how AI benefits from existing digital economy centralization


Major discussion point

AI as continuity rather than disruption


Topics

Economic | Sociocultural


Disagreed with

– Valeria Betancourt

Disagreed on

AI as Disruption vs. Continuity


Users deserve voice, choice, and stake in digital life through data agency and infrastructure design changes

Explanation

People should have meaningful control over their digital existence, which requires fundamental changes to how digital infrastructure is designed. This goes beyond surface-level privacy controls to restructuring the underlying systems that govern digital interactions.


Evidence

Discussion of data as political, social, and economic power tied to identities, and mention of DSNP protocol development


Major discussion point

User empowerment through infrastructure redesign


Topics

Human rights principles | Infrastructure


Data cooperatives provide collective bargaining power and incentivize high-quality data contribution

Explanation

Cooperative models allow users to collectively negotiate with technology companies rather than being powerless as individuals. Additionally, when people have ownership stakes in data cooperatives, they’re incentivized to contribute higher quality data since it benefits their own cooperative’s financial sustainability.


Evidence

Reference to cooperative model’s hundreds of years of legacy and explanation of financial incentives for data quality in cooperative structures


Major discussion point

Collective organization for data rights


Topics

Economic | Legal and regulatory


Agreed with

– Thomas Schneider
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Individual data monetization yields minimal returns; collective approaches through cooperatives more viable

Explanation

Studies show that individuals would earn very little money from monetizing their personal data – perhaps a few hundred dollars per year. Worse, this could create exploitative systems where poor people spend excessive time online for minimal income. Collective approaches through cooperatives offer more meaningful economic benefits.


Evidence

Specific mention of studies showing individual data monetization would yield only a couple hundred euros or dollars per year


Major discussion point

Economic viability of different data monetization models


Topics

Economic | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Disagreed with

– Abhishek Singh

Disagreed on

Individual vs. Collective Data Monetization Approaches


Open source protocols like DSNP can enable user data portability and interoperability across platforms

Explanation

Technical solutions like the Decentralized Social Networking Protocol (DSNP) can be built on existing internet infrastructure to give users control over their social identity and data. This allows people to move their data between platforms and interact across different services without being locked into single platforms.


Evidence

Technical description of DSNP protocol building on TCP/IP and enabling global, open social graph with data transportability


Major discussion point

Technical solutions for user data control


Topics

Infrastructure | Human rights principles



Nandini Chami

Speech speed

137 words per minute

Speech length

1016 words

Speech time

444 seconds

Private value and public value creation goals in AI innovation are not automatically aligned

Explanation

The profit motives driving private AI development don’t naturally align with public interest goals like transparency, fairness, and social inclusion. Current innovation incentives prioritize rapid deployment and scale over social benefits, requiring intentional intervention to redirect these pathways.


Evidence

Quote from UNDP Human Development Report 2025 stating that innovation incentives favor rapid deployment and automation over transparency, fairness, and social inclusion


Major discussion point

Misalignment between private and public interests in AI


Topics

Economic | Human rights principles


Path dependencies mean AI adoption doesn’t automatically enable economic diversification in developing countries

Explanation

The existing economic structures in many developing countries may not be able to absorb and benefit from AI productivity gains. Without complementary development strategies, AI adoption may not lead to the economic transformation that countries hope for.


Evidence

Reference to UNDP report findings on limited local economy capacity to absorb AI productivity spillovers and weaker links to high-value activities


Major discussion point

Structural barriers to AI-driven development


Topics

Development | Economic


Precautionary principle should replace ‘move fast and break things’ approach in AI development

Explanation

Instead of the Silicon Valley mantra of rapid deployment followed by fixing problems later, AI development should adopt the precautionary principle from environmental law. This means carefully assessing potential harms before deployment rather than dealing with consequences afterward.


Evidence

Reference to Rio Declaration’s precautionary principle and critique of ‘move fast and break things’ mentality


Major discussion point

Risk management approaches in AI development


Topics

Legal and regulatory | Human rights principles


Disagreed with

– Abhishek Singh

Disagreed on

Speed vs. Precaution in AI Development


Public participation rights needed in AI decision-making beyond just addressing harms to affected parties

Explanation

Drawing from environmental law principles like the Aarhus Convention, the public should have rights to access information and participate in AI-related decisions that affect society. This goes beyond just protecting people from AI harms to giving them a voice in AI governance.


Evidence

Reference to Aarhus Convention on Environmental Matters and its principles for public participation in decision-making


Major discussion point

Democratic participation in AI governance


Topics

Human rights principles | Legal and regulatory



Thomas Schneider

Speech speed

172 words per minute

Speech length

1186 words

Speech time

412 seconds

Cooperative model has hundreds of years of legacy and fits well for AI age challenges

Explanation

Switzerland’s economic success stories include many cooperatives that continue to operate successfully, such as the country’s largest supermarket chain. This model, with its democratic governance and member ownership, provides a proven framework for organizing economic activity that could be applied to AI and data governance.


Evidence

Examples of Swiss cooperatives including the biggest supermarket created 100 years ago that still operates as a cooperative with customer voting rights, and cooperative insurance companies


Major discussion point

Historical precedents for cooperative organization


Topics

Economic | Legal and regulatory


Agreed with

– Sarah Nicole
– Abhishek Singh

Agreed on

Cooperative models are viable and proven solutions for AI governance and data management


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing

Explanation

Current intellectual property frameworks may not be suitable for the AI age and will need to be reformed. Rather than thinking only at the individual level, societies need to organize collectively to ensure fair sharing of benefits from AI development, similar to how some countries handle healthcare or infrastructure as public goods.


Evidence

Examples of Swiss public services like waste management and hospitals that remain public rather than privatized, and discussion of health data as valuable public resource


Major discussion point

Collective approaches to intellectual property and benefit sharing


Topics

Legal and regulatory | Economic


Agreed with

– Sarah Nicole
– Anita Gurumurthy

Agreed on

Individual data monetization is insufficient; collective approaches are more viable


Switzerland developed supercomputer network and ICAIN initiative to share computing power globally for small actors

Explanation

Switzerland has created infrastructure-sharing arrangements, including cooperation with Finland’s Lumi supercomputer and the ICAIN (International Computation and AI Network) initiative, to provide computing access to universities and small actors globally. This demonstrates how smaller countries can collaborate to access AI infrastructure.


Evidence

Mention of cooperation with NVIDIA on chip development, having one of the 10 biggest supercomputers, and the ICAIN initiative for sharing computing power


Major discussion point

International cooperation for AI infrastructure access


Topics

Infrastructure | Development


Agreed with

– Wai Sit Si Thou
– Abhishek Singh

Agreed on

Shared infrastructure and resource pooling are essential for democratizing AI access


Small countries need ecosystem approach similar to 19th century railway development including education and finance

Explanation

Drawing lessons from Switzerland’s 19th-century railway development, small countries need to build complete ecosystems around AI, not just acquire the technology. This includes creating educational institutions, financial systems, and skilled workforce – just as railway development required polytechnical universities and banks like Credit Suisse.


Evidence

Historical example of Swiss railway development in 1840s-50s requiring creation of polytechnical universities, financial institutions, and complete infrastructure ecosystem


Major discussion point

Holistic ecosystem development for emerging technologies


Topics

Development | Infrastructure



Valeria Betancourt

Speech speed

121 words per minute

Speech length

929 words

Speech time

457 seconds

Global Digital Compact underscores urgent imperative for digital cooperation to harness AI for humanity’s benefit

Explanation

The Global Digital Compact recognizes the critical need for international cooperation in AI development to ensure it serves human welfare. This cooperation is particularly important for ensuring AI benefits reach the Global South through contextually grounded innovation.


Evidence

Reference to Global Digital Compact and evidence from Global South pointing to importance of contextually grounded AI innovation


Major discussion point

International cooperation for beneficial AI development


Topics

Development | Human rights principles


Disagreed with

– Sarah Nicole

Disagreed on

AI as Disruption vs. Continuity


Local AI must be examined through three dimensions: inclusivity, indigeneity, and intentionality

Explanation

Understanding local AI requires analyzing how it can be inclusive of different communities, respectful of indigenous knowledge systems, and designed with intentional purpose for social good. These three dimensions are essential for AI that contributes to well-being of people and planet.


Evidence

Framework for the panel discussion structured around these three dimensions


Major discussion point

Comprehensive framework for evaluating local AI


Topics

Development | Sociocultural | Human rights principles


Public accountability is essential in how AI is conceptualized, designed, and deployed

Explanation

AI development cannot be left solely to private actors but requires mechanisms for public oversight and accountability throughout the entire lifecycle. This ensures AI serves public interest rather than just private profit.


Evidence

Emphasis on enabling public accountability as a must in AI development processes


Major discussion point

Democratic oversight of AI development


Topics

Legal and regulatory | Human rights principles



Sadhana Sanjay

Speech speed

151 words per minute

Speech length

193 words

Speech time

76 seconds

Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems

Explanation

Current IP frameworks favor corporations and non-natural legal persons in AI development, potentially undermining individual rights and agency. This raises questions about how individuals can maintain control and rights over AI systems that affect them, including in guardian-ward relationships.


Evidence

Question about how natural legal persons can retain agency given existing IP frameworks and ownership structures


Major discussion point

Individual rights versus corporate control in AI systems


Topics

Legal and regulatory | Human rights principles


Agreed with

– Anita Gurumurthy
– Thomas Schneider

Agreed on

Current intellectual property frameworks are inadequate and need reform for the AI era



Audience

Speech speed

172 words per minute

Speech length

299 words

Speech time

103 seconds

Blockchain-based platform needed for protecting user content and intellectual property in digital era

Explanation

A platform using QR codes and blockchain verification can help users protect their digital content by providing proof of ownership and creation. This system would work with government authorities to verify and register content, providing legal protection in case of disputes.


Evidence

Description of platform launched at IGF in Riyadh that provides QR codes and blockchain verification for content protection, working with government registration authorities


Major discussion point

Technical solutions for content protection and IP rights


Topics

Legal and regulatory | Infrastructure


WIPO has not yet reached ideal convention for protecting AI intellectual property due to division between AI as data platform and AI-generated content

Explanation

The World Intellectual Property Organization faces challenges in creating comprehensive AI IP protection because of fundamental disagreements about whether to focus on AI systems as data platforms or on the content they generate. This division prevents unified international standards for AI intellectual property.


Evidence

Reference to WIPO’s ongoing struggles and the specific division between treating AI as data platform versus focusing on AI-generated content


Major discussion point

International challenges in AI intellectual property regulation


Topics

Legal and regulatory | Development


Agreements

Agreement points

Cooperative models are viable and proven solutions for AI governance and data management

Speakers

– Sarah Nicole
– Thomas Schneider
– Abhishek Singh

Arguments

Data cooperatives provide collective bargaining power and incentivize high-quality data contribution


Cooperative model has hundreds of years of legacy and fits well for AI age challenges


Marketplace mechanisms could incentivize data contributors through revenue sharing models


Summary

Multiple speakers endorsed cooperative models as effective organizational structures for AI and data governance, drawing on historical precedents and emphasizing collective approaches over individual solutions


Topics

Economic | Legal and regulatory


Shared infrastructure and resource pooling are essential for democratizing AI access

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Thomas Schneider

Arguments

CERN model could provide shared AI infrastructure through pooled resources from multiple stakeholders


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Switzerland developed supercomputer network and ICAIN initiative to share computing power globally for small actors


Summary

All speakers agreed that expensive AI infrastructure requires collaborative approaches and resource sharing to ensure equitable access, with concrete examples from different countries and international models


Topics

Infrastructure | Development


Community-led and contextual approaches are necessary for meaningful AI development

Speakers

– Wai Sit Si Thou
– Abhishek Singh
– Anita Gurumurthy

Arguments

AI solutions must work with community-led data and indigenous knowledge for local contexts


Crowd-sourcing campaigns for linguistic datasets across languages and cultures can democratize data access


Need to retain multilingual society structures and decolonize scientific advancement in AI


Summary

Speakers consistently emphasized the importance of involving local communities in AI development and ensuring AI systems reflect diverse cultural and linguistic contexts rather than imposing homogeneous solutions


Topics

Sociocultural | Development


Current intellectual property frameworks are inadequate and need reform for the AI era

Speakers

– Anita Gurumurthy
– Thomas Schneider
– Sadhana Sanjay

Arguments

Trade secrets shouldn’t lock up data needed by public institutions like hospitals and transportation authorities


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Intellectual property frameworks create challenges for natural persons retaining legal agency in AI systems


Summary

Multiple speakers identified fundamental problems with existing IP frameworks in the context of AI, calling for reforms that better balance private rights with public interest and individual agency


Topics

Legal and regulatory | Human rights principles


Individual data monetization is insufficient; collective approaches are more viable

Speakers

– Sarah Nicole
– Anita Gurumurthy
– Thomas Schneider

Arguments

Individual data monetization yields minimal returns; collective approaches through cooperatives are more viable


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Property rights need reform and renegotiation, with society-level approaches for fair benefit sharing


Summary

Speakers agreed that individual-level solutions for data rights and monetization are inadequate, emphasizing the need for collective organization and protection of digital commons


Topics

Economic | Legal and regulatory


Similar viewpoints

Both speakers from IT4Change emphasized how current AI development serves private interests at the expense of cultural diversity and public good, requiring intentional intervention to redirect AI toward more equitable outcomes

Speakers

– Anita Gurumurthy
– Nandini Chami

Arguments

Western cultural homogenization through AI platforms amplifies epistemic injustices and erases cultural histories


Private value and public value creation goals in AI innovation are not automatically aligned


Topics

Sociocultural | Human rights principles | Economic


Both speakers emphasized the fundamental importance of capacity building and skills development as essential components of inclusive AI development, alongside infrastructure and data access

Speakers

– Wai Sit Si Thou
– Abhishek Singh

Arguments

Three key drivers for inclusive AI: infrastructure, data, and skills, with a focus on equity


Capacity building initiatives needed for training on model development and GPU management skills


Topics

Development | Infrastructure


Both speakers challenged mainstream narratives about AI being inherently disruptive, instead arguing for more cautious, deliberate approaches that recognize AI’s role in reinforcing existing power structures

Speakers

– Sarah Nicole
– Nandini Chami

Arguments

AI is an automation tool that amplifies existing centralized structures rather than disrupting them


The precautionary principle should replace the ‘move fast and break things’ approach in AI development


Topics

Economic | Legal and regulatory


Unexpected consensus

Government intervention and public investment in AI infrastructure

Speakers

– Abhishek Singh
– Wai Sit Si Thou
– Thomas Schneider

Arguments

India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


A CERN-like model could provide shared AI infrastructure through pooled resources from multiple stakeholders


Switzerland developed a supercomputer network and the ICAIN initiative to share computing power globally with small actors


Explanation

Despite representing different political and economic contexts, speakers from India, UN agency, and Switzerland all endorsed significant government intervention and public investment in AI infrastructure, challenging typical market-driven approaches to technology development


Topics

Infrastructure | Economic | Development


Rejection of Silicon Valley ‘move fast and break things’ mentality

Speakers

– Nandini Chami
– Sarah Nicole
– Valeria Betancourt

Arguments

The precautionary principle should replace the ‘move fast and break things’ approach in AI development


AI is an automation tool that amplifies existing centralized structures rather than disrupting them


Public accountability is essential in how AI is conceptualized, designed, and deployed


Explanation

There was unexpected consensus across speakers from different backgrounds in rejecting the dominant Silicon Valley approach to technology development, instead advocating for more cautious, accountable approaches typically associated with environmental and public health regulation


Topics

Legal and regulatory | Human rights principles


Overall assessment

Summary

The speakers demonstrated remarkable consensus on the need for alternative approaches to AI development that prioritize collective organization, public accountability, and cultural diversity over market-driven solutions. Key areas of agreement included the viability of cooperative models, the necessity of shared infrastructure, the importance of community-led development, and the inadequacy of current intellectual property frameworks.


Consensus level

High level of consensus with significant implications for AI governance. The agreement across speakers from different sectors (government, UN agencies, civil society, academia) and countries suggests growing recognition that current AI development paradigms are insufficient for achieving equitable outcomes. This consensus provides a foundation for alternative policy approaches that emphasize public interest, collective action, and democratic participation in AI governance, challenging dominant narratives about inevitable technological disruption and market-led solutions.


Differences

Different viewpoints

Individual vs. Collective Data Monetization Approaches

Speakers

– Abhishek Singh
– Sarah Nicole

Arguments

Marketplace mechanisms could incentivize data contributors through revenue sharing models


Individual data monetization yields minimal returns; collective approaches through cooperatives are more viable


Summary

Singh advocates for marketplace mechanisms in which individuals are paid for data contributions, citing the example of the company Karya. Nicole argues that individual monetization yields minimal returns and could exploit poor people, advocating instead for collective cooperative approaches.


Topics

Economic | Legal and regulatory


AI as Disruption vs. Continuity

Speakers

– Sarah Nicole
– Valeria Betancourt

Arguments

AI is an automation tool that amplifies existing centralized structures rather than disrupting them


The Global Digital Compact underscores the urgent imperative for digital cooperation to harness AI for humanity’s benefit


Summary

Nicole presents AI as fundamentally non-disruptive, arguing it reinforces existing power structures. Betancourt frames AI as requiring urgent cooperative action for humanity’s benefit, implying transformative potential that needs guidance.


Topics

Economic | Sociocultural | Development


Speed vs. Precaution in AI Development

Speakers

– Nandini Chami
– Abhishek Singh

Arguments

The precautionary principle should replace the ‘move fast and break things’ approach in AI development


India created shared compute infrastructure with government subsidizing 40% of costs to democratize access


Summary

Chami advocates for precautionary approaches and careful assessment before AI deployment. Singh focuses on rapid infrastructure development and deployment to democratize access, representing a more accelerated approach.


Topics

Legal and regulatory | Human rights principles | Infrastructure


Unexpected differences

Fundamental Nature of AI Technology

Speakers

– Sarah Nicole
– Other speakers

Arguments

AI is an automation tool that amplifies existing centralized structures rather than disrupting them


Explanation

Nicole’s characterization of AI as fundamentally non-disruptive contrasts sharply with the general framing by other speakers who treat AI as a transformative technology requiring new approaches. This philosophical disagreement about AI’s nature is unexpected in a discussion focused on local AI solutions.


Topics

Economic | Sociocultural


Intellectual Property Protection vs. Commons Access

Speakers

– Audience (Dr. Nermin Salim)
– Anita Gurumurthy

Arguments

A blockchain-based platform is needed to protect user content and intellectual property in the digital era


Large language models free-ride on commons like Wikipedia, requiring protection from exploitation


Explanation

The audience member advocates for stronger IP protection mechanisms while Gurumurthy argues for protecting commons from IP exploitation. This represents an unexpected fundamental disagreement about whether the solution is more or less IP protection.


Topics

Legal and regulatory | Infrastructure


Overall assessment

Summary

The discussion shows moderate disagreement on implementation approaches rather than fundamental goals. Main areas of disagreement include individual vs. collective data monetization, AI’s disruptive nature, development speed vs. precaution, and IP protection vs. commons access.


Disagreement level

Medium-level disagreement with significant implications. While speakers generally agree on the need for inclusive, locally-relevant AI, their different approaches to achieving this goal could lead to incompatible policy recommendations. The disagreements reflect deeper philosophical differences about technology’s role, market mechanisms, and the balance between innovation speed and social protection.




Takeaways

Key takeaways

Local AI development requires addressing three critical dimensions: inclusivity, indigeneity, and intentionality to ensure AI serves the common good rather than perpetuating existing inequalities


AI infrastructure inequality is severe, with massive investment disparities (AI investment 3x climate adaptation spending) and monopolistic control (NVIDIA controls 90% of GPUs)


Current AI models amplify Western cultural homogenization and epistemic injustices, erasing cultural histories and multilingual thinking structures


Cooperative models and shared infrastructure approaches can democratize AI access, as demonstrated by India’s subsidized compute infrastructure and Switzerland’s supercomputer sharing initiatives


Data governance must shift from individual to collective approaches, with data cooperatives providing better bargaining power and quality incentives than individual data monetization


AI is fundamentally an automation tool that amplifies existing centralized structures rather than disrupting them, requiring radical infrastructure changes for true user agency


The tension between necessary pluralism for local contexts and generalized models dominating the market represents a key challenge for inclusive AI development


Intellectual property frameworks need fundamental reform to prevent trade secrets from locking up data needed by public institutions and to protect commons from free-riding by commercial AI models


Resolutions and action items

Establish a CERN-like model for AI infrastructure sharing globally, pooling resources from multiple countries and organizations


Create global repository of AI applications in key sectors (healthcare, agriculture, education) that can be shared across geographies


Develop crowd-sourcing campaigns for linguistic datasets to support AI development in minoritized languages


Implement public procurement policies that steer AI development toward human-centric and worker-complementary solutions


Establish multi-stakeholder working group on data governance to develop framework recommendations


Create capacity building initiatives through UN bodies or global AI partnerships for training on model development and AI skills


Develop marketplace mechanisms for incentivizing data contributors through revenue sharing models


Reform intellectual property laws to include exceptions for public interest use of aggregated data


Unresolved issues

How to make small autonomous AI systems economically viable against dominant large-language models with massive scaling advantages


Finding scalable alternatives to data-scraping advertising business models that currently dominate the digital economy


Developing concrete metrics to define and measure safety, responsibility, and privacy in AI systems beyond ‘do no harm’ principles


Resolving the fundamental tension between open source AI development and preventing free-riding by commercial entities


Addressing the ‘economy of prompt’ where user interactions continue to improve monopolistic AI models


Determining how to fix liability for AI harms across complex transnational value chains with multiple actors


Establishing effective mechanisms for public participation in AI decision-making processes


Creating sustainable funding models for local AI development that don’t rely on exploitative data practices


Suggested compromises

Hybrid approach combining open source development with protections against commercial exploitation through reformed IP frameworks


Government subsidization of compute infrastructure costs (as demonstrated by India’s 40% cost underwriting) to balance private sector efficiency with public access


Society-level collective bargaining for data rights rather than purely individual or purely corporate control models


Balancing innovation incentives with precautionary principles by slowing ‘move fast and break things’ approach while preserving development momentum


Multi-stakeholder governance models that include private sector, government, and civil society in AI development decisions


Regional cooperation approaches (like East African countries pooling Swahili language data) to achieve necessary scale while maintaining local relevance


Public-private partnerships for AI infrastructure that leverage private sector capabilities while ensuring public benefit and access


Thought provoking comments

Between 2022 and 2025, AI-related investment doubled from $100 to $200 billion. By comparison, this is about three times the global spending on climate change adaptation… So the efficiencies in compute are really not necessarily going to translate into some kind of respite for the kind of climate change impacts.

Speaker

Anita Gurumurthy


Reason

This comment is deeply insightful because it reframes the AI discussion by introducing a critical tension between AI investment and climate priorities. It challenges the assumption that technological efficiency automatically leads to environmental benefits, revealing the paradox that AI efficiency gains are being used to build larger, more resource-intensive models rather than reducing overall environmental impact.


Impact

This comment established the foundational tension for the entire discussion, setting up the core dilemma that all subsequent speakers had to grapple with: how to democratize AI benefits while addressing planetary boundaries. It shifted the conversation from purely technical considerations to systemic sustainability concerns.


AI is essentially a neural network… So overall, AI is an automation tool. It is a tool that accelerates and amplifies everything that we know… So, if AI is a continuity and an amplification of what we already know, then the radicality needs to come from the response that we’ll bring to it.

Speaker

Sarah Nicole


Reason

This comment is profoundly thought-provoking because it directly challenges the mainstream narrative of AI as revolutionary disruption. By reframing AI as an amplification tool that reinforces existing power structures, it shifts the focus from the technology itself to the systemic responses needed to address its impacts.


Impact

This reframing fundamentally altered the discussion’s direction, moving away from technical solutions toward structural and infrastructural changes. It provided intellectual grounding for why radical responses are necessary and influenced subsequent speakers to focus more on systemic alternatives like cooperatives and commons-based approaches.


We reject the unified global system. But the question is, are these smaller autonomous systems even possible?… So this tension between pluralism that is so necessary and generalized models that seem to be the way, the only way AI models are developing in the market, is this tension is where the sweet spot of investigation actually lies.

Speaker

Anita Gurumurthy


Reason

This comment identifies the central paradox of local AI development – the need for cultural and linguistic diversity versus the economic and technical pressures toward centralized, generalized models. It articulates the core tension that makes this problem so complex and resistant to simple solutions.


Impact

This comment established the intellectual framework that guided much of the subsequent discussion. It helped other speakers understand why technical solutions alone (like shared computing infrastructure) need to be coupled with new governance models and cooperative approaches.


The largest source for the large language models, especially ChatGPT, was Wikipedia. So you actually see free riding happening on top of these commons… But what if my open source meant for my community is actually servicing profiteering?

Speaker

Anita Gurumurthy


Reason

This observation is particularly insightful because it reveals how current AI development exploits commons-based resources while privatizing the benefits. It challenges the assumption that open-source solutions automatically serve community interests and highlights the need for protective mechanisms.


Impact

This comment deepened the discussion about intellectual property and data governance, leading to more nuanced conversations about how to structure commons-based approaches that can’t be easily exploited by commercial interests. It influenced the later discussion about cooperative models and collective bargaining.


The question of having a stake in your data has often been framed on a personal level… the answer will not be on an individual perspective, but it would be on a collective one. Because it’s when the data is aggregated, it’s when the data is in a specific context that then it gains value.

Speaker

Sarah Nicole


Reason

This comment is insightful because it challenges the dominant framing of data rights as individual privacy issues and redirects attention to collective action and cooperative models. It provides a practical pathway forward that moves beyond the limitations of individual data monetization.


Impact

This comment shifted the discussion from individual rights to collective organizing, influencing other speakers to elaborate on cooperative models and community-based approaches. It helped bridge the gap between theoretical critiques and practical alternatives.


We launched a crowd-sourcing campaign to get linguistic data across languages, across cultures, in which people could kind of come to a portal and contribute data sets… If we can take up capacity-building initiatives and training… it can really, really help.

Speaker

Abhishek Singh


Reason

This comment is valuable because it provides concrete, implementable examples of how local AI can work in practice, moving beyond theoretical discussions to actual policy implementations. It demonstrates that alternative approaches are not just idealistic but practically feasible.


Impact

This grounded the discussion in real-world examples and gave other participants concrete models to reference. It helped shift the conversation from problem identification to solution implementation, influencing the final recommendations about cooperative infrastructure and capacity building.


Overall assessment

These key comments fundamentally shaped the discussion by establishing a progression from problem identification to systemic analysis to practical alternatives. Anita Gurumurthy’s opening comments about the climate-AI investment paradox and the tension between pluralism and generalization set up the core dilemmas. Sarah Nicole’s reframing of AI as amplification rather than disruption provided the theoretical foundation for why radical responses are necessary. The subsequent comments built on this foundation, moving from critique to concrete alternatives like cooperative models, shared infrastructure, and community-based data governance. Together, these comments transformed what could have been a technical discussion about AI optimization into a deeper conversation about power structures, commons governance, and alternative economic models. The discussion evolved from identifying problems with current AI development to articulating a coherent vision for community-controlled, environmentally sustainable AI systems.


Follow-up questions

Are smaller autonomous AI systems even possible, and how can fragmented community efforts be brought together to collaborate?

Speaker

Anita Gurumurthy


Explanation

This addresses the fundamental tension between necessary pluralism and the market trend toward generalized models, which is crucial for enabling local AI development


How do we build our own computational grammar and reject unified global systems while maintaining viability?

Speaker

Anita Gurumurthy


Explanation

This is essential for decolonizing scientific advancement and preserving multilingual societies’ diverse ways of thinking


How can we create a global compute infrastructure facility (CERN model for AI) across countries with multilateral bodies joining to make infrastructure available affordably?

Speaker

Abhishek Singh


Explanation

This could democratize access to expensive AI compute infrastructure that is currently controlled by few companies


How can we establish a global repository of AI applications and use cases that can be shared across geographies?

Speaker

Abhishek Singh


Explanation

This would enable knowledge sharing and prevent duplication of efforts in developing AI solutions for common problems


How do we find a scalable alternative business model to the current data scraping and advertising model?

Speaker

Sarah Nicole


Explanation

Current business models undermine user agency and data ownership, so alternatives are needed for a fair data economy


How do we develop qualitative and quantitative metrics to define safety, responsibility, and privacy in AI systems?

Speaker

Sarah Nicole


Explanation

Clear metrics are needed to move beyond vague principles and create accountability mechanisms


How do we fix liability for individual, collective, and societal harms in complex transnational AI value chains?

Speaker

Nandini Chami


Explanation

Current liability regimes are inadequate for the complexity of AI systems and the difficulty of proving causal links to harms


How do we update product fault liability regimes so the burden of proof is not on affected parties to prove causal links between AI defects and harms?

Speaker

Nandini Chami


Explanation

Given the black box nature of AI technology, current liability frameworks place unfair burden on those harmed by AI systems


How can we work out marketplace mechanisms where data contribution is priced and contributors are incentivized?

Speaker

Abhishek Singh


Explanation

This addresses the fundamental question of how to fairly compensate those whose data contributes to AI development


How do we institute exceptions in IP laws for public interest use of aggregate data by public authorities?

Speaker

Anita Gurumurthy


Explanation

Trade secrets are being used to lock up data that should be available to public transportation, hospitals, and other essential services


How do we protect open source and data commons from free riding by profit-making entities?

Speaker

Anita Gurumurthy


Explanation

Current systems allow companies to profit from commons like Wikipedia without fair compensation to the community


How do we curtail the ‘economy of prompt’ where users perfect monopolistic models through their interactions?

Speaker

Anita Gurumurthy


Explanation

User prompts are continuously improving large language models, further entrenching monopolistic advantages


How can we develop good data governance frameworks through multi-stakeholder approaches?

Speaker

Wai Sit Si Thou


Explanation

Data governance is strategic for both AI and digital economy development, requiring collaborative frameworks


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.