WS #466 AI at a Crossroads Between Sovereignty and Sustainability

27 Jun 2025 10:30h - 11:30h

Session at a glance

Summary

This Internet Governance Forum 2025 panel discussion explored the intersection of artificial intelligence sovereignty and environmental sustainability, examining how nations can reduce technological dependency while minimizing environmental impacts. The session was organized by LAPIN, the Sustainable AI Lab at Bonn University, and VLK Advogados, bringing together experts from government, academia, international organizations, and civil society.


Ana Valdivia from Oxford Internet Institute highlighted the environmental costs of AI infrastructure, noting that digital sovereignty is impossible when countries depend on minerals extracted from other nations for AI chips. She cited examples from Mexico where data centers consume water 24/7 while local communities have access to water only one hour per week, demonstrating how AI reproduces climate injustice. Valdivia advocated for “digital solidarity” rather than digital sovereignty to foster collaborative approaches.


Alex Moltzau from the European AI Office emphasized the need for responsible AI deployment within the context of climate crisis, noting that the EU is investing 200 billion euros in AI infrastructure while working on energy reduction standards. Pedro Ivo Ferraz da Silva from Brazil’s Ministry of Foreign Affairs discussed the asymmetry in AI development, where 84% of large language models provide no disclosure of energy use or emissions, and stressed the importance of inclusive international cooperation ahead of COP30 in Brazil.


Yu Ping Chan from UNDP warned that only 10% of AI’s economic value by 2030 will benefit the Global South, emphasizing the need for holistic approaches that address connectivity, skills, and infrastructure gaps. Alexander Costa Barbosa from Brazil’s Homeless Workers Movement introduced the concept of “popular digital sovereignty,” focusing on grassroots efforts to achieve meaningful connectivity and digital literacy in marginalized communities.


The discussion concluded that addressing AI sovereignty requires tackling multiple interconnected crises—environmental, social, and digital—through coordinated efforts that empower local communities and social movements while ensuring responsible technology deployment.


Key points

## Major Discussion Points:


– **Digital Sovereignty vs. Environmental Sustainability Tension**: The panel explored the fundamental challenge of how nations can achieve AI sovereignty and reduce technological dependency while minimizing environmental impacts, particularly given AI’s heavy reliance on minerals, energy, and water resources.


– **Global South Dependency and Digital Colonialism**: Extensive discussion on how AI development perpetuates colonial patterns, with Global South countries providing raw materials (cobalt, tungsten, copper) and labor for AI training while remaining excluded from shaping AI systems, with only 10% of AI’s economic value projected to accrue to Global South countries by 2030.


– **Environmental Justice and Resource Competition**: Detailed examination of how AI infrastructure creates climate injustice, exemplified by data centers in Mexico’s Querétaro state having 24/7 water access while local communities receive water only one hour per week in a drought-stricken region.


– **Labor Rights and AI Development**: Discussion of exploitative labor practices in AI development, particularly the hidden human labor required for training large language models, often performed under poor conditions in call center-like environments, with concerns about replicating historical labor exploitation patterns.


– **Alternative Approaches to Digital Sovereignty**: Presentation of concepts like “digital solidarity” instead of competitive digital sovereignty, “popular digital sovereignty” from grassroots movements, and community-driven approaches that prioritize local needs and environmental justice over purely technological advancement.


## Overall Purpose:


The discussion aimed to examine the intersection between AI sovereignty aspirations and environmental sustainability, particularly focusing on how developing nations and marginalized communities can achieve greater technological independence without exacerbating climate change and environmental degradation. The panel sought to identify policy solutions that could address both digital dependency and environmental concerns through inclusive, multi-stakeholder approaches.


## Overall Tone:


The discussion maintained a consistently serious and urgent tone throughout, with speakers expressing genuine concern about current trajectories in AI development. The tone was collaborative and solution-oriented, with panelists building on each other’s points rather than debating. There was a notable shift from academic analysis in the early presentations to more activist and practical perspectives as grassroots representatives spoke, culminating in calls for political mobilization and collective action. The overall atmosphere was one of informed concern coupled with cautious optimism about the possibility of more equitable and sustainable approaches to AI development.


Speakers

– **Alexandra Krastins Lopes**: Co-founder of LAPIN (Laboratory of Public Policy and Internet), former member of Brazilian Data Protection Authority, represents VLK Advogados (Brazilian law firm), provides legal counsel on data protection, AI, cybersecurity and government affairs


– **Jose Renato Laranjeira de Pereira**: Co-founder of LAPIN (Laboratory of Public Policy and Internet), PhD student at University of Bonn


– **Ana Valdivia**: Departmental research lecturer in artificial intelligence, government and policy at the Oxford Internet Institute, University of Oxford, investigates how datafication and algorithmic systems are transforming political, social and ecological territories


– **Alex Moltzau**: Policy officer at European AI office in the European Commission, seconded national expert from Norwegian Ministry of Digitalization and Governance, coordinates work on AI regulatory sandboxes, visiting policy fellow at University of Cambridge, background in social data science and master’s in artificial intelligence for public services


– **Pedro Ivo Ferraz da Silva**: Career diplomat, Coordinator for Scientific and Technological Affairs at the Climate Department of the Ministry of Foreign Affairs in Brazil, member of the Technology Executive Committee of UNFCCC, Brazil’s focal point to the Intergovernmental Panel on Climate Change (IPCC)


– **Yu Ping Chan**: Heads digital partnerships and engagements at UNDP (United Nations Development Program), former diplomat in Singaporean Foreign Service, Bachelor of Arts from Harvard University, Master’s of Public Administration from Columbia University


– **Alexander Costa Barbosa**: Member of the Homeless Workers Movement, digital policy consultant and researcher


– **Raoul Danniel Abellar Manuel**: Member of parliament from the Philippines representing the Youth Party


– **Edmon Chung**: From Dot Asia


– **Participant**: (Role/expertise not specified)


Additional speakers:


– **Lucia**: From Peru, works with civil society organizations (full name not provided in transcript)


Full session report

# Panel Discussion Report: AI Sovereignty and Environmental Sustainability


## Introduction and Context


This panel discussion, organized by LAPIN (Laboratory of Public Policy and Internet), the Sustainable AI Lab at Bonn University, and VLK Advogados, examined the intersection between artificial intelligence sovereignty and environmental sustainability. Jose Renato Laranjeira de Pereira, co-founder of LAPIN and PhD student at University of Bonn, introduced the session by explaining the panel’s focus on how the intersection of AI sovereignty and climate change creates both challenges and opportunities.


The panel featured diverse perspectives from government, academia, international organizations, and civil society, including Alexandra Krastins Lopes (co-founder of LAPIN and former member of Brazilian Data Protection Authority), Ana Valdivia (Oxford Internet Institute, participating remotely from an AI ethics conference), Alex Moltzau (European AI Office), Pedro Ivo Ferraz da Silva (Brazilian Ministry of Foreign Affairs), Yu Ping Chan (UNDP), Alexander Costa Barbosa (Homeless Workers Movement), and Raoul Danniel Abellar Manuel (Philippine Parliament member).


## Key Speaker Contributions


### Ana Valdivia – Digital Solidarity vs. Digital Sovereignty


Ana Valdivia argued that AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating inevitable interdependencies. She proposed replacing “digital sovereignty” with “digital solidarity” to create networks of cooperation between states rather than competition.


Valdivia highlighted environmental justice concerns, citing examples from Mexico’s Querétaro state where data centers have 24/7 water access while local communities receive water only one hour per week during drought conditions. She emphasized that AI development reproduces climate injustice through unequal resource access and that data centers are deployed without democratic consultation with affected communities.


She also challenged industry narratives, arguing that larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better. Valdivia noted that AI development is now dominated by big tech companies rather than universities, limiting innovation access and creating dependency for researchers.


### Pedro Ivo Ferraz da Silva – Brazilian Government Perspective


Pedro Ivo, speaking after concluding June climate negotiations in Bonn where AI was discussed, argued against creating a false binary between national sovereignty and global cooperation. He maintained that both are needed and should be rooted in equity and climate responsibility, introducing the Brazilian concept of “mutirão” (collective community effort) as a framework for AI governance.


He revealed that 84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design. Pedro Ivo emphasized that developing countries need to strengthen three strategic capabilities: skills, data, and infrastructure to shape AI according to local priorities.


He advocated for moving beyond the “triple planetary crisis” narrative to address a “poly-crisis” including environmental, social, and digital rights crises. Pedro Ivo also mentioned Brazil’s role in hosting COP30 in Belém and chairing the BRICS Summit, noting the BRICS Civil Popular Forum’s work on digital sovereignty.


### Yu Ping Chan – UNDP Development Perspective


Yu Ping Chan warned that only 10% of AI’s economic value by 2030 will benefit Global South countries excluding China, with over 95% of top AI talent concentrated in six research universities in the US and China. She emphasized that digital transformation must be part of a holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy.


Chan raised the question of who owns the products of the labor used to create large language models, which ultimately end up owned by big tech companies. She stressed the need for collective action and mobilization to address AI challenges.


### Alexander Costa Barbosa – Grassroots Movement Perspective


Alexander Costa Barbosa from Brazil’s Homeless Workers Movement introduced the concept of “popular digital sovereignty,” involving communities providing services that the state hasn’t delivered, focusing on meaningful connectivity and digital literacy in peripheries. He explained the movement’s work addressing Brazil’s housing crisis, in which, by the estimates he cited, around 30 million people live in precarious housing conditions.


Barbosa noted that workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates. He connected alternative development approaches like Buen Vivir and commons-based development with climate justice discussions.


### Alex Moltzau – European AI Office Perspective


Alex Moltzau acknowledged that while AI operates within existing labor legislation frameworks, there are concerns about protecting workers involved in supervised machine learning tasks. He stressed that AI rollout must be as responsible, sustainable, and green as possible within the context of the climate crisis.


Moltzau announced a European Commission funding call for collaboration with Africa on generative AI, with 5 million euros committed and an application deadline of 2 October 2025.


## Audience Questions and Responses


Raoul Danniel Abellar Manuel from the Philippine Parliament asked about ensuring labor protections in AI development to avoid replicating exploitative practices, especially in training large language models. He emphasized the need to protect workers involved in the hidden human labor required for AI training.


An audience member named Lucia asked about environmental sustainability advocacy and connecting organizations working on these issues. Ana Valdivia responded by offering to connect civil society organizations across Latin America working on data center transparency and environmental advocacy.


## Concrete Outcomes and Initiatives


Several concrete initiatives were announced during the discussion:


– The Hamburg Declaration on Responsible AI for the SDGs, launched two weeks earlier at the Hamburg Sustainability Conference, has gathered over 50 committed stakeholders, and more organizations are welcome to sign


– The BRICS Civil Popular Forum Digital Sovereignty Working Group document was announced for release with guidelines for financing digital public infrastructures


– Commitment to connect Latin American organizations working on data center transparency and environmental advocacy


– European Commission funding for AI collaboration with Africa


## Key Themes and Challenges


The discussion revealed several interconnected challenges:


**Environmental Justice**: The panel extensively examined how AI infrastructure creates climate injustice, with Global South countries providing raw materials while bearing environmental costs but remaining excluded from AI governance decisions.


**Labor Rights**: Multiple speakers addressed exploitative labor practices in AI development, particularly the hidden human labor required for training large language models under poor working conditions.


**Transparency**: The lack of disclosure regarding AI’s environmental impacts was highlighted as a critical barrier to informed policy-making and accountability.


**Digital Colonialism**: Speakers examined how AI development perpetuates colonial patterns, with Global South countries providing resources and labor while being excluded from shaping AI systems.


## Multi-stakeholder Approaches


Alexandra Krastins Lopes emphasized applying the multi-stakeholder model of internet governance to sustainable AI sovereignty policies to effectively include social movements. The discussion highlighted the importance of moving beyond conventional development models toward comprehensive approaches addressing interconnected social, environmental, and digital challenges.


## Conclusion


The panel concluded with calls for continued advocacy and mobilization, emphasizing the need for collective action to address AI challenges. Speakers encouraged political mobilization and highlighted the importance of coordinated efforts between government officials, researchers, international organizations, and social movements to develop more equitable and sustainable approaches to AI development and governance.


The discussion demonstrated that addressing AI sovereignty requires tackling multiple interconnected crises through approaches that empower local communities while ensuring responsible technology deployment and environmental sustainability.


Session transcript

Alexandra Krastins Lopes: Good morning everyone, both here in the room and those joining us online. It’s a pleasure to welcome you all to this important session at the Internet Governance Forum 2025. This panel, titled AI at the Crossroads between Sovereignty and Sustainability, is a joint initiative between LAPIN, the Laboratory of Public Policy and Internet, the Sustainable AI Lab at Bonn University and VLK Advogados. We are truly honored to host such a timely and global conversation and I want to begin by thanking our distinguished panelists for being here today. I’m Alexandra, I’m a co-founder of LAPIN and served for a few years in the Brazilian Data Protection Authority. Today I represent VLK Advogados, a Brazilian law firm where I provide legal counsel on data protection, AI, cybersecurity, juridical matters and government affairs. Now I’d like to pass the floor to José Renato, who will present himself and introduce the central topic of our discussion. José Renato.


Jose Renato Laranjeira de Pereira: Hello Ale, can you hear me? Yes. So, working good? Okay, great. Well, hello everyone. Good morning, good afternoon, good evening for those watching us. It is a pleasure to be here and thank you very much Ale, Alexandra for introducing me. My name is José Renato, I am also a co-founder of the Laboratory of Public Policy and Internet, LAPIN, and now also doing a PhD at the University of Bonn. I would also like to thank Thiago Moraes and Sietse Piku for helping organize the session and I’m very happy to be here. Well, our discussion is exactly in this intersection between artificial intelligence sovereignty and also the need for us to secure that technological developments that we carry out are consistent also with a very urgent need to tackle climate change, environmental collapse as a whole. So we have been identifying how there’s a growing discourse on not only AI but digital sovereignty as a whole among different governments. European Union is an example. Brazil, China, U.S. Social movements. So, different initiatives among indigenous peoples against worker or among workers movements are also talking about digital sovereignty, AI sovereignty, and etc. In the global south, both these nation-led discourses and also social movements discourse are very interrelated with the history of dependency, particularly on technology and infrastructure that dates back to colonial times and which persists through terms and periods in which coloniality and what many have called digital colonialism is also influencing these discourses. And well, we also know at the same time that AI is deeply connected with physical infrastructure, so it is very dependent, strongly dependent on minerals, on energy and water. So, our idea is how to discuss here how to advance these calls for further independence and control over these technologies and their infrastructures, while also avoiding expanding on the effects over the environment which are leading mostly to climate change. We’re also interested in understanding the differences between global south and north approaches to digital sovereignty, to AI sovereignty as a whole, and that is why we have participants from different, from distinct backgrounds here, government officials, representatives of international organization, academia and civil society as well, including from one social movement in Brazil which is taking the lead to claim digital sovereignty over their activities. So, I pass on now back to Alexandra to talk about the policy questions that we have thought for this panel. I’m looking very much forward to our discussion.


Alexandra Krastins Lopes: Thank you, José Renato. So, today we aim to explore the following policy questions. How can nations reduce their technological dependency in the realm of AI while ensuring that the development of these technologies leaves low environmental impacts and supports them in achieving the SDGs? What are the main tensions between the aspirations of governments and communities, including social movements and indigenous communities, with regard to AI sovereignty, and how can they be addressed? And finally, how can the multi-stakeholder model of internet governance be applied within the design of policies aimed at fostering sustainable AI sovereignty, so as to have the demands of social movements effectively taken into consideration? So let’s start with initial speeches from our dear panelists. I would like to check if Ana Valdivia is already with us. Okay, I’ll introduce her. Ana Valdivia is a departmental research lecturer in artificial intelligence, government and policy at the Oxford Internet Institute at the University of Oxford. She investigates how datafication and algorithmic systems are transforming political, social and ecological territories. Ana, the floor is yours. Thank you.


Ana Valdivia: Thank you very much, Jose, for organizing this panel about digital sovereignty and data colonialism. I’m very pleased to be here and I’m so sorry that I cannot be there in Norway, because I’ve been attending the main international conference in AI ethics where we have been discussing these current debates, right? And something very relevant that we have found out was that LLMs or generative AI is becoming bigger, and that doesn’t mean that it’s becoming better, because something that we found out in the conference is that LLMs that are bigger reproduce and learn more stereotypes than smaller LLMs. So that comes with oversight. I’ve been studying the effects, like environmental impacts, and I’ve been analyzing the environmental impacts of AI for years now, and I’ve been analyzing the supply chain of artificial intelligence. And something that I realized is that while national states have this narrative towards digital sovereignty, and for instance in the UK, the government wants to develop more data centers to be digitally sovereign, there is another part of this debate that is neglected, which is that this infrastructure cannot be sovereign, because this infrastructure depends, as you have said, on different minerals and other natural resources that are not embedded in our so-called national states. So that’s it. If the UK wants to become digitally sovereign, it depends on other countries like Brazil, like Pakistan, like China, like Taiwan, to develop all this infrastructure. For instance, to develop the AI chips, which are named GPUs, graphical processing units, you need cobalt, you need tungsten, you need copper, you need aluminum, and these minerals are extracted from other geographies that are basically geographies within the global majority, and the extraction of these minerals has direct impacts on communities living nearby, as we have seen in the past literature on geography and extractivism. But then the increasing size of AI algorithms like GPTs and other LLMs comes with other side effects, as I have said, because now it’s not only about mineral extraction, it’s also about the processing, the training of these algorithms. And this comes also with other environmental impacts like water extraction and land, and I have seen that in Mexico, for instance. So in Mexico, we have the state of Querétaro that is inviting a lot of data centers and a lot of big tech companies to deploy their AI infrastructure there. While I can see like the positive side of this, which is like, you know, the infrastructure of AI is going to be democratized because it’s going to be present in different states, that comes with other side effects, like the government is inviting this infrastructure without asking democratically to the communities whether they want this infrastructure there. Because this infrastructure, like we know that data centers are connected to the electricity 24 hours a day, seven days a week, 365 days a year. So that means that they are using water, they are using electricity, all the days. And in Querétaro, Querétaro is becoming the only state in Mexico in which 100% of its territory is at risk of drought. So that means that communities don’t have access to water. And this is something that I’ve witnessed with my own eyes: when I visit these communities in Querétaro, I’ve seen how they don’t have access to water. They only have access to water one hour per week, while on the other side, these infrastructures have access to water 24 hours a day.
So AI is not only nowadays reproducing stereotypes and biases, it’s also reproducing climate injustice, because if we don’t regulate how this infrastructure is being implemented in different geographies, it’s going to exacerbate the consequences of climate injustice. So something that we have proposed in this conference on AI ethics is that rather than talk about digital sovereignty, which creates sort of like frictions between states, because all the states in the world want to become digitally sovereign, we should talk about digital solidarity. And we should talk about how we can create networks of solidarity, that we help one state with another state, all together, to develop digital sovereignty and how we can become as a community independent from big tech companies that nowadays are accumulating all the innovation. Because for instance, as an expert in AI, when I did my PhD, I could develop my own AI algorithms with my own laptop. And nowadays, I could see that the innovation on AI relies on big tech companies. We are not able to develop AI technology anymore. We have to depend on big tech companies. And it has also become clear in this conference on ethics how the LLMs that we know, like GPT and Llama, are developed by big tech companies. They are not developed by universities. They are not developed by other institutions, technical institutions anymore. So it’s not only about infrastructure. It’s also about how we can become digitally sovereign and how we can develop this AI with our own hands and with our own infrastructure. So I think that’s my intervention. And thank you. I’m looking forward to hearing what the others have to say and the Q&A. Thank you very much.


Alexandra Krastins Lopes: Thank you, Ana. Now I’ll pass the floor to Alex Moltzau. He joined the European AI Office in the European Commission the day it went live as a policy officer and seconded national expert, sent from the Norwegian Ministry of Digitalization and Governance to DG CNECT A2, Artificial Intelligence Regulation and Compliance. He coordinates work on AI regulatory sandboxes and is currently also a visiting policy fellow at the University of Cambridge.


Alex Moltzau: Thank you so much. And it’s a pleasure to be here today. And really great to listen to the intervention of George. and Jose and Ana as well. So my name is Alex Moltzau, as was said, and I also, you know, I think it’s being here today is really wonderful as someone who is seconded from Norway to the Commission to see kind of everyone come together, you know, and I think it’s a really, really bright community. But this topic that we are discussing here is really, really close to my heart. So my background is in social data science, so which combines kind of social concerns with data science methods, I mean, programming oriented, but at the same time, with inspiration from a lot of social science fields. But I also have a master’s in artificial intelligence for public services. And where Jose is placed, they run a conference about AI and sustainability and I spoke at the first edition, although I have not spoken at the ones prior. I previously held a TEDx talk on AI and the climate crisis in 2020. So I think like, for me, I just saw that, you know, I think we were seeing all this compute increasing, you know, infrastructure being built. And with the consumption patterns, you know, in all other fields, it was kind of strange to think that this is not going to be a problem. So I honestly, I think what we are dealing with here, you know, is something that is strange that we haven’t seen much more clear, you know, because I think we want to deliver great services to our people. And we want to also have amazing companies and compete in a friendly way as much as possible. But at the same time, we have a shared problem, you know. And these are expressed through the sustainable development goals. And I worked with AI policy full time for the last five years prior to joining the AI office, where I have worked now for one year. And before that, I worked with a nonprofit organization. and the so-called Young Sustainable Impact. So that had a community of around 11,000 people around the world and we worked to try to think how can we bring forward new solutions and new companies to address the Sustainable Development Goals. But I think maybe we were a bit naive. But I think we have to be naive and I think we have to believe in that brighter future and for sure that is not to just senselessly use technology without any thoughts about responsibility and without the context that we live in. Because we live in a time where we have a climate crisis, we live in a time where we have a plurality of different crises that we are facing and we can only face them in digital solidarity. So I really think that what Ana says and what she said about the minerals is something that also is very clear. And I’m also glad to say that where I’m working now in DigiConnect in the European AI office at the very ground floor of the building, we have a really large artwork and it’s called the Anatomy of an AI System which shows the value chains of Alexa Echo and how that is linked together and it’s an artwork created by Kate Crawford and Vladan Joler. So in a way every single time we walk into the building we are looking at that artwork by Kate Crawford and Vladan Joler. So I think what I like about the European Commission and what I like about the people that work there is that they really care deeply about these things. So I can tell you that for sure it’s not something that we want to ignore, it’s something that we really want to commit towards. 
But today I’m not talking on behalf of the European Commission, I’m not representing their official perspectives, I’m just here as an individual, but I will still tell you about a few of the things that we are working on. There’s also a collaboration that we have rolled out to finance work on generative AI, to kind of get new perspectives, solutions, companies, in collaboration with Africa. And there’s 5 million euros there committed to this. So I will encourage anyone working here in Europe or in Africa to kind of apply together for that. And the deadline is the 2nd of October this year. So please consider seeing if there’s any kind of good project for collaboration on that. And if you have read the EU AI Act, you might have seen a small part of it. It’s also that there’s a commitment to a standardization request on energy reduction. And there is also a study on green AI running now internally in the Commission. So I think this is also kind of like, although I would like to have seen that we did a lot more, it is not like we are doing nothing, I’m happy to say. But I think what we have to do is to think about the rollout of all these large-scale policy mechanisms that we are rolling out now. And it is a lot. InvestAI was announced during the AI Action Summit, 200 billion euros. Investment is not a joke. There’s quite a significant investment there. We’re rolling out AI factories, gigafactories. We have the Cloud and AI Development Act now to try to think about this in more ways. There are a lot of movements to really scale up digital in Europe. But doesn’t sovereignty mean that we should decide for a better future? If sovereignty means that we can make those decisions, if sovereignty means that we can decide to do something that would be better for our citizens, better for the population, then I would think that means that also that rollout has to be as responsible as possible, as sustainable as possible, as green as possible. And of course, that is my personal opinion. And I really look forward to listening to the other panelists and discussing today.


Alexandra Krastins Lopes: and discussing today. Thank you, Alex. A very interesting thing you said about we shouldn’t use that without the context we live in. José Renato wants to say something.


Jose Renato Laranjeira de Pereira: Yeah, sure. Thanks, Ale. I would just like now to, well, first of all, thank the first two speakers. I think that we already have lots of interesting topics for the Q&A, but I would like now to introduce Pedro Ivo Ferraz da Silva. He’s a career diplomat and the Coordinator for Scientific and Technological Affairs at the Climate Department of the Ministry of Foreign Affairs in Brazil. He’s also a member of the Technology Executive Committee of the United Nations Framework Convention on Climate Change, the UNFCCC, and Brazil’s focal point to the Intergovernmental Panel on Climate Change, the IPCC. And, well, Pedro, the floor is yours, but I would also like to say that I don’t think that there’s no one now more attentive and also with the knowledge of what’s going on in the discussions at the UNFCCC and this intersection with technology than Pedro, and particularly considering the fact that Brazil will host the next COP now in November in Belém. So, Pedro Ivo, the floor is yours. It’s great to see you here.


Pedro Ivo Ferraz da Silva: Yeah, thank you very much, José Renato, Alexandra, and also other colleagues in the panel. It’s a pleasure to actually reconnect with the IGF after 10 years. I had the honor to organize IGF 2015 in João Pessoa, in Brazil, and by that time AI was actually emerging as a topic, and climate change and sustainable development goals in more general terms were rather just a subtopic, you know, in the context of the discussion. So, I’m glad that, you know, after 10 years things have evolved and we are here delving into very interesting topics. I greet you all from Bonn. We’ve just concluded the June climate negotiations and AI was a very important topic of discussion here. And, you know, tackling of course, the benefits that AI can bring to climate action. And also, of course, the footprint, various environmental footprints, as also Ana Valdivia indicated, that was also part here of the discussions. So, as you know, the world is facing, among many others, the challenge of accelerating digital transformation while staying within planetary boundaries. And as I said, AI is both a powerful tool and a source of new tensions. It can be used in many ways to, for example, model climate risks, forecast disasters. It can be used to optimize infrastructure for low carbon development, but it can also deepen inequality. It can, you know, centralize control and, you know, again, exacerbate many environmental harms if it is left unchecked. So the question, I mean, it’s not whether AI will be used or not. It is already being used. The real question is, you know, who decides how it is used, for what purpose and at what cost? In this context, I think governments have a critical role, not only as regulators, but also as stewards of public interest and also as a driver of innovation and development. You know, governments must ensure that AI governance frameworks are rooted in democratic values. It is very important that AI is aligned with climate goals and also protects human rights. At the same time, I think that these frameworks, they must encourage innovation. And if we look at the climate, you know, innovation within the climate context, I think there is a dire need that AI is not only a driver for innovation for mitigation purposes, but also for adaptation and resilience in vulnerable communities. So I think this discussion that we are having here, and I thank again LAPIN and partners for organizing this panel here at the IGF, I think it’s timely, you know, as we look ahead to COP30 in Belém, in the heart of the Amazon. You know, the Brazilian presidency of COP30 has proposed a vision, yeah, for the COP that is centered around the idea of mutirão, which is a word a bit difficult to, you know, pronounce, but it means a collective and community driven effort to tackle, you know, shared challenges. And it is a concept that reminds us permanently that, you know, climate action is not just about technology, but also about, you know, cooperation, participation and shared responsibilities. So this kind of ethos must also guide how we approach the governance of AI. And yeah, I mean, the current global landscape of AI, I think, reflects a profound asymmetry, you know: while, and we have mentioned it here, AI has an enormous potential to support climate action, its development and deployment are dominated by a few countries and a few corporations, and I think it was also mentioned by the previous panelists.
So most of the world remains excluded from shaping these systems. And at the same time, the environmental footprint of AI is increasing. And here, a very important aspect, while transparency of AI is declining. I mean, a recent study, a study from one, two weeks ago, found that 84% of widely used large language models provide no disclosure at all of their energy use or emissions. Without better reporting, we cannot assess the actual trade-offs, we cannot design informed policies and we cannot hold AI and related infrastructures accountable. You know, that’s why inclusive international cooperation is essential and it must be accompanied by, you know, local empowerment. I refer again to another report that was prepared by UNCTAD, its technology innovation report from this year, titled Inclusive AI for Development. You know, it lays out, among many other things, that developing countries need to strengthen specifically three strategic capabilities in order to be able also to shape AI: skills, data and infrastructure. So, it frames these as leverage points that will allow countries of the global south not only to access AI, but to really, you know, shape it in ways that must reflect local priorities, protect, of course, biodiversity, protect natural resources and advance climate justice. And, you know, this is not just about developing new technologies, it’s also about ensuring that AI systems are embedded in institutions, practices and values that are transparent, inclusive, and of course climate aligned. And as we look into the future, I think we should not reject, or let’s say we should reject actually the false binary that exists between national sovereignty and global cooperation. I think we need both of them to be rooted in equity, climate responsibility, and I think the mutirão spirit kind of conveys this and allows us to come forward. So these are my initial remarks. I thank you all for, again, for the invitation, discussion, and looking forward to the Q&A. Thank you very much.


Jose Renato Laranjeira de Pereira: Thank you very much, Pedro. Great thoughts. Well, I’m really looking forward to the Q&A. But well, for now, I’ll introduce Yu Ping Chan, who is with us on site as well. Yu Ping Chan heads digital partnerships and engagements at the UNDP, the United Nations Development Agency. And before joining the UN Secretariat, Yu Ping was a diplomat in the Singaporean Foreign Service. So lots of diplomats here in this session. Yu Ping has a Bachelor of Arts Magna Cum Laude from Harvard University and a Master’s of Public Administration from Columbia University’s School of International and Public Affairs. Welcome Yu Ping. And now you have the floor, please.


Yu Ping Chan: Thank you so much to the organizers for having me here today. So I represent the United Nations Development Program. As Jose has mentioned, we are the development wing of the United Nations. We’re in over 170 countries and territories around the world, supporting governments through all phases of development, all aspects, sectors, and so forth. The digital programming at UNDP is actually quite extensive. now in more than 130 countries, I believe, supporting them on leveraging digital and AI to achieve the sustainable development goals. And so it’s really very interesting to be part of this conversation and really hearing your thoughts about what is so critical in terms of this intersection between digital and the environment. I’m also very privileged to follow Pedro because I couldn’t agree more with some of the areas that he’s highlighted in terms of the challenges here. And we as UNDP have been actually very privileged to work very closely with the Brazilian COP presidency in the lead up to COP and thinking about how these issues intertwine. So when he talks about these challenges around AI exclusion, AI inequality, this is also the framing that UNDP is looking at. When in terms of considering how the AI revolution is going to potentially leave behind even more countries in the world and really exacerbate the divides between the global South and the global North. When for instance, projections show that only 10% of the economic value that will be generated by AI in 2030 will accrue to the global South majority countries with the exception of China. We really have a situation where the AI future is going to be even more unequal than what we already see today. When presently, for instance, over 95% of the top AI talent of the world is concentrated in six research universities, which are in the US and China basically, you really see how we run this risk of having AI be in some ways the domain of, as already pointed about the panelists, certain exclusive types of monopolies, tech companies and develop in certain ways and not responding to the needs of local populations and the majority of the world. So this is where UNDP really has been looking at how we strengthen local ecosystems, ensure inclusivity in data models, LLMs, and the AI systems that will be generated in the future. This is also not to say that it’s not even just about AI, right? Because even before we have AI, we need to have data. Before we have data, we need to have basic connectivity. Before we have connectivity, we also have to talk about things such as infrastructure and energy, all of which are challenges for the global sub-countries across the globe. So, from UNDP’s perspective, it’s not enough to just think of AI by itself, right? You need to think about the entire developmental spectrum across all these issues and really tie digital and AI, digital transformation itself, as part of this holistic approach that goes beyond just one ministry but really thinks about the broader approaches to sustainability and inclusion and really digital transformation as part of the societal approach as a whole. So, for instance, we’ve initiated a lot of work around some of the areas that other panelists have already highlighted. The gaps around skills, compute, and talent. 
Just last week in Italy, we launched the AI Hub for Sustainable Development with the Italian Presidency, which is a product of the G7 Presidency, that is looking at how we can support local AI ecosystems in Africa, strengthen AI innovation, and also partner with AI startups in Africa to bring them to scale and to really build that capacity within Africa to be part of the AI revolution. We’ve also worked on various areas when it comes to digital and connectivity, as well as digital and environmental sustainability and climate issues. We have a Digital for Planet offer where, besides the fact that we’ve worked closely with the Brazilian COP Presidency, we also lead the Coalition on Digital Environmental Sustainability with the International Telecommunications Union, UNEP, the German Environmental Ministry, the Kenyan government, and various civil society organizations such as the International Science Council, Future Earth, and so forth, to really think about what kind of thought leadership and global advocacy we need around this intersection of digital and environmental sustainability. And this is in addition to the work that is being done, as I mentioned, in UNDP’s country offices all around the world, where we’ve worked on national carbon registry systems, digital public infrastructure for climate in countries like Namibia, Cote d’Ivoire, Costa Rica, Nigeria, Sri Lanka. I have a very long list of many, many projects that I could list, but suffice to say there is a lot of information online about what UNDP is doing in the area of digital, environment and climate all around the world. But all of this is not to say that it’s enough, because I think some of the other panelists have already talked about how we are aspiring to something a lot greater than just these pieces, right? It’s not enough to say we are doing these projects, we also have to be very thoughtful in how we roll out these projects, roll out these big investments, exactly as just how you’ve spoken about. And actually it’s very interesting that Jose invited me to be part of this panel today because this actually came from another convening that we did last year at the IGF in Riyadh, where we were developing what we call the Hamburg Declaration on Responsible AI for the SDGs. And this was actually just launched two weeks ago at the Hamburg Sustainability Conference, where we precisely are asking development practitioners, the multi-stakeholder community, governments, investment banks and civil society community to come together to think about how, in the use of AI, we have to be responsible in how we deploy, use and design AI for development outcomes, precisely in these areas of people, planet, inclusivity and so forth. So we’ve already garnered over 50 stakeholders that signed on to the Hamburg Declaration on Responsible AI for the SDGs, which is the first multi-stakeholder document in this particular space. We would encourage and welcome more organizations to sign up, make commitments in this regard, because it’s precisely that. How do you thoughtfully engage with AI and how do you commit to using AI responsibly in achievement of the sustainable development goals and in environmental sustainability as well. So I look forward to hearing from all of you.


Alexandra Krastins Lopes: Thank you. Now I pass the floor to Alexander Costa Barbosa. He’s a member of the Homeless Workers Movement, a digital policy consultant and researcher.


Alexander Costa Barbosa: Thank you, Alexandra. Can you hear me? Yes. I would like to say hello to the panelists. I’m really pleased to be here. Thank you for the invitation. I am Alexander Costa Barbosa. As Alexandra said, I’m a member of the Homeless Workers Movement. Some of you may be asking, what is the Homeless Workers Movement? It’s a housing movement in Brazil. It was founded in 1997. As you can imagine, there’s a huge gap of housing in Brazil. Different statistics, but you can consider even 30 million people living in precarious conditions of housing. So when we see that the state does not have the proper tools and instruments to really deal with this issue, people themselves started struggling and fighting for this. I’m just saying that because the same applies to technology and digital sovereignty. Our approach to digital sovereignty, or what we call popular digital sovereignty, and by popular here, I’m referring to the Latin American version of popular, deals with the mass aspect of sovereignty instead of the so-called folkloric aspect of popular. For us, it’s mainly what we’ve been doing the past five years. It’s like doing things that the state actually hasn’t provided to us so far. So really fighting for meaningful connectivity, digital literacy in the periphery, in favelas, in slums, and so on. Also fighting for decent work, decent digital labor, beyond the statements in the academy. And then we realized that what we’ve been doing in practice is somehow what we claim by digital sovereignty. But for this specific panel, I think it’s relevant to emphasize that in this first semester, Brazil is also chairing the BRICS Summit in the following week. And within the BRICS structure, there is the BRICS Civil Forum, to which Brazil also added this popular dimension, so the BRICS Civil Popular Forum. And we also co-led the Digital Sovereignty Working Group with the Landless Workers Movement, which is another social movement really important in Brazil, struggling for land reform. And this work was really, really interesting. You’re probably gonna have access to this document in the following week, but there we promoted this idea of people-centric digital sovereignty. And we also outlined some guidelines for the New Development Bank to finance digital public infrastructures, having in consideration both people and nature, climate needs, and so on. There are also other guidelines specifically to deal with AI development. And I think it’s really worth checking this document in the following week. When I mentioned here this meaningful access, digital literacy, decent work, and so on, it’s just to highlight that whenever we talk about AI sovereignty, we cannot restrict this discussion to, I think as the other panelists already mentioned it, to like computing power, or to regulatory capacity, or even data capacity, or risk-based regulation, and so on. But also considering connectivity, electricity access, digital literacy access, and also a transition to decent, better jobs in the so-called AI era. I think that’s mainly the initial contribution that I put in place. If you have any other questions, feel free to reach us. If you’re curious about what a social movement is doing in regards to digital sovereignty, you can also access our website. I will provide it later in the chat here, and eventually the moderators can also share it with the other attendees. Our approach to digital sovereignty, I think, is pretty much aligned with the sustainable vision of digital sovereignty.
And just to add this more critical aspect of sustainability here, right? We’ve been watching this greenwashing agenda over sustainability in the past 15 years. So eventually it’s time to change to alternatives to development, right? Especially in Latin America, and briefly speaking here about the Latin American environment, we have other agendas, alternative agendas, such as Buen Vivir, or Good Living, or even commons-based development. I think that’s pretty much aligned with this climate justice discussion. Thank you very much.


Alexandra Krastins Lopes: Thank you, Alexander. Now, I would like to know if we have any questions on the floor. Please feel free to approach the microphone.


Raoul Danniel Abellar Manuel: Hello. Can I be heard? Yes. Okay, thank you. My name is Raul Manuel and I am a member of parliament from the Philippines representing the Youth Party. So I’d like to address my question to a representative from the European Commission. So, because in the Philippines in our case, we also would like to look at the labor angle of artificial intelligence because to develop the large language models, it also entails a lot of labor, especially for those in like call centers, where the structure is like a call center, but what the people do is actually to train the large language models. And the one thing that we want to ensure also for our citizens is how to not replicate, you know, the old exploitative practices in labor and how that might extend to AI development. So, since the continent of Europe is like some steps ahead in terms of regulating AI, I’d like to ask if ever there are any provisions in your current laws or policies that also touch into protection of labor and workers. Thank you.


Alexandra Krastins Lopes: Thank you. Just a reminder to the speakers, when answering the question, please also state your final remarks. Thanks.


Alex Moltzau: Yes. So I guess this is a question directly to me. And I’m here today in a personal capacity. So I’m not presenting the official views and perspectives of the European Commission. First and foremost, I would like to say that. And I mean, like I have a bit of a background as well as a Norwegian, you know, as a country that cares a lot about labor legislation and about collaboration. I also always actively talk to unions when I travel back to Oslo, because I think it’s extremely important to think about the impact on laborers and the impact on the way that we work, but also the way that we are affected. And I think what you are saying is extremely interesting also because what we are seeing is that all these large language models, they require a lot of supervised machine learning. So we have to tag all this data for these different algorithms, and that requires a lot of human labor. And I think part of the backdrop here as well is that, for example in Kenya, there were a lot of movements as well to unionize, to see is there any kind of way to increase our rights or to increase the pay that we get for doing all of this work and making sure that these models actually work in practice. So I think your question is extremely timely. And in the European Union we still have fairly strong labor legislation, right? So I think it’s like saying that AI does not operate in a vacuum. We have existing laws. We have existing values. So let’s make sure that those existing laws and the values that we have really are ways that we act in the field of AI, because I don’t think it is right now. But I think there is such a long way to go. So I just wanted to thank you for that. And in a sense, how do we have ways to handle that within the field of AI is also something that I have seen the European Commission is working on currently, but I don’t think I can give you kind of a definite answer on how to protect overall the laborers. The AI Act does include kind of concerns regarding employment as well in kind of risk categories. So in this way, at least in our region, it has a consequence. But with that, I guess that’s my final comment. And I just pass it to other questions.


Edmon Chung: Thank you. We have another question. Edmon Chung from Dot Asia. Thank you. Thank you for bringing the topic and especially linking it to sovereignty and digital sovereignty. I think many of the panelists have touched on this, and I think Pedro mentioned the false dichotomy between national digital sovereignty and global cooperation, especially global public interests in my mind. One of the things I think perhaps I’d like to hear from the panel, but also to really think about, is the personal digital sovereignty as well. I think earlier, sorry I forgot the name of the person, mentioned popular sovereignty, because it’s the personal digital sovereignty, and I think Yu Ping mentioned data coming before AI. The personal digital sovereignty is actually a very important part of, you know, really safeguarding AI that is people centric, like for the end user ultimately. So it is not even a dichotomy. I think, in order to bring it to full circle, it’s both, you know, it’s not only both, it’s the personal digital sovereignty, national digital sovereignty and the global public interest, which brings it into the full loop. So yeah, that’s my contribution. Okay I’ll take the next


Participant: question and then we’ll go to the answers. Yes you can. Hello thank you for this amazing panel. My name is Lucia and I come from Peru which is a country in which digital divide is also a huge concern so that’s why I think this vision of digital sovereignty also involving things as you know digital literacy and appropriation of the technology. So I would like to ask about environmental sustainability because at least in my country there’s like a race in order to regulate AI. and we are like the first country in our region that has AI law and we are trying to also approve a regulation on this but there’s a huge environmental view missing and we also know that this is also happening in the digital public infrastructure in general so I would like to ask how do you think that, for instance, we as civil society organizations, also with grassroot organizations, can advocate about that without getting into this greenwashing approach that our colleague from Brazil was sharing with us?


Alexandra Krastins Lopes: Thank you. We have less than five minutes, please. The speakers feel free to answer rapidly. Thank you.


Ana Valdivia: Thank you very much. Thank you for this question. I can share my insights from doing fieldwork in Mexico and Chile regarding the sustainability and environmental impacts of data centers there. I think one solution would be to talk to other colleagues of yours, because there are a lot of social movements in Latin America, and I can talk about Surciendo in Mexico, Derechos Digitales. There are also other movements in Chile, and if you want, I can put you in touch with them, because they have been advocating for more transparency. Currently, for instance, in Mexico, we don’t know how much water and energy data centers are using. Chile had a platform where the citizens in Chile could access the environmental reports of data centers, and due to pressures by the data center industry, the Chilean government has decided to cancel this platform, so data centers are not able to report these environmental impacts on this platform anymore. I think we can create this sort of solidarity that I mentioned in my intervention and I will be happy to be in touch and to put you in contact with other organizations in Latin America. Thank you for your question.


Alexandra Krastins Lopes: Okay, Alexandra Barbosa.


Alexander Costa Barbosa: I'd like to react to the first question as well, and to emphasize that in the current conjuncture all of this discussion, AI sovereignty, AI regulation, AI and environmental sustainability, has to do with politics at the end of the day. At the beginning of the discussion on AI regulation in Brazil, workers' rights could not be included in the legislation finally approved, and much the same applies to organizing social movements, especially popular and grassroots movements, to deal with environmental concerns. We have seen the indigenous struggle against deforestation in the Amazon region in recent years, for instance, and just to give you a glimpse, the Brazilian Congress is at this moment completely opposed to any efforts from the government. So, just to give you an idea, it is much, much more difficult than any specific guideline we may have in mind. Thank you very much for the opportunity.


Alexandra Krastins Lopes: Yu Ping?


Yu Ping Chan: And just to add to this dimension: it's not just politics, right? It's also big tech and the profit motive. On the first question, there is a need for labor regulation, but there is also a question of who owns the products of that labor in the end, because the LLMs themselves are going to be owned by big tech companies and not freely available to the populations that put in the data or the effort to actually create them. So there are all these issues tied into the technology, which really requires, and I really like this point, the mobilization of concerned individuals, groups and so forth that share experiences and thoughts about how to respond. On the question of what we should do, and I want to link it to what was said earlier, that perhaps sometimes we are naive in what we try to achieve, my closing message would be to continue to speak up, to be involved, and to really think about how we collectively can make the changes that we want to see.


Pedro Ivo Ferraz da Silva: Yes, I know the time is over, but from all the questions and comments made here, one conclusion I draw is that we perhaps need to move away from certain narratives that come especially from developed countries, for example that we live in a moment of a triple planetary crisis, a view that tries to limit the problems we face in the world. I would rather say we live in a moment of poly-crisis, which contains of course the environmental crisis, but also a social crisis of diminishing labor rights, while people still fight to overcome the challenges of hunger and poverty, and of course the crisis related to digital rights, which was very central to the debate at IGF 2015, in which I participated. We need to tackle all these crises in a coherent way, and encouraging social movements and grassroots movements is fundamental. Technology can play a very important role here by leveraging those movements. So perhaps that is the final message: let us acknowledge that we are facing various crises at the moment, and let us use technology to address them in a coherent way. Thank you.


Alexandra Krastins Lopes: Thank you all for the great discussion. Can we please take a picture? Can you put the speakers on the screen, please? Thank you.



Jose Renato Laranjeira de Pereira

Speech speed

137 words per minute

Speech length

702 words

Speech time

305 seconds

Growing discourse on AI sovereignty among governments and social movements is interrelated with history of technological dependency dating back to colonial times

Explanation

The speaker argues that current discussions about AI and digital sovereignty are deeply connected to historical patterns of technological dependency that originated during colonial periods. This dependency continues today through what is termed ‘digital colonialism,’ influencing how both governments and social movements approach sovereignty over AI technologies.


Evidence

Examples include European Union, Brazil, China, U.S. initiatives, and social movements among indigenous peoples and workers


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Human rights principles | Legal and regulatory



Ana Valdivia

Speech speed

140 words per minute

Speech length

1085 words

Speech time

461 seconds

AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating interdependencies

Explanation

Valdivia contends that national digital sovereignty is impossible because AI infrastructure requires minerals like cobalt, tungsten, copper, and aluminum that are extracted from different geographies, primarily in the Global South. This creates unavoidable dependencies between countries, making true sovereignty unattainable.


Evidence

UK’s digital sovereignty depends on countries like Brazil, Pakistan, China, Taiwan for minerals needed for AI chips (GPUs)


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Infrastructure | Economic


Agreed with

– Pedro Ivo Ferraz da Silva

Agreed on

Global South bears environmental costs while being excluded from AI governance


Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition

Explanation

Instead of pursuing individual digital sovereignty that creates friction between states, Valdivia proposes a model of digital solidarity where countries work together cooperatively. This approach would help all states become collectively independent from big tech companies that currently dominate AI innovation.


Evidence

Big tech companies now control AI development – researchers can no longer develop AI algorithms independently as they could in the past


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Economic | Legal and regulatory


Disagreed with

– Pedro Ivo Ferraz da Silva

Disagreed on

Digital Sovereignty vs Digital Solidarity Approach


Larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better

Explanation

Valdivia argues that as large language models become bigger, they actually learn and reproduce more stereotypes rather than improving in quality. This challenges the assumption that larger AI models are inherently better while highlighting their increased resource consumption.


Evidence

Findings from international AI ethics conference showing LLMs that are bigger reproduce more stereotypes than smaller LLMs


Major discussion point

Environmental Impact and Climate Justice


Topics

Human rights principles | Development | Sociocultural


AI development reproduces climate injustice through unequal access to resources like water, with communities having limited access while data centers operate 24/7

Explanation

The speaker demonstrates how AI infrastructure creates environmental injustice by monopolizing essential resources like water. While local communities face severe water scarcity, data centers maintain constant access to water for their operations, exacerbating existing inequalities.


Evidence

In Querétaro, Mexico, communities have access to water only one hour per week while data centers have 24/7 access; Querétaro is becoming the only Mexican state with 100% territory at drought risk


Major discussion point

Environmental Impact and Climate Justice


Topics

Development | Human rights principles | Sustainable development


Data centers in Mexico are being deployed without democratic consultation with communities, exacerbating drought conditions

Explanation

Valdivia criticizes the lack of democratic participation in decisions about AI infrastructure deployment. Governments are inviting data centers without consulting local communities who will bear the environmental costs, particularly in water-stressed regions.


Evidence

State of Querétaro inviting big tech companies to deploy AI infrastructure while becoming 100% at risk of drought


Major discussion point

Environmental Impact and Climate Justice


Topics

Human rights principles | Development | Legal and regulatory


Agreed with

– Pedro Ivo Ferraz da Silva

Agreed on

Need for transparency in AI environmental impact reporting


AI development is now dominated by big tech companies rather than universities or other institutions, limiting innovation access

Explanation

The speaker argues that AI development has become centralized in big tech companies, unlike in the past when researchers could develop AI algorithms independently. This concentration limits broader access to AI innovation and development capabilities.


Evidence

LLMs like GPT and Llama are developed by big tech companies, not universities or other technical institutions; researchers can no longer develop AI with their own laptops as they could during PhD studies


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Economic | Development | Legal and regulatory


Agreed with

– Yu Ping Chan

Agreed on

AI development is dominated by big tech companies, excluding broader participation



Pedro Ivo Ferraz da Silva

Speech speed

115 words per minute

Speech length

1170 words

Speech time

609 seconds

False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility

Explanation

Silva argues against viewing national sovereignty and international cooperation as opposing concepts. Instead, he advocates for an approach that combines both elements, grounded in principles of equity and climate responsibility, rejecting the either-or mentality.


Evidence

Brazilian COP30 presidency’s vision of ‘mutirão’ – collective and community-driven effort for shared challenges


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Legal and regulatory | Development | Human rights principles


Agreed with

– Edmon Chung
– Alexander Costa Barbosa

Agreed on

Multi-level approach to digital sovereignty needed


Disagreed with

– Ana Valdivia

Disagreed on

Digital Sovereignty vs Digital Solidarity Approach


84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design

Explanation

Silva highlights the lack of transparency in AI systems regarding their environmental impact. Without proper disclosure of energy consumption and emissions, policymakers cannot make informed decisions or hold AI infrastructure accountable for their environmental footprint.


Evidence

Recent study from two weeks prior showing 84% of widely used LLMs provide no disclosure of energy use or emissions


Major discussion point

Environmental Impact and Climate Justice


Topics

Legal and regulatory | Development | Sustainable development


Agreed with

– Ana Valdivia

Agreed on

Need for transparency in AI environmental impact reporting


Most of the world remains excluded from shaping AI systems while bearing environmental costs of mineral extraction

Explanation

Silva points out the fundamental injustice where AI development is controlled by a few countries and corporations, while the environmental and social costs of mineral extraction for AI infrastructure are borne by communities in the Global South who have no say in how these systems are developed.


Evidence

AI development dominated by few countries and corporations while extraction impacts affect Global South communities


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Human rights principles | Economic


Agreed with

– Ana Valdivia

Agreed on

Global South bears environmental costs while being excluded from AI governance


Developing countries need to strengthen three strategic capabilities: skills, data, and infrastructure to shape AI according to local priorities

Explanation

Based on UNCTAD research, Silva identifies three key areas that developing countries must develop to move beyond merely accessing AI to actually shaping it according to their local needs and priorities. This represents a pathway from AI consumption to AI sovereignty.


Evidence

UNCTAD Technology Innovation Report ‘Inclusive AI for Development’ identifying skills, data, and infrastructure as leverage points


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Capacity development | Infrastructure


AI can be powerful tool for climate action through modeling risks, forecasting disasters, and optimizing low-carbon infrastructure

Explanation

Silva acknowledges the positive potential of AI for addressing climate challenges, including its applications in risk assessment, disaster prediction, and infrastructure optimization. However, he emphasizes this must be balanced against AI’s environmental costs and governance challenges.


Evidence

AI applications in climate risk modeling, disaster forecasting, and low-carbon infrastructure optimization


Major discussion point

Sustainable Development and AI Applications


Topics

Sustainable development | Development | Infrastructure


International cooperation must be accompanied by local empowerment and community participation

Explanation

Silva argues that effective AI governance requires both international collaboration and meaningful participation from local communities. This dual approach ensures that global cooperation doesn’t override local needs and priorities in AI development and deployment.


Evidence

Brazilian COP30 presidency’s ‘mutirão’ concept emphasizing collective and community-driven efforts


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Legal and regulatory


Social movements and grassroots organizations should be leveraged through technology to address multiple crises coherently

Explanation

Silva advocates for using technology to strengthen and support social movements and grassroots organizations as they work to address interconnected crises. He sees these movements as essential actors in creating coherent responses to complex challenges.


Evidence

Recognition of poly-crisis including environmental, social, and digital rights crises that need coherent responses


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Sociocultural


Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises

Explanation

Silva critiques the limited framing of current global challenges as merely a ‘triple planetary crisis’ and argues for recognizing a broader ‘poly-crisis’ that includes social issues like diminishing labor rights, ongoing poverty and hunger, and digital rights challenges that require comprehensive, interconnected solutions.


Evidence

Recognition that current crises include environmental issues, social crisis with diminishing labor rights, ongoing hunger and poverty, and digital rights crisis


Major discussion point

Sustainable Development and AI Applications


Topics

Development | Human rights principles | Sustainable development


Disagreed with

– Yu Ping Chan

Disagreed on

Scope of Current Global Crisis



Yu Ping Chan

Speech speed

195 words per minute

Speech length

1301 words

Speech time

399 seconds

Only 10% of economic value generated by AI in 2030 will accrue to Global South countries excluding China, exacerbating existing inequalities

Explanation

Chan presents projections showing that the economic benefits of AI will be heavily concentrated in developed countries, with the Global South receiving only a small fraction of the value. This distribution pattern will worsen existing global economic inequalities rather than providing development opportunities.


Evidence

Projections showing that only 10% of AI economic value in 2030 will go to Global South countries, excluding China


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Economic | Development | Digital access


Disagreed with

– Pedro Ivo Ferraz da Silva

Disagreed on

Scope of Current Global Crisis


Over 95% of top AI talent is concentrated in six research universities in the US and China, creating exclusive monopolies

Explanation

Chan highlights the extreme concentration of AI expertise in a handful of institutions, primarily in two countries. This concentration creates knowledge monopolies that exclude most of the world from participating in cutting-edge AI research and development.


Evidence

Over 95% of top AI talent concentrated in six research universities in US and China


Major discussion point

Global Inequality and Exclusion in AI Development


Topics

Development | Capacity development | Economic


Agreed with

– Ana Valdivia

Agreed on

AI development is dominated by big tech companies, excluding broader participation


Digital transformation must be part of holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy

Explanation

Chan argues that effective digital transformation requires coordination across multiple sectors and government departments rather than being confined to technology ministries. The interconnected nature of digital infrastructure demands comprehensive planning that addresses connectivity, infrastructure, and energy needs simultaneously.


Evidence

UNDP’s approach recognizing that before AI you need data, before data you need connectivity, before connectivity you need infrastructure and energy


Major discussion point

Sustainable Development and AI Applications


Topics

Infrastructure | Development | Legal and regulatory


Need for collective action and mobilization of concerned individuals and groups to address AI challenges

Explanation

Chan emphasizes that addressing AI-related challenges requires organized collective action from various stakeholders including individuals, civil society groups, and organizations. She advocates for continued advocacy and collaborative efforts to achieve desired changes in AI governance and development.


Evidence

Hamburg Declaration on Responsible AI for SDGs with over 50 stakeholders signing on as first multi-stakeholder document in this space


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Human rights principles | Development | Legal and regulatory


Question of ownership arises regarding who owns the products of labor used to create LLMs that end up owned by big tech companies

Explanation

Chan raises critical questions about labor exploitation in AI development, pointing out that while workers contribute their labor to train large language models, the resulting products are owned by big tech companies rather than being freely available to the communities that helped create them.


Evidence

LLMs are owned by big tech companies and not freely available to populations that provided data or efforts to create them


Major discussion point

Labor Rights and AI Development


Topics

Economic | Future of work | Human rights principles


Agreed with

– Alexander Costa Barbosa
– Raoul Danniel Abellar Manuel

Agreed on

Labor rights concerns in AI development



Alexander Costa Barbosa

Speech speed

121 words per minute

Speech length

823 words

Speech time

406 seconds

Popular digital sovereignty involves communities doing what the state hasn’t provided, focusing on meaningful connectivity and digital literacy in peripheries

Explanation

Barbosa explains that popular digital sovereignty emerges from communities taking initiative to address digital needs that governments have failed to meet. This grassroots approach focuses on practical solutions like ensuring meaningful internet access and digital education in marginalized areas like favelas and slums.


Evidence

Homeless Workers Movement’s work on meaningful connectivity, digital literacy in periphery, favelas, and slums, plus advocacy for decent digital labor


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Development | Digital access | Human rights principles


Agreed with

– Pedro Ivo Ferraz da Silva
– Edmon Chung

Agreed on

Multi-level approach to digital sovereignty needed


Workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates

Explanation

Barbosa points out that labor protections were not originally included in AI regulation frameworks, demonstrating how these policy discussions are fundamentally political processes where different interests compete for inclusion. This exclusion reflects broader power dynamics in technology governance.


Evidence

Workers’ rights couldn’t be included in final AI regulation approval initially, similar to environmental concerns facing opposition in Brazilian Congress


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Legal and regulatory | Human rights principles


Agreed with

– Yu Ping Chan
– Raoul Danniel Abellar Manuel

Agreed on

Labor rights concerns in AI development


Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions

Explanation

Barbosa advocates for moving beyond traditional development models toward alternative approaches rooted in Latin American concepts like ‘Buen Vivir’ (Good Living) and commons-based development. These approaches offer more sustainable and equitable alternatives that align with climate justice principles.


Evidence

Latin American alternative agendas such as Buen Vivir and commons-based development as alternatives to traditional development models


Major discussion point

Sustainable Development and AI Applications


Topics

Sustainable development | Development | Sociocultural



Edmon Chung

Speech speed

140 words per minute

Speech length

211 words

Speech time

90 seconds

Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems

Explanation

Chung argues that digital sovereignty must operate at multiple levels simultaneously – personal, national, and global – rather than viewing these as competing approaches. Personal digital sovereignty is particularly important for ensuring that AI systems truly serve end users and protect individual rights.


Evidence

Recognition that data comes before AI, and personal digital sovereignty safeguards people-centric AI for end users


Major discussion point

AI Sovereignty and Digital Dependency


Topics

Human rights principles | Privacy and data protection | Development


Agreed with

– Pedro Ivo Ferraz da Silva
– Alexander Costa Barbosa

Agreed on

Multi-level approach to digital sovereignty needed



Alexandra Krastins Lopes

Speech speed

112 words per minute

Speech length

562 words

Speech time

299 seconds

Multi-stakeholder model of internet governance should be applied to sustainable AI sovereignty policies to include social movements effectively

Explanation

Lopes proposes adapting the established multi-stakeholder governance model from internet governance to AI sovereignty policy-making. This approach would ensure that social movements and diverse stakeholders have meaningful participation in designing policies for sustainable AI development.


Major discussion point

Multi-stakeholder Governance and Cooperation


Topics

Legal and regulatory | Human rights principles | Development



Alex Moltzau

Speech speed

168 words per minute

Speech length

1446 words

Speech time

515 seconds

AI rollout must be as responsible, sustainable, and green as possible within the context of climate crisis

Explanation

Moltzau argues that given the current climate crisis and multiple global challenges, any deployment of AI technology must prioritize responsibility, sustainability, and environmental considerations. He emphasizes that technology deployment cannot ignore the broader context of environmental and social crises.


Evidence

European Commission’s commitment to standardization request on energy reduction and internal study on green AI


Major discussion point

Environmental Impact and Climate Justice


Topics

Sustainable development | Legal and regulatory | Development


AI operates within existing labor legislation frameworks, but there’s concern about protecting workers involved in supervised machine learning tasks

Explanation

Moltzau acknowledges that AI development should be governed by existing labor laws and protections, but expresses concern about whether current frameworks adequately protect workers involved in training AI systems. He references unionization efforts in countries like Kenya as examples of workers seeking better protections.


Evidence

EU AI Act includes employment concerns in risk categories; reference to Kenya unionization movements for AI training work


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Legal and regulatory | Human rights principles



Raoul Danniel Abellar Manuel

Speech speed

121 words per minute

Speech length

182 words

Speech time

90 seconds

Need to ensure labor protections in AI development to avoid replicating exploitative practices, especially in training large language models

Explanation

Manuel raises concerns about labor exploitation in AI development, particularly in the training of large language models which requires significant human labor in call center-like structures. He advocates for ensuring that AI development doesn’t perpetuate the same exploitative labor practices found in other industries.


Evidence

Philippines context where AI training involves call center-like structures for training large language models


Major discussion point

Labor Rights and AI Development


Topics

Future of work | Human rights principles | Development


Agreed with

– Yu Ping Chan
– Alexander Costa Barbosa

Agreed on

Labor rights concerns in AI development



Participant

Speech speed

128 words per minute

Speech length

176 words

Speech time

81 seconds

Environmental sustainability perspective is missing from AI regulation efforts, and civil society must advocate without falling into greenwashing

Explanation

The participant from Peru points out that environmental considerations are largely absent from AI regulation efforts, even as countries rush to develop AI laws. They seek guidance on how civil society can effectively advocate for environmental sustainability in AI policy without falling into superficial greenwashing approaches.


Evidence

Peru as first country in region with AI law but missing environmental perspective in digital public infrastructure regulation


Major discussion point

Environmental Impact and Climate Justice


Topics

Legal and regulatory | Sustainable development | Development


Agreements

Agreement points

AI development is dominated by big tech companies, excluding broader participation

Speakers

– Ana Valdivia
– Yu Ping Chan

Arguments

AI development is now dominated by big tech companies rather than universities or other institutions, limiting innovation access


Over 95% of top AI talent is concentrated in six research universities in the US and China, creating exclusive monopolies


Summary

Both speakers agree that AI development has become centralized in a small number of big tech companies and elite institutions, primarily in the US and China, which excludes most of the world from participating in AI innovation and creates monopolistic control over AI technologies.


Topics

Economic | Development | Legal and regulatory


Global South bears environmental costs while being excluded from AI governance

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

AI infrastructure cannot be truly sovereign because it depends on minerals and natural resources from other countries, creating interdependencies


Most of the world remains excluded from shaping AI systems while bearing environmental costs of mineral extraction


Summary

Both speakers highlight the fundamental injustice where Global South countries provide the raw materials and bear environmental costs for AI infrastructure while having no control over how AI systems are developed or deployed.


Topics

Development | Human rights principles | Economic


Need for transparency in AI environmental impact reporting

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Data centers in Mexico are being deployed without democratic consultation with communities, exacerbating drought conditions


84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design


Summary

Both speakers emphasize the critical lack of transparency regarding AI’s environmental impacts, with AI infrastructure being deployed without proper disclosure of resource consumption or community consultation.


Topics

Legal and regulatory | Development | Sustainable development


Labor rights concerns in AI development

Speakers

– Yu Ping Chan
– Alexander Costa Barbosa
– Raoul Danniel Abellar Manuel

Arguments

Question of ownership arises regarding who owns the products of labor used to create LLMs that end up owned by big tech companies


Workers’ rights were initially excluded from AI regulation discussions, highlighting the political nature of these debates


Need to ensure labor protections in AI development to avoid replicating exploitative practices, especially in training large language models


Summary

All three speakers express concern about labor exploitation in AI development, particularly regarding workers who train AI systems but don’t benefit from the resulting products, and the systematic exclusion of labor protections from AI governance discussions.


Topics

Future of work | Human rights principles | Economic


Multi-level approach to digital sovereignty needed

Speakers

– Pedro Ivo Ferraz da Silva
– Edmon Chung
– Alexander Costa Barbosa

Arguments

False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility


Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems


Popular digital sovereignty involves communities doing what the state hasn’t provided, focusing on meaningful connectivity and digital literacy in peripheries


Summary

These speakers agree that digital sovereignty cannot be achieved through a single approach but requires coordination across personal, community, national, and international levels, rejecting false dichotomies between different scales of governance.


Topics

Human rights principles | Development | Legal and regulatory


Similar viewpoints

Both speakers advocate for cooperative rather than competitive approaches to digital governance, emphasizing solidarity and collaboration while ensuring meaningful local participation in decision-making processes.

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


International cooperation must be accompanied by local empowerment and community participation


Topics

Development | Human rights principles | Legal and regulatory


Both speakers emphasize that AI and digital transformation must be approached holistically, considering environmental sustainability and requiring coordination across multiple sectors and policy areas rather than isolated technology-focused approaches.

Speakers

– Yu Ping Chan
– Alex Moltzau

Arguments

Digital transformation must be part of holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy


AI rollout must be as responsible, sustainable, and green as possible within the context of climate crisis


Topics

Sustainable development | Development | Infrastructure


Both speakers advocate for moving beyond conventional development models toward more comprehensive approaches that address interconnected social, environmental, and digital challenges through alternative frameworks rooted in justice and equity.

Speakers

– Alexander Costa Barbosa
– Pedro Ivo Ferraz da Silva

Arguments

Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions


Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises


Topics

Sustainable development | Development | Human rights principles


Unexpected consensus

Critique of larger AI models

Speakers

– Ana Valdivia

Arguments

Larger language models reproduce more stereotypes and biases while consuming more resources without necessarily being better


Explanation

It’s unexpected to find consensus challenging the prevailing industry narrative that bigger AI models are inherently better. This technical critique from an AI researcher directly contradicts the dominant trend toward ever-larger models, suggesting the field may be moving in the wrong direction.


Topics

Human rights principles | Development | Sociocultural


Government officials acknowledging AI governance failures

Speakers

– Pedro Ivo Ferraz da Silva
– Alex Moltzau

Arguments

84% of widely used large language models provide no disclosure of their energy use or emissions, preventing informed policy design


AI operates within existing labor legislation frameworks, but there’s concern about protecting workers involved in supervised machine learning tasks


Explanation

It’s notable that government representatives openly acknowledge significant gaps and failures in current AI governance, including lack of transparency and inadequate worker protections. This honest assessment from policy makers suggests genuine commitment to addressing these issues rather than defending status quo.


Topics

Legal and regulatory | Future of work | Sustainable development


Social movement and international organization alignment on systemic change

Speakers

– Alexander Costa Barbosa
– Yu Ping Chan
– Pedro Ivo Ferraz da Silva

Arguments

Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions


Need for collective action and mobilization of concerned individuals and groups to address AI challenges


Social movements and grassroots organizations should be leveraged through technology to address multiple crises coherently


Explanation

There’s unexpected consensus between grassroots social movements and established international organizations on the need for fundamental systemic change rather than incremental reforms. This alignment suggests broader recognition that current approaches are inadequate.


Topics

Development | Human rights principles | Sociocultural


Overall assessment

Summary

The speakers demonstrate remarkable consensus on several critical issues: the concentration of AI power in big tech companies, the environmental and social injustices created by current AI development patterns, the need for transparency and accountability, and the importance of multi-stakeholder approaches that include marginalized communities. There’s also strong agreement on the interconnected nature of digital, environmental, and social challenges.


Consensus level

High level of consensus with significant implications for AI governance. The agreement spans diverse stakeholders from government officials to social movement representatives, suggesting these concerns transcend traditional institutional boundaries. This consensus provides a strong foundation for coordinated action on AI governance reform, particularly around environmental sustainability, labor rights, and inclusive participation in AI development decisions.


Differences

Different viewpoints

Digital Sovereignty vs Digital Solidarity Approach

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


False binary exists between national sovereignty and global cooperation – both are needed and should be rooted in equity and climate responsibility


Summary

Valdivia argues for abandoning digital sovereignty discourse entirely in favor of digital solidarity, while Silva maintains that both national sovereignty and global cooperation can coexist without being contradictory


Topics

Development | Legal and regulatory | Human rights principles


Scope of Current Global Crisis

Speakers

– Pedro Ivo Ferraz da Silva
– Yu Ping Chan

Arguments

Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises


Only 10% of economic value generated by AI in 2030 will accrue to Global South countries excluding China, exacerbating existing inequalities


Summary

Silva advocates for a broader ‘poly-crisis’ framework that encompasses multiple interconnected challenges, while Chan focuses more specifically on economic inequality and AI exclusion as primary concerns


Topics

Development | Human rights principles | Sustainable development


Unexpected differences

Terminology and Framing of Sovereignty Discourse

Speakers

– Ana Valdivia
– Edmon Chung

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


Personal digital sovereignty is essential alongside national and global approaches to create people-centric AI systems


Explanation

While both speakers are concerned with power dynamics in AI governance, Valdivia wants to abandon sovereignty terminology entirely, while Chung wants to expand it to include personal sovereignty. This disagreement on terminology is unexpected given their shared concerns about democratizing AI governance


Topics

Human rights principles | Development | Privacy and data protection


Overall assessment

Summary

The main areas of disagreement center on approaches to digital sovereignty (solidarity vs. combined sovereignty-cooperation), the scope of global crises (poly-crisis vs. focused inequality concerns), and terminology for governance frameworks. However, there is strong consensus on core problems: AI’s environmental impact, exclusion of Global South, labor exploitation, and need for community empowerment


Disagreement level

Low to moderate disagreement level with high convergence on problem identification but differing on solutions and framing. The disagreements are more about strategic approaches and terminology rather than fundamental values, suggesting potential for synthesis and collaboration among the speakers’ perspectives


Partial agreements

Partial agreements

Similar viewpoints

Both speakers advocate for cooperative rather than competitive approaches to digital governance, emphasizing solidarity and collaboration while ensuring meaningful local participation in decision-making processes.

Speakers

– Ana Valdivia
– Pedro Ivo Ferraz da Silva

Arguments

Digital sovereignty should be replaced with digital solidarity to create networks of cooperation between states rather than competition


International cooperation must be accompanied by local empowerment and community participation


Topics

Development | Human rights principles | Legal and regulatory


Both speakers emphasize that AI and digital transformation must be approached holistically, considering environmental sustainability and requiring coordination across multiple sectors and policy areas rather than isolated technology-focused approaches.

Speakers

– Yu Ping Chan
– Alex Moltzau

Arguments

Digital transformation must be part of holistic approach beyond single ministries, encompassing connectivity, infrastructure, and energy


AI rollout must be as responsible, sustainable, and green as possible within the context of climate crisis


Topics

Sustainable development | Development | Infrastructure


Both speakers advocate for moving beyond conventional development models toward more comprehensive approaches that address interconnected social, environmental, and digital challenges through alternative frameworks rooted in justice and equity.

Speakers

– Alexander Costa Barbosa
– Pedro Ivo Ferraz da Silva

Arguments

Alternative development approaches like Buen Vivir and commons-based development align with climate justice discussions


Need to move beyond triple planetary crisis narrative to address poly-crisis including environmental, social, and digital rights crises


Topics

Sustainable development | Development | Human rights principles


Takeaways

Key takeaways

AI sovereignty cannot be achieved in isolation due to dependencies on minerals, infrastructure, and resources from other countries, requiring a shift from competitive sovereignty to collaborative digital solidarity


AI development is reproducing and exacerbating existing inequalities, with only 10% of AI’s economic value projected to accrue to Global South countries (excluding China) by 2030


Environmental impacts of AI are largely undisclosed, with 84% of widely used large language models providing no information about their energy use or emissions, preventing informed policy-making


AI infrastructure deployment often occurs without democratic consultation with affected communities, creating climate injustice where data centers have 24/7 access to resources while local communities face scarcity


Effective AI governance requires addressing multiple interconnected crises (environmental, social, digital rights) rather than focusing on technology in isolation


Popular/grassroots digital sovereignty involves communities providing services the state hasn’t delivered, focusing on meaningful connectivity, digital literacy, and decent work conditions


Multi-stakeholder approaches must genuinely include social movements, indigenous communities, and grassroots organizations in AI policy design


Labor rights protection is essential in AI development to prevent exploitation of workers involved in training large language models and data processing


Resolutions and action items

European Commission collaboration with Africa on generative AI with 5 million euros funding (deadline October 2nd)


Hamburg Declaration on Responsible AI for the SDGs launched with over 50 stakeholders committed, welcoming more organizations to sign


BRICS Civil Popular Forum Digital Sovereignty Working Group document to be released with guidelines for financing digital public infrastructures


Ana Valdivia offered to connect civil society organizations across Latin America working on data center transparency and environmental advocacy


Encouragement for continued advocacy and mobilization of concerned individuals and groups to collectively address AI challenges


Unresolved issues

How to effectively regulate AI environmental impacts without falling into greenwashing approaches


How to ensure meaningful participation of Global South countries in shaping AI systems beyond just accessing them


How to balance rapid AI development and deployment with environmental sustainability requirements


How to address the concentration of AI innovation in big tech companies versus universities and public institutions


How to implement effective transparency requirements for AI energy consumption and emissions disclosure


How to ensure labor rights protection in AI development across different jurisdictions and regulatory frameworks


How to democratically involve communities in decisions about AI infrastructure deployment in their territories


Suggested compromises

Adopting ‘digital solidarity’ framework instead of competitive digital sovereignty to enable cooperation while maintaining autonomy


Developing AI governance that balances innovation encouragement with climate goals and human rights protection


Creating networks of collaboration between states and civil society organizations to share experiences and strategies


Implementing the ‘mutirão’ (collective community-driven effort) approach to AI governance that emphasizes cooperation and shared responsibility


Strengthening three strategic capabilities (skills, data, infrastructure) in developing countries while maintaining global cooperation


Using existing labor legislation frameworks as foundation for AI worker protection rather than creating entirely new systems


Thought provoking comments

Rather than talk about digital sovereignty, which creates frictions between states, because all the states in the world want to become digitally sovereign, we should talk about digital solidarity. And we should talk about how we can create networks of solidarity, where we help one state with other states… all together to develop digital sovereignty and how we can become, as a community, independent from big tech companies.

Speaker

Ana Valdivia


Reason

This comment fundamentally reframes the entire discussion by challenging the competitive nationalism inherent in ‘digital sovereignty’ discourse and proposing a collaborative alternative. It’s intellectually provocative because it suggests that the very framing of sovereignty creates the problems the panelists are trying to solve.


Impact

This concept of ‘digital solidarity’ became a recurring theme throughout the discussion. Pedro Ivo later referenced it directly, and Yu Ping’s closing remarks about collective action echo this sentiment. It shifted the conversation from nation-state competition to collaborative problem-solving.


AI is not only reproducing stereotypes and biases nowadays, it's also reproducing climate injustice, because if we don't regulate how this infrastructure is being implemented in different geographies, it's going to exacerbate the consequences of climate injustice.

Speaker

Ana Valdivia


Reason

This comment introduces a critical new dimension by connecting AI bias research with environmental justice, using concrete examples from Querétaro, Mexico, where communities have water access only one hour per week while data centers have 24/7 access. It demonstrates how AI infrastructure creates new forms of inequality.


Impact

This framing of ‘climate injustice’ influenced subsequent speakers to address environmental impacts more seriously. Pedro Ivo built on this by discussing the need for transparency in AI energy reporting, and Yu Ping referenced the broader developmental spectrum needed to address these inequalities.


We should actually reject the false binary that exists between national sovereignty and global cooperation. I think we need both of them to be rooted in equity, climate responsibility, and I think the mutirão spirit kind of conveys this.

Speaker

Pedro Ivo Ferraz da Silva


Reason

This comment introduces the Brazilian concept of ‘mutirão’ (collective community effort) as a framework for AI governance, challenging the either/or thinking that dominates policy discussions. It’s culturally grounded yet universally applicable.


Impact

This concept provided a philosophical foundation that other speakers built upon. Edmon Chung later expanded this to include personal digital sovereignty, creating a three-level framework (personal, national, global) that enriched the discussion’s complexity.


Our approach to digital sovereignty, or what we call popular digital sovereignty… it deals with the mass aspect of the popular rather than the so-called folkloric aspect. For us, it's mainly what we've been doing for the past five years: doing the things that the state hasn't actually provided to us so far.

Speaker

Alexander Costa Barbosa


Reason

This comment introduces a grassroots perspective that challenges both state-centric and corporate-centric approaches to digital sovereignty. It’s particularly insightful because it comes from lived experience of a housing movement that has extended its organizing principles to digital rights.


Impact

This intervention grounded the theoretical discussion in practical organizing experience. It influenced the Q&A session, with participants asking about labor rights and grassroots advocacy, and Yu Ping’s final remarks about the importance of speaking up and collective action.


84% of widely used large language models provide no disclosure at all of their energy use or emissions. Without better reporting, we cannot assess the actual trade-offs, we cannot design informed policies, and we cannot hold AI and related infrastructures accountable.

Speaker

Pedro Ivo Ferraz da Silva


Reason

This statistic is striking because it reveals the fundamental lack of transparency that makes informed policy-making impossible. It connects the abstract discussion of sustainability to concrete governance challenges.


Impact

This transparency issue became a focal point for practical solutions. Ana Valdivia later referenced similar transparency struggles in Latin America, and it reinforced the need for the regulatory approaches that Alex Moltzau described from the European perspective.


It's not only about infrastructure. It's also about how we can become digitally sovereign and how we can develop this AI with our own hands and with our own infrastructure… We are not able to develop AI technology anymore. We have to depend on big tech companies.

Speaker

Ana Valdivia


Reason

This observation about the shift from distributed to centralized AI development is particularly insightful because it comes from someone who experienced this transition firsthand as a researcher. It highlights how technological sovereignty has been eroded even within academic institutions.


Impact

This comment deepened the discussion about what sovereignty actually means in practice. It influenced Yu Ping’s later comments about ownership of AI products and the concentration of AI talent, and connected to Alexander’s points about movements doing what states haven’t provided.


Overall assessment

These key comments fundamentally shaped the discussion by moving it beyond traditional policy frameworks toward more collaborative and justice-oriented approaches. Ana Valdivia’s concept of ‘digital solidarity’ and Pedro Ivo’s rejection of false binaries created space for Alexander’s grassroots perspective to be heard as equally valid to governmental and institutional approaches. The concrete examples of environmental injustice and lack of transparency grounded abstract concepts in lived realities. Together, these interventions transformed what could have been a conventional policy discussion into a more nuanced exploration of power, justice, and alternative frameworks for technology governance. The discussion evolved from technical and regulatory concerns toward questions of collective action, environmental justice, and community-driven solutions.


Follow-up questions

How can we create networks of digital solidarity between states to help develop digital sovereignty collectively and become independent from big tech companies?

Speaker

Ana Valdivia


Explanation

This addresses the need to move beyond competitive national digital sovereignty approaches toward collaborative frameworks that can challenge big tech monopolies


How can we improve transparency and reporting requirements for AI systems’ energy use and emissions, given that 84% of widely used large language models provide no disclosure?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

Without better reporting, it’s impossible to assess trade-offs, design informed policies, or hold AI infrastructure accountable for environmental impacts


How can developing countries strengthen the three strategic capabilities (skills, data, and infrastructure) needed to shape AI according to local priorities?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

This is essential for Global South countries to not just access AI but actively shape it to reflect local priorities and advance climate justice


How can AI regulation include stronger provisions for labor protection, particularly for workers involved in training large language models?

Speaker

Raoul Danniel Abellar Manuel


Explanation

There’s a need to ensure that AI development doesn’t replicate exploitative labor practices, especially in countries providing data annotation and model training services


How can civil society organizations advocate for environmental sustainability in AI regulation without falling into greenwashing approaches?

Speaker

Lucia (participant from Peru)


Explanation

Many countries are rushing to regulate AI but missing the environmental dimension, and there’s a need for effective advocacy strategies that avoid superficial environmental commitments


How can personal digital sovereignty be integrated with national digital sovereignty and global public interest to create a comprehensive framework?

Speaker

Edmon Chung


Explanation

This addresses the need to move beyond false dichotomies and create frameworks that protect individual rights while enabling national autonomy and global cooperation


How can we address the ownership and control issues around AI products created through Global South labor but owned by big tech companies?

Speaker

Yu Ping Chan


Explanation

This highlights the need to examine who benefits from AI development when the labor comes from one region but the profits and control remain with corporations in another


How can we develop coherent approaches to address the poly-crisis (environmental, social, digital rights crises) rather than treating them as separate issues?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

Moving beyond the ‘triple planetary crisis’ narrative to address interconnected crises including labor rights, poverty, hunger, and digital rights alongside environmental concerns


How can technology be leveraged to support and amplify grassroots and social movements working on digital sovereignty and environmental justice?

Speaker

Pedro Ivo Ferraz da Silva


Explanation

This explores the potential for technology to empower social movements rather than just serve corporate or state interests


How can we ensure democratic participation in decisions about AI infrastructure deployment, particularly regarding environmental impacts on local communities?

Speaker

Ana Valdivia


Explanation

This addresses the problem of governments inviting AI infrastructure without consulting communities who will bear the environmental costs, such as water scarcity


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.