DC-DNSI: Beyond Borders – NIS2’s Impact on Global South
Session at a Glance
Summary
This discussion focused on AI and data governance from the perspective of the global majority, exploring challenges and opportunities in various regions. The panel, organized by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to discuss the impact of AI on human rights, democracy, and economic development in the Global South.
Key themes included the need for regional approaches to AI governance, the importance of inclusive frameworks, and the challenges of implementing AI in healthcare and other sectors. Speakers highlighted the potential of AI to address social issues but also raised concerns about data privacy, labor exploitation, and the widening technological gap between developed and developing nations.
Several presenters discussed specific regional initiatives, such as Brazil’s and Chile’s efforts to establish AI regulatory bodies, and Africa’s continental strategy on AI. The discussion also touched on the environmental and social costs of AI development, including issues of embodied carbon and the exploitation of workers in the Global South.
Innovative approaches were proposed, including reparative algorithmic impact assessments and the development of AI tools that prioritize the needs of the global majority. Speakers emphasized the importance of capacity building, knowledge transfer, and international cooperation in bridging the North-South divide in AI governance.
The discussion concluded by highlighting the complexity of AI governance issues in the Global South and the potential for collaborative solutions. Participants agreed on the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few.
Key points
Major discussion points:
– AI governance frameworks and policies emerging in different regions of the global majority (e.g. Africa, Latin America, Asia)
– Challenges of AI development and deployment in the global south, including issues of data colonialism, labor exploitation, and unequal access
– Environmental and social impacts of AI, particularly on marginalized communities
– Need for inclusive AI development that incorporates diverse perspectives and addresses local needs
– Proposals for more equitable AI governance, such as reparative algorithmic impact assessments
Overall purpose:
The goal of this discussion was to highlight perspectives on AI governance and development from the “global majority” (developing nations and the global south). It aimed to showcase both challenges and potential solutions for more equitable and inclusive AI systems that serve the needs of diverse populations worldwide.
Speakers
– Luca Belli: Professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where he directs the Center for Technology and Society (CTS-FGV) and the CyberBRICS project
– Ahmad Bhinder: Policy and Innovation Director at Digital Cooperation Organisation
– Ansgar Koene: Global AI Ethics and Regulatory Leader at EY
– Melody Musoni: Digital Governance and Digital Economy Policy Officer at the European Centre for Development Policy Management.
– Bianca Kremer: Assistant Professor and Project Leader at the Faculty of Law of IDP University (Brazil)
Full session report
AI Governance from the Global Majority Perspective: Challenges and Opportunities
This comprehensive discussion, organised by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to explore the challenges and opportunities of AI governance from the perspective of the global majority. The panel focused on the impact of AI on human rights, democracy, and economic development in the Global South, highlighting the need for inclusive frameworks and regional approaches to AI governance.
Key Themes and Discussion Points
1. AI Governance Frameworks and Approaches
The discussion emphasised the importance of developing inclusive AI governance frameworks that consider the perspectives of the global majority. Ahmad Bhinder highlighted the need for regional AI strategies and policies, discussing the Digital Cooperation Organization’s (DCO) work on AI readiness assessment and ethical principles. He mentioned the development of a self-assessment tool for AI readiness, to be made available to member states, covering different dimensions of their AI readiness, including governance and capacity building.
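As a rough illustration of how such a dimension-based self-assessment could work, the sketch below scores a country across weighted readiness dimensions and flags gaps. The dimension names, weights, and thresholds are invented for illustration; they do not describe the DCO's actual tool, which had not been published at the time of the session.

```python
# Illustrative sketch of a multi-dimension AI readiness self-assessment.
# Dimension names, weights, and the 50-point threshold are hypothetical.

DIMENSIONS = {
    "governance": 0.30,
    "capacity_building": 0.25,
    "adoption": 0.25,
    "infrastructure": 0.20,
}

def readiness_score(answers: dict[str, float]) -> tuple[float, list[str]]:
    """Combine per-dimension scores (0-100) into a weighted total,
    and flag dimensions falling below an illustrative threshold."""
    total = sum(DIMENSIONS[d] * answers[d] for d in DIMENSIONS)
    gaps = [d for d in DIMENSIONS if answers[d] < 50]
    return total, gaps

score, gaps = readiness_score(
    {"governance": 70, "capacity_building": 40, "adoption": 55, "infrastructure": 45}
)
print(f"Overall readiness: {score:.1f}/100; priority gaps: {gaps}")
```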
Melody Musoni stressed the importance of creating inclusive frameworks for the global majority, mentioning the African Union’s continental strategy on AI and data policy framework. This initiative aims to provide a unified approach to AI governance across the African continent.
Elise Racine proposed the implementation of reparative algorithmic impact assessments to address historical inequities. This novel framework combines theoretical rigour with practical action, offering a potential solution for creating more equitable AI systems.
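To make the idea concrete, a reparative assessment could extend a standard algorithmic impact checklist with backward-looking questions about historical harm and concrete redress commitments. The skeleton below is a hypothetical sketch only; it does not reproduce Racine's framework, and all field names are invented.

```python
# Hypothetical skeleton of a reparative algorithmic impact assessment (RAIA).
# The fields are illustrative assumptions, not Racine's actual framework.
from dataclasses import dataclass, field

@dataclass
class ReparativeAssessment:
    system_name: str
    affected_groups: list[str]
    # Standard impact-assessment question
    documented_biases: list[str] = field(default_factory=list)
    # Reparative extensions: look backwards as well as forwards
    historical_harms: list[str] = field(default_factory=list)   # inequities the system may compound
    redress_measures: list[str] = field(default_factory=list)   # concrete repair commitments

    def is_complete(self) -> bool:
        # A reparative assessment is incomplete if harms are named
        # without corresponding redress measures.
        return not self.historical_harms or bool(self.redress_measures)

aia = ReparativeAssessment(
    system_name="facial-recognition-procurement",  # hypothetical example
    affected_groups=["black and brown communities"],
    historical_harms=["over-policing", "mass incarceration"],
)
print(aia.is_complete())  # False: harms named without redress commitments
```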
Guangyu Qiao Franco addressed the gap between North and South in military AI governance, highlighting the need for an inclusive AI arms control regime. She provided specific statistics on participation in UN deliberations, emphasizing the underrepresentation of Global South countries in these discussions.
2. AI Ethics and Human Rights
Ethical considerations and human rights protections emerged as crucial aspects of AI development and deployment. Bianca Kremer provided a stark example of AI bias in Brazil, stating that “90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown.” This statistic underscores the urgent need to address AI bias and its societal implications, especially in diverse societies.
Kremer also discussed her research on the economic impact of algorithmic racism in digital platforms, highlighting how these biases can perpetuate and exacerbate existing inequalities.
3. AI Impact on Labour and Economy
The discussion explored the significant impacts of AI on labour, the economy, and the environment. Amrita Sengupta examined the impact of AI on medical practitioners’ work, emphasising the need to prioritise AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare.
Avantika Tewari analysed the exploitation of digital labour in AI development, highlighting how platforms like Amazon Mechanical Turk outsource tasks to workers in the global majority, often underpaying and undervaluing their contributions. She also discussed India’s Data Empowerment and Protection Architecture, providing context for data sharing models and digital labour issues in the country.
4. Environmental Concerns in AI Development
Rachel Leach examined the environmental and social costs of AI development, including the issue of embodied carbon in AI technologies. She highlighted that current regulations are furthering AI development without properly addressing environmental harms, emphasising the need to balance AI advancement with environmental sustainability. Leach also discussed the techno-solutionist approach of countries like Brazil and the U.S., which often overlooks the environmental impact of AI technologies.
5. AI in Content Moderation and Misinformation
Hellina Hailu Nigatu addressed challenges in AI-powered content moderation for diverse languages, while Isha Suri focused on developing policy responses to counter false information in the age of AI. Suri emphasized the need for collaborative efforts between governments, tech companies, and civil society to address the challenges posed by AI-generated misinformation.
6. AI in Judicial Systems
The implementation of AI in judicial systems was discussed by Liu Zijing and Ying Lin, who provided insights into China’s AI initiatives in the judicial system. They presented information about specific AI systems like Faxin, Phoenix, and the 206 system, which are being used to assist judges and improve efficiency in Chinese courts. However, they also raised concerns about transparency and fairness in AI-assisted judicial decisions.
7. Regional Perspectives on AI Development
The discussion provided insights into AI development and regulation across various regions, including Russia, Latin America, and Africa. Luca Belli provided a Brazilian perspective on AI and cybersecurity, noting that while Brazil has adopted various sectoral regulations, implementation remains “very patchy and not very sophisticated in some cases.” This observation highlighted the gap between formal regulations and actual implementation, a critical issue in AI governance, especially in developing countries.
8. AI and Disabilities
The discussion also touched on the intersection of AI with disabilities, educational technologies, and medical technologies. This highlighted the potential for AI to improve accessibility and support for individuals with disabilities, while also raising concerns about ensuring inclusive design in AI systems.
Agreements and Consensus
Key areas of agreement included:
1. The need for inclusive AI governance frameworks
2. The importance of addressing biases and discrimination in AI systems
3. Consideration of the hidden costs of AI development, including environmental and labour impacts
4. The development of region-specific AI strategies
This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.
Differences and Unresolved Issues
While there was general agreement on the need for inclusive AI governance, differences emerged in approaches to specific issues:
1. Approaches to AI regulation varied, with some favouring cautious development (e.g., Russia) and others establishing specialised regulatory bodies (e.g., Latin America).
2. The focus of AI governance differed, with some emphasising ethical principles and others prioritising environmental concerns.
3. Addressing biases in AI systems revealed different priorities, such as algorithmic racism in law enforcement versus content moderation challenges for diverse languages.
Unresolved issues included:
1. Balancing AI development with environmental sustainability
2. Addressing the exploitation of digital labour in AI development
3. Resolving disparities in military AI governance between global North and South
4. Determining liability in AI-assisted medical decisions
5. Ensuring fairness and transparency in AI-powered judicial systems
6. Developing effective content moderation systems for diverse languages and contexts
Proposed Solutions and Action Items
The discussion yielded several proposed solutions and action items:
1. Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority
2. Implement reparative algorithmic impact assessments to address historical inequities
3. Create open repositories and taxonomies for AI cases and accidents
4. Develop original AI solutions tailored to regional languages and contexts
5. Increase capacity building and knowledge transfer in AI between global North and South
6. Incorporate environmental justice concerns comprehensively in AI discussions and policies
7. Enhance collaboration between governments, tech companies, and civil society to address AI-generated misinformation
Conclusion
This discussion highlighted the complexity of AI governance issues in the Global South and the potential for collaborative solutions. It emphasised the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few. The variety of regional perspectives contributed to a collaborative, global-minded approach to addressing the challenges and opportunities presented by AI in the context of the global majority.
Session Transcript
Luca Belli: Morning, good afternoon actually to everyone. I think we can get started. So we have a very intense and long list of panelists today. These are only a part of the panelists: we also have online panelists joining us, due to the fact that we have a lot of co-authors for this book that we are launching today. This session on AI and data governance from the global majority is organized by a multi-stakeholder group of the IGF called the Data and AI Governance (DAIG) coalition, together with the Data and Trust coalition, which is another multi-stakeholder group; we have merged our efforts. This report is the annual report of the Data and AI Governance coalition, which I have the pleasure to chair. Actually, pardon my lack of politeness, I forgot to introduce myself. My name is Luca Belli. I’m a professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where I direct the Center for Technology and Society (CTS-FGV) and the CyberBRICS project. I’m going to briefly introduce the topic of today and what we are doing here, and then I will ask each panelist to introduce him or herself, because with such an enormous list of panelists I might spend five minutes only reading their resumes. So in the interest of time management it is better if everyone introduces themselves: I will of course call everyone, but then they introduce themselves by themselves. So, are you hearing well? All right. The reason for the creation of this group that is leading this effort on data and AI governance is to try to bring into data and AI governance debates the ideas, problems, challenges, but also solutions of the Global South, the global majority. And this is why this year’s report is precisely dedicated to AI from the global majority, and as you may see we have a pretty diverse panel here, even more diverse if we also consider the online speakers. Our goal is precisely to assess evidence, gather evidence, and engage stakeholders to understand to what extent AI and data-intensive technologies can have an impact on individuals’ lives, on the full enjoyment of human rights, on the protection of democracy and the rule of law, but also on very essential things like the fight against inequalities, the fight against discrimination and biases, the fight against disinformation, and the need to protect cybersecurity and safety; all these things are explored to some extent in this book. We also launched another book last year on AI sovereignty, transparency and accountability. I see that some at least of the authors of last year’s book are also here in the room, and all the publications are freely available on the IGF website. Let me also state that the books we launch here are preliminary versions: although they have a very nice design and are printed, they are preliminary versions that are then officially published with an editor, which takes more time. The AI sovereignty book is going to be released in two months with Springer. This one will be consolidated, so if you have any comments, we are here also to receive your feedback and comments so that we can improve the work in a cooperative way.
I had the pleasure to author a chapter on AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI, and this is actually a very interesting case study, because it is an example of a country that, even though it has climbed cybersecurity rankings like the ITU Global Cybersecurity Index, being now the third most cybersecure country in the Americas according to the index, is at the same time among the top three most cyber-attacked countries in the world. This is very interesting because it means that even if Brazil has formally climbed the cybersecurity index by adopting a lot of sectoral cybersecurity regulation, in data protection, in the telecoms sector, in the banking sector, in the energy sector and so on, the implementation is very patchy and not very sophisticated in some cases. So one of the main takeaways of the study, and I will not enter into details because I hope you will read it together with the others, is precisely to adopt a multi-stakeholder approach, not to pay lip service to the idea of all stakeholders joining hands and finding solutions, but because it is necessary to understand to what extent AI can be used for offensive and defensive purposes, and to what extent geeks can cooperate with policymakers to identify the best possible tools, but also what kind of standardization can be implemented to specify the very vague elements that we typically find in laws, like what is a reasonable or adequate security measure. Reasonable and adequate are the favorite words of lawyers. I say this as a lawyer, because they can mean pretty much everything, and you can charge hefty fees to your clients to discuss what is reasonable and adequate. If you don’t have a regulator or a standard that tells you what a reasonable or adequate security measure is, it’s pretty much impossible to implement. Now, I’m not going to enter too much into this, I hope you will check it, and I would like to give the floor to our first speaker, hoping that everyone will respect the five minutes each, save those who are splitting their presentation, who will have three minutes per person. So we will start with Ahmad Bhinder, Policy and Innovation Director at the DCO.
Ahmad Bhinder: Thank you very much, Dr. Luca, and I’m really feeling overwhelmed to be surrounded by such knowledgeable people. So my name is Ahmad Bhinder. I represent the Digital Cooperation Organization, an intergovernmental organization headquartered in Riyadh. We have 16 member states, mainly from the Middle East, from Africa, a couple of European countries, and from South Asia, and we are in active discussions with new members from Latin America, from Asia, etc. So we are a global organization, and although we are a global organization, the countries that we represent come from the global majority. We are focusing horizontally on the digital economy, and all the relevant digital economy topics, including AI governance and data governance, are very relevant to us. So I will very quickly introduce some of our work, at a preliminary level, and then how we are actioning some of that work. We have developed two agendas, as I say: one is the data agenda and, since data governance is the bedrock of AI governance, we have something on the AI agenda as well. Very quickly: we are developing a tool for the assessment of AI readiness for our member states. This is a self-assessment tool, and we will make it available to the member states in a month’s time. It covers different dimensions of their AI readiness, including governance but going beyond governance to a lot of other dimensions, for example capacity building and the adoption of AI, and the assessment is going to help the member states assess themselves and recommend what needs to be done for the adoption of AI across their societies. Another tool that we are working on is quite an interesting one, and I am actually working actively on it. I think what we have covered so far in the AI domain is to come up with ethical principles: there is a kind of harmonization across a lot of multilateral organizations on what the ethical principles should be, for example explainability, accountability, etc. We have taken those principles as a basis, and we have done an assessment for the DCO member states on how AI intersects, under those principles, with basic human rights. We have created a framework, which I presented in a couple of sessions earlier, so I will not go into the details, but we are looking, for example, at data privacy, which is an ethical AI principle: looking at data privacy and seeing what risks arise from AI systems, and then mapping those risks against the basic human right of privacy, or whichever basic human right is at stake. Once we take that through this framework, we will make it available as a tool to AI system deployers and developers in the DCO member states and beyond, to answer a whole lot of detailed questions and assess their systems under those ethical principles, with recommendations on how the AI systems can be improved. So basically, we are trying to put the principles that have been adopted into practice. Very, very quickly, I think I have a minute left.
So we are trying to focus on data privacy, and we are drafting DCO data privacy principles that take a lot of inspiration from the principles that are out there, while taking into consideration the realities changed by AI. And we are developing an interoperability mechanism for trusted cross-border data flows across the DCO member states, as well as some foundations for what could go into that interoperability mechanism, for example some model contractual clauses, et cetera. So that, in a meaningful multilateral way, would facilitate trusted cross-border data flows and, of course, serve as a foundation for AI governance. I could say a lot more, but I think my time is over. So thank you very much.
Luca Belli: Fantastic, thank you very much, Ahmad. And now, as you were speaking about ethics and AI: Ansgar Koene, you have been leading EY’s work on AI ethics globally, so I would like to give you the floor to provide us with a few punchy remarks on the challenges and the possibilities of dealing with this.
Ansgar Koene: Certainly, thank you very much, Luca. It’s a pleasure and an honor to be able to join the panel today. So yes, my name is Ansgar Koene and I’m the Global AI Ethics and Regulatory Leader at EY. As a globally operating firm, we try to help organizations, be they public or private sector, in most countries around the world with setting up their governance frameworks around the use of AI. And one of the big challenges for organizations is to clearly identify what particular impacts these systems are going to have on people, both those who are directly using the system and those who are going to be indirectly impacted by it. One example, for instance, that is probably of particular concern for the global majority is the question of how these systems are going to impact young people, the global majority of course being a space where there are a lot of young people. If you look at a lot of organizations, they do not fully understand how young people are interacting with their systems, be they systems provided through online platforms or systems integrated into other kinds of tools. They do not know who, and from what ages, is engaging with these platforms, or what particular concerns they need to take into account. A different dimension of concern is how to make sure that, as we operate in the AI space, often with systems that are produced by a technology-leading company but deployed by a different organization, the obligations, be they regulatory or otherwise, fall onto the party that has the actual means to address these considerations. Often, the deploying party does not fully know what kind of data went into creating the system, does not fully know the extent to which the system has been tested or whether it is going to be biased against one group or another, and does not have the means to find out: it must rely on a supplier. Do we have the right kind of review processes as part of procurement, to make sure that, as these systems are taken on board, they do benefit the users?
Luca Belli: That was excellent, and also fast, which is even more excellent. So we can now pass directly to Melody Musoni, who is Policy Officer at ECDPM and former Data Protection Advisor of the Southern African Development Community (SADC) Secretariat. Melody, the floor is yours.
Melody Musoni: Thank you, Luca. When I was preparing for this session, I was looking at my previous interventions at the IGF last year, and it seems like a lot has happened from last year till now in terms of what Africa has been doing. So I’ll try to speak about the developments in AI governance in Africa while trying to answer one of the policy questions we have: how can AI governance frameworks ensure equitable access to, and promote development of, AI technologies for the global majority? This year has been an important and very busy year for policymakers in Africa. At the beginning of the year, we saw the African Union Development Agency developing a white paper on AI, which gave a lay of the land of the expectations at a continental level and the priorities the continent has as far as the development of AI is concerned. Later, in June this year, we saw the African Union adopting a continental strategy on AI, something that came in response to conversations we have at platforms like this: that at least if we can have a continental strategy, it can direct and guide us on the future of AI development in Africa. Apart from those two frameworks, we also have a data policy framework, in place since 2022, which is there to support member states on how to utilize or unlock the value of data. It is not only looking at personal data; it also looks at non-personal data, and issues of data sharing are quite central in the policy framework, as are issues of cross-border data flows. And we are moving towards the finalization of the African Continental Free Trade Agreement and a protocol specifically on digital trade, which also emphasizes the need for AI development in Africa and the need for data sharing. Among the important issues the continent is prioritizing, the first one I’ll touch on is human capital development. There is a lot of discussion around how best we can skill the people of Africa, so that we have more and more people with AI skills, more and more people working in the STEM field, for example, and a lot of initiatives are actually going towards building our own human capital. And with people who are already late in their careers, there is also the question of how best to re-skill them, and I think that is where we mostly need the support of the private sector: to support people who are advanced in their careers in re-skilling and acquiring new skills that are relevant to the age of AI. Another important pillar for Africa is infrastructure. We have been talking about global digital divides and the need to have access to digital infrastructure, and that is still a big challenge for Africa. So it is not just about AI; it comes back to the foundational steps we need: access to the internet, access to basic infrastructure, and building on that. And then, of course, with AI there are discussions around computing power and how we can have more and more data centers in Africa to support AI innovation. I’m not going to talk about the enabling environment, because that is more about regulatory issues, and I’m sure we have been discussing how best to regulate.
But just to emphasize again: apart from regulating AI and personal data, there are discussions around how we can best have laws, be it intellectual property laws, taxation laws, and different incentives, to attract more and more innovation on the continent. And then, I guess the most important thing for the continent is the building of the AI economy: how do we go about it in a way that is going to bring actual value to African actors and African citizens? There, again, there are promises, but it is still not clear how we will go about it. For example, I see I’m running out of time. Can I just go to? Yes, so another important issue is the importance of strategic partnerships. We cannot do this by ourselves; we are aware of that. And there is a need, again, to see how best we can collaborate with international partners to help us develop our own AI ecosystem.
Luca Belli: Fantastic, and exactly, these are points that apply across the full spectrum of Global South countries. But it is very, very important to raise them. Let’s now move to another part of the world, which is close to you: Professor Bianca Kremer. She is a member of the board of CGI.br, the Brazilian Internet Steering Committee, and I also have the pleasure of having her as a colleague at FGV Law School Rio. Please, Bianca, the floor is yours.
Bianca Kremer: Thank you, Luca. I will take off my headphones because they are not working very well and I don’t want to disturb the conference. So, thank you so much for inviting me; it’s a pleasure to be here. This is my first IGF, although I have been working with AI and tech for the last 10 years. I have been a professor, an activist in Brazil, and also a researcher on the topics of AI and algorithmic racism and its impact in our country, understanding also other perspectives to improve, develop, and use the technology from our own perspectives and on our own terms. And this is something we have to consider when we talk about the impacts of AI and other new technologies, because we don’t have only AI: AI is the hype for now, but we have other sorts of technology that impact us socially and also economically. So, I have been concerned with this specific topic of algorithmic bias for the last 10 years. From 2022 to 2023, I was thinking about how to raise awareness of the problem in our country, developing research and understanding the impacts of this topic for our society. But this year I have been changing my perspective a little, because I had been focused on raising awareness for the last year, and I thought it was important to take the research a step further. So I have been developing research that is partially funded: one part, on data and AI and the impact of our Brazilian data protection law on economic platforms, I have been developing at FGV with Professor Luca. But personally, I have been working on the topic of the economic impact of algorithmic racism in digital platforms. This is something very complex to do: we have to build indicators to understand the economic impact, so that we can observe the specificities of these impacts and maybe promote some changes in our environment, in our legislation, and also in our public policies. So this is something I have been up to, and just to explain a little why this is a concern for us: until last year, I was working specifically on one type of technology, facial recognition, for example. Just to clarify how algorithmic racism works in Brazil: we have been seeing a huge number of acquisitions of facial recognition technologies in the public sector, specifically for public security purposes. And through research we have found that 90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown; brown people in Brazil are called pardos. So we have more than 90% of this population being harmed by bias in the use of the technology. And this is not trivial, because Brazil today is the third country that incarcerates the most in the world; we are in third place, behind only China and the United States, for example. So this is an important topic for us. And what are the economic impacts of these technologies? What do we lose when we incarcerate this number of people? What are the losses, the economic losses, for the person, for the ethnic group that is arrested, and also for society? What are the legacies that we feel now, with the use of these pervasive technologies, that come down from the colonial heritage?
So, this is something that I have been working on: trying not only to raise awareness, but also to understand the actual economic impacts, with the use of economic metrics, for example. It’s ongoing, but it’s something that we have to understand a little better. So, thank you so much, Luca, for the space, for the opportunity. I’m looking forward to hearing a little more from my colleagues on their topics. Thank you.
Luca Belli: Fantastic. Thank you very much also for being on time. And indeed, as the human rights arguments are something that we have been repeating for some years, the economic ones might probably be more persuasive, maybe, with policymakers. Now, let’s go directly to the next panelist. We have Liu Zijing from Guanghua Law School of Zhejiang University.
Liu Zijing: Hello, everyone. I’m Liu Zijing from Zhejiang University in China, and this is my co-writer, Ying Lin; we also have a co-writer who is in China now. We would love to share the Chinese experience of artificial intelligence utilization. Our report is about building smart courts through large language models: the experience from China. China has had a smart court reform since 2016, but even before that, in the 1980s, China’s leaders were considering how to utilize computers to modernize court management and legal work. In 2016, the Chinese government officially launched a program called the Smart Court Reform to digitalize court management, and this year it has entered its third phase, which is the AI phase: China’s courts have launched their own large language models, which is very impressive. So we would like to share some experience from China. Between 2016 and 2022, the Supreme People’s Court of China launched a system named the Faxin system, driven by large legal language models, which helps judges with legal research as well as legal reasoning. At the local court level, for example in Zhejiang province, the Zhejiang High Court launched its own language model, named Phoenix, and an AI copilot named Xiaozhi, which is being used in courts especially for pre-litigation mediation, itself a feature of Zhejiang province. And in Shanghai, the Shanghai High Court launched a system named the 206 system, especially for criminal cases. So you can see there are many features in China’s utilization of large language models, especially in the judicial sector. We have also identified several features behind China’s success. The first is a very strong and sustained top-down policy. The second is that there is weaker resistance within the judicial sector. And one of the most important features is the close cooperation between the private sector and the public sector to develop the large language models themselves. We have witnessed that this year lots of judges around the world also use AI chatbots such as ChatGPT, but the Chinese courts developed their own large language models, which is quite unique. And I will share my time with my co-writer.
Ying Lin: Hello, everyone. I’m Ying Lin from the Free University of Brussels. I would like to continue from my colleague on the challenges and provide some initial suggestions. There are mainly three concerns for us. The first is about development. As we know, advanced AI requires substantial financial resources, and only a few developed regions, like Shanghai, as we mentioned before, can afford it. So it calls for special funds for less developed regions, to foster equitable access to AI-powered judicial resources. There are also issues with the public-private partnership. The biggest problem is public input but private output: what if those private companies use the data and the resulting products for their own benefit? What if those private companies dominate the relationship and exert great influence over judicial decisions? Robust oversight mechanisms are needed to prevent undue influence and ensure transparency. The second is the fairness problem. On the one hand, AI assistants raise concerns about transparency and due process: can the judge really know how the algorithm works, and is the decision really made by the AI or by the human being? Delegating decision-making authority to AI assistants blurs the lines of responsibility, potentially weakening judicial accountability. And due to this automated process, there is also the question of whether all the parties in a case can represent themselves fully. This emphasizes the importance of transparency and explainability. On the other hand, there are substantive fairness issues: AI systems are biased and sometimes they make things up. We need a human in the loop. So integrating clear frameworks and guidelines into AI systems would be helpful, and an ongoing dialogue between legal experts and AI developers will also help. And the last one is the data issue. Making judicial decisions involves massive processing of sensitive personal data. We need strict data security protocols and many technical safeguards, as well as clarity on how government data assets are recognized and used by private partners and governments. And when smart courts are developed at a national level, there are also national security risks. So robust cybersecurity measures to prevent unauthorized data breaches are essential, to ensure the integrity and security of the smart court system in China. Thank you, that’s all.
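A minimal sketch of the “human in the loop” safeguard described above might look like the following, where the AI output is advisory only, the judge’s ruling always prevails, and every recommendation is logged for later audit. All names, fields, and values here are invented for illustration; this does not depict any actual court system.

```python
# Minimal human-in-the-loop sketch for AI-assisted judicial decision support.
# All identifiers and values are hypothetical illustrations.
import json
import time

AUDIT_LOG = "ai_recommendations.log"

def record(entry: dict) -> None:
    """Append an auditable trace of every AI recommendation and the judge's ruling."""
    with open(AUDIT_LOG, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

def decide(case_id: str, ai_recommendation: str, ai_confidence: float,
           judge_ruling: str) -> str:
    # The AI output is advisory: the judge's ruling always prevails,
    # and divergences are flagged for transparency review.
    entry = {
        "ts": time.time(),
        "case": case_id,
        "ai_recommendation": ai_recommendation,
        "ai_confidence": ai_confidence,
        "judge_ruling": judge_ruling,
        "diverged": ai_recommendation != judge_ruling,
    }
    record(entry)
    return judge_ruling  # final authority remains human

ruling = decide("case-2024-001", ai_recommendation="mediate",
                ai_confidence=0.72, judge_ruling="proceed to trial")
```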
Luca Belli: Thank you very much also for being perfectly on time, and for raising at least two very important issues. First, the fact that even if we build AI, it has to run on something: it’s not only the model, it’s also the compute that is relevant. And second, the fact that it needs to be transparent, because probabilistic systems like LLMs are frequently very opaque, and it is not really acceptable from a due process and rule of law perspective to say “we don’t know how it works”: it needs to be explainable. All right, fantastic. Let’s get to the last couple of speakers in person: Rodrigo Rosa Gameiro and Catherine Bielick from MIT. Please, the floor is yours.
Rodrigo Rosa Gameiro: Hello guys, can you hear me? Okay, my name is Rodrigo. I’m a physician; I’m also a lawyer by training. I grew up in Brazil, but I currently live in the US. I work at MIT with Dr. Bielick here, where we do research in AI development, alignment and fairness. So one question that I had in mind while I was thinking about this panel is: how do we make sense of where we stand with AI globally today? I often find myself turning to literature for perspective, and there is one line from Dickens’ A Tale of Two Cities that feels especially fitting: “It was the best of times, it was the worst of times.” Because for some, this is indeed the best of times. AI can work and does work in many cases. In healthcare, AI has enabled us to make diagnoses that were simply not possible before. AI is enabling us to accelerate drug development and transform our understanding of medicine in ways we never imagined. The problem is, this is also the worst of times. The benefits of AI remain largely confined to a handful of nations with robust infrastructure. Meanwhile, the global majority is pushed to the sidelines. And even within countries that lead AI development, these technologies often serve only the privileged few. We have documented, for instance, AI systems recommending different levels of care based on race, and vast regions of the world where these technologies don’t reach communities at all. The digital divide isn’t just about access; it’s about who gets to shape these technologies, who benefits from them, and who bears their risks. So, how do we ensure that AI upholds human rights for everyone? How do we build AI that truly serves every population, AI that follows the principles of non-maleficence, beneficence, autonomy, and justice? I would argue that the answer actually lies in the title of this panel and of this book, because there can be no AI for the global majority if it is not from the global majority. And this brings me to our chapter in the book, which is “From AI Bias to AI by Us”. At our lab at MIT, led by Dr. Leo Celi, who unfortunately could not be here today, we have made efforts to move beyond just talking about these issues: we have created concrete ways to measure progress and drive change. And what we have learned is powerful: when you give everyone a seat at the table, innovation flourishes. Let me share a little story that illustrates this. Through our work, we connected with researchers in Uganda. We didn’t come as saviors or teachers; we came as collaborators. As a result of our collaboration, the team there has built their own dataset and developed their own algorithms to solve their own local challenges, and this also secured international funding. In fact, they taught us much more than we taught them. And this isn’t an isolated story. Through PhysioNet, which is our platform for sharing healthcare data, we have enabled collaboration across more than 20 countries. We have hosted datathons that bring together multidisciplinary local talent and leadership worldwide to collaborate on solving local problems. The results: more than 2,000 publications with 9,000 citations, but most importantly, AI solutions that actually work for the communities they serve. But here is what we have learned above all else: our approach isn’t the only answer. Effective AI governance needs more than individual initiatives; it requires all stakeholders working together towards shared goals. And my colleague, Dr. Bielick, will explain this further. Thank you.
Catherine Bielick: Thank you, Dr. Gameiro. So my name is Dr. Catherine Bielick. I’m an infectious disease physician, an instructor at Harvard Medical School, and a scientist at MIT studying AI for outcome improvement for people with HIV and for bias reduction. So I work here at MIT Critical Data. We are publishing here as a case study, but we are just one group, in one country, from one perspective, in one professional field, healthcare artificial intelligence, and this discussion is about so much more. One way I would like to think about international governance of AI from a global majority is to think about it from historical precedent and context, because we don’t want to reinvent the wheel: we don’t think everyone around the world should be doing what MIT Critical Data is doing; individual countries have individual needs. And I think there is already a precedent that we would contend is a good framework we can emulate going forward for AI from the global majority. I’m talking about the Paris Agreement, the climate accords, where nearly 200 countries came together to agree on one common goal with individual needs per country, based on their own unique populations. There are five core features I want to take away from the Paris Agreement in a way that we can carry over to AI from the global majority. The main thing is that this is a global response to a crisis of what I will call inequitable access to responsible AI, and I think all those words carry a lot of different meaning and weight. The first of the five core features is a collective international response with differentiated responsibilities, where the wealthier nations carry more of the burden for open leadership and knowledge sharing. The second is maybe the most important, which is localized flexibility: there are nationally determined contributions in the Paris Agreement that carry over to AI from the global majority, where each country defines its AI priorities for its own people, and we come together, put them together, and agree on a global standard. Implementation domains differ in so many areas: healthcare, agriculture, disaster response, education, law enforcement, job displacement, you can go on, economic sustainability and environmental energy needs. There is just no one size fits all. And what comes with that is a core feature of transparency and accountability, which is accounted for in the Paris Agreement and can also carry over to us today: there are regular reviews from every country, and there are domain-specific non-negotiables, like reducing carbon emissions by a quantifiable amount per country. In this case, there could be a federated auditing system, similar to federated learning in the way it protects privacy. The last two features: financial support channeling, where developing nations must have resources channeled over, so that people can not only use those resources and technology sharing to develop and implement their focused AI tools, but also build the infrastructure to evaluate those outcomes, which is just as important, if not more important. And lastly, the “global stocktake”, a term that was used a lot in the Paris Agreement.
The key here is that there are specific outcomes determined by specific groups and specific countries, and then we can aggregate those into a single tracking of progress. And with this unified vision for the future, it takes us out of the picture, because I don’t think we can or should be prescribing what the global majority wants or needs from Harvard or MIT or wherever. Every stakeholder needs to have an equal voice in this, and that is the pathway to an international governance with those core features. And why can’t a meeting like this be where we talk about the equivalent of an international agreement, where we can all have the same equal voice in participating towards the same common goal? We are all here. There is no shortage of beneficence from all of you, of non-maleficence, equity, justice: these are medical ethical pillars. And there is no shortage of resources when we come together for a unified partnership.
Luca Belli: Thanks. Fantastic. So we already have a lot of things to think about, and I would like to ask the people in the room to start thinking about their comments or questions, because the reason we are doing this is to then have a debate with you. So let’s now pass to the online panelists, who are also numerous, and I really hope they will strictly respect their three minutes each. We should already have a lot of them online, so this is the moment where our remote moderation friends should be supportive. The first one should be Professor Sizwe Snail Ka Mtuze.
Sizwe Snail Ka Mtuze: Thank you very much, Dr. Belli. Thank you very much, delegates and everyone in the room. Indeed, IGF time is always a good time, and it is always a good time to collaborate. I have had the pleasure of working with two lovely ladies this year, Ms. Morihe and Ms. Nzemande: Ms. Morihe is one of the attorneys at the firm and Ms. Nzemande is a paralegal. We looked at the evolving landscape of artificial intelligence policy in South Africa on the one hand, as well as the possibility of drafting artificial intelligence legislation. I’m mindful of the three minutes that have been allocated to us, so I want to fast forward and say that in South Africa the topic of artificial intelligence has been discussed over the last two to three years on various levels. On one level, there was a presidential commission, in terms of which the president of South Africa had made certain recommendations, via a panel he had constituted, on how the fourth industrial revolution should be perceived and what interventions should be made with regards to aspects such as artificial intelligence. Then it was a bit quiet: Covid came and went, and data protection was the big, big, big issue. However, artificial intelligence is back; it’s the elephant in the room, and South Africa has been trying to keep up with what is happening internationally. On the one hand, South Africa drafted what it called the South African draft AI strategy, published earlier this year, and the strategy received both very warm comments and very cold comments. Some of the authors and some of the jurists in South Africa were very happy with it, saying it’s a good way forward, while other jurists were of the view: but this is just a document, it’s 53 pages, why are we having this? South Africa then responded in early August, after all the critique and everything that was said, with a national artificial intelligence policy framework. This document has been reworked, it looks much better, it has objectives, and it has been trimmed down from the 53-page document. Having a look at what is happening in Africa as well, I think it is in line with some of the achievements that people want to see in Africa with regards to artificial intelligence and its regulation. And it looks like I’m running out of time, so that is my contribution to this session.
Luca Belli: All right, thank you very much for having respected the time, and again, we are mindful that every short presentation provides only a teaser of the broader picture, but we encourage you to read the deeper analysis in the book. The next speaker is actually the speaker from our partner organization, the Coalition on Data and Trust.
Stefanie Efstathiou: Fantastic. I’m happy to be here. As mentioned, I’m an IP and dispute resolution lawyer based in Germany, in-house counsel, and a PhD candidate researching AI. However, I’m here today in my capacity as a member of the EURid Youth Committee. So, ladies and gentlemen, esteemed colleagues, I would like to draw attention today to the transformative and urgent discourse on regional approaches to AI governance, as highlighted in the recent report AI from the Global Majority. This report underscores that while artificial intelligence promises to reshape our societies, it must do so inclusively and equitably. From Latin America to Africa and Asia, regional efforts, as we see in the report, demonstrate resilience and innovation. Latin American nations are forging frameworks inspired by global standards yet rooted in local realities, emphasizing regulatory collaboration. And in Africa, the RISE governance framework exemplifies a vision for integrated data governance, emphasizing cooperation, accountability and enforcement. These efforts reflect not only the unique socio-political contexts but also the shared aspiration to ensure AI serves as a tool for empowerment and not exploitation. A key dimension often overlooked is the role of youth in shaping AI’s trajectory. The younger generation, across but not limited to the global majority, should not only adapt to regional frameworks but actively participate and lead the change. Youth should be more in focus and participate as a stakeholder, since it has a unique inherent advantage: they are the ones who will have to adapt more than any other generation to the change, and who will effectively live in a different world than the generations before. This involvement can take various forms; starting from data-protection-driven policies ensuring student data privacy in Africa to youth-led innovation hubs in Latin America is a good way to go. Nonetheless, it is our duty to amplify these voices and incorporate their ideas into policymaking processes, just as it is the duty of the youth to actively participate and immerse itself in the sphere of responsible AI innovation and policymaking. The energy and creativity of the younger generation signal a brighter future for AI governance. However, challenges persist, and we have seen this: digital colonialism, data inequities, and systemic biases threaten to widen the divides. As the report highlights, it is imperative to address these disparities by adopting inclusive frameworks, fostering regional cooperation, and prioritizing capacity-building initiatives tailored to each region’s needs, with a minimum common global understanding, similar to what Dr. Bielick described earlier. As we move forward, let us reaffirm, and I want to close with this, our commitment to an AI future that embodies fairness, sustainability and human-centered innovation, grounded in regional diversity but without causing fragmentation, and inspired by the vision and the drive of youth. Thank you very much.
Luca Belli: Thank you very much, Stefanie. And actually, that is a very good introduction to this first slot of online presentations dedicated to regional approaches to AI: what kinds of approaches are emerging at the regional level in various regions of the world. Our next speaker, Yonah Welker, who is at MIT, a former tech envoy, and now leading multiple EU-sponsored projects, has worked quite a lot on this, and he also has a short presentation for us, if our technical support can confirm that he can share it.
Yonah Welker: Yes, yes, it’s my pleasure to be here. I go back to Riyadh, where I serve as an envoy and advisor to the Ministry of AI. I would love to be mindful of the time and address the issue of disabilities and educational and medical technologies, which is an extremely complex area. It is almost one year since 28 countries signed the Bletchley Declaration, and unfortunately this area is still underrepresented. The complexity lies not only in the landscape, as currently there are over 120 companies working on assistive technology, but also in the involved models: we have biases related to supervised, unsupervised, and reinforcement learning, and issues of recognition, cues, and exclusion. So I would love to quickly share the outcomes: what we can do to actually fix this. First of all, I believe we should work on original solutions. We can’t just re-generalize ChatGPT, because for most regional languages we have 1,000 times less data, and we need to build our original solutions: not only LLMs but also SLMs, with maybe fewer parameters but with more specific objectives and efficiency. Second, we should work together to create open repositories and taxonomies of cases: not only use cases but also what we call accidents, which is what we work on with the OECD. Third, dedicated safety models: this includes additional agents which help to improve accuracy, fairness and privacy, and also dedicated safety environments and oversight, with specific simulation environments for complex and high-risk models. We are also actively working on more specific intersectional frameworks and guidelines with UNESCO and UNICEF: for instance, digital solutions for girls with disabilities in emerging regions, WHO’s work on AI in health, or disability in the OECD AI accidents repositories. And finally, we should understand that all the biases we have in technology today are actually a reflection of historical and social issues. For instance, even beyond AI, only 10% of the world population has access to assistive technologies, and 50% of children with disabilities in emerging countries are still not enrolled in schools. So we can’t fix this through one policy, but through a combination of AI, digital, social, and accessibility frameworks. Thank you so much.
Luca Belli: Thank you very much, Yonah, for respecting the time. Now let’s move on: we have our friend Ekaterina Martynova from the Higher School of Economics. She was a researcher with us in Rio last year. Very nice to see you again, Katya, even if only online. Please, the floor is yours.
Ekaterina Martynova: Yes, thank you so much, Professor Luca. I will be very brief, just to give an overview of the current stage of AI development here in Russia. The first thing I should note is the increase in spending from the budget, an actually unprecedented level of spending: the development of AI is one of the key priorities of the state. The approach in terms of regulation, though, is still quite cautious; it seems that the priority is to develop the technology, not to somehow hinder its development. So we still don’t have a comprehensive legal act, such as a federal law on AI; we have national strategies as pieces of subordinate legislation, and also some self-regulation driven by the market. In terms of practical application, AI is being used quite intensively in public services, and we have some sandboxes, especially here in Moscow, first of all in the public healthcare system and, of course, in the field of public security and investigations. So here I come to the main concerns with using AI in these fields. The first one, the obvious one, is the human rights concern, which has already been raised and which is very acute for Russia; it was also a question considered by the European Court of Human Rights in terms of the procedural safeguards provided to people detained through the use of facial recognition systems. We still very much need to develop our legislation here to provide more safeguards, and here we look very closely at the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law. Though Russia is not currently a member of the Council of Europe, we consider that its provisions on standards of transparency, accountability, and remedies can be useful for our national development, and maybe for the development of some common basis within the BRICS countries or with our partners in the Shanghai Cooperation Organisation. The second problem is the data security problem, and here we have a special center created under the auspices of the Ministry of Digital Development to be the central hub of the data sanitization process and of data minimization, especially of biometric personal data, which is used in the digitalization of public healthcare services. And finally, as Luca mentioned in the opening speech, there is the problem of AI and cybersecurity, which is in particular the topic I research: the problem of AI-powered cyberattacks, by which Russia has been targeted in these years. We are considering which legal mechanisms can be developed to hinder such malicious uses of AI in cyberspace by state and non-state actors, and here, of course, we need joint efforts at the international level to develop a framework for the responsible use of AI by states, with rules of responsibility and rules of attribution of these types of attacks to the states which may be sponsoring such operations. So I will stop here, and thank you very much for your attention.
Luca Belli: Thank you very much, Katya, for this very good overview. Now let's conclude this first segment of online contributions with Dr. Rocco Saverino from the Free University of Brussels.
Rocco Saverino: Thank you, Dr. Luca Belli; I am not yet a doctor myself, as I am a PhD candidate, but thank you. Yes, of course, I am one of the authors of the paper we submitted with my colleagues here at the Free University of Brussels, and, to respect the time, I am going to wrap up its key points. We look at global trends and at how Latin American countries are incorporating AI rules into their data protection frameworks under the influence of these trends, particularly the new digital regulations, which have also led to the emergence of AI regulations in Latin America. We analyzed in particular the cases of Brazil and Chile, which are establishing specialized AI regulatory bodies, reflecting the region's awareness of the complex issues raised by AI technologies. We looked at Brazil's approach in Bill 2338 of 2023, though here we should make a disclaimer: as many of you know, on 28 November another proposal was presented, which we could not cover because our paper had already been submitted, so we analyzed the previous text, in which the role of the data protection authority was very important. We also looked at Chile's approach, because Chile is advancing its AI governance model, proposing an AI technical advisory council and a data protection agency to enforce the AI law. Of course, when we talk about AI regulation we also talk about data governance, which is a key factor in shaping AI oversight, with a focus on transparency, accountability, and the protection of fundamental rights. This brings both challenges and opportunities. Latin American countries face challenges such as the need for coordination among regulatory bodies, developing specialized expertise, and allocating sufficient resources; but there are also opportunities, because the region can shape AI governance proactively, adopting a risk-based approach and integrating AI governance into existing data protection frameworks. We believe that Latin American countries can contribute to the global AI governance discourse by developing regulatory models that reflect their unique social, economic, and cultural contexts.
Luca Belli: Excellent, fantastic. We have now concluded our regional perspectives and can enter into the social and economic perspectives. The first presenter is Rachel Leach, who co-authored one of the papers on AI's environmental, economic, and social impacts. So, Dr. Leach, please, the floor is yours.
Rachel Leach: Thank you. Our project is an exploratory analysis of AI regulatory frameworks in Brazil and the United States, focusing on how the environment, and particularly issues of environmental justice, are considered in regulations in these countries. Broadly, we found that regulations in both countries are furthering the development of AI without properly interrogating the role that AI itself and other big data systems play in causing harm to the environment, particularly in exacerbating environmental disparities within and across countries. For example, in July 2024 the Brazilian federal government launched the Brazil Plan for Artificial Intelligence, investing BRL 4 billion in the hope of leading AI regulation in the global majority. The plan centered the benefits of AI, with the slogan "AI for Good for Everyone," and invested in the use of AI to mitigate extreme weather, including a supercomputer system to predict such events. Additionally, in the U.S., President Biden's Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence operates under the assumption that AI is a tool with the potential to enable the provision of clean electric power, again without examining the environmental issues raised by the technology itself. These examples are just a snapshot of the trend we identified: both countries take a largely techno-solutionist approach to understanding AI. What this means is that their regulations tend to operate under the assumption that there is a technological solution to any problem. This approach leads to regulations that vastly under-consider the externalities and harms of technology, and that center technological solutions even in instances where that may not be the best approach. Turning now to the solutions we wanted to highlight. First, when considering the environmental and social costs of AI, it is crucial to consider embodied carbon, meaning the environmental impact of all stages of a product's life. As many people have discussed, developing and using AI involves various energy-intensive processes, from the extraction of raw materials and water, to the energy and infrastructure needed to train and retrain these models, to the disposal and recycling of materials. Often these environmental costs fall much harder on the global majority, particularly as US-based companies site many of their data centers in Latin America, exacerbating issues such as droughts in that region. The second action we wanted to highlight is the importance of centering environmental justice concerns comprehensively across all discussions about AI, from curriculum to research to policy. We think this is really important in order to interrogate the assumption that AI technology can necessarily solve social and environmental problems. So, thank you again for having us.
Luca Belli: Excellent. Also very good that you are almost all on time. Next is Avantika Tewari, a PhD candidate at the Centre for Comparative Politics and Political Theory at Jawaharlal Nehru University in New Delhi. Do we have Avantika? Yes, we can hear you very well. Welcome.
Avantika Tewari: Thank you so much; great to be here with all of you. So I'm just going to start without much ado. To give you a little context for this paper: in India we have something called the Data Empowerment and Protection Architecture, under which all the debates around AI governance hinge on the control, regulation, and distribution of data. There has been an emphasis on consent-based data-sharing models, devised to create a data-empowered citizenry. It is in this context that I have written this paper, and I want to foreground that while technologies such as ChatGPT and generative artificial intelligence appear to be autonomous, their functionality depends on vast networks of human labor, such as data annotators, moderators, and data laborers, hidden behind the polished facade of machinic intelligence. Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation through the cheap appropriation of land, labor, and resources for compute technologies and digital infrastructure. Similarly, digital platform users are framed as empowered participants, with their likes, shares, and posts generating immense profits for tech giants, all without compensation. This represents the double bind of digital capitalism, where the unpaid participation of users is reframed as agency and labor precarity is disguised as opportunity, with the global majority bearing the brunt of both. The platform economy, built on the twin pillars of fragmented attention and compulsive participation, rebrands user exploitation as agency and convenience. By embedding individuals in digital enclosures, it transforms participatory cultures into systems of unpaid labor, commodifying interactions that were previously non-commodified, such as social relations and communication. What emerges is what I term an undead dimension of social enjoyment: a relentless pursuit of meaning, success, and community that is inherently mediated by algorithms. Yet the promise of satisfaction remains elusive, ensnaring individuals in a loop of alienation and exploitation while making their engagement complicit in the production of data analytics and AI. Data is thus fetishized as a commodity, retroactively imbued with meaning as valuable information fueling market expansion, diversification, and stackification, which is paradoxically framed as a governance model in which data can be reclaimed as an extension of the self or as a social knowledge commons. Yet this transformation conceals a deeper reality: the labor on which these platforms depend is increasingly fragmented into gig-based, task-based work. This labor sustains the development of AI technologies that paradoxically aim to automate the very low-skilled tasks on which they rely. The shift towards low-skilled, task-based, on-demand work is not merely a strategic adaptation by platforms but an ideological reconfiguration of labor relations, which is what I call the ideology of prosumerism in the paper. Increasingly, this fragmentation is an attempt by capital to overcome its own dependency on labor.
And so what I really want to foreground in this paper is that the real paradox lies not in whether technology can empower us, but in how monopoly capital's drive to overcome its dependence on labor leads to a fragmentation of the global division of labor, which disproportionately impacts the global majority. This results in the partialization of work and the automation of tasks, produced by severing labor's embeddedness in the production process through the fragmentation of work processes. I'll stop here, and thank you again.
Luca Belli: Thank you very much, Avantika, for bringing in these considerations about labor, the difference between consumer and prosumer, and the antagonism that you situated very well. Staying in India, our next speaker is Amrita Sengupta from the Centre for Internet and Society, soon to be one of our incoming fellows at the FGV Law School. Please, Amrita, the floor is yours.
Amrita Sengupta: Thank you so much, Professor Belli. I am also joined by my co-author, Shweta Mohandas, who is online. Our essay, "The Impact of AI on the Work of Medical Practitioners," is part of a larger mixed-methods empirical study we conducted to understand the AI data supply chain in healthcare in India. In this particular essay, through primary research with medical professionals, including a survey of 150 medical practitioners and in-depth interviews, we looked at the current use of AI by medical practitioners in their research and practice, and at some of the new challenges and perceived benefits. Through this, we also raise certain concerns about its current use, and about the costs and benefits of the work that doctors and medical professionals now have to put into AI systems as these systems are developed. There are four big issues we want to raise. The first is that, in the short term, doctors have to put in additional time and effort preparing data through labeling and annotation, but also learning these technologies and providing feedback on AI models. These are real costs that need to be considered before we burden an already overburdened healthcare system. For example, in our survey nearly 60% of medical practitioners cited the lack of AI-related training and education as a big barrier to the adoption of AI systems. Doctors also raised concerns about the effort and infrastructure required on their side to digitize health reports, given the nascent state of digital health data in the Indian healthcare system today. The second issue we want to foreground is that AI is currently used mostly in private healthcare and much less in public healthcare, where there is a far greater need for meaningful interventions, efficiency, and time-saving. This raises the question: whom does it serve, and whom is it privileging, in the way it currently operates? The third issue, and a critical one, is liability. Academics and medical professionals in our study flagged this issue: for instance, who would be liable for an error in a diagnosis made by an AI application that aids medical professionals? A common concern we heard from doctors and academics was that AI was meant to assist doctors, but often enough doctors felt the pressure that AI could take their place, or was threatening to do so. The last issue we want to raise is the longer-term impact of AI. In our survey, 41% of medical professionals suggested that AI could be beneficial in saving time and in improving clinical decisions. The question we ask is: what risks does over-reliance on AI raise, such as a lack or loss of clinical skills, or the representational biases that AI models may carry because of where their data comes from, including the problem of reliance on global north data? Lastly, we argue that if we are to prioritize AI, we should prioritize areas where it could most benefit the larger public interest, with the least disruption to existing workflows, and be considerate of whether the costs actually outweigh the benefits.
Luca Belli: Excellent. Now, in our last section, we are going to see how the global majority is reacting to AI and what kind of innovative thinking and solutions are being put forward, and then we will hopefully open the floor for debate. As we started some minutes late, I hope our colleagues will indulge us with five extra minutes. We now have Elise Racine from the University of Oxford. Do we have Elise here? Yes, please go ahead.
Elise Racine: Hi, everyone. I shared a presentation PDF in the chat. I'm Elise Racine, a doctoral candidate at the University of Oxford, where I study artificial intelligence, including reparative practices. AI really does promise transformative societal benefits, but it also presents significant challenges in ensuring equitable access and value for the global majority. Today I'll introduce reparative algorithmic impact assessments, a novel framework combining robust accountability mechanisms with a reparative praxis to form a more culturally sensitive, justice-oriented methodology. The problem is multifaceted. The global majority remains critically underrepresented in AI design, development, deployment, research, and governance. This leads to systems that, as we have discussed, not only inadequately serve but often harm large portions of the world's population. For example, AI technologies developed primarily in Western contexts often fail to account for diverse cultural norms, values, and social structures. While traditional algorithmic impact assessments provide valuable accountability mechanisms, they often fall short in ameliorating injustices and amplifying marginalized and minoritized voices. Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action. First, socio-historical research, delving into the context and power dynamics that shape AI systems. Second, participant engagement and the co-construction of impacts and harms, going beyond tokenism and redistributing power. Third, sovereign and reparative data practices that incorporate decolonial, intersectional principles while ensuring communities retain control over their information. Fourth, ongoing monitoring and adaptation, focused on sustainable development and adjusted based on real-world impacts. And the fifth and last step is redress: moving beyond identifying issues to implementing concrete, actionable plans that address inequities. To illustrate these steps in practice, consider a US-based company deploying an AI-powered mental health chatbot in rural India. A reparative approach might, for instance, employ information specialists with data curation and archival expertise to ground the socio-historical research in actual realities; implement flexible participation options with fair compensation and mental health support to drive meaningful community engagement; establish community-controlled data trusts; develop new evaluation metrics that incorporate diverse cultural values and priorities; and partner with local AI hubs and research institutes to empower communities to develop their own AI capabilities. These are just a few examples; there are more in the PDF, as well as in the report. Through this comprehensive approach, I want to emphasize how reparative algorithmic impact assessments move beyond merely avoiding harm to actively redressing historical, structural, and systemic inequities, including colonial legacies in their algorithmic manifestations, which was a large focus of the paper. By doing so, we can foster justice and equity, ultimately ensuring AI truly serves all of humanity, not just a privileged few. Thank you very much.
Luca Belli: Thank you very much. We are almost done with our speakers. We now have Hellina Hailu Nigatu from UC Berkeley. Please, Hellina, the floor is yours.
Hellina Hailu Nigatu: Thank you. I am going to share my screen real quick. Okay. Hi, everyone. My name is Hellina, and today I'll briefly present our work with my collaborator, Zirak. Social media platforms such as YouTube, TikTok, and Instagram are used by millions of people across the globe. While these platforms certainly have their benefits, they are also a playground for online harm and abuse. Research showed that in 2019 the majority of content posted on YouTube was created in languages other than English; however, non-English speakers are hit the hardest with content that they, quote, "regret watching." Social media platforms have also resulted in physical harm: Facebook faced backlash in 2021 for its role in fueling violence in Ethiopia and Myanmar. With this in mind, we take a look at how platforms protect their users. Platforms rely on automated content moderation systems or human moderators. For instance, Google reported that 81% of the content flagged for moderation is self-detected, with most of it detected by automated systems and then redirected to human reviewers. Additionally, Google uses machine translation tools in its moderation pipeline. However, automated systems do not work well for all languages: research shows that the intersection of social, political, and technological constraints results in disparate performance for languages spoken by the majority of the world's population. In terms of human moderators, the Google Transparency Report states that about 81% of human moderators operate in English, and of the non-English moderators, only 2% operate in languages other than the highly resourced European ones. "Majority world" is a term coined by Shahidul Alam to refer to what were mostly called third-world, developing, or global south communities, and the term "global majority" emphasizes that collectively these communities comprise the majority of the world's population. Along with their size, these communities are very diverse in terms of race, ethnicity, economic status, culture, and language. Within NLP, the majority world is exposed to harm and marginalization: they are excluded from state-of-the-art models and research; they are hired for pennies on the dollar as moderators with little to no mental health or legal support; they are exposed to harmful content when conducting their jobs as moderators; and they are harmed by the failures of existing moderation pipelines. Given this cycle of harm, there are two major lines of argument on including or not including these languages and their communities in AI: either you are included in the current technology and as a result are surveilled, or you are left in the trenches with no protection or support. In our paper we argue that this is a false dichotomy, and we ask: if we remove the guise of capitalism that currently dominates the content moderation landscape, is there a way to have moderation with the power residing primarily in the users?
Luca Belli: Thank you so much. Excellent. Now we have only two speakers to go. Isha Suri is a Research Lead at the Centre for Internet and Society. Please, the floor is yours.
Isha Suri: Thank you, Professor Luca. I'll just quickly share my screen. I am joined by my co-author Shiva Kanwar, and we looked at countering false information and policy responses for the global majority in the age of AI. I'll quickly give you a teaser and a rundown, and we would be happy to take any questions. One moment, something is wrong with my screen sharing. As background and context: the World Economic Forum recognizes false information, including misinformation and disinformation, as the most severe global risk anticipated over the next two years, and multiple studies have demonstrated that social media is designed to reward and amplify divisive content, hate speech, and disinformation. For instance, an internal Facebook study revealed that its newsfeed algorithms exploit the human brain's attraction to divisiveness and that, if left unchecked, they would feed users more and more divisive content to gain user attention and increase time spent on the platform. One of the factors that emerged is that these integrated structures and profit-maximizing incentives ensure that platforms continue to employ algorithms recommending divisive content. For instance, a team at YouTube tried to change its recommender systems to suggest more diverse content, but realized that engagement numbers were going down, ultimately impacting advertising revenue, and had to roll back some of these changes. This, as we found, leads to a lot of harmful, divisive content being promoted on these systems. We then delve into the regulatory responses emerging from global majority countries, which we found fall into three large categories. One is amendments to existing laws, including penal codes, civil law, electoral law, and cybersecurity law, where the focus is largely on ascribing criminal liability in cases where false information is defined broadly; we found that this carries significant risks of censorship. In our paper, we also include an India-specific case study where empirical research has demonstrated that platforms over-comply, leading to a chilling effect on freedom of speech and expression. Another emerging aspect is legislative proposals transferring obligations to internet communication corporations, largely by tinkering with the intermediary liability regime. Legislation is being tied to the size of a platform; the German example comes to mind, where a for-profit platform with more than 2 million users has additional obligations to take down manifestly illegal and other illegal content. There are also ex-ante obligations on intermediaries, such as the Digital Services Act in the EU. The Digital Services Act is an important one, I think, because it is one piece of legislation that really places the obligation on platform providers to be more transparent about how their algorithms work. In addition to regulatory responses, fact-checking initiatives have emerged as a response to counter false information. Meta's fact-checking initiative is probably the most prominent, but it raises questions of inherent conflict: there are concerns about the payment methods, about how Meta pays or reimburses these fact-checkers, and a lack of clarity about whether there is sufficient independence within the organization as such.
We also see a trend within global majority countries of mimicking EU or global north regulations, also known as the Brussels effect. With this, I'll segue into our conclusions and recommendations and tie together what we have discussed in the past few minutes. This is the broad table we have in the essay. I will not dwell on it, but just to give you an overview of how we have categorized these countries: we looked at what the instrument and response is, what the criminal sanctions are, whether an intermediary liability framework has been introduced, and whether transparency and accountability obligations have been introduced. The European Union and Germany are given as examples because we felt they have additional transparency and accountability requirements, as opposed to some of the other countries you see on your screen. I'll stop here, and thank you so much.
Luca Belli: Fantastic. And now, last but of course not least, Dr. Guangyu Qiao-Franco from Radboud University. Dr. Qiao-Franco, the floor is yours.
Guangyu Qiao-Franco: Thanks, Professor Belli, and thanks for staying around for my presentation. My contribution is co-authored with Mr. Mahmoud Javadi of the Free University of Brussels, who is also present online today. Our research is on military AI governance, and in our paper we highlight the concerning and widening gap between the North and the South in this field. One striking observation is the limited and decreasing participation of global South countries in UN deliberations on military AI: between 2014 and 2023, fewer than 20 developing countries contributed to UN CCW meetings on lethal autonomous weapons systems on a regular basis. Our interviews indicate different priorities in AI governance: while the global North emphasizes security governance and ethical frameworks, the global South prioritizes economic development and capacity building. The North and the South also diverge in their preferred approaches to military AI governance. Most developing countries prefer a legal ban on autonomous weapon systems, while the North favors soft law approaches, represented by the REAIM Blueprint for Action and the US-led political declaration. However, these North-led frameworks have received limited endorsement from global South countries; notably, none of the BRICS member states, key players in global innovation, have endorsed these documents. The global South's participation in military AI governance is further complicated by the dual-use nature of AI technologies and geopolitical tensions; stricter access controls led by the global North, and concerns about hindering AI development for security reasons, have contributed to disengagement among global South nations. In our paper, and also using this opportunity, we call for the building of an inclusive AI arms control regime that begins with a thorough assessment of the distinct needs and priorities of both the North and the South. Fostering international dialogue, building trust, and promoting partnerships are essential to bridging the divide, and capacity building and knowledge transfer must also be prioritized to incentivize responsible technology use and encourage broader, more active engagement. I will stop there, and thanks for your attention.
Luca Belli: Thank you very much for this. This has been an incredible marathon, very intense, and we have a lot of food for thought. I am pretty sure people in the audience who have been with us over the past hour and a half have comments. If we can, I would take one or two very quick questions or comments from the room; otherwise, we can have them over coffee. Is there any? Yes, I see one, only one comment. Good, fantastic. Can anyone pass a mic? Otherwise I can lend mine. You can borrow mine. Hello, we will work very quickly.
Audience: Thiago Moraes, PhD fellow from VUB. Some of my peers are here today, both on site and online, which is great, as well as several colleagues whom I like a lot. Going very quickly, and seeing the work that this coalition has been doing: last year I was able to be an author of the previous edition, so it is very nice to see the initiatives being discussed in the document. What I was thinking is this: there have been ongoing discussions through the IGF about how regional IGFs could contribute to the global discussion, and maybe the cases, especially from the global majority, could be discussed in these regional IGFs. We could try to find some way of showcasing them and making these connections, especially now that we are discussing the WSIS review, the renewal of the mandate, and how we could make multistakeholderism a bit more concrete in action. Maybe that could be an interesting way forward. We can talk more later; I know we don't have time, so I'll stop here.
Luca Belli: All right, fantastic. Thank you. Can you hear me? OK, this one is not working. So, thank you very much, everyone, for the comments and presentations, and for respecting the time. Just a reminder that there are still four copies of the book available here for those keen to have them; actually five, as I do not need one. You can also download it for free. As there will be only six months between now and the next IGF, and we have carefully presented both this volume and last year's, we might use the occasion of the next IGF to have a debate building on what we have presented this year and last year, and maybe, for those who are interested, to build a paper on the achievements we have showcased in these two volumes. This could also be a good way of building on what Thiago was mentioning: trying to connect the dots, showing the rationale behind all this, and showing the complexity. If something is clear from today's very intense debate and presentations, it is that there are not only many problems but also a lot of thinking and many potential solutions that can come from the global south. There are many challenges, of course, but there is also a lot of room to improve things and to collectively identify problems and potential common solutions. Let me thank everyone for the very insightful work, which I urge you all to read in this volume, and for the excellent presentations today. Thank you very much.
Ahmad Bhinder
Speech speed
134 words per minute
Speech length
741 words
Speech time
330 seconds
Developing regional AI strategies and policies
Explanation
Ahmad Bhinder discusses the Digital Cooperation Organization’s efforts to develop AI readiness assessment tools and data privacy principles for member states. The organization is working on creating frameworks to evaluate AI systems against ethical principles and human rights considerations.
Evidence
DCO is developing an AI readiness assessment tool for member states and drafting data privacy principles that consider AI implications.
Major Discussion Point
AI Governance Frameworks and Approaches
Developing ethical AI principles and assessment tools
Explanation
Ahmad Bhinder describes the DCO’s work on creating frameworks to assess AI systems against ethical principles. They are developing tools to help AI developers and deployers evaluate their systems’ compliance with ethical considerations.
Evidence
DCO is creating a framework that maps AI ethical principles to basic human rights and developing a tool for AI system developers to assess their systems against these principles.
Major Discussion Point
AI Ethics and Human Rights
Differed with
Rachel Leach
Differed on
Focus of AI governance
Ansgar Koene
Speech speed
150 words per minute
Speech length
386 words
Speech time
153 seconds
Addressing biases and discrimination in AI systems
Explanation
Ansgar Koene discusses the challenges of identifying and addressing the impacts of AI systems on different groups, particularly young people. He emphasizes the need for organizations to understand how their AI systems affect users and to address potential biases.
Evidence
Koene mentions that organizations often do not fully understand how young people interact with their AI systems or what particular concerns need to be taken into account.
Major Discussion Point
AI Ethics and Human Rights
Agreed with
Bianca Kremer
Agreed on
Addressing biases and discrimination in AI systems
Melody Musoni
Speech speed
145 words per minute
Speech length
810 words
Speech time
333 seconds
Creating inclusive AI governance frameworks for the global majority
Explanation
Melody Musoni discusses recent developments in AI governance in Africa, including the adoption of a continental strategy on AI. She highlights the priorities for AI development in Africa, including human capital development, infrastructure, and building an AI economy.
Evidence
Musoni mentions the African Union’s adoption of a continental strategy on AI and the development of a data policy framework to support member states in utilizing data.
Major Discussion Point
AI Governance Frameworks and Approaches
Agreed with
Catherine Bielick
Stefanie Efstathiou
Agreed on
Need for inclusive AI governance frameworks
Bianca Kremer
Speech speed
149 words per minute
Speech length
725 words
Speech time
290 seconds
Addressing biases and discrimination in AI systems
Explanation
Bianca Kremer discusses her research on algorithmic racism and its impact in Brazil. She highlights the need to understand and address the economic impacts of algorithmic bias, particularly in the context of facial recognition technologies used in public security.
Evidence
Kremer cites research showing that 90.5% of those arrested in Brazil using facial recognition technologies are black and brown individuals.
Major Discussion Point
AI Ethics and Human Rights
Agreed with
Ansgar Koene
Agreed on
Addressing biases and discrimination in AI systems
Addressing the economic impact of algorithmic racism
Explanation
Bianca Kremer emphasizes the importance of understanding the economic impacts of algorithmic racism. She is conducting research to assess the economic losses for individuals, ethnic groups, and society as a whole due to biased AI systems in law enforcement.
Evidence
Kremer mentions her ongoing research on the economic impact of algorithmic racism in digital platforms, focusing on developing indicators to measure these impacts.
Major Discussion Point
AI Impact on Labor and Economy
Liu Zijing
Speech speed
133 words per minute
Speech length
424 words
Speech time
190 seconds
Implementing AI in smart court systems
Explanation
Liu Zijing discusses China’s implementation of AI in its judicial system, including the development of large language models for legal research and reasoning. The presentation highlights the use of AI in various aspects of the judicial process, from pre-litigation mediation to criminal cases.
Evidence
Liu mentions specific AI systems implemented in Chinese courts, such as the Faxin system by the Supreme Court and the Phoenix system in Zhejiang province.
Major Discussion Point
AI in Judicial Systems
Ying Lin
Speech speed
122 words per minute
Speech length
366 words
Speech time
179 seconds
Addressing transparency and fairness concerns in AI-assisted judicial decisions
Explanation
Ying Lin discusses the challenges and concerns related to the use of AI in judicial systems. She highlights issues of transparency, due process, and the potential weakening of judicial accountability when decision-making authority is delegated to AI assistants.
Evidence
Lin raises questions about judges’ understanding of AI algorithms and the potential for AI to make up information, emphasizing the need for human oversight and explainable AI in judicial processes.
Major Discussion Point
AI in Judicial Systems
Luca Belli
Speech speed
150 words per minute
Speech length
2525 words
Speech time
1008 seconds
Adopting a multi-stakeholder approach to AI governance
Explanation
Luca Belli emphasizes the importance of a multi-stakeholder approach in AI governance, particularly in addressing cybersecurity challenges. He argues that cooperation between technical experts and policymakers is necessary to identify the best tools and standardization measures for AI governance.
Evidence
Belli cites his research on AI and cybersecurity in Brazil, highlighting the need for multi-stakeholder cooperation to implement effective security measures.
Major Discussion Point
AI Governance Frameworks and Approaches
Rodrigo Rosa Gameiro
Speech speed
160 words per minute
Speech length
599 words
Speech time
223 seconds
Ensuring equitable access to AI technologies
Explanation
Rodrigo Rosa Gameiro discusses the dual nature of AI development, highlighting both its benefits and challenges. He emphasizes the need to ensure that AI technologies serve all populations and uphold human rights principles for everyone.
Evidence
Gameiro mentions examples of AI benefits in healthcare, such as enabling new diagnoses and accelerating drug development, while also pointing out the digital divide and unequal access to these technologies.
Major Discussion Point
AI Ethics and Human Rights
Agreed with
Melody Musoni
Catherine Bielick
Stefanie Efstathiou
Agreed on
Need for inclusive AI governance frameworks
Catherine Bielick
Speech speed
167 words per minute
Speech length
739 words
Speech time
264 seconds
Creating inclusive AI governance frameworks for the global majority
Explanation
Catherine Bielick proposes using the Paris Agreement as a model for international AI governance. She suggests adopting a framework that allows for collective response with differentiated responsibilities, localized flexibility, and regular reviews to ensure accountability and progress.
Evidence
Bielick outlines five core features from the Paris Agreement that could be applied to AI governance, including nationally determined contributions and a global stocktake mechanism.
Major Discussion Point
AI Governance Frameworks and Approaches
Agreed with
Melody Musoni
Stefanie Efstathiou
Agreed on
Need for inclusive AI governance frameworks
Sizwe Snail ka Mtuze
Speech speed
115 words per minute
Speech length
450 words
Speech time
232 seconds
Exploring AI development and challenges in Africa
Explanation
Sizwe Snail ka Mtuze discusses recent developments in AI policy and strategy in South Africa. He highlights the country’s efforts to create a national AI strategy and policy framework, while also noting the mixed reception these initiatives have received.
Evidence
Snail ka Mtuze mentions the South African draft AI strategy published earlier in the year and the subsequent national artificial intelligence policy framework released in August.
Major Discussion Point
Regional Perspectives on AI Development
Stefanie Efstathiou
Speech speed
128 words per minute
Speech length
501 words
Speech time
233 seconds
Ensuring equitable access to AI technologies
Explanation
Stefanie Efstathiou emphasizes the need for inclusive AI governance that serves the global majority equitably. She highlights the importance of youth participation in shaping AI’s trajectory and calls for amplifying diverse voices in policymaking processes.
Evidence
Efstathiou mentions examples such as data protection-driven policies for student privacy in Africa and youth-led innovation hubs in Latin America.
Major Discussion Point
AI Ethics and Human Rights
Agreed with
Melody Musoni
Catherine Bielick
Agreed on
Need for inclusive AI governance frameworks
Yonah Welker
Speech speed
135 words per minute
Speech length
374 words
Speech time
165 seconds
Ensuring equitable access to AI technologies
Explanation
Yonah Welker discusses the challenges and opportunities in developing AI technologies for people with disabilities. He emphasizes the need for original solutions tailored to specific languages and contexts, rather than relying on generalized models like ChatGPT.
Evidence
Welker mentions that there are over 120 companies working on assistive technology and highlights the need for dedicated safety models and environments for complex and high-risk AI applications.
Major Discussion Point
AI Ethics and Human Rights
Agreed with
Melody Musoni
Catherine Bielick
Stefanie Efstathiou
Agreed on
Need for inclusive AI governance frameworks
Ekaterina Martynova
Speech speed
145 words per minute
Speech length
545 words
Speech time
224 seconds
Examining AI development and regulation in Russia
Explanation
Ekaterina Martynova discusses the current state of AI development and regulation in Russia. She highlights the government’s increased spending on AI development and the cautious approach to regulation, focusing on developing technology rather than hindering it through strict laws.
Evidence
Martynova mentions the use of AI in public services, healthcare, and public security in Russia, as well as the development of sandboxes for AI testing.
Major Discussion Point
Regional Perspectives on AI Development
Differed with
Rocco Saverino
Differed on
Approach to AI regulation
Protecting human rights in AI development and deployment
Explanation
Ekaterina Martynova discusses the human rights concerns associated with AI use in Russia, particularly in public security and facial recognition systems. She emphasizes the need for more safeguards and transparency in AI deployment.
Evidence
Martynova mentions a case considered by the European Court of Human Rights regarding procedural safeguards for people detained through facial recognition systems in Russia.
Major Discussion Point
AI Ethics and Human Rights
Rocco Saverino
Speech speed
117 words per minute
Speech length
408 words
Speech time
207 seconds
Analyzing AI governance trends in Latin America
Explanation
Rocco Saverino discusses the emerging AI regulations in Latin America, focusing on Brazil and Chile. He highlights the establishment of specialized AI regulatory bodies and the integration of AI governance into existing data protection frameworks.
Evidence
Saverino mentions Brazil's Bill 2338 of 2023 and Chile's proposed AI technical advisory council and data protection agency to enforce AI laws.
Major Discussion Point
Regional Perspectives on AI Development
Differed with
Ekaterina Martynova
Differed on
Approach to AI regulation
Rachel Leach
Speech speed
159 words per minute
Speech length
444 words
Speech time
166 seconds
Examining the environmental and social costs of AI development
Explanation
Rachel Leach discusses the environmental and social impacts of AI development, particularly in Brazil and the United States. She argues that current regulations are furthering AI development without properly addressing the harms caused by AI and big data systems to the environment.
Evidence
Leach mentions Brazil's BRL 4 billion investment in AI development and the U.S. Executive Order on AI, which both focus on the benefits of AI without fully examining its environmental impacts.
Major Discussion Point
AI and Environmental Concerns
Differed with
Ahmad Bhinder
Differed on
Focus of AI governance
Considering embodied carbon in AI technologies
Explanation
Rachel Leach emphasizes the importance of considering embodied carbon in AI technologies. This includes the environmental impact of all stages of an AI product’s lifecycle, from raw material extraction to energy consumption for training and retraining models.
Evidence
Leach mentions that environmental costs often fall harder on the global majority, citing examples of U.S.-based companies locating data centers in Latin America, exacerbating issues such as droughts.
Major Discussion Point
AI and Environmental Concerns
Avantika Tewari
Speech speed
138 words per minute
Speech length
604 words
Speech time
262 seconds
Analyzing the exploitation of digital labor in AI development
Explanation
Avantika Tewari discusses the hidden human labor behind AI systems, particularly in data annotation and moderation. She argues that this labor, often outsourced to workers in the global majority, is underpaid and unacknowledged, perpetuating digital exploitation and colonial legacies.
Evidence
Tewari mentions platforms like Amazon Mechanical Turk that outsource tasks to workers in the global majority, reducing them to fragmented, repetitive tasks.
Major Discussion Point
AI Impact on Labor and Economy
Amrita Sengupta
Speech speed
183 words per minute
Speech length
583 words
Speech time
190 seconds
Examining the impact of AI on medical practitioners’ work
Explanation
Amrita Sengupta discusses the challenges and benefits of AI adoption in healthcare, based on a study of medical practitioners in India. She highlights issues such as the additional time and effort required for data preparation, concerns about liability, and the potential long-term impacts on clinical skills.
Evidence
Sengupta cites survey results showing that 60% of medical practitioners expressed lack of AI-related training as a barrier to adoption, and 41% suggested AI could be beneficial for time-saving and improving clinical decisions.
Major Discussion Point
AI Impact on Labor and Economy
Elise Racine
Speech speed
137 words per minute
Speech length
444 words
Speech time
194 seconds
Implementing reparative algorithmic impact assessments
Explanation
Elise Racine introduces the concept of reparative algorithmic impact assessments as a framework to address inequities in AI development and deployment. This approach combines accountability mechanisms with reparative practices to create a more culturally sensitive and justice-oriented methodology.
Evidence
Racine outlines five steps in the reparative algorithmic impact assessment process, including socio-historical research, participant engagement, sovereign data practices, ongoing monitoring, and concrete redress plans.
Major Discussion Point
AI Governance Frameworks and Approaches
Hellina Hailu Nigatu
Speech speed
153 words per minute
Speech length
480 words
Speech time
187 seconds
Addressing challenges in AI-powered content moderation for diverse languages
Explanation
Hellina Hailu Nigatu discusses the challenges of content moderation on social media platforms, particularly for non-English content. She highlights the limitations of automated systems and human moderators in effectively moderating content in languages spoken by the majority of the world’s population.
Evidence
Nigatu cites research showing that the majority of content posted on YouTube in 2019 was in languages other than English, yet 81% of human moderators operate in English.
Major Discussion Point
AI in Content Moderation and Misinformation
Isha Suri
Speech speed
162 words per minute
Speech length
750 words
Speech time
277 seconds
Developing policy responses to counter false information in the age of AI
Explanation
Isha Suri examines regulatory responses to false information in global majority countries. She discusses various approaches, including amendments to existing laws, transferring obligations to internet communication corporations, and fact-checking initiatives.
Evidence
Suri provides examples of regulatory responses from different countries, such as Germany’s approach to regulating platforms with more than 2 million users and the EU’s Digital Services Act requiring more transparency in platform algorithms.
Major Discussion Point
AI in Content Moderation and Misinformation
Guangyu Qiao-Franco
Speech speed
122 words per minute
Speech length
329 words
Speech time
161 seconds
Addressing the gap between North and South in military AI governance
Explanation
Guangyu Qiao-Franco highlights the concerning gap between the global North and South in military AI governance. She discusses the limited participation of global South countries in UN deliberations on military AI and the divergent priorities and approaches to governance between the North and South.
Evidence
Qiao-Franco mentions that fewer than 20 developing countries contributed regularly to UN CCW meetings on lethal autonomous weapons systems between 2014 and 2023, and notes that none of the BRICS member states have endorsed North-led frameworks for military AI governance.
Major Discussion Point
AI Governance Frameworks and Approaches
Agreements
Agreement Points
Need for inclusive AI governance frameworks
Melody Musoni
Catherine Bielick
Stefanie Efstathiou
Creating inclusive AI governance frameworks for the global majority
Creating inclusive AI governance frameworks for the global majority
Ensuring equitable access to AI technologies
These speakers emphasize the importance of developing AI governance frameworks that are inclusive and consider the needs of the global majority, including youth participation and localized flexibility.
Addressing biases and discrimination in AI systems
Ansgar Koene
Bianca Kremer
Addressing biases and discrimination in AI systems
Addressing biases and discrimination in AI systems
Both speakers highlight the need to identify and address biases and discrimination in AI systems, particularly their impacts on different groups and in specific contexts like facial recognition technologies.
Similar Viewpoints
Both speakers address the hidden costs of AI development, with Leach focusing on environmental impacts and Tewari on labor exploitation, particularly in the global majority countries.
Rachel Leach
Avantika Tewari
Examining the environmental and social costs of AI development
Analyzing the exploitation of digital labor in AI development
Both speakers discuss challenges related to content moderation and misinformation in the context of AI, particularly focusing on the needs of diverse language communities and global majority countries.
Hellina Hailu Nigatu
Isha Suri
Addressing challenges in AI-powered content moderation for diverse languages
Developing policy responses to counter false information in the age of AI
Unexpected Consensus
Importance of regional and localized AI strategies
Ahmad Bhinder
Melody Musoni
Sizwe Snail ka Mtuze
Ekaterina Martynova
Rocco Saverino
Developing regional AI strategies and policies
Creating inclusive AI governance frameworks for the global majority
Exploring AI development and challenges in Africa
Examining AI development and regulation in Russia
Analyzing AI governance trends in Latin America
Despite representing different regions and contexts, these speakers all emphasize the importance of developing localized AI strategies and governance frameworks tailored to specific regional needs and priorities.
Overall Assessment
Summary
The main areas of agreement include the need for inclusive AI governance frameworks, addressing biases and discrimination in AI systems, considering the hidden costs of AI development, and developing region-specific AI strategies.
Consensus level
There is a moderate level of consensus among speakers on the importance of considering the needs and perspectives of the global majority in AI development and governance. This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.
Differences
Different Viewpoints
Approach to AI regulation
Ekaterina Martynova
Rocco Saverino
Examining AI development and regulation in Russia
Analyzing AI governance trends in Latin America
Martynova discusses Russia’s cautious approach to AI regulation, focusing on developing technology rather than strict laws, while Saverino highlights Latin American countries’ efforts to establish specialized AI regulatory bodies and integrate AI governance into existing frameworks.
Focus of AI governance
Ahmad Bhinder
Rachel Leach
Developing ethical AI principles and assessment tools
Examining the environmental and social costs of AI development
Bhinder emphasizes developing ethical AI principles and assessment tools, while Leach argues that current regulations are furthering AI development without properly addressing environmental harms.
Unexpected Differences
Economic impact of AI
Bianca Kremer
Avantika Tewari
Addressing the economic impact of algorithmic racism
Analyzing the exploitation of digital labor in AI development
While both speakers address economic impacts of AI, their focus is unexpectedly different. Kremer examines the economic consequences of algorithmic racism in law enforcement, while Tewari highlights the exploitation of digital labor in AI development. This difference shows the diverse economic challenges posed by AI in different contexts.
Overall Assessment
Summary
The main areas of disagreement include approaches to AI regulation, focus of AI governance, addressing biases in AI systems, and economic impacts of AI.
Difference level
The level of disagreement among speakers is moderate. While there are differing perspectives on specific issues, there is a general consensus on the need for inclusive AI governance and addressing the challenges posed by AI technologies. These differences reflect the diverse contexts and priorities of different regions and stakeholders, highlighting the complexity of developing global AI governance frameworks that address the needs of the global majority.
Partial Agreements
Both speakers agree on the need to address biases in AI systems, but they focus on different aspects: Kremer emphasizes algorithmic racism in law enforcement, while Nigatu highlights content moderation challenges for diverse languages.
Bianca Kremer
Hellina Hailu Nigatu
Addressing biases and discrimination in AI systems
Addressing challenges in AI-powered content moderation for diverse languages
Both speakers advocate for inclusive AI governance frameworks, but they propose different approaches: Musoni focuses on regional strategies in Africa, while Bielick suggests adapting the Paris Agreement model for international AI governance.
Melody Musoni
Catherine Bielick
Creating inclusive AI governance frameworks for the global majority
Creating inclusive AI governance frameworks for the global majority
Takeaways
Key Takeaways
AI governance frameworks need to be inclusive and consider perspectives from the global majority
There are significant disparities in AI development and governance between the global North and South
AI has major impacts on labor, the economy, and the environment that need to be addressed
Ethical considerations and human rights protections are crucial in AI development and deployment
Regional approaches to AI governance are emerging, with varying priorities and challenges
Content moderation and countering misinformation are key challenges in the age of AI
AI is being implemented in judicial systems, raising concerns about transparency and fairness
Resolutions and Action Items
Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority
Implement reparative algorithmic impact assessments to address historical inequities
Create open repositories and taxonomies for AI cases and accidents
Develop original AI solutions tailored to regional languages and contexts
Increase capacity building and knowledge transfer in AI between global North and South
Incorporate environmental justice concerns comprehensively in AI discussions and policies
Unresolved Issues
– How to effectively balance AI development with environmental sustainability
– Addressing the exploitation of digital labor in AI development
– Resolving disparities in military AI governance between the global North and South
– Determining liability in AI-assisted medical decisions
– Ensuring fairness and transparency in AI-powered judicial systems
– Developing effective content moderation systems for diverse languages and contexts
Suggested Compromises
– Adopting a co-regulatory approach to AI governance, balancing government oversight with industry self-regulation
– Developing AI tools for content moderation that are inclusive of diverse languages and contexts
– Balancing the need for AI development with environmental and social costs through comprehensive impact assessments
– Implementing human-in-the-loop systems for AI in judicial decision-making to balance efficiency with fairness
Thought Provoking Comments
"AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI… even if formally it has climbed the cybersecurity index by adopting a lot of sectoral cybersecurity regulation (in data protection, in the telecoms sector, in the banking sector, in the energy sector and so on), the implementation is very patchy and not very sophisticated in some cases."
Speaker: Luca Belli
Reason: This comment highlights the gap between formal regulation and actual implementation, revealing a critical issue in AI governance.
Impact: It set the tone for discussing the practical challenges of implementing AI governance frameworks, especially in developing countries.
"We are developing a tool for the assessment of AI readiness for our member states. This is a self-assessment tool, and we will make it available to the member states in a month's time. It covers different dimensions of AI readiness, including governance, but goes beyond governance to many other dimensions, for example capacity building and the adoption of AI."
Speaker: Ahmad Bhinder
Reason: This introduces a concrete tool for assessing AI readiness, moving the discussion from theory to practical implementation.
Impact: It shifted the conversation towards actionable steps countries can take to prepare for AI adoption and governance.
"90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown. The brown people in Brazil are called pardos. So, we have more than 90% of the population being biased with the use of technology."
Speaker: Bianca Kremer
Reason: This statistic starkly illustrates the real-world impact of AI bias, particularly on marginalized communities.
Impact: It focused the discussion on the urgent need to address AI bias and its societal implications, especially in diverse societies.
"Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation."
Speaker: Avantika Tewari
Reason: This comment exposes the hidden labor behind AI systems and the exploitation of workers in the global majority.
Impact: It broadened the discussion to include labor rights and global inequalities in AI development.
"Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action."
Speaker: Elise Racine
Reason: This introduces a novel framework for addressing AI inequities, combining theory with practical steps.
Impact: It moved the conversation towards concrete solutions and methodologies for creating more equitable AI systems.
Overall Assessment
These key comments shaped the discussion by highlighting the complex interplay between AI governance, societal impacts, and global inequalities. They moved the conversation from theoretical frameworks to practical challenges and potential solutions, emphasizing the need for inclusive, culturally sensitive approaches to AI development and governance. The discussion evolved from identifying problems to proposing concrete tools and methodologies for addressing these issues, particularly focusing on the perspectives and needs of the global majority.
Follow-up Questions
How can AI governance frameworks ensure equitable access to and promote development of AI technologies for the global majority?
Speaker: Melody Musoni
Explanation: This is a key policy question that needs to be addressed to ensure AI benefits are distributed fairly globally.
What are the economic impacts of algorithmic racism in digital platforms?
Speaker: Bianca Kremer
Explanation: Understanding the economic consequences could provide compelling arguments for policymakers to address algorithmic bias.
How can we develop more specific intersectional frameworks and guidelines for AI in healthcare and education, particularly for underserved populations?
Speaker: Yonah Welker
Explanation: This would help ensure AI applications in critical sectors like health and education are inclusive and beneficial for diverse populations.
How can we develop AI regulatory models that reflect the unique social, economic and cultural contexts of Latin American countries?
Speaker: Rocco Saverino
Explanation: This would allow Latin American countries to shape AI governance proactively in a way that suits their specific needs and contexts.
How can we prioritize AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare?
Speaker: Amrita Sengupta
Explanation: This approach could help maximize the positive impact of AI in healthcare while minimizing potential negative consequences.
Is there a way to have content moderation with power primarily residing in the users, rather than being dominated by capitalist interests?
Speaker: Hellina Hailu Nigatu
Explanation: This could lead to more equitable and culturally sensitive content moderation practices.
How can we build an inclusive AI arms control regime that addresses the distinct needs and priorities of both the global North and South?
Speaker: Guangyu Qiao Franco
Explanation: This is crucial for developing effective global governance of military AI applications.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event
Internet Governance Forum 2024
15 Dec 2024 06:30h - 19 Dec 2024 13:30h
Riyadh, Saudi Arabia and online