WS #82 A Global South perspective on AI governance
Session at a Glance
Summary
This panel discussion focused on global approaches to AI governance, with particular emphasis on perspectives from the Global South. Experts from various regions discussed the challenges and opportunities in developing AI regulatory frameworks that respect human rights and cultural diversity.
The discussion highlighted the African Union’s recent adoption of a regional AI strategy, which aims to promote ethical AI use aligned with the continent’s development goals. Challenges faced by African countries in implementing AI governance include infrastructure limitations, skills shortages, and the need to address unique socio-economic issues.
In Asia, a more cautious “wait-and-see” approach was noted, with most countries opting for soft law and best practices rather than hard regulations. The importance of addressing existing digital divides and human rights concerns before fully embracing AI was emphasized.
The European Union’s AI Act was presented as a pioneering regulatory framework, balancing innovation with fundamental rights protection. Its potential influence on global AI governance through the “Brussels effect” was discussed, along with the challenges of exporting such a politically rooted framework to other regions.
The geopolitical aspects of AI governance were explored, including the role of organizations like G20 and BRICS in shaping policies. The need for collaboration between Global North and South was stressed, with human rights proposed as a universal language for developing inclusive AI governance frameworks.
The discussion concluded by addressing the challenges faced by Global South countries in regulating technologies primarily developed outside their jurisdictions. Participants emphasized the importance of leveraging existing legal frameworks and regional collaboration to effectively participate in global AI governance discussions.
Key points
Major discussion points:
– Different regional approaches to AI governance, including the EU AI Act, African Union strategy, and developments in Asia
– Challenges faced by Global South countries in developing AI governance frameworks, including lack of infrastructure and technical capacity
– The need to center human rights in AI governance approaches rather than just focusing on risk
– Geopolitical tensions shaping AI governance and the potential for fragmentation of approaches
– The role of the private sector and need for corporate accountability in AI governance
Overall purpose:
The purpose of this panel discussion was to examine global approaches to AI governance, with a focus on perspectives from the Global South. The panelists aimed to highlight emerging approaches from different regions and discuss challenges and opportunities for more inclusive AI governance frameworks.
Tone:
The overall tone was academic and policy-oriented, with panelists providing expert insights from their regional perspectives. There was a collaborative spirit, with panelists building on each other’s points. Towards the end, the tone became more urgent in discussing the need for Global South countries to move beyond being consumers of AI and become active contributors to global governance frameworks.
Speakers
– Melody Musoni: Digital Policy Officer at the European Centre for Development Policy Management, moderator of the panel
– Jenny Domino: Non-resident fellow at the Atlantic Council’s Digital Forensic Research Lab, Senior Case and Policy Officer at the Oversight Board at META, PhD candidate at Harvard Law School on global governance of technology
– Lufuno T Tshikalange: Expert in cyber law, founder of Orizo Consulting specializing in data privacy, cyber security governance, and ICT contract management in South Africa
– Gianclaudio Malgieri: Associate Professor of Law and Technology and board member of the eLaw Center for Law and Digital Technology at the University of Leiden Law School, Co-Director of the Brussels Privacy Hub, Managing Director of Computer Law and Security Review
Additional speakers:
– Dr. Sabine Witting: German lawyer and Assistant Professor for Law and Digital Technologies at Leiden University, co-founder of TechLegality consultancy firm
Full session report
Global Approaches to AI Governance: Perspectives from the Global South
This panel discussion brought together experts from various regions to examine global approaches to AI governance, with a particular focus on perspectives from the Global South. The panellists explored the challenges and opportunities in developing AI regulatory frameworks that respect human rights and cultural diversity, highlighting the need for inclusive and collaborative approaches to global AI governance.
Regional Approaches to AI Governance
The discussion revealed significant variations in regional approaches to AI governance:
1. African Union: Lufuno T Tshikalange highlighted the African Union’s recent adoption of a regional AI strategy in August 2024, aligned with the continent’s Agenda 2063 and digital transformation strategy. This strategy aims to promote ethical AI use in line with Africa’s development goals while addressing the triple challenges of inequality, poverty, and unemployment.
2. Asia: Jenny Domino described a more cautious “wait-and-see” approach in Asia, with most countries opting for soft law and best practices rather than hard regulations. This approach reflects a desire to observe the outcomes of regulatory efforts in other regions before implementing comprehensive frameworks.
3. European Union: Gianclaudio Malgieri presented the EU AI Act as a pioneering regulatory framework, which has evolved from a product safety tool to a fundamental rights tool. The Act attempts to balance innovation with fundamental rights protection and includes a list of prohibited practices. The potential influence of the EU AI Act on global AI governance through the “Brussels effect” was discussed, along with the challenges of exporting such a politically rooted framework to other regions.
4. Council of Europe: The panel discussed the Council of Europe AI Convention, which focuses on public sector AI, and its potential relationship with the EU AI Act.
Challenges in Developing AI Governance Frameworks
The panellists identified several key challenges faced by Global South countries in developing and implementing AI governance frameworks:
1. Infrastructure limitations: Lufuno T Tshikalange emphasised the lack of necessary infrastructure in many developing countries, including inadequate internet access and insufficient data for training models, which hinders their ability to fully engage with AI technologies and governance.
2. Skills shortages: The need for technical expertise and capacity building in AI-related fields was highlighted as a significant challenge for many Global South countries.
3. Existing digital divides: Jenny Domino stressed the importance of addressing existing digital divides and human rights concerns before fully embracing AI governance.
4. Measuring risks to fundamental rights: Gianclaudio Malgieri pointed out the difficulty in quantifying and measuring risks to fundamental rights when developing risk-based approaches to AI governance.
5. Limited leverage over tech companies: An audience member raised the issue of Global South countries having limited influence over tech companies based in other jurisdictions, making it challenging to enforce regulations effectively.
Human Rights-Centred Approach
A recurring theme throughout the discussion was the importance of centring human rights in AI governance approaches:
1. Universal framework: Jenny Domino proposed human rights as a potential universal language for developing inclusive AI governance frameworks that could be understood across different regions and political contexts.
2. Balancing risks and rights: While the EU AI Act was praised for attempting to balance risks and rights, there was debate about whether a purely risk-based approach was sufficient to protect fundamental rights.
3. Corporate accountability: The discussion touched on the need for mechanisms to hold companies accountable for adverse human rights impacts caused by AI systems, drawing parallels with the UN Guiding Principles on Business and Human Rights.
Geopolitical Aspects of AI Governance
The panel explored the significant role of geopolitics in shaping AI governance approaches:
1. G20 and BRICS: Melody Musoni highlighted the potential for organisations like G20 and BRICS to influence global AI governance discussions and policies. South Africa’s G20 presidency was mentioned as an opportunity for collaboration in AI governance.
2. Political standpoints: Differing political views on issues such as social scoring were noted as factors shaping diverse governance approaches across regions.
3. Fragmentation concerns: Jenny Domino emphasised the need to consider geopolitical tensions and the potential fragmentation of internet governance when developing AI governance frameworks.
Collaboration and Inclusivity
The panellists stressed the importance of collaboration between the Global North and South in developing effective AI governance frameworks:
1. Moving beyond consumer status: Lufuno T Tshikalange urged Global South countries to leverage their skills and case studies to contribute actively to global AI governance discussions, rather than being mere consumers of technology and regulation.
2. Regional collaboration: The potential for regional bodies like the African Union to increase leverage in regulating multinational tech companies was discussed.
3. Inclusive dialogue: The need for diverse voices in shaping AI governance was emphasised, with calls for the Global South to be seen as contributors rather than subjects of discussion.
Practical Approaches and Future Directions
The discussion highlighted several practical approaches and unresolved issues that require further attention:
1. Building on existing frameworks: Panellists suggested focusing on existing laws (e.g., data protection, criminal law) as a starting point for regulating AI in countries with limited capacity.
2. Influence of EU AI Act: The potential impact of the EU AI Act on other regions was noted, with the Brazilian AI Act mentioned as an example of its influence.
3. Universal versus regional approaches: Achieving a universal approach to AI governance while respecting regional differences.
4. Regulatory leverage: Effectively regulating AI in developing countries with limited leverage over tech companies.
5. Innovation and rights: Balancing innovation with the protection of fundamental rights in AI governance.
6. Risk measurement: Measuring and quantifying risks to fundamental rights in AI systems.
The panel concluded by emphasising the need for continued dialogue and collaboration to address these challenges and develop more inclusive and effective global AI governance frameworks.
Session Transcript
Melody Musoni: Good morning, everyone. Can you hear me? Welcome to our panel discussion, where we are going to be talking about a Global South perspective on AI governance. My name is Melody Musoni, I'm a Digital Policy Officer at the European Centre for Development Policy Management, and I am going to be moderating this session together with my co-moderator, Dr. Sabine Witting. Dr. Witting is a German lawyer and an Assistant Professor for Law and Digital Technologies at Leiden University in the Netherlands. She's also the co-founder of TechLegality, a consultancy firm specializing in human rights and digital technologies. The purpose of our panel discussion is to have a conversation around global approaches to AI governance, and especially the emerging approaches that we see from the Global South. I think this discussion is quite critical, as we are at a point where international standards on AI governance are being set, and these standards tend to be set by Global North countries. But of late we have seen Global South countries pushing back and demanding fair and equal representation of their cultural and social norms and values when it comes to the development of AI frameworks. So there is also a question of whether it is even possible to have an international framework on AI governance that is representative and reflective of all the diverse cultures that we have and the different political, economic, and social contexts of different countries. And I guess one of the outcomes that came from the Summit of the Future, namely the Global Digital Compact, is that more and more countries are making a commitment to enhance international governance of AI for the benefit of humanity. To quote from the Global Digital Compact, it calls for "a balanced, inclusive, and risk-based approach to the governance of AI, with the full and equal representation of all countries, especially developing countries, and the meaningful participation of all stakeholders." I think the future of AI governance is also being shaped by geopolitics and geopolitical competition, especially between the superpowers, the U.S. and China. And because of that AI race, we need to start thinking about its implications for how Global South countries approach the question of AI governance. The governance of AI will likely be shaped by aspirations of AI sovereignty or tech sovereignty, and very little by the promotion and protection of human rights. In this panel discussion, we are going to try to answer three policy questions. The first one is: what regulatory approaches to AI are being adopted by the Global South, and do these approaches advance the protection of human rights? The second is: what challenges are being faced by the Global South in developing their AI governance frameworks? And the last question is: what are the implications of different regulatory approaches for AI development and deployment in the Global South? That being said, I am joined by a panel of brilliant experts representing different regions, talking about Africa, Latin America, Asia, and Europe. So maybe I'll start with the introductions.
Sitting with us today, we have Jenny Domino. She’s a non-resident fellow at the Atlantic Council’s Digital Forensic Research Lab. and a Senior Case and Policy Officer at the Oversight Board at META, where she oversees content policy development concerning elections, protest, and democratic processes. She’s also completing her PhD at Harvard Law School on the global governance of technology. So welcome, Jenny.
Jenny Domino: Thank you.
Melody Musoni: And then online, we are joined by Advocate Lufuno Tshikalange, who is a distinguished expert in cyber law and the founder of Orizo Consulting, a firm specializing in data privacy, cyber security governance, and ICT contract management in South Africa. She has over 10 years of multidisciplinary experience in the ICT sector and is recognized as one of Africa's top 50 women in cyber security. So welcome, Advocate. And we are also joined by Gianclaudio Malgieri. He's an Associate Professor of Law and Technology and a board member of the eLaw Center for Law and Digital Technology at the University of Leiden Law School. He's also the Co-Director of the Brussels Privacy Hub and the Managing Director of Computer Law and Security Review. So welcome, panelists. I'm going to jump right into our discussion, and I'm going to start with Advocate Lufuno, to shed more light on the development of AI governance on the African continent, and on what she sees as some of the challenges that African countries are experiencing as they develop their AI frameworks. You can go ahead.
Lufuno T Tshikalange: Thank you, Dr. Melody, and thank you for having us here today. In Africa, we now have a regional artificial intelligence strategy, which was developed this year and adopted by the African Union Executive Council in August 2024. Currently, we are participating, within the auspices of the African Union Commission, in developing the strategy's implementation plan for the next five years, 2025 to 2030. The strategy seeks to ensure the appropriate establishment of AI governance and regulations and to accelerate AI adoption in key sectors like agriculture, education, and health. It is aligned with Agenda 2063 and the digital transformation strategy that we adopted earlier. It also promotes the creation of an enabling environment for an AI startup ecosystem, and it deals with issues of skills and talent development, because it is not just AI skills that are in short supply; we have a shortage of ICT skills across the board. It further promotes grassroots research and innovation partnerships in AI, which is important if we are to participate meaningfully in the digital economy and, more importantly, in the AI economy. The important part is that the implementation of this strategy is going to be based on ethical principles that respect human rights and diversity, while ensuring that we have appropriate technical standards for AI safety and security. The strategy takes a multi-pronged approach, emphasizing that the regulatory framework be flexible, agile, adaptable, context-specific, and risk-based, because it is important that we do not cut and paste. The development of this strategy did learn from the EU, the OECD, and other regions, but the intention is to make sure the strategy responds to our own unique challenges, importantly the triple challenge of inequality, poverty, and unemployment. It also emphasizes that the approach be multi-tiered and collaborative. The strategy's goal is to promote a human-centric approach, which is very important: the technology should be developed as a tool to assist our developmental goals, not to exploit people or violate their human rights. At the same time, it is very strong on security by design, and on ensuring accountability and transparency in both the design and deployment of AI systems. The strategy highlights the need for mechanisms throughout the AI life cycle that mitigate potential harms from AI technologies while we foster responsible development and use of AI across Africa. It is important to note, as I mentioned earlier, that the AI strategy is not a standalone document but part of the broader digital transformation strategy. Even the priorities in the strategy are derived from the digital transformation strategy, which is in turn informed by Agenda 2063. So the intention is not to have an AI strategy for its own sake, but an AI strategy that responds to our unique challenges, stimulates our economy, promotes the integration targeted by Agenda 2063, generates inclusive economic growth, stimulates job creation, and takes a holistic approach to the digital revolution for socioeconomic development that meets the needs of Africans.
Other important baseline policy frameworks include the AU Convention on Cybersecurity and Personal Data Protection, commonly known as the Malabo Convention. The current challenge is that the Malabo Convention was adopted in 2014 and only entered into force in June last year, but it is yet to be implemented. We also have the AU Data Policy Framework, which provides an expanded framework for how effective data governance can be achieved even as we implement emerging technologies like AI. So we have priorities and guiding principles provided by the strategy: local first, people-centred, and human rights and dignity. Human rights and dignity are everywhere in this strategy, because it is important that as technology advances we do not violate human rights such as the right to privacy, which is protected by the AU charter and by the constitutions of different African states. Peace and prosperity is also one of the principles that will govern this strategy, as are inclusion and diversity. As I said, we are struggling with the triple challenge, one part of which is inequality, exacerbated by a number of divides (the digital divide, the information divide, the infrastructure divide) that hold back technological development even within Africa, and which look even worse if we compare ourselves with other regions. The strategy also promotes ethics and transparency as principles, along with cooperation and integration, because AI, like any other emerging technology, cannot be implemented in isolation; we have to do it in collaboration with each other. So the AI governance conversation is important to us because of the seven aspirations we need to achieve under Agenda 2063 and the SDGs that correspond with them. Our key concerns, however, are the misuse of intellectual property, the potential for misinformation threatening our societal cohesion and democracy, and the lack of infrastructure. We are also considering barriers that may make AI adoption a challenge, including inadequate internet access, insufficient or unstructured data for training models, limited computing resources, and insufficient skills. The risks we have identified, which will be addressed fully in the implementation plan, include environmental risks; system-level risks, like the bias that may come with AI systems; structural risks, like automation, which might increase the disparities and inequalities we already have; and cultural and societal risks to our heritage and culture as we advance into the world of technology. So we are working on strategies to ensure that, on top of our triple challenge and the other challenges identified in Agenda 2063, we are not now creating more challenges for ourselves. We also have case studies that can help us accelerate the implementation and adoption of AI within the African region: Algeria, Egypt, Benin, Mauritius, Rwanda, and Senegal are already leading in this space, with their own national initiatives addressing the identified objectives within their countries.
Melody Musoni: So, should we continue? Okay, it seems like we have lost Advocate Lufuno, but from what she was sharing, she was giving an overview of what is happening in Africa, particularly with the continental AI strategy and how it connects with the broader vision of Agenda 2063 and the digital transformation strategy, as well as drawing examples of some of the African countries that are actually developing their own national AI frameworks. So I'm going to come to you, Jenny, to talk more about the case of Asia. I remember last year at the IGF, we were in Kyoto, and Japan was sharing its role in the shaping of international frameworks on AI through the Hiroshima AI Process and how it shaped the G7 AI principles. And now we are here in Saudi Arabia, and they have also developed the Riyadh Charter on AI principles for the Islamic world, which is aimed at aligning AI frameworks with Islamic values and principles. As someone who has been doing work on AI governance, do you see different approaches to AI governance emerging, and do you see these approaches promoting or upholding human rights, from your perspective?
Jenny Domino: Can you hear me? Yes? First of all, good morning, everyone. I'm happy to be here. Just to qualify, I'm speaking in my personal capacity, and my views don't necessarily reflect the views of the organizations I represent. To answer your question, I echo what Advocate Lufuno already mentioned and discussed. In Asia, by and large, we see more of a wait-and-see approach. We don't have anything that comes close to the EU AI Act in terms of hard law, in terms of regulation. There are draft legislative initiatives, and there's the ASEAN guidance, so more of a best-practice or soft-law approach rather than hard law so far in Asia, with draft laws in Japan, Korea, Thailand, and other countries. What I see as a common thread among all these various initiatives is that they overlap with the EU AI Act, for example in terms of risk tiering, right? With minimal risk, limited risk, up to high risk. There is a lack of centering of human rights, and many human rights groups have actually called this out, including in the EU AI Act: the focus on risk tends to eclipse the focus on rights. And I think there's something here that we can learn from platform governance, from governing social media platforms. We've seen how emerging technology, and when I speak about social media I mean more than a decade ago, generated a lot of excitement and a lot of hype, and the lack of centering of human rights actually led to a lot of human rights violations. The UN fact-finding mission on Myanmar in 2018 identified Facebook as being used by ultranationalist monks and military officials in Myanmar to incite violence against an ethnic minority. This is just to give you an example of how, if we don't center human rights when we talk about AI regulation, whether legislative or policy, it could be misused. And we already see this happening, for example, in the use of generative AI in conflict settings. As you said, Melody, there are a lot of geopolitical tensions and a lot of armed conflict situations happening around the world, and one of the things we need to think about is how AI technology can be used in them. Another thing I want to emphasize is that AI encompasses a broad range of uses and purposes. There are more traditional AI uses, for example in content moderation on social media platforms, where automated technology is used to enforce platform rules by taking down things like misinformation or hate speech. And then there is more emerging technology, such as generative AI like ChatGPT, and many of the issues that involve these technologies Advocate Lufuno already mentioned, so I won't repeat them here, except to emphasize that there is a divide in terms of basic infrastructure on the one hand, and inequality. One of the things that I'm worried about is that the hype surrounding AI tends to overlook many infrastructure problems that we still see in developing countries in Asia. We can't really think about AI uptake in these countries until we solve basic issues like access to education and digital education. And on the other hand, we also have existing human rights issues in many countries in the world, including in Asia. We see internet shutdowns, website shutdowns.
So if these things are not even addressed, we can't really talk about AI; it's a shiny new thing, but we haven't really addressed the existing issues concerning the internet. And so I think that's something I want to highlight in this conversation. And as Advocate Lufuno also mentioned, we need to center stakeholders, though in the human rights framework it's actually rights holders. Who are the communities affected by this technology? Who are we leaving out? Who are we including in the conversations that must happen? I'll stop here.
Melody Musoni: Thank you so much for your contribution. You have actually raised something that I was talking about earlier on my way here: we are now just talking about AI, AI, but we are forgetting about the important issues at the foundation of AI, issues of digital infrastructure, for example accessibility to the internet. So AI is coming in and creating a further divide, and our focus is now on AI governance, overlooking some of the existing problems that we already face. I'm going to come to Gian Claudio to talk more about the developments that are happening in Europe. I'm sure everyone has heard about the EU AI Act, but what exactly is this Act about? And perhaps you can also touch on the Council of Europe and the developments happening there in developing another framework on AI, and make that distinction, because we tend to be confused about what the EU AI Act is and what the Council of Europe framework on AI governance is. So over to you, Gian.
Gian Claudio: Thank you very much. I hope you hear me well. Yes, good. So indeed, the AI Act came up a bit everywhere in the discussion, because all other countries might refer to the AI Act as a model, or as a non-model. I would like to start very briefly from the legislative history of the AI Act; then we will look at the mission of the law and its structure in comparison with the other laws we have in Europe that also regulate AI-related aspects, and then, indeed, focus on the AI Convention and the interplay between the AI Act and the AI Convention. Doing all of that in seven to ten minutes is impossible, but we will try to give some highlights. Okay, so the AI Act is indeed the first AI regulation we have in the world. Of course, we already had some AI-related rules in many countries, including in Europe, but the AI Act has the ambition to regulate artificial intelligence as a whole. It took more than three years from the initial text of the European Commission to the final approval, from April 2021 to August 2024. The first of August 2024 is the date of entry into force, but then different provisions apply at different times for different kinds of entities. I was hearing that, for example, the Asian approach is wait and see; in Europe, we don't have that approach. Why so? I think the main point is that sometimes wait and see is too late. There is this narrative, also around big tech, that it is better to first let innovation go, and then we can regulate. The European Union took a different position, because sometimes, if we let innovation go, if we let companies go, the consequence might be that the harm to human rights and fundamental freedoms has already been produced. And sometimes these technologies create dependency in society, so society and individuals come to depend on them, and it might be the case that it is too late: too big to sanction, too big to fail. This was happening, for example, with generative AI systems like OpenAI's, but also with some chatbots. But zooming in a bit more on the structure of the AI Act: I would say that the AI Act strikes a difficult balance between risks and rights. It also had a particular development path, because it was initially conceived as a product safety tool, something related to product safety and, through that, to consumer protection. It is interesting to note that the European Commission has different Directorates-General, and the Act was not proposed by DG Justice, which is usually the part of the European Commission responsible for rights, fundamental rights, liberties, and so on, but by DG Connect. So it really was developed as a technological safety tool. Then, during the three years of negotiations, both in the European Parliament and in the Council, the AI Act became something else: it became a fundamental rights tool. So we still see the structure and the DNA of a product safety regulation, but with a lot of fundamental rights in it.
But it is not a tool for individual rights; it is a tool for protecting fundamental rights through risks, and this is important. I think this is one of the main merits of the AI Act, because the AI Act makes some choices. It does not leave everything to the decisions of companies or the decisions of member states; there are political choices about risks to fundamental rights that the AI Act takes. And it is the first, and maybe the only, case in which we have a list of prohibited practices; we now see other laws following, like the Brazilian AI bill, but I don't want to talk about other regions. This brave choice was a political choice, and it was one of the most problematic sections of the negotiations. So what was prohibited? For example, social scoring through AI is prohibited. Manipulation, or let's say the exploitation of particular vulnerabilities through manipulation, is prohibited. We also have specific prohibitions on emotion recognition for workers and students, in the workplace and in schools, as well as special prohibitions related to law enforcement and the use of biometrics. It is a long list with a lot of exceptions, and I cannot go through all of it. What is important to know is that in addition to this unacceptable-risk list, we also have high risk. For high-risk systems, the approach is allowance: they are allowed, but a lot of governance duties apply. One of the main duties is that high-risk systems need a data governance plan and need to check for bias, with a system to limit or remove biases. Of course, you cannot delete bias entirely; you can mitigate certain kinds of bias. Then there are other rules, like human oversight, interpretability, and so on. There is also an area of limited risk in the AI Act, and the final area of regulation concerns general-purpose AI. The whole law is very big, 112 articles, but just to say, there is a parallel chapter on general-purpose AI that aims to regulate generative AI as well, divided into models that pose systemic risk and those that do not. I should also say that the balance with innovation was really on the minds of the legislators, in particular the Council and some parties in the European Parliament. There is a whole chapter on how AI innovation should be supported. One of the main aspects of the innovation part is that there are rules for SMEs, small and medium-sized enterprises, and there are specific protections for the training of AI, with exceptions to the data protection rules; these are the regulatory sandboxes, where the training is for public interest purposes. Okay, of course I cannot say more on the AI Act; it is huge, and there are many things I didn't mention. But a couple more minutes to contextualize the AI Act in the broader EU picture, and then the AI Convention of the Council of Europe. Just to say, the AI Act is not the first piece of legislation to regulate AI-related elements or AI-related societal aspects in Europe and in the European Union.
Already the GDPR, the first regional data protection rules in the world, regulated several aspects of AI. For example, Article 22 is about automated decision-making, there is the definition of profiling, and there are many transparency rules. Connected to the GDPR, we have the Digital Services Act, approved in 2022, which regulates digital services for online platforms, including the use of AI in recommender systems and in online behavioral advertising using profiling; so there is a lot related to AI in the Digital Services Act as well. The Digital Markets Act also regulates big platforms in the digital market. Then we have, of course, the Data Act, the Data Governance Act, et cetera. Just to say, the AI Act does not sit in a silo. And then there is the AI Liability Directive, which has been proposed but will probably never be adopted because, as you know, the EU legislative process is a long one, and we already know that the Council, in particular France, is against this Liability Directive, because there is now this wave of saying we regulated too much and we need more innovation. France, in particular with Macron, was very concerned about not adding new rules. But actually, the AI Liability Directive is just the final element of the AI Act; it is connected, simply providing some exemptions and rules for liability when the AI Act is violated. So this is the very broad picture. Then thirty seconds on the AI Convention, and I will stop. The Council of Europe this year, actually a few weeks ago, finally approved the AI Convention. It is interesting to note that this is not just a regional act, because many countries that are not part of Europe and not part of the European Union, not part of, let's say, political and geographic Europe, are also signatories. For example, the United States is a signatory of the AI Convention, and we have other countries in the Caucasus and elsewhere; just to mention one, Georgia is a signatory. The AI Convention is similar to the AI Act in terms of the definition of AI: they both take the OECD definition. But it is different in terms of scope. The Council of Europe AI Convention regulates only public sector AI, the use of AI by the public sector, though member states can also opt in to cover private actors providing and deploying AI. Another important difference is that the AI Act regulates deployers and providers of AI, while the AI Convention regulates the whole life cycle of AI, from the very first phase of elaboration to the final commercialization. Also, the AI Convention does not have a list of prohibitions. Then maybe the open question, and I will leave you with this, is whether the AI Act can be the European Union's implementation of the AI Convention. The AI Convention is an international treaty, so it should be implemented by its signatories, while the AI Act is a law of a legal system, the European Union. So even though the AI Act was approved before the AI Convention, the AI Act might be an implementation of the AI Convention in Europe. This is an open question that we are still working on. Thank you.
Melody Musoni: Thank you so much, Gian Claudio, for your contribution. The points you raise, especially on the roadmap and the discussions leading to the EU AI Act, are something I find relevant, especially in the African context, where there is a push to regulate without taking our time to actually have these negotiations and understand what our approach is. I always give the EU process as an example: it took over three years for you to get to where you are with the EU AI Act. And the issue you raise, that it's not just the EU AI Act that regulates AI, that we also have the Digital Services Act, the Digital Markets Act, and the GDPR, which in a way regulate AI too, is also important to our approaches to understanding AI governance and regulation. And of course, the distinctions between the Council of Europe framework and the EU AI Act are quite important for me as someone who is learning more about the EU processes and what's happening with the Council of Europe. Now I'm going to change things a bit, because one of our speakers was not able to join. Before we go to a second round of questions for the panelists, I wanted to find out from the audience, online and here, whether you have any questions for our panelists based on what they have already contributed, before we move on to the next segment of our discussion. Online?
AUDIENCE: Melody, if I may, just maybe one of the things that I always ask myself, and I know I should know the answer, but I don't, and it might also be interesting for others. This risk-based approach, Gian Claudio, that you were talking about: risk to what? Because in the EU AI Act, as you have said, it started as a product safety framework, and later on we infused it with fundamental rights left, right, and center, or at least we tried. So the EU AI Act looks at risk to what, and who determines that? What was the discussion around these different risk categories, and how was that assessed? Maybe you can tell us a little bit more about it. Because Jenny also said that Asian countries are looking at this kind of tiered approach to risk, so maybe we can then hear from Jenny a little bit more about what the discussions are on how to determine risk and how to think about these different categories. So maybe Gian Claudio first, then Jenny, and then Advocate Lufuno, maybe from the AU perspective.
Gian Claudio: Yes, sure. Thank you, great question. This is, of course, a sore spot of technology legislation in Europe, because we already had this problem in the GDPR. The GDPR was, I think, the first to introduce the concept of risk to fundamental rights. What is a risk to a right? Risk comes from business management doctrine, let's say, while fundamental rights come from human rights and fundamental rights analysis, and they are based on very different intellectual frameworks. Risks can be measured and controlled. Fundamental rights are moral boundaries that are difficult to conceive of as a measurable element; they are usually not quantitatively measurable, only qualitatively, if we can say that. So this was a big problem we already had in 2016 with the General Data Protection Regulation, and it carried on, because the Digital Services Act has similar problems with risk assessment, and now the AI Act. So the short answer is risk to fundamental rights, but then how can we measure it, how can we analyze it? I think one of the ambiguities of the law is in Article 1, which says that the AI Act aims to protect fundamental rights, health, and safety. And I am always a bit skeptical, a bit critical, I have to say, when health and safety are put forward as something different from fundamental rights, rather than as part of a bigger understanding of fundamental rights. Article 1 then goes on to say fundamental rights, especially democracy. But democracy is not really part of the Charter of Fundamental Rights; if you do Ctrl-F for "democracy", you will not find much. So it is a bit of a paradox that health and safety are considered something other than fundamental rights, while democracy is considered part of them. Of course, we know they are all connected, all under the same approach. Just to say, there are political choices to be made, because when you go from fundamental rights to applications and prohibitions, we cannot leave everything to private accountability alone. So the important thing is that the regulator said: for me, mental integrity means this; and so manipulation and the exploitation of vulnerability were prohibited, just to make an example. And I will conclude with this. With a colleague in Utrecht, Cristiana Santos, I did some research on how we can really measure risks to fundamental rights, in particular the severity element, and now we are working with an NGO, ECNL, to try to understand how to really do that. We published an article this year on how the subjective views of marginalized groups, for example, can inform the discussion of how to measure the severity of fundamental rights risks, together with other elements like adverse effects, but also violations of law. We usually have these big discussions in Europe: some people say that a risk to fundamental rights is just a violation of the law, but this is a bit limited, because it is just yes or no, black or white, and difficult to measure. Other people say that we should measure the adverse effects. But to measure adverse effects, you would need something quantifiable, like property or health, and then every violation of fundamental rights, of privacy, et cetera, is reduced to what a psychologist or a psychiatrist or an economist would measure, not what a real human rights expert measures. So this is a bit of an open issue, I guess, but it is bigger than just AI.
Jenny Domino: Yes, thank you. In the Asia-Pacific region, what I see in terms of risk is sort of similar to the EU AI Act. There are prohibited uses enumerated, for example, in the ASEAN guidance on AI governance and ethics. There is not a lot of articulation of risks to whom, to answer your question, Sabine. And when it comes to human rights, there is not a lot of mention of it, if at all, nor of stakeholder engagement or rights-holder engagement. Before I delve further into this, I want to make a note about the earlier point on the race to regulate. What is so interesting to me about this framing is: in this race, who is competing against whom, and who is left out? If we're thinking about the Global South, about developing countries, I agree that we need to hold tech companies accountable. At the same time, we also need to think about the geopolitical tensions. How are we looking at all these competing regulatory initiatives, and who is left out of this race? Why is it even framed as a race? Because a race means there is going to be a winner, or a leader, in terms of regulation. But we have to think about this from a global perspective, because the internet is global. There will be overlaps. We don't want fragmentation. And my worry is that if we don't even have a unifying framework, and if we're thinking about this as a race to regulate, then there will be people at the bottom and people on top. So I just wanted to push back a little bit on that framing, because it's interesting to me as somebody coming from the Philippines. And so, back to the human rights discussion. I do agree that we need to hold corporations, tech companies, accountable, but I see the discussion here as mirroring the history of the UN Guiding Principles on Business and Human Rights. The whole reason the UNGPs were formed is that decades ago many multinational companies were operating in developing countries, and what do you do when the human rights violators are sometimes the government actors themselves? That is why I really do believe that government regulation is warranted: it provides the minimum baseline. But over and beyond that, there are many issues that we would not want governments to regulate under human rights law. Just to give you an example, and I'm using examples from my own experience in content moderation and platform governance because that's what I'm most familiar with: Article 19 of the International Covenant on Civil and Political Rights guarantees freedom of expression, and under Article 19 of the ICCPR there are areas where you can limit freedom of expression. But this treaty, this Article 19, was designed to hold state actors accountable. All treaties are state-based; they rest on the idea that we need to guarantee the fundamental rights of persons against their government. So Article 19 provides the minimum exceptions. But in content moderation there are so many issues that we call "lawful but awful". Take speech, for example misinformation: do we really want governments to legislate prohibitions on false content? So many human rights groups criticized the fake news laws that were coming out in different countries around the world several years ago. Or hate speech.
I know that in certain countries there are hate speech laws, and in other countries there are none, right? What I'm saying is that there are such issues, and that's what I was talking about earlier when I said AI is all-encompassing: it affects so many different human rights. So we have to think about what we want to regulate, what we want our governments to regulate, and what we want corporate actors to do above and beyond government regulation. Why? Because under human rights law, there are many areas that we would not want governments to regulate, and one example of that is speech. So how do you then regulate, say, generative AI companies? You wouldn't want to curb expression used for satire or for condemning human rights abuses. So we want companies to go beyond what regulation requires, and I see these as complementary. For example, in the platform governance space, that is how I see the Oversight Board doing its job. It is not meant to replace government regulation, and it shouldn't be. But at the same time, there are many areas, concerning countries around the world, where we need companies to step up, because we can't just rely on governments to do their job. So that's how I want to nuance this discussion a little bit, about regulation and the race-to-regulate framing.
Melody Musoni: I’m not sure, advocate Lufuno, do you want to chip in?
Lufuno T Tshikalange: Can you hear me?
Melody Musoni: Yes, and then we have two more questions in the audience. You can go ahead.
Lufuno T Tshikalange: Okay, thank you. Hopefully I will finish what I am saying this time around. In terms of risk identification, from how I have understood the approach of the African Union Commission in developing the artificial intelligence strategy, and how we are doing our own policy framework in South Africa, it is more human-centric. "Technology for the people, not people for technology" is the mantra that I believe comes through a lot, to ensure that human rights are respected. Reading through the Continental Artificial Intelligence Strategy, you see that there are many references to human-centric development and to the respect and protection of human rights, which is also one of the eight principles of our strategy. I believe this approach was taken because the strategy is based on the AU Human Rights Charter and on a number of constitutions around the African region. So the risk is to the people. If we do not have ethical behavior, we get people for technology instead of technology for the people. Technology should be a tool for advancing our lives and helping us move out of poverty, inequality, and unemployment; what the strategy is by all means trying to avoid is people being made a commodity. The technology is here to enhance, not to abuse or take advantage. So risks to privacy, environmental rights, and issues of climate change were identified because they impact human rights at the individual level. And though other issues related to the economy were identified too, the important part is that the risk falls on the consumers of these technologies, and they need to be protected from any risk or harm that may come out of them. So I believe that was the approach: to make sure that this strategy is human rights-centric, and that whatever comes out of the technology itself advances human rights rather than violating them. I do hope that makes sense. Agenda 2063 also says that our development must be human-centric, so everything that we have from a policy perspective puts human rights at the center. Thank you.
AUDIENCE: [First part of the intervention was inaudible] …rely on ISO 31000, which is what they see as the framework for risk assessments. And I think in a lot of tech companies there is a compliance team, which is separate from the human rights team, and the compliance team is thinking about business risks. So I think what we need to do is map the risks to the business of having an adverse impact on human rights. Is there an accountability mechanism? Will there be some kind of legislative or regulatory risk? Is there some consequence? Is there going to be a requirement in legislation or in regulations to provide a remedy to victims of adverse human rights impacts? Because without that, there may be a nice human rights department that produces human rights impact assessments, but no real consequence. And unless there is a real consequence to the business, and it is seen as a business risk, I don't think we're going to see a lot of change.

Thanks very much, first of all, for the insights from all the regions about AI regulation in the different jurisdictions. I would like to draw attention a little bit more to some technical requirements. If we look at the technical realities, we currently see around 140 large language models across the globe. More than 120 of them come from the US, you have 20 from China, and the rest are shared among a tiny number of jurisdictions. In practice, and beyond any regulatory approaches, that means a huge number of countries, including the European Union, depend on large language models from other regions, from the US and from China. And if we speak about fundamental rights and how we can guarantee access to this technology and fundamental rights when using AI, I think one important aspect is digital sovereignty: to what extent are countries in a position to train AI models, the basis of every AI system, themselves? The fact is that even in Europe we do not have the data center and computing power capacities to train European AI models, which means we depend on large language models from other regions. I think this is a very important factor, also in light of access to AI in developing countries, because for them it might be even more challenging to build up the data center and computing power infrastructure to be sovereign and to train AI models themselves. A unified AI regulatory framework would be an important step, but it won't work if we do not have the same conditions for training AI models in all regions of the globe. Thank you.
Melody Musoni: Questions?
AUDIENCE: Yes, I have a question here from Advocate Nusipro: how do we ensure, within the AI paradigm, that the same infrastructure service providers that operate transnationally sustain quality provision across all jurisdictions, in line with human rights protocols? So I guess it goes back to the question around accountability across country borders. I think, Jenny, you spoke about that a little bit. Maybe anything you want to add to that?
Jenny Domino: Yeah, of course. Thank you. Maybe I'll just quickly comment on all the questions and comments. On remedy, I completely agree, and I'm afraid my answer is more of a question, which is: what would constitute sufficient remedy? If there is an adverse human rights impact and an affected community, what would remedy look like in this regard? Again, the UN Guiding Principles were constructed at a time when a different kind of industry was being contemplated, not a tech industry, not a cross-border tech company that may not be within the regulatory jurisdiction of the country concerned. So my addition is really the question of what good remedy would be. Again, with respect to platform governance in Myanmar many years ago, when the UN identified Facebook as having been used to incite violence, there were groups in the refugee camps in Bangladesh that wrote to Facebook saying: give us money, help us build a life here, because your platform was partly used to cause what is happening to us. And Facebook said no, saying it is not a charity organization. I think that raises, at the very least, interesting questions about what would constitute remedy. And from a legal standpoint, how do you attribute causation? Because again, we're talking about liability, and that is a different framework altogether from the "cause, contribute to, or directly linked to" framework under the UNGPs. So all of this is really interesting as a matter of law, and I don't have easy answers either, but just to complicate the discussion further. On the second comment, I completely agree. What I want to add there is the training, the labeling, the labor aspect of this, which is something we haven't discussed yet: how are developing countries involved in the training of this data, in the labeling? Because cheap labor, again, is in the developing countries. We saw this in content moderation, and we see it in AI as well. The question there is what governments in developing countries are doing to regulate that, to regulate labor, when there are not even adequate labor protections in many parts of the world. And the third question, on quality assurance across jurisdictions: I think that is why it is still very relevant to articulate more detailed guidance from the UN Guiding Principles on Business and Human Rights, and I know that the UN B-Tech project has been leading this initiative. So, no easy answers, but I think the relevant organizations at the UN, and also civil society groups, are doing what they can to help articulate more concrete guidance to ensure quality.
Melody Musoni: Thank you so much, Jenny. For the sake of time, we need to go to the second round of questions; I see there are still more questions online, and we will try to attend to them at the end of the session. The next question I will direct to Gian Claudio. One of the policy questions that has been coming up, especially within the African context after the EU adopted the EU AI Act, is the externalization of that framework. It comes from the experience of the GDPR: when it was adopted, many African countries felt they were being pushed to adopt a similar framework within their regions in order to comply with the EU and be able to do business with it. In the context of the EU AI Act and its provisions, does it have an extraterritorial effect similar to the GDPR? Do you see many countries emulating the EU AI Act and adopting similar frameworks in their national legislative processes? Or do you see it playing out differently because the AI Act is very different from the GDPR?
Gian Claudio: Yeah, sure. Great question, and I think it's connected to some of the other points raised in the first round, so it's also an opportunity for me to connect with those. There are two separate aspects to consider. One is the potential extraterritorial scope of EU law, in particular the AI Act. The second is the possible Brussels effect, to borrow the title of Anu Bradford's book: whether the emulation by other countries and systems that we saw with the GDPR might happen with the AI Act as well. These are two separate but connected questions, and they also connect to earlier questions, for example about how we can have homogeneous regulation of AI when some countries cannot host data centers or develop general-purpose AI. This matters because, if we regulate generative AI, and we know generative AI is produced and developed mostly in the US, we need rules for the extraterritorial application of the law. The AI Act follows that approach: if you commercialize in Europe, even if you produced somewhere else, you have to follow the rules of the AI Act. The GDPR went even broader: if you merely monitored the behavior of people in the European Union, or offered services, even free ones, to people in the European Union, the GDPR applied. And the two parts are connected, because the more extraterritoriality you have in the law, the more you push other countries to build similar systems in order to be adequate. "Adequate" is the term the European Commission uses when assessing the compatibility of other legal systems with the European data protection system. So the territorial scope of the AI Act tries to reach beyond the borders of the European Union, though of course this is not easy. As for the Brussels effect, this is mostly a political analysis: do we expect other countries to follow the AI Act as they did the GDPR? This also connects to the earlier comment about a race between the first and the last. Of course it is not a race to be the best country or region, and there is a real risk of legal colonialism, of Europe imposing its laws on other countries. That is not something we want; I don't think we should push for Europe to dictate the legal agenda to all other countries. At the same time, if these laws are pro-human rights and pro-fundamental rights, then it is a good thing if the Brussels effect takes hold. If other countries copy the European Union in prohibiting the exploitation of migrants, or in rejecting a laissez-faire approach to bias and discrimination online or to human oversight, then it is good if they copy the European Union in that. And we already see that happening a bit. As I mentioned, the Brazilian AI bill was approved by the first chamber two weeks ago.
And it really reflects the EU AI Act in its structure, its design duties, its governance, and its prohibitions. It recalls how the LGPD, Brazil's general data protection law, reflects the GDPR. So this is possible, but there is a big caveat, a big difference: the AI Act is difficult to export in one respect. It is highly rooted in political considerations that cannot easily be exported. Even the prohibition of social scoring, for example, was written with examples from China in mind. So there are political judgments about what the risks to human rights are, and the very idea of fundamental rights is not applied the same way all over the world. The GDPR could be exported (I use the word "exported", which is blunt, but it conveys the image); it could be reapplied or reproduced in other regions because it was based on principles that are not so difficult to reproduce elsewhere, such as purpose limitation or storage limitation, which are fairly technical principles. The AI Act, by contrast, is rooted in fundamental rights, and the Charter of Fundamental Rights in particular applies only to the European Union. That is perhaps one of the major obstacles to the Brussels effect. But we can wait and see: when we were preparing this panel I had not yet read the Brazilian bill, and having now read it, I can say it is very similar to the AI Act. So even though in principle it is difficult, in practice the Brussels effect may already be taking effect.
Melody Musoni: Thank you for that, Gian Claudio. Earlier we were talking about geopolitics and how it shapes the approaches we adopt to AI governance, and what you mentioned about social scoring and the position the EU AI Act took is just one example. So I am going to come to Advocate Lufuno again, looking at how geopolitics is shaping our approaches to AI governance. You are sitting in South Africa, and as of this month South Africa has taken over the G20 presidency. There are high expectations for the continent, for both South Africa and the African Union as members of the G20, and many see it as an opportunity for Africa and South Africa to promote and advocate for inclusive digital development.
Lufuno T Tshikalange: I can’t hear you, Dr. Milode. Technico. Okay, you were at…
Melody Musoni: Can you hear me now?
Lufuno T Tshikalange: Yes, I can hear you now. You were at the G20, promoting Africa, and then it cut out.
Melody Musoni: Yes, I was saying that there are big expectations that, with South Africa now holding the G20 presidency and the AU being a member of the G20, we should see more and more conversations around digital development and AI governance. I also mentioned that South Africa is a member of the BRICS Plus economic bloc, which is also shaping AI policies. What I wanted to ask you is: do you think Global South countries are able to develop an alternative to what we have in the EU AI Act? Do you see Global South countries developing an alternative approach to AI governance? If so, what do you think that would look like?
Lufuno T Tshikalange: Thank you. Yes, South Africa holds the G20 presidency for 2025, and the AU is a member, which is an exciting opportunity for us. The theme is sustainability, equality, and I am missing one word in that theme. I believe we need to take advantage of the G20, which, within our consultancy, I am personally already doing, looking at proposing the establishment of a cyber diplomacy track for the G20, and obviously ensuring that we are assisted. As I said, collaboration is very important. This G20 will help us to collaborate better and to align ourselves on the principles that we want to see informing the conversation on AI governance. I don't necessarily see us cutting and pasting the EU AI Act, but I believe the EU has set an example that we can study to see what lessons can be learned going forward. And I believe, if I'm not mistaken, the European Commission is also part of the G20 alongside the African Union. So I see an opportunity where we are not competing over who can do better than whom, but bringing our resources together to make the world a better place for everyone. If we form the appropriate collaborative relationships during our presidency, we can create opportunities for skills exchange programs, where some African talent can be sent to the Global North to learn the skills that we did not manage to develop, as our R&D is not very strong. There are many opportunities to be derived from this. We are not doing well in the cybersecurity space; our cybersecurity posture as Africa is not something to brag about, and some of the strong countries within the G20 are doing well in that space. So from where I am seated, if we really use this opportunity, there are many advantages. And the expectations are certainly big: we did a panel at the South African Science Forum, which took place in the first week of December, and there is a lot that the President and the Minister of Science, Technology and Innovation set out for us to achieve by taking advantage of the G20 presidency. I hope I am answering your question. We have many challenges, but if we can master the collaborative approach, I believe this one year of presidency will leave us having achieved more. Thank you.
Melody Musoni: Okay, thank you, Advocate. Jenny, I am going to come to you. Coming back to the discussion on AI governance, you touched on this earlier when you asked: why are we competing? And if we are competing, there will be winners and there will be losers. Technical team, can you help us with the audio for the online participants? Okay. The way I see it, maybe because I am moving from law into policy and public policy, I see a lot of influence from different political actors on how we approach AI governance. Take the example Gian Claudio gave with the EU AI Act and social scoring: it is definitely not allowed under the regulation in Europe, and yet we see other regions, like China, where social scoring is still practiced. Those different political standpoints are going to shape the future of AI governance. In September there was the China-Africa Summit, and one of the declarations and commitments that China and Africa made was to support each other in discussions on global AI governance. That is already an indication that we are heading to a place where we will not have one uniform regulatory framework for AI. So my question to you is: how do you think we can achieve a more universal approach to AI governance or AI regulatory frameworks, and where do you see us moving from here? What should we start prioritizing? What kind of discussions do we need to start having, where we can say: this is our baseline, these are the minimum standards we expect to see in different frameworks, whether adopted by the African Union or in Latin America? What kind of conversations should we start having, and what principles should we see in these frameworks and discussions on AI governance?
Jenny Domino: Such a big question. I feel like if I knew the answer to this, we could all go home and there would be no IGF next year, because we would have no need for it. My short answer is that human rights really do provide a common language. The UN Special Rapporteur on Freedom of Expression, referring at least to platform governance, has described human rights as offering a universal language that everybody can understand: regardless of where you are in the world, regardless of the political situation in that country, people understand a rights framework. That is why human rights groups have been criticizing the more risk-oriented approach to AI governance as opposed to a rights-centric framework. I know that answer may be perceived by some as naive or too idealistic, but I actually think it is pragmatic. First of all, it is understandable to everyone, and civil society and underrepresented regions can use that language when talking about the risks and harms posed by AI technology. So I think that is what we should aim for: to bring human rights back front and center in AI governance.
Melody Musoni: Okay, thank you so much. So we need to prioritize human rights protections in our frameworks on AI. I see we have four minutes left. I wonder, do we have questions from the audience online and here?
AUDIENCE: Thank you for the wonderful, thought-provoking conversation. I only attended half of the session, so if this has been covered, you don't need to answer it. I want to understand: when the Global South regulates or writes policies for the internet and digital technologies, it is writing policies to govern companies and organizations that exist outside its jurisdiction most of the time. What we see in the EU, and generally in China and the US, is that they are writing policies and regulations for their own leverage points, their own companies, and their own markets. When the Global South writes its own policies, even in Africa, which has a huge, young population consuming a lot of these technologies, it does not have as many leverage points. Perhaps the IGF is a great space for us to collaborate on prioritizing what the Global South can add. So what are the leverage points the Global South needs to capitalize on and bring to the conversation, so that we can contribute alongside the EU experts? I think one of the things that makes the EU a stronger market is that it has many capable experts, along with the organizations and funding that enable huge leaps in knowing what can serve political institutions and civil society organizations. So what do we need to do in the Global South? What do we need to leverage, what should we prioritize, and what should we focus on to have a productive conversation with our Global North counterparts? Thanks.
Melody Musoni: I'll take this one. You raise an important question, one we always raise when talking about the regulation of AI, especially in Africa: what are we regulating, and do we even have the institutional capacity to go after big tech? Many of these companies have no legal presence in a lot of our countries. META, for example, has offices in South Africa, Kenya, and Nigeria; if there is unlawful processing of data in another country that has no data protection laws, people there are not protected. Just to give an example on data protection, one approach was that it perhaps makes sense, from a regional perspective, to collaborate as a continent and have one regulatory body that represents everyone's interests. I remember that in 2017, with the Cambridge Analytica crisis, South Africa was able to take action against META, Facebook at the time, because it already had a regulator, and I think there were similar discussions with Kenya. So we can learn from that example for countries that already have this institutional capacity.
Lufuno T Tshikalange: Thank you, Melody, I’ll take it.
Melody Musoni: You can. Yes, at the moment with AI it is very difficult. We always have these conversations, and my position is: let's start with the low-hanging fruit, the data protection laws we already have, and see which laws we can extend. From a criminal law perspective, we are talking about misinformation and hate speech, and we already have laws that to some extent cover cases where AI is used for misinformation. So you extend the law from a criminal law perspective, and you extend it from a delictual perspective. There are different ways of regulating AI without a specific AI law, because at the moment we definitely don't have the capacity or the frameworks to go after big tech if they are not in our countries. I don't know, Jenny or Advocate Lufuno, if you want to contribute more on the Global South. One minute each; I'll start with Jenny, and then Lufuno, you can go immediately after. Do we have questions online as well? Advocate Lufuno, can you hear us?
Lufuno T Tshikalange: Yes, I can hear you.
Melody Musoni: Thank you. You can go ahead.
Lufuno T Tshikalange: Oh, I thought you said Jenny was going first.
Melody Musoni: Thank you.
Lufuno T Tshikalange: Yeah. I believe that from the Global South perspective…
Melody Musoni: Just to remind you, one minute.
Lufuno T Tshikalange: Yes, I will wrap up in one minute. From the Global South perspective, I believe that we do have enough skills and case studies that we can use collaboratively, combining our resources to come out of the consumer status we have held for so long. We need to work towards coming to the table not as a subject matter of discussion but as a contributor. Thank you.
Melody Musoni: Okay. Thank you so much. I see we have run out of time, but I would like to take this opportunity to thank our speakers both online and here with us, as well as thanking you, the audience, for participating in our discussion. Feel free to come and engage with the speakers who are here if you have additional questions. So let’s give a round of applause to our brilliant speakers. Thank you.
Lufuno T Tshikalange
Speech speed: 110 words per minute; speech length: 2051 words; speech time: 1113 seconds
African Union developing continental AI strategy aligned with Agenda 2063
Explanation: The African Union has developed a continental AI strategy, adopted in 2024. The strategy aims to establish AI governance and regulation while accelerating AI adoption in key sectors such as agriculture, education, and health.
Evidence: The strategy is aligned with Agenda 2063 and the digital transformation strategy. It promotes the creation of an enabling environment for an AI startup ecosystem and addresses skills shortages.
Major Discussion Point: Global approaches to AI governance
Agreed with: Jenny Domino, Gianclaudio Malgieri
Agreed on: Need for human rights-centered approach in AI governance
Lack of infrastructure and skills in developing countries
Explanation: Lufuno highlights the challenges African countries face in developing AI frameworks, including inadequate internet access, insufficient data for training models, limited computing resources, and skills shortages.
Major Discussion Point: Challenges in developing AI governance frameworks
Agreed with: Jenny Domino
Agreed on: Challenges in infrastructure and skills for developing countries
Need for collaboration rather than competition in governance approaches
Explanation: Lufuno emphasizes the importance of collaboration in AI governance, particularly in the context of South Africa's G20 presidency, which she sees as an opportunity for skills exchange and for addressing common challenges.
Evidence: Reference to the G20 presidency and the potential for collaborative relationships to address issues such as cybersecurity.
Major Discussion Point: Implications of different regulatory approaches
Importance of Global South contributing expertise, not just being subject of discussion
Explanation: Lufuno argues that the Global South needs to move beyond being merely a subject of discussion in AI governance, emphasizing the collaborative use of skills and case studies to contribute meaningfully to global discussions.
Major Discussion Point: Role of geopolitics in shaping AI governance
Jenny Domino
Speech speed: 141 words per minute; speech length: 2257 words; speech time: 953 seconds
Asia taking a “wait-and-see” approach with soft law and draft initiatives
Explanation: Asia is taking a more cautious approach to AI regulation than the EU, with countries focusing on soft law and draft legislative initiatives rather than comprehensive hard-law regulation.
Evidence: Examples include the ASEAN guidance and draft laws in Japan, Korea, and Thailand.
Major Discussion Point: Global approaches to AI governance
Differed with: Gianclaudio Malgieri
Differed on: Approach to AI regulation
Existing human rights issues and digital divides need to be addressed
Explanation: Jenny emphasizes that basic infrastructure problems and existing human rights issues in many countries, including access to education, digital education, and internet shutdowns, need to be addressed before focusing on AI uptake.
Evidence: Examples of internet and website shutdowns in some Asian countries.
Major Discussion Point: Challenges in developing AI governance frameworks
Agreed with: Lufuno T Tshikalange
Agreed on: Challenges in infrastructure and skills for developing countries
Human rights as potential universal framework for AI governance
Explanation: Jenny suggests that human rights provide a common language that everyone can understand regardless of location or political situation, and argues for bringing human rights back to the center of AI governance discussions.
Evidence: Reference to the UN Special Rapporteur on Freedom of Expression describing human rights as offering a universal language in the context of platform governance.
Major Discussion Point: Implications of different regulatory approaches
Need to consider geopolitical tensions and fragmentation of internet governance
Explanation: Jenny emphasizes the importance of considering geopolitical tensions in AI governance discussions and warns against framing AI regulation as a race, which may produce winners and losers and potentially exclude some countries.
Major Discussion Point: Role of geopolitics in shaping AI governance
Need for human-centric approach centered on human rights
Explanation: Jenny advocates a human-centric approach to AI governance that prioritizes human rights, criticizing the risk-oriented approach for potentially eclipsing the focus on rights.
Evidence: Reference to human rights groups criticizing the risk-based approach in AI governance frameworks such as the EU AI Act.
Major Discussion Point: Global approaches to AI governance
Agreed with: Lufuno T Tshikalange, Gianclaudio Malgieri
Agreed on: Need for human rights-centered approach in AI governance
Differed with: Gianclaudio Malgieri
Differed on: Focus on risks vs. rights in AI governance
Gianclaudio Malgieri
Speech speed: 138 words per minute; speech length: 3415 words; speech time: 1482 seconds
EU AI Act as first comprehensive AI regulation, balancing risks and rights
Explanation: Gianclaudio explains that the EU AI Act is the first comprehensive AI regulation in the world. It aims to balance risks and rights, with a structure based on product safety that incorporates fundamental rights considerations.
Evidence: The AI Act includes a list of prohibited practices, high-risk applications, and rules for general-purpose AI.
Major Discussion Point: Global approaches to AI governance
Agreed with: Jenny Domino, Lufuno T Tshikalange
Agreed on: Need for human rights-centered approach in AI governance
Differed with: Jenny Domino
Differed on: Focus on risks vs. rights in AI governance
Difficulty in measuring and quantifying risks to fundamental rights
Explanation: Gianclaudio Malgieri highlights the challenge of measuring and quantifying risks to fundamental rights in AI regulation, pointing out that fundamental rights are qualitative and, unlike business risks, not easily measurable.
Evidence: Reference to ongoing research on how to measure risk to fundamental rights, particularly the severity element.
Major Discussion Point: Challenges in developing AI governance frameworks
Potential for “Brussels effect” with other countries emulating EU approach
Explanation: Gianclaudio Malgieri discusses the possibility of a “Brussels effect” in which other countries adopt approaches similar to the EU AI Act, noting that this is already happening to some extent.
Evidence: The Brazilian AI bill, recently approved by the first chamber, reflects the EU AI Act in its structure, design duties, and prohibitions.
Major Discussion Point: Implications of different regulatory approaches
Risk of legal colonialism if EU approach imposed on other regions
Explanation: Gianclaudio Malgieri warns against the risk of legal colonialism if the EU approach to AI regulation is imposed on other countries, emphasizing that Europe should not dictate the legal agenda for the rest of the world.
Major Discussion Point: Implications of different regulatory approaches
Melody Musoni
Speech speed: 0 words per minute; speech length: 0 words; speech time: 1 second
G20 presidency as opportunity for Africa to shape inclusive digital development
Explanation: Melody highlights the expectations for South Africa and the African Union to promote inclusive digital development during South Africa's G20 presidency, seen as a chance for Africa to influence global discussions on AI governance.
Major Discussion Point: Role of geopolitics in shaping AI governance
Differing political standpoints (e.g. on social scoring) shaping governance approaches
Explanation: Melody points out that different political standpoints, such as views on social scoring, are shaping approaches to AI governance, leading to divergent regulatory frameworks across regions.
Evidence: Social scoring is prohibited under the EU AI Act but still practiced in China.
Major Discussion Point: Role of geopolitics in shaping AI governance
AUDIENCE
Speech speed: 156 words per minute; speech length: 1247 words; speech time: 479 seconds
Limited leverage of Global South over tech companies based elsewhere
Explanation: An audience member points out that Global South countries often regulate companies and organizations outside their jurisdiction, limiting their leverage compared to regions such as the EU, US, and China, which regulate their own markets and companies.
Major Discussion Point: Challenges in developing AI governance frameworks
Agreements
Agreement Points
Need for human rights-centered approach in AI governance
Speakers: Jenny Domino, Lufuno T Tshikalange, Gianclaudio Malgieri
Arguments: Need for human-centric approach centered on human rights; African Union developing continental AI strategy aligned with Agenda 2063; EU AI Act as first comprehensive AI regulation, balancing risks and rights
Summary: The speakers agree on the importance of prioritizing human rights in AI governance frameworks, emphasizing a human-centric approach that balances risks and rights.
Challenges in infrastructure and skills for developing countries
Speakers: Jenny Domino, Lufuno T Tshikalange
Arguments: Existing human rights issues and digital divides need to be addressed; Lack of infrastructure and skills in developing countries
Summary: Both speakers highlight the infrastructure, skills, and digital-divide challenges that developing countries must address alongside AI governance.
Similar Viewpoints
Speakers: Gianclaudio Malgieri, Jenny Domino
Arguments: Difficulty in measuring and quantifying risks to fundamental rights; Need for human-centric approach centered on human rights
Summary: Both speakers emphasize the challenges and importance of incorporating human rights considerations into AI governance frameworks, highlighting the difficulty of quantifying these rights within a risk-based approach.
Speakers: Melody Musoni, Jenny Domino
Arguments: Differing political standpoints (e.g. on social scoring) shaping governance approaches; Need to consider geopolitical tensions and fragmentation of internet governance
Summary: Both speakers recognize the significant role of geopolitics and differing political standpoints in shaping AI governance approaches globally.
Unexpected Consensus
Collaboration over competition in AI governance
Speakers: Lufuno T Tshikalange, Jenny Domino
Arguments: Need for collaboration rather than competition in governance approaches; Need to consider geopolitical tensions and fragmentation of internet governance
Summary: Despite representing different regions, both speakers emphasize collaboration over competition in AI governance, which is somewhat unexpected given the often competitive nature of international relations and technology development.
Overall Assessment
Summary: The main areas of agreement include the need for a human rights-centered approach in AI governance, addressing infrastructure and skills challenges in developing countries, and recognizing the impact of geopolitics on governance approaches.
Consensus level: There is a moderate level of consensus among the speakers on key issues, particularly the importance of human rights and the challenges faced by developing countries. This suggests a growing recognition of the need for inclusive, rights-based approaches to AI governance globally, which could influence future policy discussions and international cooperation in this field.
Differences
Different Viewpoints
Approach to AI regulation
Speakers: Jenny Domino, Gianclaudio Malgieri
Arguments: Asia taking a “wait-and-see” approach with soft law and draft initiatives; EU AI Act as first comprehensive AI regulation, balancing risks and rights
Summary: Jenny Domino describes Asia's cautious soft-law approach, while Gianclaudio Malgieri highlights the EU's comprehensive regulation through the AI Act.
Focus on risks vs. rights in AI governance
Speakers: Jenny Domino, Gianclaudio Malgieri
Arguments: Need for human-centric approach centered on human rights; EU AI Act as first comprehensive AI regulation, balancing risks and rights
Summary: Jenny Domino advocates a human rights-centered approach, while Gianclaudio Malgieri describes the EU AI Act's attempt to balance risks and rights.
Unexpected Differences
Perspective on global collaboration in AI governance
Speakers: Jenny Domino, Lufuno T Tshikalange
Arguments: Need to consider geopolitical tensions and fragmentation of internet governance; Need for collaboration rather than competition in governance approaches
Summary: While both speakers advocate collaboration, Jenny unexpectedly emphasizes the need to account for geopolitical tensions, whereas Lufuno focuses on the benefits of collaboration without explicitly addressing those tensions.
Overall Assessment
Summary: The main areas of disagreement revolve around the approach to AI regulation (comprehensive vs. wait-and-see), the focus on risks vs. rights, and the perspective on global collaboration in AI governance.
Difference level: The level of disagreement among the speakers is moderate. While there are clear differences in approaches and focus areas, there is also a shared recognition of the challenges facing AI governance globally. These differences reflect the complexity of developing a unified approach to AI governance across diverse regions and highlight the need for continued dialogue and collaboration that addresses global challenges while respecting regional contexts.
Partial Agreements
Speakers: Jenny Domino, Lufuno T Tshikalange, Gianclaudio Malgieri
Arguments: Existing human rights issues and digital divides need to be addressed; Lack of infrastructure and skills in developing countries; Difficulty in measuring and quantifying risks to fundamental rights
Summary: All speakers agree on the need to address fundamental challenges in AI governance, but they focus on different aspects: Jenny on existing human rights issues, Lufuno on infrastructure and skills gaps, and Gianclaudio Malgieri on the difficulty of quantifying risks to rights.
Takeaways
Key Takeaways
– Different regions are taking varied approaches to AI governance, from comprehensive regulation (EU) to wait-and-see approaches (Asia)
– There is a need to center human rights in AI governance frameworks rather than focusing solely on risk-based approaches
– Developing countries face significant challenges in AI governance due to lack of infrastructure, skills, and leverage over tech companies
– Geopolitics and differing political standpoints are shaping approaches to AI governance globally
– Collaboration rather than competition is seen as key for effective global AI governance
Resolutions and Action Items
None identified
Unresolved Issues
– How to achieve a universal approach to AI governance while respecting regional differences
– How to effectively regulate AI in developing countries with limited leverage over tech companies
– How to balance innovation with protection of fundamental rights in AI governance
– How to measure and quantify risks to fundamental rights in AI systems
Suggested Compromises
– Using human rights as a universal framework for AI governance that can be understood across different regions and political contexts
– Focusing on extending existing laws (e.g. data protection, criminal law) to cover AI issues rather than creating entirely new AI-specific regulations in developing countries
– Collaborating at regional levels (e.g. African Union) to increase leverage in regulating multinational tech companies
Thought Provoking Comments
I think this risk-based approach, Gianclaudio, that you were talking about, risk to what? So because I think in the EU-AI Act, as you have said, it started as a product liability framework and then later on we infused it with fundamental rights left, right and center, or we tried at least. But so the EU-AI Act looks at risk to what and who is determining, or what was the discussion around these different risk categories? How was that assessed?
Speaker: Audience member
Reason: This question challenges the fundamental approach of risk-based AI regulation and prompts deeper consideration of how risks are defined and assessed.
Impact: It led to an in-depth discussion of the challenges of measuring risks to fundamental rights and the potential limitations of a risk-based approach to AI governance.
I see the discussion here as sort of mirroring the history of the UN guiding principles on business and human rights. That’s the whole reason why the UNGPs were formed because many multinational companies decades ago were operating in developing countries. And what do you do when the human rights violators sometimes are the government actors, right?
Speaker: Jenny Domino
Reason: This comment draws an insightful parallel between AI governance and existing frameworks for business and human rights, highlighting the complex dynamics between corporations, governments, and human rights.
Impact: It broadened the discussion to consider the roles and responsibilities of different actors in AI governance, particularly in the context of developing countries.
The AI Act follows that approach. So, for example, if you commercialize in Europe, but you have produced somewhere else, but you commercialize in Europe, then you have to follow the rules of the AI Act. The GDPR went even broader because for the GDPR, even if you just monitor behaviors of people in European Union, or if you offer services even for free to people who are in the European Union, the GDPR was applicable.
Speaker: Gianclaudio Malgieri
Reason: This explanation clarifies the extraterritorial scope of EU regulations and their potential global impact, which is crucial for understanding the implications of EU AI governance for other regions.
Impact: It sparked a discussion on the potential for a “Brussels effect” in AI regulation and the challenges of exporting EU-style regulations to other contexts.
I believe that from the Global South perspective, we do have enough skills and case studies that we can collaboratively use to combine our resources and come out of the consumer status that we have been in for so long. We need to work towards coming to the table not as a subject matter of discussion but as a contributor.
Speaker: Lufuno T Tshikalange
Reason: This comment challenges the narrative of the Global South as merely consumers of AI technology and regulation, asserting their potential to contribute meaningfully to global AI governance discussions.
Impact: It shifted the conversation towards considering how the Global South can actively shape AI governance rather than simply adopting frameworks from other regions.
Overall Assessment
These key comments shaped the discussion by highlighting the complexities of AI governance across different regions and regulatory frameworks. They prompted deeper consideration of how risks and rights are balanced in AI regulation, the challenges of applying regulations across borders, and the need for inclusive global dialogue that recognizes the potential contributions of the Global South. The discussion evolved from a focus on specific regulatory approaches to a broader consideration of how to achieve a more universal and equitable approach to AI governance that respects human rights and addresses the needs of diverse stakeholders.
Follow-up Questions
What would constitute sufficient remedy for adverse human rights impacts caused by AI systems?
Speaker: Jenny Domino
Explanation: This is important for determining how to hold companies accountable and provide appropriate compensation to affected communities.
How can developing countries be involved in the training and labeling of AI data in a way that ensures fair labor practices?
Speaker: Jenny Domino
Explanation: This is crucial to address potential exploitation of workers in developing countries and to ensure ethical AI development practices.
How can we ensure quality provisions for AI services across all jurisdictions in line with human rights protocols?
Speaker: Audience member (via Advocate Nusipro)
Explanation: This is important to maintain consistent standards and protections for users of AI systems globally.
How can countries in the Global South develop the necessary infrastructure and computing power to train their own AI models and achieve digital sovereignty?
Speaker: Audience member
Explanation: This is crucial for ensuring countries are not dependent on AI models from other regions and can develop AI tailored to their specific needs and contexts.
What should be the baseline or minimum standards expected in AI governance frameworks across different regions?
Speaker: Melody Musoni
Explanation: This is important for establishing common ground in global AI governance while allowing for regional variations.
What are the advantage points that the Global South needs to capitalize on and bring to the conversation on AI governance?
Speaker: Audience member
Explanation: This is crucial for ensuring the Global South has a meaningful voice in shaping global AI governance and for addressing its specific needs and concerns.
Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.
Related event: Internet Governance Forum 2024, 15 Dec 2024 06:30h to 19 Dec 2024 13:30h, Riyadh, Saudi Arabia and online