Main Session 2: The governance of artificial intelligence

25 Jun 2025 11:30h - 13:00h

Session at a glance

Summary

This discussion focused on the governance of artificial intelligence, examining the current landscape of AI regulation and the challenges of creating inclusive, effective frameworks. The panel, moderated by Kathleen Ziemann from the German development agency GIZ and Guilherme Canela from UNESCO, brought together representatives from the private sector, government, civil society, and international organizations to discuss how different stakeholders can collaborate on AI governance.

The panelists acknowledged that the AI governance landscape has become increasingly complex, with numerous frameworks, principles, and regulatory initiatives emerging globally, including the OECD AI principles, UNESCO’s AI ethics recommendations, the EU AI Act, and various national strategies. Melinda Claybaugh from Meta emphasized that while there is no lack of governance frameworks, there remains disagreement about what constitutes AI risks and how they should be measured, suggesting the need for broader conversations about enabling innovation alongside managing risks. Mlindi Mashologu, representing the South African government, highlighted the importance of locally relevant AI governance that addresses context-specific challenges while maintaining human rights principles and ensuring AI systems are ethical, inclusive, and accountable.

Jhalak Kakkar from the Centre for Communication Governance stressed the importance of meaningful multi-stakeholder participation in AI governance processes and argued against creating a false dichotomy between innovation and regulation, advocating for parallel development of both. Jovan Kurbalija from the Diplo Foundation called for bringing “knowledge” back into AI discussions, noting that current frameworks focus too heavily on data while overlooking the knowledge dimension of AI systems. The discussion revealed tensions between different approaches to AI governance, with some emphasising the need for more regulation and others cautioning against over-regulation that might stifle innovation.

Key themes included the democratisation of AI access, the need for transparency and explainability in AI systems, the importance of addressing bias and ensuring inclusive representation in AI development, and the challenge of balancing global coordination with local relevance. The panelists ultimately agreed on the importance of continued multi-stakeholder dialogue and the need to learn from past experiences with internet governance while avoiding previous mistakes in technology regulation.

Keypoints

Major Discussion Points:

The Current AI Governance Landscape: The panelists discussed the “blooming but fragmented” nature of AI governance, with numerous frameworks, principles, and regulations emerging globally (OECD principles, UNESCO recommendations, EU AI Act, G7 Hiroshima AI process, etc.). There was debate about whether this represents progress or creates confusion and fragmentation.

Innovation vs. Risk Management – A False Dichotomy: A central tension emerged around balancing AI innovation with risk mitigation. While some panelists argued for focusing more on enabling innovation rather than just managing risks, others contended this creates a false choice – that governance and innovation must go hand-in-hand from the beginning rather than being treated as opposing forces.

Global South Perspectives and Local Relevance: Significant emphasis was placed on ensuring AI governance is locally relevant and includes voices from the Global South. Panelists discussed the need for context-aware regulation, capacity building in developing countries, and avoiding a “one-size-fits-all” approach that might not address specific regional needs and priorities.

Knowledge vs. Data in AI Governance: A philosophical discussion emerged about shifting focus from “data” back to “knowledge” in AI governance frameworks. This included concerns about knowledge attribution, preserving local and indigenous knowledge, and ensuring that AI systems don’t centralize and monopolize human knowledge without proper attribution.

Multi-stakeholder Participation and Transparency: Throughout the discussion, panelists emphasized the importance of meaningful multi-stakeholder engagement in AI governance processes, moving beyond tokenistic participation to genuine influence on outcomes. This included calls for transparency in risk assessments and decision-making processes.

Overall Purpose:

The discussion aimed to examine how different stakeholders can collaborate to shape AI governance frameworks that are inclusive, effective, and globally coordinated while respecting local contexts. The session sought to move beyond theoretical principles toward practical approaches for implementing AI governance that balances innovation with human rights protection and addresses the needs of all regions, particularly the Global South.

Overall Tone:

The discussion maintained a professional and collaborative tone throughout, though it became more animated and engaged as panelists began to challenge each other’s perspectives. Initially, the conversation was more formal with structured introductions, but it evolved into a more dynamic exchange where panelists directly responded to and sometimes disagreed with each other’s points. The tone remained respectful despite clear philosophical differences, particularly around the innovation-regulation balance and the urgency of implementing governance measures. The moderators successfully encouraged both consensus-building and healthy debate, creating an atmosphere where diverse viewpoints could be expressed and examined.

Speakers

Speakers from the provided list:

Kathleen Ziemann – Lead of AI project at German development agency GIZ (Fair Forward project), Session moderator

Guilherme Canela – Director at UNESCO in charge of digital transformation, inclusion and policies, Session co-moderator

Melinda Claybaugh – Director of privacy policy at Meta

Jovan Kurbalija – Executive director of Diplo Foundation, based in Geneva with a background from Eastern Europe

Jhalak Kakkar – Executive director of the Centre for Communication Governance in New Delhi, India

Mlindi Mashologu – Deputy director general at South Africa’s Ministry of Communications and Digital Technology (filling in for the deputy minister)

Online moderator

Audience – Multiple audience members who asked questions during the session:

  • Diane Hewitt-Mills – Founder of global data protection office called Hewitt-Mills
  • Kunle Olorundare – President of Internet Society Nigerian chapter, from Nigeria, involved in advocacy
  • Pilar Rodriguez – Youth coordinator for the Internet Governance Forum in Spain
  • Anna – Representative from R3D in Mexico
  • Grace Thompson – Online participant (question relayed through online moderator)
  • Michael Nelson – Online participant (question relayed through online moderator)

Full session report

AI Governance Discussion: Stakeholder Perspectives on Inclusive and Effective Frameworks

Executive Summary

This discussion on artificial intelligence governance brought together diverse stakeholders to examine current AI regulation approaches and explore pathways towards inclusive frameworks. Moderated by Kathleen Ziemann from GIZ’s Fair Forward project and Guilherme Canela from UNESCO, the session featured representatives from Meta (Melinda Claybaugh), the South African government (Mlindi Mashologu), the Centre for Communication Governance (Jhalak Kakkar), and the Diplo Foundation (Jovan Kurbalija). The discussion covered the current fragmented landscape of AI governance, debates around balancing innovation with risk management, and the importance of Global South perspectives in developing effective AI frameworks.

Current AI Governance Landscape

Fragmented Framework Development

Kathleen Ziemann opened by describing the current AI governance landscape as “blooming but fragmented,” highlighting numerous parallel initiatives including:

  • OECD AI principles
  • UNESCO’s AI ethics recommendations
  • EU AI Act
  • G7 Hiroshima AI process
  • Various national strategies emerging globally

Melinda Claybaugh from Meta characterized this as “an inflection point” where many frameworks exist but fundamental questions remain about implementation and effectiveness. She noted that while governance frameworks are abundant, significant disagreement persists about what constitutes AI risks and how to measure them scientifically.

Jovan Kurbalija provided additional context, noting that AI has become commoditized with 434 large language models in China alone. This proliferation has shifted the risk landscape from concerns about a few powerful AI systems to challenges arising from widespread deployment of numerous AI models.

Panelist Perspectives

Private Sector View: Meta’s Position

Melinda Claybaugh argued that the AI governance conversation may have become overweighted towards risk and safety concerns. She advocated for broadening the discussion to include opportunity and enabling innovation, asking: “Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?”

Claybaugh emphasised that existing laws and frameworks already address many AI-related harms and suggested assessing their fitness for purpose rather than creating new regulatory structures. She advocated for risk assessment processes that are “objective, transparent, and auditable, similar to GDPR accountability structures.”

Civil Society Perspective: Governance from the Start

Jhalak Kakkar directly challenged the framing of innovation versus regulation as competing priorities, arguing it creates a “false sense of dichotomy.” She contended that innovation and governance must go hand in hand, emphasising that “we need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.”

Kakkar stressed the importance of meaningful multi-stakeholder participation and strengthening mechanisms like the Internet Governance Forum (IGF) to ensure holistic input from diverse perspectives. She emphasised that transparency and explainability are crucial when bias affects decision-making systems.

Government Perspective: Context-Aware Approaches

Mlindi Mashologu from South Africa emphasised that “there is no one-size-fits-all when it comes to AI,” advocating for foundational approaches grounded in equity. He promoted “context-aware regulatory innovation” through adaptive governance models including regulatory sandboxes that enable responsible innovation while managing risks.

Mashologu highlighted South Africa’s G20 presidency work on developing a toolkit to reduce AI-related inequalities from a Global South perspective. He emphasized that AI governance must ensure technology empowers individuals rather than undermining their rights and dignity.

International Governance Perspective: Knowledge vs. Data

Jovan Kurbalija introduced a unique perspective by arguing for a fundamental shift in AI governance language from data back to knowledge. He observed that while the World Summit on the Information Society originally focused on knowledge, current frameworks have moved to focus on data instead. “AI is about knowledge,” he argued, not merely data processing.

Kurbalija also provided a nuanced view on bias, arguing against the “obsession with cleaning bias” and distinguishing between illegal biases that threaten communities and natural human biases that reflect legitimate diversity. “We should keep in mind that we are biased machines,” he noted.

Key Themes Discussed

Innovation and Risk Management Balance

The discussion revealed different perspectives on balancing innovation with risk management. While Claybaugh emphasised concerns about over-regulation stifling innovation, Kakkar argued for implementing governance mechanisms from the beginning to prevent harmful path dependencies. Mashologu offered a middle ground through adaptive governance approaches like regulatory sandboxes.

Global South Inclusion and Local Relevance

Multiple panellists emphasised the importance of ensuring AI governance frameworks include meaningful Global South participation and local relevance. Mashologu highlighted regional initiatives like the African Union AI strategy, while Kakkar emphasised international coordination through existing multi-stakeholder forums.

Human Rights and Transparency

There was broad agreement on anchoring AI governance in human rights principles and ensuring transparency and explainability, particularly for systems affecting human lives. However, disagreements remained about implementation approaches, with industry preferring self-regulatory mechanisms and civil society advocating for external oversight.

Audience Engagement

Environmental and Social Justice Concerns

An audience member from R3D in Mexico challenged the panel about environmental impacts and extractivism related to AI infrastructure development, particularly regarding data center placement and resource extraction. This highlighted how AI governance discussions often overlook broader environmental and social costs that disproportionately affect Global South communities.

Practical Implementation Questions

Online questions addressed specific frameworks like the Council of Europe’s convention and practical implementation challenges. Audience members also raised concerns about bias in data collection and the need for inclusive approaches that account for multiple stakeholder perspectives.

B Corp Social Offset Proposal

One audience member proposed a B Corp social offset model for AI companies, suggesting mechanisms for corporate accountability beyond traditional regulatory approaches.

Areas of Agreement and Disagreement

Consensus Points

Panellists agreed on several fundamental principles:

  • Importance of multi-stakeholder participation
  • Need for transparency and explainability
  • Value of building upon existing legal frameworks rather than creating entirely new structures
  • Importance of human rights as foundational principles
  • Need for contextual adaptation of governance frameworks

Persistent Tensions

Key disagreements included:

  • Emphasis and timing of governance mechanisms (early implementation vs. avoiding over-regulation)
  • Adequacy of existing frameworks versus need for AI-specific mechanisms
  • Preferences for self-regulation versus external oversight
  • Approaches to addressing bias and ensuring inclusivity

Conclusions

The discussion highlighted both the complexity of AI governance challenges and the diversity of stakeholder perspectives. While panellists agreed on many fundamental principles, significant differences remained regarding implementation approaches and priorities. The conversation demonstrated the ongoing need for inclusive dialogue that brings together diverse perspectives while addressing practical governance challenges.

The session underscored the importance of ensuring Global South voices are meaningfully included in AI governance development, and that frameworks must be adaptable to local contexts while maintaining coherent overarching principles. The debate between innovation enablement and risk management continues to be a central tension requiring careful navigation as AI governance frameworks evolve.

Session transcript

Kathleen Ziemann: Welcome. Welcome to the main session on the governance of AI. My name is Kathleen Ziemann. I lead an AI project at the German development agency GIZ. The project is called Fair Forward. I will be moderating this session today together with Guilherme. Guilherme, maybe you introduce yourself as well. Hello, good morning everyone. My name is Guilherme Canela and I’m the director at UNESCO in charge of digital transformation, inclusion and policies. A real pleasure to be here with Kathleen and this fantastic panel.

Yes, so Guilherme and I are very excited to have representatives from different regions and sectors here on the panel that will discuss AI governance with us. And dear panelists, thank you so much for coming. Let me briefly introduce you. So, to our left we have Melinda Claybaugh, director of privacy policy at Meta. Welcome Melinda. And next to Melinda sits Jovan Kurbalija, executive director of Diplo Foundation, based in Geneva but with a background from Eastern Europe. And next to you, Jovan, sits Jhalak Kakkar. Welcome Jhalak, happy to have you. Jhalak Kakkar is the executive director of the Centre for Communication Governance in New Delhi, India. And we are very happy also to welcome you, Mlindi Mashologu.

You are filling in for the deputy minister from South Africa from the Ministry of Communications and Digital Technology, and your title is deputy director general at the ministry. Thank you all for coming, and we are very sad that Mondli couldn’t come. He was affected by the recent activities in Israel and Iran and his flight could not come through. Well everyone, thank you for coming. Before you set the scene from your perspectives, I would love to give a brief introduction on what we perceive under AI governance at the moment and also give us an idea of how to discuss this further. As this IGF’s theme is building digital governance together, we want to discuss how we can shape AI governance together, as we still observe different levels and possibilities of engagement across sectors and regions. I would say that currently the AI governance landscape is blooming.

Yes, we have AI governance tools like principles, processes and bodies emerging globally, and I think somehow we can also lose track in that blooming landscape, so just to name a few. In 2019, the OECD issued its AI principles, followed by UNESCO’s recommendations on the ethics of AI in 2022. In 2023, I don’t know if you still remember, but AI companies such as OpenAI, Alphabet and Meta also made voluntary commitments to implement measures like watermarking AI-generated content, and finally last year the EU AI Act came into force as the first legal framework for governing AI. Additionally, existing fora and groups are addressing AI and its governance. For example, last year the G7 launched the Hiroshima AI process, and the G20 has declared AI a key priority this year, and I think we’ll be hearing more about that from you, Mlindi, later. And then we also have various declarations, endorsements and significant communications issued by many, like the Africa AI declaration that was signed in Kigali, for example, or the declaration on responsible AI that was signed in Hamburg recently.

And as a core document for 193 member states, the UN’s Global Digital Compact calls for concrete actions for global AI governance by establishing, for example, a global AI policy dialogue and a scientific panel on AI. So when we look at all these efforts, it seems like AI governance is not only a blooming but also a fragmented landscape with different levels and possibilities of engagement. So how do you, dear panelists, perceive this and what are your perspectives but also ideas on the current AI governance? What should be changed? What is missing? We would love to start with your perspective, Melinda, from the private sector. Feel free to use the next three to four minutes for an introduction statement and yes, there you go.

Melinda Claybaugh: Great, thank you so much and thanks for having me. It’s a pleasure to be here. Just a little perspective to set the context from where Meta sits in this conversation. At Meta, I think everyone’s familiar with our products and our services, our social media and messaging apps. But in the AI space, we sit in two places. One, we are a developer of large language models, foundational Gen AI models. They’re called Llama, and many of you might be familiar with them or with applications built on top of them. So we are a developer in that sense, and we focus largely on open source as the right approach to building large generative AI models.

At the same time, we build on top of models and we provide applications and systems through our products. So we’re kind of in both camps, just to situate folks. I was glad that you laid out the landscape; I mean, in the last couple of years, it’s incredible the number of frameworks and commitments and principles and policy frameworks. It’s head spinning at times, having lived through that. And so I think it is really important to remember there’s no lack of governance in this space. But I do think that we are at an interesting inflection point. And I think we’re all kind of wondering, well, what now? We set down these principles, we have these frameworks. Meta, for example, has put out a frontier AI framework that sets out how we assess for catastrophic risks when we’re developing our models and what steps we take to mitigate them.

And yet there’s still a lot of questions and concerns. And I think we’re at this inflection point for a few reasons. One, we don’t necessarily agree on what the risks are and whether there are risks and how we quantify them. So I think we see different regions and countries want to focus more on innovation and opportunity. Other folks want to focus more on safety and the risks. There’s also a lack of technical agreement and scientific agreement about risks and how they should be measured. I think there’s also an interesting inflection point in regulation. The EU, for example, was very fast to move to regulate AI with the landmark AI Act. And I think it’s running into some problems. I think there’s now kind of a consensus amongst key policymakers and voices in the EU that maybe we went too far, and actually we don’t know whether this is really tied to the state of the science and how to actually implement it. And now they’re looking to pause and reconsider certain aspects of digital regulation in Europe.

And then a lot of countries are kind of looking for what to do and are looking for solutions for how to actually adopt and implement AI. And so I don’t think I have an easy answer, but I think we are at a moment to kind of take stock and say, okay, we’ve talked about risk. Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with and making sure the right voices, the right representation from little tech to big tech from all corners of the world are represented to have these conversations about governance.

Kathleen Ziemann: Thank you very much, Melinda. I would love to continue with you, Mlindi, to give us the perspective from the South African government. How do you perceive the current landscape? What is important to you at the moment?

Mlindi Mashologu: Thank you. Thank you, Kathleen, for that. I think from the South African government side, what we see, and I think it’s general knowledge, is that AI is a true general purpose technology, the same as electricity or the Internet, and it affects various sectors of our economy. But we also see that with such transformative power comes responsibility: we want to ensure that AI systems are not only effective, but also ethical, inclusive, and accountable.

So, I think that’s one of the first things that we want to do. But also, to govern AI effectively, we are trying to come up with a shared vocabulary and a principled foundation, as reflected in some of the initiatives that you mentioned before, like the OECD principles and those at the UN level. We are also trying to make sure that we have the required sector-specific policy interventions that are technically informed and locally relevant, because we see that regulating AI in financial services would be different from regulating AI in, say, agriculture.

So, we are trying to come up with different approaches. One of the areas that we are focusing on as a government, from the regional point of view, is to make sure that our approach is grounded in the principle of data justice, which puts human rights, economic equity, as well as environmental sustainability at the center of AI. We also recognize the impact of climate change on human rights and environmental sustainability, and the risk of reinforcing historical inequities, so that’s one of the concrete proposals that we’re looking into. The other area that we’re focusing on is sufficient explainability, which is a requirement for AI decisions; it’s one of the areas that we’re advocating for, especially for decisions that impact human lives and livelihoods.

So, if you were to look at areas such as credit scoring, predictive policing, or healthcare diagnostics, we need to have a right to understand how these decisions have actually been made and how the AI systems arrive at them. Further to that, one of the areas that we are following as well is human-in-the-loop learning: humans must be involved in the development of AI systems from design through deployment, so humans must guide, and when needed, override automated systems. This also includes reinforcement learning with human feedback and clear thresholds for intervention in high-risk domains.

I think the last point that I just want to focus on is that our participation in global AI governance has been very, very important. We have a lot of partnerships in place, and in terms of the policy that we are currently developing as a country, we are looking to leverage frameworks that have already been developed, including the African Union data policy framework. So we are building models of governance rooted in equity, working with the African Union. And we don’t want AI to replace humans; we want AI to work with humans, assisting us with some of the most pressing needs of our society.

Kathleen Ziemann: Thank you very much, especially the local relevance of AI governance will also be discussed in our round later, so that is a very important point you made. Jhalak, you are very much rooted in civil society as well as in academia, so if you could bring these two perspectives together, that would be very much appreciated.

Jhalak Kakkar: Thank you. Thank you, Kathleen. I think, you know, when we think about AI governance, one question is: what is the process and input into the creation of AI governance, either internationally or domestically? And then, what is the substance of what we are structuring AI governance as? If I can first just take a couple of minutes to talk about the process. I think, if we learn from the past, it is very important to have multi-stakeholder input as any sort of governance mechanism is being created, because different stakeholders sitting at different parts of the ecosystem are able to bring forth different perspectives, and we end up with a more holistic outcome.

I think one of the things that we have increasingly seen is a shift towards multilateralism, and the IGF is a perfect place to talk about the need to focus on multi-stakeholderism and enabling meaningful participation: not participation that is done as a matter of form, but participation that actually impacts outcomes and outputs.

I think the second piece that I want to talk about, when I talk about process, is the increasing need to meaningfully engage with a broader cross-section of civil society, academia and researchers, and in particular those bringing valued and informed perspectives from the global south. The way a toaster works in the United States, versus the way it works in Japan, versus the way it works in Vietnam or India, is pretty much the same; but AI as a technology will be shaped, in the way it functions and the way it impacts, very differently in different contexts.

So, the third piece that I want to talk about is that creating a process that is meaningful to civil society across the global majority is very important to enable, and we can talk maybe later in this conversation about some of the challenges that have been preventing that currently. If we talk about the substance of AI governance, one piece is how we really, truly democratise access to AI. A lot of technology development has historically been concentrated in certain regions. At a moment in time when we’re talking about the WSIS plus 20 review, I want to go back to something that was articulated in the Tunis agenda, which spoke about facilitating technology transfer to bridge developmental divides.

While it has happened, perhaps, with ICANN and ISOC, you know, supporting digital literacy and training, there have been less substantial moves towards operationalisation for AI. I think in this context, it’s very important to think about how, from the get-go, we enhance the capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions. We talk about mechanisms such as open data set platforms, some kind of AI commons; how do we facilitate that access to technology and to an AI commons, and really think about how we democratise access to this technology so that we have AI for social good which is contextually developed for different regions and different contexts? I think the last thing I want to talk about is that regulation and governance is not a bad word. Very often, I’m hearing conversations of the sort: we’ve talked about risk.

Let’s focus on innovation now. I think that’s creating a false sense of dichotomy. I think they have to go hand in hand. And I think in the past, the mistake that we’ve made is not developing governance mechanisms from the get-go. And it doesn’t have to be heavy governance and regulation from the get-go, right? I think at this stage, and Melinda was talking about the fact that we don’t understand what the risks are, we need to be assessing risks. We need to be carrying out AI impact assessments, and this has to be done from a socio-technical perspective, so that we really understand impacts on society and impacts on individuals. Because otherwise, you know, we’re going around in circles saying we don’t know what the risks are, we don’t know what the harms are, we don’t know how it’s going to impact us.

So let’s start setting up mechanisms, whether it’s sandboxes, AI impact assessments, or audits. I know that we’ll go back to conversations about how there’s a regulatory burden to this, how it’s going to slow down innovation. But are there ways we can start to think about how we can operationalize these in light-touch ways, so that we can in parallel start to understand the harms and impacts that are coming up, and we don’t create path dependencies for ourselves later on where we’re just doing band-aid solutions?

So I think that’s a big part of it: it’s important to start with the approach of understanding the impact of AI as a whole. And I think it’s important to also think about the evolution of the technology so it’s beneficial to our society and individuals, rather than landing up in a space where it’s developed in a direction we didn’t quite envisage, with unintended consequences we didn’t realize would come from shaping it in a particular way. I’ll stop here and come in with more points later.

Kathleen Ziemann: Thank you. Jovan, you have a lot of practice in AI, and you call yourself a master in AI. We would love to hear your perspective on your role in AI, but also on how AI is governed in Europe.

Jovan Kurbalija: Thank you, Kathleen. It’s a really great pleasure to be here today. When I was preparing cognitively for the session, I asked myself how we can make a difference. And one point which is fascinating is that in three years’ time, the AI landscape has changed profoundly. Almost three years ago, when ChatGPT was released, it was magical technology. It can write you poetry, it can write you a thesis, whatever you want. And at that time, you remember the reactions were: let’s regulate it, let’s ban it, let’s control it.

There were knee-jerk reactions: let’s establish something analogous to the nuclear agency in Vienna for AI. And there were so many ideas. Fast forward, today we have a realism, and for those colleagues from Latin America, the metaphor could be that AI governance is a bit of a magical realism, like Llosa, Márquez, and others. You have the magic of AI, like any other technology. And I guess many of us in this room are attracted to the internet and AI and digital because of this magical element. But there is a realism. And I will focus now on this realism. The first point is that AI became a commodity. We heard yesterday that in China there are 434, as of yesterday, large language models. I think the statistics are similar for other countries worldwide.

Therefore, AI is not something which is just reserved for a few people in the lab. It’s becoming an affordable commodity. It has enormous impact. One impact is that you can develop an AI agent in five minutes. Exactly; our record is four minutes, 34 seconds. That’s basically unthinkable. Only a few years ago, it was a matter of years of research. That’s the first point. Therefore, the whole construct about risks is basically shifting towards this affordable commodity. The second point is that we are now on the edge where we will have basically AI on our mobiles. And then the question we can ask is: today we will produce some knowledge here in our interaction.

Should that knowledge belong to us, to the IGF, to our organizations, or to somebody else? Therefore, this is the second point, of bottom-up AI. We will be able to codify our knowledge, to preserve our knowledge, individual or group or family, and that will profoundly shift AI governance discussions. And the third point in this context which I would like to advance in this introductory remark is that we have to change our governance language. If you read the WSIS documents, both Tunis and Geneva, the key term was knowledge, not data. Data was mentioned here and there. Now, somehow, in 20 years’ time, knowledge has been completely cleaned out; I hope it will be brought back in WSIS Plus 20. You don’t have it in the GDC, you don’t have it in the WSIS documents; you have only data. And AI is about knowledge. It’s not just about data.

That’s an interesting framing issue. In the discussion, I hope that we can come to some concrete issues about, for example, sharing weights and, through that, sharing our knowledge, and the way we can protect our knowledge, especially from the perspective of developing countries. Because we are on the edge of the risk that that knowledge can be basically centralized and monopolized, and we had all those experiences in the early days of the Internet, where the promise that anyone can develop a digital solution, an Internet solution, ended at the end of the day with just a few being able to do it. And that wisdom should help us in developing AI governance solutions, and we can discuss concrete ideas and proposals.

Kathleen Ziemann: Thank you very much, Jovan, also for the references to the whole history of the Internet. I think that’s always great to have here as expertise on the panel at the IGF. Thank you all for setting the scene. I think we already got an idea of the different perspectives we have here, and also the possibilities for synergies, but maybe also for conflict. And that’s also a bit our role as moderators, to bring out these different possibilities with you on the panel. We would now love to start a more open round of discussion. We have prepared questions to start with, but we also hope that something evolves between you, and that you can refer to each other and answer a bit of the questions that have already been put in the room here. But first of all, we would start with you, Mlindi. You already spoke about the local relevance of AI and how to insert that into global processes, and as South Africa is currently holding the G20 presidency, how will you make sure, within your functions, that the local relevance of AI and the AI frameworks that South Africa has established will be included in the global dialogue?

Mlindi Mashologu: Thank you, Kathleen. I think it’s important to note that AI is a priority in terms of our G20 presidency. The reason why we put it there is that we picked up that how we govern it will determine how we keep it inclusive and just how our societies will actually be tomorrow. So, in our approach, what we have tried to do is to ground the governance in two complementary dimensions: macro foresight and micro precision.

From the macro foresight point of view, we look at AI with a long-term view, recognizing its impact on society over a much longer period, shaping our economy. From our G20 agenda, we are championing the development of a toolkit which will try to reduce the inequalities connected to the use of AI. This toolkit also seeks to identify the structural and systematic ways in which AI can both amplify and redress inequality, especially from the global south perspective. But we also see that this foresight requires geopolitical realism, because AI cannot be dominated by a handful of countries or private sector actors; it has to be multilateral, multi-stakeholder, as well as multi-sectoral. That is why we are working on expanding the scope of participation, bringing more voices from the continent, from the global south, and from underrepresented communities into the center of the AI governance dialogue. But we are also matching the macro vision with micro precision, whereby we look at the ability to address granular, context-specific realities.

So, as I highlighted before, we see that there is no one-size-fits-all when it comes to AI. From there, we advocate for context-aware regulatory innovation, which includes regulatory sandboxes, human-in-the-loop mechanisms, but also adaptive policy tools that can be calibrated to sector-specific risks and benefits. One of the areas that we are focusing on as well is capacity building, developing local talent, research ecosystems, as well as ethical oversight mechanisms, because we believe that AI governance must be owned by all sectors of our society, from the rural areas to the cities.

From our presidency, we also aim to bridge governance with the regional frameworks, so we align with the African Union’s emerging AI strategy, NEPAD’s science, technology and innovation frameworks, as well as regional policy harmonization through SADC. We see that this integration at the regional level is not peripheral; it is foundational in terms of the global governance agenda. Finally, in terms of our G20 presidency, we would like to call on our partners and international institutions to support distributed AI governance architectures, so that we can all be inclusive and equitable, and make sure that the benefits of AI can be meaningful for our society, while we also address the associated risks related to AI.

Guilherme Canela: So, Melinda, moving to you now. Actually, Jhalak stole my thunder when I was preparing the follow-up question to you, because I think she touched on a point that I’m sure several in the audience and online have thought about when you were speaking: what she called the false dichotomy between innovation and protecting human rights, right? Because in the end, the objective of governance, if it’s done in alignment with international human rights law, is to protect human rights for all, not only for the companies, right? So, how do you respond to this?

You framed it, of course, very briefly, as if there were an antagonism between those two things. At the same time, we know all companies, including yours, are investing in human rights departments and reports, and, when there are specific issues, like elections, in how to deal with these technologies and their risks. But yet, there is a lot of scepticism regarding the way the private sector, not only your company, is dealing with this situation. So, could you go a bit deeper on what Jhalak was saying about what, in her view, is a false dichotomy between those two things?

Melinda Claybaugh: Yeah, I mean, I guess I would agree, to be provocative. In fact, I mean, I think that what I’m trying to say is that we need to look at everything together, and it seems that the debate about AI, and by AI, to be clear, I’m talking about advanced, you know, generative AI. I think we tend to talk about AI kind of loosely, but the conversations to date at the international institution level and the framework and commitment level have really been about the most advanced generative AI. Those conversations have largely been focused around risk and safety, and I think that’s an important piece, of course, and we’ve implemented a frontier AI safety framework to address concerns about catastrophic risks. I think, however, on the conversation around harm and risk, two things.

One, I think we need to be very specific about what the harms are that we’re trying to avoid, and, as you point out, a lot of the harms we’re trying to avoid are harms that already exist and that we’ve been trying to deal with. So, people talk about misinformation, people talk about kids, people talk about all the things that are existing problems and that have existing policy frameworks and solutions, to varying degrees, that differ in different places. What I am trying to convey is that we also need to be talking about enabling the technology; not to say ignoring risk, not to say not having that conversation, but we’re missing a key element if we’re not talking about everything together.

Because otherwise it becomes overweighted in one direction, and, you know, I don’t think there’s a global consensus around the idea that advanced generative AI is inherently dangerous and risky. I think that’s a live question that a lot of people have opinions about, but there is a lot of interest and opinion about the benefits and advances of AI, and so I think that all needs to be brought together into one conversation. I will also say that there are existing laws and frameworks already in place that pre-date ChatGPT, right, and so we have laws around the harms that people are talking about, around copyright and data use and misinformation and safety and all of that. We have legal frameworks for it, so I would like to see attention on how those legal frameworks are fit for purpose or not with the new technology, rather than seeking to regulate the technology.

Kathleen Ziemann: Thank you, that’s a very interesting aspect that Jhalak was also touching upon a bit, especially on that idea whether we can use the already existing laws and frameworks in the context of this new technology. Jhalak, how do you perceive this? Do we have all the rules already, and if not, what is missing?

Jhalak Kakkar: Yeah, I think, you know, there’s been a lot of conversation around whether there is existing regulation that can apply to AI and whether there’s a need for more regulation, and I think there are several existing pieces of legislation that would be relevant in the context of AI, just to name a few: data protection, competition and antitrust law, platform governance laws in different countries, consumer protection laws, criminal laws. So, yes, I think I also agree with Melinda’s point that we need to think about whether some of these laws are fit for purpose.

Do they need to be reinterpreted, reimagined, amended to account for the different context that AI brings in? I mean, if I can give an example: think of the way we’ve seen the need for traditional antitrust and competition law to evolve in the context of digital markets. When internet platforms came in, you could have said we have existing competition law, we have existing antitrust law, and that’s going to apply; and we have seen over the last couple of decades that it is not fit for purpose to deal with the new realities of network effects, data advantage, zero-price services, and multi-sided markets that have come in with the advent of internet platforms.

Similarly, we already see a hot debate happening around intellectual property laws: whether copyright law is well-positioned to deal with the unique situation that has arisen where companies are training their LLMs on a lot of the knowledge and data available on the internet, relying on the fair use exception. What was the intention of the fair use exception under copyright law? It was that big publishers should not amass a lot of knowledge with them, and it gives people like you and me access to use that knowledge and reference that knowledge and build on that knowledge. But, you know, it’s an interesting situation where you have large companies now leveraging fair use.

So I think we already have courts around the world dealing with this issue. I’m sure legislatures are going to deal with it, and it’s a question that I think as a society we have to think about: yes, there is development, and these companies are doing new things, maybe fundamentally a transformation when they’re building on this; but what are we losing out on? What are the advantages? All of that has to be weighed and thought through. Coming back to the false dichotomy point, I want to go back to that. Yes, we know a lot of harms that have already arisen in the digital and internet platform context. We’re well aware of those, and we’re looking out for those as civil society, academia and researchers, as we see AI, and if we’re talking more specifically, LLMs, develop.

But those are existing harms that we’re looking for. There are a lot of harms that we don’t yet know may exist, and just to give an example: I don’t think 15 years back we thought about the kind of harm social media platforms would have on children. It just wasn’t something that was envisaged. I mean, maybe someone could have envisaged, you know, children seeing some content, but the mental health impacts, the cyber bullying, the extent and nature of it, a lot of this was unintended and unenvisaged. And I think unless we are scrutinizing these systems, and it’s not only a question of catastrophic risk, we have to think about individual-level impacts and societal-level impacts, and unless we’re engaging with these systems and understanding these systems from the get-go, those impacts and implications and negative consequences will only surface five to ten years from now. And while it’s wonderful to see companies heavily investing in human rights teams and trust and safety teams, you know, as a space we didn’t have trust and safety ten years back, so it’s a new space that has grown.

You have so many professionals coming into this space with specialized skill sets, and it’s great to see that, but we’ve also seen that companies have never been particularly adept at working only under the realm of self-regulation. And this is across industries, I’m not only pointing to tech; we’ve seen that time and time again over the last 150 years when we talk about industrial regulation. So I think we have to move beyond the sense that companies will self-regulate. Very often they don’t disclose harms that are apparent to them, and we need external regulators, we need communities to be engaging, a bottom-up approach, civil society to be engaging, multilateral institutions to be coming in. We need the development of guidance and guidelines to operationalize the AI principles that we’ve all been talking about and working on over the last five, seven, eight years. So I think we have to move forward into the next phase of AI governance.

Guilherme Canela: Thank you, very interesting. So now what’s going to happen is: I will do a follow-up question to Jovan, and then we are going to open to you. So if you want to start queuing at the available mics, you are welcome to do it. Jovan, let’s go back to the magical realism and the issue of getting knowledge back into the discussion. It’s a very interesting point you raised. You probably remember, when there was the Tunis round of the World Summit, UNESCO published a very groundbreaking report called Towards Knowledge Societies. It’s very interesting: even today, every week, that report is one of the most downloaded reports in the UNESCO online library, which shows that, independently of what we are discussing here in these very limited circles, people overall are still very much interested in the knowledge component of this conversation.

So I make this preamble to ask you to go a bit deeper: how do we bring knowledge back into this conversation, of course connecting with the new topics? Of course data is a relevant issue, we can’t ignore the discussion of data governance, but, I mean, the South African presidency has three main topics, correct me if I’m wrong: solidarity, equality and sustainability. And if you read that UNESCO report of 20 years ago, connecting with the challenges of the then Information Society, you’ll see those three keywords appearing, maybe in different ways. People like Manuel Castells and Néstor García Canclini were saying those things. So what is your view on how we get back to this important part of the conversation when we are looking at the AI governance frameworks?

Jovan Kurbalija: Sure, it’s good that you brought this up; by the way, it’s an excellent report. Two reports are excellent: the UNESCO one and the World Bank report on digital dividends; those are landmarks. What worried me, and I studied it and didn’t want to bring it up, but you said that you don’t mind controversies, is that even UNESCO, which set the knowledge stage with that excellent report, backpedalled on knowledge in the subsequent years, which was part of the overall policy fashion. Even in the ethical framework, the recommendation, data is more present.

That’s the first point. The second point: why do people download it? They react intuitively; they can understand knowledge. Data is a bit abstract; knowledge is what we are now exchanging, creating, developing. And my point is that this common sense element is extremely important, and through that, through bottom-up AI, through, let’s say, preserving the knowledge of today’s discussion, maybe the excellent questions that we’ll have: this is knowledge that was generated by us at this moment. And this is also, back to Márquez and the other magical realists, about grasping the moment. It’s technically possible, it’s financially affordable, and it’s ethically desirable, if you want this trinity.

But let me, on your question, just reflect on two points of discussion. There are many false dichotomies, including in the question of knowledge. I can list them: you have multilateral versus multi-stakeholder, privacy versus security, freedom versus public interest. And we can label them as false dichotomies, but I think we should make a step forward. Ideally, we should have both, multi-stakeholder and multilateral, privacy and security; but sometimes you have to make trade-offs. And it is critical that trade-offs are done in a transparent way, so that you can say: okay, in this case, I’m going for a multilateral solution, because governments have their respective roles and responsibilities. You can find this in many other fields.

And back to your question about bringing the discussion to common sense, and the references that colleagues made: I would go not only 150 years back, or even to the Napoleonic code; I would go to Hammurabi, 3,400 years ago. There is a provision in Hammurabi’s law: if you build a house, and the house collapses, the builder of the house should be punished with a death sentence. That was the time. A harsh one. We don’t want that. What we are missing today, and let me give you one example and end with this, I will conclude: Diplo has its own AI. We are reporting from this session. And let’s take a hypothetical situation: our AI system gets confused and says that two of you, or all of us, said something which you didn’t say.

And you go back to Paris, and your boss says: hey, by the way, did you really say that? And you say: no, I didn’t say it, but Diplo reported it. Now, who is responsible for it, ethically, politically, legally? I’m responsible. I’m the director of Diplo. Nobody forced me to establish an AI system, to develop an AI system. Therefore, we are losing a common sense which has existed from Hammurabi, through the Napoleonic code, till today: somebody who develops an AI system and makes it available should be responsible for it. There are nuances in that, but the core principles are common sense principles. Therefore, in that sense, people, by downloading the knowledge report, are reacting with common sense. I think in AI governance we should really get back to common sense, and be in a position to explain to a five-year-old what AI governance is. And it’s possible. And I would say this is a major challenge for all of us in this room, and I would say for the policy community: to make AI governance common sense, bottom-up, and explainable to anyone who is using AI.

Kathleen Ziemann: Thank you very much. I don’t see a queue behind the mics yet, but I think we do have someone. That is great. Welcome. Happy to take your questions for the panel now. It would be great if you could say who you are, from which institution, and to whom you would like to direct your question.

Audience: Thank you. So, my name is Diane Hewitt-Mills, and I’m the founder of a global data protection office called Hewitt-Mills. For those that don’t know, under the GDPR, certain organizations are mandated to appoint a data protection officer: an individual or an organization that has responsibility for independently and objectively reviewing the compliance of the organization when it comes to data protection, cybersecurity, and increasingly AI. I’m a UK qualified barrister. I’ve been working in the area of governance for over 25 years, data protection and privacy focused governance, and I’ve been running this organization for seven years, which I’m, you know, very proud to do as a female founder. I know I’m a very rare beast. But importantly, I decided five years ago to go for this standard called the B Corp standard, and I don’t know if you’re aware, but B Corp is a standard for organizations that can demonstrate high standards in environmental, social and governance (ESG) performance. So my comment or recommendation is this: we oversee carbon offsets and the efforts of organizations in terms of demonstrating ESG, and I had a thought about whether organizations could also demonstrate their social offset. So, for example, if you are a tech business or health business using AI, would it be an idea that you document the existing risks, think about foreseeable risks, and think about how you could offset those risks in an objective way, with an independent overseer of that type of activity? I just thought I’d throw that out there to the panelists, because we’re thinking about creative ideas for making AI governance tangible and explainable, and I wondered, for example, if that had been the requirement 15 years ago for social media platforms, to demonstrate their social offset, what sort of world we might be in today.

Kathleen Ziemann: Thank you very much. The question was not directed to anyone specific on the panel, so whoever wants to can take it. I’m looking at you, Melinda, but I think it might be relevant for others as well.

Melinda Claybaugh: Yeah, I’m happy to take a first pass at it. What you’re talking about is really a risk assessment process that is objective, transparent, and auditable in some fashion. You’re right that this is the basis of the GDPR accountability structure that so many data protection laws have been built on. Increasingly we see it in the content regulation space as well, particularly in Europe: risk assessments, mitigations, and transparency measures that can be assessed by external parties. And interestingly, we are seeing it in some early AI regulation attempts. I speak most fluently about what’s going on in the US, but we are seeing very similar structures around identifying and documenting risks, demonstrating how you’re mitigating them, and then in some fashion making that viewable to some set of external parties. I do think that is a proven and durable type of governance mechanism that makes a lot of sense. We still come to the issue, however, of what the risks are and how they are assessed. I say that because it is a particularly thorny challenge in the AI safety space, and there are healthy debates around what risks are tolerable or not. But as a framework it makes a lot of sense; there are a lot of professionals who already work in that way, and companies already have those internal mechanisms and structures. So I would be surprised if we didn’t land in a place like that, and in fact that is essentially the structure the EU AI Act proposes.

Guilherme Canela: Sorry, just a quick follow-up question. In that case, even if there is no consensus about what the risks are, the transparency you mentioned is part of the solution, right? Companies don’t need to be forced to agree on the risks, but they need to be transparent in telling stakeholders what issues they consider risks and how they are mitigating them. Because saying “this is a risk, you need to report on it” may be a problem, but when the requirement is to report on how you do risk assessments, it’s a different ball game, right?

Melinda Claybaugh: Yeah, but here is the trick. I’m thinking about this through the lens of an open-source model provider, and this is another tricky area of AI governance and regulation. How you govern closed models and open models may be very different. We do all kinds of testing and risk assessment and mitigation of our model, and then we release it for people to build on, add their own data to, and build their own applications with. We don’t know how people are going to use it; we don’t know how they end up using it; we can’t see that, and we can’t predict how the model will be used. So there are nuances as we think about who’s responsible for what. Some of it is common sense: who is using it? That’s part of the value-chain issue that people talk about.

Kathleen Ziemann: I see that Mlindi also wanted to react to the question.

Mlindi Mashologu: I think the important thing for us as policymakers is that we just want everybody to play fair when it comes to AI. There are areas where we understand that there will be self-regulation by organizations, but what is important is to make sure that we can look at all the risks that are emerging and deal with them collectively, both the private sector and the government. As government, we don’t want to be seen as imposing hard regulation, which might end up stifling innovation, but we want to make sure that everybody can be protected, while, from the private sector point of view, you can also derive the value that you want to derive from AI systems.

I think that’s what is important. The other area I’ve touched on before, explainability, is also very important, because these models might make decisions that can be very harmful to human lives. That’s why we say these decisions need to be explainable. But it also means that whenever the model makes a decision, it needs to have considered data sets from a broad range of demographics, so that you don’t look at a few demographics and say the model can take a decision based on the small amount of data that you trained it on.

Kathleen Ziemann: Yes, definitely, and I think it’s a big achievement of the open source community to really stress that factor of explainability, of what is actually happening within the data and within the models. We would love to move on with the on-site questions, so we switch to this microphone. Happy to hear your question. Thank you very much.

Audience: So, my name is Kunle Olorundare. I’m from Nigeria, and I’m the president of the Nigerian chapter of the Internet Society; we are into advocacy and related work. My concern is this. I know it’s just about the right time for us to start discussing AI governance; there’s no gainsaying that. However, there are issues that we really need to look at critically, and one of them has to do with the way data is collected. I listened to Jovan earlier when he was emphasizing the issue of knowledge, and I agree 100%, because the end product of artificial intelligence is knowledge.

However, how we gather this data is very important. I say that because we are looking at an AI that is going to be inclusive, that will have value for every community, and you will agree with me that this data gathering is done by experts, and every individual has their own bias.

So, I believe that whatever data you gather is as inherently flawed as the bias of the person who gathered it in the first place. We need to start looking at how we bring inclusivity into how we bring all this data together, considering all the stakeholders. I think that is very important. That is on one hand. On the other, I think it will get to a stage where even this AI we are talking about becomes a DPG, a digital public good. I’m saying that because it’s going to be available to everybody, and everybody should be able to use it for whatever purpose they want.

But before we get there, how do we ensure that we put everybody on the same pedestal, in the sense that we need a framework that is universal? I listened to Melinda when she was talking about frameworks, and I began to see different frameworks coming from different stakeholders. We need to sit down and bring all these frameworks together so that we can have a universal framework that speaks to the issues that bother everybody, and the AI we have at the end of the day will be universal and able to take care of everybody’s concerns. I would like the panelists to react to this, I think Jovan and probably Melinda. Thank you very much.

Kathleen Ziemann: Thank you very much. Jovan, do you want to go first?

Jovan Kurbalija: Just quickly, an excellent point and question. Two comments, both controversial, but the first one more so. We have had a lot of discussion about cleaning biases, and I’m not speaking about illegal biases, biases which insult people’s dignity; that’s clear, and that should be dealt with by law. But beyond that, we should keep in mind that we are biased machines. I am biased. My culture, my age, my, I don’t know, hormones, whatever, are defining what I’m saying now, or what questions you ask.

Therefore this obsession with cleaning bias, which is now calming down but existed, let’s say, one or two years ago, was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But beyond that we have to bring more common sense into this again. The second point you mentioned is about knowledge. Knowledge, like bias, should have attribution: financial, legal, and otherwise. Is the question you ask built on your knowledge, your understanding, and other things? The problem in the current debate is that we are throwing our knowledge into some pot where we don’t know what happens to it.

It’s like what I call the AI Bermuda Triangle: knowledge disappears into it, and suddenly we are revisiting it. We have even been testing big systems in our lab, in deep-layer testing where we put in very specific, contextual knowledge, and we realized that it is taken, repackaged, and not yet sold back to us, but maybe it will be in the future. That’s a critical issue. Your knowledge, the knowledge of a local community in Africa, Ubuntu, oral knowledge, written knowledge, belongs to somebody, or should be attributed: shared within a universal framework, definitely, but attributed. That’s the critical issue when it comes to knowledge, and also to your previous question about what we should do with knowledge.

And again, the instruments are there, and the risk is that an AI governance discussion that covers everything and anything, a bit of magical realism, basically misses the core points. It’s like a crying baby: instead of answering the question with existing tools, we give the baby toys, which is the discussion on ethics and philosophy. I love philosophy, but there are some issues related to your question, the question of bias and the question of knowledge, that we can solve with existing instruments.

Kathleen Ziemann: Melinda, before you react as well, I’m looking at Jhalak’s face, and I see that you might not agree with all of the points Jovan mentioned, especially, possibly, the one that bias in data can be neglected. Is that something you’re thinking about?

Jhalak Kakkar: I mean, I don’t actually disagree with him. There is a reality that there is a level of bias in all of us. It’s not that the world is completely unbiased; it’s not that when judges make decisions there’s no bias there. And ultimately AI is trained on data from this world, and biases will get embedded into it; models are trained on existing data sets which capture societal bias. I think the difference is that with human decision-making, in many contexts we have set up processes and systems, and there has to be disclosure of the thinking and reasoning going into a decision, which can be vetted if someone raises an objection. With AI systems, that’s the challenge: explainability has been difficult to establish in many contexts for various kinds of AI systems, and I think that’s a question that is still being grappled with. And then there is the other piece: disclosure of the use of AI systems in various contexts, whether someone knows that an AI system is being used, that they are being subjected to it, and to the kind of bias that comes into decision-making that impacts them.

Kathleen Ziemann: Thank you. Melinda?

Melinda Claybaugh: Just two quick thoughts. I think it is critical that AI works for everyone, and part of that is making sure that we have the data, that there is a way of either training or fine-tuning a model on data that is as representative as possible. I think that’s a foundational concept. I also think that there needs to be a lot of education around AI outputs, so that when people are interacting with AI, they understand that what they’re getting back may not be the truth, right?

What is it, actually? It’s just a prediction about the next right word. We’re at the very early stages of this in society, and our expectations of what it is, what it should be, and what these outputs should be relied on for are still evolving. I do agree that when AI is being used to make decisions about people, or their eligibility for services or jobs, there is an extra level of concern and caution, and requirements should be added in terms of a human in the loop or transparency around how a decision was made. I absolutely understand the concerns around that. So I think as a society, as we get more experience and understand these tools more, what they should be used for and what they should not be used for, these questions will get sorted out.

Kathleen Ziemann: Thank you very much. At the IGF we want to be as inclusive as possible; that’s why we also have online participation for people who can’t be here, or who maybe can’t afford to travel here, and we have our online moderator, Pedro, behind the mic here. Pedro, could you give us two relevant questions from the online space that should be addressed to the panel? That would be really great.

Online moderator: Perfect, thanks. We have a question from Grace Thompson, directed to Jhalak and then Melinda. What are the panelists’ views about the consensus gathered in the Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law, the first international treaty and legally binding document to safeguard people in the development and oversight of AI systems? We at the Center for AI and Digital Policy advocate for endorsement of this international AI treaty, which has 42 signatories to date, including non-European states.

Kathleen Ziemann: It’s not really coming through, Pedro; we have difficulties understanding you. Can you give us maybe the two main points that need to be discussed? Was it the EU AI Act in the first one?

Online moderator: The Council of Europe Framework Convention on AI, Human Rights, Democracy and the Rule of Law. Comments from the panel, from Jhalak and Melinda.

Kathleen Ziemann: Okay, thank you very much. I think that went through okayish. Jhalak, do you want to react?

Jhalak Kakkar: Are they asking about the Framework Convention? Yes. So I think there has been a lot of conversation globally around what the right approach is to take. Melinda was saying we need to think about which systems need more scrutiny than others, systems that are impacting individuals and people directly versus those that are not.

There’s been a whole conversation, which we’ve referenced earlier in this dialogue, around innovation versus regulation and what the right level of regulation is at this point. What is too heavy? What is not enough? I don’t have the answer to that. In different contexts it’s going to be different. In countries which have high regulatory capacity, there is more that they can do and implement. In countries that don’t, we have to frame regulation and laws which work for those regulatory and policy contexts.

But what I think is really important is occasions like, for instance, the India AI Impact Summit, which is an opportunity, because India is trying to emerge as a leader in the global majority, to really bring together thinking from civil society, academia, researchers, industry, and governments, particularly from the global majority, to talk about what would be the right way forward. Would it be borrowing from ideas that have developed in another context? Perhaps there are relevant ideas to pick up from there.

But what is contextually and locally useful and relevant within the contexts we come from? Places like India and South Africa may have a lot of AI that is developed elsewhere, say a Sloan Kettering health diagnostic tool, which is then brought in and deployed in the Indian context. But the demographics are different. The kind of testing and treatments available in our primary, secondary, and tertiary health care settings are different.

So there are a lot of differences. How do we think about something like that, which may not really be a topic of discussion in other parts of the world? In India and places like South Africa we may have slightly different challenges to grapple with, and I think it’s very important that those conversations happen as well.

Mlindi Mashologu: Yeah, from the South African point of view, as my colleague has just highlighted, one of the areas is human rights, which is enshrined in the Constitution. So whatever you do from the technology point of view, you need to make sure that it doesn’t impact human rights or the Bill of Rights. One of the things we’re trying to do is to make sure that when you deploy these types of technologies, they are not infringing on the rights of people.

But you’ll also find that we have other laws, like the Protection of Personal Information Act, which says that you can’t just use my information as you please. But then, how do we make sure that we can use your information for the public good? So now you have two competing laws: one is trying to use information for the greater good, and one is saying that you can’t just use my information.

So I think it’s going to be quite a balancing act: what are the things that we can do to drive innovation, and what are the things that we need to do to make sure that we don’t infringe on human rights and on people’s information?

Kathleen Ziemann: Yes, thank you very much. I see there are further questions from the floor. Jovan, you will be reacting briefly first; I think that would also be great.

Jovan Kurbalija: There were two concrete questions, on the EU AI Act and the Council of Europe Convention. Just quickly, those are very interesting points. They moved fast, and probably too far. As we’re hearing from Brussels, there is a bit of revisiting of some provisions, especially on defining high-risk models through FLOPs and other criteria. The Council of Europe is an interesting organisation.

They adopted the Convention on AI, but they’re an interesting organisation because under one roof you have the Convention, you also have human rights coverage, including the human rights court, and you have cybercrime: the Council of Europe is host of the Budapest Convention. You have science. Therefore, it’s one of the rare organisations where the interplay between existing silos, when it comes to AI, could basically be bridged within one organisation. Those are just two points on the EU AI Act and the Council of Europe Convention.

Kathleen Ziemann: Thank you very much. Let’s take the last two questions from the floor. I see two people standing behind the mic over there.

Audience: Yes, thank you. My name is Pilar Rodriguez, I’m the youth coordinator for the Internet Governance Forum in Spain. I wanted to follow up a little on what Ms Jhalak was saying about how countries can achieve AI governance and AI sovereignty without this leading to, let’s say, AI fragmentation. I’m not just thinking from a regulatory perspective, but we have the AI Act in Europe, the California AI regulation, China has its own regulation; doesn’t that lead to more fragmentation? And coming from the youth perspective, is there a way to ensure that we have, let’s say, a global minimum, so that future generations can be protected?

Kathleen Ziemann: Thank you very much. Let’s also take the next question of the person behind you.

Audience: Hi, I’m Anna from R3D in Mexico. It’s going to sound like I’m making a comment more than asking a question, but I promise that there is a question for Melinda. I was very concerned to hear such underestimation of the risks of AI, making them sound like something hypothetical, when they have actually materialized in several examples around the world. And Jovan was mentioning this topic of knowledge and education while at the same time speaking about illegal biases, when I think that in reality there have been several examples of how classism, racism, and misogyny affect how people can access basic services around the world, or how police predict who is a suspect or not. So we shouldn’t misinform people about the actual risks.

But the question for Melinda relates to the environmental crisis that we are living through. Since she mentioned that companies such as Meta are doing these risk assessments, I wonder how Meta is planning to self-regulate when, for example, it hasn’t done environmental or human rights assessments, when it has established hyperscale data centers in places like the Netherlands, where public pressure made it stop constructing them, and you then move them to global south countries, or to Spain in that case, so that all the issues with extractivism, water crisis, and pollution arrive in other communities where there hasn’t been any consultation, though you claim that there has. That would be my question.

Kathleen Ziemann: Thank you very much. So, two relevant points: one is fragmentation, and the other is global AI justice, basically. Melinda, do you want to react first?

Melinda Claybaugh: Sure. I can’t really speak to the data center piece, to your question about the energy needs of AI and where data centers are placed. I can say that we all know the AI future is going to require a lot of energy, and there are a lot of questions about where the energy needs are and where the solutions to those needs are going to come from, but I can’t speak in any detail about how particular decisions are made.

Kathleen Ziemann: And in terms of fragmentation, that was part of the first question, right, the fragmentation of AI governance, with so many initiatives and so many different stakeholders. That basically raises the question: how could actors from different sectors and regions cooperate more in that area? Is there an idea on the panel of how that could look? Who would like to react to that?

Jovan Kurbalija: The question was how to avoid it?

Kathleen Ziemann: How could different sectors and regions cooperate even better on AI governance? How can we counteract the fragmentation that might occur from the blooming landscape of AI governance?

Jovan Kurbalija: We have to define what fragmentation is. Having AI adjusted to the Indian, South African, Norwegian, German, or Swiss context is basically fine. But the communication or exchanges should probably be underpinned by some sort of standardisation around the weights. Weights are the key element of AI systems. Therefore we may think about some sort of standards, to avoid the situation that we have with social media: if you are on one platform, you cannot migrate with your network to another. The Digital Services Act in the EU is trying to mitigate that. The same thing may apply to AI. If my knowledge is codified by one company and I want to move to another platform or company, there are no tools to do that. My advice would be to be very specific and to focus on standards for the weights, and then to see how we can share the weights and, in that context, how we can share the knowledge.

Kathleen Ziemann: So joint standardisation, Mlindi?

Mlindi Mashologu: On the continent we started this governance work as early as 2020-2021, when we developed the AI blueprint for the continent, and from there the African Union went on to develop the continental AI strategy, while individual member countries are also developing their own policies and strategies. So I think there is not much fragmentation; it’s rather that at the grassroots level each country will have particular priorities that it would like to focus on. Generally, if you look at all the published policies, strategies, and legislation, you’ll find that they address the core principles: the issues of ethics, the issues of bias, the issues of risk. From the South African point of view, in the policy we are currently finalising, we are advancing some of those aspects as well. So at the country level you’re not going to have exactly the same approach everywhere, but that reflects different priorities rather than fragmentation.

Kathleen Ziemann: Thank you.

Guilherme Canela: Do you want to add anything?

Jhalak Kakkar: Yeah. I think there is a concern that in the drive for innovation there’s a race to the bottom in terms of adherence to responsible AI, ethical AI, and rights frameworks. We have several existing documents, ranging from the UDHR to the ICCPR, which can be interpreted, and through which international organisations can build norms that set a certain baseline. As the WSIS plus 20 review happens, I think the IGF should be strengthened to help not only with agenda setting for the action lines, but also as a feedback loop into the CSTD, the WSIS Forum, and other mechanisms, so that there is holistic input from multiple stakeholders going into these processes, input which accounts for many of the concerns that have been raised, ranging from environmental concerns to the impact of extraction in global majority contexts.

It could be questions of labor for AI, whether it’s labeling or other worker-related concerns. All of this needs to be surfaced, and these conversations need to feed back into the agenda setting as well as the final outcomes. That level of international coordination, at both the multilateral and the multi-stakeholder level, is important. We have to come together and work to set this common baseline, so that in the race to get ahead we don’t lose focus on the common values we have articulated in documents like the UDHR.

Guilherme Canela: Thank you. So now we are walking towards the end, so if the online moderator has a very straightforward question.

Online moderator: Yes, we have one from Michael Nelson about two sectors. The two sectors that are spending the most money on AI are finance and the military, and we know very little about their successes and failures. He would like to hear from the panelists, especially from Jovan and Melinda: what are their fears and hopes about those two sectors?

Jovan Kurbalija: The question is about AI?

Online moderator: Especially the finance sector and the military.

Guilherme Canela: Okay, so fears and hopes, finance, military. And then I will give the floor back to all of you: one minute each to comment on that, if you want, and within this one minute, what is your key takeaway from this session? Let’s start with you, Melinda.

Melinda Claybaugh: Okay, I don’t have an answer on hopes and fears for the finance and military sectors, to be honest. We are very focused on adding AI personalisation to our family of apps and products; I’ll leave finance and the military to others. On the key takeaway from the session: I think it really is interesting to take stock of where we are at these meetings. I’ve been at the last couple of IGFs, and the pace of discussion and the developments in the space are really fast-moving. So I’m encouraged, and I would encourage us all to keep having these conversations. Multi-stakeholder will be the word that everyone says here, but the IGF really does play a unique and important role in bringing people together. I know we have a lot of Meta colleagues here; we take everything we hear here back home, talk to people, and it informs our own direction. So let’s keep having these conversations. I think the convening power, bringing these particular voices together, is the most important contribution in the space right now.

Kathleen Ziemann: Thank you. Jovan?

Jovan Kurbalija: Military AI is unfortunately getting center stage with the conflicts, especially Ukraine and Gaza, together with the question of the use of drones. There are discussions in the UN on LAWS, lethal autonomous weapons systems, or killer robots, and the Secretary-General has been very vocal for the last five years about banning killer robots, which is basically about AI. What is my takeaway? Awareness building and education. At Diplo, we run an AI apprenticeship program which explains AI by developing AI: people learn about AI by developing their own AI agents. I would say let’s demystify AI, but still enjoy its magic.

Kathleen Ziemann: Thank you. Jhalak?

Jhalak Kakkar: Yeah, my final thoughts would be that we need to learn from the past: from its successes, things like the multi-stakeholder model and the successes we’ve seen in international cooperation, but also from the mistakes that have been made in governing technology, and not repeat those. And we need to continue to work together to build a robust, wholesome, impactful, and beneficial digital ecosystem.

Kathleen Ziemann: Thank you. Mlindi?

Mlindi Mashologu: From my side, I just want to say that AI needs to be anchored in human rights. We need to make sure that the technology empowers individuals. When it comes to innovation, we need to innovate responsibly, by looking at adaptive governance models, which include regulatory sandboxes. The last point I want to touch on is collaboration: aligning national, regional, and global efforts to ensure that the benefits of AI are spread across everybody in our society. Those are my final thoughts.

Guilherme Canela: Thank you very much. So now I have the very difficult task of trying to summarize, which would be impossible. But just as a disclaimer, whatever I’m going to say now is the full responsibility of Guilherme Canela, not of any of you, right? I think there is an interesting element in this conversation. Many years ago, when I was involved in similar debates on AI governance, the first thing that appeared was bias. And bias appeared very late in our panel, which is a good sign, because the first things that appeared were the processes.

Even if we disagree, right? The dichotomy, the possibly false dichotomy, between innovation and risks; all those keywords that we spoke about, risks, innovation, public goods, data governance, bringing knowledge back: those are actually more structured frameworks, which look beyond the very real but very specific issues of bias, disinformation, or conspiracy theories. So I think this is a good sign for all of us, even if we disagree, as you noticed: we are looking into something that we can take to the next level of conversation from a governance point of view.

Because when we concentrate too much on specific pieces of content rather than on the processes, the conversation becomes very difficult, because it gets tied to polarization and to specific opinions, which everyone has the right to have, about what is false and what is not, what is dangerous and what is not. Whereas when we concentrate on transparency, accountability, public goods, and so on, all those keywords come with a lot of interesting knowledge behind them about how we turn them into concrete governance, which does not mean only governmental governance; it can be self-regulation, co-regulation, and so on. But for obvious reasons of time we also left important things out of the conversation that need to be part of governance frameworks. For example, the energy consumption of these machines should be part of governance frameworks, and it appeared only very late today because of the time constraints.

But I do think that the panel did a good job of also surfacing some of the divergences in this conversation, which is part of the game. The last thing I want to say, and this is not on the shoulders of the panelists or my co-moderator: I invite you to consider that being innovative means leaving no one behind in this conversation. When Eleanor Roosevelt was holding the Universal Declaration of Human Rights in that famous photo, that was the real innovation: how we came together and put down those 30 articles in a groundbreaking way whose promise is not fully realized even today. What we really require is innovation that includes everyone, not only the 1%. Thank you very much. Thank you, my co-moderator. It was a pleasure.

Kathleen Ziemann

Speech speed: 142 words per minute | Speech length: 1666 words | Speech time: 701 seconds

AI governance is blooming but fragmented with different levels of engagement across sectors and regions

Explanation: Ziemann describes the current AI governance landscape as having numerous emerging tools, principles, processes and bodies globally, but notes this creates a fragmented environment with varying levels of participation across different sectors and regions. She emphasizes that while there are many initiatives, there are different possibilities for engagement.

Evidence: Examples include the OECD AI principles (2019), UNESCO recommendations (2022), voluntary commitments by AI companies, the EU AI Act, the G7 Hiroshima AI process, G20 declarations, the Africa AI declaration in Kigali, and the UN Global Digital Compact

Major discussion point: Current State of AI Governance Landscape

Topics: Legal and regulatory | Development

Melinda Claybaugh

Speech speed: 151 words per minute | Speech length: 1982 words | Speech time: 783 seconds

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

Explanation: Claybaugh argues that while there’s no lack of governance frameworks and principles in the AI space, there are still many questions and concerns about their practical implementation. She suggests we need to take stock of what has been established and consider broadening the conversation beyond just risk to include opportunity and innovation.

Evidence: Meta has put out a frontier AI framework for assessing catastrophic risks, but there’s still disagreement on what risks are and how to quantify them. The EU AI Act is facing implementation problems, with policymakers reconsidering certain aspects

Major discussion point: Current State of AI Governance Landscape

Topics: Legal and regulatory | Economic

False dichotomy between innovation and risk management – both must go hand in hand

Explanation: Claybaugh argues that the conversation has been overweighted toward risk and safety concerns, and suggests we need to talk about enabling technology and innovation alongside risk management. She emphasizes the need to broaden the conversation to include the right voices and representation from different stakeholders.

Evidence: Different regions and countries focus more on innovation and opportunity while others focus on safety and risks. There’s a lack of technical and scientific agreement about risks and their measurement

Major discussion point: Innovation vs. Risk Management Balance

Topics: Economic | Legal and regulatory

Agreed on: False dichotomy between innovation and risk management – both must go hand in hand

Disagreed on: Innovation vs. Risk Management Balance – False Dichotomy Debate

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Explanation: Claybaugh contends that there are already legal frameworks in place that pre-date ChatGPT covering issues like copyright, data use, misinformation, and safety. She suggests focusing on whether these existing frameworks are fit for purpose with new technology rather than creating new regulation specifically for the technology itself.

Evidence: Laws around the harms people discuss regarding copyright, data use, misinformation and safety already exist and pre-date ChatGPT

Major discussion point: Regulatory Approaches and Implementation

Topics: Legal and regulatory | Human rights

Disagreed on: Existing Legal Frameworks vs. New AI-Specific Regulation

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Explanation: Claybaugh supports the idea of objective risk assessment processes that can be viewed by external parties, similar to GDPR’s accountability structure. She sees this as a proven and durable governance mechanism that makes sense for AI, though notes challenges around defining and assessing risks.

Evidence: GDPR accountability structure and similar approaches in the content regulation space in Europe, with risk assessments, mitigations and transparency measures

Major discussion point: Regulatory Approaches and Implementation

Topics: Legal and regulatory | Human rights

Agreed on: Importance of transparency and explainability in AI systems

Different governance approaches needed for open source vs. closed AI models

Explanation: Claybaugh explains that governance challenges differ between open and closed models, particularly regarding responsibility and oversight. With open source models, companies can test and assess risks before release, but cannot predict or control how others will use the models after release.

Evidence: Meta provides open source models for which it does testing and risk assessment before release, but people build on them with their own data for applications that Meta cannot see or predict

Major discussion point: Regulatory Approaches and Implementation

Topics: Legal and regulatory | Economic

AI must work for everyone requiring representative training data and fine-tuning

Explanation: Claybaugh emphasizes that for AI to be effective for all users, it’s critical to train models on data that is as representative as possible or fine-tune models appropriately. She also stresses the need for education about AI outputs so people understand the limitations and nature of AI responses.

Evidence: AI outputs are predictions about the next right word, not necessarily truth, and society is in the early stages of understanding what AI should be relied on for

Major discussion point: Data Bias and Inclusivity

Topics: Development | Human rights

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Explanation: Claybaugh highlights the unique and important role that the IGF plays in bringing different stakeholders together for AI governance conversations. She notes that Meta takes insights from these discussions back to inform its own direction and emphasizes the value of continued multi-stakeholder dialogue.

Evidence: Meta has colleagues attending IGF sessions and they take learnings back home to inform company direction

Major discussion point: Multi-stakeholder Participation and Process

Topics: Legal and regulatory | Development

Agreed on: Need for meaningful multi-stakeholder participation in AI governance

Mlindi Mashologu

Speech speed: 171 words per minute | Speech length: 2193 words | Speech time: 766 seconds

Need for sector-specific policy interventions that are technically informed and locally relevant

Explanation: Mashologu argues that AI governance cannot be one-size-fits-all and requires different approaches for different sectors. He emphasizes that regulating AI in financial services would be different from regulating it in agriculture, requiring sector-specific interventions that are both technically informed and locally relevant.

Evidence: AI in financial services requires different regulation than AI in agriculture

Major discussion point: Current State of AI Governance Landscape

Topics: Legal and regulatory | Development

Agreed on: AI governance must be contextually relevant and locally adapted

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

Explanation: Mashologu emphasizes South Africa’s G20 presidency focus on expanding participation in AI governance discussions by bringing more voices from the African continent, the global south, and underrepresented communities to the center of the AI governance dialogue. He argues this is essential for multilateral, multi-stakeholder, and multi-sectoral approaches.

Evidence: South Africa’s G20 presidency is working on expanding the scope of participation and developing a toolkit to reduce inequalities connected to AI use

Major discussion point: Multi-stakeholder Participation and Process

Topics: Development | Legal and regulatory

Agreed on: Need for meaningful multi-stakeholder participation in AI governance

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Explanation: Mashologu advocates for context-aware regulatory innovation that includes regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to context-specific risks and benefits. He emphasizes the need for responsible innovation while ensuring AI empowers individuals.

Evidence: Regulatory sandboxes, human-in-the-loop mechanisms, and adaptive policy tools that can be calibrated to specific contexts

Major discussion point: Innovation vs. Risk Management Balance

Topics: Legal and regulatory | Economic

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Explanation: Mashologu argues for requirements that AI decisions, especially those impacting human lives and livelihoods, must be sufficiently explainable. He emphasizes the right to understand how AI systems make decisions in critical areas and the need for broad demographic representation in training data.

Evidence: Examples include credit scoring, predictive policing, and healthcare diagnostics, where people need to understand how AI decisions are made

Major discussion point: Human Rights and Ethical Considerations

Topics: Human rights | Legal and regulatory

Agreed on: Importance of transparency and explainability in AI systems

Human-in-the-loop mechanisms essential for high-risk domains with clear intervention thresholds

Explanation: Mashologu advocates for human-in-the-loop learning in AI system development from design through deployment, where humans must guide and, when needed, override automated systems. This includes reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains.

Evidence: Reinforcement learning with human feedback and clear thresholds for interventions in high-risk domains

Major discussion point: Human Rights and Ethical Considerations

Topics: Human rights | Legal and regulatory

AI governance must be anchored in human rights and ensure technology empowers individuals

Explanation: Mashologu emphasizes that AI governance must be grounded in human rights principles as enshrined in South Africa’s Constitution and Bill of Rights. He stresses that whatever technology is implemented should not infringe on people’s rights while still enabling innovation and public good applications.

Evidence: The South African Constitution and Bill of Rights; the Protection of Personal Information Act creates tension between using information for the public good and protecting individual information

Major discussion point: Human Rights and Ethical Considerations

Topics: Human rights | Legal and regulatory

AI governance should be grounded in data justice principles with focus on economic equity and environmental sustainability

Explanation: Mashologu argues that South Africa’s regional approach to AI governance is grounded in data justice principles that put human rights, economic equity, and environmental sustainability at the center of AI development. He recognizes the impact of climate change on human rights and the need to address historical inequities.

Evidence: Recognition of climate change impacts on human rights and environmental sustainability, addressing historical inequities

Major discussion point: Environmental and Social Justice

Topics: Human rights | Development | Sustainable development

Regional frameworks like the African Union AI strategy should align with global governance efforts

Explanation: Mashologu explains that South Africa’s AI governance approach leverages existing regional frameworks, including the African Union data policy framework and emerging AI strategy, NEPAD’s science and technology frameworks, and regional policy harmonization through SADC. He sees regional integration as foundational to global governance agendas.

Evidence: African Union data policy framework, NEPAD science and technology innovation frameworks, SADC regional policy harmonization

Major discussion point: Global Cooperation and Standardization

Topics: Development | Legal and regulatory

G20 presidency focuses on developing a toolkit to reduce AI-related inequalities from a global south perspective

Explanation: Mashologu describes South Africa’s G20 presidency championing the development of a toolkit to reduce inequalities connected to AI use, particularly from a global south perspective. The toolkit seeks to identify structural and systematic ways AI can both amplify and redress inequality.

Evidence: The G20 agenda includes developing a toolkit to identify structural and systematic ways AI can amplify and redress inequality, especially from a global south perspective

Major discussion point: Global Cooperation and Standardization

Topics: Development | Economic

Jhalak Kakkar

Speech speed: 168 words per minute | Speech length: 2889 words | Speech time: 1031 seconds

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Explanation: Kakkar emphasizes the importance of multi-stakeholder input in creating AI governance mechanisms, but stresses that participation must be meaningful and actually impact outcomes and outputs, not just be done as a formality. She argues that different stakeholders bring different perspectives that lead to better governance outcomes.

Evidence: Different stakeholders sitting at different parts of the ecosystem bring forth different perspectives

Major discussion point: Multi-stakeholder Participation and Process

Topics: Legal and regulatory | Development

Agreed on: Need for meaningful multi-stakeholder participation in AI governance

False dichotomy between innovation and risk management – both must go hand in hand

Explanation: Kakkar argues against creating a false sense of dichotomy between focusing on risks versus innovation, contending that both must go hand in hand. She warns against the mistakes of not developing governance mechanisms from the beginning and emphasizes that regulation and governance are not bad words.

Evidence: Past mistakes of letting technology develop without governance mechanisms from the beginning, leading to band-aid solutions later

Major discussion point: Innovation vs. Risk Management Balance

Topics: Legal and regulatory | Economic

Agreed on: False dichotomy between innovation and risk management – both must go hand in hand

Disagreed on: Innovation vs. Risk Management Balance – False Dichotomy Debate

Need for AI impact assessments and audits to understand societal impacts from the beginning

Explanation: Kakkar advocates for implementing AI impact assessments from a socio-technical perspective to understand impacts on society and individuals. She suggests mechanisms like sandboxes and audits can be implemented in light-touch ways to avoid creating path dependencies that require band-aid solutions later.

Evidence: Need to understand harms and impacts rather than going in circles about not knowing what the risks are

Major discussion point: Regulatory Approaches and Implementation

Topics: Legal and regulatory | Human rights

Disagreed on: Existing Legal Frameworks vs. New AI-Specific Regulation

Multi-stakeholder model should be strengthened through the IGF and other mechanisms for holistic input

Explanation: Kakkar argues that the IGF should be strengthened as part of the WSIS plus 20 review to help with agenda setting and serve as a feedback loop into the CSTD, the WSIS Forum, and other mechanisms. She emphasizes the need for holistic multi-stakeholder input that addresses various concerns from environmental to labor issues.

Evidence: Need to address environmental concerns, the impact of extraction in global majority contexts, and labor for AI, including labeling and worker-related concerns

Major discussion point: Multi-stakeholder Participation and Process

Topics: Development | Legal and regulatory

International coordination needed to set a common baseline while respecting local contexts

Explanation: Kakkar emphasizes the need for international coordination at both multilateral and multi-stakeholder levels to establish a common baseline based on shared values like those in the Universal Declaration of Human Rights. She warns against a race to the bottom in responsible AI adherence while competing for innovation leadership.

Evidence: Existing documents ranging from the UDHR to the ICCPR can be interpreted through international organizations for norm building

Major discussion point: Global Cooperation and Standardization

Topics: Human rights | Legal and regulatory

Agreed on: AI governance must be contextually relevant and locally adapted

Transparency and explainability crucial when bias affects decision-making systems

Explanation: Kakkar acknowledges that bias exists in all human decision-making but argues that AI systems present unique challenges because explainability has been difficult to establish in many AI contexts. She emphasizes the importance of disclosure when AI systems are used and people are subject to biased decision-making that impacts them.

Evidence: Human decision-making has processes and systems with disclosure of thinking and reasoning that can be challenged, but AI systems lack this explainability

Major discussion point: Data Bias and Inclusivity

Topics: Human rights | Legal and regulatory

Agreed on: Importance of transparency and explainability in AI systems

Disagreed on: Approach to AI Bias Management

J

Jovan Kurbalija

Speech speed

147 words per minute

Speech length

2050 words

Speech time

836 seconds

AI has become a commodity with 434 large language models in China alone, shifting the risk landscape

Explanation

Kurbalija argues that AI has transformed from magical technology to affordable commodity in just three years since ChatGPT’s release. He notes that AI development has become accessible, with the ability to develop AI agents in under five minutes, fundamentally shifting discussions about risks and governance from exclusive lab research to widespread accessibility.

Evidence

434 large language models in China as of the session date, ability to develop AI agent in 4 minutes 34 seconds compared to years of research previously required

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

AI is about knowledge, not just data – need to shift governance language back to knowledge

Explanation

Kurbalija argues that AI governance discussions have shifted away from knowledge to focus primarily on data, but AI is fundamentally about knowledge creation and preservation. He points out that WSIS documents originally emphasized knowledge, but this has been cleaned out of recent documents like the Global Digital Compact in favor of data-centric language.

Evidence

WSIS documents from Geneva and Tunis emphasized knowledge as key term, but 20 years later knowledge is absent from GDC and current WSIS documents which only mention data

Major discussion point

Knowledge vs. Data Framework

Topics

Legal and regulatory | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Explanation

Kurbalija argues that knowledge, including local community knowledge like Ubuntu and oral traditions, should be attributed and shared rather than disappearing into what he calls an ‘AI Bermuda Triangle.’ He emphasizes that knowledge belongs to someone and should be attributed even when shared through universal frameworks.

Evidence

Testing in their lab shows specific contextual knowledge being taken, repackaged, and potentially sold back. Examples include Ubuntu, oral knowledge, and written knowledge from local communities in Africa

Major discussion point

Knowledge vs. Data Framework

Topics

Human rights | Development

Disagreed with

Disagreed on

Knowledge vs. Data Framework Priority

Risk of knowledge centralization and monopolization similar to early internet development

Explanation

Kurbalija warns that there’s a risk of knowledge being centralized and monopolized in AI systems, similar to what happened in the early days of the Internet where the promise that anyone could develop digital solutions ended up with only a few being able to do it. He suggests this historical wisdom should inform AI governance solutions.

Evidence

Early Internet experience where initial promise of universal access to development ended with concentration among few players

Major discussion point

Knowledge vs. Data Framework

Topics

Economic | Development

AI responsibility should follow the eternal legal principle since Hammurabi’s code that developers are responsible for their products and activities.

Explanation

Kurbalija argues that AI governance should return to common-sense principles that have existed since ancient times, citing Hammurabi’s law about builders being responsible for house collapses. He contends that whoever develops and deploys AI systems should be responsible for their outcomes, and that AI governance should be explainable to a five-year-old.

Evidence

Hammurabi’s law from 3,400 years ago about builder responsibility, the Napoleonic code, and a hypothetical example of Jovan’s responsibility for Diplo’s AI system making false reports about session participants.

Major discussion point

Innovation vs. Risk Management Balance

Topics

Legal and regulatory | Human rights

Need for joint standardisation, particularly around AI weights sharing to avoid platform lock-in

Explanation

Kurbalija suggests that to avoid fragmentation while allowing local AI adaptation, there should be standardisation around AI weights sharing. He warns against repeating social media platform problems where users cannot migrate their networks between platforms, advocating for standards that allow knowledge portability between AI systems.

Evidence

Current social media platform lock-in, where users cannot migrate networks, EU Digital Services Act trying to address this issue

Major discussion point

Global Cooperation and Standardisation

Topics

Legal and regulatory | Economic

Agreed with

Agreed on

AI governance must be contextually relevant and locally adapted

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Explanation

Kurbalija argues that while illegal biases that insult human dignity must be addressed with urgency, there has been a dangerous obsession with cleaning all biases from AI systems. He contends that humans are naturally ‘biased machines’ influenced by culture, age, and many other identity aspects, and that cleaning biases in AI systems could be, at least, impossible and, at worst, dangerous.

Evidence

Personal example: each of us is influenced by our culture, age, and many other specificities.

Major discussion point

Data Bias and Inclusivity

Topics

Human rights | Legal and regulatory

Disagreed with

Disagreed on

Approach to AI Bias Management

G

Guilherme Canela

Speech speed

145 words per minute

Speech length

1156 words

Speech time

477 seconds

Innovation should mean leaving no one behind in the conversation

Explanation

Canela argues that true innovation in AI governance should be inclusive and ensure that everyone is part of the conversation, not just the 1%. He draws a parallel to Eleanor Roosevelt and the Universal Declaration of Human Rights as an example of groundbreaking innovation that brought people together in an inclusive way.

Evidence

Eleanor Roosevelt holding the Universal Declaration of Human Rights, with its 30 articles, as an example of real innovation that was groundbreaking and inclusive

Major discussion point

Human Rights and Ethical Considerations

Topics

Human rights | Development

A

Audience

Speech speed

140 words per minute

Speech length

1152 words

Speech time

493 seconds

Data gathering inherently contains bias from experts collecting it, need inclusive approaches

Explanation

An audience member from Nigeria argues that data gathering for AI is inherently flawed because it’s done by experts who each have their own biases, making the resulting AI systems biased from the start. They emphasize the need for inclusive approaches that bring all stakeholders into the data gathering process to ensure AI serves all communities.

Evidence

Every individual has their own bias, so whatever data is gathered inherits the biases of the person gathering it

Major discussion point

Data Bias and Inclusivity

Topics

Development | Human rights

Social offset mechanisms could help organizations demonstrate responsibility for AI risks

Explanation

An audience member suggests that organizations using AI could document existing and foreseeable risks and demonstrate, in an objective way and with independent oversight, how they offset those risks. They propose this as a creative way to make AI governance tangible and explainable, drawing a parallel to carbon offset mechanisms (a sketch of the idea follows this entry).

Evidence

The B Corp standard for environmental, social, and governance (ESG) performance; carbon offset mechanisms; and a hypothetical example of social media platforms demonstrating social offsets 15 years ago

Major discussion point

Environmental and Social Justice

Topics

Legal and regulatory | Human rights
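
The sketch below renders the offset proposal as a machine-readable register pairing each documented risk with its offsetting measure and an independent verifier. Every field name and the example entry are hypothetical; no such standard exists in the session record.

```python
# A hypothetical sketch of a 'social offset' register; all field names
# and the example entry are illustrative assumptions, not an existing
# standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class SocialOffsetEntry:
    risk: str                  # an existing or foreseeable risk
    offset_measure: str        # how the organization offsets it
    metric: str                # the objective measure of the offset
    independent_auditor: str   # who verifies the claim

register = [
    SocialOffsetEntry(
        risk="Recommender system amplifies polarizing content",
        offset_measure="Fund independent media-literacy programs",
        metric="Third-party audit of amplification rates, published yearly",
        independent_auditor="Accredited ESG assessor",
    ),
]

# Publishing the register as JSON is what would make the claims auditable.
print(json.dumps([asdict(entry) for entry in register], indent=2))
```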

Need to address environmental impacts and extractivism related to AI infrastructure development

Explanation

An audience member from Mexico challenges the underestimation of AI risks and specifically questions how companies like Meta plan to self-regulate when they haven’t conducted environmental or human rights assessments for hyperscale data centers. They point to examples of data centers being moved from places like the Netherlands to Global South countries without proper consultation.

Evidence

Hyperscale data centers in the Netherlands faced public pressure and were moved to Global South countries or Spain; extractivism, water crises, and pollution affect communities that were never consulted

Major discussion point

Environmental and Social Justice

Topics

Development | Human rights | Sustainable development

O

Online moderator

Speech speed

145 words per minute

Speech length

178 words

Speech time

73 seconds

Questions about the Council of Europe Framework Convention on AI as the first international legally binding treaty

Explanation

The online moderator relayed a question from Grace Thompson about panelists’ views on the Council of Europe Framework Convention on AI, human rights, democracy and the rule of law, the first legally binding international treaty to safeguard people in the development and oversight of AI systems.

Evidence

42 signatories to date including non-European states, advocacy from Center for AI and Digital Policy for endorsement

Major discussion point

Global Cooperation and Standardization

Topics

Legal and regulatory | Human rights

Finance and military sectors are biggest AI spenders but lack transparency about successes and failures

Explanation

The online moderator conveyed Michael Nelson’s observation that the finance and military sectors spend the most money on AI development, yet very little is publicly known about their successes and failures. This raises concerns about transparency and accountability in these critical sectors.

Evidence

The finance and military sectors are the two sectors spending the most money on AI

Major discussion point

Current State of AI Governance Landscape

Topics

Economic | Legal and regulatory

Agreements

Agreement points

False dichotomy between innovation and risk management – both must go hand in hand

Both speakers agree that creating a division between focusing on innovation versus managing risks is counterproductive. They argue that both aspects must be addressed simultaneously rather than treating them as opposing priorities.

Legal and regulatory | Economic

Need for meaningful multi-stakeholder participation in AI governance

Convening power of IGF is crucial for bringing diverse voices together in AI governance discussions

Need for meaningful multi-stakeholder input in AI governance creation, not just participation as a matter of form

Importance of bringing voices from the global south and underrepresented communities to governance dialogues

All three speakers emphasize the critical importance of inclusive, meaningful participation from diverse stakeholders in AI governance processes, with particular attention to ensuring voices from the Global South and underrepresented communities are heard.

Legal and regulatory | Development

Importance of transparency and explainability in AI systems

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Transparency and explainability crucial when bias affects decision-making systems

Need for sufficient explainability in AI decisions that impact human lives and livelihoods

Speakers agree that AI systems, particularly those affecting human lives and decision-making, must be transparent and explainable, with objective and auditable processes for risk assessment and accountability.

Legal and regulatory | Human rights

AI governance must be contextually relevant and locally adapted

Need for sector-specific policy interventions that are technically informed and locally relevant

International coordination needed to set common baseline while respecting local contexts

Need for joint standardization, particularly around AI weights sharing to avoid platform lock-in

Speakers agree that while some standardization and coordination is needed, AI governance must be adapted to local contexts, sectors, and specific needs rather than applying one-size-fits-all solutions.

Legal and regulatory | Development

Similar viewpoints

Both speakers advocate for proactive governance mechanisms that can assess and address AI impacts from the early stages of development, using adaptive approaches like regulatory sandboxes to enable responsible innovation while managing risks.

Need for AI impact assessments and audits to understand societal impacts from the beginning

Should focus on adaptive governance models including regulatory sandboxes for responsible innovation

Legal and regulatory | Economic

Both speakers draw lessons from internet governance history, warning against concentration of power and emphasizing the need for distributed, multi-stakeholder approaches to prevent repeating past mistakes of centralization.

Risk of knowledge centralization and monopolization similar to early internet development

Multi-stakeholder model should be strengthened through IGF and other mechanisms for holistic input

Economic | Development

Both speakers emphasize that human rights principles should be the foundation of AI governance, with international coordination needed to establish common baselines while allowing for local adaptation and ensuring technology serves to empower rather than harm individuals.

AI governance must be anchored in human rights and ensure technology empowers individuals

International coordination needed to set common baseline while respecting local contexts

Human rights | Legal and regulatory

Unexpected consensus

Existing legal frameworks may be sufficient with adaptation rather than new AI-specific regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Transparency and explainability crucial when bias affects decision-making systems

Despite representing different sectors (private sector vs. civil society), both speakers acknowledge that many existing legal frameworks may be applicable to AI governance challenges, though they may need adaptation. This consensus is unexpected given typical tensions between industry and civil society on regulatory approaches.

Legal and regulatory | Human rights

Acknowledgment of natural human bias while focusing on harmful biases

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability crucial when bias affects decision-making systems

Both speakers, despite different backgrounds, agree that not all bias is problematic and that efforts should focus on addressing harmful or illegal biases rather than attempting to eliminate all bias. This nuanced view is unexpected in AI governance discussions that often call for complete bias elimination.

Human rights | Legal and regulatory

Common sense and historical legal principles should guide AI governance

Common sense principles from historical legal frameworks like Hammurabi’s code should guide AI responsibility

Risk assessment processes should be objective, transparent, and auditable similar to GDPR accountability structures

Unexpectedly, both the academic/diplomatic representative and the private sector representative agree that AI governance should build on established legal principles and common sense approaches rather than creating entirely new frameworks. This suggests convergence on evolutionary rather than revolutionary regulatory approaches.

Legal and regulatory | Human rights

Overall assessment

Summary

The speakers demonstrated significant consensus on key principles including the need for multi-stakeholder participation, transparency and explainability, contextual adaptation of governance frameworks, and the integration of innovation with risk management. There was also agreement on building upon existing legal frameworks rather than creating entirely new regulatory structures.

Consensus level

High level of consensus on fundamental principles with constructive disagreement on implementation details. This suggests a mature discussion where stakeholders from different sectors (private, public, civil society, and academic) have moved beyond basic positions to focus on practical governance solutions. The implications are positive for AI governance development, as this level of agreement on core principles provides a strong foundation for collaborative policy development while allowing for contextual adaptation and sector-specific approaches.

Differences

Different viewpoints

Innovation vs. Risk Management Balance – False Dichotomy Debate

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers agree it is a false dichotomy, Claybaugh argues the conversation has been overweighted toward risk and safety concerns and suggests broadening it to include opportunity and innovation. Kakkar counters that regulation and governance are not bad words and warns against repeating the past mistake of failing to develop governance mechanisms from the beginning.

Legal and regulatory | Economic

Existing Legal Frameworks vs. New AI-Specific Regulation

Existing laws and frameworks already address many AI-related harms, need to assess fitness for purpose

Need for AI impact assessments and audits to understand societal impacts from the beginning

Claybaugh advocates for using existing legal frameworks that pre-date ChatGPT and assessing their fitness for purpose rather than creating new AI-specific regulation. Kakkar argues for implementing new AI impact assessments and audits from the beginning to understand societal impacts that may not be covered by existing frameworks.

Legal and regulatory | Human rights

Approach to AI Bias Management

Need to distinguish between illegal biases and natural human biases while maintaining common sense

Transparency and explainability are crucial when bias affects decision-making systems

Kurbalija argues against the ‘obsession’ with cleaning all biases from AI systems, distinguishing between illegal biases that should be addressed and natural human biases that are inevitable. Kakkar emphasizes the unique challenges AI systems present regarding explainability and the need for transparency when biased decision-making impacts people.

Human rights | Legal and regulatory

Knowledge vs. Data Framework Priority

AI is about knowledge, not just data – need to shift governance language back to knowledge

Knowledge should have attribution and belong to communities rather than disappearing into AI systems

Kurbalija uniquely emphasizes shifting AI governance discussions from data-centric to knowledge-centric language, arguing that knowledge should have attribution and belong to communities. Other panelists focus more on data governance, bias, and regulatory frameworks without specifically addressing this knowledge vs. data distinction.

Legal and regulatory | Development

Unexpected differences

Self-regulation vs. External Oversight Effectiveness

Risk assessment processes should be objective, transparent, and auditable, similar to GDPR accountability structures

Need for AI impact assessments and audits to understand societal impacts from the beginning

Need to address environmental impacts and extractivism related to AI infrastructure development

Unexpected tension emerged when Claybaugh discussed Meta’s self-regulatory efforts while the audience member from Mexico directly challenged Meta’s track record on environmental and human rights assessments for data centers. This created an unexpected confrontation about corporate accountability that Claybaugh couldn’t fully address.

Development | Human rights | Sustainable development

Urgency of AI Governance Implementation

We are at an inflection point with many frameworks but questions remain about implementation and effectiveness

False dichotomy between innovation and risk management – both must go hand in hand

While both speakers acknowledged the current state of AI governance, an unexpected disagreement emerged about timing and urgency. Claybaugh suggested taking stock and potentially slowing down (referencing the EU’s reconsideration), while Kakkar emphasized the urgency of implementing governance mechanisms immediately to avoid path dependencies.

Legal and regulatory | Economic

Overall assessment

Summary

The main areas of disagreement centered around the balance between innovation and regulation, the adequacy of existing legal frameworks versus need for new AI-specific governance, approaches to bias management, and the priority of knowledge versus data frameworks in AI governance discussions.

Disagreement level

Moderate disagreement with significant implications. While speakers shared common goals of inclusive, responsible AI governance, their different approaches could lead to fragmented implementation strategies. The disagreements reflect broader tensions in the AI governance community between industry self-regulation and external oversight, between leveraging existing frameworks and creating new ones, and between global standardization and local adaptation. These disagreements are constructive and represent legitimate different perspectives rather than fundamental conflicts, but they highlight the complexity of achieving coordinated AI governance across different stakeholders and regions.


Takeaways

Key takeaways

AI governance is at a critical inflection point with numerous frameworks established but implementation challenges remaining

Multi-stakeholder participation must be meaningful and inclusive, particularly bringing voices from the Global South and underrepresented communities

The innovation vs. risk management debate represents a false dichotomy – both elements must be addressed simultaneously through adaptive governance models

AI governance should shift focus from data to knowledge, with proper attribution and community ownership of knowledge being essential

Human rights must anchor all AI governance efforts, with explainability and human-in-the-loop mechanisms required for high-risk applications

Existing legal frameworks can address many AI-related harms but need assessment for fitness-for-purpose in the AI context

Global cooperation requires standardization (particularly around AI weights sharing) while respecting local contexts and priorities

Bias in AI systems is inevitable, but natural human bias must be distinguished from illegal or harmful bias, with transparency being key

Environmental and social justice considerations, including extractivism and energy consumption, must be integrated into AI governance frameworks

The IGF’s convening power is crucial for bringing diverse stakeholders together to advance AI governance discussions

Resolutions and action items

South Africa’s G20 presidency will develop a toolkit to reduce AI-related inequalities from a Global South perspective

Continue strengthening the IGF as a feedback mechanism into CSTD, WSIS forum, and other multilateral processes

Implement AI impact assessments and audits to understand societal impacts from early stages of development

Develop regulatory sandboxes and adaptive policy tools for context-specific AI governance

Focus on joint standardization efforts, particularly around AI weights sharing standards

Align national AI policies with regional frameworks like the African Union AI strategy

Establish human-in-the-loop mechanisms with clear intervention thresholds for high-risk AI domains (see the sketch below)
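
As a minimal sketch of the last item, the snippet below shows one way a human-in-the-loop gate with explicit intervention thresholds could be wired; the domains, the 0.85 cutoff, and the routing labels are illustrative assumptions rather than anything the session prescribed.

```python
# A minimal, hypothetical human-in-the-loop gate. The high-risk domains
# and the confidence threshold are illustrative assumptions only.
from dataclasses import dataclass

HIGH_RISK_DOMAINS = {"credit_scoring", "medical_triage", "asylum_screening"}
CONFIDENCE_THRESHOLD = 0.85  # below this, a human must decide

@dataclass
class ModelOutput:
    domain: str
    decision: str
    confidence: float

def route(output: ModelOutput) -> str:
    """Escalate to a human when the domain is high-risk or the model
    is not confident enough; automate only the remaining cases."""
    if output.domain in HIGH_RISK_DOMAINS or output.confidence < CONFIDENCE_THRESHOLD:
        return "human_review"
    return "auto"

# High-risk decisions are always escalated, however confident the model.
assert route(ModelOutput("medical_triage", "discharge", 0.99)) == "human_review"
assert route(ModelOutput("spam_filtering", "allow", 0.97)) == "auto"
```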

Unresolved issues

How to achieve global consensus on what constitutes AI risks and how to measure them scientifically

How to balance innovation incentives with regulatory requirements without creating a race to the bottom

How to ensure meaningful participation from global majority countries in AI governance processes given resource constraints

How to address the environmental impact and energy consumption of AI systems in governance frameworks

How to handle responsibility and liability in open source AI models where usage cannot be predicted or controlled

How to prevent knowledge centralization and monopolization while enabling AI development

How to create universal frameworks while respecting local contexts and priorities

How to address the concentration of AI development in certain regions and democratize access to AI technology

How to implement effective transparency and explainability requirements for complex AI systems

Suggested compromises

Use risk assessment frameworks similar to GDPR that are objective, transparent, and auditable rather than prescriptive technology regulation

Implement light-touch regulatory mechanisms like sandboxes and impact assessments to understand harms without stifling innovation

Focus on sector-specific governance approaches rather than one-size-fits-all AI regulation

Distinguish between different types of AI systems (open source vs. closed, high-risk vs. low-risk) for differentiated governance approaches

Build on existing legal frameworks and assess their fitness for purpose rather than creating entirely new regulatory structures

Establish common baseline standards through international cooperation while allowing for local adaptation and priorities

Balance self-regulation by companies with external oversight and multi-stakeholder input

Address both macro-level foresight and micro-level precision in AI governance through complementary approaches

Thought provoking comments

We don’t necessarily agree on what the risks are and whether there are risks and how we quantify them… Can we talk about opportunity? Can we talk about enabling innovation? Can we broaden this conversation about what we’re talking about and who we’re talking with?

Speaker

Melinda Claybaugh

Reason

This comment challenged the dominant risk-focused narrative in AI governance discussions and introduced the concept of a false dichotomy between innovation and safety. It was provocative because it suggested the AI governance community might be overemphasizing risks at the expense of opportunities.

Impact

This comment became a central theme throughout the discussion, with Kakkar directly addressing it as a ‘false dichotomy’ and arguing that innovation and governance must go hand in hand. It shifted the conversation from purely technical governance issues to fundamental questions about how we frame AI development.

We have to change our governance language. If you read WSIS documents, both Tunis and Geneva, the key term was knowledge, not data… Now, somehow, in 20 years’ time, knowledge is completely cleaned. You don’t have it in GDC, you don’t have it in the WSIS documents, you have only data. And AI is about knowledge.

Speaker

Jovan Kurbalija

Reason

This observation was intellectually provocative because it identified a fundamental shift in how we conceptualize information governance – from knowledge (which implies human understanding and context) to data (which is more technical and abstract). It connected current AI debates to broader historical patterns in digital governance.

Impact

This comment introduced a new analytical framework that influenced subsequent discussions about attribution, ownership, and the democratization of AI. It led to deeper conversations about who owns knowledge embedded in AI systems and how to preserve local and contextual knowledge.

Very often, I’m hearing conversations about, you know, we’ve talked about risk. Let’s focus on innovation now. I think it’s creating a false sense of dichotomy. I think they have to go hand in hand… We need to be carrying out AI impact assessments from a socio-technical perspective so that we really understand impacts on society and individuals.

Speaker

Jhalak Kakkar

Reason

This comment directly challenged Melinda’s framing and provided a sophisticated counter-argument that governance and innovation are complementary rather than competing priorities. It introduced the concept of socio-technical impact assessments as a practical solution.

Impact

This response elevated the discussion from a simple either/or debate to a more nuanced conversation about how to implement governance mechanisms that support rather than hinder innovation. It led to practical discussions about sandboxes, audits, and light-touch regulatory approaches.

We advocate for context-aware regulatory innovation… There is no one-size-fits-all when it comes to AI. We need peripheral foundational approaches that are grounded in equity and don’t want AI to replace humans, but we want AI to work with humans.

Speaker

Mlindi Mashologu

Reason

This comment introduced the crucial concept of ‘context-aware regulatory innovation’ and emphasized the Global South perspective on AI governance. It challenged universalist approaches to AI governance while maintaining focus on equity and human-centered development.

Impact

This perspective influenced the entire panel’s discussion about local relevance versus global coordination, leading to deeper conversations about how to avoid AI governance fragmentation while respecting local contexts and priorities.

We should keep in mind that we are bias machines. I am biased. My culture, my age, my hormones, whatever, are defining what I’m saying now… This obsession with cleaning bias was very dangerous. Yes, illegal biases, biases that threaten communities, definitely. But I would say we have to bring more common sense into this.

Speaker

Jovan Kurbalija

Reason

This was a controversial and thought-provoking comment that challenged the prevailing orthodoxy about bias elimination in AI systems. It introduced nuance by distinguishing between harmful biases and natural human perspectives, advocating for a more realistic approach to bias in AI.

Impact

This comment sparked immediate reactions from other panelists and audience members, leading to a more sophisticated discussion about what types of bias are problematic versus natural, and how to handle bias in AI systems without losing valuable diversity of perspectives.

How do we really, truly democratize access to AI? We need to enhance capacity of countries to create local AI ecosystems so that we don’t have a concentration of infrastructure and technology in certain regions… How do we facilitate access to technology and create AI commons?

Speaker

Jhalak Kakkar

Reason

This comment shifted the focus from governance frameworks to fundamental questions of global equity and access. It connected AI governance to broader development and justice issues, introducing concepts like ‘AI commons’ and technology transfer.

Impact

This perspective influenced the discussion toward more structural questions about global AI inequality and led to conversations about how governance frameworks should address not just safety and innovation, but also equitable access and development.

Overall assessment

These key comments fundamentally shaped the discussion by introducing several important tensions and frameworks: the innovation-versus-governance debate, the knowledge-versus-data paradigm shift, the global-versus-local governance challenge, and the bias-elimination-versus-natural-diversity question. Rather than settling these tensions, the comments elevated the conversation to a more sophisticated level where participants grappled with complex trade-offs and nuanced positions. The discussion evolved from initial position statements to a more dynamic exchange where panelists directly engaged with each other’s frameworks, ultimately producing a richer understanding of AI governance challenges that goes beyond simple regulatory approaches to encompass questions of equity, access, knowledge ownership, and cultural context.

Follow-up questions

How can existing legal frameworks be adapted to be fit for purpose with AI technology, particularly in areas like antitrust/competition law, copyright law, and data protection?

Speaker

Jhalak Kakkar

Explanation

This addresses the gap between current regulations and the new realities that AI brings, such as network effects, data advantages, and fair use exceptions being leveraged by large companies in ways not originally intended.

How can we develop mechanisms to share AI model weights and preserve knowledge attribution while enabling interoperability between AI systems?

Speaker

Jovan Kurbalija

Explanation

This is crucial for preventing knowledge monopolization and ensuring that knowledge generated by communities can be preserved and attributed properly, while avoiding the platform lock-in problems seen with social media.

How can we implement AI impact assessments and auditing mechanisms in light-touch ways that don’t burden innovation but help us understand societal impacts?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to understand AI’s impacts on society and individuals before path-dependencies are created, allowing for proactive rather than reactive governance.

How can we ensure meaningful participation from the global majority in AI governance processes, not just token representation?

Speaker

Jhalak Kakkar

Explanation

This is essential because AI will function and impact differently in different contexts, requiring diverse perspectives in governance frameworks rather than one-size-fits-all approaches.

How can we develop context-aware regulatory frameworks that address sector-specific AI applications while maintaining coherent governance principles?

Speaker

Mlindi Mashologu

Explanation

Different sectors (financial services, agriculture, healthcare) require different regulatory approaches, but there’s a need to understand how to balance specificity with consistency.

How can we establish clear responsibility and liability frameworks for AI systems, particularly in cases where AI makes errors or causes harm?

Speaker

Jovan Kurbalija

Explanation

Using the example of Diplo’s AI potentially misreporting statements, this highlights the need for clear accountability mechanisms similar to historical legal principles like those in Hammurabi’s code.

How can we create universal frameworks for AI governance while respecting local contexts and avoiding fragmentation?

Speaker

Kunle Olorundare

Explanation

This addresses the tension between having consistent global standards and accommodating different regional needs and priorities in AI governance.

How can we ensure inclusive data collection processes that account for multiple stakeholder perspectives and reduce inherent biases in AI training data?

Speaker

Kunle Olorundare

Explanation

This is fundamental to creating AI systems that work for everyone, as biased data collection by experts can perpetuate existing inequalities and exclusions.

How can we address the environmental and social justice impacts of AI infrastructure, particularly regarding data center placement and resource extraction?

Speaker

Anna from R3D

Explanation

This highlights the need to consider the broader impacts of AI development, including environmental costs and how they disproportionately affect communities in the Global South.

What are the implications of AI development and deployment in high-stakes sectors like finance and military, and how should these be governed?

Speaker

Michael Nelson (online)

Explanation

These sectors are investing heavily in AI but with little transparency about successes and failures, raising questions about oversight and accountability in critical applications.

How can we establish international coordination mechanisms that set common baselines for AI governance while allowing for innovation?

Speaker

Jhalak Kakkar

Explanation

This addresses the need to prevent a ‘race to the bottom’ in AI governance standards while maintaining space for technological advancement and regional adaptation.

How can we democratize access to AI technology and create AI commons to prevent concentration of AI capabilities in certain regions?

Speaker

Jhalak Kakkar

Explanation

This relates to ensuring equitable access to AI benefits and preventing the same concentration patterns seen in previous technology developments.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.