Global AI Governance: Reimagining IGF’s Role & Impact

25 Jun 2025 15:45h - 17:00h


Session at a glance

Summary

This discussion focused on the Policy Network on AI (PNI) and its role within the Internet Governance Forum (IGF) ecosystem for global AI governance. The panel brought together experts from different regions to examine how PNI can address emerging challenges in AI policy development across national, regional, and global levels.


Shamira Ahmed introduced PNI as a global, multi-stakeholder initiative hosted by the IGF that facilitates inclusive dialogue on AI governance, particularly elevating voices from the Global South. The network has produced significant reports in 2023 and 2024 addressing AI governance best practices, environmental sustainability, human rights, and socioeconomic impacts. Panelists identified several critical concerns including the “intelligence divide” that parallels existing digital divides, with AI potentially deepening inequalities between those who can govern and access AI versus those who cannot.


Regional perspectives revealed common challenges across different contexts. In Italy, efforts focus on education, research, and addressing job displacement from AI automation. Latin American research highlighted inadequate regulatory frameworks, questionable data handling practices, and lack of human rights impact assessments in public sector AI implementation. African representatives emphasized how AI risks replicating colonial patterns, with African data being harvested and processed elsewhere while the continent lacks necessary infrastructure and governance capacity.


The discussion emphasized that AI governance requires learning from past internet governance experiences, where market-driven approaches led to mainstreamed online harms and deepened digital divides. Panelists stressed the need for bold action to reassert fundamental rights principles of transparency, accountability, and ethics as non-negotiable starting points. The role of PNI was identified as bridging gaps between technical development and social impact, fostering trust through multi-stakeholder dialogue, and ensuring that AI development serves public interest while protecting human dignity and environmental sustainability.


Key points

## Major Discussion Points:


– **AI Governance Challenges and Digital Divides**: Panelists discussed various forms of inequality emerging from AI development, including the “intelligence divide,” digital apartheid-like conditions, and how AI systems may replicate historical patterns of discrimination and colonialism, particularly affecting the Global South and Africa.


– **Multi-stakeholder Approach and Trust Building**: The conversation emphasized the importance of inclusive dialogue involving government, civil society, academia, and private sector stakeholders, with particular focus on elevating voices from marginalized communities and the Global South in AI governance discussions.


– **Regional Perspectives on AI Implementation**: Panelists shared experiences from different regions (Italy, Latin America, China, Africa) regarding national AI strategies, regulatory frameworks, and the challenges of implementing AI in public administration while protecting human rights and fundamental freedoms.


– **Role of PNI (Policy Network on AI) within IGF Ecosystem**: Discussion centered on how the Policy Network on AI can serve as a bridge between technical AI development and policy-making processes, contribute to global AI governance frameworks like the Global Digital Compact, and facilitate knowledge sharing and capacity building.


– **Data Sovereignty and Cross-border Data Issues**: Significant attention was given to concerns about data extraction from developing countries, lack of local data centers and infrastructure, and the need for African and Global South countries to have greater control over their data resources used in AI training.


## Overall Purpose:


The discussion aimed to explore the role and potential impact of the Policy Network on AI (PNI) within the Internet Governance Forum (IGF) ecosystem, gather feedback on how PNI can better contribute to global AI governance processes, and facilitate dialogue between diverse stakeholders on addressing AI-related challenges while ensuring inclusive and rights-respecting AI development.


## Overall Tone:


The discussion maintained a professional and collaborative tone throughout, with participants demonstrating both urgency and cautious optimism. While there was clear concern about AI’s potential harms and existing inequalities, the tone remained constructive and solution-oriented. The conversation became more passionate when addressing issues of digital colonialism and representation gaps, but overall maintained a diplomatic and academic atmosphere focused on building consensus and identifying actionable pathways forward.


Speakers

**Speakers from the provided list:**


– **Elizabeth Orembo** – Research Fellow, Research ICT Africa; Moderator of the panel


– **Mario Nobile** – Director General, AGID Agency for Digital Italy


– **Shuyan Wu** – Deputy Director, User and Market Research, China Mobile Institute


– **Ivana Bartoletti** – Virtual panelist (specific role/title not mentioned in transcript)


– **Shamira Ahmed** – Founder and Executive Director, Economic Policy Hub of Africa (Data Economy Policy Hub)


– **Paloma Lara-Castro** – Policy Director, Derechos Digitales


– **German Lopez Ardila** – Online moderator from the Colombian Chamber of IT and Telecoms


– **William Bird** – Director, Media Monitoring Africa


– **Audience** – Various audience members who asked questions


**Additional speakers:**


– **Poncelet** – Member of the Policy Network for AI


– **Adriana Castro** – From Externado University of Colombia


– **Kunle Olorundari** – President of Internet Society Nigeria chapter


– **Kossi Amessin** – From Benin


– **Jasmine Khoo** – From Hong Kong, part of PNAI and Asia Pacific Policy Observatory


– **Titti Cassa** – Mentioned as being in the IGF Italy Secretariat (present but did not speak)


– **Amrita** – Mentioned as someone who can provide more information about PNAI (present but did not speak)


Full session report

# Comprehensive Report: Policy Network on AI (PNI) and Global AI Governance Discussion


## Introduction and Context


This discussion, moderated by Elizabeth Orembo from Research ICT Africa, brought together a diverse panel of international experts to examine the role of the Policy Network on AI (PNI) within the Internet Governance Forum (IGF) ecosystem for global AI governance. The panel featured representatives from Italy, China, Latin America, Africa, and other regions, alongside civil society organisations and academic institutions.


Orembo outlined the session structure: a 5-minute introduction to PNI, followed by a 30-minute panel discussion, a Mentimeter questionnaire to gather audience values, a Q&A session, and concluding with a survey for feedback. German Lopez Ardila served as the online moderator, facilitating questions from virtual participants.


## Overview of the Policy Network on AI (PNI)


Shamira Ahmed, Founder and Executive Director of the Data Economy Policy Hub (DEP Hub), provided an introduction to PNI as a global, interdisciplinary, bottom-up multi-stakeholder initiative hosted by the IGF. She explained that PNI facilitates open dialogue on AI governance through collaborative working groups that transform community insights into policy briefs and recommendations.


Ahmed highlighted PNI’s 2024 report on “Multi-stakeholder Approaches to AI Governance,” which was presented at the previous year’s IGF in Riyadh and focuses on environmental sustainability considerations. She emphasised that PNI’s mission centres on facilitating inclusive dialogue that particularly elevates voices from the Global South, addressing a critical gap in current AI governance discussions.


## Regional Perspectives on AI Governance


### Italian National Strategy


Mario Nobile (correcting the pronunciation of his name during the session), Director General of AGID Agency for Digital Italy, outlined Italy’s comprehensive approach to AI governance built on four strategic pillars: education, scientific research, public administration, and enterprises.


Nobile provided a key reframing of AI governance challenges, arguing that “the debate is not humans versus machines but rather those who know, manage and govern AI versus those who don’t.” He emphasised that technology evolves faster than governments can adapt, necessitating continuous discussion and updating of principles within existing regulatory frameworks.


He suggested that PNI could establish a comprehensive repository of AI governance information, including research papers and best practices across different sectors such as health, manufacturing, transportation, and tourism.


### Latin American Implementation Challenges


Paloma Lara-Castro, Policy Director at Derechos Digitales, provided analysis of AI implementation across Latin America, highlighting significant gaps between technological deployment and adequate governance frameworks. She noted that Latin American states are implementing AI systems in sensitive public policy areas without adequate regulatory frameworks or human rights impact assessments.


Lara-Castro introduced the concept of AI as a “socio-technical tool,” emphasising that AI systems arise from society and carry existing social conditions and inequalities. Her research revealed concerning patterns across the region, including questionable handling of personal data due to database fragmentation and lack of common standards.


She emphasised the importance of multi-stakeholder platforms like IGF, particularly as civic space shrinks and centralised UN discussions create participation barriers for civil society organisations.


### Chinese Perspective on International Cooperation


Shuyan Wu, Deputy Director of User and Market Research at China Mobile Institute, addressed the global nature of AI governance challenges, noting that AI development brings uncertainties including misinformation, information leakage, and widening digital divides that require international cooperation.


Wu introduced the concept of the “intelligent divide” as distinct from traditional digital divides, recognising that AI creates new forms of inequality beyond mere access to technology. She advocated for strengthening dialogue mechanisms and creating databases of best practices while implementing the Global Digital Compact.


### African Data Sovereignty Concerns


William Bird, Director of Media Monitoring Africa, drew parallels between current AI development patterns and historical colonialism, arguing that AI is recreating colonial-style inequalities with data harvested from Africa but processed elsewhere without local benefit or control.


Bird noted that “we’ve seen this movie before” regarding allowing markets to determine technological development without adequate governance. He highlighted critical infrastructure gaps, noting that African countries lack AI infrastructure and data centres on the continent while serving as primary data sources for global AI systems.


Bird advocated for collective action, arguing that African states need to mobilise collectively and impose digital development taxes on multinational technology companies to address fundamental inequalities.


## Audience Participation and Key Questions


### Mentimeter Results


The session included a Mentimeter questionnaire where participants identified key values for AI governance. Results showed priorities including “digital cooperation,” “digital divide,” “integrity,” and “inclusion.”


### Audience Questions and Discussions


Several significant questions emerged from the audience:


**Kossi Amessin** raised a critical question about African data sovereignty: “In Africa we produce data but our data are often externalised or they are accessible in data centres that are not on our territories. How can we participate to… how do we participate to the training of artificial intelligence in our own labs without having the data on site?”


**German Lopez Ardila** asked about coordination challenges: How can we benefit from UN-level AI discussions while avoiding regulatory fragmentation across different UN agencies and bodies?


Other audience questions addressed:


– Whether there should be universal AI frameworks or context-specific approaches


– The concept of “digital apartheid” and its implications


– Sectoral versus general approaches to AI governance


– Coordination between different UN governance documents and processes


## Key Governance Challenges


### The Intelligence Divide and Digital Inequalities


A central theme was recognition that AI creates multiple forms of digital divides beyond traditional connectivity issues. Nobile’s reframing of the challenge as being between “those who govern AI versus those who don’t” provided a framework that influenced subsequent discussions.


The discussion revealed how these divides manifest differently across regions – in Africa through data sovereignty and infrastructure gaps, in Latin America through regulatory capacity issues, and in developed countries through workforce transition challenges.


### Regulatory Approaches and Coordination


Concerns about fragmentation in AI governance discussions across different international forums emerged throughout the session. Multiple speakers recognised the risk of duplication and contradiction across various institutions (UN, UNESCO, ITU, G7, G20) developing parallel AI governance processes.


The discussion suggested that platforms like IGF and PNI could play important coordination roles, serving as bridges between different governance processes and ensuring multi-stakeholder perspectives inform formal multilateral negotiations.


### Multi-stakeholder Participation


There was consensus across speakers about the importance of multi-stakeholder participation in AI governance, with emphasis on inclusive dialogue that brings together diverse voices, particularly from marginalised communities and the Global South.


## Suggested Actions and Recommendations


The discussion generated several concrete recommendations:


– **Repository Development**: Nobile suggested PNI establish a comprehensive repository of AI governance information and best practices across sectors


– **Collective Action**: Bird called for African states to mobilise collectively on digital development taxation


– **Strengthened Dialogue**: Multiple speakers emphasised strengthening dialogue mechanisms and experience sharing while implementing the Global Digital Compact


– **Capacity Building**: Addressing knowledge gaps through literacy programmes and skills development


## Session Conclusion


Elizabeth Orembo concluded the session by directing participants to a survey for additional feedback and thanking the panelists and audience for their contributions. The discussion demonstrated both the complexity of global AI governance challenges and the potential for multi-stakeholder approaches to address them.


The conversation highlighted that AI governance involves questions of power, equity, and human rights, with particular attention needed for addressing digital divides and ensuring inclusive participation in governance processes. PNI’s role emerged as that of a bridge-builder, helping ensure diverse voices are heard in AI governance while contributing to coordination across different international processes.


Session transcript

Elizabeth Orembo: Please welcome to the stage, the Moderator, Elizabeth Orembo, Research Fellow, Research ICT Africa. Good evening, ladies and gentlemen. I’m trying to look at my time to be correct. That is not afternoon. I think it’s afternoon. Good afternoon, ladies and gentlemen. Thank you. Thank you for being here and also thank you to our remote participants who are joining us from various locations. This is going to be a Policy Network on AI topic and we are going to discuss the role of the PNI network within the ecosystem of the IGF. But before I go on to introduce this panel, let me welcome my fellow panelists, who are Mario Nobile, Director General, AGID Agency for Digital Italy. We have Paloma Lara-Castro, Policy Director, Derechos Digitales. We also have William Bird, Director, Media Monitoring Africa, and we have Shuyan Wu, Deputy Director, User and Market Research, China Mobile Institute. Before we start this panel, we’ll have a small presentation to talk to us about what PNI is about, and that will be Shamira Ahmed, Founder and Executive Director, Economic Policy Hub of Africa. Welcome, my panelists, to the stage. Thank you, everyone. I decided to cross flow here because that was a bit far from you and I want this session to be very, very interactive. Like I had started introducing what this panel is going to be about, it’s some sort of a discussion about what the PNI is about and its role in the internet governance as well as the AI governance ecosystem, given that there are a lot of policy processes happening globally within different multilateral systems, regional blocs and even at the national level. So, we are going to discuss the role of PNI on those levels, what PNI is about, the capacity building it does, research and all that. 
So, this is going to be the structure of our session because it’s not just a dialogue but it’s also a feedback mechanism where we discuss amongst the panelists and also amongst the audience that we have here as well as the remote participation that we have on what’s going to be the role of PNI also moving forward. So, we are going to have a five-minute introduction of what PNI is about and then afterwards we are going to have a 30-minute discussion with the panel that we have. Each of us, we are going to share those 30 minutes and then we are going to have a Mentimeter questionnaire. It’s going to be shown on the screen. So, prepare your phones. I hope your phones are charged so that we start having those interactions through the Mentimeter. And then afterwards, we’ll have questions and answers from the participants and from the panel itself. And then lastly, we’ll share a survey with you which we will ask you to participate in the questions that PNI is asking about how it’s going to shape its role going forward. So, before the panelists, let me give it to Shamira Ahmed to introduce us to PNI and what it’s about. Thanks.


Shamira Ahmed: Thanks, Liz. So, my name is Shamira Ahmed and I am the Executive Director of the Data Economy Policy Hub, the DEP Hub, and it’s the first independent think tank that’s founded by an indigenous African woman in South Africa. So, I’m honored to present the work of the PNI. It’s a global interdisciplinary bottom-up multi-stakeholder initiative that’s hosted by the IGF. At its core, the PNI is dedicated to facilitating open, inclusive, multidisciplinary dialogue on global AI governance. And we prioritize elevating diverse voices, particularly from the global south, to be involved and promote meaningful international cooperation grounded in multidimensional challenges that are often avoided in many other AI governance forums. And some of the discussions we highlight are human dignity, intersectional equity, justice, mental well-being, and environmental protection. In a practical sense, PNI operates through an open, collaborative process, and we do this through our various multi-stakeholder working groups that aim to transform community insights into concrete recommendations and policy briefs. We also ensure regular engagement through online meetings from town halls, consultations, and other forms of collaborative participation to ensure that our insights and our outputs genuinely reflect a wide range of perspectives. In terms of our achievements to date, we have two major milestones that we have developed since our inception. The first one was the development and presentation of our 2023 report that we presented at the 2023 IGF in Tokyo, entitled Strengthening the Multi-Stakeholder Approach to Global AI Governance, Protecting the Environment and Human Rights in the Era of Generative AI. The report identified good practices for AI governance. It emphasized the need for transparency, fairness, and accountability, and highlighted the importance of environmental sustainability and human rights in the age of generative AI. 
Our second milestone was the 2024 report, which was presented at last year’s IGF in Riyadh, and this policy brief deepened our focus on our four priority areas. We focused mainly on AI governance, interoperability and good practices, environmental sustainability throughout the AI value chain, and labor issues and the socioeconomic impact of AI. We also highlighted the liability and accountability frameworks that are necessary for inclusive and sustainable global AI governance. Looking ahead, we aim to develop a policy brief this year, which will further advance the themes I mentioned and that we have done previously, and respond to multi-dimensional emerging challenges that take place in this dynamic space. Beyond our reports, as I mentioned, PNI has organized multiple interactive sessions where we have brought together stakeholders from government, civil society, academia, and the private sector to share perspectives, co-create knowledge products, and build capacity for responsible global AI governance. PNI serves as an open platform for co-creating meaningful dialogue, knowledge exchange, and policy innovation. As a community, we are committed to ensuring that AI is developed and governed in ways that are ethical, inclusive, and beneficial to all. So, I invite you to join in our network if you want to be part of a global collective. I invite you all to engage with our work, contribute your expertise, and help us shape a future where AI serves the public interest for all people and the planet, now and in the future. Thank you.


Elizabeth Orembo: Thank you, Shamira Ahmed, for that introduction of PNI as an initiative within the IGF looking at AI governance. It’s a policy network on AI governance. Now, to discuss some of the things that we are going to discuss in this panel, I’d like to remind the participants that there’s translation, if you have the translation gadgets, and that’s because Shuyan Wu is going to respond to our questions in Chinese. So, just get ready for that. To Shuyan Wu, just to set stage for this discussion and to understand the context in which PNI exists, I’d like you to tell us what are those risks that you’re concerned with from your research center about the global and even national and even regional risks that you’re concerned with, and how can regulation as well as PNI address these risks?


Shuyan Wu: Okay. Thank you very much. I want to focus on the IGF and PNAI’s role in global AI governance and also, in the context of current AI development, which focus areas we should work on. We know that the IGF is an important platform for international AI governance; it promotes inclusiveness and capacity building and has great advantages. It has a bottom-up model which encourages the participation of multi-stakeholders, and PNAI is also a multi-stakeholder platform. It focuses on the hard topics to gather consensus on AI development. You know that AI development is unstoppable because of the iteration of innovation. The adoption of AI technology is widespread, but it also brings uncertainties. At the moment we see misinformation and information leakage and also the digital divide, which are areas of concern for the international community. No single country or body can deal with those challenges. We have to work together to tackle all these risks and challenges. So I believe that the IGF and PNAI will play a crucial role, and they will be increasingly important. As to the areas of focus, I think that the intelligent divide is one of the focus areas, because innovation comes with uncertainties. At the moment AI technology is growing fast and has brought a lot of benefits, but how to minimize or mitigate the intelligent divide before it grows wider is an important topic that international AI governance should face. We think the intelligent divide involves both access and governance. In September 2024, the GDC also mentioned that we need to promote the multi-stakeholder model to tackle the challenges. How to set up the rules, how to formulate the pathway, how to use the advantages of the multi-stakeholder model to learn from best practices, and how to strengthen communication with other platforms: I think that is what the IGF needs to focus on currently and in the future. How to realize the platform value of the IGF and PNAI? I would touch on a few areas. 
First, we need to strengthen the dialogue mechanism to bring the public sector, the private sector, academia and other stakeholders into cross-disciplinary dialogues. Second, to share more, we can set up databases of use cases to promote experience sharing, and we can try to build an improved collaborative model. Third, we need to implement the GDC together. We need to use regional and national strengths to promote the objectives of the GDC. That concludes my thoughts.


Elizabeth Orembo: Thank you, Shuyan Wu. You’ve really touched on very important topics: not just the digital divide but also the intelligence divide, and also the need for PNAI to have an impact within the GDC process and the other AI policy development processes that are currently happening, which is a discussion that the other panelists are going to take up. But before we move there, let me turn to Mario Nobile from Italy. If you can, speak on the context of Italy, as well as tell us some of your national and also regional progress on AI and how PNAI fits into this national and regional context. How can it support some of these processes at that level?


Mario Nobile: Thank you. It’s Nobile, not in English nobile but it’s the same. It’s Mario Nobile. I think that, talking about AI governance, we need first to understand the position of our ship in this big ocean and in the current context, also given the hype generated by some CEOs for purely commercial reasons. Some argue that if you say hello or thank you or you are welcome to a chatbot, you can improve its response. But until artificial general intelligence is developed, this is not the debate. The debate is not humans versus machines but rather those who know, manage and govern AI versus those who don’t. And this knowledge gap calls for action like literacy programs, like upskilling and reskilling initiatives. In Italy, and I go to answer your question, our Italian strategy rests on four pillars: education, scientific research, public administration and enterprises. Our efforts aim to bridge the divide, ensuring inclusive growth and empowering individuals to thrive in this AI-driven era. I think that the IGF could help us, also the PNAI; they could establish a repository of information on AI governance, also including research papers and best practices. I agree with the Chinese colleague, we need best practices for AI governance in different sectors: health, manufacturing, transportation, tourism. And this repository could serve as a valuable resource for discussion on AI governance. And I think that the big point now, in Italy but all over the world, is the potential for AI to displace jobs. In Italy, as all over the world, several think tanks suggest shifting the tax burden from labor to digital capital to avoid inefficient automation. We know why automation may cause a loss of jobs. We know this. How can we implement innovative policies to mitigate this impact? So, and I go to conclusion, we have little time. I think that we need discussion. 
We in Italy, the Agency for Digital Italy, I am here with Titti Cassa, she’s in the IGF Italy Secretariat, and we need the discussion. We need a multi-stakeholder dialogue with academia, with communities, with policy makers. We are policy makers, with industrial stakeholders and civil society organizations. How we think about job losses, how we think about measures that can reach the goal that no one is left behind by this revolution. So I do believe that the IGF and PNAI can contribute with repositories and discussions. Thank you.


Elizabeth Orembo: Thank you, Mario, for also touching on AI divides, not just intelligence divides. You also spoke about those who are governing AI and those who can’t participate in AI governance, as well as impacts of AI on automation: who’s going to be displaced, who’s not going to be displaced, who it gives power to, and who remains disadvantaged there. Thanks for those remarks. I’ll move to Paloma, and I’d like to ask you, using the Latin American experience, and I love how we are regionally diverse so that we give experiences from our different regions: How can PNAI ensure multi-stakeholder dialogue? How can PNAI enhance capacity building, in terms of governance of AI and also in terms of skills for those who are going to use AI?


Paloma Lara-Castro: Thank you, Liz. Hi, everyone. Thank you for the space. I’m representing Derechos Digitales. We are a Latin American organization with 20 years of experience working at the intersection of technology and human rights. In our area of work, we’ve been focusing a lot on AI, not only because it’s an emerging issue regarding human rights, but because we participate very actively in different global, regional, and local processes regarding regulation and policy. And we are basing all of our interventions on evidence-based arguments. So in that sense, I would like to mention that within our frame of work, we have a program called Artificial Intelligence and Inclusion, where we have had active research since 2019 on the increasing use of AI in sensitive areas of public policy in Latin America. This is important, and it relates to your question, but I’m going to expand a bit more on what we found in this research. This research departs from the concept that AI is a socio-technical tool, which means that it arises from society, and so it comes with the baggage of all the social conditions of its production. Thus, to understand the potential impact on fundamental rights, it is not enough to just analyze the technology in itself, for example, to examine algorithms or automation processes, since they are not implemented in isolation or in a vacuum. Policies that use AI as a tool are implemented in specific social and political contexts, in countries with diverse democratic compositions and regulatory frameworks, democratic characteristics that are tied to historical processes, and governmental administrations that respond to the specific situations of each territory. 
In that sense, special attention should be paid to the differentiated impact that the use of AI may have on historically discriminated groups, for example, on the basis of gender, or indigenous communities, in compliance with the principles of equality and non-discrimination. Concretely, our investigation seeks to assess the human rights impacts of state usage of AI and how and whether the principles of legality, necessity, and proportionality are taken into account. The cases that we analyzed show examples of use in sensitive areas of public administration, such as employment, social protection, public safety, education, and the management of procedures, as well as usage in the administration of justice. The frequent perception of these technologies, focused mainly on their utility in streamlining processes, has a decisive influence on the decision to apply them in the public sector. What we found is that the implementation of technologies based on artificial intelligence by the state poses important challenges in terms of the protection of fundamental rights. One of the main difficulties within our research was to find information, both from open sources, to learn about the actual use, and from interviews with state representatives, given the reluctance of some officials to give details of such use. It is worth mentioning that many of these technologies are implemented by companies, universities, or third parties contracted by the state under different modalities, which makes access to information even more difficult, and also shows that in Latin America, different states are delegating, to a greater or lesser extent, decisions to other persons or corporate entities that in turn execute automated decision-making models for the development of state action. 
Some of the main findings of this research are, first of all, inadequate regulatory frameworks and lack of compliance with international human rights obligations, such as, for example, the three-part test. It is also important to consider the different levels of regulatory development in the region with respect to, for example, data protection. We have countries that are very advanced and have independent authorities, and then we have other countries that still don’t have any data protection laws. The human rights impacts related to the use of artificial intelligence have already been recognized in different international pronouncements, which highlight not only the need for these tools to fully comply with international human rights, but also that, in case they don’t meet the criteria at the outset, they have to be subjected to a ban or even a moratorium. This is something we’re not seeing happen in the process of implementing this technology in the public sector. There is no risk assessment on human rights that departs from international pronouncements on the matter. The other important thing that we found is questionable handling of personal data. Given that the main input for the technical development of artificial intelligence-based technologies is data, this issue is central. However, states in Latin America have difficulties in maintaining robust data use, management, and storage practices, partly due to problems such as fragmentation of databases, heterogeneous perspectives on data among state agencies, the diversity of information systems, and the lack of a common language. It is necessary to consider that if we don’t take into account specific impact assessments on human rights, we are not only deepening structural inequalities and especially harming marginalized communities, but also generating new forms of exclusion. 
Just to link it back to the IGF, and I’m going to wrap up with this, it’s important to note that one of the main characteristics of this process, besides the lack of human rights impact assessments, is the lack of participatory mechanisms for civil society and for other stakeholders. This brings us back to why the IGF, and not only the global IGF but also the regional and local IGFs, is so relevant to this issue, considering that these are spaces that are maintained as an essentially multi-stakeholder model. While we’re seeing an accelerated shrinking of civic space, these spaces become even more important when we think about how global policies also advance into centralized discussions in New York, which also brings a deepening of other obstacles such as visa constraints, language barriers, and sometimes financial aspects. To bring it back, we need to secure the spaces that have already been proven to be effective in a multi-stakeholder model, and, looking at the IGF and its connection with WSIS and the WSIS+20 review, there is an urgent need to advance towards a vision of digital justice with a gender and an intersectional perspective that contextualizes the WSIS core vision to the diverse lived experiences of marginalized communities that are deeply affected by state and corporate authoritarianism. It’s a crucial moment. Inclusion and recognition of differentiated impacts is essential to secure a rights-respecting future. Thank you.


Elizabeth Orembo: Thank you so much, Paloma. As we don’t have much time, I’d like to turn to William Bird to give us some of the context in Africa, and, in answering this question: what unique value can the IGF and PNI bring to global AI governance, as well as regional AI governance, now in the context of Africa?


William Bird: Thanks. I had some notes, but actually I’ve been attending a lot of sessions, and some of the things have shaped what I want to say, which is that we’ve seen this movie before, right? We’ve seen what happens when you allow the markets to determine what goes on. We’ve seen what happens if you say, let’s not regulate. There were people on this same stage earlier today saying, no, no, no, don’t regulate, let us do it, we’re here for the good of the world, and these kinds of patently ridiculous things. What that resulted in is that we’ve mainstreamed misogyny. We’ve now got online gender-based violence as a default operating mechanism on so many of these social media platforms, and that’s just one of the areas of online harms. If it’s xenophobia, hey, social media is your new tool of play. It’s the thing that has deepened and replicated online violence, deepened inequalities, and, fundamentally for our continent, deepened the digital divide in ways that have made it that much greater. At the moment, there seems to be very little, let me not say nothing, that suggests that this AI revolution is going to shift in a different direction for the people of Africa. We see that digital inequality replicated not just in terms of those who have access to the internet, but in the inequalities of the actual infrastructure, the data centres, all of the things that make sure AI can function; most of them aren’t on our continent. The means and ability for Africans to use this, to harness it, aren’t on the continent, and yet, when you look at where they’re taking their data from, Africa, again, is fodder for these LLMs and these other mechanisms. So we’ve seen this, and if we don’t learn the lesson, the IGF is going to ultimately fail, because this is the latest test case, right? 
The thing that’s curious about this is that the Global Digital Compact says our goal is an inclusive, open, sustainable, fair, safe and secure digital future for all. It’s a lovely goal, and yet we already seem to be miles away from it. I would have thought that this move by the United States government to effectively ban any kind of regulation of AI systems would have the people here running around in a state of near panic, or at least coming out with clear, unambiguous, strongly condemning statements that say this is not okay, because this is how we go down exactly the road of deepening the digital divide. This is how you allow people’s data to be exploited; this is how you allow the worst that AI can deliver to be delivered. So I think if we’re serious about it, we need to be saying that the role of the IGF and the policy network on AI is this: this is the time to be bold. We need to be reasserting fundamental rights and not allow those to be ghosted. I think we need to go back to those basic principles of transparency, accountability, and ethics. Those must be the starting point, and AI systems must only be permitted if they don’t undermine those principles. My last point, and then I’ll be quiet, is about being multi-stakeholder. We’re very good at this in South Africa. We have people that literally hate each other sitting down at the table with each other, but that doesn’t mean they sit there and don’t talk about anything difficult. The nature of these things is that the IGF must facilitate hard conversations. It must make sure that the hard conversations are had and that, at the end of them, the rights-based principles, the principles of dignity, equality, sustainability, are the things that emerge. No matter what else gets said, those are the principles that we cannot deviate from, because if we allow ourselves to deviate from them, then all of the evils that we’re talking about are just going to persist.


Elizabeth Orembo: Thanks, William. I like how, across the panelists, we’ve talked about the different kinds of divides we have. It takes me back some 10 years, when we were still talking about Internet governance and connecting the next billion. Connectivity was an issue, and it’s still an issue right now. There are lessons we can borrow from the milestones we’ve achieved over those 10 years, and from the persistent challenges we still have in connectivity and use of the Internet, even in how the Internet has evolved from an infrastructure that was so open to the different kinds of business models that exist now, which are actually entrenching some of these inequalities. So I see there are very many lessons to learn, not just from where we are with AI, but also from global Internet governance: the principles we had then, what principles we have right now, and what principles we have had to let go because of how the world and technology have evolved. So thanks for that. I’d like to bring in our virtual panelist, Ivana Bartoletti. Please forgive me for that; I used to think that African names are the most difficult to pronounce, but these are challenging as well. So welcome, Ivana Bartoletti. I’d like to ask you the same question I asked William Bird: to put PNI into the context of the different policy processes that are happening. What can the IGF and PNI bring into these global policy processes? What are the values we can insist on being there in the processes led by the UN, and in the different multilateral processes like the OECD, and even the African Union and the European Union? Over to you.


Ivana Bartoletti: Thank you very much, and so sorry for not being able to be physically with you. On this really interesting question, I wanted to say just a couple of things. The first is that we’re living in quite a strange time, because I think a lot of people across the world are looking at artificial intelligence with both excitement and fear. Many, many people know how AI can be a source of good: it can take medicine to places where it can’t otherwise be taken, it can help education, it can improve our well-being. But I also think a lot of people know the potential harms, be it disinformation, be it damage to our cognitive abilities, be it, for example, issues related to the impact on young people because of the persuasive capabilities of these tools. So we have all that on the one hand. And on the other hand, we are still discussing whether we need governance or not. We’re still discussing how to organize governance and wrap controls around artificial intelligence, which is a fantastic technology that can bring a lot of good to our world if we manage it properly. If we only look at what happened over the last few weeks, and it was mentioned earlier, there are discussions in the US about a moratorium. There are discussions in the European Union about delaying implementation of the European AI Act. But at the same time, there are countries such as Japan, which has introduced different but interesting governance around artificial intelligence, or India, where, for example, there is an important privacy and data protection bill that has an impact on artificial intelligence. The reason I mention this is because I see the policy network as a bridge between these two things. 
On the one hand, there is the concern that people have about the social and economic impact of systems that make decisions about people, decide what we see, have an impact on our human rights, and shape quite differently the way our social relations are organized. On the other hand, artificial intelligence is part of the arsenal that people have when it comes to global competition, approached in different ways across the world, and still with a patchwork approach to regulation. So what I see in the policy network of the IGF is the capability to bridge the gap between these two dimensions of artificial intelligence. Things, for example, like: what are best practices at a global level in the way artificial intelligence is being governed? What are the skills needed for businesses and for the public sector across the world to leverage data without infringing upon the rights of individuals? And here I’m thinking about technologies to enhance privacy. How do we train a generation of business leaders to use AI not because it’s glamorous, not because there is a hype, but because it increases productivity and makes the work of employees better? How do we create and use AI in a way that does not destroy our environment, adding to the pollution that we already have? All of these are practical elements and ways in which we can govern artificial intelligence. I speak as a business, and I want to be clear: businesses across the world using AI need governance. Because without governance, we can’t innovate. We can’t create long-lasting innovation rooted in the ability to reinvent the way we work and operate. We need AI that we trust. So trust, to me, is the most important element. I want people to trust AI, and companies to trust AI, so that we can use it and, by using it, enhance productivity. But to generate that trust, organizations like the Policy Network are crucial, because we need to share how we create that trust. 
How do we embed human rights, privacy, cybersecurity, and legal protection into the design and deployment of these tools? Without places where all these levels of governance can be discussed, shared, and implemented, I think it’s going to be difficult to move forward in the fragmented world we live in. I think the importance of avenues like this is only going to increase. Thank you.


Elizabeth Orembo: Thank you, Ivana Bartoletti, and apologies if I am butchering your name. Thanks for those interventions. I think now, across our panelists, we’ve heard what the concerns are on the governance of AI, from the digital divide to human rights to harms, and, as Ivana says, the role of PNI can be to build trust within these different policy processes, so that we can instill or champion some of these values from the IGF into those processes; it’s basically the same people you see in these processes that you also see at the IGF. They are still in our networks. Now, I haven’t forgotten our virtual moderator, who I will ask whether we have any comments or questions from the virtual participation. Over to you.


German Lopez Ardila: Yeah, thank you everyone. Please introduce yourself and then go ahead and tell us what’s happening virtually. Absolutely, thank you. This is German Lopez from the Colombian Chamber of IT and Telecoms. It’s a pleasure to join you as online moderator. So far we don’t have any particular question in the chat. However, given the conversation you’re having right now, I might ask a question to the panelists. How do you think we should, on the one hand, profit from all these different discussions that are happening at the UN level, which I think pretty much all of you have mentioned, while also avoiding regulatory or dialogue fragmentation that makes it difficult for all these different compartments inside the UN to coordinate properly? I think we all know what we’re doing here at the IGF with AI, but also what’s happening with UNESCO and what’s happening with the ITU. So I would like to know your take on it. Thank you very much.


Elizabeth Orembo: Many thanks, Lopez, and thanks to our panelists. Now I’d like to turn to the audience here, but before I do, there’s a Mentimeter that will go live on the board. Please make use of your phone and try to answer that question on menti.com. The code is 3350 2806. And as we engage with menti.com, do we have any questions from the floor? I don’t know whether we have a standing mic so that we can have a queue here at the center. We only have four minutes, so if you can squeeze your question into 14 characters, that would be great.


Audience: Yeah, thank you, Elizabeth. Poncelet speaking, a member of the Policy Network on AI. What I want is to get views from the panelists on the UN document on governing AI for humanity. We have gaps in terms of representation, and I’ll just focus on that to see what their comments are on governance. Thank you. Thank you, Poncelet. Next one. Okay, hi, my name is Adriana Castro from Externado University of Colombia. Thank you for the panel. Please share your thoughts on whether AI governance should take a general approach with global minimums, or a sectoral or local approach, and whether there is an urgency to prioritize some sectors in particular. I have second and third questions. In your consideration, what is the role of the UN Guiding Principles on Business and Human Rights in AI governance? Finally, could you share some comments on the role of the social sciences and academia in AI governance, and some insights on how they can be included in the conversation? Thank you very much. Thank you. Yeah, I’m from the PNAI. I actually have a question to the panel: what kind of improvements do the PNAI and the IGF need to make in order to have more intervention power, more influence, in policymaking? It’s a multi-stakeholder process, while there’s a multilateral process going on at the UN level. So how do you perceive the relationship between the PNAI, or the IGF, and all these multilateral processes at the global level? Thank you. All right, thank you very much. My name is Kunle Olorundari. I’m from Nigeria. I’m the president of the Internet Society Nigeria chapter. So I’ll go straight to my question. It has to do with trust. For me, trust is a big deal, especially in an environment where a lot of people are developing different applications when we talk about AI. So my question is this. 
I want to hear the views of the panelists with respect to having a universal framework on artificial intelligence, because I believe that is what can actually build this trust we are talking about: trust in which every stakeholder is carried along. Thank you very much. Thank you very much. May I first acknowledge and say to my fellow countryman William Bird that we are very proud of you in South Africa, and it’s good to see you on the panel; you would understand and appreciate the context of my question more than anybody else, I would think. Is there any truth in the notion that AI systems are recreating a digital apartheid, in that they are mirroring historical patterns of racial segregation? I thank you.


Elizabeth Orembo: Thanks. Please go ahead and can I ask that we close the line now because our time is up. So we have one question there and one here. Please go ahead.


Audience: I’ll switch to French. Hello everyone. I’m Kossi Amessin. I come from Benin, and this session for me is absolutely crucial. It allows us to talk about a topic that we do not talk about enough. In Africa we produce data, but our data are often externalized, or they are accessible in data centers that are not on our territories. How do we participate in the training of artificial intelligence in our own labs without having the data on site? Can we have virtual spaces to conserve the data, for us to train our own AIs? Thank you. Hello, this is Jasmine Khoo from Hong Kong. I’m part of PNAI and also a youth-led research group called the Asia Pacific Policy Observatory. This season we are doing research on AI’s impact in different areas, such as digital well-being and misinformation. One question I would like to seek your advice on: given that we have so many toolkits, resources, and guidelines developed by different institutions, the leading ones being UNESCO, the EU, the ITU, etc., how would you suggest researchers navigate all these different frameworks, and how can we be strategic in making use of the existing resources and also in spotting any gap that hasn’t been addressed enough by what already exists? Thank you very much.


Elizabeth Orembo: Thanks, thanks a lot. I think now we have some of the answers coming up on menti.com to the question I would like our panelists to respond to, alongside some of the questions that have come from the floor. We have about 18 minutes to do that, but I’ll read some for you, and I would translate these as values, because the question was: in one word, what would be the future role of the IGF and the Policy Network on AI in global AI governance? We have some answers from the panelists, and then we have these other values: digital cooperation, digital divide, integrity, and inclusion. So I’d like to turn to our panelists, and I will start with our virtual panelist, because we ate into so much of your time. If you can respond to some of these questions, I would like to mention some of them. Representation gaps: I think here Poncelet mentioned that in these various policy processes there is a lack of representation of participants from the global south, and the processes themselves also lack country participation; as countries come together around AI governance, or AI-governance-friendly kinds of cooperation, not much is happening in the global south when it comes to processes such as the G7, the G20, and the rest. Please also speak about the role of the UN Guiding Principles on Business and Human Rights. So over to you, Ivana Bartoletti, if you can use two minutes.


Ivana Bartoletti: Thank you so much. First, I wanted to say that AI has never been unregulated, sorry to start from this. Legislation around privacy, consumer protection, equality and non-discrimination, and human rights law already applies to artificial intelligence. Why am I saying this? Because often artificial intelligence has been used as an excuse to breach existing legislation. This is very important to bear in mind. It’s very important to bear in mind that we’re not starting from scratch, and that we have to think about how existing legislation applies to artificial intelligence, algorithmic decision-making, and all of that. Governance of AI comes in many different forms: business governance, state governance, global governance, the UN, and, very importantly, the Global Digital Compact. My key issue is, how do we make sure that we know where countries stand and how they are performing against it? How do we know what is happening? And is there a way we can assess, as we do with many other things, in a sort of index, how countries are interpreting those values in the Global Digital Compact? To me, that is very, very important. We’ve had, over the years, a proliferation of documents from the OECD, with the fantastic work it’s doing, especially on privacy, and from UNESCO, amazing work, especially on education: a proliferation of tools that form the soft governance around artificial intelligence. And we know that there has been a prevalence of the North in these processes, not necessarily the UN one, but when it comes, for example, to the G7 and the G20, and this is a space where we can certainly have the more global participation that is needed. That is absolutely true even of where the big AI tech companies are located across the world; there is a concentration that keeps many countries out. So I wanted just to conclude by saying: trust, what does it need? 
It needs governance at many different levels: existing laws and how we comply with them, and upholding human rights in the age of AI and what that means. Do we need more? How do we do this in practice, from privacy to privacy-enhancing technologies, with civic participation across the globe? And we need to ensure, although it differs across countries, from the EU to Japan, China, and India, that we have regulation supporting the entrance of these tools into the market, so that we know people can trust them and business can trust them, because due diligence has gone into these tools before they’re marketed. I think this is a crucial element that needs to be taken into consideration. How we govern all of this, the legislation we put around it, is complex, obviously. The European AI Act is one example of potentially many, but we need to understand that there is no conflict between having some controls around these tools and innovation. Actually, they can go hand in hand. And this is something that, on occasions like this, we really have to stress and highlight. Thanks.


Elizabeth Orembo: Thanks, Ivana Bartoletti. Some of what you’re speaking about is not really a conflict of values, but how we are going to balance one value against another. That requires honest conversations, and it also requires an environment of trust. Now, I’d like to bring the conversation to our panelists who are here with us physically and turn to you, Mario, to talk about trust, bridging from what Ivana Bartoletti has spoken about, and to speak about the role of PNI in fostering that trust environment in the IGF network and these policy processes.


Mario Nobile: Thank you. I agree with Ivana Bartoletti, and I’ll try to answer our friends from Nigeria as well. I think technology evolves faster than most governments, public administrations, and stakeholders can adapt; technology moves at a relentless pace. So I agree with Ivana Bartoletti. I’m European, so we have the GDPR for privacy, the AI Act, NIS2 for cybersecurity, and so on. But it’s a matrix: everyone is looking for trustworthy AI. Trustworthy AI means regulation and compliance with the regulation we have, but it’s also digital sovereignty. Our friend from Nigeria asked, is my data center in Nigeria or somewhere else? And so, while I’m happy with the European regulation, we have the FRIA, the Fundamental Rights Impact Assessment, in the AI Act, we have yet to establish a European hyperscaler cloud, so we have a problem. So, to conclude, I think it’s a matrix, I think no one has the solution, but it’s important to talk and discuss these themes. The role of PNI, as I mentioned earlier, is to discuss digital sovereignty, compliance, and fundamental rights while technology evolves. Now we have agentic AI, agents, so big opportunities: natural language processing, citizen inclusion, people with disabilities; we can reach unbelievable goals. But this is a new technology, and the principles in the GDPR and in the AI Act are old; we must refresh them. This is the point. We must talk about and discuss this.


Elizabeth Orembo: Indeed, there are learning points from some of the processes that came before AI that can be used in the governance of AI when it comes to the role of PNI. I’d like to go to Shuyan Wu, and the question relates to what you presented previously: what improvements can the IGF make, and how can it gain more influence as it navigates these other policy processes?


Shuyan Wu: Thank you very much for your question. I was actually going to talk about how the IGF and PNAI can help create trust and inclusive dialogue on those topics. I think AI presents shared challenges and opportunities for humanity. So we should use PNAI and such platforms to ensure diversity, so that our conversations can include different genders, races, regions, and age groups. Also, during that process of communication and dialogue, we need to allow opinions to be fully expressed, so we need to think about whether marginalized groups can have their issues included in agendas. Also, on equality in access to resources, we need to ensure that equal opportunities are provided to those marginalized groups. And on the global equality issue, the principles or rules have to consider the specific development stages of different countries. I think with these kinds of efforts, we can make better use of the IGF and the PNAI to create an environment with greater trust. Thank you.


Elizabeth Orembo: Thank you, Shuyan Wu. I’d like to go to William Bird. We’ve had several interventions from African participants, and also an intervention from one of our panelists, on how we can ensure trust, and also on data: data that is not held in Africa, and how we can ensure international collaboration to make sure that this data is used for the development of AI and for wider African development. Please speak to the issues of data and cross-border data flows, and also to the representation gaps as they affect the global South and Africa.


William Bird: Thank you. I mean, the question was, is it recreating some form of new version of apartheid? And the short answer is, I’m not sure if it’s quite as extreme as that, but it’s certainly recreating the inequalities that have typified colonialism. This seems, on many levels, to be a new form of that in many practices. And your colleague from Benin, who spoke just after you, highlighted that point, talking exactly about data that is harvested from Africans and not kept on African shores. We have no idea where it goes, and yet it’s there. I think the short answer to how we’re going to deal with this is that, as African states, we need to mobilize. We need to come together, because many individual states don’t have the kind of power to challenge these multinationals, but together we can. As the continent that’s getting younger, with young people, we have a duty to them to make sure that we mobilize, come together, and start to levy huge fines on these global entities. The Global Digital Justice Forum, in fact, just made its submission to the WSIS+20 process, and one of the things it is calling for is a digital development tax that should be imposed on these entities in order to fundamentally address this inequality. We can talk about it, we know what it is; the question is just what we do about it. Short answer: tax them, fine them, a digital tax for that development. These aren’t nice-to-haves, as in, oh, we poor African states, we need this. This is about fundamental rights to equality; it’s about redress and repatriation.


Elizabeth Orembo: Thanks, William. And Paloma, I’ve reserved the question of human rights for you, because your presentation very much touched on human rights in the digital age, and on the global human rights framework. If you can connect some of those questions to your earlier presentation.


Paloma Lara-Castro: Thanks, Liz. Well, I think that, as I mentioned, we are at a crucial moment. We are discussing issues that are central and are going to be even more central in the following years. One of the things we’re seeing is, as I mentioned, a lack of compliance with existing frameworks. That translates into two things. On the one hand, as was mentioned in one of the questions from the audience, there is the fragmentation of discussions. This fragmentation does not only happen in the themes discussed in different fora; it also happens in the recognition of certain rights and limitations. We see pronouncements in international human rights that point to the need to take into account not only the existing framework, but also the need to contextualize it to present challenges and, as the colleague mentioned, the need to safeguard the future and youth. How is the youth going to interact with this world, where there are not only democratic setbacks but also geopolitical shifts that are deeply affecting the world and affected communities? We do need to think about the coherence of the international framework in these discussions. That’s on the one hand; on the other hand, there is the need to avoid duplication of processes. We are in a moment right now, as was mentioned, where we have the GDC, we have the Pact for the Future, we have the WSIS+20 review. We need to coordinate these efforts not only, as I mentioned, to avoid duplication, but also to ensure meaningful participation as a key element to advance the protection of international human rights. It is important to mention that meaningful participation is not only a human right in itself; it is also necessary to guide states in complying with their human rights obligations.


Elizabeth Orembo: Thanks, Paloma. And I'm going to be unfair by not giving you a chance to have a parting shot. If I were going to do that, I would ask you to do it in one word, but that's not going to be possible; we are watching the time. I'd like to thank our panelists. Please join me in thanking them for their great contributions to this session. And to the moderator, yes, someone says clap for me as a moderator. Also to our audience, those participating with us physically and those participating with us virtually. As I said, this interaction is meant to act as feedback for the PNAI network, to see how it will shape its work going forward. Thank you for participating in the menti.com poll, which is also going to be very valuable. For those who asked questions, we will also take those questions into our feedback mechanism. As the last step for feedback, we have this survey link; please take the code, take the link. There are a few questions for you there to help us improve the Policy Network on AI moving forward. Shamira presented what PNAI is about, the different tracks and processes, and the reports the network has produced and shared within different processes. She also shared how you can participate in the PNAI network. If you need more information about that, Amrita is here, and there are others in the room who are part of the PNAI network. You can join the network, see how you can contribute, and see how we can widen the impact of PNAI. Otherwise, thank you. It's been a great session. Thank you.



Shamira Ahmed

Speech speed

129 words per minute

Speech length

536 words

Speech time

249 seconds

PNI is a global interdisciplinary bottom-up multi-stakeholder initiative hosted by IGF that facilitates open dialogue on AI governance

Explanation

PNI operates as a comprehensive platform dedicated to facilitating inclusive, multidisciplinary dialogue on global AI governance. It prioritizes elevating diverse voices, particularly from the global south, and promotes meaningful international cooperation on multidimensional challenges often avoided in other AI governance forums.


Evidence

PNI focuses on human dignity, intersectional equity, justice, mental well-being, and environmental protection in AI governance discussions


Major discussion point

Role and Structure of Policy Network on AI (PNI)


Topics

Human rights principles | Capacity development | Interdisciplinary approaches


Agreed with

– Shuyan Wu
– Paloma Lara-Castro
– William Bird

Agreed on

AI governance requires multi-stakeholder participation and inclusive dialogue


PNI operates through collaborative working groups that transform community insights into policy briefs and recommendations

Explanation

PNI uses an open, collaborative process through various multi-stakeholder working groups to convert community insights into concrete policy recommendations. The initiative ensures regular engagement through online meetings, town halls, consultations, and other collaborative participation methods.


Evidence

PNI conducts town halls, consultations, and collaborative participation to ensure outputs reflect wide range of perspectives


Major discussion point

Role and Structure of Policy Network on AI (PNI)


Topics

Capacity development | Interdisciplinary approaches | Data governance


PNI has produced significant reports in 2023 and 2024 focusing on multi-stakeholder approaches and environmental sustainability

Explanation

PNI developed two major milestone reports: the 2023 report on strengthening multi-stakeholder approaches to AI governance and protecting environment and human rights, and the 2024 report focusing on AI governance interoperability, environmental sustainability, labor issues, and liability frameworks. These reports identify good practices and emphasize transparency, fairness, and accountability in AI governance.


Evidence

2023 report titled ‘Strengthening the Multi-Stakeholder Approach to Global AI Governance, Protecting the Environment and Human Rights in the Era of Generative AI’ and 2024 report covering four priority areas including environmental sustainability and labor issues


Major discussion point

Role and Structure of Policy Network on AI (PNI)


Topics

Sustainable development | Human rights principles | Future of work



Shuyan Wu

Speech speed

102 words per minute

Speech length

579 words

Speech time

338 seconds

AI development brings uncertainties including misinformation, information leakage, and widening digital divides that require international cooperation

Explanation

AI development is unstoppable due to continuous innovation iteration, and while adoption is widespread, it brings significant uncertainties. These challenges include misinformation, information leakage, and digital divides that no single country or body can address alone, requiring collaborative international efforts.


Evidence

Mentioned that AI innovation is unstoppable and adoption is widespread, but brings uncertainties that require working together to tackle risks and challenges


Major discussion point

AI Governance Challenges and Digital Divides


Topics

Content policy | Privacy and data protection | Digital access


Agreed with

– Mario Nobile
– William Bird
– Elizabeth Orembo

Agreed on

AI creates and deepens digital divides and inequalities


AI governance requires inclusive dialogue with diverse representation across genders, races, regions, and age groups, especially marginalized communities

Explanation

AI presents shared challenges and opportunities for humanity, requiring platforms like PNI to ensure diversity in conversations. This includes different genders, races, regions, and age groups, with special attention to ensuring marginalized groups have their voices heard and equal access to resources.


Evidence

Emphasized need to allow marginalized groups’ opinions to be fully expressed and ensure equal opportunities are provided to them


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Gender rights online | Digital access


Agreed with

– Shamira Ahmed
– Paloma Lara-Castro
– William Bird

Agreed on

AI governance requires multi-stakeholder participation and inclusive dialogue


IGF and PNI should strengthen dialogue mechanisms and create databases of best practices while implementing the Global Digital Compact

Explanation

To maximize platform value, IGF and PNI should organize cross-disciplinary dialogues among public sector, private sector, academia and other stakeholders. They should establish databases and use cases to promote experience sharing and implement improved collaborative models to achieve Global Digital Compact objectives.


Evidence

Suggested setting up databases and use cases, and promoting experience sharing while using regional and national strengths to promote GDC objectives


Major discussion point

Policy Coordination and Implementation


Topics

Capacity development | Data governance | Interdisciplinary approaches


Disagreed with

– William Bird
– Audience

Disagreed on

Solutions to digital inequality – collective action vs technical solutions



Mario Nobile

Speech speed

89 words per minute

Speech length

651 words

Speech time

434 seconds

The debate is not humans versus machines but those who govern AI versus those who don’t, creating knowledge gaps requiring literacy programs

Explanation

The real challenge in AI governance is not about humans competing with machines, but rather the divide between those who understand, manage, and govern AI versus those who don’t. This knowledge gap necessitates action through literacy programs, upskilling, and reskilling initiatives to ensure inclusive growth.


Evidence

Italy’s strategy rests on four pillars: education, scientific research, public administration, and enterprises, with efforts to bridge divides and empower individuals


Major discussion point

AI Governance Challenges and Digital Divides


Topics

Capacity development | Online education | Digital access


Agreed with

– Shuyan Wu
– William Bird
– Elizabeth Orembo

Agreed on

AI creates and deepens digital divides and inequalities


Italy’s AI strategy rests on four pillars: education, scientific research, public administration, and enterprises to ensure inclusive growth

Explanation

Italy has developed a comprehensive national AI strategy built on four foundational pillars. The strategy aims to bridge divides, ensure inclusive growth, and empower individuals to thrive in the AI-driven era, with particular focus on addressing job displacement concerns.


Evidence

Mentioned Italy’s focus on potential for AI to displace jobs and need for innovative policies, including shifting tax burden from labor to digital capital


Major discussion point

Regional AI Governance Approaches


Topics

Future of work | Capacity development | Taxation


Technology evolves faster than governments can adapt, requiring continuous discussion and updating of principles in existing regulations

Explanation

Technology advances at a relentless pace that outstrips the ability of governments, public administration, and stakeholders to innovate and adapt. This creates a need for ongoing discussions and regular updates to regulatory principles, as new technologies like agentic AI emerge with both opportunities and challenges.


Evidence

Referenced European regulations like GDPR, AI Act, NIS2 for cybersecurity, and mentioned emergence of agentic AI and natural language processing


Major discussion point

Policy Coordination and Implementation


Topics

Legal and regulatory | Privacy and data protection | Cybersecurity


Disagreed with

– Ivana Bartoletti
– Paloma Lara-Castro

Disagreed on

Regulatory approach – existing laws vs new frameworks



Paloma Lara-Castro

Speech speed

151 words per minute

Speech length

1419 words

Speech time

562 seconds

Latin American states implement AI in sensitive public policy areas without adequate regulatory frameworks or human rights impact assessments

Explanation

Research shows Latin American countries are implementing AI technologies in sensitive areas like employment, social protection, public safety, education, and justice administration without proper regulatory frameworks. These implementations often lack compliance with international human rights obligations and fail to conduct proper human rights impact assessments.


Evidence

Research from 2019 on AI use in sensitive public policy areas, with examples in employment, social protection, public safety, education, and justice administration


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Legal and regulatory | Privacy and data protection


Disagreed with

– Ivana Bartoletti
– Mario Nobile

Disagreed on

Regulatory approach – existing laws vs new frameworks


Different countries have varying levels of data protection development, from advanced independent authorities to no data protection laws

Explanation

There are significant disparities in regulatory development across the Latin American region regarding data protection. Some countries have advanced systems with independent authorities, while others still lack any data protection laws, creating an uneven landscape for AI governance.


Evidence

Mentioned that some countries are very advanced with independent authorities while others don’t have any data protection laws


Major discussion point

Regional AI Governance Approaches


Topics

Privacy and data protection | Legal and regulatory | Data governance


States demonstrate questionable handling of personal data due to database fragmentation and lack of common standards

Explanation

Latin American states face significant challenges in maintaining robust data use, management, and storage practices. These problems stem from fragmentation of databases, heterogeneous perspectives on data among state agencies, diversity of information systems, and absence of common language or standards.


Evidence

Identified problems including fragmentation of databases, heterogeneities in data perspectives among state agencies, diversity of information systems, and lack of common language


Major discussion point

Human Rights and AI Implementation


Topics

Privacy and data protection | Data governance | Digital standards


Without human rights impact assessments, AI deepens structural inequalities and creates new forms of exclusion for marginalized communities

Explanation

The absence of specific human rights impact assessments in AI implementation not only deepens existing structural inequalities but also generates new forms of exclusion. This particularly harms marginalized communities and fails to address differentiated impacts on historically discriminated groups like women and indigenous communities.


Evidence

Research shows lack of human rights impact assessments and mentions specific impacts on gender and indigenous communities


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Gender rights online | Rights of persons with disabilities


Multi-stakeholder platforms like IGF become crucial as civic space shrinks and centralized UN discussions create participation barriers

Explanation

As civic space experiences accelerated shrinking globally, multi-stakeholder platforms like IGF become increasingly important. Centralized discussions in places like New York create additional barriers including visa constraints, language barriers, and financial obstacles that limit meaningful participation.


Evidence

Referenced accelerated shrinking of civic space and barriers like visa constraints, language barriers, and financial aspects in centralized UN discussions


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Capacity development | Multilingualism


Agreed with

– Shamira Ahmed
– Shuyan Wu
– William Bird

Agreed on

AI governance requires multi-stakeholder participation and inclusive dialogue


There is fragmentation in AI governance discussions across different forums, requiring coordination to avoid duplication and ensure coherence

Explanation

Current AI governance discussions are fragmented across multiple forums, creating both thematic fragmentation and inconsistent recognition of rights and limitations. This requires coordination efforts to avoid duplication of processes and ensure coherence in international frameworks, especially given multiple ongoing processes like GDC, Pact for the Future, and WSIS Plus 20.


Evidence

Mentioned ongoing processes including GDC, Pact for the Future, and WSIS Plus 20 review that need coordination


Major discussion point

Policy Coordination and Implementation


Topics

Human rights principles | Legal and regulatory | Interdisciplinary approaches


Agreed with

– German Lopez Ardila
– Ivana Bartoletti
– Audience

Agreed on

Need for coordination across multiple AI governance processes and frameworks



William Bird

Speech speed

166 words per minute

Speech length

991 words

Speech time

357 seconds

AI is recreating colonial-style inequalities with data harvested from Africa but processed elsewhere without local benefit

Explanation

AI development is replicating historical patterns of colonial exploitation, where African data is harvested and processed in data centers located outside the continent. This creates a situation where Africa serves as a source of raw materials (data) for AI systems but receives little benefit from the value created.


Evidence

Referenced that digital inequality is replicated in infrastructure and data centers, with most not located on the African continent, while Africa serves as fodder for LLMs


Major discussion point

AI Governance Challenges and Digital Divides


Topics

Digital access | Sustainable development | Critical internet resources


Agreed with

– Shuyan Wu
– Mario Nobile
– Elizabeth Orembo

Agreed on

AI creates and deepens digital divides and inequalities


African countries lack AI infrastructure and data centers on the continent while serving as data sources for global AI systems

Explanation

There is a fundamental infrastructure gap where the means and ability for Africans to effectively use and harness AI are not available on the continent. Despite this lack of local infrastructure, African data is extensively used as input for global AI systems, creating an exploitative relationship.


Evidence

Noted that infrastructure, data centers, and means for Africans to use AI aren’t on the continent, yet Africa is used as data source for LLMs


Major discussion point

Regional AI Governance Approaches


Topics

Critical internet resources | Digital access | Telecommunications infrastructure


African states need to mobilize collectively and impose digital development taxes on multinationals to address fundamental inequalities

Explanation

Individual African states lack sufficient power to challenge multinational corporations, but collective action can be effective. The solution involves mobilizing together as a continent and imposing significant fines and digital development taxes on global entities to address fundamental rights violations and inequality.


Evidence

Referenced Global Digital Justice Forum’s submission to WSIS Plus 20 calling for digital development tax on entities to address inequality


Major discussion point

AI Governance Challenges and Digital Divides


Topics

Taxation | Human rights principles | Sustainable development


Disagreed with

– Shuyan Wu
– Audience

Disagreed on

Solutions to digital inequality – collective action vs technical solutions


IGF must facilitate hard conversations on difficult topics while maintaining rights-based principles of dignity, equality, and sustainability

Explanation

The IGF should not avoid difficult conversations despite its multi-stakeholder nature. While bringing together diverse and sometimes opposing viewpoints, the platform must ensure that hard conversations happen and that rights-based principles of dignity, equality, and sustainability remain non-negotiable outcomes.


Evidence

Used South Africa as example where people who hate each other can sit at table together but still discuss difficult topics


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Sustainable development | Interdisciplinary approaches


Agreed with

– Shamira Ahmed
– Shuyan Wu
– Paloma Lara-Castro

Agreed on

AI governance requires multi-stakeholder participation and inclusive dialogue



Ivana Bartoletti

Speech speed

113 words per minute

Speech length

1246 words

Speech time

658 seconds

IGF and PNI serve as bridges between public concerns about AI and the fragmented global regulatory landscape

Explanation

The policy network serves as a crucial bridge between widespread public concerns about AI’s social and economic impacts and the current fragmented approach to AI regulation globally. It connects people’s worries about AI systems that make decisions about their lives with the reality of different regulatory approaches across countries.


Evidence

Referenced different approaches like US moratorium discussions, EU AI Act delays, Japan’s governance, and India’s privacy bill


Major discussion point

Role and Structure of Policy Network on AI (PNI)


Topics

Human rights principles | Legal and regulatory | Interdisciplinary approaches


Agreed with

– Paloma Lara-Castro
– German Lopez Ardila
– Audience

Agreed on

Need for coordination across multiple AI governance processes and frameworks


AI has never been unregulated as existing privacy, consumer, equality, and human rights laws already apply to AI systems

Explanation

Contrary to common perception, AI has always been subject to regulation through existing legislation covering privacy, consumer protection, equality, non-discrimination, and human rights law. The problem is that AI has often been used as an excuse to breach existing legislation rather than comply with it.


Evidence

Mentioned that legislation around privacy, consumer equality, non-discrimination, and human rights law already apply to AI


Major discussion point

Human Rights and AI Implementation


Topics

Privacy and data protection | Consumer protection | Human rights principles


Disagreed with

– Mario Nobile
– Paloma Lara-Castro

Disagreed on

Regulatory approach – existing laws vs new frameworks


Businesses need AI governance to innovate effectively, as governance creates the trust necessary for long-term sustainable innovation

Explanation

Rather than hindering innovation, governance is essential for businesses to create trustworthy AI that enables long-term sustainable innovation. Companies need governance frameworks to embed human rights, privacy, cybersecurity, and legal protections into AI design and deployment.


Evidence

Emphasized that businesses using AI need governance to innovate and that without governance, long-lasting innovation cannot be achieved


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Digital business models | Privacy and data protection | Human rights principles


Need for assessment mechanisms to evaluate how countries perform against digital compact values, similar to other international indices

Explanation

There should be systematic ways to assess and monitor how countries are performing against the values outlined in the digital compact. This would involve creating index-like mechanisms similar to other international assessments to track country-level implementation and compliance.


Evidence

Suggested creating assessment mechanisms similar to other indices to evaluate country performance against digital compact values


Major discussion point

Policy Coordination and Implementation


Topics

Legal and regulatory | Human rights principles | Data governance



Elizabeth Orembo

Speech speed

125 words per minute

Speech length

2474 words

Speech time

1180 seconds

AI governance faces similar challenges to internet governance, with lessons to be learned from connectivity issues and business model evolution

Explanation

The moderator draws parallels between current AI governance challenges and past internet governance issues, noting that connectivity was a major concern 10 years ago and remains an issue today. She emphasizes that the internet has evolved from an open infrastructure to one with business models that entrench inequalities, suggesting these lessons should inform AI governance approaches.


Evidence

Referenced the evolution from ‘connecting the next billion’ discussions to current persistent connectivity challenges and how internet business models have entrenched inequalities


Major discussion point

AI Governance Challenges and Digital Divides


Topics

Digital access | Digital business models | Human rights principles


Agreed with

– Shuyan Wu
– Mario Nobile
– William Bird

Agreed on

AI creates and deepens digital divides and inequalities


PNI should focus on building trust environments and championing IGF values in various policy processes

Explanation

The moderator suggests that PNI’s role should be to create trust environments within IGF networks and policy processes, and to champion IGF values in various policy forums. She notes that the same people participating in different policy processes are also part of IGF networks, creating opportunities for value transfer.


Evidence

Noted that the same people seen in various policy processes are also present at IGF and within IGF networks


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Interdisciplinary approaches | Capacity development



German Lopez Ardila

Speech speed

154 words per minute

Speech length

178 words

Speech time

68 seconds

Need to balance benefiting from UN-level AI discussions while avoiding regulatory fragmentation across different UN agencies

Explanation

The virtual moderator raises concerns about how to effectively leverage the various AI discussions happening at UN level while preventing regulatory or dialogue fragmentation. He specifically mentions the challenge of coordination among different parts of the UN system, including the IGF, UNESCO, and the ITU, to ensure they can work together properly rather than in isolation.


Evidence

Referenced ongoing AI discussions at UNESCO and ITU alongside IGF activities


Major discussion point

Policy Coordination and Implementation


Topics

Legal and regulatory | Interdisciplinary approaches | Human rights principles


Agreed with

– Paloma Lara-Castro
– Ivana Bartoletti
– Audience

Agreed on

Need for coordination across multiple AI governance processes and frameworks



Audience

Speech speed

131 words per minute

Speech length

734 words

Speech time

334 seconds

UN AI governance document has representation gaps that need to be addressed

Explanation

An audience member from the Policy Network for AI raised concerns about representation gaps in the UN document on governing AI for humanity. This highlights ongoing issues with inclusive participation in global AI governance processes.


Evidence

Specific reference to ‘UN document on governing AI for humanity’ and its representation gaps


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Capacity development | Interdisciplinary approaches


AI governance needs either global minimum standards or sector-specific local approaches with priority sectors identified

Explanation

An audience member from Colombia questioned whether AI governance should adopt a general approach with global minimums or take sector-specific local approaches. They also asked about prioritizing certain sectors and the role of UN guiding principles on business and human rights in AI governance.


Evidence

Question about general vs sectoral approaches and reference to UN guiding principles on business and human rights


Major discussion point

Policy Coordination and Implementation


Topics

Legal and regulatory | Human rights principles | Digital business models


Need for universal AI framework to build trust among all stakeholders

Explanation

An audience member from Nigeria emphasized the importance of trust in AI development and questioned panelists about establishing a universal framework for artificial intelligence. They argued this universal framework would be essential for building trust where every stakeholder is included and represented.


Evidence

Emphasized trust as ‘a big deal’ in AI development and called for universal framework for stakeholder inclusion


Major discussion point

Multi-stakeholder Participation and Trust


Topics

Human rights principles | Legal and regulatory | Digital standards


AI systems may be recreating digital apartheid by replicating historical patterns of racial segregation

Explanation

A South African audience member raised concerns about whether AI systems are creating a form of digital apartheid that mirrors historical patterns of racial segregation. This question highlights concerns about AI perpetuating or amplifying existing discriminatory practices and social inequalities.


Evidence

Direct reference to ‘digital apartheid’ and ‘historical patterns of racial segregation’


Major discussion point

Human Rights and AI Implementation


Topics

Human rights principles | Digital access | Rights of persons with disabilities


African countries need virtual spaces to train AI systems using their own data when physical data centers are externalized

Explanation

An audience member from Benin highlighted the challenge that African countries produce data but often have it stored in external data centers not located on their territories. They questioned how African countries can participate in AI training in their own labs without having local data access, suggesting the need for virtual spaces to conserve data for local AI training.


Evidence

Mentioned that African data is often externalized and stored in data centers not on African territories


Major discussion point

Regional AI Governance Approaches


Topics

Critical internet resources | Data governance | Digital access


Disagreed with

– William Bird
– Shuyan Wu

Disagreed on

Solutions to digital inequality – collective action vs technical solutions


Researchers need strategic guidance to navigate multiple AI frameworks and identify gaps in existing resources

Explanation

A young researcher from Hong Kong, representing both PNAI and Asia Pacific Policy Observatory, sought advice on navigating the numerous AI toolkits and guidelines developed by different institutions like UNESCO, EU, and ITU. They emphasized the need for strategic approaches to utilize existing resources effectively while identifying unaddressed gaps in current frameworks.


Evidence

Referenced specific institutions like UNESCO, EU, and ITU that have developed AI frameworks and guidelines


Major discussion point

Policy Coordination and Implementation


Topics

Capacity development | Legal and regulatory | Interdisciplinary approaches


Agreed with

– Paloma Lara-Castro
– German Lopez Ardila
– Ivana Bartoletti

Agreed on

Need for coordination across multiple AI governance processes and frameworks


Agreements

Agreement points

AI governance requires multi-stakeholder participation and inclusive dialogue

Speakers

– Shamira Ahmed
– Shuyan Wu
– Paloma Lara-Castro
– William Bird

Arguments

PNI is a global interdisciplinary bottom-up multi-stakeholder initiative hosted by IGF that facilitates open dialogue on AI governance


AI governance requires inclusive dialogue with diverse representation across genders, races, regions, and age groups, especially marginalized communities


Multi-stakeholder platforms like IGF become crucial as civic space shrinks and centralized UN discussions create participation barriers


IGF must facilitate hard conversations on difficult topics while maintaining rights-based principles of dignity, equality, and sustainability


Summary

All speakers agree that effective AI governance requires inclusive, multi-stakeholder participation that brings together diverse voices, particularly from marginalized communities and the global south, through platforms like IGF and PNI


Topics

Human rights principles | Capacity development | Interdisciplinary approaches


AI creates and deepens digital divides and inequalities

Speakers

– Shuyan Wu
– Mario Nobile
– William Bird
– Elizabeth Orembo

Arguments

AI development brings uncertainties including misinformation, information leakage, and widening digital divides that require international cooperation


The debate is not humans versus machines but those who govern AI versus those who don’t, creating knowledge gaps requiring literacy programs


AI is recreating colonial-style inequalities with data harvested from Africa but processed elsewhere without local benefit


AI governance faces similar challenges to internet governance, with lessons to be learned from connectivity issues and business model evolution


Summary

Speakers consistently identify AI as creating or exacerbating various forms of digital divides, from knowledge gaps to infrastructure inequalities to colonial-style data exploitation


Topics

Digital access | Human rights principles | Capacity development


Need for coordination across multiple AI governance processes and frameworks

Speakers

– Paloma Lara-Castro
– German Lopez Ardila
– Ivana Bartoletti
– Audience

Arguments

There is fragmentation in AI governance discussions across different forums, requiring coordination to avoid duplication and ensure coherence


Need to balance benefiting from UN-level AI discussions while avoiding regulatory fragmentation across different UN agencies


IGF and PNI serve as bridges between public concerns about AI and the fragmented global regulatory landscape


Researchers need strategic guidance to navigate multiple AI frameworks and identify gaps in existing resources


Summary

Multiple speakers recognize the problem of fragmented AI governance discussions across various forums and emphasize the need for better coordination and coherence among different processes


Topics

Legal and regulatory | Interdisciplinary approaches | Human rights principles


Similar viewpoints

Both express concerns about African data being exploited by external entities while African countries lack local infrastructure and control over their own data for AI development

Speakers

– William Bird
– Audience

Arguments

AI is recreating colonial-style inequalities with data harvested from Africa but processed elsewhere without local benefit


African countries need virtual spaces to train AI systems using their own data when physical data centers are externalized


Topics

Critical internet resources | Data governance | Digital access


Both emphasize that existing legal frameworks apply to AI but need continuous updating and proper implementation rather than completely new regulatory approaches

Speakers

– Mario Nobile
– Ivana Bartoletti

Arguments

Technology evolves faster than governments can adapt, requiring continuous discussion and updating of principles in existing regulations


AI has never been unregulated as existing privacy, consumer, equality, and human rights laws already apply to AI systems


Topics

Legal and regulatory | Privacy and data protection | Human rights principles


Both stress the importance of applying existing human rights frameworks to AI implementation and conducting proper impact assessments to prevent discrimination and exclusion

Speakers

– Paloma Lara-Castro
– Ivana Bartoletti

Arguments

Without human rights impact assessments, AI deepens structural inequalities and creates new forms of exclusion for marginalized communities


AI has never been unregulated as existing privacy, consumer, equality, and human rights laws already apply to AI systems


Topics

Human rights principles | Privacy and data protection | Rights of persons with disabilities


Unexpected consensus

Business need for AI governance

Speakers

– Ivana Bartoletti
– Mario Nobile

Arguments

Businesses need AI governance to innovate effectively, as governance creates the trust necessary for long-term sustainable innovation


Italy’s AI strategy rests on four pillars: education, scientific research, public administration, and enterprises to ensure inclusive growth


Explanation

Unexpectedly, there is consensus that businesses actually need and benefit from AI governance rather than experiencing it as a hindrance to innovation. This challenges common assumptions about business resistance to regulation


Topics

Digital business models | Human rights principles | Capacity development


Trust as fundamental requirement for AI adoption

Speakers

– Ivana Bartoletti
– Audience
– Shuyan Wu

Arguments

Businesses need AI governance to innovate effectively, as governance creates the trust necessary for long-term sustainable innovation


Need for universal AI framework to build trust among all stakeholders


AI governance requires inclusive dialogue with diverse representation across genders, races, regions, and age groups, especially marginalized communities


Explanation

There is unexpected consensus across different stakeholder perspectives that trust is not just desirable but essential for AI development and adoption, requiring governance frameworks to establish this trust


Topics

Human rights principles | Digital business models | Legal and regulatory


Overall assessment

Summary

Strong consensus exists on the need for inclusive multi-stakeholder AI governance, recognition of AI-driven inequalities, and coordination across fragmented governance processes. Unexpected agreement on business need for governance and trust as fundamental requirement.


Consensus level

High level of consensus on fundamental principles and challenges, with implications that PNI and IGF have clear mandate to facilitate inclusive dialogue, bridge governance gaps, and champion rights-based approaches across all AI governance processes. The consensus suggests potential for unified action despite diverse regional and stakeholder perspectives.


Differences

Different viewpoints

Regulatory approach – existing laws vs new frameworks

Speakers

– Ivana Bartoletti
– Mario Nobile
– Paloma Lara-Castro

Arguments

AI has never been unregulated as existing privacy, consumer, equality, and human rights laws already apply to AI systems


Technology evolves faster than governments can adapt, requiring continuous discussion and updating of principles in existing regulations


Latin American states implement AI in sensitive public policy areas without adequate regulatory frameworks or human rights impact assessments


Summary

Bartoletti argues AI is already regulated through existing laws, while Nobile emphasizes the need to continuously update principles as technology evolves faster than regulation, and Lara-Castro highlights inadequate frameworks in Latin America that require new approaches


Topics

Legal and regulatory | Human rights principles | Privacy and data protection


Solutions to digital inequality – collective action vs technical solutions

Speakers

– William Bird
– Shuyan Wu
– Audience

Arguments

African states need to mobilize collectively and impose digital development taxes on multinationals to address fundamental inequalities


IGF and PNI should strengthen dialogue mechanisms and create databases of best practices while implementing the Global Digital Compact


African countries need virtual spaces to train AI systems using their own data when physical data centers are externalized


Summary

Bird advocates for aggressive collective action and taxation, Wu focuses on dialogue and best practice sharing, while audience members suggest technical solutions like virtual spaces for data access


Topics

Digital access | Taxation | Critical internet resources


Unexpected differences

Role of business in AI governance

Speakers

– Ivana Bartoletti
– William Bird

Arguments

Businesses need AI governance to innovate effectively, as governance creates the trust necessary for long-term sustainable innovation


African states need to mobilize collectively and impose digital development taxes on multinationals to address fundamental inequalities


Explanation

Unexpected disagreement on whether to collaborate with or confront businesses – Bartoletti sees businesses as needing governance to innovate, while Bird advocates for punitive measures against multinationals


Topics

Digital business models | Taxation | Human rights principles


Overall assessment

Summary

Main disagreements center on regulatory approaches (existing vs new frameworks), solutions to inequality (collective action vs dialogue), and business relationships (collaboration vs confrontation)


Disagreement level

Moderate disagreement level with speakers sharing common goals of inclusive AI governance but differing significantly on implementation strategies. This reflects broader tensions between Global North and South perspectives, with implications for PNI’s ability to build consensus on concrete policy recommendations




Takeaways

Key takeaways

PNI serves as a crucial bridge between public concerns about AI impacts and fragmented global regulatory approaches, operating through multi-stakeholder working groups to transform community insights into policy recommendations


AI governance faces multiple divides – not just digital divides but ‘intelligence divides’ between those who can govern AI and those who cannot, requiring comprehensive capacity building and literacy programs


Current AI development is recreating colonial-style inequalities, with data harvested from Global South countries (particularly Africa) but processed elsewhere without local benefit or control


Existing legal frameworks (privacy, human rights, consumer protection) already apply to AI, but implementation gaps and lack of human rights impact assessments are deepening structural inequalities


Multi-stakeholder platforms like IGF become increasingly important as civic space shrinks and centralized policy discussions create barriers to meaningful participation from marginalized communities


Technology evolves faster than governance frameworks can adapt, requiring continuous dialogue and updating of regulatory principles while maintaining core values of dignity, equality, and sustainability


Trust in AI systems requires governance at multiple levels – business, state, and global – with transparency, accountability, and rights-based approaches as fundamental prerequisites


Resolutions and action items

African states should mobilize collectively to impose digital development taxes on multinational tech companies to address fundamental inequalities and fund development


PNI should establish a repository of AI governance best practices across different sectors (health, manufacturing, transportation, tourism) to serve as a resource for policy discussions


IGF and PNI should strengthen dialogue mechanisms, create databases of use cases, and promote experience sharing while implementing the Global Digital Compact


Participants were asked to complete a feedback survey to help shape PNI’s future work and impact


Need to develop assessment mechanisms to evaluate how countries perform against Global Digital Compact values, similar to other international indices


PNI should focus on facilitating hard conversations on difficult topics while maintaining rights-based principles as non-negotiable starting points


Unresolved issues

How to effectively coordinate AI governance discussions across fragmented international forums (UN, UNESCO, ITU, G7, G20) to avoid duplication while ensuring coherence


How to balance innovation and regulation without stifling technological advancement while protecting human rights and addressing inequalities


How to ensure meaningful participation from Global South countries in AI governance processes that are often dominated by Northern perspectives and interests


How to address the fundamental infrastructure gaps in Africa and other developing regions that limit their ability to participate in AI development and governance


How to navigate the tension between national AI strategies focused on competitiveness and global cooperation requirements for addressing shared challenges


How to implement effective human rights impact assessments for AI systems in public administration across different regulatory maturity levels


How to create virtual spaces for data conservation that allow developing countries to train their own AI systems without having physical data centers


Suggested compromises

Recognition that different countries may have varying regulatory approaches (European AI Act, Japanese governance model, Indian privacy bills) while working toward common principles and interoperability


Balancing the need for global AI governance standards with respect for different development stages and specific contexts of various countries


Using existing legal frameworks as a foundation while developing AI-specific governance mechanisms rather than starting from scratch


Leveraging both global platforms (like IGF/PNI) and regional/national IGF processes to ensure broader participation while maintaining coordination


Combining soft governance approaches (guidelines, best practices) with harder regulatory measures where necessary to build trust and enable innovation


Thought provoking comments

The debate is not humans versus machines but rather those who know, manage and govern AI versus those who don’t. And this knowledge gap calls for action like literacy programs, like upskilling and reskilling initiatives.

Speaker

Mario Nobile


Reason

This reframes the entire AI governance discussion by shifting focus from the commonly discussed human-AI competition narrative to a more nuanced understanding of power dynamics based on knowledge and access. It identifies the real divide as being between those with AI literacy and governance capabilities versus those without.


Impact

This comment redirected the conversation toward practical solutions (education, capacity building) and influenced subsequent speakers to address knowledge gaps and capacity building as central themes. It established a framework that other panelists built upon when discussing digital divides and inclusion.


We’ve seen this movie before, right? We’ve seen what happens when you allow the markets to determine what goes on… What that resulted in is that we’ve mainstreamed misogyny. We’ve now got online gender-based violence as a default operating mechanism on so many of these social media platforms.

Speaker

William Bird


Reason

This comment provides crucial historical context by drawing parallels between current AI governance challenges and past failures in internet governance. It challenges the narrative of letting markets self-regulate by providing concrete examples of harmful outcomes, particularly highlighting how marginalized groups suffer most from unregulated technology.


Impact

This intervention significantly shifted the tone of the discussion from theoretical policy considerations to urgent, rights-based concerns. It introduced a sense of urgency and moral imperative that influenced the moderator’s follow-up questions and reinforced themes about protecting vulnerable populations that resonated throughout the remaining discussion.


AI is a socio-technical tool, which means that it arises from society, and so it comes with the baggage of all the social conditions of its production… Policies that use AI as a tool are implemented in specific social and political contexts, in countries with diverse democratic composition.

Speaker

Paloma Lara-Castro


Reason

This comment introduces a sophisticated analytical framework that challenges purely technical approaches to AI governance. By conceptualizing AI as inherently social and contextual, it highlights how existing inequalities and power structures become embedded in AI systems, requiring governance approaches that account for diverse political and social realities.


Impact

This theoretical framing elevated the discussion’s analytical depth and provided a foundation for understanding why universal AI governance approaches may be insufficient. It influenced subsequent discussions about the need for contextualized, rights-based approaches and reinforced arguments about the importance of including diverse voices in governance processes.


AI technology is growing fast and has brought a lot of benefits, but how to minimize or mitigate the intelligence divide before it grows wider is an important topic that international AI governance should face.

Speaker

Shuyan Wu


Reason

This comment introduces the concept of the ‘intelligence divide’ as distinct from the digital divide, recognizing that AI creates new forms of inequality beyond mere access to technology. It emphasizes the preventive aspect of governance – acting before inequalities become entrenched.


Impact

This concept of the ‘intelligence divide’ became a recurring theme that other panelists referenced and built upon. It helped establish a forward-looking perspective in the discussion, emphasizing proactive rather than reactive governance approaches.


We need AI that we trust. So trust, to me, is the most important element. I want people to trust AI, companies to trust AI, so that we can use it, and by using it, enhance productivity. But to generate that trust, organizations like the Policy Network are crucial because we need to share how we create that trust.

Speaker

Ivana Bartoletti


Reason

This comment identifies trust as the fundamental prerequisite for beneficial AI deployment, bridging the gap between business needs and social concerns. It positions trust not as an abstract ideal but as a practical necessity for AI adoption and innovation, while highlighting the role of multi-stakeholder platforms in building this trust.


Impact

This focus on trust as a central organizing principle influenced the moderator to specifically ask other panelists about trust-building mechanisms. It helped synthesize various concerns raised by other speakers into a coherent framework and provided a constructive path forward for the PNI’s role.


In Africa we produce data, but our data are often externalized or accessible only in data centers that are not on our territories. How do we participate in the training of artificial intelligence in our own labs without having the data on site?

Speaker

Kossi Amessin (audience member)


Reason

This question crystallizes the colonial dimensions of AI development by highlighting how African data is extracted for AI training while Africans lack access to the infrastructure and capabilities needed to benefit from their own data. It makes abstract discussions about data governance concrete and urgent.


Impact

This intervention brought sharp focus to issues of data sovereignty and digital colonialism, prompting William Bird’s strong response about the need for African states to mobilize collectively and his call for digital development taxes. It grounded the theoretical discussion in lived realities of inequality.


Overall assessment

These key comments fundamentally shaped the discussion by establishing multiple analytical frameworks that moved beyond technical considerations to address power dynamics, historical patterns, and social justice concerns. Mario Nobile’s reframing of the AI divide as a knowledge gap, William Bird’s historical contextualization of regulatory failures, and Paloma Lara-Castro’s conceptualization of AI as socio-technical combined to create a sophisticated understanding of AI governance challenges. The audience intervention about African data extraction brought urgency and concreteness to these frameworks. Together, these comments elevated the discussion from abstract policy considerations to a nuanced analysis of how AI governance intersects with existing inequalities, colonial legacies, and the need for inclusive, trust-based approaches. They established trust, equity, and meaningful participation as central organizing principles for the PNI’s work, while emphasizing the urgency of proactive governance to prevent the entrenchment of new forms of digital inequality.


Follow-up questions

How can PNI establish a repository of information on AI governance, including research papers and best practices for different sectors like health, manufacturing, transportation, and tourism?

Speaker

Mario Nobile


Explanation

This would serve as a valuable resource for discussions on AI governance and help bridge knowledge gaps across different sectors


How can innovative policies be implemented to mitigate the impact of AI on job displacement and automation?

Speaker

Mario Nobile


Explanation

There’s a need to address potential job losses from AI automation and ensure no one is left behind by this technological revolution


How can we benefit from different UN-level discussions while avoiding regulatory or dialogue fragmentation that makes coordination difficult between different UN bodies?

Speaker

German Lopez Ardila


Explanation

There’s concern about fragmentation across different UN bodies (IGF, UNESCO, ITU) working on AI governance


What are the representation gaps in the UN document on governing AI for humanity and how can governance be improved?

Speaker

Ponsleit


Explanation

There are concerns about lack of representation, particularly from the Global South, in key AI governance processes


Should AI governance have a general approach with global minimums or use sectoral/local approaches, and what sectors should be prioritized?

Speaker

Adriana Castro


Explanation

There’s need to determine the most effective governance structure for AI regulation across different contexts


What is the role of UN guiding principles on business and human rights in AI governance?

Speaker

Adriana Castro


Explanation

Understanding how existing human rights frameworks apply to AI governance is crucial for comprehensive regulation


How can social sciences and academia be better included in AI governance conversations?

Speaker

Adriana Castro


Explanation

There’s a need to ensure diverse academic perspectives are incorporated into AI policy development


What improvements does PNI/IGF need to have more intervention power and influence in policy making, especially in relation to multilateral UN processes?

Speaker

Audience member from PAI


Explanation

Understanding how multi-stakeholder processes can better influence formal multilateral policy-making is important for effective governance


Is there a need for a universal framework on artificial intelligence to build trust among all stakeholders?

Speaker

Kunle Olorundari


Explanation

A universal framework could help establish trust and ensure all stakeholders are included in AI governance


Is there truth in the notion that AI systems are recreating a digital apartheid by mirroring historical patterns of racial segregation?

Speaker

South African audience member


Explanation

This addresses concerns about AI perpetuating or creating new forms of discrimination and inequality


How can African countries participate in training AI in their own labs without having data centers on their territories, and can virtual spaces be created to conserve data for training local AIs?

Speaker

Kossi Amessin


Explanation

This addresses the challenge of data sovereignty and local AI development capacity in Africa


How should researchers navigate the many different AI toolkits, resources, and guidelines developed by various institutions (UNESCO, EU, ITU, etc.) and how can they be strategic in using existing resources while identifying gaps?

Speaker

Jasmine Khoo


Explanation

There’s a proliferation of AI governance frameworks and researchers need guidance on how to effectively utilize and contribute to this landscape


How can we create assessment mechanisms or indices to measure how countries are performing against Global Digital Compact values and AI governance principles?

Speaker

Ivana Bartoletti


Explanation

There’s a need for accountability mechanisms to track implementation of AI governance commitments


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.