DC-DNSI: Beyond Borders – NIS2’s Impact on Global South

Session at a Glance

Summary

This discussion focused on AI and data governance from the perspective of the global majority, exploring challenges and opportunities in various regions. The panel, organized by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to discuss the impact of AI on human rights, democracy, and economic development in the Global South.

Key themes included the need for regional approaches to AI governance, the importance of inclusive frameworks, and the challenges of implementing AI in healthcare and other sectors. Speakers highlighted the potential of AI to address social issues but also raised concerns about data privacy, labor exploitation, and the widening technological gap between developed and developing nations.

Several presenters discussed specific regional initiatives, such as Brazil’s and Chile’s efforts to establish AI regulatory bodies, and Africa’s continental strategy on AI. The discussion also touched on the environmental and social costs of AI development, including issues of embodied carbon and the exploitation of workers in the Global South.

Innovative approaches were proposed, including reparative algorithmic impact assessments and the development of AI tools that prioritize the needs of the global majority. Speakers emphasized the importance of capacity building, knowledge transfer, and international cooperation in bridging the North-South divide in AI governance.

The discussion concluded by highlighting the complexity of AI governance issues in the Global South and the potential for collaborative solutions. Participants agreed on the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few.

Key Points

Major discussion points:

– AI governance frameworks and policies emerging in different regions of the global majority (e.g. Africa, Latin America, Asia)

– Challenges of AI development and deployment in the global south, including issues of data colonialism, labor exploitation, and unequal access

– Environmental and social impacts of AI, particularly on marginalized communities

– Need for inclusive AI development that incorporates diverse perspectives and addresses local needs

– Proposals for more equitable AI governance, such as reparative algorithmic impact assessments

Overall purpose:

The goal of this discussion was to highlight perspectives on AI governance and development from the “global majority” (developing nations and the global south). It aimed to showcase both challenges and potential solutions for more equitable and inclusive AI systems that serve the needs of diverse populations worldwide.

Speakers

– Luca Belli: Professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where he directs the Center for Technology and Society (CTS-FGV) and the CyberBRICS project

– Ahmad Bhinder: Policy and Innovation Director at Digital Cooperation Organisation

– Ansgar Koene: Global AI Ethics and Regulatory Leader at EY

– Melody Musoni: Digital Governance and Digital Economy Policy Officer at the European Centre for Development Policy Management.

– Bianca Kremer: Assistant Professor and Project Leader at the Faculty of Law of IDP University (Brazil)

Full session report

AI Governance from the Global Majority Perspective: Challenges and Opportunities

This comprehensive discussion, organised by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to explore the challenges and opportunities of AI governance from the perspective of the global majority. The panel focused on the impact of AI on human rights, democracy, and economic development in the Global South, highlighting the need for inclusive frameworks and regional approaches to AI governance.

Key Themes and Discussion Points

1. AI Governance Frameworks and Approaches

The discussion emphasised the importance of developing inclusive AI governance frameworks that consider the perspectives of the global majority. Ahmad Bhinder highlighted the need for regional AI strategies and policies, discussing the Digital Cooperation Organization’s (DCO) work on AI readiness assessment and ethical principles. He mentioned the development of a self-assessment tool for AI readiness, which will be made available to member states and covers different dimensions of their AI readiness, including governance and capacity building.
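The DCO tool itself is not reproduced in the report; purely as an illustration of how a multi-dimensional self-assessment of this kind can work, a minimal sketch follows. All dimension names, weights, scores, and the threshold here are hypothetical, not the DCO's:

```python
# Illustrative sketch of a multi-dimensional AI-readiness self-assessment.
# Dimension names, weights, and the 0-100 scoring scale are invented for
# illustration; they do not describe the actual DCO tool.

READINESS_DIMENSIONS = {
    "governance": 0.30,
    "capacity_building": 0.25,
    "adoption": 0.25,
    "infrastructure": 0.20,
}

def readiness_score(self_scores: dict) -> float:
    """Weighted average of per-dimension self-scores (each 0-100)."""
    return sum(READINESS_DIMENSIONS[d] * self_scores[d] for d in READINESS_DIMENSIONS)

def recommendations(self_scores: dict, threshold: float = 50.0) -> list:
    """Flag dimensions scoring below the threshold for follow-up action."""
    return [d for d, s in self_scores.items() if s < threshold]

scores = {"governance": 70, "capacity_building": 40, "adoption": 55, "infrastructure": 45}
print(readiness_score(scores))   # weighted overall readiness score
print(recommendations(scores))   # dimensions needing attention
```

The point of the sketch is the shape of such a tool: per-dimension self-scoring plus automatic recommendations, matching the description of an assessment that "would recommend what needs to be done" per dimension.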

Melody Musoni stressed the importance of creating inclusive frameworks for the global majority, mentioning the African Union’s continental strategy on AI and data policy framework. This initiative aims to provide a unified approach to AI governance across the African continent.

Elise Racine proposed the implementation of reparative algorithmic impact assessments to address historical inequities. This novel framework combines theoretical rigour with practical action, offering a potential solution for creating more equitable AI systems.

Guangyu Qiao Franco addressed the gap between North and South in military AI governance, highlighting the need for an inclusive AI arms control regime. She provided specific statistics on participation in UN deliberations, emphasizing the underrepresentation of Global South countries in these discussions.

2. AI Ethics and Human Rights

Ethical considerations and human rights protections emerged as crucial aspects of AI development and deployment. Bianca Kremer provided a stark example of AI bias in Brazil, stating that “90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown.” This statistic underscores the urgent need to address AI bias and its societal implications, especially in diverse societies.
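To make the scale of such disparities concrete, a simple over-representation ratio can be computed from arrest shares and population shares. The sketch below is illustrative only; the numbers are invented and are not data from the research discussed here:

```python
# Toy disparity calculation: compare a group's share of facial-recognition
# arrests to its share of the general population. All numbers are invented
# for illustration; they are not figures from the session.

def representation_ratio(group_arrests: int, total_arrests: int,
                         group_population: int, total_population: int) -> float:
    """Ratio > 1 means the group is over-represented among arrests
    relative to its population share."""
    arrest_share = group_arrests / total_arrests
    population_share = group_population / total_population
    return arrest_share / population_share

# Hypothetical example: a group is 56% of the population but 90% of arrests.
ratio = representation_ratio(group_arrests=90, total_arrests=100,
                             group_population=56, total_population=100)
print(round(ratio, 2))  # over-representation factor
```

A ratio well above 1, as in this hypothetical case, is the kind of quantitative indicator that can support the economic-impact arguments Kremer describes.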

Kremer also discussed her research on the economic impact of algorithmic racism in digital platforms, highlighting how these biases can perpetuate and exacerbate existing inequalities.

3. AI Impact on Labour and Economy

The discussion explored the significant impacts of AI on labour, the economy, and the environment. Amrita Sengupta examined the impact of AI on medical practitioners’ work, emphasising the need to prioritise AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare.

Avantika Tewari analysed the exploitation of digital labour in AI development, highlighting how platforms like Amazon Mechanical Turk outsource tasks to workers in the global majority, often underpaying and undervaluing their contributions. She also discussed India’s Data Empowerment and Protection Architecture, providing context for data sharing models and digital labour issues in the country.

4. Environmental Concerns in AI Development

Rachel Leach examined the environmental and social costs of AI development, including the issue of embodied carbon in AI technologies. She highlighted that current regulations are furthering AI development without properly addressing environmental harms, emphasising the need to balance AI advancement with environmental sustainability. Leach also discussed the techno-solutionist approach of countries like Brazil and the U.S., which often overlooks the environmental impact of AI technologies.

5. AI in Content Moderation and Misinformation

Hellina Hailu Nigatu addressed challenges in AI-powered content moderation for diverse languages, while Isha Suri focused on developing policy responses to counter false information in the age of AI. Suri emphasized the need for collaborative efforts between governments, tech companies, and civil society to address the challenges posed by AI-generated misinformation.

6. AI in Judicial Systems

The implementation of AI in judicial systems was discussed by Liu Zijing and Ying Lin, who provided insights into China’s AI initiatives in the judicial system. They presented information about specific AI systems like Faxin, Phoenix, and the 206 system, which are being used to assist judges and improve efficiency in Chinese courts. However, they also raised concerns about transparency and fairness in AI-assisted judicial decisions.

7. Regional Perspectives on AI Development

The discussion provided insights into AI development and regulation across various regions, including Russia, Latin America, and Africa. Luca Belli provided a Brazilian perspective on AI and cybersecurity, noting that while Brazil has adopted various sectoral regulations, implementation remains “very patchy and not very sophisticated in some cases.” This observation highlighted the gap between formal regulations and actual implementation, revealing a critical issue in AI governance, especially in developing countries.

8. AI and Disabilities

The discussion also touched on the intersection of AI with disabilities, educational technologies, and medical technologies. This highlighted the potential for AI to improve accessibility and support for individuals with disabilities, while also raising concerns about ensuring inclusive design in AI systems.

Agreements and Consensus

Key areas of agreement included:

1. The need for inclusive AI governance frameworks

2. The importance of addressing biases and discrimination in AI systems

3. Consideration of the hidden costs of AI development, including environmental and labour impacts

4. The development of region-specific AI strategies

This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.

Differences and Unresolved Issues

While there was general agreement on the need for inclusive AI governance, differences emerged in approaches to specific issues:

1. Approaches to AI regulation varied, with some favouring cautious development (e.g., Russia) and others establishing specialised regulatory bodies (e.g., Latin America).

2. The focus of AI governance differed, with some emphasising ethical principles and others prioritising environmental concerns.

3. Addressing biases in AI systems revealed different priorities, such as algorithmic racism in law enforcement versus content moderation challenges for diverse languages.

Unresolved issues included:

1. Balancing AI development with environmental sustainability

2. Addressing the exploitation of digital labour in AI development

3. Resolving disparities in military AI governance between global North and South

4. Determining liability in AI-assisted medical decisions

5. Ensuring fairness and transparency in AI-powered judicial systems

6. Developing effective content moderation systems for diverse languages and contexts

Proposed Solutions and Action Items

The discussion yielded several proposed solutions and action items:

1. Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority

2. Implement reparative algorithmic impact assessments to address historical inequities

3. Create open repositories and taxonomies for AI cases and accidents

4. Develop original AI solutions tailored to regional languages and contexts

5. Increase capacity building and knowledge transfer in AI between global North and South

6. Incorporate environmental justice concerns comprehensively in AI discussions and policies

7. Enhance collaboration between governments, tech companies, and civil society to address AI-generated misinformation

Conclusion

This discussion highlighted the complexity of AI governance issues in the Global South and the potential for collaborative solutions. It emphasised the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few. The variety of regional perspectives contributed to a collaborative, global-minded approach to addressing the challenges and opportunities presented by AI in the context of the global majority.

Session Transcript

Luca Belli: Morning, good afternoon actually to everyone. I think we can get started. So we have a very intense and long list of panelists today. These are only a part of the panelists. We also have online panelists joining us, due to the fact that we have a lot of co-authors for this book that we are launching today. So this session on AI and data governance from the global majority is organized by a multi-stakeholder group of the IGF called the Data and AI Governance (DAIG) coalition, together with the Data and Trust coalition, which is another multi-stakeholder group. So we have merged our efforts. This report is the annual report of the Data and AI Governance coalition that I have the pleasure to chair. Actually, pardon my lack of politeness, I forgot to introduce myself. My name is Luca Belli. I’m a professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where I direct the Center for Technology and Society (CTS-FGV) and the CyberBRICS project. I’m going to briefly introduce the topic of today and what we are doing here, and then I will ask each panelist to introduce him or herself, because as we have an enormous list of panelists, I might spend five minutes only reading their resumes. So, in the interest of time management, it is better if everyone introduces themselves. I will of course call everyone, but then they introduce themselves by themselves. So are you hearing well? All right. So the reason for the creation of this group that is leading this effort on data and AI governance is to try to bring into data and AI governance debates the ideas, problems, challenges, but also solutions from the Global South, the global majority. And this is why this year’s report is precisely dedicated to AI from the global majority, and as you may see, we have a pretty diverse panel here, even more diverse if we also consider the online speakers. 
Our goal is precisely to assess and gather evidence and engage stakeholders to understand to what extent AI and data-intensive technologies can have an impact on individuals’ lives, on the full enjoyment of human rights, on the protection of democracy and the rule of law, but also on very essential things like the fight against inequalities, the fight against discrimination and biases, against disinformation, and the need to protect cybersecurity and safety. All these things are explored to some extent in this book. We also launched another book last year on AI sovereignty, transparency and accountability. I see that some of the authors at least of last year’s book are also here in the room, and all the publications are freely available on the IGF website. Let me also state that these books that we launch here are preliminary versions. Although they have a very nice design and are printed, they are preliminary versions, and then they are officially published with an editor, but that takes more time, so the AI sovereignty book is going to be released in two months with Springer. This version will be consolidated, so if you have any comments, we are here also to receive your feedback and comments so that we can improve the work in a cooperative way. 
I had the pleasure to author a chapter on AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI. This is actually a very interesting case study, because it’s an example of a country that, even if it has climbed cybersecurity rankings like the ITU cybersecurity index, being now the third most cybersecure country in the Americas according to the index, is at the same time in the top three of the most cyber-attacked countries in the world. And this is very interesting because it means that even if it has formally climbed the cybersecurity index, because it has adopted a lot of sectoral cybersecurity regulation, in data protection, in the telecoms sector, in the banking sector, in the energy sector and so on, the implementation is very patchy and not very sophisticated in some cases. So one of the main takeaways of the study, and I will not enter into details because I hope you will read it together with the others, is precisely to adopt a multi-stakeholder approach, not to pay lip service, but so that all the stakeholders join hands and find solutions, because it is necessary to understand to what extent AI can be used for offensive and defensive purposes, and to what extent geeks can cooperate with policymakers to identify the best possible tools, but also what kind of standardization can be implemented to specify the very vague elements that we typically find in laws, like what is a reasonable or adequate security measure. Reasonable and adequate are the favorite words of lawyers. I say this as a lawyer, because it means pretty much everything, and you can charge hefty fees to your clients to discuss what is reasonable and adequate. If you don’t have a regulator or a standard that tells you what a reasonable or an adequate security measure is, it’s pretty much impossible to implement. 
Now I’m not going to enter too much into this, I hope you will check it, and I would like to give the floor to our first speaker, hoping that everyone will respect the five minutes each, save those who are splitting their presentation, who will have three minutes per person. So we will start with Ahmad Bhinder, Policy and Innovation Director at the DCO.

Ahmad Bhinder: Thank you very much Dr. Luca, and I’m really feeling overwhelmed to be engulfed with such knowledgeable people. So my name is Ahmad Bhinder, I represent the Digital Cooperation Organization, an intergovernmental organization that is headquartered in Riyadh. We have 16 member states, mainly from the Middle East, from Africa, a couple of European countries and from South Africa, sorry, from South Asia, and we are in active discussions with new members from Latin America, from Asia, etc. So we are a global organization, and although we are a global organization, the countries that we represent come from the global majority. We are focusing horizontally on the digital economy, and all the digital economy topics that are relevant, including AI governance and data governance, are very relevant to us. So I will very quickly introduce some of our work, at a preliminary level, and then how we action some of that work. So I should keep it like this, yeah, okay. So we have developed two agendas, as I say: one is the data agenda, and since data governance is the bedrock of AI governance, we have something on the AI agenda as well. So very quickly, we are developing a tool for assessment of AI readiness for our member states. This is a self-assessment tool, and we will make it available in a month’s time to the member states, across different dimensions of their AI readiness. That includes governance, but it goes beyond governance to a lot of other dimensions, for example capacity building and the adoption of AI. That assessment is going to help the member states assess themselves, and it will recommend what needs to be done for the adoption of AI across their societies. Another tool that we are working on is quite an interesting one, and I am actually working actively on that. So I think what we have covered so far in the AI domain is to come up with the ethical principles. 
So there’s kind of a harmonization from a lot of multilateral organizations on what the ethical principles should be, for example explainability, accountability, etc. We’ve taken those principles as a basis, and we have done an assessment for the DCO member states on how AI intersects, under those principles, with basic human rights. We’ve created a framework that I presented in a couple of sessions earlier, so I will not go into the details, but we are looking at, for example, data privacy, which is an ethical AI principle. Looking at data privacy and seeing what are the risks that come from AI systems, and then mapping those risks against the basic human right of privacy, or whichever basic human right applies. So once we take that through this framework, we will make it available as a tool to AI system deployers and developers in the DCO member states and beyond, to answer a whole lot of detailed questions and assess their systems under those ethical principles. So basically, we are trying to put the principles that have been adopted into practice, and also to give recommendations on how AI systems can improve. So this is on AI. Very, very quickly, I think I have a minute left. We are also focusing on data privacy, and we are drafting DCO AI, sorry, DCO data privacy principles, which take a lot of inspiration from the principles that are out there, but take into consideration the changed realities with AI. And we are developing an interoperability mechanism for trusted cross-border data flows across the DCO member states. And we are also developing some foundations on what could go into that interoperability mechanism, for example some model contractual clauses, et cetera. 
So that, in a meaningful multilateral way, would facilitate trusted cross-border data flows and, of course, serve as a foundation for AI governance. I could say a lot, but I think my time is over. So thank you very much.

Luca Belli: Fantastic, thank you very much, Ahmad. And now, as you were speaking about ethics and AI, Ansgar Koene, you have been leading the EY work on AI ethics globally. So I would like to give you the floor to provide us with a few punchy remarks on the challenges and the possibilities to deal with this.

Ansgar Koene: Certainly, thank you very much, Luca. And it’s a pleasure and honor to be able to join the panel today. So yes, my name is Ansgar Koene and I’m the Global AI Ethics and Regulatory Leader at EY. As a globally operating firm, of course, we try to help organizations, be it public or private sector, in most countries around the world with setting up their governance frameworks around the use of AI. And one of the big challenges is for organizations to clearly identify, actually, what are the particular impacts that these systems are going to have on people, both those who are directly using the system, but also those who are going to be indirectly impacted by these. And one example, for instance, that is probably of particular concern for the global majority is the question about how these systems are going to impact young people, the global majority, of course, being a space where there are a lot of young people. And if you look at a lot of organizations, they do not fully understand how young people are interacting with their systems, be it systems that are provided through online platforms or be it systems that are integrated into other kinds of tools. They do not know who, and from what ages, is engaging with these platforms, or what kind of particular concerns they need to be taking into account. A different dimension of concern is how to make sure that, as we are operating in the AI space, often with systems that are produced by a technology-leading company but then deployed by a different organization, the obligations, be they regulatory or otherwise, fall onto the party that has the actual means to address these considerations. Often, the deploying party does not know fully what kind of data went into creating the system, does not know fully the extent to which the system has been tested, or whether it’s going to be biased against one group or another, and does not have the means to find out. It must rely on a supplier. 
Do we have the right kind of review processes as part of procurement, as part of making sure that, as these systems are being taken on board, that they do benefit the users?

Luca Belli: That was excellent and also fast, which is even more excellent. So we can now pass directly to Melody Musoni, who is a Policy Officer at ECDPM and a former Data Protection Advisor to the Southern African Development Community Secretariat. Melody, the floor is yours.

Melody Musoni: Thank you, Luca. When I was preparing for this session, I was looking at my previous interventions at IGF last year. It seems like a lot has happened from last year till now in terms of what Africa has been doing. So I’ll try to speak about the developments on AI governance in Africa and try to answer one of the policy questions we have: how can AI governance frameworks ensure equitable access to and promote development of AI technologies for the global majority? So this year has been an important year and a very busy year for policymakers in Africa. We saw earlier at the beginning of the year the African Union Development Agency developing a white paper on AI, which gave a lay of the land of the expectations from a continental level and the priorities that the continent has as far as the development of AI on the continent is concerned. And later in June this year, we saw the African Union adopting a continental strategy on AI, something that was in response to, I guess, conversations that we have at platforms like this, that at least if we can have a continental strategy, it gives direction and guides us on the future of AI development in Africa. And apart from the two frameworks, we also have a data policy framework. It has been in place since 2022, and it is there to support member states on how to utilize or unlock the value of data. So it’s not only looking at personal data, it’s also looking at non-personal data, and issues on data sharing are quite central in the policy framework. Issues on cross-border data flows are also quite central. And again, we are towards the finalization of the African Continental Free Trade Agreement and a protocol specifically on digital trade, which also emphasizes the need for AI development in Africa and the need for data sharing. 
So some of the important issues that the continent is prioritizing: the first one I’ll touch on is human capital development. So there’s a lot of discussion around how best we can skill the people of Africa, so we have more and more people with AI skills, more and more people who are working in the STEM field, for example. And a lot of initiatives are actually going towards building our own human capital. And with people who are already late in their careers, there’s also that question of how we can best re-skill them. And I think that’s where we need support, from the private sector mostly, to support a lot of people who are advanced in their careers on how to re-skill and get new skills that are relevant to the age of AI. And an important area, again, an important pillar for Africa is infrastructure. So we’ve been talking about global digital divides and the need to have access to digital infrastructure. And that is still a big challenge for Africa. So it’s not just talking about AI, it’s coming back to the foundational steps that we need. We need to start having access to the internet. We need to have access to basic infrastructure, building on that. And then, of course, with AI, there are discussions around computing power and how best we can have more and more data centers in Africa to support, again, AI innovation. And I’m not going to talk about the enabling environment because that’s more regulatory issues, and I’m sure we have been talking about the issues of how best to regulate. But there, just to emphasize again, beyond regulating AI and personal data, there are discussions around how we can best have laws, be they intellectual property laws, taxation laws, and different incentives, to attract more and more innovation on the continent. And then, I guess the most important for the continent is the building of the AI economy. 
How do we go about it in a way that is going to bring actual value to African actors and African citizens? And there, again, there are promises. It’s still not clear how we’ll go about it. For example, I see I’m running out of time. Can I just go to? Yes, so another important issue, again, is the importance of strategic partnerships. We cannot do this by ourselves. We are aware of that. And there is a need, again, to see how best we can collaborate with international partners to help us develop our own AI ecosystem.

Luca Belli: Fantastic, and exactly, these are points that apply across the full spectrum of Global South countries. But it’s very, very important to raise them. Let’s now move to another part of the world, which is close to you, Professor Bianca Kremer. She is a member of the board of CGI.br, the Brazilian Internet Steering Committee, and I also have the pleasure of having her as a colleague at FGV Law School Rio. Please, Bianca, the floor is yours.

Bianca Kremer: Thank you, Luca. I will take off my headphones because they are not working very well and I don’t want to disturb the conference for now. So, thank you so much for inviting me. It’s a pleasure to be here. This is my first IGF, although I have been working with AI and tech for the last 10 years. I have been a professor, an activist in Brazil, and also a researcher on the topics of AI and algorithmic racism and its impact in our country, Brazil, understanding also other perspectives to improve, develop, and also use the technology on our own terms. And this is something we have to consider when we talk about the impacts of AI and other new technologies, because we don’t have only AI. AI is the hype for now, but we have other sorts of technology that impact us socially and also economically speaking. So, I have been concerned with this specific topic of algorithmic bias for the last 10 years. And from 2022 to 2023, I was thinking about how to raise awareness of the problem in our country, developing research, and also understanding the impacts for our society on this topic. But this year, I have been changing my perspective a little bit, because I was concerned with raising awareness on the topic last year, and I thought that maybe it was important to take the research a step further. So, I have been developing research that is partially funded. One part of my research I have been developing on data and AI at FGV University with Professor Luca, on the impact of our Brazilian data protection law and economic platforms as well. But personally, I have been working on the topic of the economic impact of algorithmic racism in digital platforms. This is something that is very complex to do. 
We have to build indicators to understand the economic impact, so that we can see and observe the specificities of these impacts, and maybe provide some changes in our environment, in our legislation, and also in our public policies. So, this is something I have been up to, and just to address a little bit why this is a concern for us: until last year, I was working specifically on one type of technology, which is facial recognition, for example. Just to clarify a little how algorithmic racism works in Brazil: we have been seeing a huge amount of acquisitions of facial recognition technologies in the public sector, specifically for public security purposes. And through research, we have found that 90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown. The brown people in Brazil are called pardos. So, we have more than 90% of those arrested being affected by bias in the use of the technology. And this is not something trivial, because Brazil today has the third-largest incarcerated population in the world. So, we are in third place; we only trail China and the United States, for example. So, this is an important topic for us. And what are the economic impacts of these technologies? What do we lose when we incarcerate this number of people? What are the losses, the economic losses, for the person, for the ethnic group that is arrested, and also for society? What are the legacies that we feel now, with the use of these pervasive technologies, that come back from the colonial heritage? So, this is something that I have been working with, trying not only to raise awareness, but also to understand the actual economic impacts, with the use of economic metrics, for example. It’s ongoing, but it’s something that we have to understand a little bit. So, thank you so much, Luca, for the space, for the opportunity. 
I’m looking forward to hearing a little more from my colleagues about their topics. Thank you.

Luca Belli: Fantastic. Thank you very much, also for being on time. And indeed, as the human rights arguments are something we have been repeating for some years, the economic ones might be more persuasive, perhaps with policymakers. Now, let’s go directly to the next panelist. We have Liu Xinjing from Guanghua Law School of Zhejiang University.

Liu Zijing: Hello, everyone. I’m Liu Xinjing from Zhejiang University in China, and this is my co-author, Ying Lin; we also have a third co-author, who is in China now. We would love to share the Chinese experience with the use of artificial intelligence. Our report is about building smart courts through large language models: the experience from China. China’s smart court reform dates back to 2016, but even before that, in the 1980s, China’s leaders were considering how to use computers to modernize court management and legal work. In 2016, the Chinese government officially launched a program called the Smart Court Reform to digitalize court management, and this year it entered its third phase, which is the AI phase. This year, Chinese courts launched their own large language models, which was very impressive, so we would like to share some experience from China. Between 2016 and 2022, the Supreme People’s Court of China launched a system named the Faxin system, driven by large legal language models, which helps judges with legal research as well as legal reasoning. At the local court level, in Zhejiang province for example, the Zhejiang High Court launched its own language model named Phoenix, and it also has an AI copilot named Xiaozhi, used in the courts especially for pre-litigation mediation, which is a distinctive feature of Zhejiang province. And in Shanghai, the Shanghai High Court launched a system named the 206 system, especially for criminal cases. So you can see there are many distinctive features in China’s use of large language models, particularly in the judicial sector. We have also identified several features behind China’s success. The first is a very strong and sustained top-down policy.
The second is that there is weaker resistance within the judicial sector. And one of the most important features is the close cooperation between the private sector and the public sector in China to develop these large language models in-house. We have seen that, this year, many judges around the world have also used AI chatbots such as ChatGPT, but the Chinese courts developed their own large language models, which is quite unique. And I will share my time with my co-author.

Ying Lin: Hello, everyone. I’m Ying from the Free University of Brussels. I would like to continue from my colleague with the challenges and provide some initial suggestions. There are three main concerns for us. The first is about development. As we know, advanced AI requires substantial financial resources, and only a few developed regions, such as Shanghai, as we mentioned before, can afford it. So this calls for special funds for less developed regions to foster equitable access to AI-powered judicial resources. There are also issues with the public-private partnership. The biggest problem is public input but private output: what if those private companies use this data and similar products for their own benefit? What if they dominate the relationship and exert great influence over judicial decisions? So robust oversight mechanisms are needed to prevent undue influence and ensure transparency. The second concern is fairness. On the one hand, AI assistance raises concerns about transparency and due process. Can the judge really know how the algorithm works, and whether the decision is really made by the AI or by a human being? Delegating decision-making authority to AI assistants blurs the lines of responsibility, potentially weakening judicial accountability, and with this automated process there is also a question of whether all the parties in a case can represent themselves fully. This underscores the importance of transparency and explainability. On the other hand, there are substantive fairness issues: AI systems are biased and sometimes they make things up. We need a human in the loop. So integrating ethical frameworks and guidelines into AI systems would help, and ongoing dialogue between legal experts and AI developers will also help. The last concern is the data issue. Making judicial decisions involves massive processing of sensitive personal data.
We need strict data security protocols, technical safeguards, and clear rules on how government data assets are recognized and used by private partners and governments. And when smart courts are developed at the national level, national security risks also arise. So robust cybersecurity measures to prevent unauthorized access and data breaches are essential to ensure the integrity and security of the smart court system in China.

Luca Belli: Thank you very much, also for being perfectly on time and for raising at least two very important issues. First, even if we build AI, it has to run on something; it is not only the model but also the compute that is relevant. Second, it needs to be transparent, because probabilistic systems like LLMs are frequently very opaque, and from a due process and rule of law perspective it is not acceptable to simply say "we know that it works"; it needs to be explainable. All right, fantastic. Let’s get to the last couple of in-person speakers, Rodrigo Rosa Gameiro and Catherine Bielick from MIT. Please, the floor is yours.

Rodrigo Rosa Gameiro: Okay, my name is Rodrigo. I’m a physician, and I’m also a lawyer by training. I grew up in Brazil, but I currently live in the US. I work at MIT with Dr. Bielick here, where we do research in AI development, alignment, and fairness. One question I had in mind while thinking about this panel is: how do we make sense of where we stand with AI globally today? I often find myself turning to literature for perspective, and there is one line from Dickens’ A Tale of Two Cities that feels especially fitting: "It was the best of times, it was the worst of times." Because for some, this is indeed the best of times. AI can work, and does work, in many cases. In healthcare, AI has enabled us to make diagnoses that were simply not possible before. AI is enabling us to accelerate drug development and transform our understanding of medicine in ways we never imagined. The problem is, this is also the worst of times. The benefits of AI remain largely confined to a handful of nations with robust infrastructure. Meanwhile, the global majority is pushed to the sidelines. And even within countries that lead AI development, these technologies often serve only the privileged few. We have documented, for instance, AI systems recommending different levels of care based on race, and vast regions of the world where these technologies don’t reach communities at all. The digital divide isn’t just about access; it’s about who gets to shape these technologies, who benefits from them, and who bears their risks. So, how do we ensure that AI upholds human rights for everyone? How do we build AI that truly serves every population, AI that follows the principles of non-maleficence, beneficence, autonomy, and justice? I would argue that the answer actually lies in the title of this panel and of this book, because there can be no AI for the global majority if it is not from the global majority.
And this brings me to our chapter in the book, which is "From AI Bias to AI by Us." At our lab at MIT, led by Dr. Leo Celi, who unfortunately could not be here today, we’ve made efforts to move beyond just talking about these issues. We’ve created concrete ways to measure progress and drive change, and what we’ve learned is powerful: when you give everyone a seat at the table, innovation flourishes. Let me share a little story that illustrates this. Through our work, we connected with researchers in Uganda. We didn’t come as saviors or teachers; we came as collaborators. As a result of our collaboration, the team there has built its own dataset and developed its own algorithms to solve its own local challenges. This also secured international funding. In fact, they taught us much more than we taught them. And this isn’t an isolated story. Through PhysioNet, our platform for sharing healthcare data, we have enabled collaboration across more than 20 countries. We’ve hosted datathons that bring together multidisciplinary local talent and leadership worldwide to collaborate on solving local problems. The results: more than 2,000 publications with 9,000 citations, but most importantly, AI solutions that actually work for the communities they serve. But here’s what we’ve learned above all else: our approach isn’t the only answer. Effective AI governance needs more than individual initiatives; it requires all stakeholders working together towards shared goals. My colleague, Dr. Bielick, will explain this further. Thank you. Thank you, Dr. Gameiro.

Catherine Bielick: So my name is Dr. Catherine Bielick. I’m an infectious disease physician, an instructor at Harvard Medical School, and a scientist at MIT studying AI, outcome improvement for people with HIV, and bias reduction. I work here at MIT Critical Data. We are publishing here as a case study, but we’re just one group, in one country, from one perspective, in one professional field: healthcare artificial intelligence. And this discussion is about so much more. One way I would like to think about international governance of AI from the global majority is through historical precedent and context, because we don’t want to reinvent the wheel. We don’t think that everyone around the world should be doing what MIT Critical Data is doing; individual countries have individual needs. And I think there is already a precedent that we would contend is a good framework we can emulate going forward for AI from the global majority. I’m talking about the Paris Agreement, the climate accords, where nearly 200 countries came together to agree on one common goal, with individual needs per country based on their own unique populations. There are five core features I want to take from the Paris Agreement and carry over to AI from the global majority. The main thing is that this is a global response to a crisis of what I will call inequitable access to responsible AI, and I think all those words carry a lot of different meaning and weight. Of the five core features, the first is a collective international response with differentiated responsibilities, where the wealthier nations carry more of the burden of open leadership and knowledge sharing. The second is maybe the most important: localized flexibility.
The Paris Agreement has nationally determined contributions, which I think carry over to AI from the global majority: each country defines its AI priorities for its own people, and then we come together, put them together, and agree on a global standard. Implementation domains differ in so many areas: healthcare, agriculture, disaster response, education, law enforcement, job displacement, economic sustainability, environmental and energy needs, and you can go on. There is just no one-size-fits-all. What comes with that is a third core feature: transparency and accountability. That is accounted for in the Paris Agreement and can also carry over to us today. There are regular reviews for every country, and there are domain-specific non-negotiables, such as reducing carbon emissions by a quantifiable amount per country. In our case, there could be a federated auditing system, similar to federated learning in the way it protects privacy. The fourth feature is financial support and channeling: developing nations must have resources channeled to them, so that people can use those resources, along with technology sharing, to develop and implement their focused AI tools, and also build the infrastructure to evaluate those outcomes, which is just as important, if not more so. And the last is the global stocktake, a term used a lot in the Paris Agreement. The key is that specific outcomes are determined by specific groups and specific countries, and we can then aggregate them into a single tracking of progress. With this unified vision for the future, it takes us out of the picture, because I don’t think we can or should be prescribing what the global majority wants or needs from Harvard or MIT or wherever. Every stakeholder needs to have an equal voice in this.
And that, I think, is the pathway to international governance with those core features. So why can’t a meeting like this be talking about the equivalent of an international agreement, where we can all have the same equal voice in participating towards the same common goal? We’re all here. There is no shortage of beneficence, non-maleficence, equity, and justice among all of you; these are the pillars of medical ethics. And there is no shortage of resources when we come together in a unified partnership.

Luca Belli: Thanks. Fantastic. We already have a lot to think about, so I would first like to ask the people in the room to start thinking about their comments or questions, because what we are trying to do is to then have a debate with you. Let’s now pass to the online panelists, who are also quite a few, and I really hope they will strictly respect their three minutes each. We should already have a lot of them online, so this is the moment when our remote moderation friends should be supportive. The first one should be Professor Sizwe Snail Ka Mtuze.

Sizwe Snail Ka Mtuze: Thank you very much, Dr. Belli. Thank you very much, delegates and everyone in the room. Indeed, IGF time is always a good time, and it’s always a good time to collaborate. I’ve had the pleasure of working this year with two lovely ladies, Ms. Morihe and Ms. Nzemande. Ms. Morihe is one of the attorneys at the firm and Ms. Nzemande is a paralegal. We looked at the evolving landscape of artificial intelligence policy in South Africa on the one hand, as well as the possibility of drafting artificial intelligence legislation. I’m mindful of the three minutes allocated to us, so I want to fast-forward and say that in South Africa the topic of artificial intelligence has been discussed over the last two to three years on various levels. On one level, there was a presidential commission, through which the president of South Africa received recommendations from a panel he had constituted on how the fourth industrial revolution should be approached and what interventions should be made with regard to aspects such as artificial intelligence. Then it went a bit quiet; Covid came and went, and data protection became the big issue. However, artificial intelligence is back. It’s the elephant in the room, and South Africa has been trying to keep up with what is happening internationally. On the one hand, South Africa drafted what it called the South African draft AI strategy, published earlier this year, and the strategy received both very warm and very cold comments. Some authors and jurists in South Africa were very happy with it, saying it’s a good way forward, while other jurists asked: this is just a document of 53 pages, why are we having this? South Africa then responded in early August, after all the critique, with a national artificial intelligence policy framework.
This document has been reworked; it looks much better, it has objectives, and it has been trimmed down from the 53-page document. Looking at what is happening in Africa as well, I think it is in line with some of what people want to achieve in Africa with regard to artificial intelligence and its regulation. And it looks like I’m running out of time, so that is my contribution to this session.

Luca Belli: All right, thank you very much for respecting the time. Again, we are mindful that every short presentation provides only a teaser of the broader picture, but we encourage you to read the full papers. Our next speaker is actually from our partner organization, the Coalition on Data and Trust.

Stefanie Efstathiou: Fantastic. I’m happy to be here. As mentioned, I’m an IP and digital dispute resolution lawyer based in Germany, in-house counsel, and a PhD candidate researching AI. However, I’m here today in my capacity as a member of the EURid Youth Committee. So, ladies and gentlemen, esteemed colleagues, I would like to draw attention today to the transformative and urgent discourse on regional approaches to AI governance, as highlighted in the recent report, AI from the Global Majority. This report underscores that while artificial intelligence promises to reshape our societies, it must do so inclusively and equitably. From Latin America to Africa and Asia, regional efforts, as we see in the report, demonstrate resilience and innovation. Latin American nations are forging frameworks inspired by global standards yet rooted in local realities, emphasizing regulatory collaboration. And in Africa, the RISE governance framework exemplifies a vision for integrated data governance, emphasizing cooperation, accountability, and enforcement. These efforts reflect not only unique socio-political contexts but also the shared aspiration to ensure AI serves as a tool for empowerment and not exploitation. A key dimension often overlooked is the role of youth in shaping AI’s trajectory. The younger generation, across but not limited to the global majority, should not only adapt to regional frameworks but actively participate in and lead the change. Youth should be more in focus and participate as a stakeholder, since they have a unique inherent advantage: they are the ones who will have to adapt more than any other generation to the change, and will effectively live in a different world than the generations before them. This involvement can take various forms; starting from data-protection-driven policies on ensuring student data privacy in Africa to youth-led innovation hubs in Latin America is a good way to go.
Nonetheless, it is our duty to amplify these voices and incorporate their ideas into policymaking processes, just as it is the duty of the youth to actively participate and immerse itself in the sphere of responsible AI innovation and policymaking. The energy and creativity of the younger generation signal a brighter future for AI governance. However, challenges persist, and we have seen this: digital colonialism, data inequities, and systemic biases threaten to widen the divides. As the report highlights, it is imperative to address these disparities by adopting inclusive frameworks, fostering regional cooperation, and prioritizing capacity-building initiatives tailored to each region’s needs, yet with a minimum common global understanding, similar to what Dr. Bielick described earlier. As we move forward, let us reaffirm, and I want to close with this, our commitment to an AI future that embodies fairness, sustainability, and human-centered innovation, grounded in regional diversity but without causing fragmentation, and inspired by the vision and drive of youth. Thank you very much.

Luca Belli: Thank you very much, Stefanie. That is actually a very good introduction to this first slot of online presentations, dedicated to regional approaches to AI: what kind of approach is emerging at the regional level in various regions of the world. Our next speaker, Dr. Yonah Welker, who is at MIT, a former tech envoy, and now leading multiple EU-sponsored projects, has worked quite a lot on this, and he also has a short presentation for us. Our technical support can confirm that he can share his presentation.

Yonah Welker: Yes, yes, it’s my pleasure to be here. I’m joining from Riyadh, where I serve as an envoy and advisor to the Ministry of AI. I will be mindful of the time. The area of disability, educational, and medical technologies is extremely complex, and it’s almost one year since 28 countries signed the Bletchley Declaration, yet unfortunately this area is still underrepresented. The complexity is not only that there are currently over 120 companies working on assistive technology, but also the complexity of the models involved: we have biases related to supervised, unsupervised, and reinforcement learning, and issues of recognition, cues, and exclusion. So let me quickly share our conclusions on what we can do to fix this. First of all, I believe we should work on regional solutions. We can’t just generalize from ChatGPT, because for most regional languages we have 1,000 times less data; we need to build our own regional solutions, not only LLMs but also SLMs, with perhaps fewer parameters but more specific objectives and greater efficiency. Second, we should work together to create open repositories of cases and taxonomies, covering not only use cases but also what we call accidents, which is what we work on with the OECD. Third, dedicated safety models: these include additional agents that help improve accuracy, fairness, and privacy, as well as dedicated safety environments and oversight, with specific simulation environments for complex and high-risk models. We are also actively working on more specific intersectional frameworks and guidelines with UNESCO and UNICEF, for instance on digital solutions for girls with disabilities in emerging regions, the WHO’s work on AI in health, and disability in the OECD’s AI accidents repositories. And finally, we should understand that all the biases we see in technology today are actually a reflection of historical and social issues.
For instance, even beyond AI, only 10% of the world’s population has access to assistive technologies, and 50% of children with disabilities in emerging countries are still not enrolled in school. So we can’t fix this through one policy, but through a combination of AI, digital, social, and accessibility frameworks. Thank you so much.

Luca Belli: Thank you very much, Yonah, for respecting the time. Now let’s move to another speaker: our friend Ekaterina Martynova from the Higher School of Economics. She was a researcher with us in Rio last year. Very nice to see you again, Katya, even if only online. Please, the floor is yours.

Ekaterina Martynova: Yes, thank you so much, Professor Luca. I will be very brief, just to give an overview of the current stage of AI development here in Russia. The first thing I should note is the increase in budget spending, an actually unprecedented level, and the development of AI is one of the state’s key priorities. The regulatory approach, though, is still quite cautious: the priority seems to be to develop the technology, not to hinder its development. So we still don’t have a comprehensive legal act, such as a federal law on AI; we have national strategies as pieces of subordinate legislation, as well as some market-driven self-regulation. In terms of practical application, AI is being used quite intensively in public services, and we have some sandboxes, especially here in Moscow, first of all in the public healthcare system and, of course, in the field of public security and investigations. Here I come to the main concerns with using AI in these fields. The first, obvious one is the human rights concern, which has already been raised and is very acute for Russia; it was also a question considered by the European Court of Human Rights regarding the procedural safeguards provided to people detained through the use of facial recognition systems. We still very much need to develop our legislation here to provide more safeguards, and we look closely at the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law. Though Russia is not currently a member of the Council of Europe, we consider that its provisions on standards of transparency, accountability, and remedies can be useful for our national development, and maybe for developing a common basis within the BRICS countries or with our partners in the Shanghai Cooperation Organization.
The second problem is data security. Here we have a special center created under the auspices of the Ministry of Digital Development to serve as the central hub for data sanitization and data minimization, especially for the biometric personal data used in the digitalization of public healthcare services. And finally, as Luca mentioned in the opening speech, there is the problem of AI and cybersecurity, the particular topic I research: AI-powered cyberattacks, by which Russia has been targeted in recent years. We are considering which legal mechanisms can be developed to hinder such malicious uses of AI in cyberspace by state and non-state actors. Here, of course, we need joint efforts at the international level to develop a framework for the responsible use of AI by states, along with rules of responsibility and rules for attributing these types of attacks to the states that may sponsor such operations. So I will stop here, and thank you very much for your attention.

Luca Belli: Thank you very much, Katya, for this very good overview. Now let’s conclude this first segment of online contributions with Dr. Rocco Saverino from the Free University of Brussels.

Rocco Saverino: Thank you, Dr. Luca Belli. I’m not yet a doctor, as I’m a PhD candidate, but thank you. And yes, of course, I am one of the authors of the paper we submitted with my colleagues here at the Free University of Brussels. To respect the time, I’m going to wrap up the key points of our paper. We look at global trends and how Latin American countries are incorporating AI rules into their data protection frameworks, influenced by these global trends, particularly the new digital regulations, which have also led to emerging AI regulations in Latin America. Because of this, we analyzed in particular the cases of Brazil and Chile, which are establishing specialized AI regulatory bodies, reflecting the region’s awareness of the complex issues raised by AI technologies. We looked at Brazil’s approach with Bill 2338 of 2023, though here we should make a disclaimer: as many of you know, on 28 November another proposal was presented, and we could not update our paper because it had already been submitted; so we analyzed the previous version, in which the role of the data protection authority was very important. We also looked at Chile’s approach, because Chile is advancing its AI governance model, proposing an AI technical advisory council and a data protection agency to enforce the AI law. Of course, when we talk about AI regulation, we also talk about data governance, and data governance is a key factor in shaping AI oversight, with a focus on transparency, accountability, and the protection of fundamental rights. This brings both challenges and opportunities.
Latin American countries face challenges such as the need for coordination among regulatory bodies, developing specialized expertise, and allocating sufficient resources; but there are also opportunities, because the region can shape AI governance proactively, adopting a risk-based approach and integrating AI governance into existing data protection frameworks. We believe that Latin American countries can contribute to the global AI governance discourse by developing regulatory models that reflect the region’s unique social, economic, and cultural context.

Luca Belli: Excellent, fantastic. Now that we have concluded our regional perspectives, we can move on to the social and economic perspectives. The first presenter is Rachel Leach, who co-authored one of the papers on AI’s environmental, economic, and social impacts. So, Dr. Leach, please, the floor is yours.

Rachel Leach: Thank you. Our project is an exploratory analysis of AI regulatory frameworks in Brazil and the United States, focusing on how the environment, particularly issues of environmental justice, is considered in the regulations of these countries. Broadly, we found that regulations in both countries further the development of AI without properly interrogating the role that AI itself and other big data systems play in causing harm to the environment, particularly in exacerbating environmental disparities within and across countries. For example, in July 2024, the Brazilian federal government launched the Brazil Plan for Artificial Intelligence, investing BRL 4 billion in the hope of leading AI regulation in the global majority. The plan centered the benefits of AI with the slogan "AI for Good for Everyone" and invested in the use of AI to mitigate extreme weather, including a supercomputer system to predict such events. Additionally, in the U.S., President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence operates under the assumption that AI is a tool with the potential to enable the provision of clean electric power, again without examining the environmental issues raised by the technology itself. These examples are just a snapshot of the trend we identified: both countries take a largely techno-solutionist approach to understanding AI. What this means is that their regulations tend to operate under the assumption that there is a technological solution to any problem. This approach leads to regulations that vastly under-consider the externalities, or harms, of technology, and that center technological solutions even where that may not be the best approach. Okay, so turning now to the solutions we wanted to highlight.
First, when considering the environmental and social costs of AI, it is crucial to consider embodied carbon, meaning the environmental impact of all stages of a product’s life. As many people have discussed, developing and using AI involves various energy-intensive processes, from the extraction of raw materials and water, to the energy and infrastructure needed to train and retrain these models, to the disposal and recycling of materials. And often these environmental costs fall much harder on the global majority, particularly when US-based companies are siting many of their data centers in Latin America, for instance, exacerbating issues such as droughts in that region. The second action we wanted to highlight is the importance of centering environmental justice concerns comprehensively across all discussions about AI, from curriculum to research to policy. We think this is really important in order to interrogate the assumption that AI technology can necessarily solve social and environmental problems. So yes, thank you again for having us.

Luca Belli: Excellent. Also very good that you are almost all on time. Next up is Avantika Tewari, who is a PhD candidate at the Center for Comparative Politics and Political Theory at Jawaharlal Nehru University in New Delhi. Do we have Avantika? Yes. Hi, can you hear me? Perfect. Yes. Very well. Thank you so much. Welcome. Great to be here with all of you.

Avantika Tewari: So I’m just going to start without much ado. And just to give you a little bit of context about this paper: in India we have something called the Data Empowerment and Protection Architecture, on which essentially all the debates around AI governance are also hinged, concerning the control, regulation, and distribution of data. So there has been an emphasis on consent-based data-sharing models, devised to basically create a data-empowered citizenry. It is in this context that I have written this paper, and I want to foreground that while technologies such as ChatGPT and generative artificial intelligence appear to be autonomous, their functionality depends on vast networks of human labor, such as data annotators, moderators, and data laborers, hidden behind the polished facade of machinic intelligence. Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation through the cheap appropriation of land, labor, and resources for compute technologies and digital infrastructure. Similarly, digital platform users are framed as empowered participants, with their likes, shares, and posts generating immense profits for tech giants, all without compensation. This represents the double bind of digital capitalism, where the unpaid participation of users is reframed as agency and labor precarity is disguised as opportunity, with the global majority bearing the brunt of both. The platform economy, built on the twin pillars of fragmented attention and compulsive participation, rebrands user exploitation as agency and convenience.
By embedding individuals in digital enclosures, it transforms participatory cultures into systems of unpaid labor, commodifying interactions which were previously non-commodified, such as social relations of interaction and communication. What emerges is what I term an undead dimension of social enjoyment, which is a relentless pursuit of meaning, success, and community that is inherently mediated by algorithms. Yet the promise of satisfaction remains elusive, ensnaring individuals in a loop of alienation and exploitation while making their engagement complicit in the production of data analytics and AI. Data is thus fetishized as a commodity, retroactively imbued with meaning as valuable information fueling market expansion, diversification, and stackification, which is paradoxically framed as a governance model, where data is framed as a resource that can be reclaimed as an extension of the self or as a social knowledge commons. Yet this transformation conceals a deeper reality, which is that the labor upon which these platforms depend is increasingly fragmented into gig-based, task-based work. This labor sustains the development of AI technologies that paradoxically aim to automate the very low-skilled tasks on which they rely. The shift towards low-skilled, task-based, on-demand work is not merely a strategic adaptation by platforms, but an ideological reconfiguration of labor relations, which is what I call the ideology of prosumerism in the paper. So increasingly, this fragmentation is actually an attempt by capital to overcome its own dependency on labor. And so what I want to really foreground in this paper is that the real paradox is not whether technology can empower us, but in how monopoly capital's drive to overcome its dependence on labor leads to a fragmentation of the global division of labor, which then disproportionately impacts the global majority.
And this results in the partialization of work and the automation of tasks, produced by the severance of labor’s embeddedness within the production process through the fragmentation of work processes. So I’ll stop here, and thank you again.

Luca Belli: Thank you very much, Avantika, for bringing these considerations about labor, the difference between consumer and prosumer, and this kind of antagonism that you very well situated. Now staying in India, next speaker is Amrita Sengupta from the Center for Internet and Society, soon to be one of our incoming fellows at the FGV Law School. Please, Amrita, the floor is yours.

Amrita Sengupta: Thank you so much, Professor Belli. I’m also joined by my co-author, Shweta Mohandas, who’s also online. So our essay, The Impact of AI on the Work of Medical Practitioners, is actually part of a larger mixed-methods empirical study that we did, trying to understand the AI data supply chain in health care in India. In this particular essay, through primary research with medical professionals, we did a survey of 150 medical practitioners and also conducted in-depth interviews. We tried to look at the current use of AI by medical practitioners in their research and practice, and also at some of the new challenges and the perceived benefits. Through this, we also try to raise certain concerns and issues about its current use, and about the cost and benefit of the work that doctors and medical professionals now have to put into AI systems as these systems are developed. So there are four big issues that we want to raise. The first one is that in the short term, doctors have to put in additional time and effort in preparing data through labeling and annotation, but also in learning these technologies and providing feedback on AI models. These are real costs that need to be considered before we burden an already overburdened health care system. So, for example, in our survey, we heard that nearly 60% of medical practitioners cited the lack of AI-related training and education as a big barrier to the adoption of AI systems. Doctors also raised concerns about the effort and infrastructure required on their side to digitize health reports, because of the nascent stage at which digital health data exists in the Indian health care system today.
The second issue that we want to foreground is the current use of AI in private health care and less so in public health care, which is where there is a much larger need for meaningful interventions, for efficiency and time-saving, and for providing meaningful health care. This raises the question: what does it serve, and whom is it privileging through the ways in which it is currently being operated? The third issue, and a critical one at that, is liability. Academics and medical professionals in our study flagged the issue of liability. For instance, who would be liable for an error in diagnosis made by an AI application that aids medical professionals? A common concern we also heard from doctors and academics was that AI was meant to assist doctors, but often enough, doctors felt the pressure that AI could take their place or was threatening to take their place. The last issue that we want to raise is the longer-term impact of AI. In our survey, 41% of medical professionals suggested that AI could be beneficial in time-saving but could also help in improving clinical decisions. The question that we ask is: what kinds of risks does over-reliance on AI raise, leading to, let’s say, a loss of clinical skills, or of course the representational biases that AI models may present because of where the data is coming from, the problems of reliance on global north data, and so on. Lastly, we say that if we need to prioritize AI, we should prioritize areas where it could most benefit the larger public interest, with the least disruption to existing workflows, and be considerate of whether the costs actually outweigh the benefits.

Luca Belli: Excellent. And now, in our last section, we are going to start to see how the global majority is reacting to AI and what kind of innovative thinking and solutions are being put forward. And then we will hopefully open the floor for debate; as we started with some minutes of delay, I hope our colleagues will indulge us and give us five extra minutes. We have now Elise Racine from the University of Oxford. Do we have Elise here? Yes, please go ahead. Hi. Hi, everyone. So I shared a presentation PDF in the chat.

Elise Racine: I’m Elise Racine. I’m a doctoral candidate at the University of Oxford. I study artificial intelligence, including reparative practices. So AI really does promise transformative societal benefits, but it also presents significant challenges in ensuring equitable access and value for the global majority. Today, I’ll introduce reparative algorithmic impact assessments, a novel framework combining robust accountability mechanisms with a reparative praxis to form a more culturally sensitive, justice-oriented methodology. So the problem is multifaceted. The global majority remains critically underrepresented in AI design, development, deployment, research, and governance. This leads to systems, as we’ve discussed, that not only inadequately serve but often harm large portions of the world’s population. For example, AI technologies developed primarily in Western contexts often fail to account for diverse cultural norms, values, and social structures. While traditional algorithmic impact assessments provide valuable accountability mechanisms, they often fall short in ameliorating injustices and amplifying marginalized and minoritized voices. Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action. First, socio-historical research, delving into the context and power dynamics that shape AI systems. Second, participant engagement and impact-harm co-construction that goes beyond tokenism and redistributes power. Third, sovereign and reparative data practices that incorporate decolonial, intersectional principles while ensuring communities retain control over their information. Fourth, ongoing monitoring and adaptation focused on sustainable development and adjusted based on real-world impacts. And the fifth and last step is redress, moving beyond identifying issues to implementing concrete, actionable plans that address inequities.
To illustrate these steps in practice, consider a US-based company deploying an AI-powered mental health chatbot in rural India. A reparative approach may, for instance, employ information specialists with data curation and archival expertise to ground socio-historical research in actual reality; implement flexible participation options with fair compensation and mental health support to drive meaningful community engagement; establish community-controlled data trusts; develop new evaluation metrics that incorporate diverse cultural values and priorities; and partner with local AI hubs and research institutes that empower communities to develop their own AI capabilities. These are just several examples; there are a few more, again, in the PDF, as well as in the report. But through this comprehensive approach, I want to emphasize how reparative algorithmic impact assessments move beyond merely avoiding harm to actively redressing historical, structural, and systemic inequities, including colonial legacies in their algorithmic manifestations. That was a large focus of the paper. By doing so, we can foster justice and equity, ultimately ensuring AI truly serves all of humanity, not just a privileged few. Thank you very much.

Luca Belli: Thank you very much. We are almost done with our speakers. We have now Hellina Hailu Nigatu from UC Berkeley. Please, Hellina, the floor is yours. Thank you. I am going to share my screen real quick. Okay. Hi, everyone.

Hellina Hailu Nigatu: My name is Hellina, and today I’ll briefly present our work with my collaborator, Zirak. So social media platforms, such as YouTube, TikTok, and Instagram, are used by millions of people across the globe. And while these platforms certainly have their benefits, they’re also a playground for online harm and abuse. Research showed that in 2019, the majority of the content posted on YouTube was created in languages other than English. However, non-English speakers are hit the hardest with content that they, quote, regret watching. Social media platforms have also resulted in physical harm. Facebook faced backlash in 2021 for its role in fueling violence in Ethiopia and Myanmar. With this in mind, we take a look at how platforms protect their users. Platforms rely on automated content moderation systems or human moderators. For instance, Google reported that 81% of the content flagged for moderation is self-detected, and that most of the content is detected by automated systems and then redirected to human reviewers. Additionally, Google uses machine translation tools in their moderation pipeline. However, automated systems do not work well for all languages. Research shows that the intersection of social, political, and technological constraints results in disparate performance for languages spoken by the majority of the world’s population. In terms of human moderators, the Google Transparency Report states that about 81% of the human moderators operate in English, and of the non-English moderators, only 2% operate in languages other than the highly resourced European ones. Majority world is a term coined by Shahidul Alam to refer to what were mostly called third world, developing nations, global south communities, et cetera. And the term global majority emphasizes that collectively these communities comprise the majority of the world’s population.
And as with their size, these communities are very diverse in terms of race, ethnicity, economic status, culture and languages. Within NLP, the majority world is exposed to harm and marginalization because they are excluded from state-of-the-art models and research. They are hired for pennies on the dollar as moderators with little to no mental or legal support. They’re exposed to harmful content when conducting their jobs as moderators and they are harmed by the failures of existing moderation pipelines. With the cycle of harm we see, there are two major lines of argument on including or not including these languages and their communities in AI. Either you are included in the current technology and as a result are surveilled, or you are left in the trenches with no protection or support. We argue that this is a false dichotomy in our paper and ask that if we remove the guise of capitalism that currently dominates content moderation landscape, is there a way to have moderation with the power primarily residing in the users?

Luca Belli: Thank you so much. Excellent. Now we have only two speakers to go. Isha Suri is the Research Lead at the Center for Internet and Society. Please, the floor is yours.

Isha Suri: Thank you, Professor Luca. I’ll just quickly share my screen. I’m joined by my co-author Shiva Kanwar, and we looked at countering false information and policy responses for the global majority in the age of AI. I’ll quickly give you a teaser and a rundown, and we’d be happy to take any questions. So one of the things that we… something seems wrong with my screen sharing here. As background and context, the World Economic Forum recognizes false information, including misinformation and disinformation, as the most severe global risk anticipated over the next two years. And multiple studies have demonstrated that social media is designed to reward and amplify divisive content, hate speech, and disinformation. For instance, an internal Facebook study revealed that its newsfeed algorithms exploit the human brain’s attraction to divisiveness, and that, if left unchecked, they would feed users more and more divisive content to gain user attention and increase time on the platform. And one of the factors that emerged was that these integrated structures and profit-maximizing incentives ensure that platforms continue to employ algorithms recommending divisive content. For instance, a team at YouTube tried to change its recommender systems to suggest more diverse content, but they realized that their engagement numbers were going down, which was ultimately impacting their advertising revenue, and they had to roll back some of these changes. And this, as we found, leads to a lot of harmful divisive content being promoted on these systems. We then delve into the regulatory responses that are emerging from global majority countries, and we realized that they fell into one of three large categories. One was amendments to existing laws, including penal code, civil law, electoral law, and cybersecurity law. And largely the focus was on ascribing criminal liability in cases where false information is defined broadly.
And we later found that that carries significant risks of censorship. In our paper, we also go into an India-specific case study where empirical research has demonstrated that platforms over-comply, and that leads to a chilling effect on freedom of speech and expression. Another aspect that is emerging is that legislative proposals are transferring the obligation to internet communication corporations; largely, the intermediary liability regime is being tinkered with. Legislation is being tied to the size of a platform. I think the German example comes to mind, where a for-profit platform with more than 2 million users has additional obligations, where manifestly illegal content and illegal content have to be taken down. There are also ex-ante obligations on intermediaries, such as the Digital Services Act in the EU. The Digital Services Act is an important one, I think, because that is one piece of legislation that really transfers the obligation onto platform providers to have more transparency in how their algorithms are working. In addition to regulatory responses, fact-checking initiatives have also emerged as a response to counter false information. Meta’s fact-checking initiative is the one that has probably taken the most prominence. But again, it leads to questions of inherent conflict. There are also concerns about the payment methods, how Meta is paying or reimbursing these fact-checkers, and there is a lack of clarity on whether there is sufficient independence within the organization as such. We also see a trend within global majority countries to mimic EU or global north regulations, also known as the Brussels effect. And with this, I’ll just segue into our conclusions and recommendations and tie together whatever we’ve discussed in the past few minutes. This is the broad table that we have in the essay.
I won’t dwell on it, but just to give you an overview of how we’ve categorized some of these countries: we looked at what the instrument and response is, what the criminal sanctions are, whether it’s an intermediary liability framework that they’ve introduced, and whether there is a transparency and accountability obligation that they have introduced. The European Union and Germany have been given as examples because we felt that they have additional transparency and accountability requirements, as opposed to some of the other countries that you see on your screen.

Luca Belli: So I’ll stop here, and thank you so much. Fantastic. And now last, but of course not least, Dr. Guangyu Qiao-Franco from Radboud University. Dr. Qiao-Franco, the floor is yours. Thanks, Professor Belli. And thanks for staying around for my presentation.

Guangyu Qiao-Franco: So my contribution is co-authored with Mr. Mahmoud Javadi of the Free University of Brussels, who is also present online today. Our research is on military AI governance. And in our paper, we highlight the concerning and widening gap between the North and the South in military AI governance. One striking observation is the limited and decreasing participation of global South countries in UN deliberations on military AI. Between 2014 and 2023, fewer than 20 developing countries contributed on a regular basis to UN CCW meetings on lethal autonomous weapons systems. Our interviews indicate different priorities in AI governance. While the global North emphasizes security governance and ethical frameworks, the global South prioritizes economic development and capacity building. The North and the South also diverge in their preferred approaches to military AI governance. Most developing countries prefer a legal ban on autonomous weapon systems, while the North favors soft law approaches represented by the REAIM Blueprint for Action and the US political declaration. However, these North-led frameworks have received limited endorsement from global South countries, and notably, none of the BRICS member states, key players in global innovation, have endorsed these documents. The global South’s participation in military AI governance is further complicated by the dual-use nature of AI technologies and geopolitical tensions; stricter access controls led by the global North and concerns about hindering AI development for security reasons have contributed to disengagement among global South nations. So in our paper, and also using this opportunity, we want to call for the building of an inclusive AI arms control regime that begins with a thorough assessment of the distinct needs and priorities of both the North and the South. Fostering international dialogue, building trust, and promoting partnerships are essential to bridging the divide.
Capacity building and knowledge transfer must also be prioritized to incentivize responsible technology use and encourage broader, more active engagement. So I will stop there, and thanks for your attention. Thank you very much for this.

Luca Belli: This has been an incredible marathon, very intense. We have a lot of food for thought. I am pretty sure people in the audience who have been with us over the past hour and a half have comments. If we can, I would take one or two very quick questions or comments from the room. Otherwise, we can have them over coffee. Is there any? Yes, I see one, only one comment. Good, fantastic. So can anyone give a mic? Otherwise, I can borrow mine. You can borrow mine. Hello. We’ll work very quickly.

Audience: Thiago Moraes, PhD fellow from VUB. And some of my peers are here today, both on site and online, which is great, and also several colleagues, which I like a lot. So going very quickly, and seeing the work that the coalition has been doing, last year I was able to be an author of the last edition. So it’s very nice to see the initiatives that are being discussed in the document. So what I was thinking here: there has been some ongoing discussion through the IGF of how regional IGFs could contribute to the global discussion. And I was just thinking, maybe the cases, especially from the global majority, for example, could be discussed in these regional IGFs. And we could try to find some way of showcasing them and then making these connections, especially now that we’re discussing the WSIS, like the renewal of the mandate and how we could make multistakeholderism a bit more concrete in action. Maybe that could be an interesting way. We can talk more later. I know we don’t have time. So I’ll stop here.

Luca Belli: All right, fantastic. Thank you. Can you hear me? OK, so this is not working. All right, so thank you very much, everyone, for the comments, the presentations, respecting the time. Just to remind you that there are still four copies of the book available here for those eager to have them. So you can still have four, or actually five. I don’t need one. And you can download it for free. Actually, as there will be only six months between now and the next IGF, and we have carefully presented this volume and also the volume of last year, we might use the occasion of the next IGF to have a debate building on what we have presented this year and last year. Maybe, for those who are interested, building a paper on the achievements that we have showcased here in this volume and in the volume of the past year. And this actually could be a good way of building upon what Thiago was mentioning: trying to connect the dots, showing the rationale behind all this, and showing the complexity. I think if something is clear from the very intense debate and presentations of today, it is that there are not only a lot of problems, but also a lot of thinking and a lot of potential solutions that can come up from the global south. A lot of challenges, of course, but there is also a lot of room to improve things and collectively at least identify problems and potential common solutions. So let me thank everyone for the very insightful work that I urge everyone to read in this volume and for the excellent presentations of today. Thank you very much. Thank you. Thank you. Thank you.

Ahmad Bhinder

Speech speed

134 words per minute

Speech length

741 words

Speech time

330 seconds

Developing regional AI strategies and policies

Explanation

Ahmad Bhinder discusses the Digital Cooperation Organization’s efforts to develop AI readiness assessment tools and data privacy principles for member states. The organization is working on creating frameworks to evaluate AI systems against ethical principles and human rights considerations.

Evidence

DCO is developing an AI readiness assessment tool for member states and drafting data privacy principles that consider AI implications.

Major Discussion Point

AI Governance Frameworks and Approaches

Developing ethical AI principles and assessment tools

Explanation

Ahmad Bhinder describes the DCO’s work on creating frameworks to assess AI systems against ethical principles. They are developing tools to help AI developers and deployers evaluate their systems’ compliance with ethical considerations.

Evidence

DCO is creating a framework that maps AI ethical principles to basic human rights and developing a tool for AI system developers to assess their systems against these principles.

Major Discussion Point

AI Ethics and Human Rights

Differed with

Rachel Leach

Differed on

Focus of AI governance

Ansgar Koene

Speech speed

150 words per minute

Speech length

386 words

Speech time

153 seconds

Addressing biases and discrimination in AI systems

Explanation

Ansgar Koene discusses the challenges of identifying and addressing the impacts of AI systems on different groups, particularly young people. He emphasizes the need for organizations to understand how their AI systems affect users and to address potential biases.

Evidence

Koene mentions that organizations often do not fully understand how young people interact with their AI systems or what particular concerns need to be taken into account.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Bianca Kremer

Agreed on

Addressing biases and discrimination in AI systems

Melody Musoni

Speech speed

145 words per minute

Speech length

810 words

Speech time

333 seconds

Creating inclusive AI governance frameworks for the global majority

Explanation

Melody Musoni discusses recent developments in AI governance in Africa, including the adoption of a continental strategy on AI. She highlights the priorities for AI development in Africa, including human capital development, infrastructure, and building an AI economy.

Evidence

Musoni mentions the African Union’s adoption of a continental strategy on AI and the development of a data policy framework to support member states in utilizing data.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreed with

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

Bianca Kremer

Speech speed

149 words per minute

Speech length

725 words

Speech time

290 seconds

Addressing biases and discrimination in AI systems

Explanation

Bianca Kremer discusses her research on algorithmic racism and its impact in Brazil. She highlights the need to understand and address the economic impacts of algorithmic bias, particularly in the context of facial recognition technologies used in public security.

Evidence

Kremer cites research showing that 90.5% of those arrested in Brazil using facial recognition technologies are black and brown individuals.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Ansgar Koene

Agreed on

Addressing biases and discrimination in AI systems

Addressing the economic impact of algorithmic racism

Explanation

Bianca Kremer emphasizes the importance of understanding the economic impacts of algorithmic racism. She is conducting research to assess the economic losses for individuals, ethnic groups, and society as a whole due to biased AI systems in law enforcement.

Evidence

Kremer mentions her ongoing research on the economic impact of algorithmic racism in digital platforms, focusing on developing indicators to measure these impacts.

Major Discussion Point

AI Impact on Labor and Economy

Liu Zijing

Speech speed

133 words per minute

Speech length

424 words

Speech time

190 seconds

Implementing AI in smart court systems

Explanation

Liu Zijing discusses China’s implementation of AI in its judicial system, including the development of large language models for legal research and reasoning. The presentation highlights the use of AI in various aspects of the judicial process, from pre-litigation mediation to criminal cases.

Evidence

Liu mentions specific AI systems implemented in Chinese courts, such as the Faxin system by the Supreme Court and the Phoenix system in Zhejiang province.

Major Discussion Point

AI in Judicial Systems

Ying Lin

Speech speed

122 words per minute

Speech length

366 words

Speech time

179 seconds

Addressing transparency and fairness concerns in AI-assisted judicial decisions

Explanation

Ying Lin discusses the challenges and concerns related to the use of AI in judicial systems. She highlights issues of transparency, due process, and the potential weakening of judicial accountability when decision-making authority is delegated to AI assistants.

Evidence

Lin raises questions about judges’ understanding of AI algorithms and the potential for AI to make up information, emphasizing the need for human oversight and explainable AI in judicial processes.

Major Discussion Point

AI in Judicial Systems

Luca Belli

Speech speed

150 words per minute

Speech length

2525 words

Speech time

1008 seconds

Adopting a multi-stakeholder approach to AI governance

Explanation

Luca Belli emphasizes the importance of a multi-stakeholder approach in AI governance, particularly in addressing cybersecurity challenges. He argues that cooperation between technical experts and policymakers is necessary to identify the best tools and standardization measures for AI governance.

Evidence

Belli cites his research on AI and cybersecurity in Brazil, highlighting the need for multi-stakeholder cooperation to implement effective security measures.

Major Discussion Point

AI Governance Frameworks and Approaches

Rodrigo Rosa Gameiro

Speech speed

160 words per minute

Speech length

599 words

Speech time

223 seconds

Ensuring equitable access to AI technologies

Explanation

Rodrigo Rosa Gameiro discusses the dual nature of AI development, highlighting both its benefits and challenges. He emphasizes the need to ensure that AI technologies serve all populations and uphold human rights principles for everyone.

Evidence

Gameiro mentions examples of AI benefits in healthcare, such as enabling new diagnoses and accelerating drug development, while also pointing out the digital divide and unequal access to these technologies.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

C

Catherine Bielick

Speech speed

167 words per minute

Speech length

739 words

Speech time

264 seconds

Creating inclusive AI governance frameworks for the global majority

Explanation

Catherine Bielick proposes using the Paris Agreement as a model for international AI governance. She suggests adopting a framework that allows for collective response with differentiated responsibilities, localized flexibility, and regular reviews to ensure accountability and progress.

Evidence

Bielick outlines five core features from the Paris Agreement that could be applied to AI governance, including nationally determined contributions and a global stocktake mechanism.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreed with

Melody Musoni

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

S

Sizwe Snail ka Mtuze

Speech speed

115 words per minute

Speech length

450 words

Speech time

232 seconds

Exploring AI development and challenges in Africa

Explanation

Sizwe Snail ka Mtuze discusses recent developments in AI policy and strategy in South Africa. He highlights the country’s efforts to create a national AI strategy and policy framework, while also noting the mixed reception these initiatives have received.

Evidence

Snail ka Mtuze mentions the South African draft AI strategy published earlier in the year and the subsequent national artificial intelligence policy framework released in August.

Major Discussion Point

Regional Perspectives on AI Development

S

Stefanie Efstathiou

Speech speed

128 words per minute

Speech length

501 words

Speech time

233 seconds

Ensuring equitable access to AI technologies

Explanation

Stefanie Efstathiou emphasizes the need for inclusive AI governance that serves the global majority equitably. She highlights the importance of youth participation in shaping AI’s trajectory and calls for amplifying diverse voices in policymaking processes.

Evidence

Efstathiou mentions examples such as data protection-driven policies for student privacy in Africa and youth-led innovation hubs in Latin America.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Agreed on

Need for inclusive AI governance frameworks

Y

Yonah Welker

Speech speed

135 words per minute

Speech length

374 words

Speech time

165 seconds

Ensuring equitable access to AI technologies

Explanation

Yonah Welker discusses the challenges and opportunities in developing AI technologies for people with disabilities. He emphasizes the need for original solutions tailored to specific languages and contexts, rather than relying on generalized models like ChatGPT.

Evidence

Welker mentions that there are over 120 companies working on assistive technology and highlights the need for dedicated safety models and environments for complex and high-risk AI applications.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

E

Ekaterina Martynova

Speech speed

145 words per minute

Speech length

545 words

Speech time

224 seconds

Examining AI development and regulation in Russia

Explanation

Ekaterina Martynova discusses the current state of AI development and regulation in Russia. She highlights the government’s increased spending on AI development and the cautious approach to regulation, focusing on developing technology rather than hindering it through strict laws.

Evidence

Martynova mentions the use of AI in public services, healthcare, and public security in Russia, as well as the development of sandboxes for AI testing.

Major Discussion Point

Regional Perspectives on AI Development

Differed with

Rocco Saverino

Differed on

Approach to AI regulation

Protecting human rights in AI development and deployment

Explanation

Ekaterina Martynova discusses the human rights concerns associated with AI use in Russia, particularly in public security and facial recognition systems. She emphasizes the need for more safeguards and transparency in AI deployment.

Evidence

Martynova mentions a case considered by the European Court of Human Rights regarding procedural safeguards for people detained through facial recognition systems in Russia.

Major Discussion Point

AI Ethics and Human Rights

R

Rocco Saverino

Speech speed

117 words per minute

Speech length

408 words

Speech time

207 seconds

Analyzing AI governance trends in Latin America

Explanation

Rocco Saverino discusses the emerging AI regulations in Latin America, focusing on Brazil and Chile. He highlights the establishment of specialized AI regulatory bodies and the integration of AI governance into existing data protection frameworks.

Evidence

Saverino mentions Brazil’s Bill 2,338 of 2023 (PL 2338/2023) and Chile’s proposed AI technical advisory council and data protection agency to enforce AI laws.

Major Discussion Point

Regional Perspectives on AI Development

Differed with

Ekaterina Martynova

Differed on

Approach to AI regulation

R

Rachel Leach

Speech speed

159 words per minute

Speech length

444 words

Speech time

166 seconds

Examining the environmental and social costs of AI development

Explanation

Rachel Leach discusses the environmental and social impacts of AI development, particularly in Brazil and the United States. She argues that current regulations are furthering AI development without properly addressing the environmental harms caused by AI and big data systems.

Evidence

Leach mentions Brazil’s BRL 4 billion investment in AI development and the U.S. Executive Order on AI, both of which focus on the benefits of AI without fully examining its environmental impacts.

Major Discussion Point

AI and Environmental Concerns

Differed with

Ahmad Bhinder

Differed on

Focus of AI governance

Considering embodied carbon in AI technologies

Explanation

Rachel Leach emphasizes the importance of considering embodied carbon in AI technologies. This includes the environmental impact of all stages of an AI product’s lifecycle, from raw material extraction to energy consumption for training and retraining models.

Evidence

Leach mentions that environmental costs often fall harder on the global majority, citing examples of U.S.-based companies locating data centers in Latin America, exacerbating issues such as droughts.

Major Discussion Point

AI and Environmental Concerns

A

Avantika Tewari

Speech speed

138 words per minute

Speech length

604 words

Speech time

262 seconds

Analyzing the exploitation of digital labor in AI development

Explanation

Avantika Tewari discusses the hidden human labor behind AI systems, particularly in data annotation and moderation. She argues that this labor, often outsourced to workers in the global majority, is underpaid and unacknowledged, perpetuating digital exploitation and colonial legacies.

Evidence

Tewari mentions platforms like Amazon Mechanical Turk that outsource tasks to workers in the global majority, reducing them to fragmented, repetitive tasks.

Major Discussion Point

AI Impact on Labor and Economy

A

Amrita Sengupta

Speech speed

183 words per minute

Speech length

583 words

Speech time

190 seconds

Examining the impact of AI on medical practitioners’ work

Explanation

Amrita Sengupta discusses the challenges and benefits of AI adoption in healthcare, based on a study of medical practitioners in India. She highlights issues such as the additional time and effort required for data preparation, concerns about liability, and the potential long-term impacts on clinical skills.

Evidence

Sengupta cites survey results showing that 60% of medical practitioners expressed lack of AI-related training as a barrier to adoption, and 41% suggested AI could be beneficial for time-saving and improving clinical decisions.

Major Discussion Point

AI Impact on Labor and Economy

E

Elise Racine

Speech speed

137 words per minute

Speech length

444 words

Speech time

194 seconds

Implementing reparative algorithmic impact assessments

Explanation

Elise Racine introduces the concept of reparative algorithmic impact assessments as a framework to address inequities in AI development and deployment. This approach combines accountability mechanisms with reparative practices to create a more culturally sensitive and justice-oriented methodology.

Evidence

Racine outlines five steps in the reparative algorithmic impact assessment process, including socio-historical research, participant engagement, sovereign data practices, ongoing monitoring, and concrete redress plans.

Major Discussion Point

AI Governance Frameworks and Approaches

H

Hellina Hailu Nigatu

Speech speed

153 words per minute

Speech length

480 words

Speech time

187 seconds

Addressing challenges in AI-powered content moderation for diverse languages

Explanation

Hellina Hailu Nigatu discusses the challenges of content moderation on social media platforms, particularly for non-English content. She highlights the limitations of automated systems and human moderators in effectively moderating content in languages spoken by the majority of the world’s population.

Evidence

Nigatu cites research showing that the majority of content posted on YouTube in 2019 was in languages other than English, yet 81% of human moderators operate in English.

Major Discussion Point

AI in Content Moderation and Misinformation

I

Isha Suri

Speech speed

162 words per minute

Speech length

750 words

Speech time

277 seconds

Developing policy responses to counter false information in the age of AI

Explanation

Isha Suri examines regulatory responses to false information in global majority countries. She discusses various approaches, including amendments to existing laws, transferring obligations to internet communication corporations, and fact-checking initiatives.

Evidence

Suri provides examples of regulatory responses from different countries, such as Germany’s approach to regulating platforms with more than 2 million users and the EU’s Digital Services Act requiring more transparency in platform algorithms.

Major Discussion Point

AI in Content Moderation and Misinformation

G

Guangyu Qiao Franco

Speech speed

122 words per minute

Speech length

329 words

Speech time

161 seconds

Addressing the gap between North and South in military AI governance

Explanation

Guangyu Qiao Franco highlights the concerning gap between the global North and South in military AI governance. She discusses the limited participation of global South countries in UN deliberations on military AI and the divergent priorities and approaches to governance between the North and South.

Evidence

Franco mentions that fewer than 20 developing countries contributed regularly to meetings on lethal autonomous weapons systems under the UN Convention on Certain Conventional Weapons (CCW) between 2014 and 2023, and notes that none of the BRICS member states have endorsed North-led frameworks for military AI governance.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreements

Agreement Points

Need for inclusive AI governance frameworks

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Creating inclusive AI governance frameworks for the global majority

Creating inclusive AI governance frameworks for the global majority

Ensuring equitable access to AI technologies

These speakers emphasize the importance of developing AI governance frameworks that are inclusive and consider the needs of the global majority, including youth participation and localized flexibility.

Addressing biases and discrimination in AI systems

Ansgar Koene

Bianca Kremer

Addressing biases and discrimination in AI systems

Addressing biases and discrimination in AI systems

Both speakers highlight the need to identify and address biases and discrimination in AI systems, particularly their impacts on different groups and in specific contexts like facial recognition technologies.

Similar Viewpoints

Both speakers address the hidden costs of AI development, with Leach focusing on environmental impacts and Tewari on labor exploitation, particularly in the global majority countries.

Rachel Leach

Avantika Tewari

Examining the environmental and social costs of AI development

Analyzing the exploitation of digital labor in AI development

Both speakers discuss challenges related to content moderation and misinformation in the context of AI, particularly focusing on the needs of diverse language communities and global majority countries.

Hellina Hailu Nigatu

Isha Suri

Addressing challenges in AI-powered content moderation for diverse languages

Developing policy responses to counter false information in the age of AI

Unexpected Consensus

Importance of regional and localized AI strategies

Ahmad Bhinder

Melody Musoni

Sizwe Snail ka Mtuze

Ekaterina Martynova

Rocco Saverino

Developing regional AI strategies and policies

Creating inclusive AI governance frameworks for the global majority

Exploring AI development and challenges in Africa

Examining AI development and regulation in Russia

Analyzing AI governance trends in Latin America

Despite representing different regions and contexts, these speakers all emphasize the importance of developing localized AI strategies and governance frameworks tailored to specific regional needs and priorities.

Overall Assessment

Summary

The main areas of agreement include the need for inclusive AI governance frameworks, addressing biases and discrimination in AI systems, considering the hidden costs of AI development, and developing region-specific AI strategies.

Consensus level

There is a moderate level of consensus among speakers on the importance of considering the needs and perspectives of the global majority in AI development and governance. This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.

Differences

Different Viewpoints

Approach to AI regulation

Ekaterina Martynova

Rocco Saverino

Examining AI development and regulation in Russia

Analyzing AI governance trends in Latin America

Martynova discusses Russia’s cautious approach to AI regulation, focusing on developing technology rather than strict laws, while Saverino highlights Latin American countries’ efforts to establish specialized AI regulatory bodies and integrate AI governance into existing frameworks.

Focus of AI governance

Ahmad Bhinder

Rachel Leach

Developing ethical AI principles and assessment tools

Examining the environmental and social costs of AI development

Bhinder emphasizes developing ethical AI principles and assessment tools, while Leach argues that current regulations are furthering AI development without properly addressing environmental harms.

Unexpected Differences

Economic impact of AI

Bianca Kremer

Avantika Tewari

Addressing the economic impact of algorithmic racism

Analyzing the exploitation of digital labor in AI development

While both speakers address economic impacts of AI, their focus is unexpectedly different. Kremer examines the economic consequences of algorithmic racism in law enforcement, while Tewari highlights the exploitation of digital labor in AI development. This difference shows the diverse economic challenges posed by AI in different contexts.

Overall Assessment

Summary

The main areas of disagreement include approaches to AI regulation, focus of AI governance, addressing biases in AI systems, and economic impacts of AI.

Difference level

The level of disagreement among speakers is moderate. While there are differing perspectives on specific issues, there is a general consensus on the need for inclusive AI governance and addressing the challenges posed by AI technologies. These differences reflect the diverse contexts and priorities of different regions and stakeholders, highlighting the complexity of developing global AI governance frameworks that address the needs of the global majority.

Partial Agreements

Partial Agreements

Both speakers agree on the need to address biases in AI systems, but they focus on different aspects: Kremer emphasizes algorithmic racism in law enforcement, while Nigatu highlights content moderation challenges for diverse languages.

Bianca Kremer

Hellina Hailu Nigatu

Addressing biases and discrimination in AI systems

Addressing challenges in AI-powered content moderation for diverse languages

Both speakers advocate for inclusive AI governance frameworks, but they propose different approaches: Musoni focuses on regional strategies in Africa, while Bielick suggests adapting the Paris Agreement model for international AI governance.

Melody Musoni

Catherine Bielick

Creating inclusive AI governance frameworks for the global majority

Creating inclusive AI governance frameworks for the global majority

Takeaways

Key Takeaways

AI governance frameworks need to be inclusive and consider perspectives from the global majority

There are significant disparities in AI development and governance between the global North and South

AI has major impacts on labor, the economy, and the environment that need to be addressed

Ethical considerations and human rights protections are crucial in AI development and deployment

Regional approaches to AI governance are emerging, with varying priorities and challenges

Content moderation and countering misinformation are key challenges in the age of AI

AI is being implemented in judicial systems, raising concerns about transparency and fairness

Resolutions and Action Items

Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority

Implement reparative algorithmic impact assessments to address historical inequities

Create open repositories and taxonomies for AI cases and accidents

Develop original AI solutions tailored to regional languages and contexts

Increase capacity building and knowledge transfer in AI between global North and South

Incorporate environmental justice concerns comprehensively in AI discussions and policies

Unresolved Issues

How to effectively balance AI development with environmental sustainability

Addressing the exploitation of digital labor in AI development

Resolving disparities in military AI governance between global North and South

Determining liability in AI-assisted medical decisions

Ensuring fairness and transparency in AI-powered judicial systems

Developing effective content moderation systems for diverse languages and contexts

Suggested Compromises

Adopting a co-regulatory approach to AI governance, balancing government oversight with industry self-regulation

Developing AI tools for content moderation that are inclusive of diverse languages and contexts

Balancing the need for AI development with environmental and social costs through comprehensive impact assessments

Implementing human-in-the-loop systems for AI in judicial decision-making to balance efficiency with fairness

Thought Provoking Comments

AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI… even if Brazil has formally climbed the cybersecurity index because it has adopted a lot of sectoral cybersecurity regulation (in data protection, telecoms, banking, the energy sector and so on), the implementation is very patchy and not very sophisticated in some cases

speaker

Luca Belli

reason

This comment highlights the gap between formal regulations and actual implementation, revealing a critical issue in AI governance.

impact

It set the tone for discussing practical challenges in implementing AI governance frameworks, especially in developing countries.

We are developing a tool for the assessment of AI readiness for our member states. This is a self-assessment tool, and we will make it available to member states in a month’s time. It covers different dimensions of AI readiness, including governance but also going beyond it, for example to capacity building and the adoption of AI

speaker

Ahmad Bhinder

reason

This introduces a concrete tool for assessing AI readiness, moving the discussion from theory to practical implementation.

impact

It shifted the conversation towards actionable steps countries can take to prepare for AI adoption and governance.

90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown. The brown people in Brazil are called pardos. So we have more than 90% of those arrested being subjected to biased technology.

speaker

Bianca Kremer

reason

This statistic starkly illustrates the real-world impact of AI bias, particularly on marginalized communities.

impact

It brought the discussion to focus on the urgent need to address AI bias and its societal implications, especially in diverse societies.

Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation

speaker

Avantika Tewari

reason

This comment exposes the hidden labor behind AI systems and the exploitation of workers in the global majority.

impact

It broadened the discussion to include labor rights and global inequalities in AI development.

Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action.

speaker

Elise Racine

reason

This introduces a novel framework for addressing AI inequities, combining theory with practical steps.

impact

It moved the conversation towards concrete solutions and methodologies for creating more equitable AI systems.

Overall Assessment

These key comments shaped the discussion by highlighting the complex interplay between AI governance, societal impacts, and global inequalities. They moved the conversation from theoretical frameworks to practical challenges and potential solutions, emphasizing the need for inclusive, culturally sensitive approaches to AI development and governance. The discussion evolved from identifying problems to proposing concrete tools and methodologies for addressing these issues, particularly focusing on the perspectives and needs of the global majority.

Follow-up Questions

How can AI governance frameworks ensure equitable access to and promote development of AI technologies for the global majority?

speaker

Melody Musoni

explanation

This is a key policy question that needs to be addressed to ensure AI benefits are distributed fairly globally.

What are the economic impacts of algorithmic racism in digital platforms?

speaker

Bianca Kremer

explanation

Understanding the economic consequences could provide compelling arguments for policymakers to address algorithmic bias.

How can we develop more specific intersectional frameworks and guidelines for AI in healthcare and education, particularly for underserved populations?

speaker

Yonah Welker

explanation

This would help ensure AI applications in critical sectors like health and education are inclusive and beneficial for diverse populations.

How can we develop AI regulatory models that reflect the unique social, economic and cultural contexts of Latin American countries?

speaker

Rocco Saverino

explanation

This would allow Latin American countries to shape AI governance proactively in a way that suits their specific needs and contexts.

How can we prioritize AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare?

speaker

Amrita Sengupta

explanation

This approach could help maximize the positive impact of AI in healthcare while minimizing potential negative consequences.

Is there a way to have content moderation with power primarily residing in the users, rather than being dominated by capitalist interests?

speaker

Hellina Hailu Nigatu

explanation

This could lead to more equitable and culturally sensitive content moderation practices.

How can we build an inclusive AI arms control regime that addresses the distinct needs and priorities of both the global North and South?

speaker

Guangyu Qiao Franco

explanation

This is crucial for developing effective global governance of military AI applications.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session 3: Internet Governance and elections: maximising potential for trust and addressing risks

Session at a Glance

Summary

This discussion focused on Internet governance and elections, particularly addressing the challenges of maintaining information integrity and trust in the democratic process in the digital age. Panelists from various sectors and regions shared insights on the experiences of the 2024 “super election year” and discussed strategies to protect election integrity.

Key issues highlighted included the spread of misinformation and disinformation, the impact of artificial intelligence and deepfakes, and the need for better regulation of digital platforms. Panelists emphasized the importance of media literacy, fact-checking, and collaboration between stakeholders to combat these challenges. The discussion also touched on the specific difficulties faced by the Global South, including digital inequality and limited access to information.

Several initiatives were discussed, such as partnerships between tech companies and fact-checkers, training programs for journalists, and the development of AI detection tools. The role of civil society and NGOs in promoting digital literacy and resilience was stressed. Panelists agreed on the need for a multi-stakeholder approach to address these complex issues.

The discussion explored governance principles and mechanisms to protect electoral processes while upholding human rights. Suggestions included improving transparency in political advertising, strengthening data protection laws, and developing global standards for content moderation. The importance of balancing innovation with integrity was emphasized.

Participants highlighted the potential of the Internet Governance Forum (IGF) to facilitate global dialogue and cooperation on these issues. They called for a more coordinated approach between regional and global IGFs to maximize impact. The discussion concluded with a recognition of the ongoing nature of these challenges and the need for sustained efforts beyond election periods to safeguard democratic processes in the digital age.

Keypoints

Major discussion points:

– The challenges of misinformation, disinformation and foreign interference in elections in the digital age

– The need for multi-stakeholder collaboration and governance frameworks to protect election integrity

– The importance of media literacy, journalist safety, and access to reliable information

– The role of social media platforms and technology companies in addressing online harms

– The potential of the Internet Governance Forum to facilitate global cooperation on these issues

The overall purpose of the discussion was to examine the challenges to election integrity in the digital age and explore potential governance principles, tools and mechanisms to protect democratic processes while upholding human rights.

The tone of the discussion was largely serious and concerned about the threats to democracy, but also constructive in proposing solutions. There was a sense of urgency about addressing these issues, balanced with cautious optimism about the potential for multi-stakeholder cooperation. The tone became more action-oriented towards the end as participants offered final recommendations.

Speakers

– Pearse O’Donohue: Moderator

– Tawfik Jelassi: Director from UNESCO

– Lina Viltrakiene: Representative from the Lithuanian government

– William Bird: From Media Monitoring Africa

– Rosemary Sinclair: Chief Executive Officer of auDA (the .au Domain Administration)

– Daniel Molokele: Member of Parliament from Zimbabwe

– Sezen Yesil: Director of Public Policy at Meta

– Elizabeth Orembo: Researcher at the International Stakeholder Relations of ICT Africa

Additional speakers:

– Giacomo Mazzone: Member of EDMO (European Digital Media Observatory)

– Bruna Martins dos Santos: Organizer of the session

– Maha Abdel Nasser: From the Egyptian parliament

– Alexander Savnin: From Primorsky University in Russia

Full session report

Expanded Summary of Discussion on Internet Governance and Elections

Introduction:

This discussion, moderated by Pearse O’Donohue, brought together in-person and online panelists from diverse sectors and regions to explore the critical intersection of internet governance and election integrity in the digital age. The panel examined challenges, successful initiatives, and potential governance mechanisms to protect democratic processes while upholding human rights.

Key Challenges to Election Integrity:

1. Misinformation and Disinformation:

Multiple speakers, including Tawfik Jelassi, William Bird, Sezen Yesil, and Lina Viltrakiene, identified the spread of misinformation and disinformation as a significant threat to election integrity. This includes coordinated inauthentic behaviour on social platforms and the use of AI and deepfakes to create misleading content.

2. Attacks on Electoral Bodies and Journalists:

William Bird and Tawfik Jelassi highlighted the serious issue of attacks and intimidation against journalists and electoral management bodies, recognising it as a significant threat to press freedom and election integrity. Jelassi specifically noted the increased violence against women journalists.

3. Digital Inequality:

Elizabeth Orembo raised concerns about digital inequality limiting access to reliable information, particularly in the Global South. She also highlighted challenges related to data sharing and the need for proactive information from election management bodies.

4. Emerging Technologies:

Lina Viltrakiene and Sezen Yesil emphasised the threat posed by AI and deepfakes in creating misleading content. Yesil acknowledged these risks and discussed measures taken by platforms to address them.

5. Untrained Influencers:

Daniel Molokele pointed out the rise of influential but untrained social media personalities and podcasters affecting election integrity in Africa, highlighting the lack of regulation for these new media actors.

Successful Initiatives and Best Practices:

1. Multi-stakeholder Collaboration:

Several speakers emphasised the importance of collaboration between various stakeholders, including tech platforms, fact-checkers, authorities, and civil society.

2. Media Literacy and Digital Skills Education:

Tawfik Jelassi highlighted the importance of media literacy and digital skills education programmes in combating misinformation, mentioning UNESCO’s role in training journalists on election coverage and AI’s impact on elections.

3. Technical Measures:

Sezen Yesil discussed technical measures implemented by Meta, including detecting manipulated media, removing inauthentic accounts, and providing transparency in political advertising.

4. Public Reporting Platforms:

William Bird mentioned the development of public reporting platforms for online harms and suggested more nuanced labels to understand different types of misinformation.

5. Consolidated Monitoring Systems:

Lina Viltrakiene described Lithuania’s initiatives, including a consolidated monitoring system and collaboration between business and academia to address digital threats to elections.

6. European Digital Media Observatory (EDMO):

Giacomo Mazzone highlighted EDMO’s role in monitoring European elections and coordinating fact-checking efforts across the continent.

Governance Principles and Mechanisms:

1. Balancing Innovation and Integrity:

Rosemary Sinclair stressed the need to balance innovation with integrity and human rights protections in the digital sphere, emphasising the technical community’s role in maintaining DNS availability during elections.

2. Global Cooperation:

Lina Viltrakiene called for increased global cooperation and information sharing between democracies to address digital threats to elections.

3. Standardisation of Information Quality:

Daniel Molokele suggested the standardisation of quality information and news across regions, particularly in Africa.

4. Platform Accountability:

Lina Viltrakiene advocated for establishing clear legal responsibilities and potential penalties for digital platforms, while Sezen Yesil emphasised voluntary collaboration between platforms and authorities.

5. Information as a Public Good:

Tawfik Jelassi proposed treating information as a public good rather than a public hazard.

6. Ongoing Efforts:

William Bird stressed the importance of continuous efforts to combat misinformation outside of election periods.

Role of the Internet Governance Forum (IGF):

Rosemary Sinclair emphasised the potential of the IGF to facilitate global dialogue and cooperation on election integrity issues. She called for clarifying and strengthening the IGF’s role in addressing information integrity issues globally, developing more coordinated efforts between national, regional, and global IGFs, and potentially contributing to a global governance architecture.

Unresolved Issues and Future Directions:

1. Regulation of Influential Social Media Personalities:

The discussion highlighted the need for effective regulation of influential social media personalities and content creators, particularly in regions like Africa.

2. Addressing the Digital Divide:

Participants recognised the ongoing challenge of addressing the digital divide that limits access to reliable information in some regions.

3. Balancing Free Speech and Combating Misinformation:

The discussion touched on the complex issue of balancing free speech protections with the need to combat harmful misinformation.

4. Global Platform Accountability:

Questions remained about how to hold global platforms accountable across different national jurisdictions.

5. Standardised Definitions:

The need for developing common definitions and standards for identifying misinformation/disinformation was identified as an area for future work.

6. Internet Voting Systems:

An audience member raised concerns about the use of internet voting systems in some countries and the potential risks associated with them.

Conclusion:

The discussion underscored the complex and evolving nature of protecting election integrity in the digital age. While there was broad consensus on the challenges faced, the panelists emphasised the need for continued multi-stakeholder collaboration, enhanced digital literacy efforts, and the development of nuanced governance frameworks to address these critical issues. The role of the IGF in facilitating ongoing global dialogue and cooperation on these matters was highlighted as a key avenue for future progress. The moderator’s final remarks emphasised the importance of the multi-stakeholder process in addressing these challenges effectively.

Session Transcript

Pearse O’Donohue: Good afternoon. Welcome to this open session, the main session on Internet Governance and Elections. We want to focus on the issues around elections and the democratic process, maximising the potential of the internet while addressing the risks that exist. Already on Sunday morning, on day zero of this Internet Governance Forum here in Saudi Arabia, we had a session on misinformation, which also addressed the role of stakeholders in protecting election integrity and the right to information. This session will therefore have a discussion on the role of stakeholders in protecting information and election integrity, on increasing trust, and on protecting citizen participation while mitigating the risks to electoral integrity. So for that, I would like to introduce our great panel of speakers. I would like to start by saying hello to Ms. Sezen Yesil, who is Director of Public Policy at Meta. We also have Mr. William Bird, who is from Media Monitoring Africa, and Mr. Tawfik Jelassi from UNESCO. You’re welcome, Tawfik. And then we have online Ms. Rosemary Sinclair, who is the outgoing Chief Executive of auDA. You are both welcome online and it’s great to see you. We can see you on the screen here. So I will move on. I beg your pardon. I am so sorry, Your Excellency. This is the problem of not having paper in front of me. I’m still not adapted. So we also have with us here on stage a representative from Zimbabwe, a member of its Parliament, the Honourable Mr. 
Daniel Molokele. So the way we’re going to proceed with this panel is that I’m going to allow each of the panel members to make a brief opening statement in relation to a question which I will now ask. They’ll have three minutes to respond, and in the good tradition of the IGF we will then immediately allow for input from you, the audience, both here and online, before I go back to the panellists with some more detailed questions, for which we have chosen specific subjects. That’s how we’d like to proceed. So, as I say, get ready: we would really like to encourage your participation, so that the output of this session will be some well-informed, actionable measures which can be taken, particularly in the context of the IGF, where we know there is so much that the multi-stakeholder platform it represents can do on such an important issue. So, to get us going, I’m going to ask the following question to all of our panel members. With more than 65 countries going to the polls, 2024 was marked by the biggest number of simultaneous elections in history, so some have called this the year of democracy. But looking now in retrospect, at the end of the year, how do you think it has gone? What worked and what didn’t work? So perhaps I can turn to you first, please.

Sezen Yesil: Thank you so much. Hello everyone, and thanks a lot for hosting Meta on this panel. Internally at Meta we called it the year of elections too, so we knew it was coming and we prepared well. Before each election we make a risk assessment specific to that election, and this assessment informs our election integrity work at Meta. In 2024 we ran a number of election operations centers to monitor continuously the issues on our platforms and to take action swiftly as needed. I can share a few observations from this year’s elections. First of all, in our actions we try to strike a balance between protecting voice and keeping people safe, and I must admit that it is one of the hardest jobs in the world. We have many policies, or rules, on what is and is not allowed on Meta platforms, and we remove content which violates them. Throughout this year we decided to update some of our policies. For example, we updated our penalty system per the feedback of the Oversight Board, to treat people more fairly and to give them more free expression. Secondly, we updated our policy on violence. People of course have every right to speculate on election-related corruption, but when such content is combined with a signal of violence, we remove it, and I can say that those updates worked very well during this year’s elections. The second observation is about prevention of foreign interference. In this year alone we removed about 20 CIB (coordinated inauthentic behaviour) networks. Those networks consist of hundreds of Facebook and Instagram accounts and pages, and they work to mislead people and, unfortunately, to spread disinformation. We observed that some of the networks we disrupted moved to other platforms with fewer safeguards than ours. The last observation is about the impact of AI. 
So in the beginning of this year many people were very concerned about the potential negative impact of generative AI content on elections, such as deepfakes or AI-generated disinformation campaigns. However, to address these risks we took a lot of technical measures, and we signed an AI elections accord with other major tech companies to cooperate in combating threats coming from the use of AI in elections. We observed that the risks did not materialize in a significant way; the impact was modest and very limited in scope. For example, less than 1% of the fact-checked misinformation was AI-generated.

Pearse O’Donohue: That’s time. Thank you. Sorry, Sezen, but you’re the first to suffer from the fact that we will hopefully have a good discussion, so I’ll keep the speaking time short. Now I’ll go to the other end of our list of speakers here, to Mr. Tawfik Jelassi from UNESCO. We’ll be very happy to hear your views on that question: really, how did the year go, and what worked and what didn’t work?

Tawfik Jelassi: Thank you very much, Mr. Chair. So you reminded us that this is the super election year, with some 75 elections being held, involving half of the population of the world, and obviously this is a major test for democratic systems around the globe. What has worked well, to answer your question? I think there were some global efforts to protect election integrity from a process point of view. The second thing that worked well, maybe, is the involvement of the youth and first-time voters in elections around the world, especially in countries where half of the population, sometimes even 60% of the population, is under the age of 25. We saw this major engagement, and that’s good. What has not worked well is the exponential spread of disinformation and hate speech, derailing the integrity of electoral processes and casting doubt on election outcomes and democratic institutions. Another thing that did not work well, which is a major challenge, is the safety of journalists covering elections. Many attacks happened against them, and we know about the relatively high impunity rate for violence or crimes committed against journalists. The third thing that did not work well is the huge digital inequality that still exists, especially for marginalised groups, including women and persons with disabilities, who face major barriers to participating in public spaces. That’s why we need to change the path forward. I think we need stronger regulatory frameworks to address harmful online content while protecting freedom of speech. When I say regulation, I’m not referring to censorship; that’s why I’m saying while safeguarding free speech online. 
Second, we need to expand media and information literacy in the digital age, especially among youngsters and citizens. Finally, I would say that UNESCO is contributing to this global effort on media and information literacy in the digital age, but also through the UNESCO guidelines for the governance of digital platforms, which were published a year ago.

Pearse O’Donohue: Thank you very much, and some of those subjects that you’ve raised we will come back to in our detailed questions, but it’s a very clear view as to the main points that we must address, including, of course, intimidation and violence against journalists, and the digital gaps which themselves have an impact on the derailment of these elections. So, thank you. If I could now ask the same question to the first of our online panellists, Ms Elizabeth Orembo, who is a researcher working on international stakeholder relations at Research ICT Africa. I’d like to hear your views on that same question about how things went and what worked and what didn’t work. Please, Liz.

Elizabeth Orembo: Thank you for the floor, and thank you for inviting me to this very important discussion. In my reflection, I would say that there are things that went well and things that didn’t go well, as far as I’m concerned. And you might hear some chicken sounds behind me. So one thing that did go well is that stakeholders, even locally, even in Africa, because I work in the context of Africa, knew that this was coming. And with the rapid changes of technologies, they were aware that they needed to come together and tackle some of these risks. So some of those risks were tackled. But the challenge of the free flow of information itself, and with that I also talk about data, remained a problem. And when the free flow of information is not there, with challenges of policy, of infrastructure, and also of media, then people don’t access information the same way, and it breeds a very fertile ground for misinformation and inequality. There’s also not that culture of data sharing, especially in the context of elections, and this brings an unevenness of access to information itself, and also misinformation. That problem continued. It also meant that trust in election management bodies went down, because people are yearning for truthful information, and at the same time they’re getting mixed information. Also, the media is not equipped; it is a struggling industry trying to get important information to people. So that breeds another fertile ground for misinformation. So data and information flow, I would say, was a major problem to me. And as much as stakeholders came together to tackle misinformation, there was also a challenge in bringing all stakeholders together. 
Because with data becoming more available, we also need more capacity to crunch that data and get it to people. Those capacities varied as well, and were sometimes lacking: sometimes data was available, but there were challenges in making use of it. Another persistent challenge, especially for us in the Global South, is reaching the tech companies. And with that, we also experienced regulatory challenges when it comes to crises during elections, which can sometimes lead to internet shutdowns. I will stop there for fear of being time-limited.

Pearse O’Donohue: Well, thank you. A very interesting perspective, including, not least, that last point with regard to the particular issues of the Global South. Hopefully we can come back to some of those questions as well. But now, if I could turn to the next of our speakers here, William Bird from Media Monitoring Africa. Please, William. Thank you.

William Bird: It’s been a big year, but I want to just ask if people genuinely feel better about democracy having had 65, 70, 75 elections. Because the sense that I get from speaking to people is that, despite this being a year in which we should be celebrating democracy, we don’t feel good about democracy, and I think that speaks to some fundamental changes. The first is the rise of fascism, and this is a very real problem for us in that I think it’s deepening polarization. It’s framing people who believe in and support human rights as left-wing extremists, just because you are talking about fundamental equality and dignity for all. And there’s something that has happened, I think, that we also need to accept as a point of departure, about power structures. We’re no longer in a place where power, messaging and narratives can be determined and framed by one or a few central entities. There’s now this wonderful possibility that almost anyone can have a view, and as much as that’s a good thing, we mustn’t throw the baby out with the bathwater, as the expression goes, right? Because we do need to make sure that there are certain common things that we can at least agree on. So in terms of things that worked well, I was thinking about it last night and I came up with MECA, which stands for, it seems appropriate, Media, Electoral Management Bodies, Civil Society, Collaboration and Adaptability. Some colleagues have touched on that sense of adaptability, of organizations and entities adapting to the emerging challenges. For media, we saw them facing huge problems across the continent, particularly in Southern Africa, but we also developed some mechanisms to start to assess how they perform and how they contribute. 
Electoral Management Bodies: in countries where there were big shifts of political power, like in South Africa and in Botswana for example, we saw that where you’ve got a stronger, more credible Electoral Management Body, it is still able to contribute and function despite being subjected to significant attacks. Civil Society, I think, worked really well, certainly in our experience in South Africa. They came up with research projects, they worked with universities. There was a reporting mechanism, Real 411, a public complaints platform. And they worked together, which is the next point, collaboration: we worked with the social media platforms, Google, Meta, and TikTok, and with the electoral management body, and that did something really positive.

Pearse O’Donohue: Okay, thank you, William, for a new acronym, but at least a way of analysing the different issues. We will come back to that also. And now, I’m certainly not going to forget him this time, our next speaker is the Honourable Daniel Molokele, who is a Member of Parliament from Zimbabwe, please.

Daniel Molokele: Thank you so much. I will speak more from the African point of view. 2024 was also a very big election year for Africa. I would say, as we end the year as a continent, we are generally happy with the election processes across Africa. We had largely peaceful and successful elections in countries such as South Africa, Madagascar, Botswana, and very recently Ghana. We also managed to benefit from innovation around media and technologies, especially in harnessing the youth population into elections. Generally, young people in Africa are very averse to elections; there is apathy. But I think this year we saw a higher participation of young people as voters. We still need to see more young people as candidates and as elected representatives. We also saw the use of social media in a much more progressive way, to mobilize people towards voter registration and, more importantly, to turn out as voters, including platforms such as TikTok, WhatsApp, Facebook, and X. So Africa is harnessing media technologies to improve access to elections for average citizens. But we also end the year on a very difficult note in countries such as Mozambique, where there is no peace at the moment. The post-electoral violence continues to escalate with no solution in sight. Last time I checked, over 100 civilians had died, mostly at the hands of security officials such as the police and army in Mozambique. The election remains disputed and we need a solution to that. Interestingly enough, there has been a huge use of media technology, or innovative approaches to the use of media: the opposition leader is actually not in Mozambique at the moment, but he is able to provide leadership there every day, and people are using access to media technologies to respond. It can be a bad thing, it can also be a good thing, but that’s the situation at the moment in Mozambique. Thank you.

Pearse O’Donohue: Thank you very much. So between what Daniel Molokele has said and William before him, we are faced with a number of issues where we need to consider the role of international online data and communications in matters such as the rise of extremists as a result of elections, which William mentioned, or, in Daniel’s case, actual violence as part of elections, leading terribly even to the deaths of citizens, and we must ask to what extent digital or online information, or the platforms, through misinformation or otherwise, are contributing to those serious issues. So the next speaker is here with us: Ms. Lina Viltrakienė from the Lithuanian government. Please.

Lina Viltrakiene: Thank you very much, and good afternoon everybody. Indeed, I would like to say that Lithuanians contributed significantly to this year of democracy by participating in three elections this year: we had presidential elections, we had elections to the European Parliament, and we also had the national parliamentary elections. From the government perspective, it was a challenge, and a lot of governmental institutions, including Lithuania’s Central Electoral Commission and a number of others, worked hard and consolidated all their efforts in order to make these elections go smoothly and make them reliable. Particular attention was paid to ensuring that only legal sources of funding are used for electoral campaigns, that transparency is maintained with regard to the real media expenditure of political parties and individuals, that effective communication channels with the media are maintained, and that appropriate channels to detect disinformation and a comprehensive system to mitigate risks are established, to mention just a few. These and other important requirements are covered by Lithuanian and European legal acts, such as the election code, the criminal code, the law on political parties, and the law on the provision of information to the public, to mention some of them. Thus, a solid legal environment is the first thing I would like to mention in the list of what worked. Another action I would include in the same list is the established collaboration of responsible state institutions with the media, including with social platforms, which no doubt enlarged the public space and reinvigorated public debate during the election campaign. But on the other hand, all around the world we faced unprecedented scales of lies and disinformation, and deepfake statements of top politicians, appearing especially on social platforms. 
And this increased the threat of influencing people’s choices, seeding distrust in society and eroding trust in democratic institutions. You may know that in the EU, the Romanian and Bulgarian elections experienced significant interference by foreign actors via social media platforms, especially TikTok and Telegram. This shows us that we need to work further on continuous collaboration between platforms and state institutions, and regulatory frameworks should perhaps be improved; as a model, I would like to refer to the EU’s Digital Services Act, which could really inform that thinking. Thank you.

Pearse O’Donohue: Thank you very much, Lina. And if I could just add: working for the European Commission, we in the European Union also put in place a number of measures for monitoring the health of the European Parliament elections. We are still doing that assessment, but it is clear that some problems were avoided. You did mention, among a number of other problems, the appearance for the first time of deepfakes, which can be very influential and turn people against an individual or a tendency or a party, and be very damaging even if they are quickly identified as fake, because sometimes the initial damage is done. Thank you. So now our last speaker, who is online, and thank you very much for your patience, is Ms. Rosemary Sinclair, who is the Chief Executive Officer of auDA, the administrator of Australia’s .au domain. Rosemary, the floor is yours.

Rosemary Sinclair: Thank you, Pearse, and many thanks for the opportunity to bring a technical community perspective to the panel. I’d like to start with just a technical reminder, really, about the internet. It is, of course, a network of networks, some 70,000 in total. It operates on open standards and common protocols to enable global interoperability. It’s made useful by the unique identifiers, the names and numbers, which are coordinated by ICANN, which is itself an independent technical community that uses a multi-stakeholder approach. So I’m part of that technical community, and I’m responsible for auDA, which is the small company that administers .au, the country code for Australia. We focus on technical operations and performance and on our domain name licensing rules, and we’re very strong supporters of the multi-stakeholder model of internet governance. When I think about 2024 and what worked and what didn’t work in that year of so many elections, the first point I want to make is that, technically, the internet worked. In Australia, we delivered 100% availability to users during the year: every time a user wanted to access the domain name system, they could. Why was that important? Because the internet worked to share information, to provide communication and commerce and, of course, to grow economies and standards of living. But there are a number of harms, and many of those have just been mentioned: misinformation and disinformation, fraud and others, and they are key challenges, particularly in such an election year. So the harms, of course, need policy work, and that’s what we’re here to talk about. The tensions, as we see it, are between open information, secure identity and privacy for individuals, and the question really is how to balance those things. So, practically speaking, during elections we sometimes see at auDA increased requests from people to take down the websites of their political opponents. 
And those requests are often made with claims of misinformation or disinformation. Those claims must be assessed by others who are authorised by law and skilled to make those judgements. Our response can only be based on our .au licensing rules and not on the political nature of the content or the requester. We’ve not yet seen the impact of AI on elections in Australia, but we’re expecting to have a national election next year, and we think that AI will be something that we need to watch during that process. So the policy work that we all have to contribute to is really a work in progress, and we see the Internet Governance Forum as the place for those discussions to take place across all the different perspectives, including our own technical perspective. Thank you.

Pearse O’Donohue: Thank you, Rosemary. And indeed, thank you for giving us the views of the technical community, and in particular referring to ICANN, but also the importance of the DNS in relation to the issues that we’re talking about, and again, of course, the need for independent verification and moderation with regard to any attempt to take down websites. It’s a two-edged sword. So thank you. So now, thank you to all of the panellists for that first round, and as I said, we are now going to see if anybody from the audience here in the conference room or, for that matter, online, would wish to make any inputs. I will ask that they are short, and to do so in time-honoured fashion, if that is the case, you need to come up to the front and use one of the microphones. So if anyone wants to do so, could you please identify yourself and the organisation you represent and please keep your input very short, two minutes as a very maximum. Thank you.

Giacomo Mazzone: Thank you very much. Giacomo Mazzone, a member of EDMO, the European Digital Media Observatory, which you know very well. I’m here reporting what we discussed in the workshop on day zero, organised by EDMO, about the task force that worked on safeguarding the integrity of the European elections last year, compared with what happened in the US and South African elections. The contribution that we can give you is that the assessment of what happened during the European elections was very good, because there was a successful example of cooperation with the platforms, made in a multi-stakeholder way, in the sense that in a unique place, that is EDMO, you have academia, fact-checkers and institutions working together. Through the code of practice that the European Commission signed with a certain number of platforms, information is brought to the attention of the platforms, and the platforms immediately react and behave accordingly. So we have been successful in removing things without enforcement, based on goodwill and cooperation. Unfortunately, what was reported by our U.S. friends was not exactly the same. They said that the level of cooperation in the U.S. was not the same, and also that they lived through a very worrying experience, and this is important for our UNESCO people here, of pressure and intimidation on fact-checkers, trying to silence them and to keep them out of the public discourse. And in South Africa…

Pearse O’Donohue: Sorry, I’m going to have to ask you just to wrap up, please.

Giacomo Mazzone: Yes, the last point, to be complete, is about South Africa’s experience. They reported that any intervention by legislation is seen as censorship, so it shows that things are different there. You need to find different ways to act in different cultural contexts, according to the situation. Thank you very much.

Pearse O’Donohue: Thank you, and thank you for those insights, and indeed as well the very useful workshop that took place on Sunday. We have another speaker, please. Again, your name and organization. Thank you.

Audience: Hello, Alexander Savnin, Primorsky University, Russia. I would like to point out that, amid this misinformation and data spread, the Internet may already be used by some governments for voting. In Russia, this year there were two sets of elections, one of which was actually the presidential election for Mr. Putin, and systems implementing Internet voting were used in these elections. Without the possibility of multi-stakeholder discussion on the implementation of this system, and without the possibility to verify its trustworthiness, the system actually undermines the results of the elections as a whole. Unfortunately, the implementation of these systems and the results of the elections are not well observed or seen by the global community, but this brings another dimension to the undermining of trust and increases the risk to fair elections. Thank you very much.

Pearse O’Donohue: Thank you, indeed. I’m just looking to see, do we have any online inputs? Anybody who’d like to take the floor or make a comment? And this is the way of giving the spotlight to Bruna, who has done all the organization for this session.

Bruna Santos: I would just echo a comment from Mokabedi, so just reading it out loud: Hi everyone, I’m Mokabedi from the Iranian academic community. Some cross-border digital platforms refuse to cooperate with the competent authorities of independent countries in immediately dealing with disinformation that meaningfully affects election results and harms public trust during elections, for reasons and excuses including political reasons and sanctions. They even refuse to establish legal representation. My question to the panel is: what can be the legal and political solutions to this challenge and to the double standards of digital platforms? Should maintaining the health and safety of online elections in different countries have a different degree of importance? That’s the one we have here. Thanks.

Pearse O’Donohue: Thank you, Bruna. And I will ask the panelists if there’s anything from what we’ve heard so far, particularly that last question, if you want to incorporate that in the responses when we come back to you for a discussion. Now we have a final participant from the floor, please. Thank you.

Audience: Thank you very much. My name is Maha Abdel Nasser. I’m from the Egyptian parliament. Actually, the problem is not just during elections, but it gets worse during elections. We find those, what they call them, the electronic flies and so on: they attack anything we post, they spread a lot of disinformation, and they try to bring us down by all means, whether those people are run by the regime, by opponents, or by anyone else. And even when we report, it takes a very long time for any action to be taken, if it is taken at all. So my question is: is there a possibility to have a platform, or anything, between all these actors to report such attacks and harassment, especially for politicians, women politicians of course, so that action can be taken rapidly and we can get rid of these things, or not? Thank you.

Pearse O’Donohue: Thank you. Again, I hope that that question can be addressed. I will allow myself just to very briefly give a partial answer, but it is not the full answer. In the European Union, particularly now with the introduction of the Digital Services Act, we do have a requirement for individual very large platform operators to have the facility for the reporting of such activities, but also centralized databases monitoring these issues. And by the way, verbal and online violence against women, and particularly female politicians, is something that we are particularly concerned about, as it is insidious and has long-term effects, as well of course as the effects on the individual. So these are issues which we must address; in the case of the European Union, we do see this as a necessity, the ability to report such incidents and hopefully to see quick action. But I’m sure that there are other experiences from around the world, and we’re always willing to learn. So for that, thank you for your participation. We will have another slightly longer section at the end, and I hope that we have more participation here in the room and online, but we’re going to move on now to the second set of questions. Here we’ve broken them down between our expert panellists, and I’m going to start with William Bird and Liz Orembo. You’ve got the hardest job, because I’m going to ask both of you two questions and give you five minutes each to answer both of them. We’ve put them on screen and I hope that you can see them, but certainly: what evidence has come to light of information integrity being weakened through human rights or tech harms? How should the weakening of election integrity through these and other risks be identified? That’s for William. And then Liz, when we come to you, the question I’d like to ask you is: what are the implications or consequences of such risks to information integrity in elections? But we’ll come back to you, Liz, in a moment. 
First of all, I’d like to hear William on the first question and you have five minutes, please. Thank you.

William Bird: So I love the point from one of the other speakers that a lot of these things occur outside of elections. What we see is these things occurring at a heightened level in an election period, but attacks against women online, for example, don’t stop just because it’s not an election period. So I think there are three areas where we saw information integrity being weakened in South Africa specifically. Firstly, attacks against the electoral management body. These were multi-pronged and straight out of a disinformation playbook: they targeted the entity and its decisions, they spread rumors and mis- and disinformation, then they targeted individuals within it, and then they laced these various campaigns with kind of pseudo-legal challenges. And then they rely on a willing platform partner to scale the dirty work. In that instance, most of what we saw in South Africa was on the platform X, which was not part of our collaboration, unsurprisingly. The second issue is attacks against journalists and human rights defenders and those bodies. As an example, on X, over a two-week period, we saw over a thousand attacks against journalists, most of those actually against one journalist in particular. So clearly organized network behavior, including issues linked to incitement.

William Bird: And then thirdly, the bigger impact of the decimation of the media, as we’ve seen them being systematically undermined as trusted institutions. That feeds into that idea of media and polarization, that sense of people not knowing what’s actually going on, and then being unable to operate. So how should these risks be identified? You spoke about what’s happening in the EU. In South Africa, we’ve got a platform, Mars, where people can report attacks against journalists so that there’s a public archive of them, and we’ve got the same thing for other online harms: mis- and disinformation, hate speech, and threats. And that’s also, again, a public platform that operates independently of the state, so that the public begin to have faith in it. And critically, it applies the same standard, because what we found problematic is that what’s okay on one platform isn’t okay on another. And so that leaves the public thinking, well, what do I do here? If I want to report on X, nothing happens. If I report on Meta, it’s this process. If I report on this platform, it’s another whole process. So we’ve got a system that allows people to report any platform, and then action can be taken.

Pearse O’Donohue: Thank you very much. And of course, consistency in application, and the individual’s confidence that whatever the platform, they will have the ability to seek redress or at least to have the issue examined, is very important. Thank you. Now, turning to you, Liz. Just to repeat, the question is: what are the implications or consequences of these risks to information integrity in elections, including the risks to civil and political rights, or interference by foreign actors, and so on? Please.

Elizabeth Orembo: Thank you. Well, I’d begin by first looking at the media environment. Much of the information in the media now comes from the online environment, and vice versa. When there’s no information integrity on online platforms, it means the media has to respond to a lot in the public interest: identifying the information out there that is misleading and demystifying some of this misinformation for the public. There are also competing narratives. With so much information coming online, the media has to go through it all and spotlight what the public should focus on, because people can get overwhelmed by information coming from different sources. But then again, we see a capacity issue for the media, because advertising revenue has shifted to online spaces, so the media is also challenged there. What this means for human rights and civic rights is that people don’t vote from an informed position, because they miss a lot of information that could be decisive, or useful, in choosing the right candidate. It also impacts development issues, and development is a right that enables people to also enjoy first-generation rights like freedom of expression, so that’s a problem there. The other one is incitement. Of course, when there’s no information integrity, there’s a lot of polarization happening online and offline, which also has marginalizing effects. People who are further marginalized, you mentioned women and girls: women who have been active change-makers at the grassroots level face a lot of violence online and offline when they try to enter spaces of governance, and this really discourages them from pursuing government or electoral office. That means we are widening the inequalities there. 
I would also like to point out that the African continent faces very different challenges and very different contexts. We are at different levels of development and different levels of democratic progress, and that means that policies made by the big platforms cannot just be applied blanketly, because some will not apply in some countries, given their particular technological and democratic contexts. Sometimes we see that there is not much investment, or tech companies get overwhelmed, in giving special attention to special contexts. This year, and especially with Mozambique as the situation continues, what we’ve seen is not that tech platforms are not engaging there, but that engagement lacks the structure to respond quickly to the situation on the ground. Those are the challenges we are seeing in most African countries: even when there is attention, there is no specialized attention on the ground, because most of these tech companies are not domiciled there. The other thing is, when we talk about information integrity and trust in electoral management bodies, sometimes the focus is on electoral management bodies maintaining their reputation. But for them to earn trust from the public, there also needs to be an environment with proactive information coming from election management bodies, especially about how they manage the election. Now, because of differences in media access in Africa, either connectivity is uneven or access even to traditional media is uneven. That means that whatever platform they try to communicate on, it doesn’t really reach people. That unevenness in information access also creates fertile ground for misinformation. Like I said, this also touches on what William Bird mentioned. On this, I’d also like to touch on what we try to do at RIA.

Pearse O’Donohue: Just as quick as you can, please. Thank you.

Elizabeth Orembo: Yes. We are working on Mozambique, Ghana, and Tanzania, which is having elections next year. Our focus is on media coalitions and on access to data for research. Another thing we are seeing right now are the dilemmas around data sharing, data sovereignty, and whether to host elections data inside or outside the country. I think I will stop there.

Pearse O’Donohue: Okay, I’m sorry that I had to interrupt you, but that was a very interesting analysis: you identified quite a number of issues that need to be addressed, covered the consequences in some detail, and obviously drew on some lived experience of what happens. With that in mind, we’re now going to move on to the next set of panellists. This time the format is slightly different: we have one question, and I’m going to ask it of three panellists, and hopefully you can feed off one another. So I will start with Daniel Molokele. The question is: what initiatives have successfully responded to challenges posed to information integrity in elections, and how is this success measured? And are such initiatives specific to a given time or place, or could they be used more widely around the world? Mr. Molokele, please.

Daniel Molokele: Thank you so much. Yeah, there are several initiatives; most of them are just starting. But I wanted to highlight a continental one, which occurred in September: we met in Senegal as Africans at the Freedom of Internet Forum. One of the key pillars of this conference, with hundreds of delegates from across the continent, was access to information from the perspective of elections, especially knowing that in some instances in Africa we have seen governments using strategies such as internet shutdowns, creating a complete blackout during the campaign period to force an advantage over the opposition. We’ve also seen instances where social media platforms like WhatsApp are restricted in operation to make it difficult for people to access information; also the over-reliance on state media at the expense of independent media, and the shutting down of alternative media platforms, especially media houses seen to be sympathetic to the opposition. So we have started an annual meeting at which we will be able to get presentations, research and assessments of electoral processes and access to information. Related to that, there is a parallel process around challenging policy and legislative frameworks that make it harder for people to access information, especially for civil society, for political parties that are not the ruling party, and for journalists covering elections. Access to information laws in Africa exist, but some of them are designed in such a way that they create a more bureaucratic process: ostensibly they are supposed to increase access to information, but at the same time they make it harder for someone to access information. We also have such laws in Zimbabwe, where I come from, like the Official Secrets Act. 
The Official Secrets Act can also be used to make it difficult to access specific information if releasing it does not advantage the ruling party. So there is a lot happening, and we are seeing not just civil society coming into the space but also research coming from universities and from schools that teach journalism and media studies, which helps us to have a more robust view of access to information and electoral integrity. Some of the ideas that are coming out are mostly unique to Africa, because Africa is also in a situation of a great digital divide with the rest of the world. The majority of people in Africa have no easy access to the internet or to mainstream media, so at the end of the day they are subjected to misinformation, disinformation, and a lot of state-funded propaganda. At the end of the day, it’s such a huge disadvantage; it makes it difficult for election systems to be free and fair, because without being properly informed you cannot make informed choices as a voter, and in most instances this favours the ruling elite on the continent. Thank you so much.

Pearse O’Donohue: Thank you. So, in suggesting some of the solutions, you’ve also identified one or two further problems that need to be addressed, some arising from your experience. So now I’d like to ask the same question to Lina. I will read it out slightly abridged: what initiatives have successfully responded to the challenges, and are such initiatives specific to a given time or place, or could they be used more widely? Lina, please.

Lina Viltrakiene: Well, thank you very much. Indeed, measuring the impact of counter-disinformation initiatives is a really very challenging task, but I would like to share with you several good practices which we developed in Lithuania and which could really be replicated worldwide. I will refer to three of them. First, in Lithuania we created a truly consolidated system of monitoring and neutralizing disinformation. We take a comprehensive whole-of-society approach to monitoring, analyzing and countering disinformation, involving not only state institutions but also a whole vibrant ecosystem of non-governmental organizations, media and business, which really helps to create societal resilience and also trust. In this context I would like to particularly stress the importance of NGOs in analyzing and countering disinformation, but also in promoting digital and media literacy, including for journalists writing for audiences of national minorities, and in developing learning programs and tools for vulnerable groups. We have the NGO Civil Resilience Initiative, which has worked a lot on that. We have an important non-governmental organization, debunk.org. This institution also researches disinformation and runs educational media literacy campaigns. So indeed, developing media literacy and critical thinking is key to resilience against foreign information manipulation and interference. Another important element I would like to mention is the collaboration between business and academia to develop technical solutions. In Lithuania, we have a lot of collaboration between business and academia, and we have technologies, such as AI-driven tools, that can detect manipulated media, bots, and coordinated inauthentic behaviour; here, the collaboration between science, academia, and business is really, really important. 
We also have collaborations around reporting platforms and so on; in Lithuania, we really have a lot of society members participating in countering this disinformation. We have a very nice initiative, the Lithuanian Elves Initiative, where we have a lot of people participating in countering disinformation, and this really works very well. The second practice I wanted to share with you, which is very much related to the first one, is a cross-sectoral approach to countering disinformation, cooperating really closely at the national level. For this purpose, we have a team of experts under the framework of the National Crisis Management Centre, helping with quick detection of and rapid response to disinformation and to information incidents which could have a big influence. This National Crisis Management Centre coordinates strategic communications and also provides guidelines for possible responses to different information incidents. And our experts from this centre are really willing to share, and are sharing, their experience of this effective cross-institutional framework with other countries. And finally, that brings me to my third point: sharing experiences among democratic states is really, really important. One such initiative we have in Lithuania is the Information Integrity Hub, operated by Lithuania and the OECD, which provides training for officials worldwide. It is a training program offering OECD and non-OECD public officials the opportunity to peer-learn and strengthen their capacities to detect, suppress, and prevent foreign influence and disinformation. 
And indeed, that is very effective: when experts gather together and share the cases of disinformation they face, that could also form a kind of inventory of bad practices, which would then be easier to recognize when experts are working together, discussing and sharing that. Thank you.

Pearse O’Donohue: Thank you very much, Lina. So, now, the same question to Sezen Yesil. You’ve been waiting a long time since you last spoke, so again: what initiatives have successfully responded, and can they be used elsewhere?

Sezen Yesil: Thank you so much. Oopsie. I hope that my answer will also address the questions from the audience, the one from the online participant and the one from my sister from Egypt. I know that women politicians are especially vulnerable, unfortunately, and we have special protections in place; if she kindly stays and meets me after this session, I would like to explain in more detail. But I can say that we, as Meta, have a very well-established playbook on election integrity, and we keep improving it according to the lessons learned after major elections. Our measures are globally applicable, but we make a risk assessment for each election, specific to that country, and adjust our measures if needed. The online participant said that we don’t have local representation, et cetera; that doesn’t matter, because all our measures are globally applicable. We have about 40,000 employees working on safety and security, and we have invested more than $20 billion in this area since 2016. There are five pillars in our election integrity work. The first one is that we do not allow fake accounts. Our automatic detection tools block billions of accounts, often within a few minutes of creation. Second, we disrupt bad actors. We have taken down more than 200 coordinated inauthentic behavior networks since 2017, and as you know, those networks are used to mislead people, especially during election times. We work in collaboration with law enforcement and security agencies, and with academia, researchers, et cetera, to identify those actors. Third, we fight against misinformation. It is a really tough issue, because nobody agrees on the definition of misinformation. For example, let’s say a politician says that they have the best economy in the world. What if the indicators do not agree with him? Are we going to remove that content and label it as misinformation? That would not be appropriate. So we have a three-part strategy: remove, reduce, and inform. 
Under remove, we do not allow misrepresentation of voting dates, locations, and times. We do not allow misrepresentation of who can vote, who can participate in elections, what documents are required, et cetera. Under reduce, we work with more than 90 third-party fact-checkers around the world, covering 60 languages, to identify and rate viral misinformation. When content is rated, it is not recommended in our systems and its distribution is reduced. And under inform, we put labels like “false information” on content rated by the third-party fact-checkers, and we provide more context to users who want more information on why it was misinformation. Under the fourth pillar, we increase transparency. Especially for political ads, we have an obligatory authorization process. Advertisers, political parties for example, have to prove who they are and where they are located. They can only target audiences in the country where they are based. And we put a paid-for-by disclaimer on the ad, so that people can understand who is funding that political advertisement, to give more transparency. Also, political ads are kept in our ad library for seven years; researchers, for example, use it a lot. It is publicly available and free, and you can see information like the amount spent on ads, who is funding them, et cetera. If ad content is created with AI, the advertisers have to disclose it to us. They have to say it. And we put a label on the content, like “digitally created”, so that people understand it is a photorealistic video or photo. And the fifth and last pillar is about partnerships. We work with local trusted partners to receive timely insights on the ground. So, okay, final comments: user education is also very important. We do campaigns with third-party fact-checkers and academia to raise awareness on how to fight disinformation and misinformation. Thanks so much.

Pearse O’Donohue: Thank you very much for that. So we heard, particularly in the answers from Daniel and from Lina, references already to civil society, to NGOs, to the stakeholders and the multi-stakeholder process, as having an important role with regard to what could be successful responses and how we learn to share initiatives across countries and regions. That will be one element of the next question, which I’m going to pose to our final two panellists. Again, thank you for your patience. That question, again on screen, is: what are the governance principles, tools and mechanisms that could be applied to help protect the integrity of electoral processes and information in the digital age, while upholding human rights and democratic principles? And are there specific roles for particular stakeholders that need to be highlighted? I’m going to put that question, first of all, to Tawfik Jelassi, please.

Tawfik Jelassi: Thank you very much, Mr. Moderator. I think we all agree that ensuring that information is trustworthy and accurate is a very critical challenge today, maybe more than ever before, especially during elections. And here I would like to quote Maria Ressa, the 2021 Nobel Peace Prize winner, who said: without facts, there is no truth; without truth, there is no trust; and without trust, there is no shared reality. I think this is a very powerful quote that reminds us that fact-checked information is the basis not only for democracy, but for society and for communities to live together. So it’s a major challenge. But then, a second and final quote, from the journalist Carl Bernstein, who said: what we do as real journalists is to give our readers the best obtainable version of the truth. It’s a simple concept, but it’s very difficult to achieve, and especially elusive in the age of social media. We know the power of digital influencers, who today have 50-plus million followers per influencer. Our recent study shows that more than half of the content they post online is not fact-checked, not verified. This is a new challenge that we need to deal with. So the dilemma is there, and the pursuit of truth is especially challenging in this digital age, where misinformation spreads rapidly, far faster than objective information. A recent MIT study shows that false information travels 10 times faster than fact-checked information. So it’s a real challenge, and as I said, this is at the heart of preserving democratic processes. So the question is, what can we do about this? And here, let me say that at UNESCO, we are deeply committed to advancing our mission of protecting the integrity of information. 
And here I must say that we at UNESCO are honored to have been asked last month by the G20 Summit to become the secretariat for a global initiative on information integrity and to administer the global fund allocated to it by the G20, the 20 most important economies of the world. So I think information integrity is at the heart of what we are discussing, especially also when it comes to climate change: how can we combat climate disinformation when we try to resolve the environmental crisis? So this is part of our mission. Now, the next question is how we go about it, and our approach has all along been anchored in international human rights standards. We developed the guidelines I mentioned a few minutes ago, the Guidelines for the Governance of Digital Platforms, again based on human rights, but also promoting transparency, accountability, and inclusivity. One third of women journalists have to quit because of online harassment and, as I said, sometimes physical violence as well. So this is what we have been doing to protect women journalists. Now, you didn’t ask about this, you asked about women politicians, and our panelists have addressed that. So again, one final note to mention: we believe that true empowerment starts with education. Education is at the heart of the matter, and some of the panelists mentioned media and information literacy in the digital age; literacy, again, in reference to education. Our program on that is a cornerstone of our strategy. We want not only to have guidelines for digital platforms and for regulatory authorities, which is the supply side of information, but we also have to work on the demand side of information and its usage. Our aim through our educational program is to make the users of digital platforms media- and information-literate by developing a critical mindset, so they can hopefully distinguish between fact-checked, objective information and falsehood. 
This is something that we believe is very important. We want them to ask a few questions: who created this information I came across online? Why was it shared? And what evidence supports it? Because otherwise, the users of online information become themselves amplifiers of misinformation: they like and share that information. And finally, to say it’s a collective effort. I mentioned what UNESCO is trying to do, but of course it’s a collective effort. We need governments to create policies that protect human rights, safeguard freedom of expression, and put in place the right regulation, maybe, for digital platforms. We want tech companies to adhere to full transparency and accountability and to proper content moderation and curation, and educators and civil society to empower citizens, in the way I mentioned, to discern fact from fiction. Let me just conclude, because I think my time is up, by saying not only that we at UNESCO remain steadfast in our commitment to this cause, but that we believe that together we can build a digital age that does not divide but unites, that does not harm but heals, and that does not undermine democracy but strengthens it.

Pearse O’Donohue: Thank you very much. A lot to think about there. So finally, waiting patiently, we would like to hear from Rosemary Sinclair on her views on this same question. Please, Rosemary, the floor is yours.

Rosemary Sinclair: Thanks, Pearse, and it’s a very big question, as I know you know, so just a few thoughts from me. We’ve been focusing in this panel session on elections, misinformation, and disinformation, but I think we’re really talking more broadly about information, and that means we’re really talking about trust and confidence in an online world. And we’re having this discussion right at the point where we have the possibility to secure amazing innovation, which can benefit individual people, their communities, and their economies. So this is a conversation really worth having. For a long time, we’ve been focused on practical connectivity, and there’s a way to go, I know, particularly in the Global South. More recently, we’ve started to think about cultural connectivity, so efforts focused on digital inclusion through language. But I really want to stress that our focus must be on building, or in some cases rebuilding, confidence online. In Australia, we at .au do annual research into the digital lives of Australians. And for the first time this year, that research told us that Australians are starting to think about doing less online because of the harms they are experiencing, right at a time when, for productivity, efficiency, and innovation reasons, our policy makers and others want them to do more online. So I think we’ve got to get back to a point where technology is seen as a tool and not as something that is somehow beyond policy. And when we’re thinking about policy, we’ve got to balance innovation and integrity. And I think we need some very big thinking, and we’ve done some of that at .au. We forced ourselves to do it using a scenario process, and if the scenarios are of interest to anybody, they’re available for free use on our website. There are two scenarios in there that are pertinent to this discussion, and I’m going to summarize them in about six words. One of them says: government is in charge of information. 
And the other of them says, private sector is in charge of information. And when we dug into those scenarios, what we found were really some shared issues about the rights of individuals to privacy and to choice, the importance of integrity and impartiality around information. There’s a whole set of issues around the importance of the security of people’s identity. We explored the role of the internet, open, free, secure, and globally interoperable. And we really thought about integrity and the assurance processes that would need to be put in place to assure people of integrity. So in answer to the question, we need governance principles, tools, and mechanisms in all of those areas. Getting back to our topic today, which is elections, I wanted to make the point that really democracy now is a team sport. And more than that, it’s actually a global team sport. And who we need on the playing field with the voters and the politicians, we need civil society, we need the technical community, we need the private sector, media, technology companies, the platforms, we need government, public service officials, we need the combination of judiciary and regulators to actually implement and enforce policies, laws, regulations, the people who are accountable for election oversight and the like. In addition, I want to be bold enough to suggest that we might need some philosophers on the playing field as well, to think about the limits of markets as Michael Sandel has done, to think about big questions around values and ethics and culture. My final point, in fact, I’ve got two final points, but the first one is I’m finding it very interesting that organizations that have usually been concentrating on economic policies and competition and the like are becoming very interested in these issues too. 
And if I just give you one little quote from the OECD’s report, Facts Not Fakes: Tackling Disinformation, Strengthening Information Integrity, that report says, informed individuals are the foundation of democratic debates and society. And the report also goes on to make the comment that a multi-stakeholder approach is required to address the complex global challenges of information integrity. More locally in Australia, our ACCC, which is our competition authority, has been conducting an inquiry into digital platforms. And in its final report, it says, this inquiry has highlighted the intersection of privacy, competition, and consumer protection considerations. Privacy and data protection laws can build trust in online markets. So, the fact that these bodies are thinking about these issues for the purpose of economic and societal outcomes, I think, is very interesting. Sorry, Rosemary, I’m going to have to ask you to wrap up now, please. And my final point, please, is just we need to have a global governance architecture. I think the Internet Governance Forum has a role to play, and I’m really hoping that through the processes next year, the role of the IGF is made clear and permanent so that it has the certainty to help do this work.

Pearse O’Donohue: Thank you. Thank you very much, Rosemary, and thank you for that very clear enumeration and explanation of the principles that we need to revisit in the work that we’re doing with regard to the Internet as a whole, and then specifically with regard to election integrity. And, of course, to Tawfik for his analysis, and again, the worrying facts of violence against journalists, particularly female journalists; there is a direct and very thick line between that and election integrity. If the journalists, if the free press, are intimidated into silence, then we are already losing the electoral integrity process. So, something that we must think of, and also the effects of the digital elements to that. So now, as I’ve said, we want to again open up the floor to questions, but particularly statements, because on this occasion we’re going to make you work a little bit harder. So this is to participants here in the room, but also, of course, to online participants. We actually have a couple of questions for you, so if anyone would like to answer those questions, address those questions, or address points made by our panellists in their very rich responses to that set of questions that we put to them. So, it’s simply this: how do you think the broader Internet governance debate intersects with electoral information integrity discussions, and how can the IGF discussions, the multi-stakeholder approach, contribute to improving and strengthening information integrity in elections and in the election space? So, do we have anyone who’d like to take the floor on this, or make comments on what has been heard from the floor? If so, please come to the microphones, one or other, at the head of the room. And I’m also looking at Bruna, if there is anybody online. Okay, well, we’ll keep going, because we have been very disciplined, I have to say.
I’ve been nudging one or two of you, but I would like to thank all the panellists for being so disciplined in time, while giving us such rich responses. But now we have the opportunity, perhaps, to open the debate to you, to respond to everything that your co-panellists have said in answer to the questions they were asked: what are the problems, what evidence has come to light, what initiatives have worked, what hasn’t worked, and what are the principles that we need to apply. So now I’m giving the floor to you, but I would also like to put to you that question that I just posed, and you can tackle any or all of them as you see fit: how can the broader Internet governance discussion and debate intersect with this issue of electoral integrity, and how can the multi-stakeholder approach contribute to improving and strengthening the situation? So now, the floor is open. Who’d like to take the floor? Please, Tawfik.

Tawfik Jelassi: Thank you. You remind us that the focus is elections, of course, and reporting on elections in a fact-checked, objective way requires proper training of journalists covering elections. UNESCO has been doing this in many countries recently, to provide the training needed by journalists, because, of course, the information they bring to the fore is so important, especially in this era of misinformation. Second, the impact of emerging technologies on elections, such as the impact of AI on elections. This is another training that we developed. It’s an online course on the impact of AI and generative artificial intelligence on election processes. So this is part of awareness creation, awareness raising, advocacy, because we need to have in place an enabling environment for elections to take place in a fair, free, and democratic way.

Pearse O’Donohue: Very good. Please, William, you were next.

William Bird: So what struck me is, despite us all coming from radically different perspectives, just how similar the issues we’re facing are, and in fact how similar the approaches to dealing with them are, which says that often these things are, as I said at the beginning, part of a bigger question of how we deal with this new information chaos environment, where power dynamics have shifted so dramatically. And that seems to be a common question that all of us are grappling with to varying degrees. The second thing is the critical importance of digital literacy. This is mentioned at every single event of this kind that I go to. What is consistently still missing, in massive amounts, are effective and properly resourced plans to actually implement these things. So we can come here and say all these good things, but there’s no real meaningful action. And then how do we deal with the outliers, right? Elon Musk being one of those outliers. X’s power is diminishing, but just because it’s diminishing, the harm that it’s causing in very real terms is still significant. And it seems we don’t really have an answer to that. We’ve just seen one of the major world superpowers buddy up to this man who openly used his platform to spread misinformation and, in the case of South Africa, happily allowed it to spread attacks against journalists, inciting violence and hate speech. And we need an answer to that. IGF and all of us, we need to be able to say how we’re going to deal with this. Thanks.

Pearse O’Donohue: Thank you. Daniel Molokele, please.

Daniel Molokele: Thank you so much. I wanted to speak on something that I feel we have not addressed that affects electoral integrity from an information point of view: the issue around the need for standardization and professionalism. You will see that we are seeing a rise of new social media platforms or media technologies that are highly influential in shaping political opinions, especially for the electorate. Some people have got blogs, some people have got podcasts, some of them are live. Some people have got X pages or Twitter pages, they’ve got Facebook pages, they can actually go live at any time and millions of potential voters tune in. In that live broadcast, there are some untested facts around the elections that are said, or even allegations, for example, around rigging or cheating in elections. Because the audience trusts the person behind the podcast or the show, it then affects everything in terms of the integrity of the entire election process. Yet most of these people who conduct these live sessions and so on are actually not trained journalists. They do not practice any form of ethics and they have no form of qualification or certification. And at the end of the day, there is no emphasis on professional research and standardization of content. Also driving them is the fact that at the end of each month, they get a paycheck, and it’s based on the amount of interaction or interactive use of that blog or podcast. So the more people are emotionally tuned in, the more viewership, the more interactions, and the more currency at the end of the month. So the net effect of that is that a single person or two people can actually shape the narrative, depending on the side which they are on. And it then allows people with money, maybe business people, for example, who have got interests, maybe in public tender systems, to actually fund these people, unofficially, and influence the electoral process.
Because at the end of the day, they would want the government that is in power after the elections to be aligned to their business interests. So it’s a major concern. And the main media houses or professional institutions that practice journalism to standards are normally overrun by these kinds of live transmissions. Then I also wanted to zero in on artificial intelligence. For us as Africans, we are coming from a position of being left behind. I think it is still very difficult for the average person, especially in Zimbabwe, where I come from, to distinguish a story that is AI-generated from one that is real. Because if you look at the videos, if you look at the pictures, they look so real. And if you can come up with content that is misleading, misinforming, or disinforming, an average voter will take it seriously. By the time clarifications are done, follow-ups are done, it’s too late, and it then affects the credibility of the election system. Thank you so much.

Pearse O’Donohue: Thank you very much, indeed, I fully agree, I see the same. Now, I just wanted to check, I don’t know if we have a hands-up function, but I want to make sure that if either Rosemary or Liz wanted to come in on what we’ve heard in the panel discussion, and also, of course, this question about the broader internet governance debate and the IGF, would either of you like to come in on this?

Elizabeth Orembo: I can come in and make a few short remarks on how the elections discussion can integrate with internet governance. From my view, as far as the discussion of internet governance is concerned, we are looking at the governance of infrastructure and the governance of content. But then again, when it comes to elections and information integrity, I don’t think any society has really quite agreed on what is disinformation or what is misinformation, what is good information that should be encouraged online and what should be discouraged. What we are also seeing is that people are saying there should be plurality of information, and some of that plural information opposes other information, and in shaping our narratives, we try to label the other information as misinformation. Some misinformation is outright misinformation, put out there to push an unfair narrative, which is also harmful to society. But now we are talking about a plurality of information that sometimes causes tensions in society, and it is not really intentional in harming any citizenry, but still, it harms our democratic process. I think as a society, we need to reflect on such dangerous misinformation, but if regulatory concerns are taken out, then it means that a certain group will feel offended by it. On the internet governance discussion, and this is my last point: I think we need to go broader, to appreciate what’s really happening within the elections environment, because it also touches on the wider issues of development. When the right people, or not the right people, are put in place in governance, it really affects how a society will develop democratically. In some elections, violence also erupts.
It also means that economic consequences follow, because in cases where countries face an election’s aftermath, or are even three years after an election still contesting the results, going to court, and being passive in applying policies put in place by a government seen as illegitimate, it means there is slow economic growth in those countries, and people are not able to prosper there. So I think we need to be wider in how we think about internet governance and democracy, going beyond just content moderation and infrastructure governance to the underlying issues which, as other panellists have said, also play out in between elections, but actually rise when it comes to the elections themselves. Thank you.

Pearse O’Donohue: Thank you. And Rosemary, I saw you wanted to come in.

Rosemary Sinclair: Sorry, Pearse, I’m having trouble with my mute button. Yes, I was wondering if we could pursue the idea that I think Lina put on the table earlier. And I just wondered if we could hear a little bit more about that work in Lithuania.

Pearse O’Donohue: Okay, well, Lina, I think that’s an invitation to you. Please.

Lina Viltrakiene: Well, of course, as I already presented, we have a quite comprehensive system established in Lithuania on countering disinformation. But perhaps, as we are now moving to the end of the discussion, I wanted to react very briefly to your question about how IGF discussions could indeed lead us to something more specific. So, we are now in the process of defining the responsibility of social media platforms, and perhaps finding some legal tools to enforce that. So, we really think that we could establish clear responsibilities, legal obligations, and sometimes even penalties for platforms that fail to prevent the spread of organized disinformation campaigns. But perhaps, drawing inspiration from mechanisms and architectures such as those of the digital market and information distribution, there are concrete approaches which we could develop by discussing them in this multi-stakeholder format, where all views are heard and all of us are on board. When we all cooperate, that is the time to have these kinds of discussions.

Pearse O’Donohue: I am going to come to you now. I see unfinished business, but also a great opportunity to take this forward, because I’m going to ask all of the panellists now to give their closing views. I would like to start by saying thank you to all of you for being here. I know that you are all very committed, and you know that I’m tough, but you have been very disciplined. What I’d really love is if you could come up with a recommendation, a request, a best practice that we could take as takeaways from this discussion, and, as I said, I think there can be another one from Lina or from Rosemary. I think that the way in which a person, a user, can address themselves and hope to have action taken is one of those issues, and how we can adapt that, I’m sure there are different models, but if it were to be available throughout the system, it would be great. But anyway, I’ll ask you to do that now. Two minutes maximum, and I will cut you off so that everyone gets their last word. And I will start with you, Sezen, please.

Sezen Yesil: Okay, thanks so much. So, the problems we have discussed today, like disinformation and misinformation, are probably as old as the history of democracy. But of course, the use of the internet brings those to another level. We all accept that. Also, the problems are not specific to one country or to one platform only. Bad actors, for example, can work from country X to target people in country Y, and they use all the platforms they can use. So, I believe that, like all other global problems, election integrity-related problems can be tackled best in collaboration with all the stakeholders: private sector, public sector, academia, and civil society. The beauty of the IGF is that it’s bringing us all together, and we are hearing each other. That’s great. So, I really appreciate all the esteemed panelists’ views and comments on the matter. I’m taking this as my homework. I will feed those as input to our election integrity work back at Meta. So, at Meta, we understand our responsibility, and we try to improve ourselves. We are already leading and participating in many collaborative efforts, and we will be more than happy to expand our collaborative efforts to the other stakeholders, including governments, UNESCO, etc. Thanks so much for this opportunity. Thank you very much.

Pearse O’Donohue: And now I’ll turn to you, William. Your two minutes. Yes. Don’t worry. I’m being random for a purpose.

William Bird: Okay. So, my points would be, I think, to call for resources outside of election periods, because what we see is that elections approach, and suddenly everyone’s excited, and then elections go, and we all say, yes, these are bad, and then suddenly there’s no work. We need to see these as ongoing societal challenges. The second point is that the intersection of online harms needs to be dealt with comprehensively, and then we need to see some action. It’s not enough for us to just say, oh, yes, the attacks against women online are very bad, and we really must do something. Let’s do something. Let’s hold some of these people accountable. We can’t take Elon Musk to court in South Africa because they don’t have anything there, but why aren’t gender-based violence groups taking him to court in the United States, where he’s domiciled? We need to be holding people accountable who continue these things. We can’t leave it as is any longer. And the third thing, I think, is that mis- and disinformation are terms thrown around, and you’re right, everyone said there’s no common definition. For us, we reference public harm as one of the elements of mis- and disinformation, and I think one of the things that we could and should be looking at are more nuanced labels around understanding mis- and disinformation, because it isn’t all the same thing, and we already accept some things as problematic. Thank you.

Pearse O’Donohue: Thank you very much. I’ll now turn to Tawfiq please for your two minutes worth. Thank you.

Tawfik Jelassi: Thank you very much. Not to repeat myself, but the title here is how to maximise the potential for trust while addressing the risks. For me, and I repeat myself, the number one risk is dis- and misinformation. It’s not by chance that the Davos World Economic Forum this year put mis- and disinformation as the number one global risk for 2024 and 2025. This is the super election year, and it will continue in 2025. Disinformation for me is at the heart of the battle, and if we can address it, if we can minimise it, even if we cannot totally eliminate it, I think we will be able to maximise trust and address the risk that it represents.

Pearse O’Donohue: Thank you very much. Liz, are you still with us? We see a very interesting photo of you. If not, then I will turn for the moment to Rosemary, please. What would be your last comments? Now we see you, Liz, but we’ll come back to you in a moment. Thank you. Rosemary, go ahead.

Rosemary Sinclair: Yes, thank you. I’d like to make two comments really. One is to re-emphasise what I said very briefly: that I would like to see the role of the IGF clarified and made permanent, so that we have a forum for large multi-stakeholder discussions about matters of importance. The second thing is that, practically, I would like to see the tapestry of internet governance forums knitted together much more closely. So in Australia we have the local IGF, AUIGF, and then we have the Asia-Pacific regional IGF, and then we come to the global IGF. If we could imagine a world where all of that effort was focused on a topic area, perhaps a centralised clearinghouse or reporting mechanism, perhaps how to deal with platforms. If we could somehow maximise that effort and bring that work to the global IGF for consideration and discussion by the multi-stakeholder community, then I can see a possibility of progress. Thank you.

Pearse O’Donohue: Thank you very much. Now, Liz, we’d like to hear from you. No, we don’t hear you. Keep trying to unmute. Yes, now I succeeded in unmuting, thank you.

Elizabeth Orembo: I think my one point is that, positively, we’ve seen a lot of efforts on strengthening information integrity around elections. This year came as a golden opportunity because of the many elections happening, and in Africa we’ve seen a lot of partnerships, different kinds of partnerships, stakeholders coming together to fight disinformation, to map disinformation risks and fight them proactively. But this has happened somewhat in silos, in different parts of the partnership, while also leaving out important stakeholders. As one panelist said, it’s a big team thing, and we should expand it more. The other thing is that we should not leave the work here. We should make these different pieces of work connect to each other to get a full picture of what really happened this year and what to anticipate in the elections next year and even the years after. So that connection, connecting the dots from the partnerships in civil society with the tech people and even the data enthusiasts. There are people who are working with data: what really happened there, and how can we connect the dots there. Thank you.

Pearse O’Donohue: Thank you so much. Now I’d like to turn to Daniel, please.

Daniel Molokele: Thank you so much. I think in 2024, we saw democracy continue to grow and take root in Africa. The elections that we had all across Africa significantly gave us an opportunity as Africa to showcase ourselves and rebuild our reputation as a continent. To that end, access to information from the perspective of electoral integrity is very important to Africans. And as a parliamentarian, I think one of the issues we need to focus on is making sure that there is standardization in terms of the quality of information and news across Africa, especially during election campaigns and results announcements. And we must make sure that the policy framework and the legislative framework all across Africa are modelled and standardised so that they benefit democracy in Africa. Because information has the potential to build our democracy, the potential to make our electoral systems accepted and credible; but at the same time, it has the potential to undermine our electoral integrity. So it’s important that we create model laws from a continental point of view that will enhance access to quality information and promote electoral integrity. Thank you.

Pearse O’Donohue: Thank you. And now, Lina, please.

Lina Viltrakiene: Well, from my side, I would like to leave you with the message that elections are the test of democracy. And indeed, democracy is not something that we can take for granted. So if we want to live in a democratic society, upholding liberty, human rights, the rule of law, and other democratic values, all together we need to work on strengthening democracy. And in this task, I firmly believe that all our societies need to be on board, and that is the only way to build trust and to take comprehensive action on strengthening resilience and critical thinking. And again, I believe that critical thinking and resilience are key, indeed, in all our efforts to ensure smooth, reliable, democratic elections, free from malign interference. And perhaps, in addition, just one more thing I wanted to mention is how important it is to coordinate among ourselves, among democracies; that is crucial, indeed, to ensure that we respond appropriately and effectively to foreign information manipulation and interference, and that we prevent hostile actors from manipulating and hijacking the information space. Thank you.

Pearse O’Donohue: Thank you, Tawfik. You were very, very short in your-

Tawfik Jelassi: 30 seconds.

Pearse O’Donohue: So you have 30 seconds that you didn’t use. Go ahead, please.

Tawfik Jelassi: Why do I ask for the floor again? Because your question had two parts. What can the IGF do about it? That I did not answer in my first intervention. 20 years ago, the IGF did not foresee the rise of digital platforms, nor the harmful online content that we suffer from today. I believe that, going forward, the IGF has to ensure that information is a public good, not a public hazard, not a public harm.

Pearse O’Donohue: Thank you. So, I want to draw the meeting to a close. I’m not going to draw formal conclusions; that would not be appropriate, and it would be very subjective and impressionistic, and I’ll tell you about what we’re going to do in a moment. But what I would like, first of all, is I think we should show our appreciation for the fantastic insights and analysis by our panel, physically and online. Please, a round of applause. And they have made my job very easy, one, by really focusing on the questions, but also by allowing the discussion to continue by limiting their time. I know it’s very frustrating. The only censorship that is allowed during the multi-stakeholder process is your speaking time; everything else is just not allowed. So what I would like to say is that we have a clear set of well-informed views which show that, yes, experience tells us that the threats are real, that the challenges have been experienced across a number of countries and regions, and we would expect that they will get worse unless action is taken. Whereas disasters were largely avoided in 2024, there have been some very stark examples given to us of where serious problems have arisen. In almost all domains, countries have seen a level of disinformation, certainly misinformation, going all the way to the use of deepfakes, as well as the suppression of opposing views. So we are united in diversity, and it might not always be the case that I am happy with the result of the election. My side didn’t win. That’s not the point. That’s democracy. The question is: did the side that won do so on the basis of the democratic process, which we all welcome, or did they do so because they used digital technologies to misinform, to disinform, or to actively prevent another voice from being heard? And that is the line that we must follow with regard to information and with regard to election integrity.
I think we’ve had some great insights. As I said at the start, what we hope will come next is, in listening to the stakeholders, to actually share our experiences, find inspiration, make suggestions, and be able to give actionable insights to guide stakeholders and the actions that they can take, including and particularly in the IGF. We have the IGF coming to us next June, so I think this work can continue. And for that, we have a rapporteur from this session who will help us, and I’d like to thank him, Jordan Carter, for his contribution in organising the session as well. But as well as thanking our panellists, I must single out in particular Bruna Martins dos Santos, who has been the driving force in organising this event. I saw Bruna in action at NETmundial, and we’re very grateful for all the work that she’s doing on the MAG. By the way, Jordan is also on the MAG, and we really think that this is an issue that we will continue to need to focus on, and where the IGF and the multi-stakeholder process that it represents is the only forum in which we can find consensual responses to the challenges of digital while, and I think this was also what Tawfik wanted to say as well, embracing all the good that digital technologies can bring to societies across the world. Thank you for your presence, thank you to those online, thank you again to the speakers, and I wish you a great continuation of the IGF.


Tawfik Jelassi

Speech speed

137 words per minute

Speech length

1363 words

Speech time

593 seconds

Spread of misinformation and disinformation online

Explanation

Disinformation and misinformation are major challenges to election integrity in the digital age. They spread rapidly online and can significantly impact public trust and democratic processes.

Evidence

MIT study shows that false information travels 10 times faster than fact-checked information.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

William Bird

Sezen Yesil

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Violence and intimidation against journalists, especially women

Explanation

Journalists, particularly female journalists, face violence and intimidation when covering elections. This poses a serious threat to press freedom and election integrity.

Evidence

One third of women journalists have quit due to online harassment and physical violence.

Major Discussion Point

Challenges to election integrity in the digital age

Media literacy and digital skills education programs

Explanation

UNESCO is focusing on education programs to improve media and information literacy in the digital age. These programs aim to help users develop critical thinking skills to distinguish between fact-checked information and falsehoods.

Evidence

UNESCO’s program on media and information literacy is a cornerstone of their strategy.

Major Discussion Point

Successful initiatives and best practices

Training journalists on election coverage and emerging technologies

Explanation

UNESCO provides training for journalists on covering elections and the impact of emerging technologies like AI. This helps ensure more accurate and responsible reporting during election periods.

Evidence

UNESCO has developed an online course on the impact of AI and generative artificial intelligence on election processes.

Major Discussion Point

Successful initiatives and best practices

Treating information as a public good, not a public hazard

Explanation

The IGF should focus on ensuring that information is treated as a public good rather than a public hazard. This approach is crucial for addressing the challenges of harmful online content and protecting democratic processes.

Major Discussion Point

Governance principles and mechanisms needed

William Bird

Speech speed

155 words per minute

Speech length

1485 words

Speech time

572 seconds

Attacks on electoral management bodies and journalists

Explanation

There have been multi-pronged attacks on electoral management bodies and journalists, following a disinformation playbook. These attacks target the entities, their decisions, and individuals within them, often using pseudo-legal challenges.

Evidence

Over a two-week period, there were over a thousand attacks against journalists on X, with most targeting one journalist in particular.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

Sezen Yesil

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Public reporting platforms for online harms

Explanation

South Africa has implemented public platforms for reporting attacks against journalists and other online harms. These platforms operate independently of the state and apply consistent standards across different social media platforms.

Evidence

South Africa has platforms called Mars and Real 411 for reporting attacks against journalists and other online harms like misinformation and hate speech.

Major Discussion Point

Successful initiatives and best practices

Daniel Molokele

Speech speed

138 words per minute

Speech length

1200 words

Speech time

519 seconds

Lack of regulation for influential social media personalities

Explanation

There is a rise of influential social media personalities who can shape political narratives without proper journalistic training or ethics. This lack of regulation and standardization can significantly impact election integrity.

Evidence

Examples of podcasts, blogs, and live broadcasts that can reach millions of potential voters with untested facts or allegations about elections.

Major Discussion Point

Challenges to election integrity in the digital age

Standardization of quality information and news across regions

Explanation

There is a need for standardization in the quality of information and news across Africa, especially during elections. This includes developing model laws and policy frameworks to enhance access to quality information and promote electoral integrity.

Major Discussion Point

Governance principles and mechanisms needed

Elizabeth Orembo

Speech speed

124 words per minute

Speech length

1719 words

Speech time

831 seconds

Digital inequality limiting access to reliable information

Explanation

Digital inequality in Africa leads to uneven access to information, creating fertile ground for misinformation. This inequality affects people’s ability to make informed choices during elections.

Evidence

Challenges in policy, infrastructure, and media access in African countries.

Major Discussion Point

Challenges to election integrity in the digital age

Sezen Yesil

Speech speed

138 words per minute

Speech length

1647 words

Speech time

711 seconds

Coordinated inauthentic behavior on social platforms

Explanation

Meta has identified and removed numerous networks engaged in coordinated inauthentic behavior. These networks spread disinformation and mislead people, particularly during election periods.

Evidence

Meta removed about 20 coordinated inauthentic behavior networks in 2024 alone.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

William Bird

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Collaboration between platforms, fact-checkers and authorities

Explanation

Meta collaborates with third-party fact-checkers, local trusted partners, and authorities to combat misinformation. This multi-stakeholder approach helps in receiving timely insights and taking appropriate actions.

Evidence

Meta works with more than 90 third-party fact-checkers around the world, covering 60 languages.

Major Discussion Point

Successful initiatives and best practices

Agreed with

Lina Viltrakiene

Rosemary Sinclair

Agreed on

Need for multi-stakeholder collaboration

Differed with

Lina Viltrakiene

Differed on

Approach to regulating digital platforms

Technical measures to detect manipulated media and inauthentic accounts

Explanation

Meta employs various technical measures to detect and remove fake accounts and manipulated media. These measures help maintain the integrity of the platform during elections.

Evidence

Meta’s automatic detection tools block billions of fake accounts, often within minutes of creation.

Major Discussion Point

Successful initiatives and best practices

Lina Viltrakiene

Speech speed

121 words per minute

Speech length

1451 words

Speech time

719 seconds

Use of AI and deepfakes to create misleading content

Explanation

The use of AI and deepfakes to create misleading content, such as fake statements from top politicians, poses a significant threat to election integrity. This technology can influence people’s choices and erode trust in democratic institutions.

Evidence

Experiences from Romanian and Bulgarian elections where significant interference by foreign actors via social media platforms was observed.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

William Bird

Sezen Yesil

Agreed on

Misinformation and disinformation as major threats

Multi-stakeholder approach to monitoring and countering disinformation

Explanation

Lithuania has implemented a consolidated system for monitoring and neutralizing disinformation. This system involves various stakeholders including state institutions, NGOs, media, and businesses to create societal resilience against disinformation.

Evidence

Lithuania’s Civic Resilience Initiative and debunk.org work on analyzing and countering disinformation, as well as promoting digital and media literacy.

Major Discussion Point

Successful initiatives and best practices

Agreed with

Sezen Yesil

Rosemary Sinclair

Agreed on

Need for multi-stakeholder collaboration

Clear responsibilities and accountability for digital platforms

Explanation

There is a need to establish clear responsibilities, legal obligations, and potential penalties for digital platforms that fail to prevent the spread of organized disinformation campaigns. This approach aims to improve the governance of digital platforms during elections.

Major Discussion Point

Governance principles and mechanisms needed

Differed with

Sezen Yesil

Differed on

Approach to regulating digital platforms

Global cooperation and information sharing between democracies

Explanation

Coordination among democracies is crucial to effectively respond to foreign information manipulation and interference. This cooperation can help prevent hostile actors from manipulating the information space during elections.

Major Discussion Point

Governance principles and mechanisms needed

Rosemary Sinclair

Speech speed

119 words per minute

Speech length

1509 words

Speech time

756 seconds

Balancing innovation with integrity and human rights protections

Explanation

There is a need to balance innovation in the digital space with integrity and human rights protections. This involves developing governance principles that address issues of privacy, security, and trust in the online world.

Evidence

Research in Australia shows that people are starting to do less online due to the harms they are experiencing.

Major Discussion Point

Governance principles and mechanisms needed

Strengthening the role of IGF in addressing information integrity

Explanation

The role of the Internet Governance Forum (IGF) should be clarified and made permanent to provide a forum for multi-stakeholder discussions on important issues like information integrity. This could help in developing more effective global governance mechanisms.

Major Discussion Point

Governance principles and mechanisms needed

Agreed with

Sezen Yesil

Lina Viltrakiene

Agreed on

Need for multi-stakeholder collaboration

Agreements

Agreement Points

Misinformation and disinformation as major threats

Tawfik Jelassi

William Bird

Sezen Yesil

Lina Viltrakiene

Spread of misinformation and disinformation online

Attacks on electoral management bodies and journalists

Coordinated inauthentic behavior on social platforms

Use of AI and deepfakes to create misleading content

Multiple speakers identified the spread of misinformation and disinformation as a significant threat to election integrity, highlighting various forms and channels through which this occurs.

Need for multi-stakeholder collaboration

Sezen Yesil

Lina Viltrakiene

Rosemary Sinclair

Collaboration between platforms, fact-checkers and authorities

Multi-stakeholder approach to monitoring and countering disinformation

Strengthening the role of IGF in addressing information integrity

Several speakers emphasized the importance of collaboration between various stakeholders, including tech platforms, fact-checkers, authorities, and civil society, to effectively address election integrity issues.

Similar Viewpoints

Both speakers highlighted the serious issue of attacks and intimidation against journalists, recognizing it as a significant threat to press freedom and election integrity.

Tawfik Jelassi

William Bird

Violence and intimidation against journalists, especially women

Attacks on electoral management bodies and journalists

Both speakers addressed issues related to information quality and access in Africa, emphasizing the need for better regulation and infrastructure to ensure reliable information during elections.

Daniel Molokele

Elizabeth Orembo

Lack of regulation for influential social media personalities

Digital inequality limiting access to reliable information

Unexpected Consensus

Importance of digital literacy and education

Tawfik Jelassi

William Bird

Lina Viltrakiene

Media literacy and digital skills education programs

Public reporting platforms for online harms

Multi-stakeholder approach to monitoring and countering disinformation

Despite coming from different backgrounds (UNESCO, civil society, and government), these speakers all emphasized the importance of digital literacy and education in combating misinformation and protecting election integrity.

Overall Assessment

Summary

The main areas of agreement included recognizing misinformation and disinformation as major threats to election integrity, the need for multi-stakeholder collaboration, the importance of protecting journalists, and the value of digital literacy and education programs.

Consensus level

There was a moderate to high level of consensus among the speakers on the key challenges facing election integrity in the digital age. This consensus suggests a shared understanding of the problems, which could facilitate more coordinated and effective responses to these challenges. However, there were some differences in the specific solutions or approaches proposed, indicating that while there is agreement on the problems, there may be diverse views on how best to address them.

Differences

Different Viewpoints

Approach to regulating digital platforms

Lina Viltrakiene

Sezen Yesil

Clear responsibilities and accountability for digital platforms

Collaboration between platforms, fact-checkers and authorities

Lina Viltrakiene advocates for establishing clear legal responsibilities and potential penalties for digital platforms, while Sezen Yesil emphasizes voluntary collaboration between platforms, fact-checkers, and authorities.

Unexpected Differences

Focus on AI and deepfakes

Lina Viltrakiene

Sezen Yesil

Use of AI and deepfakes to create misleading content

Technical measures to detect manipulated media and inauthentic accounts

While Lina Viltrakiene emphasizes the threat of AI and deepfakes in creating misleading content, Sezen Yesil surprisingly downplays this concern, stating that the risks did not materialize significantly in recent elections. This unexpected difference highlights varying perceptions of the immediate threat posed by AI in election integrity.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to regulating digital platforms, the focus on AI and deepfakes as immediate threats, and the most effective methods for combating misinformation and improving information quality.

Difference level

The level of disagreement among speakers is moderate. While there is a general consensus on the importance of addressing misinformation and protecting election integrity, speakers differ on the specific strategies and priorities. These differences reflect the complex nature of the issue and the need for a multi-faceted approach, potentially complicating efforts to develop unified global strategies for protecting election integrity in the digital age.

Partial Agreements


All speakers agree on the need to improve information quality and combat misinformation, but propose different approaches: Tawfik Jelassi focuses on education programs, William Bird on public reporting platforms, and Daniel Molokele on standardization of news quality.

Tawfik Jelassi

William Bird

Daniel Molokele

Media literacy and digital skills education programs

Public reporting platforms for online harms

Standardization of quality information and news across regions


Takeaways

Key Takeaways

The integrity of elections is facing significant challenges in the digital age, including misinformation, disinformation, and attacks on electoral bodies and journalists

Successful initiatives to protect election integrity include multi-stakeholder collaboration, media literacy programs, and technical measures by platforms

Governance principles needed include balancing innovation with integrity, global cooperation between democracies, and treating information as a public good

The Internet Governance Forum (IGF) has an important role to play in addressing information integrity issues globally

Resolutions and Action Items

Continue discussions on election integrity at future IGF meetings

Clarify and strengthen the role of the IGF in addressing information integrity issues

Develop more coordinated efforts between national, regional and global IGFs on key topics like election integrity

Expand collaborative efforts between platforms, governments, civil society and other stakeholders

Unresolved Issues

How to effectively regulate influential social media personalities and content creators

Addressing the digital divide that limits access to reliable information in some regions

Balancing free speech protections with the need to combat harmful misinformation

How to hold global platforms accountable across different national jurisdictions

Developing common definitions and standards for identifying misinformation/disinformation

Suggested Compromises

Balancing innovation in digital technologies with the need for integrity and human rights protections

Finding a middle ground between government regulation of platforms and industry self-regulation

Developing nuanced labels and categories for different types of problematic content, rather than broad definitions

Thought Provoking Comments

We must address, already on Sunday morning, in day zero of this Internet Governance Forum here in Saudi Arabia, we had a session on misinformation. And in that session, we also had a session on the role of stakeholders in protecting election integrity and the right to information.

Speaker

Pearse O’Donohue

Reason

This comment set the stage for the entire discussion by framing it within the broader context of the IGF and highlighting the key themes of misinformation and stakeholder roles in protecting election integrity.

Impact

It focused the discussion on the intersection of internet governance and election integrity, prompting panelists to address these specific issues throughout their remarks.

Throughout this year we decided to update some of our policies. For example we updated our penalty system per feedback of the oversight board to treat people more fairly and to give them more free expression and secondly we updated our policy on violence.

Speaker

Sezen Yesil

Reason

This comment provided concrete examples of how a major tech platform is adapting its policies to balance free expression with preventing harmful content, particularly in the context of elections.

Impact

It sparked discussion about the role of tech platforms in moderating content and the challenges of balancing different rights and interests.

What has not worked well is the exponential spread of disinformation and hate speech derailing the integrity of electoral processes, and maybe casting some doubt or trust in election outcomes and democratic institutions.

Speaker

Tawfik Jelassi

Reason

This comment highlighted a major challenge facing election integrity in the digital age, pointing to the broader implications for democratic institutions.

Impact

It shifted the conversation to focus more on the negative impacts of disinformation and hate speech, prompting other panelists to address these issues in their remarks.

Because with data becoming more available, we also need more capacity to crunch data to get it to people. And those capacities were different as well, and sometimes challenging.

Speaker

Elizabeth Orembo

Reason

This comment introduced the important issue of data literacy and capacity, particularly in the context of the Global South.

Impact

It broadened the discussion to include considerations of digital inequality and the need for capacity building in data analysis and interpretation.

It’s been a big year but I want to just ask if people genuinely feel better about democracy having had 65, 70, 75 elections. Because the sense that I get from speaking to people is that despite it being, it should be a year of celebrating democracy, we don’t feel good about democracy

Speaker

William Bird

Reason

This comment challenged the assumption that more elections necessarily lead to stronger democracy, introducing a more nuanced perspective on the state of global democracy.

Impact

It prompted a deeper reflection on the quality of democracy beyond just the quantity of elections, influencing subsequent comments on the challenges facing democratic processes.

We still need to see more young people as candidates or as elected representatives. We also saw the use of social media in a much more progressive way to mobilize people to voter registration and more importantly to turn out as voters, including media platforms such as TikTok, WhatsApp, Facebook, and X

Speaker

Daniel Molokele

Reason

This comment highlighted the positive potential of social media in engaging young voters, while also pointing out the need for greater youth representation in politics.

Impact

It shifted the discussion to consider the role of social media in political engagement and the importance of youth participation in democratic processes.

Thus, this shows us that we need to work further on continuous collaboration of platforms with state institutions. And while regulatory frameworks perhaps should be improved, and as a model, I would like to refer to the EU’s Digital Services Act, which could really encourage the thinking.

Speaker

Lina Viltrakiene

Reason

This comment introduced the idea of regulatory frameworks as a potential solution to challenges in digital election integrity, specifically referencing the EU’s Digital Services Act.

Impact

It prompted discussion about the role of regulation in addressing digital challenges to election integrity and the potential for international cooperation in this area.

So practically speaking, during elections, we sometimes see at auDA increased requests from people to take down the websites of their political opponents. And those requests are often made with claims of misinformation or disinformation. Those claims must be assessed by others who are authorised by law and skilled to make those judgements.

Speaker

Rosemary Sinclair

Reason

This comment provided a concrete example of the challenges faced by technical operators during elections, highlighting the complexity of content moderation decisions.

Impact

It grounded the discussion in practical realities and emphasized the need for clear guidelines and authorized bodies to make content moderation decisions during elections.

Overall Assessment

These key comments shaped the discussion by highlighting the multifaceted challenges facing election integrity in the digital age, from disinformation and hate speech to digital inequality and youth engagement. They prompted a nuanced exploration of the roles and responsibilities of various stakeholders, including tech platforms, governments, civil society, and international bodies. The discussion evolved from identifying problems to considering potential solutions, including policy updates, capacity building, regulatory frameworks, and multi-stakeholder collaboration. Throughout, there was a tension between the potential of digital technologies to enhance democratic participation and the risks they pose to election integrity, reflecting the complex nature of internet governance in relation to democratic processes.

Follow-up Questions

How can we develop more nuanced labels and definitions for misinformation and disinformation?

Speaker

William Bird

Explanation

Current definitions are too broad and don’t account for different types and levels of harm. More precise categorization could help in addressing these issues more effectively.

How can we create a centralized platform for reporting online harassment and attacks, especially against politicians and journalists?

Speaker

Maha Abdel Nasser

Explanation

A unified reporting system could help address online harassment more quickly and effectively, particularly during election periods.

What legal and political solutions can address the challenge of digital platforms refusing to cooperate with authorities in independent countries?

Speaker

Mokabedi (online participant)

Explanation

This is important to ensure consistent enforcement of policies across different countries and platforms.

How can we standardize the quality of information and news across Africa, especially during elections?

Speaker

Daniel Molokele

Explanation

Standardization could help improve the integrity of electoral information and strengthen democracy across the continent.

How can we better connect and synthesize the work of different partnerships and stakeholders working on election integrity?

Speaker

Elizabeth Orembo

Explanation

Connecting these efforts could provide a more comprehensive understanding of election integrity issues and more effective solutions.

How can we ensure consistent implementation of content moderation policies across different social media platforms?

Speaker

William Bird

Explanation

Consistency across platforms is crucial for effective management of online harms and misinformation.

How can we better address the challenges posed by non-professional content creators (e.g., podcasters, bloggers) in spreading election-related misinformation?

Speaker

Daniel Molokele

Explanation

These new media sources have significant influence but often lack professional standards or oversight, potentially impacting election integrity.

How can we improve digital literacy efforts, particularly in the Global South, to help users distinguish between AI-generated and real content?

Speaker

Daniel Molokele

Explanation

As AI-generated content becomes more prevalent, the ability to identify it is crucial for maintaining election integrity.

How can the role of the Internet Governance Forum be clarified and made permanent to address ongoing issues of online information integrity?

Speaker

Rosemary Sinclair

Explanation

A clearer, permanent role for the IGF could provide a consistent forum for addressing these evolving challenges.

How can we better integrate local, regional, and global Internet Governance Forums to address issues like election integrity more effectively?

Speaker

Rosemary Sinclair

Explanation

Better integration could lead to more coordinated and effective responses to global challenges in online information integrity.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Networking Session #74 Digital Innovations Forum- Solutions for the Offline People


Session at a Glance

Summary

This discussion focused on efforts by various organizations to improve internet access and digital inclusion globally, particularly in developing regions. Representatives from donor organizations, government agencies, and foundations shared their initiatives and perspectives on challenges in this area.


Key themes included the need for affordable broadband access, digital skills development, and addressing widening digital divides. Many speakers emphasized the importance of coordinating efforts between donors and stakeholders to avoid duplication and maximize impact. Innovative approaches mentioned included using local NGOs and digital ambassadors to reach underserved communities, integrating digital literacy into formal education, and employing blended financing models.


Challenges highlighted included regulatory barriers, geopolitical issues limiting work in certain countries, and ensuring projects maintain market competitiveness. Several participants noted the difficulty in reaching the most vulnerable populations due to sanctions or security concerns. The need to address the digital gender divide was also raised as a priority.


Looking ahead, speakers suggested focusing on areas like AI governance, cloud transformation for countries, and completing partially implemented projects. There was broad agreement on the need for continued dialogue and knowledge sharing between funders and implementers. Suggestions included creating simplified grant processes, taking a whole-of-government approach to digital transformation, and finding ways to work within legal frameworks to support restricted areas.


Overall, the discussion underscored the complex, multifaceted nature of expanding meaningful internet access globally and the ongoing need for collaboration and innovation in this space.


Keypoints

Major discussion points:


– Challenges in improving internet access and digital inclusion, including affordability, digital literacy, and reaching underserved populations


– The need for better coordination and collaboration between donors and organizations working on digital development projects


– Innovative approaches to funding and implementing digital inclusion initiatives, such as working with local partners and simplifying grant processes


– The importance of addressing widening digital divides and emerging issues like AI governance


– Barriers to funding certain regions due to sanctions and geopolitical issues


The overall purpose of the discussion was to bring together representatives from donor organizations, regulators, and foundations to share their work on digital inclusion projects, discuss challenges, and explore opportunities for collaboration and improved coordination of efforts.


The tone of the discussion was collaborative and solution-oriented. Participants were open in sharing both successes and challenges in their work. There was a sense of urgency around addressing digital divides and a willingness to consider new approaches. The tone became more action-oriented towards the end as participants discussed concrete next steps and ways to continue the dialogue.


Speakers

– Amrita Choudhury: Moderator


– Franz von Weizsäcker: Representative from GIZ


– David Hevey: Representative from Australian Department of Foreign Affairs


– Zhang Xiao: Representative from CNNIC (China Network Information Center)


– Sarah Armstrong: Representative from ISOC Foundation


– Rajnesh Singh: CEO of APNIC Foundation


– Samia Melhem: Representative from World Bank


Additional speakers:


– Ekaterina (Katrina): Commissioner from Georgia


– Shadia (Sharir): Representative from Islamic Development Bank


Full session report

Revised Summary of Digital Inclusion Discussion


This report summarizes a discussion on efforts to improve internet access and digital inclusion globally, with a focus on developing regions. Representatives from donor organizations, government agencies, and foundations shared their initiatives and perspectives on challenges in this area.


Key Themes and Speakers’ Contributions


1. David Hevey (Australian Government):


– Discussed infrastructure investments in Pacific Island countries


– Emphasized the importance of affordable broadband access


– Suggested developing cloud transformation roadmaps for countries like PNG and Vanuatu


– Stressed the need for regional coordination among donors to avoid overloading recipient countries


2. Franz von Weizsäcker (GIZ):


– Highlighted social insurance and digital skills programs in Southeast Asia


– Emphasized the importance of funding local organizations for better context understanding


– Advocated for a decentralized decision-making approach in project implementation


– Proposed simplifying grant processes to better support local innovation


3. Sarah Armstrong (Internet Society Foundation):


– Discussed digital literacy programs for women with disabilities


– Emphasized the need to address the digital gender divide


– Highlighted the foundation’s focus on innovative technologies, digital skills training, and internet economy


4. Zhang Xiao (formerly with ITU):


– Emphasized the importance of digital literacy programs for elderly populations


– Highlighted the need to address AI governance in the context of digital development


5. Rajnesh Singh (APNIC Foundation):


– Raised concerns about widening digital divides across multiple layers of society


– Mentioned an innovation fund for inclusion, knowledge, and infrastructure


– Highlighted challenges related to sanctions and geopolitical issues limiting aid to certain economies


– Emphasized the need for multi-modal approaches rather than relying on a single technology


6. Samia Melhem (World Bank):


– Discussed initiatives in digital public infrastructure and sectoral applications


– Emphasized the importance of local knowledge and capacity building


– Stressed the need to complete and refine projects already started rather than initiating new ones


7. Shadia (Islamic Development Bank):


– Highlighted the bank’s focus on digital transformation and innovation


– Discussed initiatives to support member countries in digital development


8. Ekaterina (Commissioner from Georgia):


– Shared information about a broadband development program in rural Georgia


– Discussed digital literacy initiatives implemented in the country


Challenges and Considerations


1. Affordability: Multiple speakers emphasized the need for affordable broadband access, particularly in developing regions.


2. Digital Literacy: Various digital literacy programs were discussed, targeting different demographics such as elderly populations, women with disabilities, and rural communities.


3. Widening Digital Divides: Concerns were raised about current approaches potentially exacerbating inequality rather than reducing it.


4. Cybersecurity: The need for robust security measures alongside increased access was highlighted.


5. Regulatory and Geopolitical Barriers: Sanctions and geopolitical issues often limit the ability to provide aid to certain economies, presenting a significant challenge to digital inclusion efforts.


6. Market Competitiveness: Balancing market competitiveness with funding rural access initiatives was identified as a challenge.


7. Donor Coordination: The need for better coordination among donor organizations to avoid duplication of efforts and ensure more effective use of resources was emphasized.


Future Focus Areas and Considerations


1. Implementing digital literacy in formal education


2. Addressing the digital gender divide more effectively


3. Supporting cloud transformation roadmaps for countries


4. Improving monitoring and evaluation of digital development projects


5. Addressing AI governance in digital development contexts


6. Facilitating partnerships between large organizations and NGOs for project funding and implementation


7. Ensuring follow-up projects to complete and perfect initiatives that have been started


8. Exploring ways to work within legal frameworks to support people in countries where direct funding is challenging due to sanctions or regulations


9. Simplifying grant application processes to make them more accessible to local organizations


The moderator suggested creating a mailing list for continued dialogue among donors, highlighting the ongoing need for collaboration and information sharing in this space.


In conclusion, the discussion underscored the complex, multifaceted nature of expanding meaningful internet access globally. While challenges remain, particularly in terms of coordination and reaching the most vulnerable populations, there was a clear commitment among participants to finding innovative solutions and improving the effectiveness of digital inclusion efforts.


Session Transcript

Amrita Choudhury: Hi everyone, and thank you for coming this afternoon. My name is Amrita and I will be the moderator of this session, and I hope you can hear me. Okay. Franz has gone to get some water. It's not an ideal setup, but we have to make do with it. So this is primarily an interactive, open discussion that we are having today. We have a few donor organizations and regulators here, and what we want to do is have a discussion on the kinds of projects they are doing, why they're investing in these kinds of projects, what challenges they see, and whether there could be a way ahead with some synergies between the different entities. Oops, sorry. There could be some exchange of information, or even a gap analysis as to what could be done better, to explore collaborations if possible. So, with us today, and this is going to be open: we'll have some questions, and I'll throw them to all the speakers here, and then you can ask questions or give comments; you can raise your hand. We have with us Franz from GIZ. Then we have David, who is from the Australian Department of Foreign Affairs. We have Zhang Xiao. We have Sarah Armstrong from the ISOC Foundation. We are supposed to have Samia from the World Bank, but I think she's not here yet. We have Ekaterina, who is the commissioner from Georgia. That's about it for now; we may have one or two more colleagues joining for this discussion. So without much ado. Sorry, and I forgot the organizers: Raj Singh is here, who is the CEO of the APNIC Foundation. Sorry, Raj, for that. I don't want to waste much time; of the 60 minutes, I think five have already gone. So my first question to all the panelists would be: are you supporting projects to improve internet access and inclusion? And if yes, what do you perceive, from your perspective, as the two main issues that need to be addressed?
And since David is just next to me, David, you’re the first one.


David Hevey: Thank you very much for that. Sorry about the mic, it keeps cutting out. Thank you for the question. I know it's supposed to be an informal roundtable, but I've got some notes that I've been told to read off as well, so bear with me. Australia is working with its Indo-Pacific regional partners to achieve greater connectivity and inclusive internet access. For example, through our Australian Infrastructure Financing Facility we've been partnering with Pacific Island countries, or PICs as we call them, on investments to support all PICs having primary telecommunications cable connectivity by the end of 2025. That's been a really big focus of that facility; since 2018 we've committed 350 million Australian dollars to it. The facility has also made further investments in end connectivity to build out a secure, resilient, and reliable digital ecosystem, including investment in terrestrial infrastructure in PNG and in other Pacific Island countries as well. So that's the infrastructure aspect covered. In terms of the work we're supporting for secure and inclusive internet access, we also have our 2023-2030 Australian Cyber Security Strategy. In line with that, we've got a capacity-building program across both the Southeast Asia and Pacific regions, focused on cybersecurity, including improving resilience in cybercrime and online scam response. The program has also supported the work of Australia's eSafety Commissioner. And if you haven't already been to the Australia booth here today, please go there; it's koala photo time soon.
But we've also focused on online safety work for digital inclusion. Another important thing, and I'm almost finished, please bear with me, is digital trade. Digital trade has been a key part as well; we recognise it as an important inclusionary tool. So we've been advocating digital trade rules to achieve trust in the online environment, including online consumer protection, but also facilitating cooperation so that trading partners can actually make the most of digital trade. I'll leave it at that there. Challenges, okay. The two main issues or challenges that we think need to be addressed: the first, which I touched on before, is the cyber resilience piece. We've set up a rapid response capability for deploying incident response. We set that up after Vanuatu and Tonga had cyber incidents, to which Australia deployed assistance in late 2022 and early 2023; that's all publicly out there and acknowledged. We saw a genuine need to work with our regional partners to assist with incident response, which is why we set up that facility in the federal budget a couple of years ago. The reason we focus on cybercrime as one of the two priorities is that cybercrime and cyber-enabled crime, as many of us know, are increasing. And because they can impact individuals and small to medium enterprises, they can have a much more profound impact as well. So those are some of the challenges we see. As an example of where we're putting rubber on the road, so to speak, we've partnered with New Zealand and also Identity Care Australia on some trial funding this year for a support service that works with impacted individuals and businesses in PNG and Fiji, delivering tailored cybercrime and online scam response assistance. That gives a measure of what we're trying to prioritise as a key challenge. I'm realising I'm running out of time, but the second main issue we see that needs addressing is prioritising regional coordination. We have so many actors and donors out there; look at us all sitting up on the stage here, thank you for joining me. I used to work on capacity-building assistance programs with DFAT in the Pacific and, for example, you'd have one person who might have three or four different roles and would then have to go to four or five different trainings as well. So there's always a challenge in ensuring that we're not overloading the people we're actually trying to help in country. That's why Australia is working with the Partners in the Blue Pacific, that's Australia, New Zealand, the UK, the US, Canada, Japan, Germany, and Korea, and why, through that partnership, we set up the Pacific Cyber Capacity and Coordination Conference. I'll take a breath after that, apologies. One of the key outcomes of the initial and intersessional meetings was ensuring better coordination. I'll leave it at that. Thank you.


Amrita Choudhury: Thank you so much. Raj, you will come last; I'm not going to ask you to speak right now. So I'll go to Franz. Franz, over to you. GIZ has been doing a lot of work, so is there some synergy? Australia is working a lot on cybersecurity, and David also mentioned regional coordination between donors, et cetera, but overall, over to you.


Franz von Weizsäcker: Definitely. I would be speaking for a very long time if I were to go through our whole project list: we have over 1,000 projects, of which maybe 30% have some relevance to digital transformation. But to focus on the Asia region, you could look at digital inclusion and access to affordable internet at different levels. We obviously don't invest in the telecommunications industry; that's the private sector's job. In some cases we have advisory projects with regulatory bodies, also at the regional level, at the ASEAN level in Jakarta. But what we mainly focus on is a different level of inclusion. We have a very big program in Cambodia as well as in Indonesia on digital inclusion for social safety and security, and that forms the very basis of inclusivity, also in the less developed parts of the country where internet affordability is lower. Generally, in many parts of Asia we have pretty large coverage: a high percentage of the population is in principle reached, but affordability is a major issue. Maybe India is the best positive example, in terms of having a very capable regulator and a very competitive telecommunications industry that allows prices per gigabyte to drop lower than anywhere else in the world. And then there are some negative examples: Central Africa, I think, has the highest price per gigabyte in the world. The reason is that it's not a good investment environment. It's not safe, there's no good rule of law, no good regulatory environment, no good competition, and so on. So that's the very basis of affordable and inclusive internet connectivity. But at GIZ we're also addressing a few of the soft enablers that come on top of that. One is the general inclusivity of society.
That's why we have social insurance programs in Indonesia and Cambodia, and a lot of the other programs focus on institutions for education, both at the level of general education and at vocational and technical training institutes, including those focusing on digital skills in that area. There are also a few regional projects we focus on in Asia. That is the GIZ focus. When it comes to challenges, they exist at many different levels. But if we focus on the core of internet connectivity, the usual challenge is that telecommunications regulation is a very political landscape. This is a billion-dollar business, and of course a lot of interests are involved. How the regulation is shaped influences whether companies can be profitable or not. That is maybe the key challenge. In many cases we have seen rural access funds used for other purposes, and in some cases not resulting in actual connectivity. In any public sector funding you always have silos; that's a very typical situation. That's why we should look at aid effectiveness: there was a big meta-study done by, I think, USAID, which found that aid is much more effective when it is channeled into local organizations, and that the most effective projects are those that allocate a large part of the budget to local organizations rather than having the big international implementers doing everything by themselves. So one approach to coordination is not only the big donors talking with each other, but also the funding lines being dispersed to local organizations. Thank you.


Amrita Choudhury: Thank you for bringing in the regulatory environment and the use of rural access funds. India has done good things, but its rural access fund has not been fully used; universal service obligation funds exist everywhere, yet they are not delivering success. Zhang Xiao, over to you.


Zhang Xiao: Well, actually, this year is a very particular year for China. The internet was introduced into China in 1994, so it has been exactly 30 years, and we can see the number of internet users is huge. We run the registry, but we also do policy research and statistics on internet coverage. The penetration rate is 78%, but if we exclude children, meaning those under 10 years old, in line with the international methodology, the penetration rate is over 91%. It's going to be 1.1 billion users by the end of the month. The mobile penetration rate is nearly 100%, so I just take my smartphone anywhere: I don't need to take a card, I don't need my key, I don't need anything else, I can just take my smartphone. So the internet penetration rate is huge. For .cn, we record data, and from the data we can see a lot of things going on. From my view, there are two challenges for internet inclusion. The first is elderly people. The penetration rate is good and we already have 1.1 billion users, a huge number, but we are entering an elderly society: currently 18.7% of Chinese people are 60 and above, soon to be 20%, and within no more than 20 years, 35% of the total population will be 60 and above. Like my father: he can't use a smartphone well, and there are a lot of smart appliances he can't use well either. He has access, but he can't use it well. That's a big problem for an elderly society, and not just in China; I think in Europe, Japan, and some other countries as well. The second question, I think, is that there are still some 2.6 billion people with no access to the internet. How can we help them? I think we have best practices.
We have a lot of cases to share, and we could call for investment; there is a lot we can do to help them. In China, too, 10% still have no access. But if you look at the reasons why, it's not just about investment in telecoms. It's because they have no digital literacy: they don't recognize the characters, they can't read or write. Another reason is that they have no awareness that it's important; they feel it has nothing to do with their lives. So I think we still have a long way to go. Thank you.

Amrita Choudhury: I have a follow-up question for you. What is CNNIC doing? Is CNNIC playing a role?

Zhang Xiao: Yes, yes. Actually, we operate .cn, and it's huge: we have 20 million registrations. And, without putting it too technically, we also do policy research on the usage rate of the internet. With our data and our research, we support policymaking. For example, how many women are using the internet? If you look at gender, it's quite balanced in China: 49% of users are women and 51% are men. We can also see the age classification. So with these results, we can support policymaking and what we should do next. I think the telecoms and all the government bodies are very interested in it.


Amrita Choudhury: Thank you so much. Sarah, the ISOC Foundation is investing a lot in various projects. So what is it you are investing in, in ICT projects? That's primarily where your focus is. And what do you see as the main challenges at this point?


Sarah Armstrong: Okay. So the Internet Society Foundation is a supporting organization for the Internet Society, and we are responsible for giving grants. We do this throughout the world. In fact, our first operational year was 2020, and from then until now, our fifth operational year, we have distributed over $63 million in funding. We've issued more than a thousand grants, and we are working, or have worked, in 121 countries. Specifically in the APAC region, we've done nearly $5 million at this current time, and we have 37 active grants there. Now, those are some overall statistics about the Internet Society Foundation, but I'd like to give you some specific examples to answer your question about the things we are funding. So, again, we are a funding organization. We work with organizations throughout the world, and we are definitely interested in the issue of connectivity access. We also care a lot about how people can benefit from the internet and how they can be upskilled, in order to learn the things they need to do to increase their economic opportunities and their education. As an example, in Indonesia we have a project called Kota Kita. It's part of our skills program; we have 11 different programs, and I won't go into them all, but I will just say this is a skills program about building digital literacy. They are working with women with disabilities to help them with social enterprises. So that is a growth opportunity, focused on training, that we think is really important. We have many training programs, but that's an example in the APAC region. We have another grantee, the Digital Empowerment Foundation, which works in India. This program is aiming to reach 50,000 people across 100 communities, and they are working with tea tribes in an area of India. They're also doing a resiliency grant.
This is a program where we help communities prepare for disasters that we know will come, so that they're better equipped to cope and ready to get back online and communicate. The small island states are certainly an area we believe in targeting, and we will be doing more of that in 2025 and going forward. We are also funding the Institute of Electrical and Electronics Engineers (IEEE). This is another resiliency grant to help communities prepare for the inevitable disasters and be ready; it is currently aiming to impact 20,000 people directly. The final project to share in the APAC region, and this is not the full set of grants we have, but I wanted to give you an idea of what we are funding: a Beyond the Net large grant providing literacy skill-building to allow citizens to participate in the e-government services provided by the government of Kyrgyzstan. Very important: again, it's not just getting access, it's knowing what to do once you have that access. There's also a research grant where they are creating an open and secure IoT infrastructure for monitoring and preventing emergencies in landlocked mountainous communities. So we're doing a lot in APAC, and a lot throughout the world. We don't do the work ourselves; we fund the work. We are a funder. As for the biggest issues: affordable, meaningful access, and how we can achieve it, we find extremely important. And, as I was saying, training is part of a lot of the programs we do, so that we can be sure people are benefiting to the fullest. Those are the areas we know are challenged. And I guess the biggest challenge, quite frankly, is the need to connect the 2.6 billion people who are still not connected.


Amrita Choudhury: In some places giving grants is difficult; in India, for example, you can't give to many entities, so that is also a challenge. Ekaterina, I will come to you, but first I will talk to the donors we have here. Thank you for joining, and Shadia has come in as well. Samia, if you could share: the World Bank is involved in a lot of things, but what ICT projects are you supporting?


Samia Melhem: Thank you, and it's so good to be here with all our partners and friends from the UN system. The World Bank has been doing a lot on digital, and we are scaling up our products and services. As you know, the World Bank provides financing support either through loans or grants, and for the low and really low income countries, most of the assistance is through what we call IDA grants. We're seeing an unprecedented increase in IDA grants; we have around 100 billion mobilized for this round. And unlike the last 10 years, digital has become a big priority at the World Bank. We've been reorganized, and digital became one big vice-presidential unit on a par with human capital, sustainable development, and infrastructure. So it's really big for us in terms of the attention, the mandate, and the resources being made available to support digital acceleration. For one reason: we are worried about that big digital divide. We are seeing the impact it's having on attaining the SDGs. You know very well that countries adopting digital are much more likely, 45% more likely in fact, to achieve the SDGs on time. We are also seeing the big gap in the job markets: the high-end, high-value jobs are almost monopolized by countries with strong capacity in STEM and digital skills, and that never happens without a strong digital public infrastructure. If these kids grow up with no connection, and by the time they're connected they're 50, they will have missed out on a lot of job opportunities. We're seeing that even more now with AI, which needs a lot of data, a lot of good data, if we want it to be useful and ethical, and it also needs data in all the spoken languages we are talking about, specifically, as mentioned here, in Asia in general and South Asia. So there is really a lot to do, and we are pushing to accelerate. Our first focus, as all my colleagues said, is affordable broadband for all.
The second one is financing, with government and the private sector, not only telecoms but also the digital public infrastructure: the government networks, the digital ID, the shared services, authentication and security, and then the sectoral applications for health, education, transport, security, and so forth. The last one you put so well: digital skills. What is the use of putting in millions and billions if people cannot use it, or if they depend on the North or the West, or whatever we call it, to provide that? Everything everybody said here is really music to my ears, and I completely agree with all the focus on digital skills, building local capacity, and investing in NGOs. Look, as you transfer capacity to a local entity, whether it's a government or a private sector player, are they going to be as good in the beginning as the top consulting firms? No. But they know the local context, they know who does what, they have the local intel that these big firms many times don't have, and the big firms fail just because of that: they have the know-how, but they don't know the context. So I think I have outlined the challenges. If I can focus on just one thing that is very dear to me, which you all mentioned: cooperation at the country level and at the regional level, making sure that we put all that good know-how and financing into coherent pieces, so that one day we'll be sitting here and the 2.6 billion people will have been connected with meaningful access. Thank you.


Amrita Choudhury: So well said. Raj, I'll come to you, and then Shadia. Tell us what the APNIC Foundation is doing and what you see as the challenges.


Rajnesh Singh: Thank you, Amrita, and it's nice to have all of you join this session, so thank you for making time for it. In terms of what the APNIC Foundation does, I'm going to go out on a limb here and say that no one knows the Asia-Pacific better than we do. We've been around for over 30 years. We've built most of the internet infrastructure in the region in some way or form, and we've helped with training, capacity building, and so on. The Foundation itself is, of course, the development arm of APNIC, the Regional Internet Registry. We have the longest-running innovation fund in the region, the Information Society Innovation Fund, which has been running for over 16 years. We have funded programs across inclusion, knowledge, and infrastructure, the three pillars of the program. Whether it's senior citizens, upskilling women, or improving gender diversity in the workforce, we do all of that. We're a small organization, but we do a lot of work. One of the things we like to say is that we are about action rather than words. I'll get to some of the issues and challenges I see. One is something that has concerned me for quite a while, and if you've heard me speak before, I keep repeating the same thing, because hopefully someone is going to listen. Samia, you mentioned some of this, actually. It's the widening digital divides we're creating. So it's not just the digital divides; it's that they are widening. That has to do with infrastructure, with the devices people use, with digital literacy. We can go down through the layers and define where those divides are widening. It even comes down to something very technical: whether an economy, a country, or an organization is using IPv6 or still using IPv4.
So there are many layers to these widening digital divides, because what people have access to and can leverage determines what they can do with the connectivity they have. We talk about how, for example, LEO satellites are changing the landscape. Yes, they are, but there are still challenges with that as well: there are legislative or regulatory issues at play, and there's also an affordability angle. So we shouldn't look at one form of technology as the solution to fix everything; it has to be multi-modal in nature. The second problem, and in fact I'm going to mention three problems here, is the level of prioritization that exists between governments, within governments, and within regions. Unfortunately, very few governments have a whole-of-government approach to digital transformation. Time and again this keeps coming up: you've got an IT ministry, someone sets up a digital ministry, then there's the finance ministry, then there's home affairs or foreign affairs who also want a say. Not having a shot at you, but if you don't have a whole-of-government approach, you're going to be working in silos, as Franz said. And then, of course, the third thing is coordination between donors. There's just so much duplication of work, and it distresses me when I see multiple organizations funding the same thing that someone else has already done. Yes, you've got to tick off some KPIs or tick some boxes on your delivery, but if you want to bring about holistic, transformative change, you have to consider what's already out there and where you can plug the gaps. That's what the APNIC Foundation does: we're more interested in plugging the gaps. So if any of you want to come help and support us and plug gaps with us, please talk to us. Thanks, Amrita.


Amrita Choudhury: Thanks so much, Raj. Coordination is something I'm hearing everywhere, in most of these conversations. And Shadia, over to you.


Panelist: Thank you. Thank you very much. I think my job has been easier by Raj and by Samia and ISOC. So we do a combination of what has been said, but comparing ourself, Islamic Development Bank as… can be seen as a smaller version of what the World Bank does. So we do financing of digital development. Our main objective, of course, we are more on the digital inclusion side and we have digital inclusion strategy that was launched last year for four years where we have four key areas that we focus on. So first focus is basically on smart policy and that we build this. So without private sector intervention, there could not be either bridging the digital divide or we talk about widening the digital divide. And now we are hearing in this conference that we are having the AI divide pretty soon that we will be able to monitor and capture and see. The other aspect that we cover is, of course, the capacity building, which is digital literacy of not only policy makers, but end users as well. So that once they have the internet connectivity provided, the use of these services. Then, of course, we have one of the aspect that we have is traditional. Well, what we have been doing is financing of enabling digital infrastructure so that the country has the capacity so that we do work on the upstream side, financing submarine cables, fiber optic backbone. We have done several of them in Southeast Asia, East Africa, West Africa. And last but not least, what we have really focusing very recently is mainstreaming of technology into developed sectors like smart education, how we can use technology and education, telemedicine and health services, e-agriculture, smart cities. So that the idea is we make our development operations more. effective, more impactful, and last but not least, more sustainable. Because of low resources, this is a challenge that we face. We don’t have the luxury of the lot of grants that the World Bank has, so we have very limited amount of grants. 
We have limited amount of financing. And of course, our member countries, which are the Global South, so they also have challenges in borrowing. So we have to come up with innovative financing instruments of blended finance so that it is affordable for the country to absorb this financing. And then of course, make – so that’s why we insist on developing a business case so that it’s even commercially viable, so that it is self-sustainable. And then basically, once we have an exit policy, once we finish the funding, what is the model that will sustain the whole operation of this intervention that we do? I’ll share – sorry for taking a bit longer – is that we recently came up with – just three weeks ago, we did a policy digital – for Africa region partnership with ITU. And we – actually, this program was never done before. We co-designed it because what we are trying to promote is we are promoting a concept of government ownership. So we encourage policymakers to come up with innovative programs that an international financing institution could finance to help bridge the digital divide. So otherwise, these innovative solutions we expect normally from the private sector, from SMEs, but what we are trying to promote is government ownership that we encourage policymakers that we help you, we will help you to have the right capacity so that you think out of the box and come up with programs that will help us bridge the digital divide correctly. We are doing a similar program with UNDP. We call it digital stewardship community. We are coming up with a community of policymakers who have the digital capacity. 
So we not only run capacity-building programs for them; they in turn encourage other policymakers within their ministries to come up with projects, which we help them polish, and ultimately, when we do the MCPS, the member country partnership strategy and country engagement framework, those policymakers are invited to share what they have learned. So we are not only doing capacity-building programs, we are hand-holding them to come up with larger projects, and then either we or our partners come in and finance those interventions. In Indonesia, we are working with the Ministry of Villages, financing an AI tool that will help them build the capacity to do their infrastructure service delivery planning. If you are aware, villages in Indonesia hold community engagements to set up a yearly plan. Initially this was manual, but it has now been converted into a service innovation platform, and we are financing the AI-embedded tool on that platform, which will utilize the available information so that every participatory stakeholder can give their input and we can arrive at better infrastructure services, be it water, roads, or energy supply in the villages. These are a couple of examples of what we are working on, and we are happy to collaborate with others, because we will be replicating this in other countries and scaling it within countries as well. Thank you.


Amrita Choudhury: Thank you so much, Sharir. I’ll go to Katrina. You’ve been hearing the donor community speaking a lot. From a government perspective, what do you see as the gaps when projects are being run in a country? From your side, the other side, what are the gaps?


Panelist: Thank you so much. I think this is the perfect moment for me, because I will give you another perspective, from the implementation side, of how things evolved, and also take you to another part of the world, the South Caucasus, where we work closely with the World Bank and other donor organizations like ISOC, GIZ, and the European Investment Bank. Let me mention the major state program supported by the World Bank, the State Broadband Development Program, which is bringing high-speed broadband to the villages of Georgia. We are a small but mountainous country, and we still have a rural-urban gap. We all agree that today’s economy and sustainable economic development are absolutely unthinkable without citizens having access to affordable, high-speed, high-quality internet. So we were supported to build the middle-mile connectivity: 5,000 kilometers of broadband, with up to 1,000 villages to be covered. When we speak about the challenges, the challenge for the regulator and the state is that the funded project should still maintain the competitiveness of the market. The players might tell you that if you have enough funds to simply give subsidized internet to all the villages, they might step back and leave the market. So one of the challenges is to be very precise about where the white spots are where you definitely need this funding, so that it will not undermine competitiveness overall, so that it helps the country evolve toward a more competitive digital ecosystem, and so that it does not scare off investors in innovative technologies, for example. But most important for the state is to protect your citizens and give them affordable access to high-speed internet. So this was the first step. The second important step, where the role of ComCom, the regulator, was also broader, is the second component of the project.
The first component was the infrastructure build-up, and the second was the literacy component: bringing awareness of how to use the internet. That is another challenge, because you need to go to each small village, find the community, and make people confident that it is really worth hearing how they can use the internet for economic benefits, to grow their business or their household. You also need to reach out to people from different ethnic groups; we are a small country, but we have different ethnic groups. So it was ComCom’s role to find the right communities in the regions and villages to reach out to and start these media literacy trainings, and ComCom has been given the nationwide role of developing media literacy in the country. With this broader state-defined role we were empowered: we were supported by the Ministry of Education to work with schools and universities, and also to reach out to people with disabilities, for whom access to digital technologies is most important. Nowadays it is a big enabler, I would say, for people with disabilities to be reached and to learn how to turn this access into a benefit for their lives, into improving their next day. It is also very popular in Georgia, for example, for micro and small businesses to really understand how they can digitize their services and use this high-speed broadband for their day-to-day activities. So I think the challenges are very similar around the world, and let’s keep going with this strategy. Thank you so much.


Amrita Choudhury: Thank you so much. There is a question all the speakers here need to think about, but we will also take some questions from the floor. I’m coming to you. The question I would like you to respond to later is: what are the two areas you will be focusing on from 2025 and beyond? But we have some questions first. We’ll take two or three together and then you can respond.


Audience: Sorry, this is more of a comment than a question. It’s okay. I’m from the Internet Society Foundation, and going back to your comments about not being able to fund India, and Raj’s point about the need for cooperation and peer dialogue among funders: one challenge, working at the foundation, is not being able to fund a lot of the people you want to fund, which is something we recently experienced with Georgia. So I would just ask, or put out there, whether there are ways, as funders and organizations, to work within legal frameworks but also find ways to support people in countries where they might not be able to receive funding legally from other sources. Just being able to share ideas, or maybe partners that you work with that you know are good in the region.


Amrita Choudhury: Thank you so much. So again, it’s about coordination, better coordination, and how you can use it. Yes, please.


Jordanka Tomkova: Hi, Jordanka Tomkova from Innova Bridge Foundation, Switzerland. I’m curious about the innovation in the interventions you have made and the funding you are providing. Do you have any good examples of innovative approaches, whether centralized or decentralized, since that is the title of today’s talk? So not what has been done and is standardly done, but what is innovative about it. Thank you.


Amrita Choudhury: Thank you so much. Any other questions from the room? If not, I will ask: do any of the speakers want to speak about innovative approaches to funding, and whether there can be more exchange within the community here, sharing best practices on how to reach more communities who need funds but whom you currently cannot reach because of regulatory challenges? Anyone want to take a stab?


Samia Melhem: I’ll be very quick, because yes, innovation is a very good question. I think the first innovation is in the approach: how do you really get to those who have no access? There is no way to reach them, you cannot even call them, and many don’t even have an ID. So the idea is to use local NGOs and the community. We have digital ambassadors in most of our projects, thousands of them, hand-picked, whom we empower, train, and compensate in different ways, and they are from the areas that are newly connected. We have that in Congo, in Zambia, in Bangladesh, in Pakistan. So really using the youth on the ground, which we didn’t do in the past. The other element is to have more participation and crowdsourcing in the design of projects. These projects are getting bigger, and they really need the participation of the stakeholders; so, more design thinking as we plan these projects. Last but not least, and it’s not an innovation, but really working more amongst one another and with the private sector to understand where the jobs are, and helping universities, academic institutes, and learning institutions reform their supply of programs to really align with the job market.


Rajnesh Singh: Yeah, thanks Amrita. Samia covered some of the approaches they take, which I think some of us take as well. One thing I do want to bring up, just to point out the elephant in the room: a lot of the reason we can’t deploy funds in certain economies is a thing called sanctions, and another thing called geopolitics. That has to be taken into account, because what we find is that where the greatest intervention is required, where we could have the most impact, are precisely those economies, and the people in them are suffering not because of their own doing; it may be the political system or whatever else exists in that economy. So how do you ensure that we can help the people who need the most help? That has been one of the challenges we’ve had to face. Our approach has been to find partners who can actually go and work in those economies. Sometimes we can go ourselves, and sometimes we are limited by government sanctions and/or legislation, but there are ways that can be addressed as well. What concerns me, though, is that we don’t seem to talk about this enough, because we want to do things here and there and everywhere else, but the people who could benefit the most from digital technologies and all the benefits the internet can provide, sometimes we just don’t go there, because we can’t even send money there, or we can’t send people because there is a security issue, for example. So I think there is scope to work together to find innovative approaches to address that issue, and I’m happy to discuss it further, given that we are out of time.


Amrita Choudhury: Thank you. I know Sarah is going to go next. Raj would not say it, so I’ll say it: Afghanistan, for example, is one place where putting in money is difficult for most, but they need it the most, women especially. Sarah, you, and then Shadia.


Sarah Armstrong: Yes, just briefly. Some of the examples I mentioned here, we feel, are really putting forth innovative solutions. The organization I mentioned earlier that is using IoT to help detect emergencies in the climate area, that’s our CURSIC-SAN chapter; I think that’s a really good example of being innovative. We also have the IEEE working in India on unintended solutions, and we are seeing elements of that. And in order to continue finding innovative solutions, we are encouraging our grantees to learn from one another, to see what others are doing and how they are doing it, and how it can be more innovative and responsive to the cultures in which they are working. That’s a very important part of the type of foundation that we are and would like to continue to be.


Panelist: Yeah, maybe just one thing that I wanted to share. There is a new method that we came up with very recently. Traditionally, the bank has operated by receiving a request from a member country and then addressing the needs. When we were developing our digital inclusion strategy, which was launched here in Riyadh last year, we actually went through a very detailed consultative process of over one and a half years, involving policymakers from 14 member countries and 10 international organizations. I can see Mr. Sharif sitting here, and he was part of that discussion. We had something called the IsDB Digital Inclusion Technical Working Group, and we brought different policymakers together for in-person and hybrid workshops in different parts of our member country constituency. Fast forward one year: when we came up with the strategy, we said, okay, we have a strategy, but what would be a catalyst to embark on immediate programs that would kick-start things on the ground? So we came up with the Digital Inclusion Strategic Partnership Program, where we encouraged all the partners who were with us throughout the journey of developing the strategy to propose pilot programs that would meet some scalability criteria, or have the appetite for scale, and that would encourage more people, more international organizations, and even the private sector to come in and finance the scaling-up. So we are now financing four pilot programs: a smart village in Pakistan, a pilot program in Indonesia, one in the Maldives, and work with different partners on digital ID. The aim is to de-risk things for the government: if we can show some immediate results with a known amount of financing, that will encourage potential financiers or investors to come in and fill these gaps.
These are some of the innovative mechanisms, in addition to the one I already mentioned, blended finance: we combine a grant portion with some form of soft lending, especially for low-income countries that otherwise would not be able to afford the financing for the immediate needs where they would like to have an impact. This is what I wanted to share. Thank you.


Amrita Choudhury: I do have one question, and I see a five-minute alert. There is one question; I have to give it to her, she’s from Myanmar. Then I will come back to the last question: what can we do next? One minute for each, or even half a minute, so please be quick.


Audience: Yeah, I just wanted to agree with Raj, and thank you for raising the Vanuatu community, as we are suffering from the loss of internet as well. That is also a challenge for communities like Myanmar and Afghanistan in receiving the funding to build capacity in our communities. I really would like to request all of the grantors and funders to think about sometimes breaking through to see the community, not the sanctions or the geopolitical situation. Thank you very much.


Amrita Choudhury: Thank you. Very relevant point. So my question to all of you would be: what do we do next? We’ve discussed it; we say coordination is important. But what should be our next steps if we really want to take this discussion ahead? Anyone want to take a jab first? I have the mic.


Panelist: So what we have done, in addition to what I’ve just mentioned, is come up with a pipeline of 72 programs with the potential for over $1 billion worth of digital development projects. Of course, we ourselves cannot finance all of it, so we are encouraging partners to come in and chip in on the development of these programs. The idea is that over the next three years we will not only do capacity building for the policymakers and the countries themselves, but will help them go from a concept note to a bankable project that would ultimately be available for financing by either the private sector or any MDB. There are a certain number of programs that we need to seed-fund, and we look forward to those kinds of interventions and are happy to collaborate and work together towards achieving collective goals. Thank you.


Panelist: Okay, thank you so much. I’ll try to be very, very quick. I want to mention one step that will still be ongoing: implementing media and digital literacy in formal education. This is one of the components that is crucial for the country, and I think it will close the circle of the whole broadband development success story, with the literacy component and the safe use of the internet, and we will make it even broader in schools and universities. A second topic on our agenda, since digital is a cross-cutting stream, is to involve more sectors. When you bring broadband infrastructure and literacy, you also need to bring in other economic stakeholders to make a real success story out of your investment. Thank you.


Franz von Weizsäcker: All right. On what to do next: for GIZ, as a big organization, our answer is that we take our decisions very decentrally; probably 90% of our budgets are decided by people who live and reside in the country where the work is being implemented. That is one of the answers on innovation, because when you want to source good and applicable innovations, that needs to happen locally, and that is also where most of the coordination should happen going forward. Another recommendation for all the granting mechanisms and calls for proposals: one important piece of feedback, which grantees very much appreciated, is to keep all these processes very simple. Have the pitch done on one sheet of paper, then hold a subsequent pitching session, and do not overburden grantees with bureaucracy. That is the best way to be effective, to get good value for money, and to source the real innovations, not just the organizations that are very good at checking all the boxes of all the donors. So don’t attach too many strings; rather, make it simple and fit for purpose.


Rajnesh Singh: So I’ll just repeat what I said before: don’t duplicate. If you want to do something in the Asia-Pacific, come and talk to us, come and talk to me. Thank you. G’day.


Sarah Armstrong: Oh, again? No, again, please. Pleased to meet you. Thanks. Okay. Just to add on: absolutely, we are continually looking for ways to improve the foundation and the way in which it works. It’s five years old, and we are just launching our next five-year strategy, so we are full of new ideas. We know more simplicity for our grantees is extremely important, so we’re finding ways to do that. We are also looking for more gender focus; we think the digital gender divide is an important thing for us to be addressing, so that’s another area we’ll be focusing on going into the future. And we will continue to look for innovative ways, because some of the environments in which we’re working are very difficult, as we’ve talked about, and to see whether we can identify grantees able to come up with solutions to some of the problems we’ve encountered. We are going to move forward with a lot of enthusiasm and possibly a fair amount of change.


David Hevey: Thank you. Beyond what I already said about focusing on capacity building for cybersecurity and cybercrime, one honorable mention: with data being critical in a number of things, as my colleague from the World Bank said before, one thing we are looking forward to is supporting cloud transformation roadmaps for countries, particularly PNG and Vanuatu. So that is something we also focus on. Taking stock of things is also really important; the monitoring and evaluation piece really matters. It’s all well and good that we have these approaches that have worked, and that we’re continuing to innovate, but we must ensure that what we’re doing actually hits the mark and does what we need it to do. And also, having been in Foreign Affairs and worked with the APNIC Foundation, I support Raj’s plug to partner with the APNIC Foundation. There we are.


Zhang Xiao: Yeah. Personally, I would like to focus on AI governance, because digitalization is a process and the internet is the foundation, but AI is going to change every field, more or less. So I want this talk, this dialogue, to continue, because as a group of people we are going to make a difference. Thank you.


Samia Melhem: Thank you. Yes, you’ve said it all, but let me complement it with two actions. The first is for big organizations: the Saudi government, for example, wants to fund a lot of these projects; how do we make sure it gets done and approved without being too complicated for NGOs, et cetera? So make partnership much easier. And second, on the client side, with governments: we oftentimes start big projects but they are not completed. We do an ID system, and 10% of the people of that country are in it; what about the other 90%? Make it easy to have follow-up projects to complete what we started and keep perfecting it. Thank you.


Amrita Choudhury: Thank you so much. We will be ending here, but I think this dialogue, as Xiao mentioned, needs to continue. And may I, as a moderator, suggest that perhaps there could be a mailing list of some kind where like-minded donors could share experiences? The World Bank, for example, has tremendous knowledge, like what you shared about projects not being completed, or about making things easier for hosts; the bank has ideas, governments have ideas. Could there be some kind of mailing list where you share these experiences, so that, as was said, projects are not duplicated and the knowledge that is there can be used? For example, if GIZ is making a simpler form, could others look at it as an inspiration? I’m not saying copy-paste it. So perhaps someone can think of setting up a mailing list. I don’t know, Raj, would the APNIC Foundation want to host one, if others want, as an informal way for you all to exchange, so that this discussion can continue and other people can join it or not? That’s just a suggestion; you can take it or leave it. And we would like to have a group photo, first of the speakers. Thank you so much, it was really good. We would have loved a 90-minute session, actually, but we ran out of time. Could we get just three minutes to take a photograph of the speakers and then of the group? Thank you. Thank you.



David Hevey

Speech speed: 163 words per minute
Speech length: 1063 words
Speech time: 390 seconds

Cyber resilience and cybercrime response

Explanation

David Hevey emphasizes the importance of addressing cyber resilience and cybercrime response. He highlights these as key challenges in improving internet access and inclusion.


Evidence

Australia has set up a rapid response facility for deploying incident response after cyber incidents in Vanuatu and Tonga. They are also partnering with New Zealand and Identity Care Australia to provide cybercrime and online scam response assistance in PNG and Fiji.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Infrastructure investments in Pacific Island countries

Explanation

David Hevey discusses Australia’s investments in infrastructure to improve connectivity in Pacific Island countries. This includes both submarine cable connectivity and terrestrial infrastructure.


Evidence

Australia has committed 350 million Australian dollars since 2018 to support all Pacific Island countries having primary telecommunications cable connectivity by the end of 2025.


Major Discussion Point

Current Projects and Initiatives


Need for regional coordination among donors

Explanation

David Hevey emphasizes the importance of coordination among donors to avoid overloading recipient countries. He points out the challenge of multiple organizations providing similar trainings or assistance.


Evidence

Australia is working with partners of the Blue Pacific to set up the Pacific Cyber Capacity and Coordination Conference to improve coordination.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

Rajnesh Singh


Franz von Weizsäcker


Agreed on

Need for better coordination among donors


Cloud transformation roadmaps for countries

Explanation

David Hevey mentions supporting cloud transformation roadmaps for countries as a future focus area. This initiative aims to help countries modernize their digital infrastructure.


Evidence

He specifically mentions plans to support cloud transformation roadmaps for PNG and Vanuatu.


Major Discussion Point

Future Focus Areas



Franz von Weizsäcker

Speech speed: 139 words per minute
Speech length: 818 words
Speech time: 352 seconds

Affordability of internet access

Explanation

Franz von Weizsäcker highlights affordability as a major challenge in internet access and inclusion. He notes that while coverage may be high in many parts of Asia, affordability remains a significant issue.


Evidence

He contrasts India’s competitive telecommunications industry and low prices with Central Africa, which has the highest price per gigabyte globally due to a poor investment environment and regulatory issues.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Social insurance and digital skills programs in Southeast Asia

Explanation

Franz discusses GIZ’s focus on social insurance and digital skills programs in Southeast Asia. These programs aim to improve digital inclusion and skills development.


Evidence

He mentions specific programs in Cambodia and Indonesia focused on digital inclusion for social safety and security.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Sarah Armstrong


Zhang Xiao


Samia Melhem


Agreed on

Importance of digital literacy and skills development


Importance of funding local organizations

Explanation

Franz emphasizes the importance of channeling aid through local organizations for greater effectiveness. He suggests that local organizations have better understanding of the context and local intelligence.


Evidence

He references a meta-study by USAID which found that aid is much more effective when channeled into local organizations.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

David Hevey


Rajnesh Singh


Agreed on

Need for better coordination among donors


Simplifying grant processes and sourcing local innovations

Explanation

Franz recommends simplifying grant processes and focusing on sourcing local innovations. He suggests that this approach leads to more effective and innovative solutions.


Evidence

He advises keeping processes simple, such as using one-page pitches and subsequent pitching sessions, to avoid overburdening grantees with bureaucracy.


Major Discussion Point

Future Focus Areas



Zhang Xiao

Speech speed: 153 words per minute
Speech length: 664 words
Speech time: 258 seconds

Digital literacy for elderly populations

Explanation

Zhang Xiao identifies digital literacy for elderly populations as a significant challenge. He points out that while internet penetration is high in China, many elderly people struggle to use digital technologies effectively.


Evidence

He mentions that 18.7% of Chinese people are over 60, expected to reach 35% in the next 20 years, and many struggle with using smartphones and smart appliances.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Agreed with

Franz von Weizsäcker


Sarah Armstrong


Samia Melhem


Agreed on

Importance of digital literacy and skills development


AI governance

Explanation

Zhang Xiao expresses a desire to focus on AI governance in the future. He sees AI as a transformative technology that will impact various fields.


Major Discussion Point

Future Focus Areas



Sarah Armstrong

Speech speed: 141 words per minute
Speech length: 1031 words
Speech time: 437 seconds

Affordable and meaningful access

Explanation

Sarah Armstrong emphasizes the importance of affordable and meaningful access to the internet. She highlights this as a key challenge in improving internet inclusion.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Digital literacy programs for women with disabilities

Explanation

Sarah Armstrong discusses the Internet Society Foundation’s support for digital literacy programs, particularly those targeting women with disabilities. These programs aim to improve digital skills and economic opportunities.


Evidence

She mentions a project in Indonesia called Kota Kita that works with women with disabilities to help them with social enterprises.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Franz von Weizsäcker


Zhang Xiao


Samia Melhem


Agreed on

Importance of digital literacy and skills development


Gender focus and addressing the digital gender divide

Explanation

Sarah Armstrong mentions a future focus on addressing the digital gender divide. The foundation plans to increase its emphasis on gender-focused initiatives.


Major Discussion Point

Future Focus Areas



Rajnesh Singh

Speech speed: 191 words per minute
Speech length: 1023 words
Speech time: 319 seconds

Widening digital divides across multiple layers

Explanation

Rajnesh Singh expresses concern about widening digital divides across various layers, including infrastructure, devices, digital literacy, and technical aspects like IPv6 adoption. He emphasizes that these divides determine what people can do with their connectivity.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Lack of whole-of-government approach to digital transformation

Explanation

Rajnesh Singh points out the lack of a whole-of-government approach to digital transformation in many countries. He argues that this leads to working in silos and ineffective implementation of digital initiatives.


Evidence

He mentions the existence of separate IT ministries, digital ministries, finance ministries, and other departments working on digital transformation without proper coordination.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Sanctions and geopolitics limiting aid to certain economies

Explanation

Rajnesh Singh highlights how sanctions and geopolitics limit the ability to provide aid to certain economies. He points out that often the areas most in need of intervention are those affected by these limitations.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Agreed with

Audience


Agreed on

Challenges in funding certain countries due to regulations


Innovation fund for inclusion, knowledge and infrastructure

Explanation

Rajnesh Singh mentions APNIC Foundation’s Information Society Innovation Fund, which has been running for over 16 years. The fund supports programs across inclusion, knowledge, and infrastructure.


Evidence

He mentions that the fund has supported programs for senior citizens, upskilling women, and improving gender diversity in the workforce.


Major Discussion Point

Current Projects and Initiatives


Duplication of work among donor organizations

Explanation

Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.


Evidence

He mentions seeing multiple organizations giving funding to do the same thing that someone else has already done.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

David Hevey


Franz von Weizsäcker


Agreed on

Need for better coordination among donors


Samia Melhem

Speech speed

157 words per minute

Speech length

937 words

Speech time

356 seconds

Digital public infrastructure and sectoral applications

Explanation

Samia Melhem discusses the World Bank’s focus on financing digital public infrastructure and sectoral applications. This includes government networks, digital ID, shared services, and applications in health, education, and other sectors.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Franz von Weizsäcker


Sarah Armstrong


Zhang Xiao


Agreed on

Importance of digital literacy and skills development


Completing and perfecting started projects

Explanation

Samia Melhem emphasizes the importance of completing and perfecting started projects. She points out that many projects are not completed or only partially implemented.


Evidence

She gives an example of an ID system where only 10% of the country’s population is included, questioning what happens to the other 90%.


Major Discussion Point

Future Focus Areas


Panelist

Speech speed

148 words per minute

Speech length

2231 words

Speech time

900 seconds

Maintaining market competitiveness while funding rural access

Explanation

The panelist discusses the challenge of maintaining market competitiveness while funding rural internet access. They emphasize the need to balance government-funded projects with maintaining a competitive market environment.


Evidence

The panelist mentions that players might step back from the market if the government provides funded internet to all villages, potentially reducing overall market competitiveness.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Broadband development program in rural Georgia

Explanation

The panelist discusses a state broadband development program in Georgia, supported by the World Bank. The program aims to bring high-speed broadband to rural and mountainous areas of the country.


Evidence

The program involves building 5,000 kilometers of broadband infrastructure to cover up to 1,000 villages.


Major Discussion Point

Current Projects and Initiatives


Implementing digital literacy in formal education

Explanation

The panelist emphasizes the importance of implementing digital literacy in formal education as a future focus area. This approach aims to create a comprehensive digital literacy program integrated into the education system.


Evidence

They mention plans to make digital literacy broader in schools and universities, including components on safe internet use.


Major Discussion Point

Future Focus Areas


Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Developing partnerships for financing digital projects

Explanation

The speaker discusses the development of partnerships for financing digital projects. This approach aims to leverage resources from multiple partners to fund and implement digital development initiatives.


Evidence

The speaker mentions having a pipeline of 72 programs with the potential for over $1 billion worth of digital development projects, encouraging partners to contribute to the development of these programs.


Major Discussion Point

Coordination and Collaboration Among Donors


Audience

Speech speed

154 words per minute

Speech length

248 words

Speech time

96 seconds

Challenges in funding certain countries due to regulations

Explanation

An audience member raises the issue of regulatory challenges in funding certain countries. This highlights how legal and political factors can limit the ability of donors to support digital development in some regions.


Evidence

The speaker mentions difficulties in funding projects in countries like Myanmar and Afghanistan due to sanctions or geopolitical situations.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

Rajnesh Singh


Agreed on

Challenges in funding certain countries due to regulations


Agreements

Agreement Points

Importance of digital literacy and skills development

speakers

Franz von Weizsäcker


Sarah Armstrong


Zhang Xiao


Samia Melhem


arguments

Social insurance and digital skills programs in Southeast Asia


Digital literacy programs for women with disabilities


Digital literacy for elderly populations


Digital public infrastructure and sectoral applications


summary

Multiple speakers emphasized the importance of digital literacy and skills development programs, targeting various groups including women with disabilities, elderly populations, and general workforce development.


Need for better coordination among donors

speakers

David Hevey


Rajnesh Singh


Franz von Weizsäcker


arguments

Need for regional coordination among donors


Duplication of work among donor organizations


Importance of funding local organizations


summary

Several speakers highlighted the need for better coordination among donors to avoid duplication of efforts, overloading recipient countries, and to ensure more effective use of resources.


Challenges in funding certain countries due to regulations

speakers

Rajnesh Singh


Audience


arguments

Sanctions and geopolitics limiting aid to certain economies


Challenges in funding certain countries due to regulations


summary

Both Rajnesh Singh and an audience member raised concerns about regulatory challenges and sanctions limiting the ability to fund digital development projects in certain countries.


Similar Viewpoints

These speakers all emphasized the importance of affordable and accessible internet infrastructure, particularly in developing regions.

speakers

David Hevey


Franz von Weizsäcker


Sarah Armstrong


arguments

Infrastructure investments in Pacific Island countries


Affordability of internet access


Affordable and meaningful access


Both speakers highlighted concerns about digital divides, particularly focusing on how certain populations (such as the elderly) may be left behind in digital adoption.

speakers

Rajnesh Singh


Zhang Xiao


arguments

Widening digital divides across multiple layers


Digital literacy for elderly populations


Unexpected Consensus

Importance of local context and organizations in project implementation

speakers

Franz von Weizsäcker


Rajnesh Singh


Samia Melhem


arguments

Importance of funding local organizations


Lack of whole-of-government approach to digital transformation


Completing and perfecting started projects


explanation

There was an unexpected consensus on the importance of understanding and working within local contexts, involving local organizations, and ensuring projects are completed and effective within specific country environments. This highlights a shift from top-down approaches to more locally-driven development strategies.


Overall Assessment

Summary

The main areas of agreement included the importance of digital literacy and skills development, the need for better donor coordination, addressing affordability and accessibility of internet infrastructure, and recognizing the challenges posed by regulations and sanctions in certain countries.


Consensus level

There was a moderate to high level of consensus among the speakers on these key issues. This consensus suggests a growing recognition of the complex, multifaceted nature of digital development challenges and the need for collaborative, locally-sensitive approaches. The implications of this consensus could lead to more coordinated efforts among donors, increased focus on digital literacy alongside infrastructure development, and potentially new strategies for overcoming regulatory barriers in challenging environments.


Differences

Different Viewpoints

Approach to funding and implementation

speakers

Franz von Weizsäcker


Rajnesh Singh


arguments

Franz emphasizes the importance of channeling aid through local organizations for greater effectiveness. He suggests that local organizations have better understanding of the context and local intelligence.


Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.


summary

While Franz advocates for channeling aid through local organizations, Rajnesh expresses concern about duplication of work among donor organizations. This suggests a difference in approach to funding and implementation of projects.


Overall Assessment

summary

The main areas of disagreement appear to be around the approach to funding and implementation of projects, with some speakers advocating for local involvement and others focusing on coordination among larger donor organizations.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers seem to agree on the major challenges and goals, with differences mainly in the specific approaches or areas of focus. This suggests that there is potential for collaboration and coordination among the various organizations represented, which could lead to more effective interventions in improving internet access and inclusion.


Partial Agreements

Partial Agreements

Both speakers agree on the need for better coordination among donors, but they approach it from different angles. David focuses on avoiding overloading recipient countries, while Rajnesh emphasizes avoiding duplication of work.

speakers

David Hevey


Rajnesh Singh


arguments

David Hevey emphasizes the importance of coordination among donors to avoid overloading recipient countries. He points out the challenge of multiple organizations providing similar trainings or assistance.


Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.



Takeaways

Key Takeaways

There are significant challenges in improving internet access and inclusion, including cybersecurity issues, affordability, digital literacy gaps, and widening digital divides.


Donor organizations are implementing various projects to address these challenges, focusing on infrastructure development, digital skills training, and sector-specific applications.


Better coordination and collaboration among donors is needed to avoid duplication of efforts and maximize impact.


Future focus areas should include AI governance, addressing the digital gender divide, implementing digital literacy in formal education, and simplifying grant processes.


Resolutions and Action Items

Explore ways to improve coordination and information sharing among donor organizations


Consider creating a mailing list for donors to share experiences and best practices


Focus on completing and perfecting started projects rather than initiating new ones


Simplify grant processes and funding mechanisms to make them more accessible


Unresolved Issues

How to effectively provide aid to countries affected by sanctions or geopolitical issues


Balancing government-funded projects with maintaining market competitiveness


Addressing the challenges of funding certain countries due to regulatory restrictions


Finding innovative ways to reach and support underserved communities in difficult environments


Suggested Compromises

Using local NGOs and community-based organizations to reach areas where direct funding is challenging


Implementing blended finance approaches to make projects more affordable for low-income countries


Balancing centralized decision-making with decentralized implementation to foster local innovation


Partnering with local organizations and existing regional experts (like APNIC Foundation) to leverage local knowledge and networks


Thought Provoking Comments

We shouldn’t just look at one form of technology as the solution to fix everything. It has to be multi-modal in nature.

speaker

Rajnesh Singh


reason

This comment challenges the tendency to view new technologies like LEO satellites as a panacea, highlighting the need for diverse, context-appropriate solutions.


impact

It broadened the discussion beyond specific technologies to consider more holistic approaches to digital inclusion.


The widening digital divides we’re creating. So it’s not just the digital divides. It’s the widening digital divides we’re creating.

speaker

Rajnesh Singh


reason

This insight highlights how current approaches may be exacerbating inequality rather than reducing it, forcing a critical examination of existing strategies.


impact

It shifted the conversation to focus more on the unintended consequences of digital development efforts and the need to address root causes of inequality.


Look as you transfer capacity to a local entity whether it’s a government or a private sector are they going to be as great in the beginning than the top consulting firms? No, but they know the local context, they know who does what, they have the local intel that many times these big firms don’t have and they fail just because of that.

speaker

Samia Melhem


reason

This comment challenges the conventional wisdom of relying on external expertise, emphasizing the value of local knowledge and capacity building.


impact

It prompted discussion on more sustainable, locally-driven approaches to digital development and capacity building.


A lot of the reasons we can’t deploy funds in certain economies is due to a thing called sanctions and due to another thing called geopolitics.

speaker

Rajnesh Singh


reason

This comment brings attention to often-overlooked political barriers to digital inclusion efforts, highlighting systemic challenges beyond technological or financial constraints.


impact

It broadened the scope of the discussion to include geopolitical factors and prompted consideration of how to work within or around these constraints.


Don’t put too many strings attached, but rather make it simple and make it fit for purpose.

speaker

Franz von Weizsäcker


reason

This insight challenges conventional grant-making processes, suggesting that overly complex requirements may hinder innovation and effectiveness.


impact

It sparked discussion on how to streamline funding processes to better support local innovation and implementation.


Overall Assessment

These key comments shaped the discussion by challenging conventional approaches to digital inclusion and development. They broadened the conversation beyond technological solutions to consider geopolitical factors, local capacity building, and the unintended consequences of current strategies. The discussion shifted towards more nuanced, context-specific approaches that prioritize local knowledge and simplify implementation processes. This led to a more critical and holistic examination of digital inclusion efforts and their impacts.


Follow-up Questions

How can donors work within legal frameworks to support people in countries where they might not be able to receive funding legally from other sources?

speaker

Audience member from Internet Society Foundation


explanation

This is important to find ways to support communities in need despite regulatory challenges or sanctions.


What are some examples of innovative approaches in interventions and funding, whether centralized or decentralized?

speaker

Jordanka Tomkova from Innova Bridge Foundation


explanation

Understanding innovative approaches can help improve the effectiveness and reach of development projects.


How can we address the challenge of deploying funds in economies affected by sanctions or geopolitical issues?

speaker

Rajnesh Singh


explanation

This is crucial for helping people who need the most assistance but are limited by political circumstances beyond their control.


How can we improve coordination and information sharing among donors and implementers?

speaker

Multiple speakers (Rajnesh Singh, Samia Melhem, Franz von Weizsäcker)


explanation

Better coordination can reduce duplication of efforts and improve overall effectiveness of development projects.


How can we simplify grant application processes to make them more accessible to local organizations?

speaker

Franz von Weizsäcker


explanation

Simpler processes can help source real innovations and support organizations that may not have extensive resources for complex applications.


How can we address the digital gender divide?

speaker

Sarah Armstrong


explanation

Focusing on gender-specific issues in digital inclusion is important for ensuring equitable access and opportunities.


How can we support cloud transformation roadmaps for countries, particularly in PNG and Vanuatu?

speaker

David Hevey


explanation

This is important for helping countries modernize their digital infrastructure and services.


How can we improve monitoring and evaluation of digital development projects?

speaker

David Hevey


explanation

Effective monitoring and evaluation is crucial for ensuring that projects are achieving their intended outcomes and for continuous improvement.


How can we address AI governance in the context of digital development?

speaker

Zhang Xiao


explanation

As AI becomes more prevalent, understanding and managing its impacts on various fields is crucial for sustainable and ethical development.


How can we make partnerships easier between large organizations and NGOs for project funding and implementation?

speaker

Samia Melhem


explanation

Streamlining partnerships can help leverage resources and expertise more effectively for development projects.


How can we ensure follow-up projects to complete and perfect initiatives that have been started?

speaker

Samia Melhem


explanation

This is important for ensuring that projects achieve their full potential and reach all intended beneficiaries.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Launch / Award Event #168 Parliamentary approaches to ICT and UN SC Resolution 1373

Session at a Glance

Summary

This panel discussion focused on parliamentary approaches to using information and communication technologies (ICTs) in counterterrorism efforts, in accordance with UN Security Council Resolution 1373. Experts from various international organizations and parliamentary bodies shared insights on the challenges and opportunities presented by ICTs and artificial intelligence (AI) in combating terrorism.


The speakers emphasized the critical role of parliamentarians in developing legislation, allocating resources, and providing oversight for counterterrorism measures involving new technologies. They stressed the importance of balancing security needs with human rights protections and adhering to international law. The discussion highlighted how terrorist groups are exploiting AI and other emerging technologies for propaganda, recruitment, and planning attacks, while also noting the potential for these same technologies to enhance threat detection and prevention efforts by authorities.


Key points included the need for technology-neutral legislation, international cooperation, and public-private partnerships in addressing these challenges. Speakers also emphasized the importance of digital literacy and public awareness campaigns to build societal resilience against online radicalization and disinformation. The UN Security Council Resolution 1373 was cited as a foundational document guiding international counterterrorism efforts, with speakers noting the ongoing need to adapt its principles to the evolving technological landscape.


The panel concluded by reiterating the importance of human rights considerations in all counterterrorism measures and the need for continued dialogue and collaboration among parliamentarians, international organizations, and other stakeholders to effectively address the complex challenges posed by terrorist use of ICTs and AI.


Keypoints

Major discussion points:


– The dual nature of information and communication technologies (ICTs) in counterterrorism – both as tools that can be exploited by terrorists and as valuable resources for preventing/countering terrorism


– The critical role of parliamentarians in developing legislation, allocating resources, and providing oversight related to ICTs and counterterrorism


– The importance of UN Security Council Resolution 1373 as a foundational document guiding international counterterrorism efforts


– The need to balance security measures with protection of human rights and fundamental freedoms when regulating ICTs


– The value of international cooperation and public-private partnerships in addressing ICT-related terrorism challenges


Overall purpose:


The goal of the discussion was to explore parliamentary approaches to using ICTs in counterterrorism efforts, in accordance with UN Security Council Resolution 1373. Speakers shared insights on challenges, opportunities, and best practices for parliamentarians to consider.


Tone:


The tone was largely formal and informative, with speakers providing expert perspectives in a professional manner. There was an underlying sense of urgency about the topic, but the tone remained measured and analytical throughout. The discussion concluded on a note of ongoing commitment to addressing the complex issues raised.


Speakers

– Murad Tangiev: Chief of the UNOCT program office on Parliamentary Engagement


– David Alamos: Moderator, Chief of the UNOCT program office on Parliamentary Engagement


– Kamil Aydin: Chair of the Ad Hoc Committee on Counterterrorism of the Organization of Security and Cooperation in the European Parliamentary Assembly


– Ahmed Buckley: Author of the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373, former diplomat and counterterrorism expert


– Emanuele Loperfido: Vice-Chair of the Ad-Hoc Committee on Counterterrorism of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, member of the Italian delegation to the OSCEPA


– Abdelouahab Yagoubi: Member of the People’s National Assembly of Algeria, PAM rapporteur on artificial intelligence


– Jennifer Bramlette: Coordinator for Information and Communication Technology of the United Nations Counterterrorism Committee Executive Directorate


– Akvile Giniotiene: Head of the Cyber and New Technologies Unit at the United Nations Office of Counterterrorism


Additional speakers:


– Dr. Ahmed Al-Muhannadi: Member of the Shura Council of Qatar (mentioned but did not speak)


– Pedro Roque: Vice President of the Parliamentary Assembly of the Mediterranean (mentioned but did not speak)


– Audience member: Badil Badi from Shura Council, Qatar (asked a question at the end)


Full session report

Parliamentary Approaches to Using Information and Communication Technologies in Counterterrorism Efforts


This panel discussion, held in the context of the Internet Governance Forum (IGF), brought together experts from various international organisations and parliamentary bodies to explore the challenges and opportunities presented by information and communication technologies (ICTs) and artificial intelligence (AI) in combating terrorism. The dialogue centred on the role of parliamentarians in developing legislation, allocating resources, and providing oversight for counterterrorism measures involving new technologies.


The Role of Parliaments in Addressing ICT/AI Challenges


Speakers unanimously agreed on the critical role of parliaments in addressing the challenges posed by ICTs and AI in counterterrorism efforts. Parliamentarians are responsible for transposing international commitments, such as those outlined in UN Security Council Resolution 1373, into national laws and allocating resources based on credible threat assessments. David Alamos, the moderator and Chief of the UNOCT programme office on Parliamentary Engagement, emphasised the parliamentary role in allocating budgets and conducting oversight of counterterrorism efforts.


Akvile Giniotiene from the UN Office of Counterterrorism highlighted the importance of establishing legal frameworks for law enforcement to use new technologies effectively. She also discussed UNOCT’s capacity-building efforts to support member states in developing these frameworks.


Dual Nature of ICTs and AI in Counterterrorism


A significant portion of the discussion revolved around the dual nature of ICTs and AI in counterterrorism efforts, acknowledging both the potential benefits for authorities and the risks posed by malicious actors exploiting these technologies.


Challenges:


1. Kamil Aydin, Chair of the Ad Hoc Committee on Counterterrorism of the OSCE Parliamentary Assembly, noted that AI enables sophisticated propaganda and automated recruitment by terrorists.


2. Emanuele Loperfido, Vice-Chair of the Ad-Hoc Committee on Counterterrorism of the OSCE Parliamentary Assembly, warned about the risks of deepfakes in spreading disinformation and eroding public trust.


3. Akvile Giniotiene highlighted how terrorists are exploiting cybercrime-as-a-service on the dark web.


Opportunities:


1. Abdelouahab Yagoubi, Member of the People’s National Assembly of Algeria, pointed out that AI and ICTs can enhance threat detection and analysis for authorities.


2. Jennifer Bramlette, from the UN Counterterrorism Committee Executive Directorate, emphasised the need for digital literacy training to build societal resilience against online threats.


International Cooperation and Public-Private Partnerships


The discussion highlighted the importance of international cooperation and public-private partnerships in addressing the challenges of terrorist use of ICTs. Emanuele Loperfido stressed the significance of public-private partnerships, while Akvile Giniotiene emphasised the need for cross-border cooperation mechanisms.


Abdelouahab Yagoubi highlighted the role of parliamentary assemblies in promoting knowledge sharing, particularly mentioning the Parliamentary Assembly of the Mediterranean’s (PAM) work on AI and emerging technologies. David Alamos noted that UN entities provide capacity-building support to member states.


The speakers agreed that no single entity or nation could effectively combat the terrorist use of new technologies in isolation, making international collaboration crucial. They also discussed the importance of a coordination mechanism among parliamentary assemblies to enhance knowledge sharing and cooperation.


Balancing Security and Human Rights


A recurring theme throughout the discussion was the need to balance security measures with the protection of human rights and fundamental freedoms when regulating ICTs. Emanuele Loperfido particularly emphasised this point, highlighting the ethical considerations that must be taken into account when implementing new technologies in counterterrorism efforts. He also presented the OSCE Parliamentary Assembly’s resolution on AI and counterterrorism, which addresses these concerns.


This balance is especially crucial given the potential for misuse of counterterrorism measures to infringe on civil liberties. An audience question regarding the broad use of the term “terrorism” and its potential misuse further underscored this concern. The speakers agreed that any legislative or policy frameworks developed must have robust safeguards to protect individual rights while still allowing for effective counterterrorism measures.


Conclusion and Future Directions


David Alamos concluded the discussion by reiterating the importance of continued dialogue and collaboration among parliamentarians, international organisations, and other stakeholders to effectively address the complex challenges posed by terrorist use of ICTs and AI. Key areas for future focus include:


1. Updating and improving the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373 to reflect evolving threats and good practices.


2. Developing more effective legislative frameworks to counter the abuse and misuse of AI and emerging technologies by malicious actors.


3. Enhancing parliamentarians’ understanding of new technologies to enable more informed decision-making and oversight.


4. Establishing clear legal mandates and policy frameworks for law enforcement agencies to use new technologies in investigating and prosecuting terrorist offences.


5. Investing in digital literacy and public awareness campaigns to build societal resilience against online radicalisation and disinformation.


The discussion underscored the ongoing need to adapt international counterterrorism efforts to the rapidly evolving technological landscape while maintaining a steadfast commitment to human rights and the rule of law. It also highlighted the critical role of parliamentarians in shaping these efforts and the importance of international cooperation in addressing global challenges.


Session Transcript

Murad Tangiev: Your Excellency, thank you so much for your insightful speech and for your support. And of course, for the support of the Shura Council of the State of Qatar, for all the work that our program office is doing. Now, it gives me a pleasure to invite here at this stage, the chief of the UNOCT program office on Parliamentary Engagement, Mr. David Alamos. David, please.


David Alamos: Thank you very much, Murat. Good morning, excellencies, honorable participants, esteemed colleagues, ladies and gentlemen. It is a great honor to welcome you all to this important event organized on the margins of the Internet Governance Forum here in the beautiful city of Riyadh. I would like to thank the Kingdom of Saudi Arabia and IGF for hosting and organizing this critical global platform, as well as to each of you for your commitment to addressing one of the most pressing challenges of our time, terrorism and its evolving complexities in the digital age. At the outset, I wish to express my heartfelt gratitude to the Shura Council of the State of Qatar for its unwavering and continuous support to the UNOCT Program Office on Parliamentary Engagement in Preventing and Countering Terrorism. I also wish to extend my appreciation to all participants joining us today, including representatives from parliamentary assemblies, members of national parliaments, governments of member states, international organizations, media, academia, and civil society, both in person and also online. I would also like to convey our gratitude to our expert panel, comprising distinguished representatives of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, with whom we have co-organized this event, the Parliamentary Assembly of the Mediterranean, the United Nations Counterterrorism Committee Executive Directorate, and the UNOCT Global Program on Cybersecurity and New Technologies, and other international experts on counterterrorism, ICT, and artificial intelligence. Excellencies, terrorism remains a persistent global threat, transcending borders, nationalities, and beliefs. The international community, through robust frameworks, such as the United Nations Security Council Resolution 1373, has provided a roadmap for coordinated action. 
National parliaments are pivotal in this endeavor, serving as the bridge between international obligations and their implementation through effective legislation, oversight, and policies. As we navigate an era of rapid technological advancement, the dual role of information and communication technologies, particularly artificial intelligence, cannot be overstated. These technologies offer unprecedented opportunities to enhance data analysis, improve threat detection, and bolster predictive capabilities in counterterrorism. Yet they also present profound challenges, as terrorist groups increasingly exploit digital tools for recruitment, fundraising, and the dissemination of propaganda and disinformation. The recent UN Summit of the Future underscores the importance of addressing these opportunities and challenges. The Pact for the Future, adopted by the General Assembly in September 2024, highlights the necessity of a multi-stakeholder approach. It calls for enhanced engagement with national parliaments while respecting their legislative mandates and promoting collaboration across all sectors of society. In this context, national parliaments are not just participants, but leaders. By proactively regulating ICT to support counterterrorism strategies, they can ensure that such measures align with UN Security Council Resolution 1373, advance the Sustainable Development Goals, and adhere to principles of inclusivity, human rights, and gender sensitivity. Today's event is a great opportunity to foster dialogue and raise awareness about these critical issues. To conclude, let me reaffirm UNOCT's unwavering commitment to supporting national parliaments in their efforts to combat terrorism and violent extremism in all its forms. Thank you very much, and I wish us all a productive and insightful session. Thank you very much.


Murad Tangiev: Thank you very much, dear David. Finally, I would like to invite here, connecting with us online, Honorable Mr. Kamil Aydin. He is the Chair of the Ad Hoc Committee on Countering Terrorism of the Organization for Security and Cooperation in Europe Parliamentary Assembly, to make his welcoming remarks today with us. Honorable Kamil, the floor is yours. Thank you.


Kamil Aydin: Thank you, Murat. Can you hear me? Yes, we can hear you and see you very well. Thank you. Thank you. Dear Excellencies, colleagues, and esteemed participants. Above all, I would like to express that I wholeheartedly wanted to be there with you, but I couldn't make it, as we have been intensively discussing the annual budget for the last 10 days in the Turkish Grand National Assembly. And I would like to say welcome to everybody participating in this very precious gathering. Dear Excellencies, distinguished colleagues and guests, on behalf of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe and its Ad Hoc Committee on Countering Terrorism, it is my great pleasure to welcome you all to this launch and award session on Parliamentary Approaches to the Use of Information and Communication Technologies in Counterterrorism in accordance with UN Security Council Resolution 1373, on the margins of this year's Internet Governance Forum in Riyadh. The OSCE is the world's largest regional security organization, devoted to promoting peace and stability across its 57 participating states through cooperative dialogue. In today's increasingly challenging geopolitical landscape, one of the priorities of the OSCE and its Parliamentary Assembly has been developing responses to terrorism and violent extremism that are both effective and well-rooted in human rights. Today's event, co-organized with our partners at the UNOCT, reflects this shared dedication to global efforts against terrorism while emphasizing the critical role of AI and new technologies in shaping modern security strategies. We must stand together against those seeking to undermine our democratic values and threaten our societies through malicious acts. Information and communication technologies have transformed governance and society, but are increasingly exploited by terrorist groups for recruitment, propaganda and coordination. 
Recent data underscores the urgency of this challenge. The Global Internet Forum to Counter Terrorism reported a 32% increase in AI-enabled extremist content between 2020 and 2023, highlighting the growing use of technology in radicalization and propaganda. 90% of all terrorist propaganda is currently disseminated online, and AI-generated content can significantly enhance both the quality and the quantity of this. Terrorist organizations such as Daesh, Al-Qaeda, PKK and far-right violent extremist groups are increasingly leveraging AI in their operations, exploiting AI's capabilities to produce sophisticated propaganda, automate recruitment processes and manipulate social media algorithms to amplify their narratives. These and other threats associated with the potential misuse of AI and new technologies for terrorist purposes, as well as parliamentary approaches to using AI in counter-terrorism, will be the focus of today's discussion. This complex, multifaceted nexus between AI and countering terrorism has been high on the agenda of the OSCE Parliamentary Assembly for some time now, not least since the adoption of our resolution on AI and the fight against terrorism on the margins of our last annual session in Romania. This resolution recognizes the significant threat to international security posed by the potential misuse of AI by terrorists and violent extremists and, at the same time, acknowledges the opportunities that lie in the ethical application of AI in countering terrorism. The document represents the culmination of our efforts to be at the forefront in tackling yet another emerging security threat that needs to be addressed cooperatively. Accordingly, two weeks ago, in cooperation with the UNOCT, we organized a highly relevant parliamentary policy dialogue on countering the misuse of AI for terrorist purposes in Rome, Italy, engaging 13 parliamentary assemblies from around the world and many renowned experts on this emerging issue. 
After all, parliamentarians play a critical role in preventing and countering terrorism, violent extremism and radicalization that leads to terrorism. We act as enablers, shaping national legislation and establishing the mandate of counter-terrorism bodies. We serve as controllers, ensuring that all counter-terrorism measures respect fundamental freedoms. And we bridge diverging views at all levels, facilitating constructive exchanges and ensuring citizens' participation in state affairs. Against this backdrop, I would like to commend our United Nations partners at the Office of Counter-Terrorism. UNOCT has been at the very forefront in engaging parliamentarians in counter-terrorism affairs, and we are deeply grateful to them for their invaluable support and expertise. It was an honor for our assembly to preside for two constructive years over the work of the new coordination mechanism of parliamentary assemblies on countering terrorism, and we are confident that our efforts have strengthened parliamentary engagement in the field. Misuse of AI for terrorist purposes is an urgent and critical issue, and I am deeply grateful for the expertise and insights gathered in Riyadh today. While I regret not being able to join you in person, I am confident that my colleague and Vice Chair Emanuele Loperfido will represent the OSCEPA's comprehensive work on this matter effectively. On that note, I wish you all a productive and engaging panel discussion. Thank you, and best wishes from the Grand National Assembly in Ankara. Thank you.


Murad Tangiev: Honorable Aydin, thank you very much for your kind words and for all your support throughout these two years we had the privilege to work with the OSCEPA. And now allow me please to hand over, not the floor, but the moderation role, to David Alamos to continue this dialogue. Thank you, David, over to you.


David Alamos: Oh, yeah, okay. Good afternoon already to everybody, excellencies and honorable participants. I will have the pleasure to moderate this panel of distinguished experts to address the topic of today: how parliaments may approach the use of information and communication technologies in counter-terrorism in accordance with UN Security Council Resolution 1373. And I will just very briefly say that we will cover three key questions during the discussion. What are the challenges and opportunities posed by information and communication technologies in preventing and countering terrorism? What is the role of parliamentarians in addressing these challenges? And of course, how can UN Security Council Resolution 1373 help member states in ensuring that national counter-terrorism measures are holistic, inclusive, human rights compliant, gender sensitive, and effective? So without any further delay, for the sake of time, I will give the floor to the first of our speakers, whom I have the pleasure to introduce: Dr. Ahmed Buckley, the author, indeed, of the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373. And allow me to briefly say that he joined Egypt's diplomatic corps two decades ago. His career has been dedicated to counter-terrorism, including serving as deputy director of the counter-terrorism unit at the Ministry of Foreign Affairs. He was also a member of the Analytical Support and Sanctions Monitoring Team, supporting the UN Security Council Sanctions Committee on ISIL, Al-Qaeda, and the Taliban. His background is impressive: he holds a Master of Arts in comparative politics of the Middle East, another in terrorism and international security, and he is now undertaking a PhD. So Dr. Buckley, please, the floor is yours.


Ahmed Buckley: Thank you very much, David. And I'd like to extend my deep gratitude to the Shura Council of Qatar and UNOCT for having me here. And I don't think we can say this enough, but thank you also to the government of the Kingdom of Saudi Arabia for graciously hosting this event in this fabulous venue. When we talk about international cooperation on counter-terrorism, I always like to begin by highlighting two points. The first is that despite all of our definitional differences on what is terrorism and who is a terrorist, and all our haranguing over these definitions, we were still, as an international community, able to make large strides on counter-terrorism cooperation. And the bedrock of that cooperation was UN Security Council Resolution 1373 and its descendants. The second point, which is particularly relevant because we're talking in a parliamentary track, is that none of this international cooperation could have taken place, and none of it is sustainable in the future, without the active participation of parliamentarians. Parliamentarians are of course the legislators; they are the ones responsible for transposing all of these international commitments into national laws. But they are also the dispensers of resources. They are the ones who make the correct decisions on appropriations and budgetary allocations to face the threats, based on credible threat assessments from the security agencies. They are also in the best position, as representatives of the electorate, to make sure that before any of these laws or measures are enacted, they are the culmination of a wide-ranging consultative process that takes into account the views of the law enforcement agencies, the private sector, as well as civil society. Finally, of course, they are the bulwark ensuring that all of these measures and laws are commensurate with the Member States' constitutional and international commitments on human rights, as you mentioned, David. 
On the threats, and for the sake of time, I won't delve deep into that. I think they were covered by the Honourable Aydin, and perhaps other colleagues will also talk about the threats emanating from AI, as we've heard in other workshops. You mentioned propaganda. There's also the fear of terrorists using AI to raise funds in the form of scams. If criminal organizations are starting to use AI to raise funds, you can be sure that terrorists will be quick to follow on their heels. How has the Security Council addressed the issue of the misuse of ICTs? Well, it goes back to the mother of all resolutions, 1373, which obliged member states to prevent the provision of safe havens to those who plan, support, or commit terrorist acts, an obligation that also extends to virtual territory: online platforms, end-to-end encryption services, and any other virtual space which has been used to plan, coordinate, recruit, and raise funds for terrorist acts. There is Resolution 1624, a few years later, which obliged member states to criminalize the incitement and glorification of terrorism, and which explicitly called on member states to take all legal and regulatory measures to prevent the misuse of ICTs in creating propaganda for terrorist organizations. You have Resolution 2322 on global counterterrorism cooperation as well, which laid out a roadmap for member states on how to establish robust mechanisms and channels within each member state to gather and disseminate information across borders, and to facilitate the drafting, sending and receiving of mutual legal assistance requests regarding ICTs in terrorism. And you have Resolution 2341, which talked about critical infrastructure. 
And while the Security Council did not, in that resolution, explicitly define what critical infrastructure was for each member state, it was still very cognizant that some member states will consider the internet as critical infrastructure. And the council called on UN entities to help member states, whether through capacity building or technical assistance, to take the appropriate measures to protect the internet from being misused by terrorists. I give these examples, again, to make two points: that the Security Council was, from the very beginning, aware of the misuse of ICTs and gave it its due attention, but also that most of these resolutions have been drafted in technology-neutral language. And in fact, member states are also encouraged, when they develop their legislation, to do so in this technology-neutral language, which focuses on criminalizing the crime, not necessarily the tool by which that crime was committed. In fact, I think it is safe to say that when we're talking about the threats from artificial intelligence, most countries do not require a substantial overhaul of their legal frameworks. But what they need to do is concertedly address raising the capacity of law enforcement agencies to detect, prevent and prosecute these crimes when they are committed using artificial intelligence tools. You mentioned the handbook. Thank you very much for bringing it up. It was a privilege working on it. And the handbook, I think, is a very useful tool. Hashtag shameless self-promotion here; I shouldn't be praising my own product. But I think it is a useful tool. Because on one hand, it provides a very good overview for parliamentarians of this oeuvre of Security Council resolutions regarding counterterrorism. And it also gives them a sort of checklist of what they need to check to gauge their level of implementation. 
Of course, it's not the definitive guide for gap analysis for member states. That is still the preserve of CTED's technical guide on implementing Resolution 1373. But the handbook is a useful reference for some of these checklists and also for additional resources when member states are making proper legislation. It also covers some parallel legislative concerns that complement counterterrorism legislation. Many member states are now undergoing legislation on personal data protection and on cybersecurity, and you will find in the handbook some concerns, some aspects to take into consideration, when you are legislating and taking measures against those types of threats. Now, I don't think that the handbook is complete. I think it should be a living document. You know, it should take into consideration some of the good practices that parliaments have already adopted in this regard, and that's just a heavy hint, David, to say that we still need to work together on improving the handbook as much as we can, and with that I'll hand it back to you. Thank you very much.


David Alamos: Thank you very much, Dr. Ahmed, for your insightful and comprehensive presentation. I have to say it was also a really big honor for us to work with you, together with Excellency Dr. Ahmed Al-Muhannadi, on the elaboration of the handbook. This is the handbook, and it is also available online, in case anyone wants to check our web page and get it from there. So it is now my pleasure to introduce the next speaker, the Vice-Chair of the Ad Hoc Committee on Countering Terrorism of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, Honorable Mr. Emanuele Loperfido. Let me just say that Honorable Loperfido is a member of the Italian delegation to the OSCEPA and the principal sponsor of the 2024 OSCEPA Resolution on Artificial Intelligence and the Fight Against Terrorism. He currently serves as Secretary of the Foreign Affairs Committee in the Italian Chamber of Deputies, and he is also an active member of the Defense Committee. Today, Honorable Loperfido will speak about the OSCEPA Resolution, a very important resolution indeed for all of us. Please, Honorable Loperfido, you have the floor.


Emanuele Loperfido: Good morning. Thank you, David. Thank you for the kind introduction. Thank you to all of you who are here to listen to us. Thank you to the OSCEPA staff members who have been working all this year together with us to support parliamentarians in trying to respond to this new challenge that we are facing as parliamentarians. And the work that the United Nations Office of Counter-Terrorism and the OSCEPA are doing together is very important, because the most important thing is to build a real partnership to face these challenges. So I'm delighted to be here and contribute to this distinguished panel in my capacity as Vice Chair of the OSCE Parliamentary Assembly Ad Hoc Committee on Countering Terrorism. Speaking directly about artificial intelligence, we know that it has brought significant advantages across various sectors and holds promising potential for use by authorities in the fight against terrorism. But at the same time, the same technology, when exploited by malicious actors, poses a significant risk to international security. As AI capabilities evolve, so does the potential for them to be used in ways that threaten peace and stability. For example, widely available AI-driven tools could enable individuals or groups to access technologies such as drones that could be misused for surveillance, targeted attacks, or other malicious purposes. Another area of concern is the potential for extremists to harness AI algorithms to identify and target vulnerable individuals, tailoring messaging to exploit fears and biases. These prospects underscore the importance of vigilance, as AI could inadvertently aid in amplifying extremist narratives and online radicalization. A particularly troubling dimension is the rise of deepfake technologies. 
We must consider how the ability to create convincing but fabricated audio and video content could be leveraged by terrorist groups to spread disinformation, incite violence, or erode public trust, which would have far-reaching impacts on social cohesion and national security if left unaddressed. This is why, over the past year, the OSCEPA and the Ad Hoc Committee have made significant strides in response to these ever-evolving challenges. As the world's largest inter-parliamentary forum dedicated to peace and security, our assembly worked hard to promote more knowledge around this topic in order to inform national and international policymaking. In February, we had a high-level panel discussion in Vienna, bringing together experts from the tech industry and the public sector. These pressing issues were further examined with renowned academics during the official visit to Turkey in early May, which was organized by our dear Chair Kamil Aydin, whom I would like to thank for his continued efforts to support the assembly in becoming more and more expert in counterterrorism. And last but not least, we had a conference in Portugal. During the annual session in Bucharest, our committee, together with the OSCEPA, adopted the Bucharest Resolution on Artificial Intelligence and the Fight Against Terrorism, which codified some key findings and which represents one of the very first policy attempts to address the dual security impact of the rapid advancement of artificial intelligence. The resolution clearly focused on mitigating the risks of AI misuse and on strengthening the national legal frameworks that govern AI development and deployment, ensuring robust ethical standards and human oversight. While AI can be a powerful tool in detecting threats and preventing radicalization, its use must always be balanced with respect for privacy and freedom of expression. 
This dual approach not only strengthens public trust, but also ensures that AI innovation remains aligned with our shared values of democracy and security. Italy, for instance, has recently underlined the importance of ethics in AI governance by appointing a theologian and AI expert, a member of the United Nations advisory body on AI, to lead national AI coordination. This choice reflects a broader commitment to ensuring that AI technologies are developed and applied with respect for human dignity and rights. The resolution that we adopted went beyond that. Indeed, it emphasized how these tools can also be used by security agencies to quickly identify potential threats, prevent attacks and detect early radicalization patterns. Additionally, our document stressed the importance of public-private partnerships and of strengthened international cooperation. Lastly, the resolution highlighted the critical role of education and the importance of digital literacy in creating and improving public awareness campaigns that help societies to recognize and resist disinformation and manipulation. Ultimately, through this resolution, we aim to foster an environment where AI is secure, ethical and aligned with democratic principles while remaining economically viable. Hopefully, other national parliaments and other international parliamentary assemblies will follow our example, as we did at the last event in Rome, where, together with the UNOCT and the inter-parliamentary community, we directed our efforts towards reinforcing the mechanisms of cooperation among parliamentarians, in order to create legislation that is international in scope, that respects the rights I just mentioned, and that secures a safe world for the people living in it. 
So our efforts will continue, together with the members of the OSCEPA, together with UNOCT, and together with all the parliamentarians who will support our efforts against this challenge.


David Alamos: Thank you very much, Honourable Loperfido, for a highly topical and informative presentation. I would like to also express my gratitude to you, to Honourable Kamil Aydin online, and to the Parliamentary Assembly of the OSCE for the continuous support and collaboration with our Parliamentary Engagement Office, and especially for these last two excellent years of Presidency of the Coordination Mechanism of Parliamentary Assemblies. Now I would like to give the floor to our next speaker, a representative of the Parliamentary Assembly of the Mediterranean, PAM, and member of the People's National Assembly of Algeria, Honourable Mr. Abdelouahab Yagoubi. He was elected in Paris, indeed, and sits on the Algerian National Assembly's Foreign Affairs, Cooperation and Immigration Committee. Honourable Yagoubi has been a member of the Algerian delegation to the Parliamentary Assembly of the Mediterranean since 2021 and is an expert on AI and ICT. He has large experience in the private sector and international companies, and at present he holds the function of PAM rapporteur on artificial intelligence. Please, Honourable Yagoubi, you have the floor.


Abdelouahab Yagoubi: Thank you very much, dear David. Good afternoon, everybody. Excellencies, distinguished colleagues, ladies and gentlemen, on behalf of the Parliamentary Assembly of the Mediterranean, I wish to thank the UNOCT, the OSCEPA and the Shura Council of Qatar for organising this side event. I am especially pleased to gather here today following the election of PAM to the Presidency of the Coordination Mechanism of Parliamentary Assemblies on Counter-terrorism, which took place two weeks ago in Rome. In this regard, I wish to strongly reiterate that PAM will work with all international parliamentary assemblies to fulfil its mandate and advance towards a future free of terrorism for the generations to come. The rapid development and expansion of AI and emerging technologies has made it imperative for parliaments to pay attention and to develop more effective legislative frameworks and strategies to counter their abuse and misuse. As was predictable, the accessibility, low cost and efficiency provided by AI and emerging technologies have allowed malicious actors, including but not limited to terrorist and criminal organisations, to exploit them for their purposes. In response to these threats, and in compliance with the provisions of UN Security Council Resolution 1373, a concerted and united international approach is critical to address both the challenges and the opportunities of AI and emerging technologies in preventing and countering terrorism. This coordinated approach must take into consideration the centrality of national and regional parliaments in advancing relevant and dedicated legislation. Moreover, it is always worth highlighting that any framework adopted by States to combat the misuse of AI and emerging technologies must be compliant with international human rights law and respect the fundamental freedoms of individuals, which are equally applicable online as offline. 
Fully aware of this complex landscape, PAM, with the support of its Center of Global Studies (CGS) and in partnership with the UN Security Council Counter-Terrorism Committee Executive Directorate (CTED), recently published a report on the malicious use of AI and emerging technologies by terrorist and criminal groups and its impact on security, legislation, and governance. Among other elements, the report also stressed that AI and emerging technologies play a pivotal role in the fight against terrorism and organized crime. This includes the automatic analysis of vast amounts of data for patterns and trends associated with the malicious use of technological tools, which enables authorities to rapidly identify the most effective approaches and strategies. As a result of its report, PAM established a permanent global parliamentary observatory on AI and ICT, run by the PAM CGS, and began the publication of a daily and weekly digest to disseminate news and analysis about trends related to technological advancement in a number of fields, including security and defense. I invite you to reach out to the PAM CGS in order to strengthen our collaboration, multiplying the effectiveness of our work. Thank you for your attention.


David Alamos: Thank you very much, Honorable Yagoubi, for your precise intervention. I would also like to express my gratitude to you, and to Honorable and Excellency Pedro Roque, Vice President of the Parliamentary Assembly of the Mediterranean, who is accompanying us here, for your constant support and also, we are very grateful for that, for being the newly elected President and Chair of the Coordination Mechanism of Parliamentary Assemblies. Let me now turn to my dear colleague of the UN and friend, Ms. Jennifer Bramlette. Just to let you know, Ms. Bramlette serves as the Coordinator for Information and Communication Technology of the United Nations Counter-Terrorism Committee Executive Directorate. In this role, she focuses on issues relating to preventing and countering the use of ICT and related new and emerging technologies for terrorist purposes. Ms. Bramlette has also served as the Strategic Advisor to CTED's Executive Director, as CTED Legal Officer, and as the Program Manager and Senior Advisor of UNODC's Global Program against Money Laundering, Proceeds of Crime, and the Financing of Terrorism. And she also has really large experience, from before the UN, in the US Department of Defense. So please, Ms. Bramlette, you have the floor.


Jennifer Bramlette: Thank you, David. And good morning or good afternoon to everybody. I just want to start off by saying how delighted I was, and CTED was, when UNOCT said that they were going to put this parliamentarian handbook together on Resolution 1373. The main reason is that Resolution 1373 is a groundbreaking, forward-thinking, essential document for all of the work that the UN Security Council and other partner agencies are doing on counterterrorism. It set the groundwork for everything that has come since. There have been a number of Security Council resolutions on counterterrorism, 16 of which deal with the issue of information and communication technologies. Resolution 1373 set the groundwork by initiating a requirement for states to share operational communications information. Now, it seems like a pretty small mandate. But that set up operational interactivity between law enforcement agencies, border control agencies, between aspects of government that had never traditionally worked together; usually operational information was held on the security side of the house. And all of a sudden, now you had Ministries of Foreign Affairs, Ministries of Interior, Ministries of Education starting to work together. And so Resolution 1373 was essential as a starting point for all of the work that we're doing today and what we're talking about today. Now my office is a special political mission that supports the United Nations Security Council's Counter-Terrorism Committee. For them, we conduct assessments of member states' capacity to counter terrorism in accordance with Security Council resolutions, and particularly Resolution 1373. We also have a mandate to identify gaps in implementation and to facilitate technical assistance so that member states can better implement these resolutions. 
We also have a mandate to look at emerging threats and evolving trends, and to keep an eye on what's happening in the world so that, again, we can better assist member states to implement Security Council resolutions. I was so delighted with this handbook because Resolution 1373 is our bread and butter. This is where we first started. And when we first started working with this resolution, we broke things down into looking at legal frameworks, because this is where the resolution sets the groundwork for looking at legal frameworks and how states can actually criminalize terrorist acts, with the end goal of bringing terrorists to justice. And Resolution 1373 lays out all of these various components, these activities that states need to do in order to be able to bring terrorists to justice. The resolution doesn't tell states how to do it. It just says that you must prevent terrorism financing. You must prevent terrorist arming. You must prevent the safe havening of terrorist groups. It doesn't say how. This is where the regular dialogue with member states, where the activities of capacity building and technical assistance, come into play: to help member states accomplish these goals. My office is looking not only at legal frameworks, but also at institutions, how institutions are mandated and how they coordinate, cooperate, and share information, including operational information, again going back to Resolution 1373. And we're also looking at whether the practical measures they're taking are effective or not, looking at good practice and, again, looking at shortfalls. When it comes to ICT, I think that Dr. Buckley made an excellent point in how we think about terrorist use of the internet, social media platforms, alternative online spaces, and new technologies like AI, like virtual and augmented reality, even looking forward into quantum computing. 
And we have to think about it differently, because when we think about terrorism we often think about bombs and buildings, we think about people being injured, we think about real-life harms, and yet there’s this whole other world, whether you call it the cybersphere, the digital world, or online spaces, where terrorism happens. And we were asked, actually, why are terrorism bodies here at the IGF? Well, we made a point earlier in an intervention on misinformation that the way misinformation is being written and propagated online is very similar to how terrorists are using online spaces to move their messaging and their propaganda, and to coordinate and operate. How misinformation and harmful content is housed online is very similar to how terrorist material is housed online. And so we have to have this open mindset that the cybersphere, these online spaces, are operational spaces for terrorist organizations, and that everything that’s being discussed here at the IGF is relevant to countering terrorism. Everything being talked about with regard to misinformation, and the way societies need to be inoculated against misinformation and disinformation, applies equally to terrorist propaganda. So in our assessment work, the challenges we’ve seen are many, and I won’t go into all of them, but I would say that where we’ve seen great success is in states investing in digital and AI literacy training to build resilience in their populations. And this is from children all the way through to elders, to teach them how the internet works, how social media works, and how they can interpret the information they see so they can determine for themselves if it’s true or not and if it’s something they should believe. So this investment into AI and digital literacy training is very important. 
Also important are efforts to work with the tech industry on safety by design and on issues around good programming and the technical aspects, to ensure that material going onto the internet and the spaces on the internet are safe, monitored, and workable for all cultures and all societies. I would reiterate the points made on human rights: that human rights cannot be sacrificed in any way. I know many states claim that it’s difficult to balance security and human rights. But I would say that human rights are as applicable online as they are offline, and they cannot be compromised. And so there must be a way to have justice in all aspects of life for users and for states. And that’s a conversation that must continue, with the outcome of protecting privacy, data protection, freedom of expression, and all of the other fundamental freedoms that we have come to enjoy and need to maintain. Thank you very much. I’ll stop there.


David Alamos: Thank you very much, dear Jennifer, for your insights, observations, and recommendations, as always highly relevant and valuable, and we really appreciate the collaboration with CTED. This common approach to member states is really important for us. And I would like now to give the floor to our final speaker, who is our dear colleague from UNOCT, Ms. Akvile Giniotiene. You have full time, because we have been given extra time. It’s like a football match, so we have some extra minutes. So please, you can have your five to seven minutes completely. But let me first say that Ms. Akvile Giniotiene is the head of the Cyber and New Technologies Unit at the United Nations Office of Counter-Terrorism. Prior to joining the United Nations, she served for 25 years in different capacities for the government of the Republic of Lithuania, including as the Deputy Director of the State Security Department and Deputy Chair of the National Security Authority, and in the private sector, where she has been an active participant in international cybersecurity dialogue and capacity building initiatives and has assisted governments in the development of national cybersecurity strategies and critical information infrastructure protection frameworks. Dear Akvile, thank you very much. You have the floor, please. Thank you, David.


Akvile Giniotiene: And good afternoon to all. It’s really a pleasure to be here and be engaged in the discussion of parliamentary approaches to the terrorist use of ICT. I come not from a legal background, but from a more operational background. And our programme is a capacity building tool to support member states to develop the necessary capacities to respond to both the challenges and the opportunities that new technologies present in countering terrorism. In our work, we are helping member states to understand the threat stemming from terrorist use of new technologies and what the opportunities are, and also to build the necessary capacities: to protect critical infrastructures against terrorist cyber attacks, to develop the law enforcement capacities needed to use new technologies for the investigation of terrorist offenses, and to develop the policy frameworks necessary to ensure a strategic, whole-of-government approach to new technologies in countering terrorism. And of course, from my capacity building work, I can say that such capacities cannot be built in a vacuum. There should be legal mandates in place for law enforcement to do things online, to use information collected using new technologies for investigation and prosecution. There should be policies in place as well. And I had the pleasure of participating two weeks ago in a parliamentary assembly dialogue in Rome, and I was really, really impressed by the amount of thought given by parliamentarians on how to go about it. My takeaway from all the discussions there was that to regulate, legislate, and deliver proper oversight of new technologies in the counterterrorism domain, first you need to understand what the threat is, how malicious actors can abuse new technologies, and what the opportunities are for law enforcement and wider communities to use new technologies in countering terrorism. 
And I’m happy that our programme, in some small way, supports member states in this regard. Three years ago, we published a report on the use of artificial intelligence by terrorist organizations, outlining different areas in which terrorists could use artificial intelligence in the future: be it cyber-enabled attacks; be it physical attacks using self-driving cars or drones equipped with facial recognition technology to identify particular targets in crowds; or enhancing their operational capability to counterfeit documents and spread misinformation. It was a little bit futuristic at that time because generative AI was not there, but two years passed and generative AI hit the floor, and we see some of those scenarios already becoming a reality today that parliamentarians are trying to address. Also, one of our most recent reports, which is also available online, concerns terrorist use of cybercrime-as-a-service on the dark web: how cybercrime-as-a-service is available to be procured at a very cheap price, and could cause massive effects against critical infrastructure or help terrorists to raise money. And in terms of capacity building, we are engaging with member states to help them develop an understanding of the threats and risks at the national level in a structured manner, inviting all relevant parties to prioritize the risks which could become national risks, be it the use of deepfakes, be it artificial intelligence, and to address them through policy responses: how to prevent those scenarios from happening, how to deny them, how to protect and recover once they happen, and how to prosecute. In our capacity building work, I would say it is always very good to have parliamentarians in these discussions. 
It doesn’t always happen, but in those cases where we had representatives from, let’s say, the committees on national security and defense or the committees on new technologies, it made for a very good discussion, bringing all relevant parties together. When it comes to the opportunities of new technologies, the programme is mostly focused on building law enforcement capacities. We help law enforcement to embrace open source intelligence: how to conduct investigations online, how to conduct dark web investigations, how to use facial recognition, how to use digital forensic techniques, how to run cryptocurrency investigations, how to seize cryptocurrencies, which is also a very difficult thing to do, and how drones can support counter-terrorism efforts. And in all these regards, to wrap it up, it is very important that the legal aspects are addressed. First of all, the use of new technologies by counter-terrorism agencies should be based on clear provisions of law, to ensure the principles of the rule of law and adherence to international law. Because if law enforcement agencies do not have a mandate to use those new technologies, it will not lead to the prosecution and adjudication of terrorist offenses, which is the end goal of any counter-terrorism agency: to reduce the number of threat actors that we need to deal with. Second, and here I am repeating other experts on the panel, any measures impacting or restricting human rights must be established by law, necessary, and proportionate. Also, I think it is very important that the law establishes legal powers for review and redress which are independent from law enforcement agencies, so that if there is a concern that law enforcement agencies are not using these powers and new technologies properly, there are mechanisms to raise that and to resolve it. 
Also, we are seeing the increased use of advanced data collection, which is a very efficient way for law enforcement to address counter-terrorism, and the use of CCTV and big data, but these also should be governed to prevent excessive information collection. And of course, as Jennifer mentioned, the prohibition of terrorist acts, because that is what gives law enforcement the powers to investigate. So it is very important that these new and evolving crimes are addressed in criminal laws, which give law enforcement a mandate to act. And legal arrangements to support cross-border cooperation are also very, very important, because terrorists have no borders, technologies have no borders, and data is everywhere. So parliamentarians have a very important role to play, and they are increasingly making efforts in this regard, which is appreciated by the law enforcement and counter-terrorism community. So thank you again for inviting me to be on this panel, and thank you very much.


David Alamos: Thank you. Thank you very much, dear Akvile, for your presentation. And let me highlight the important work that you and your unit are doing in serving and supporting member states on cybersecurity, artificial intelligence, and ICT in the prevention and countering of terrorism. We have just two more minutes, because we will need to close, as there is a new session at 1.15. But if there is any comment or question that you would like to raise, please, in just 30 seconds, I would be very grateful.


Audience: Thank you. Badil Badi, from the Shura Council, Qatar. I thank everybody here. Unfortunately, law enforcement has long used the word “terrorism” in many contexts. If you want to put someone in trouble, just use the word, and that is enough to cause them a great deal of trouble. And if that is backed by legal action, we are afraid to dig deeper. That is one point. And hopefully we can reach a shared understanding and definition of “terrorist” or “terrorism”; the word is so widely used that everybody misuses it. Yet terrorism, as Dr. Ahmed said at the beginning, means a great deal, and not only one thing but many other things. We have seen it applied to hackers and others; it all gets called terrorist. So thank you.


David Alamos: Thank you very much, Excellency. If there is any further question, I would suggest that after the event you please reach out to our distinguished panelists. I would like to conclude by saying that we still have a lot of challenges. We need to keep on working on strengthening the legal frameworks, and in particular we have UN Security Council Resolution 1373 as a guiding document, a mandatory resolution from the Security Council that has to be taken into consideration. And let me highlight the important role of parliamentarians, not only in developing legislation but also, as has been said, in allocating budgets and in conducting their oversight functions. And especially, as has been reiterated on many occasions, and I would like to conclude with it: the importance of having human rights at the forefront of all our dialogues and decisions in these key matters. Let me conclude by thanking all of the distinguished panelists and experts who have accompanied us during today’s session, and all of you for having been with us and participating in this session. Thank you very much.


A

Ahmed Buckley

Speech speed

128 words per minute

Speech length

1152 words

Speech time

537 seconds

UN Security Council Resolution 1373 as foundational framework for international counterterrorism cooperation

Explanation

Resolution 1373 is the bedrock of international cooperation on counter-terrorism. It provides a framework for member states to work together despite definitional differences on terrorism.


Evidence

The resolution obliged member states to prevent the provision of safe havens and to sanction those responsible for terrorist acts.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Jennifer Bramlette


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Responsible for transposing international commitments into national laws

Explanation

Parliamentarians are responsible for transposing international commitments into national laws. They are the ones who make the correct decisions on appropriations and budgetary allocations to face threats based on credible threat assessments from security agencies.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Emanuele Loperfido


David Alamos


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


J

Jennifer Bramlette

Speech speed

125 words per minute

Speech length

1043 words

Speech time

497 seconds

Resolution 1373 requires operational information sharing between agencies

Explanation

Resolution 1373 initiated a requirement for states to share operational communications information. This set operational interactivity between law enforcement agencies, border control agencies, and other aspects of government that had not traditionally worked together.


Evidence

Ministries of Foreign Affairs, Interior, and Education started working together, sharing operational information that was traditionally held on the security side.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Resolution 1373 provides guidance on legal frameworks to criminalize terrorist acts

Explanation

Resolution 1373 lays out various components that states need to implement in order to bring terrorists to justice. It sets the groundwork for looking at legal frameworks and how states can criminalize terrorist acts.


Evidence

The resolution mandates states to prevent terrorism financing, prevent terrorism arming, and prevent the safe havening of terrorist groups.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Need for digital literacy training to build societal resilience

Explanation

States should invest in digital and AI literacy training to build resilience in their populations. This training should cover how the internet and social media work, and how to interpret information to determine its truthfulness.


Evidence

This training should be provided from children all the way through to elders.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


D

David Alamos

Speech speed

142 words per minute

Speech length

1895 words

Speech time

796 seconds

Resolution 1373 needs to be implemented through national legislation by parliaments

Explanation

UN Security Council Resolution 1373 is a guiding document that has to be taken into consideration as a mandatory resolution from the Security Council. It needs to be implemented through national legislation by parliaments.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


Jennifer Bramlette


Agreed on

Importance of UN Security Council Resolution 1373


Allocating budgets and conducting oversight of counterterrorism efforts

Explanation

Parliamentarians play a crucial role not only in developing legislation but also in allocating budgets and conducting oversight functions in counterterrorism efforts. This is particularly important in the context of using new technologies for counterterrorism.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


Emanuele Loperfido


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


UN entities providing capacity building support to member states

Explanation

UN entities are providing capacity building support to member states in their efforts to counter terrorist use of ICTs. This support is crucial in helping states develop the necessary capabilities to address the challenges posed by new technologies in the context of counterterrorism.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


K

Kamil Aydin

Speech speed

118 words per minute

Speech length

815 words

Speech time

411 seconds

AI enables sophisticated propaganda and automated recruitment by terrorists

Explanation

Artificial Intelligence is being leveraged by terrorist organizations to enhance their operations. This includes producing sophisticated propaganda and automating recruitment processes.


Evidence

Terrorist organizations such as Daesh, Al-Qaeda, PKK and far-right violent extremist groups are increasingly leveraging AI in their operations.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


Agreed on

Challenges posed by AI and ICTs in terrorism


E

Emanuele Loperfido

Speech speed

111 words per minute

Speech length

890 words

Speech time

478 seconds

Deepfakes pose risks of disinformation and eroding public trust

Explanation

The rise of deepfake technologies presents a troubling dimension in the fight against terrorism. These technologies could be leveraged by terrorist groups to spread disinformation, incite violence, or erode public trust.


Evidence

The ability to create convincing but fabricated audio and video content could have far-reaching impacts on social cohesion and national security if left unaddressed.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Kamil Aydin


Abdelouahab Yagoubi


Akvile Giniotiene


Agreed on

Challenges posed by AI and ICTs in terrorism


Need to balance security measures with human rights protections

Explanation

While AI can be a powerful tool in detecting threats and preventing radicalization, its use must always be balanced with respect for privacy and freedom of expression. This dual approach not only strengthens public trust but also ensures that AI innovation remains aligned with shared values of democracy and security.


Evidence

Italy has recently underlined the importance of ethics in AI governance by appointing a theologian and AI expert as a member of the United Nations Committee to lead national AI coordination.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


David Alamos


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


Differed with

Akvile Giniotiene


Differed on

Approach to regulating AI and ICTs in counterterrorism


Importance of public-private partnerships

Explanation

The resolution adopted by OSCEPA emphasized the importance of public and private partnerships in addressing the challenges of AI and ICTs in counterterrorism. This approach is crucial for developing effective strategies to counter terrorist use of new technologies.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


A

Abdelouahab Yagoubi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

AI and ICTs can enhance threat detection and analysis for authorities

Explanation

AI and emerging technologies play a pivotal role in the fight against terrorism and organized crime. They enable authorities to rapidly identify the most effective approaches and strategies.


Evidence

This includes the automatic analysis of vast amounts of data patterns and trends associated with the malicious use of technological tools.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Role of parliamentary assemblies in promoting knowledge sharing

Explanation

Parliamentary assemblies play a crucial role in promoting knowledge sharing on the use of AI and ICTs in counterterrorism. This includes disseminating news and analysis about trends related to technological advancement in various fields, including security and defense.


Evidence

PAM established a permanent global parliamentary observatory on AI and ICT, and began the publication of a daily and weekly digest to disseminate news and analysis about trends related to technological advancement.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


A

Akvile Giniotiene

Speech speed

143 words per minute

Speech length

1044 words

Speech time

436 seconds

Terrorists exploiting cybercrime-as-a-service on the dark web

Explanation

Terrorists are using cybercrime-as-a-service available on the dark web. This service can be procured at a very cheap price and could cause massive effects against critical infrastructure or help terrorists raise money.


Evidence

A recent report is available online regarding terrorist use of cybercrime as a service on the dark web.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Kamil Aydin


Emanuele Loperfido


Abdelouahab Yagoubi


Agreed on

Challenges posed by AI and ICTs in terrorism


Establishing legal mandates for law enforcement to use new technologies

Explanation

It’s crucial that law enforcement agencies have clear legal mandates to use new technologies in counter-terrorism efforts. Without such mandates, their actions may not lead to successful prosecution and adjudication of terrorist offenses.


Evidence

The use of new technology by counter-terrorist agencies should be based on clear provisions of law to ensure the principles of rule of law and adherence to international law.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


Emanuele Loperfido


David Alamos


Agreed on

Role of parliaments in addressing ICT/AI challenges


Differed with

Emanuele Loperfido


Differed on

Approach to regulating AI and ICTs in counterterrorism


Need for cross-border cooperation mechanisms

Explanation

Legal arrangements to support cross-border cooperation are crucial in countering terrorism. This is because terrorists and technologies have no borders, and data is everywhere.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


Agreements

Agreement Points

Importance of UN Security Council Resolution 1373

speakers

Ahmed Buckley


Jennifer Bramlette


David Alamos


arguments

UN Security Council Resolution 1373 as foundational framework for international counterterrorism cooperation


Resolution 1373 requires operational information sharing between agencies


Resolution 1373 provides guidance on legal frameworks to criminalize terrorist acts


Resolution 1373 needs to be implemented through national legislation by parliaments


summary

Multiple speakers emphasized the crucial role of Resolution 1373 in providing a framework for international cooperation, information sharing, and legal guidance in counterterrorism efforts.


Challenges posed by AI and ICTs in terrorism

speakers

Kamil Aydin


Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


arguments

AI enables sophisticated propaganda and automated recruitment by terrorists


Deepfakes pose risks of disinformation and eroding public trust


Terrorists exploiting cybercrime-as-a-service on the dark web


summary

Several speakers highlighted the various ways terrorists are exploiting AI and ICTs, including for propaganda, recruitment, and cybercrime.


Role of parliaments in addressing ICT/AI challenges

speakers

Ahmed Buckley


Emanuele Loperfido


David Alamos


Akvile Giniotiene


arguments

Responsible for transposing international commitments into national laws


Need to balance security measures with human rights protections


Allocating budgets and conducting oversight of counterterrorism efforts


Establishing legal mandates for law enforcement to use new technologies


summary

Multiple speakers emphasized the critical role of parliaments in legislating, overseeing, and balancing security needs with human rights in the context of ICT/AI use in counterterrorism.


Similar Viewpoints

Both speakers emphasized the importance of building capacity, whether through public education or legal frameworks, to address the challenges posed by new technologies in counterterrorism efforts.

speakers

Jennifer Bramlette


Akvile Giniotiene


arguments

Need for digital literacy training to build societal resilience


Establishing legal mandates for law enforcement to use new technologies


Both speakers highlighted the importance of collaboration and knowledge sharing between different sectors and entities in addressing the challenges of AI and ICTs in counterterrorism.

speakers

Emanuele Loperfido


Abdelouahab Yagoubi


arguments

Importance of public-private partnerships


Role of parliamentary assemblies in promoting knowledge sharing


Unexpected Consensus

Dual nature of AI and ICTs in counterterrorism

speakers

Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


arguments

Need to balance security measures with human rights protections


AI and ICTs can enhance threat detection and analysis for authorities


Establishing legal mandates for law enforcement to use new technologies


explanation

There was an unexpected consensus among speakers from different backgrounds on the dual nature of AI and ICTs in counterterrorism – recognizing both their potential benefits for authorities and the need for careful regulation to protect human rights.


Overall Assessment

Summary

The speakers generally agreed on the importance of UN Security Council Resolution 1373, the challenges posed by AI and ICTs in terrorism, and the crucial role of parliaments in addressing these challenges. There was also consensus on the need for capacity building, collaboration, and balancing security measures with human rights protections.


Consensus level

High level of consensus among speakers, suggesting a shared understanding of the complex issues surrounding ICT/AI use in counterterrorism. This consensus implies potential for coordinated international action, but also highlights the need for careful consideration of human rights and legal frameworks in implementing new technologies and strategies.


Differences

Different Viewpoints

Approach to regulating AI and ICTs in counterterrorism

speakers

Emanuele Loperfido


Akvile Giniotiene


arguments

Need to balance security measures with human rights protections


Establishing legal mandates for law enforcement to use new technologies


summary

While Loperfido emphasizes the need to balance security measures with human rights protections, Giniotiene focuses more on establishing clear legal mandates for law enforcement to use new technologies in counterterrorism efforts.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the balance between security measures and human rights protections, as well as the specific approaches to addressing the challenges posed by AI and ICTs in counterterrorism.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers agree on the importance of addressing the challenges posed by AI and ICTs in counterterrorism, but they propose slightly different approaches or emphasize different aspects. This level of disagreement is not likely to significantly impede progress on the topic, but rather suggests a need for a comprehensive, multi-faceted approach that incorporates various perspectives.


Partial Agreements

Both speakers agree on the need to address the challenges posed by new technologies in counterterrorism, but they propose different approaches. Bramlette emphasizes digital literacy training for the public, while Giniotiene focuses on legal mandates for law enforcement.

speakers

Jennifer Bramlette


Akvile Giniotiene


arguments

Need for digital literacy training to build societal resilience


Establishing legal mandates for law enforcement to use new technologies



Takeaways

Key Takeaways

UN Security Council Resolution 1373 remains a foundational framework for international counterterrorism cooperation, especially regarding ICTs


AI and new technologies present both significant challenges (e.g. sophisticated propaganda, deepfakes) and opportunities (e.g. enhanced threat detection) for counterterrorism efforts


Parliaments play a crucial role in addressing ICT/AI challenges in counterterrorism through legislation, budget allocation, and oversight


International cooperation, including public-private partnerships and cross-border mechanisms, is essential for countering terrorist use of ICTs


Human rights protections must be balanced with security measures when developing counterterrorism strategies involving ICTs/AI


Resolutions and Action Items

Parliamentary Assembly of the Mediterranean elected to Presidency of the Coordination Mechanism of Parliamentary Assemblies on Counter-terrorism


OSCE Parliamentary Assembly adopted the Bucharest Resolution on Artificial Intelligence and the Fight Against Terrorism


PAM established a permanent global parliamentary observatory on AI and ICT


Unresolved Issues

How to effectively balance security measures with human rights protections in the digital sphere


Addressing the potential misuse of the term ‘terrorism’ in law enforcement and legislation


Developing comprehensive legal frameworks to govern the use of new technologies in counterterrorism efforts


Suggested Compromises

Adopting technology-neutral language in legislation to focus on criminalizing actions rather than specific tools


Investing in digital and AI literacy training to build societal resilience against online threats and misinformation


Establishing independent review and redress mechanisms for the use of new technologies by law enforcement agencies


Thought Provoking Comments

Despite all of our definitional differences on what is terrorism, who is a terrorist, or all our haranguing on these definitions, we were still, as an international community, able to make large strides on counter-terrorism cooperation. And the bedrock of that cooperation was UN Security Council Resolution 1373 and its descendants.

speaker

Dr. Ahmed Buckley


reason

This comment highlights the importance of international cooperation in counter-terrorism efforts, despite definitional challenges. It sets the tone for discussing practical approaches rather than getting bogged down in semantic debates.


impact

It framed the subsequent discussion around concrete actions and cooperation, rather than theoretical debates about definitions. This allowed for a more productive conversation focused on implementation and parliamentary roles.


Parliamentarians are of course the legislators, they are the ones responsible for transposing all of these international commitments into national laws, but they are also the dispensers of resources. They are the ones who make the correct decisions on appropriations and budgetary allocations to face the threats based on credible threat assessments from the security agencies.

speaker

Dr. Ahmed Buckley


reason

This comment succinctly outlines the crucial role of parliamentarians in counter-terrorism efforts, highlighting both their legislative and budgetary responsibilities.


impact

It shifted the focus of the discussion to the specific roles and responsibilities of parliamentarians, leading to more detailed explorations of how they can contribute to counter-terrorism efforts in practical ways.


As AI capabilities evolve, so does the potential for them to be used in ways that threaten peace and stability. For example, widely available AI-driven tools could enable individuals or groups to access technologies such as drones that could be misused for surveillance, targeted attacks, or other malicious purposes.

speaker

Honorable Emanuele Loperfido


reason

This comment introduces the dual-use nature of AI technologies and their potential misuse by malicious actors, highlighting a key challenge in the intersection of technology and security.


impact

It sparked a more nuanced discussion about the challenges of regulating and governing AI technologies in the context of counter-terrorism, leading to considerations of balancing security needs with ethical concerns and human rights.


Resolution 1373 set the groundwork by initiating a requirement for states to share operational communications information. Now, it seems like a pretty small mandate. But that set operational interactivity between law enforcement agencies, border control agencies, between aspects of government that had never traditionally worked together, usually operational information was held on the security side of the house.

speaker

Jennifer Bramlette


reason

This comment provides historical context and highlights the transformative impact of Resolution 1373 on inter-agency cooperation, which is crucial for effective counter-terrorism efforts.


impact

It deepened the discussion by emphasizing the importance of information sharing and inter-agency cooperation, leading to further exploration of how to enhance these aspects in the context of new technologies.


To regulate, legislate and deliver proper oversight of new technologies in the counter-terrorism domain, first, you need to understand what the threat is, how malicious actors can abuse new technologies, and what the opportunities are for law enforcement and wider communities to use new technologies in this regard.

speaker

Akvile Giniotiene


reason

This comment emphasizes the importance of understanding both the threats and opportunities presented by new technologies before attempting to regulate them, highlighting the need for informed policymaking.


impact

It shifted the discussion towards the importance of technological literacy among policymakers and the need for ongoing education and collaboration between tech experts and legislators.


Overall Assessment

These key comments shaped the discussion by emphasizing the importance of international cooperation, the crucial role of parliamentarians, the dual-use nature of AI technologies, the need for inter-agency information sharing, and the importance of understanding both threats and opportunities before legislating. The discussion evolved from broad principles of counter-terrorism to specific challenges and opportunities presented by new technologies, with a consistent focus on the role of parliamentarians in navigating these complex issues. The comments collectively highlighted the need for a multifaceted approach that balances security concerns with ethical considerations and human rights, while also emphasizing the importance of technological literacy among policymakers.


Follow-up Questions

How can the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373 be improved and updated?

speaker

Dr. Ahmed Buckley


explanation

The handbook should be a living document that incorporates good practices from parliaments and evolves with new developments


How can parliaments develop more effective legislative frameworks and strategies to counter the abuse and misuse of AI and emerging technologies?

speaker

Honorable Abdelouahab Yagoubi


explanation

This is crucial for addressing the evolving threats posed by malicious actors using new technologies


How can states effectively balance security needs with human rights protections when implementing counter-terrorism measures using new technologies?

speaker

Jennifer Bramlette


explanation

This balance is essential to ensure that counter-terrorism efforts do not compromise fundamental freedoms


What legal mandates and policy frameworks are needed to enable law enforcement to effectively use new technologies for investigating and prosecuting terrorist offenses?

speaker

Akvile Giniotiene


explanation

Clear legal and policy foundations are necessary for law enforcement to leverage new technologies while adhering to the rule of law


How can parliamentarians improve their understanding of the threats and opportunities presented by new technologies in the counter-terrorism domain?

speaker

Akvile Giniotiene


explanation

A deeper understanding is crucial for effective legislation and oversight of counter-terrorism efforts involving new technologies


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #138 Empowering End Users Voices in Internet Governance

Session at a Glance

Summary

This discussion focused on empowering end-users’ voices in Internet governance through multi-stakeholder approaches. Participants emphasized the importance of including diverse perspectives, particularly from underrepresented groups, in shaping digital policies. They highlighted challenges such as digital divides, language barriers, and power imbalances that hinder meaningful participation.

Several speakers stressed the need to evolve multi-stakeholder models to be more inclusive and results-oriented. They suggested disaggregating stakeholder categories to better reflect diverse interests and improving mechanisms for filtering up local concerns to global forums. The role of governments in facilitating inclusion was debated, with some emphasizing their unique responsibilities while others cautioned against overreliance on traditional power structures.

Participants discussed strategies for engaging end-users, including citizen assemblies, opinion polls, and leveraging emerging technologies like AI for improved accessibility. However, concerns were raised about potential biases in AI systems and the need to involve underrepresented groups in technology development. The importance of creating channels for expression and empowering users to shape technologies’ future was emphasized.

The discussion touched on the changing digital landscape, particularly the impact of AI and the need for governance to keep pace. Speakers noted the challenges of balancing rapid innovation with inclusive decision-making processes. The upcoming WSIS+20 review was highlighted as a crucial opportunity to reaffirm and refine multi-stakeholder approaches in Internet governance.

Overall, the conversation underscored the complexity of ensuring meaningful end-user participation in Internet governance while adapting to technological changes and addressing systemic inequalities.

Keypoints

Major discussion points:

– The importance of including end-user voices in internet governance, while recognizing the challenges in defining and engaging diverse end-user groups

– The need to evolve and improve multi-stakeholder processes to be more inclusive, effective, and results-oriented

– The role of governments in internet governance and ensuring global agreements remain relevant amid rapid technological changes

– The potential of new technologies like AI to enhance participation, while also considering risks of perpetuating inequalities

– The importance of engaging youth and underrepresented groups in shaping the future of internet governance

Overall purpose:

The goal of the discussion was to explore ways to empower end-users’ voices in internet governance and improve multi-stakeholder processes to be more inclusive and effective in the rapidly changing digital landscape.

Tone:

The tone was largely constructive and collaborative, with participants building on each other’s ideas. There was a sense of urgency about the need to improve current approaches, balanced with optimism about potential solutions. The tone became more reflective towards the end as participants summarized key takeaways.

Speakers

– Pari Esfandiari: Moderator

– David Souter: Managing Director, ict Development Associates

– Carol Roach: Government stakeholder representative

– Olga Cavalli: Government stakeholder representative

– Amrita Choudhury: Civil society representative

– Wolfgang Kleinwächter: Expert in Internet governance

– Olivier Crepin-Leblond: Internet governance expert

– Ellen Helsper: Researcher on links between social and digital inclusion

– Sebastien Bachollet: Online moderator

– Yik Chan Chin: Summarizer of key takeaways

Full session report

Empowering End-Users’ Voices in Internet Governance: A Multi-Stakeholder Approach

This summary reflects the discussions held during an Internet Governance Forum (IGF) session focused on empowering end-users’ voices in Internet governance through multi-stakeholder approaches. The session featured a panel of experts, invited community leaders, and audience participation, exploring challenges and opportunities in creating more inclusive and effective Internet governance processes.

Setting the Context

The session began with polls to gauge audience perspectives on multi-stakeholder approaches and end-user participation in Internet governance. Participants, including digital policy experts, government representatives, civil society members, and researchers, emphasized the critical importance of the multi-stakeholder model in Internet governance while acknowledging its challenges.

Wolfgang Kleinwächter highlighted the historical context, referencing the NetMundial Plus10 and the Sao Paulo guidelines, which laid the foundation for multi-stakeholder Internet governance. He noted, “We have no clear procedures how multi-stakeholder collaboration works in practice.”

Key Themes and Challenges

1. Inclusion and Representation

A central theme was the importance of including diverse perspectives, particularly from underrepresented groups. David Souter highlighted the digital divide between governments and other stakeholders, while Olga Cavalli pointed out barriers such as language, finances, and lack of information. Ellen Helsper emphasized the underrepresentation of vulnerable groups and the Global South, noting, “Young people in these regions make up a majority of the population but are often excluded from governance discussions.”

Carol Roach cautioned against oversimplification, stating, “We need to stop looking at people as being one dimensional and review how we label boxes and how we label people.” This insight challenged current stakeholder categorizations and prompted consideration of more nuanced approaches to representation.

2. Evolving Multi-Stakeholder Models

Several speakers stressed the need to evolve multi-stakeholder models. David Souter argued for disaggregating and expanding stakeholder categories beyond the current model of four groups, suggesting this doesn’t capture the complexity of interests. He stated, “We need to be multisectoral in thinking about it. The internet is not the end in itself, in other words, it’s means to an end.”

Amrita Choudhury stressed the need to strengthen the legitimacy of civil society stakeholders beyond tokenism. Carol Roach proposed considering media as a separate stakeholder group, highlighting its unique role in shaping public opinion.

3. Role of Governments and Power Dynamics

The role of governments in facilitating inclusion was debated. Olga Cavalli noted that governments have unique responsibilities but must understand the multi-stakeholder approach. David Souter highlighted the need to address power imbalances between stakeholders, a point echoed by Ellen Helsper who specifically mentioned the role of big tech companies in shaping the Internet.

4. Artificial Intelligence and Internet Governance

The potential of AI to enhance participation was a significant point of discussion. Olivier Crepin-Leblond was optimistic about AI’s potential to overcome language barriers, stating, “AI will help me in that. And I’ll develop a tool for this for my own means.” However, Ellen Helsper cautioned about AI models potentially perpetuating existing inequalities, highlighting the complex relationship between technology and inclusion.

5. Engaging End-Users and Creating Channels for Expression

Participants discussed various strategies for engaging end-users, including citizen assemblies and opinion polls. Wolfgang Kleinwächter emphasized the importance of creating channels for everyone to express their opinions. Amrita Choudhury highlighted the importance of creating narratives to engage end-users on issues that affect them.

Ellen Helsper stressed the need to counter disempowering discourse around technology, while Yik Chan Chin called for self-motivation from end-users, presenting an unexpected difference in approach to end-user empowerment.

6. Capacity Building and Awareness-Raising

Several speakers emphasized the importance of capacity building and awareness-raising for end-users. Olga Cavalli highlighted the need to understand how new generations use information and media, stating, “We need to understand how the new generations are using information and media.”

7. National and Regional IGFs

The importance of national and regional IGFs in fostering local participation and addressing context-specific issues was discussed by several speakers, emphasizing their role in building a more inclusive Internet governance ecosystem.

Conclusion

The discussion underscored the complexity of ensuring meaningful end-user participation in Internet governance while adapting to technological changes and addressing systemic inequalities. It highlighted the need for more inclusive and representative Internet governance, improved multi-stakeholder processes, and careful consideration of the role of emerging technologies in both enabling and potentially hindering participation.

As the digital landscape continues to evolve rapidly, the conversation emphasized the urgency of addressing these challenges to create a more inclusive, effective, and forward-looking approach to Internet governance. The upcoming WSIS+20 review was highlighted as a crucial opportunity to reaffirm and refine multi-stakeholder approaches in this context.

Session Transcript

Pari Esfandiari: Well, good morning and welcome everyone. Whether you are joining us online or here in person, it’s a beautiful day in Riyadh, Saudi Arabia, which has the honor of hosting the IGF this year. I would like to take a moment to express my heartfelt gratitude to the IGF for its invaluable contributions to the Global Digital Governance Dialogue. This platform not only brings us together, but also empowers us to engage in meaningful conversations and contribute to shaping the future of digital governance. My name is Pari Esfandiari, and it’s an honor to moderate today’s critical discussion on empowering end-users’ voices in Internet governance. As you can see on the screen, joining me is Sebastien Bachollet, a well-known figure in Internet governance. He’s joining us virtually and will moderate the online comments and questions. I am also joined by a distinguished group of panelists, David, Carol, Olga, and Amrita. They are here and will provide their perspectives and thoughts. We are also joined by invited community leaders, Olivier and Wolfgang, who are here, and Ellen, who will join virtually, both to express their perspectives, but also to include you, the community, in this interactive session. We are also joined by Yik Chan Chin. She will summarize the key takeaways. As you can see, we have renowned leaders in the field of Internet governance. Their contributions speak for themselves, and they hardly need an introduction. We go now to the next slide. Before we dive in, let me briefly outline our agenda for the next 90 minutes. First, I will set the stage and introduce today’s topic. Then our panelists will address three core questions, offering their diverse and unique perspectives. This will be followed by invited community leaders sharing their responses, fostering a dynamic exchange of ideas. We will then open the floor to all participants for comments and questions. We will wrap up with reflections, a summary of takeaways, and closing remarks.
So now let me take a moment to set the scene. As we gather here… Thank you. Thank you. Today, it’s clear that the Internet has evolved far beyond being just a tool or platform. It’s now the backbone of our interconnected world, driving economies, transforming societies, and deeply impacting personal lives. With its integration into nearly every aspect of life, governing the Internet has become an increasingly complex and critical task. This complexity is heightened by rising geopolitical tensions and the inherent friction between the Internet’s borderless nature and traditional nation-state frameworks. In this context, the need for inclusive global agreements, adaptable standards, and collaborative approaches is more urgent than ever. The evolution of Internet governance reflects a profound shift in the dynamics of power, influence, and collaboration in the digital age. Traditional multilateral and bilateral frameworks often struggle to keep pace with the rapid technological advancement and transnational challenges of the Internet. This is where the multi-stakeholder approach emerges as indispensable. Unlike conventional governance models dominated by governments, the multi-stakeholder approach acknowledges the Internet as a shared global resource, requiring shared responsibilities and diverse representation, where governments, civil society, the technical community, academia, and the private sector work collaboratively to navigate this complex landscape. At the heart of this ecosystem are those who are impacted. Their perspectives are not only valuable but fundamental to shaping an Internet that reflects the needs and aspirations of global communities. They bring critical first-hand insights into navigating the digital landscape, from addressing privacy concerns and ensuring accessibility to building trust and fostering innovation. Policies shaped by these lived experiences are more likely to be effective, trusted, and widely adopted.
Conversely, excluding them risks governance being dominated by narrow interests, perpetuating inequalities and missing opportunities for meaningful progress. Yet, despite its necessity, as well as its amazing achievements, the multi-stakeholder approach faces serious challenges, as highlighted during NetMundial Plus10 earlier this year. These challenges include issues of representation, inclusivity, meaningful participation, inefficiency, and a perceived inability to deliver actionable results. These concerns underscore the need for reform, innovation, and a renewed commitment to making the multi-stakeholder approach work, not just in principle but in practice. The stakes have never been higher as we approach the WSIS Plus20 review, a pivotal moment to shape the future of Internet governance. Now, with this context in mind, I would like to engage with our audience by launching three quick polls on today’s key topics. You can… You have one minute to respond. Okay. This is poll number one. Okay. Hmm. Oh, you can’t… I think there’s… Yeah, we have 8%, one and one, and I think we now stop the polls and we continue. Could we end the polls, please? Yeah, we have 8%. 1%, 1%, and 1%. So could we please end the polls, because I cannot now change the slides. Okay, it’s changed. Okay, could we? So, sorry for this. So, now we have three overarching questions as shown on the screen. To delve into this discussion, I would like to begin with David. So, if David is online, the issue of inclusion of Internet users has been underscored, but who exactly are we talking about, and what are the barriers here? Please limit the response to three minutes.

David Souter: So, I think I’d like to start by building on what you’ve just been saying, because to me, what matters about the internet and the work I do is on the development of digital policy, which includes at the moment working for the United Nations on the 20-year review of the WSIS process. What matters most to me is issues around impact. And on the whole, internet governance has been largely led by digital insiders: by businesses, by the technical community, by government departments that are involved in the supply of the internet rather than its impact on society as a whole. So, the question here I think is particularly driven by the way the internet has evolved to be something that is now impactful across all areas of economy, society, and culture. So, the first part of the answer is actually not to do with the end users themselves, but to do with the expertise that is involved in internet governance discussions. I think that needs to be much, much more informed, at least as much informed by people whose expertise lies in those fields of impact rather than in the fields of the internet itself. So by environmental experts, by health specialists, by educators and so on. We don’t have sufficient space for that in internet governance. In terms of end users, they’re of course very diverse. And they’re the demand side rather than the supply side of the internet. So not just individuals, but also organizations: businesses, trades unions, sports clubs, religious organizations, whatever. Not just organizations, but also individuals who are also very diverse in where they come from, in age, gender, education, and in their requirements of the internet. Not just intensive users, but also occasional users. Not just those who want to take part in internet governance. We also need to understand the perspectives of those who don’t want to take part in the process. And not only users, because non-users are also severely affected these days by the impact of the internet on their lives and their societies. So there are ways to get a wider range of views like this. And just, maybe I’ll come back to these later.
But I would particularly look at ways that do not just attract vested interests or insiders to the process. So a couple of things that might be considered here are the kind of household surveys or opinion polls that have been used a lot by Research ICT Africa and by Ofcom, the regulator in Britain, and citizens’ assemblies, which have been successful ways of bringing in the very wide diversity of views on controversial issues in some societies, for example in Ireland, as ways of ensuring that discussion is informed by everyone and not just by those who want to take part.

Pari Esfandiari: Thank you very much David. And now Carol, you heard David’s comments and how he expands the concept of end-user. With your leadership experiences, why do you think that they remain invisible in the multi-stakeholder process?

Carol Roach: Thank you. Thanks David. End-users are part of, or trying to be part of, the digital society, so that means that they want to get involved using technology for social reasons, that means education, health, employment, or even if you look at governments or civil servants, they have to provide services online. So the barrier to the multi-stakeholder process, for the end-user, is that we don’t tend to determine that we may have missed an end-user within a group of persons. We tend to group them a lot. So you find that the barriers that you find offline are the same type of barriers that you would find online. So you might have an end-user that’s missed because of their economic standing, or they don’t have the capacity, they may have some kind of disability, and therefore they’re not aware that they could be part of the multi-stakeholder process. They also think that there’s some representative out there doing the work, so it’s not me. And I think it’s a lack of awareness, because we tend to categorize people and put them in this labeled box, and sometimes I might be somebody that falls in more than one category. So therefore I’m not in a box anywhere. I’m totally left out. So I think we need to stop looking at people as being one dimensional and review how we label boxes and how we label people.

Pari Esfandiari: Thank you very much, Carol. And with that, Olga, in your view, to what extent do the barriers lie in inclusion and how much are they rooted in a lack of participation?

Olga Cavalli: Thank you. Can you hear me? Yes. Thank you very much. First, thank you very much for inviting me. I’m very honored to be with all these very important people here in this room. I would like to build upon what Carol said, and I totally agree with you. Whether you work in a big company or a government or in a civil society organization, you’re always an end user. You have your own life, you learn, you communicate with your first, with your students or with your friends or with your family through the Internet. So at the moment we are always end users. So I always find somehow weird this division, for example, in ICANN you have the end users in one place and then you have that label thing that you mentioned. I think it’s a very interesting way of describing it and put it into words. Barriers. The ones that we always come very easily to our mind, lack of resources to participate, which we all know that it’s a problem, especially for developing economies, people living far away from where the meetings are happening. This is the beauty of rotation of the meetings, because you always have the possibility of having something closer to your home or at your own town. And then it’s the language barrier. I don’t know in other regions I don’t have that deep insight, but in Latin America that is a big barrier. Many people are able perhaps to read English, but hearing a native speaker of other language, English, is complicated. So that is a barrier which is important. But I would like to also stress another barrier, which I think is a lack of information. Sometimes people don’t know where to go. There are diversity of spaces of participation. They don’t know how to direct their interests, which meeting they should be focused on. There are several and sometimes they don’t know how. There are sources of funding, for example, they don’t know. 
So I talk with my students about the fellowship of ICANN or some other fellowships from ISOC to participate in IGF, in ICANN, and they have no idea. So it’s communication, it’s information, and also it’s capacity building about this, how to participate and how to participate in a meaningful way in all these different spaces where we can make our voices heard. So it’s not only money, it’s not only resources, but it’s also information, communication, and a good networking to spread this news.

Pari Esfandiari: Thank you very much. And with that, we go to Amrita. Amrita, from a grassroots and civil society viewpoint, what are your thoughts on this?

Amrita Choudhury: Thank you. So if you look at the end user, and I’ll go to that question first, end user where? Different processes will have different people as end user. A government also can become an end user of a process. So we need to be very clear who the end user is and what the impact, as David was saying. So that’s one thing, and end-users are not homogenous. If we think just bringing three people into the room when AI is being discussed would be end-user, no. Who is impacted? How is the kind of impact? Do they understand it? Is important. For the grassroots level, as Olga was mentioning, one of the important things is capacity. Everyone at the grassroots does not have the same resources to understand what the global discussions are all about. Are we building that amount of capacity? Because the learnings, the amount of learning which goes is extremely high. Are we building it? And I think the Sao Paulo principles also speak a bit about that. The other thing, obviously, finances is this. You know, resources are another thing. Even amongst grassroots NGOs, there may be bigger ones, there may be smaller ones. Are we making it equitable amongst the developed and developing countries? I think there are many things which needs to be looked into. Many dimensions, apart from languages, skills, et cetera. I’ll leave it at that.

Pari Esfandiari: Thank you very much, Amrita. With that, I go to Carol. From your experiences, what best practices ensure meaningful inclusion?

Carol Roach: A very good question. We tend to talk about inclusion all the time, but I don’t think we break it down to say, who’s not being included? We need to be able to identify and understand what their need is, why their need, and we go back again to thinking that persons are one-dimensional, and we’re not. So therefore, we need to look at a stakeholder management model, and there are so many models that we can apply, something like stakeholder mapping, where let’s say we look at the interest level and the abilities of the person and we create a strategy based on that, because just creating one strategy, it doesn’t fit everybody. So you really need to sit down and take stock of who the end user is, who we’re trying to reach, who we are missing. And another thing you need to do is to make it an iterative approach in terms of, okay, I tried this strategy. Who did I capture? Did I meet my objectives? If not, well, let me go back at it. Let me make a change to it. Let me see who I did miss out and then what’s my strategy to reach that person. And you just keep doing this iterative approach so you could be more agile. Sometimes we write these big strategies on paper and we say, okay, that’s it, I’m done. Let’s try to implement it. And it usually doesn’t work, or you don’t get the impact that you would want. There’s also, for stakeholder management, a spectrum, because people will fall somewhere on a spectrum and you can decide, okay, what are my different criteria on the spectrum? And you could create different strategies for it. It’ll require more resources, but if you want to be impactful, then you need to take the time to understand, as Amrita and everybody’s saying, who really is the end user? Am I trying to reach the government, the public service? Am I trying to reach the persons that use public services? Who am I really trying to reach? And what is it that they’re interested in?
Sometimes we impose what we’re interested in onto what we think other persons are interested in.

Pari Esfandiari: Thank you very much. And with that, I go to Amrita. Amrita, you heard Carol. So from your point of view, how could grassroots approaches better support inclusion?

Amrita Choudhury: I think, for inclusion at the grassroots level, as in not from the grassroots, David gave an example of Africa, where there are community discussions happening. But training the trainers to work at the grassroots is important. For example, if I look at a country like India, with 1.2 billion people, just five community meetings will not be enough. You need language, you need to build that capacity. And it cannot be one size fits all for all topics. If I'm taking, say, AI for good, which is a buzzword these days, and you want to use it, how is it helping in agriculture, or climate change, or even jobs, for that matter? You have to know who in that area is working. Mapping, as Carol mentioned: how do you build their skills? Are your interests and their interests aligning? And how do you get the feedback and take it up when decisions are being made? I think that's also important, how you map. It's not going to be the same even for similar places, but building the capacity, having that information flowing, when you give suggestions, how is it being used or not used, the transparency in the processes, I think those are important, and building accountability. For example, the problem that many point out with multi-stakeholderism is that we don't have stakeholder accountability. Are we trying to bring in some accountability for what I am preaching? Thank you.

Pari Esfandiari: Thank you very much, Amrita. With that, I go to Olga. What role can governments play in including underrepresented voices?

Olga Cavalli: Thank you for the question. There is one thing in the multi-stakeholder concept that I usually point out: this confusion between equal footing and all stakeholders being equal. People say, oh, we all sit together and we all talk together, but the responsibilities of each stakeholder are different. And I think that the government has a particular and important role, because governments are responsible for security, for promoting the economy, for taking care of citizens, for security in the streets and all that. So they have an important role, and I think we as members of the community have a big challenge in trying to make governments understand the beauty and the importance of building a real, open multi-stakeholder environment to interact with alongside these multilateral meetings. Both are okay. And there is this fantasy that multi-stakeholder is easier. No, it's much more difficult, because you have to bring everyone to the table and make sure all stakeholders really have a good, open dialogue. Multilateral is easier: you put all the representatives of governments together, they talk with their advisors, and that's it, they produce a document. That's very important, but at the same time governments must understand that the inclusion of end users and other stakeholders in the dialogue is fundamental for these new technologies that are impacting society. So they are a very important stakeholder. I wouldn't say more important than others, but they have a kind of gathering role for all parts of society.

Pari Esfandiari: That's a great point, and one that's often overlooked. With that I go to David. What strategies can help make multi-stakeholder processes more inclusive for underrepresented groups?

David Souter: Okay, so I think the starting point here, which applies not just to issues around the internet but to everything, really, is that as a policymaker, if you want to engage with the people whose lives your policies impact upon, you have to engage with them on terms that have meaning for them and that encourage them to participate. So there are, I think, a couple of points here. First, most people, and this includes most end users, don't have the time, the inclination or sufficient interest to get deeply involved in the issues that are the priorities in most internet governance discussions. They're not interested in how the technology works; they're interested in what it does to them. So the internet governance institutions, if they want to reach out to those whose lives are impacted, have to do so by starting from the point of view of what is important to them, what impacts matter to them, how their daily lives are affected, and then reach back from that to what the internet governance and technology questions are and how they should respond to those. The internet is not the end in itself, in other words; it's a means to an end. We need to be multisectoral in thinking about it. The suggestions that I made earlier are, I think, trying to do that sort of reaching beyond. So the point of household surveys or opinion surveys is to try and get to those people who would not naturally participate. And citizens' assemblies, which I also mentioned, are a particularly effective way, I think, of doing that on complex issues over a period of time. What you do with those is you have a randomised selection, representative of the population as a whole, of maybe 100-200 people, who over a period of time, with expert input, discuss an issue that is complex and difficult and challenging, and seek to reach consensus about it, which is a consensus of the opinion of society.
It’s been very helpful in a number of countries in dealing with issues that are highly contentious, such as those to do with reproductive rights, abortion and gay rights, for example, in Ireland. And I think that is a way of getting to the public, as opposed to the much easier thing that happens, which is internet governance, insiders talking to themselves.

Pari Esfandiari: Thank you, David. And while I have you, maybe you could comment on one key challenge: the fast-changing digital landscape, and how the multi-stakeholder approach can adapt to it.

David Souter: The biggest challenge in the digital landscape at the moment is to do with frontier technologies, artificial intelligence, if we use that term, and other things too, where the pace of technological change is too fast for our institutional frameworks, the regulation, the governance, to deal with the uncertainties and risks that are associated with them. That makes it particularly important to understand the purposes of technological change as being about the common good, and so to understand what kind of long-term goals we might have for society as a whole, rather than seeing them as being about the good of the technology itself.

Pari Esfandiari: Thank you, David. With that, I go to Carol. How can multi-stakeholder discussions stay flexible and responsive to digital changes?

Carol Roach: So if you're talking about global agreements, there's usually an argument between multilateral and multi-stakeholder. But what I tell persons, and it could be because I'm from the government stakeholder group, is that at the end of the day, the people vote for governments. They don't vote for civil society. They don't vote for technical companies. They vote for people who will represent them. So when it comes to global agreements, as Olga says, no, not all stakeholders are created equal all the time, every time. In a case where you're talking about negotiations for global agreements, the government is an important stakeholder. Now, they have the influence, but a lot of times they don't have the interest. So what we need to do is ensure that we raise the interest level. We need to keep the awareness up. Each country or state has a mission that will actually do the negotiations for them, so we need to find some way in which we can raise their awareness, and we have to do it constantly. We just can't say, okay, wow, there's an agreement coming up that has to be signed, let's try to get some meetings with them. No. If you keep them constantly updated and aware, then they feel comfortable that you're not just trying to pressure them into an agreement. So I think we just need to keep it constant. And as someone said, I can't remember who, we need to make the stakeholders more accountable. You have to be a part of it; you just can't sit back. You have to play a part. You can't just say, oh, look what they did. You have to be accountable.

Pari Esfandiari: Thank you very much. And with that, I go to Olga. How can governments ensure global agreements remain relevant amid rapid technological changes?

Olga Cavalli: That is a very interesting question, and a very difficult one. Governments are not equal among themselves either. It's not the same for the government of a small developing country as for a global leader. So for developing economies it's a challenge, because in developing countries, and I live in one, the urgencies are different. There are many pressing things, economic problems, strikes, or inflation, that have to be solved in the short term, and they have a very big impact on society. So when you go to them and say, hey, we need to talk about artificial intelligence, it's, oh, Olga, what are you talking about? We don't have time for that. But I think Carol made a very interesting point. We have to be consistent. We have to present information in a way that they can quickly digest and use. You cannot give them 100 pages to read. Perhaps a brief document that opens their eyes to some negotiations that may be global but in the end will have an impact at the national level. We have seen that, for example, with new gTLDs. I've been talking about this with my government for decades. And then, once the name of one of our regions in Argentina got taken as a TLD by a company, it was, oh, it's so good that you're here. Okay, I've been talking about this for years. So it is a process; it's not a one-point thing. It's going patiently to their advisors and to the government to tell them that there are global decisions that will someday have an impact at the national level, and they have to be aware of that. But it's challenging, especially in developing economies.

Pari Esfandiari: Thank you very much. And with that, I go to Amrita. Amrita, how can grassroots and civil society voices help keep multi-stakeholder processes adaptable?

Amrita Choudhury: By giving regular and constructive feedback. Many times, as Olga rightly said, governments have sovereign interests they need to protect. However, in many developing countries, in the name of sovereign interests, the interests of end users or others are overridden. So end users, and I would say civil society organizations, should continue to raise their voices and point out the things which need to be corrected, because at the end of the day, if you look at the internet or digital technologies, they impact everyone. And if the concerns are not taken up and deliberated in a nuanced way, no process or regulation can work. The reason different stakeholders have to be there is not a question of having everyone at the table; it is to get the legitimate concerns and advantages brought to one point, so that when decisions are taken, all aspects can be heard, not necessarily adopted, but at least heard. And there is buy-in when you have to implement those things. So it is in the interest of a smart government, if they really want things to happen on the ground. I think grassroots-level civil society has to keep on raising its voice and calling people out to make them more accountable. Thank you.

Pari Esfandiari: Thank you very much. And with that, we now go to our invited community leaders. You heard us set the stage, and you heard the panel. I would like you now to make a couple of comments about what you have heard so far. Who wants to go first?

Wolfgang Kleinwachter: Yeah, thank you. Thank you very much. It's an inspiring discussion, and it reminds me of debates we had nearly 30 years ago, in the 90s, when all this was new and people came up with ideas for a cyber democracy. I haven't heard much in the last couple of years about cyber democracy, but in the 90s this was the catchword, and there was a question: what is cyber democracy? Some people said, okay, people with a passport are citizens, and internet users are now netizens, and they should have the same rights as citizens. And so the idea of elections came up, because the accountability question had already been raised, in particular in the ICANN context, and we had this very interesting experiment in the year 2000 to give all internet users a right to participate in a global election. At that time it was for five directors of the ICANN board. It was an incredible experience, and the conclusion from this election was that people who were at first excited about this global election and global cyber democracy became a little bit, you know, disillusioned in the process and more skeptical, while people who were skeptical in the beginning said, okay, this is something new, we could have reached a level of accountability also for stakeholder groups by continuing with the elections. The wise decision which was made by ICANN in the year 2002 was, you know, to find a mix between what in democratic theory is called representative democracy and participatory democracy. There was a long debate about whether participatory democracy would remove or substitute representative democracy. And the outcome was: no, this brings additional value to the process. That means participatory elements are important, in particular when representative democracy has reached a certain limit.
Insofar as that goes, user participation is an important element, you know, to bring more sustainability to decisions, to bring more voices, more perspectives to the policy development process. And then it depends on the issue, because we have always distinguished between policy development and decision making. For decision making, you have to have a certain authority. But before a decision is made, the policy development process is even more important. That means if you have a good, broad, open, inclusive policy development process, then the decision maker, at the end of the day, just rubber-stamps the recommendation which comes out of the PDP. That is in the ideal world. But the problem is, and I remember the argument from 30 years ago: do you really want to go for global elections? Do you want five billion people going to the ballot box? How can you organize this? So there were also a few illusions and dreams around it, and bringing it down to the real situation in 193 countries is difficult, really, if you have the wish to invite everybody to the process. So there is a natural barrier, and not only barriers like language, finance and things like that. That means people who buy a car do not have to be engineers and do not have to understand how to build a car, but people have to understand the rules when they use the car. And so, when we speak about user involvement, the question is then where, and for what? You have to be a little bit more specific. For me, and this is my final word, the most important thing is that there is a channel for everybody where they can express their voice and make their position heard. In a democracy we have free media, we have all kinds of ways people can express their voices and a channel where they can participate in policymaking in their country.
And in our internet world, that's why the national IGF is the best institutional framework you can have, because an IGF gives you an opportunity to bring everybody to a table; it's like a round-table discussion. A business person has a different perspective from a technical expert, civil society organizations have yet another, and if governments are wise, they will listen to what's going on there, and then everybody goes home and makes the decisions where they have the authority to do so. This is a little bit idealistic; I'm an academic person, so I work with models, but I think you have to have a vision if you want to move forward into reality. Thank you.

Pari Esfandiari: Thank you very much. I see Olivier nodding and agreeing with all the comments made. So maybe you would like to make your comments.

Olivier Crepin-Leblond: Yeah, thank you very much, Pari. Olivier Crepin-Leblond speaking. I agree with a lot of the things that were said in this session. Of course, having been involved with internet governance for quite some time, there are a lot of things that we are hashing over again and again, but we don't seem to have solutions for them. Carol was mentioning the need not to put people in boxes, but it's so easy to put people in boxes. It's, oh, what stakeholder group are you? And then, there you go, you've got a label. We've dealt with those people; let's deal with the others. That's one of the things that we've been used to doing. Olga mentions that there are big governments and small governments; you can't just put all governments under the same banner. And of course, everyone is a user at the end of the day. Amrita mentioned that the learning barrier is really high. And I've got a thought about this, because yes, there is a learning barrier with everything. And of course, Wolfgang mentioned that you don't need to know how to build a car to be able to operate one, but you do need to learn how to operate it. My belief, and by the way, I don't forget David's description of methods, which I find interesting, about sampling people, taking a representative sample of the population and then asking them questions. I'm a firm believer in technology. And I think that we are, at the moment, living through a fundamental change. The past year or two have brought a fundamental change in how everything happens. First, we're seeing this completely crazy instability worldwide with regards to politics. Things that we would never have imagined are actually happening, things that nobody even forecast. It seems that intelligence agencies worldwide are either on holiday or something, but they didn't tell us that something was going to take place. And suddenly, you turn on the TV, and you think, oh, this has happened.
And you're just thinking, oh, we're living in this crazy reality TV show. And why is that? Well, I have no answer for this, but one thing I do know is that there is a fundamental change in the way we're doing things that we need to embrace, and that's the use of artificial intelligence. That is a tool so powerful that I really think it will help us in our aim to make multi-stakeholder governance succeed, despite the various barriers in front of us, for example, languages. We all speak different languages; we have a common language that we're using, which is English, and we sometimes use interpreters, but that's extremely expensive. I believe that AI, with automatic interpretation, will be able to help us greatly in this respect. Finances: well, financing is still a huge problem, because we all feel the need to meet face-to-face. But with the technologies that we have and that are going to be developed, it's going to be easier and easier to interact not only in a Zoom room remotely, but with other tools as well. And when you start linking the physical world and the virtual world, that will make things a lot easier, because you could have a meeting with someone as a holographic image that you just put on your glasses and say, oh, by the way, I'm having a chat with someone in New York at the moment; sorry, I'll talk to you in a second, I'll just finish my chat. This sort of thing is inconceivable today, because AI is at the level where aviation was a hundred years ago. Now, a hundred years ago, if you ever go to the Udvar-Hazy Center, I think it's near Washington DC, there's a huge airplane museum where you see some of the earliest instances of aviation, and you think, there's no way in hell I would ever even think of going on one of these things, because it's 99% sure you'll kill yourself, and whoever wants to fly is crazy.
And yet, the majority of us who have come from outside the country have flown in here, and we haven't really thought twice about it. And that's because, of course, aviation has this whole history of improvements that have happened over the years. We are at the very early stage of artificial intelligence, and already we are able to summarize things using generative AI. We're able to use it to take a complex idea presented in a professional paper by people who have written about a topic for the past 30 years, and who use a certain jargon and a certain way of expressing themselves that is easy for them but very difficult for newcomers, and we're able to say, I don't understand this, simplify it please. And the machine will do it for us. And it will, you know, write six pages. No way am I reading six pages. Say it in one page. And it will do a pretty good job. Sometimes it will get it wrong. But it's still very early days. These are the days when you don't want to go on that device that might jump over the cliff and kill you. In a couple of years' time, all of these models are going to work better. And I really think… See, that's the technology we have today. Yes. So that makes the point: we have very basic tools at the moment. The flight has crashed. Not at the moment. Sebastien, are you able to hear? No, it seems technology has failed us. How ironic. Shall I speak in French instead, or another language? We are very early on in the use of artificial intelligence. And I really believe in the tools that are currently being developed, that we ourselves can develop, because AI allows us to develop our own tools too. I really believe that we will, as a group, as people, as end users, be able to develop tools for ourselves that will help us be better equipped for taking part in these discussions of internet governance.
Whether it's explanations of things we don't understand when somebody else talks about them, or ways for us to express ourselves, because there are some difficulties sometimes when you enter a place and you have to convey a story, convey a point, but you don't quite know what language to use for that. And at the same time also being able to do exactly what I don't do, which is to make very short interventions and let other people speak as well. AI will help me with that, and I'll develop a tool for this for my own purposes. And I'm sure you will all be able to develop your own tools that will help you and the people around you take part in these issues and these discussions.

Pari Esfandiari: Thank you very much, Olivier. And my apologies for the technical glitch we had. So with that, I go to Ellen. Ellen, could you please make your intervention on the conversation that has taken place so far?

Ellen Helsper: Yes, thank you very much. I hope you can hear me. I apologize for my voice; I've been ill, so it's not that strong. I hope it's okay. I'm actually quite glad to be following Olivier, because I'm going to give the exact counter-argument: while how everything happens might be changing, and we've seen how everything happens change on several occasions throughout history in relation to technology, what doesn't tend to change is the result, especially for people who are more vulnerable and underrepresented. We see that in digital spaces their voices are often less heard and their experiences less represented, because, especially with AI, the models are built on the lived experience of those who have been most present online and who have created most content, and those don't tend to be the people who are underrepresented in society in general and who have historically been systematically excluded. My work is on the links between social and digital inclusion, so what happens to vulnerable groups as societies become increasingly digital. What I find interesting in this discussion, and in the framing of this panel, and I have to say I am in line with much of what the other speakers have said, especially David Souter, is that we talk about users, because that presents the internet, and let's not forget the internet is not just the infrastructure but all the applications and platforms on it, as a fact that people then need to become engaged with. So it presents them, in a way, as passive in the creation of these technologies, having to get involved with something that wasn't, from the beginning, designed by or for them.
So looking forward, and this is our experience in working with groups who tend to be underrepresented or have been excluded in various ways from society in general, and especially from more digital societies, there is often a kind of individual responsibilization: people need to get skills to get engaged, they need to become literate in how to use technologies and what these processes are, and this often feels quite exploitative for them. It feels passive for them as well. And I think this has also been mentioned before; there's a kind of mismatch about what the internet and internet governance are for. That's not understood, and the outcomes of internet governance or digitization in general are not presented in ways that have meaning or relevance for a lot of the people I work with. In my research, well, yes, I think it is definitely governments and other powerful stakeholders that should be held accountable for the outcomes that people get from this process of governance, but we should also be thinking about what kind of internet and what kind of technology we want for the future, and that future should include all these experiences. My thoughts are a bit rambling because I'm following up on many very well made points, but there are two things about the future that I think we haven't really discussed yet. The first is that there are many, many young people in the world, and young people and children especially make up the majority of the population in the Global South. Both children and the Global South in general are underrepresented in terms of their lived experience on the ground, and they have a very hard time shaping this future that we're going to be living in. They have a really hard time getting their voices heard at a higher level.
And when we talk, you know, there was talk about a level playing field, all stakeholders being involved. But in the end, even if we talk about local or national IGFs, there needs to be a mechanism for filtering up, and governments and governance bodies need to be held accountable for putting those mechanisms in place. That's through the forums that David Souter talked about, but also through civil society organizations that work very locally, that really understand the local impact of the way in which technologies are designed; these organizations should be involved and have a meaningful voice, so that it's not the responsibility of the individuals who are really struggling to make their voices heard, but there's a really clear process for that. The other point I wanted to make, something we haven't mentioned but that's obviously the big elephant in the room, is that internet governance cannot be talked about without talking about the huge power inequalities in terms of who is shaping the internet, its infrastructure, its content, its platforms, and whose data gets used and collected. We have not talked about the enormous sums of money and funding that come from the tech industry itself. We haven't mentioned them as a stakeholder; they're also not here at the table. But in the end, many governments around the world are truly beholden to what big tech companies from very specific parts of the world will allow them to do, or help them to do, by providing content, platforms and infrastructure. And I think we really need to talk about that, because in the design of these platforms, this content and this infrastructure, this is where we also see huge under-representation.
So it's not just about involving people as end users, and focusing not on those most likely to be advantaged but especially on people who tend to have been underrepresented, making sure that they are heard through some mechanism without making them responsible for their voices being heard, reaching out to them, as David Souter was saying. It's also about thinking how governments and other stakeholders, civil society and others, get more involved in making sure that in the design and construction of the infrastructure, the content and the platforms, in these global tech companies and the global flows of money and funding, these voices are involved from the beginning, not after the fact, here is a technology, how should we govern it, but really thinking ahead. So we need to make sure that the patterns I was talking about before, which we can see happening with AI right now in terms of who is represented and whom these technologies are designed around to be made useful for, don't produce a more unequal future because these technologies are governed and designed in a way that doesn't represent the best interests of future generations of vulnerable populations. And I would say getting more young voices, young underrepresented voices, especially from parts of the world that have been underrepresented. I don't want to put people into boxes; my approach is always to understand disadvantage, vulnerability or living in precarious conditions from an intersectional, local perspective. But that requires accountability at the top for involving these voices and perspectives from the beginning, and not as the kind of tick-box exercise I think was mentioned before. So that would be my contribution.

Pari Esfandiari: Thank you very much. Thank you. With that, I think we go to Sébastien and open the floor for questions. Sébastien, the floor is yours. Sébastien?

Sebastien Bachollet: Thank you very much, Pari. We don't have any questions yet in the chat. If people want to raise a question right now, that would be very useful. And maybe there are people in the room who would like to take the floor too.

Pari Esfandiari: Well, maybe why don’t we start with you? Maybe you could make your own comment.

Sebastien Bachollet: Okay, I can. I can do that. Thank you very much. Thank you for all these exchanges; it's quite interesting. I am Sébastien Bachollet; I was introduced by Pari at the beginning. Unfortunately, I am not with you, but a lot of my friends are there, and that's good. It is a really interesting discussion. I would like to pick up a few of the points you raised, and I will not pinpoint who said what. Artificial intelligence: yes, it's an interesting tool. But if it's done for the end user, who is building it today? Do we need to trust them as we trust any of the other platforms? So yes, it could be one interesting tool, but it will depend on how the tool is set up. The second point is why we are talking about end users here: because very often we don't talk about them. I just want to tell you a short story. When I went to my first meeting at ICANN, I went to my government representative and they told me, why are you here? We don't need an end-user voice; we are the voice of the citizens of the country, therefore we are here for you. Go away. I went to the representative of the ccTLD and they told me, why are you here? We gather the users of the ccTLD of the country and we are the voice of end users; you don't need to be here. And so on and so forth. It happens that I am the only one still around. Okay, it took long, but I'm the only one still around, and they left. Literally, there is no representative from my government anymore in ICANN. Therefore, it's important that we keep the voices of all the stakeholders if we want to have a multistakeholder reality. But don't forget that representing end users is not just about gathering the billions of people around the world directly; we can't do that. Democracy doesn't work like that. Therefore, it's important that we also have places where we gather people.
Civil society and end-user organizations are really very important. And don't forget that end-users are organized: better in some parts of the world than in others, but they are organized in a lot of places. So you can't say civil society is everybody but has no organization. Yes, civil society has trouble financing participation, but we do have organizations. My last point is the question of equal stakeholders. I really feel that equal stakeholders are really important: at the end of the day, it's not just for governments to decide. The São Paulo declaration is quite interesting in that regard, because it shows how we want to work between the two models. We don't want a setup where the multi-stakeholder community discusses and the multilateral system decides; it couldn't work like that. It needs to be more agile than that, and once again, the NetMundial+10 declaration was very interesting for that. Once again, thank you very much for your exchanges. I am sure there is a lot more to say, and maybe some of the topics we are raising during this discussion need to be taken into account in the next sessions, at the next IGFs: national, regional, or global. And my last point: yes, we can't discuss everything here, but a lot of things are discussed in other rooms within the IGF during these five days, and we need to take all that into account in our thinking. Pari, back to you.

Pari Esfandiari: Thank you very much, Sebastien. With that, I would go to Yik Chan, please.

Yik Chan Chin: Thank you very much. I think it has been a very inspiring and interactive discussion, so I will just pick up some points from the previous exchanges. I think we have a debate at two levels. One is, as the first speaker said, that we have to raise the bar on the demand side, not only the supply side. When we say demand side, we are actually talking about individual users and also civil society. So the whole debate is about the digital divide between the government and the different stakeholders, including the users: a divide that can be financial, about capacity building, or about IT literacy. That is one debate we have here. The other is about the role of the government: what is the government's role in this multi-stakeholder process? Should they understand more about individuals and the different stakeholder groups? I think that's the issue on the government side. But on the other hand, as end-users, civil society, or other stakeholders, we also have a responsibility, as Carol said. We need to raise awareness in government; it is not entirely up to the government, it is also up to us as civil society and other sectors to influence the government. The last two speakers, Ellen and of course Olivier, talked about how technology could enhance or empower us, and I actually agree. But Ellen also made a very interesting comment: that we should involve underrepresented groups without making them responsible for making their voices heard. I am a bit cautious about this argument. For example, I have a son who is 18 years old, and I need him to be self-motivated to some extent.
I cannot take entire responsibility for his life and career; we need some kind of self-motivation in that respect. And I really appreciate Wolfgang's point that the most important thing is to have a channel through which everybody can express their opinions and be heard. That channel is very important, and I think the IGF is a crucial platform for us to have that kind of exchange. I'll stop here. Thank you.

Pari Esfandiari: Thank you. Thank you very much. Amrita, did you want to add one? Thank you. So on that point, I think we are arriving at the reflections. Each of you has one minute to reflect on what has been said. Maybe I'll start with Olga.

Olga Cavalli: Thank you very much. A lot of very interesting thoughts. For me, there is no total conclusion; I think the way is the destination in all these multi-stakeholder processes. Something came to my mind when Olivier was talking: I'm an engineer, and I was never considered part of the technical community, never ever. I don't know why. Many times I tried to participate: no, no, no, you're not part of it. But I'm an engineer, and there are usually a lot of lawyers in that stakeholder group. So I think we have to be careful about the society we are interacting with, especially young people. As you said about your son, young people have a totally different way of using information and media. They don't watch television; my son and my daughter just don't have a television at home. Everything comes through the internet, through YouTube and the different channels that inform them. So we have to understand how new generations will use information, so they can build upon these processes that we are building. We have to stay aware of what is happening with artificial intelligence and young people. And thank you for inviting me.

Wolfgang Kleinwachter: Thank you very much. You know, we have made certain progress in the last 25 years, because 25 years ago it was a question mark whether civil society and end-users were seen as a stakeholder. In the middle of the 90s, it was a question mark. Today, and that's the good news, civil society and users are recognized as an independent stakeholder group within the multi-stakeholder approach. The weak point, and that's the bad news, is that this fact is partly misused by others: they use it just to show that you have a seat at the table but nothing to say, or you have weak representation, and things like that. So what we are missing are procedures for how multi-stakeholder collaboration works in practice, both in negotiations, including intergovernmental negotiations and how far non-state actors are involved in them, and in multi-stakeholder bodies. The procedures for interaction are not well defined, or do not exist. Insofar, the NetMundial+10 multi-stakeholder guidelines, the Sao Paulo guidelines, are a step forward. They are not the final solution, but we now have clear criteria against which we can measure whether a collaboration can be labeled multi-stakeholder or not. That is the next step, and I think we have to work in the next couple of years, in particular in the context of the global IGF and the national and regional IGFs, to make it clearer, also for outsiders, how the multi-stakeholder approach works in practice. It's not only a label you put on a process so that everything looks fine and it is then used as an excuse for traditional power politics. No, it has to be different, but we do not yet have a full, clear understanding of what the multi-stakeholder approach means in practice. Thank you.

Olivier Crepin-Leblond: Yes, I want to thank Ellen, actually, for bringing me back down to earth from my technological heights. I was thinking about a trip to India a few years ago. India has made incredible advances in technology and in spreading the use of mobile phones to a very large segment of its population. I remember being at the airport and a phone ringing repeatedly behind me; I turned around, and the lady who was sweeping on the side was using a smartphone, had received the call, and was speaking. A tuk-tuk driver a little later was also on his smartphone. And I thought, wow, of course these people are able to use technology: technology has reached a level where it's affordable for them, and there was a way for them to use the system. I'm really hoping that the AI technologies we have today will be affordable and easy for people to use, including those Ellen was speaking about, the ones who are more disenfranchised, the young people, and so on. I think young people can adapt faster than we can at our age, so I'm not too concerned about them; we just have to give them the chance. And giving a chance to those who are currently not listened to, who are young and from deprived communities, is not a burden for us but should be an asset, because they are the ones who will also help with the change. Thank you.

Pari Esfandiari: Thank you very much. With that, I go to Ellen. Ellen, would you like to have your final reflection? One minute.

Ellen Helsper: Yes. Thank you, all of you, for following up on that. I couldn't agree more. My final reflection is to position governance within a wider discourse going on in society at the moment, where we see a kind of disempowering discourse: a lot of what in academia we would call panics around technologies, where people feel that technology is running ahead of them. I think one of the important things the governance forum and other similar multi-stakeholder approaches can do is to counter this and give people the feeling again, especially the groups that I work with, that there is still change to be made, that they can be involved, that they are not powerless in the face of ongoing technological developments and the documentaries out there about the terrible impacts technology is having on our lives, and things like that. That is a really important step, without falling into undue technological optimism about a very rosy future; but it is important that we give back this feeling of empowerment and influence over the future of technologies. I don't have the best way of doing that, but I think it should be a priority to make the internet ours again, the world's and the end-users', rather than leaving it in the realm of dystopias or utopias governed by people who are very much not like most of the citizens of the world.

Pari Esfandiari: Thank you, very much, and with that we go to David. David, would you like to have your final reflection? One minute, please.

David Souter: Yes, okay. Let me come back to multi-stakeholderism, as we tend to call it. I think we have too simple a model of multi-stakeholderism, and we don't sufficiently critique it. The purpose of multi-stakeholder involvement is to improve the quality of decision-making and enable it to contribute more effectively to society. Sorry, can you hear me? Yes, we can. Right, it disappeared from my screen. To contribute more effectively to society, we need to pay more attention to a number of things. We need to pay much more attention to power structures and power imbalances, which Ellen was talking about. In particular, I think we need to recognize the vested interests within each and all of the stakeholder groups, and how those influence the discussions we have about governance. We especially need, I think, to disaggregate the four stakeholder groups we tend to talk about, or tend to have in our minds: government, business, civil society, and the technical community. That is far too simplistic. It doesn't recognize fundamental differences, such as that between the supply and demand sides of the Internet. If you look around you at the meeting in Riyadh, ask yourselves how many businesses are there from the demand side of the Internet, businesses that make use of it to do other things, compared with how many are from the supply side, with their particular interests to pursue. And individual users are also much more complex. We need to consider them not just as consumers of the Internet, but also as citizens of their societies. There are differences between people here, but there are also differences within people, in how they perceive their own context. We need to reflect on the diverse needs and priorities there, and the fact that they are often in conflict with one another: there are conflicting needs and priorities around the Internet and its governance.
And then we need to reach out to that wider community of users in ways that they think are sufficiently relevant to them to bother taking part. In other words, if we want to hear from people, we need to listen to them and we need to create the opportunity for us to listen to them, which is also the opportunity for them to speak to us.

Amrita Choudhury: As reflections, I agree with what Wolfgang mentioned: we have a seat at the table now, but it should not be tokenism. We need to strengthen it so that our voice is heard with legitimacy, and that's where we need to work. I also agree with Olga: if you want the next generation to get involved in these issues, you have to work and act with them the way they look at things. Another example I would give: when end-user interests are hit, end-users rise. In India we had Free Basics coming in; there was a huge furore from the end-user community and civil society, and it was pushed back successfully. So when it matters, and when people understand that their interests are at stake, they act. You have to create the narratives so that people understand what they would lose if they go along with something. And for the younger generation: they use technology and take it for granted, but what they miss out on, the risks, the trade-offs they are making, I think you need to explain to them. Thank you.

Pari Esfandiari: Carol?

Carol Roach: I agree with what David was saying. We need to evolve the multi-stakeholder processes that we have, and that's something we're trying to do with the IGF. Collaboration needs to be more effective; we need to be more results-oriented. Not a result for one particular stakeholder: we need to come to some agreed set of objectives and then aim to meet them. Each stakeholder has a different objective, but if we could come to an agreement, that's good. One of the people from the media came to me and said, oh, I'm so glad that the IGF finally recognized the media as a stakeholder; it came out of one of the meetings we had. And when you look at it, where does the media fit? Are they private sector, are they civil society? They have a different angle, a different perspective, a different interest, and they have influence. So we do need to look at how we categorize stakeholders; we need to be more flexible, evolve the model, and look not only at the issue but at the interest and the influence that people, even end-users, have.

Pari Esfandiari: Thank you very much. I think we had a very insightful conversation here, and as we conclude, I want to emphasize the critical role of the multi-stakeholder approach in navigating the complexities of a rapidly evolving digital landscape, and the importance of end-users' participation in shaping our common digital future. The upcoming WSIS+20 review is a pivotal opportunity to reaffirm this approach, ensuring that end-users' perspectives remain at the heart of internet governance decisions. With that, thank you all for your time and commitment to this shared mission. Thank you to our panelists, invited community leaders, and participants, both online and in person, for your engagement and thoughtful contributions. Together, let's continue to advocate for an internet that reflects the needs and aspirations of all. Again, thank you to the support group, the technical community, and the IGF. And with that, we end this meeting. Thank you all.

David Souter

Speech speed: 156 words per minute
Speech length: 1368 words
Speech time: 523 seconds

Digital divide between governments and other stakeholders

Explanation: David Souter highlights the gap in digital knowledge and capabilities between governments and other stakeholders in internet governance. This divide impacts the ability of different groups to participate effectively in discussions and decision-making processes.
Evidence: Working for the United Nations on the 20-year review of the WSIS process
Major Discussion Point: Challenges in Including End-Users in Internet Governance
Agreed with: Olga Cavalli, Carol Roach, Wolfgang Kleinwachter, Ellen Helsper, Amrita Choudhury
Agreed on: Need for more inclusive and representative internet governance

Power imbalances between stakeholders need to be addressed

Explanation: David Souter emphasizes the need to recognize and address power structures and imbalances within the multi-stakeholder model. He argues that these power dynamics significantly influence discussions and outcomes in internet governance.
Major Discussion Point: Role of Government and Other Stakeholders

Need to disaggregate and expand stakeholder categories beyond current model

Explanation: David Souter suggests that the current four-stakeholder model (government, business, civil society, technical community) is too simplistic. He argues for a more nuanced approach that recognizes fundamental differences within these groups, such as between supply and demand sides of the internet.
Evidence: Example of businesses from the demand side vs the supply side of the Internet at the Riyadh meeting
Major Discussion Point: Improving Multi-Stakeholder Processes
Agreed with: Wolfgang Kleinwachter, Carol Roach
Agreed on: Improving multi-stakeholder processes
Differed with: Amrita Choudhury
Differed on: Approach to engaging end-users

Olga Cavalli

Speech speed: 150 words per minute
Speech length: 1187 words
Speech time: 474 seconds

Barriers like language, finances, and lack of information

Explanation: Olga Cavalli identifies several barriers to participation in internet governance, including language difficulties, financial constraints, and lack of information. She emphasizes that these barriers particularly affect developing economies and people living far from meeting locations.
Evidence: Example of language barrier in Latin America
Major Discussion Point: Challenges in Including End-Users in Internet Governance
Agreed with: David Souter, Carol Roach, Wolfgang Kleinwachter, Ellen Helsper, Amrita Choudhury
Agreed on: Need for more inclusive and representative internet governance

Governments have unique responsibilities but must understand multi-stakeholder approach

Explanation: Olga Cavalli argues that while governments have specific responsibilities, they need to understand and embrace the multi-stakeholder approach. She emphasizes the importance of governments recognizing the value of including diverse stakeholders in dialogue and decision-making.
Major Discussion Point: Role of Government and Other Stakeholders

Need to understand how new generations use information and media

Explanation: Olga Cavalli highlights the importance of understanding how younger generations consume and interact with information and media. She argues that this understanding is crucial for building effective internet governance processes that engage future generations.
Evidence: Example of her children not using traditional television
Major Discussion Point: Role of Technology in Empowering End-Users

Carol Roach

Speech speed: 143 words per minute
Speech length: 1137 words
Speech time: 474 seconds

Need to avoid putting people in boxes/categories

Explanation: Carol Roach argues against categorizing people into rigid groups in internet governance discussions. She emphasizes that individuals often have multiple identities and interests that may not fit neatly into predefined stakeholder categories.
Major Discussion Point: Challenges in Including End-Users in Internet Governance
Agreed with: David Souter, Olga Cavalli, Wolfgang Kleinwachter, Ellen Helsper, Amrita Choudhury
Agreed on: Need for more inclusive and representative internet governance

Need for accountability from all stakeholders, not just governments

Explanation: Carol Roach emphasizes that all stakeholders, not just governments, should be held accountable in the multi-stakeholder process. She argues for a more balanced approach to responsibility and participation in internet governance.
Major Discussion Point: Role of Government and Other Stakeholders

Importance of being more results-oriented in collaboration

Explanation: Carol Roach advocates for a more results-oriented approach in multi-stakeholder collaboration. She suggests that stakeholders should agree on common objectives and work towards meeting these goals, rather than pursuing individual agendas.
Major Discussion Point: Improving Multi-Stakeholder Processes
Agreed with: David Souter, Wolfgang Kleinwachter
Agreed on: Improving multi-stakeholder processes

Wolfgang Kleinwachter

Speech speed: 138 words per minute
Speech length: 1107 words
Speech time: 478 seconds

Importance of having channels for everyone to express opinions

Explanation: Wolfgang Kleinwachter emphasizes the critical need for channels that allow all individuals to express their opinions in internet governance. He argues that providing these channels is fundamental to ensuring inclusive and representative decision-making processes.
Major Discussion Point: Challenges in Including End-Users in Internet Governance
Agreed with: David Souter, Olga Cavalli, Carol Roach, Ellen Helsper, Amrita Choudhury
Agreed on: Need for more inclusive and representative internet governance

Need for clear procedures on how multi-stakeholder collaboration works in practice

Explanation: Wolfgang Kleinwachter calls for the development of clear procedures for multi-stakeholder collaboration in internet governance. He argues that without well-defined processes, the multi-stakeholder approach risks being misused or becoming merely symbolic.
Evidence: Reference to the NetMundial+10 multi-stakeholder guidelines
Major Discussion Point: Improving Multi-Stakeholder Processes
Agreed with: David Souter, Carol Roach
Agreed on: Improving multi-stakeholder processes

Ellen Helsper

Speech speed: 151 words per minute
Speech length: 1541 words
Speech time: 610 seconds

Underrepresentation of vulnerable groups and Global South

Explanation: Ellen Helsper highlights the persistent underrepresentation of vulnerable groups and the Global South in internet governance discussions. She argues that this lack of representation leads to decisions that may not reflect the needs and experiences of these communities.
Evidence: Mention of young people and children making up the majority of the population in the Global South
Major Discussion Point: Challenges in Including End-Users in Internet Governance
Agreed with: David Souter, Olga Cavalli, Carol Roach, Wolfgang Kleinwachter, Amrita Choudhury
Agreed on: Need for more inclusive and representative internet governance

Caution about AI models being built on experiences of those already most represented online

Explanation: Ellen Helsper warns about the potential bias in AI models used in internet governance. She points out that these models are often based on the experiences of those who are already well-represented online, potentially perpetuating existing inequalities.
Major Discussion Point: Role of Technology in Empowering End-Users
Differed with: Olivier Crepin-Leblond
Differed on: Role of technology in empowering end-users

Need to counter disempowering discourse around technology

Explanation: Ellen Helsper argues for the importance of countering disempowering narratives about technology. She suggests that governance forums should work to give people, especially marginalized groups, a sense of agency and influence over technological developments.
Major Discussion Point: Future Directions for Internet Governance

Olivier Crepin-Leblond

Speech speed: 169 words per minute
Speech length: 1463 words
Speech time: 517 seconds

Potential of AI to help overcome language barriers and improve participation

Explanation: Olivier Crepin-Leblond discusses the potential of AI to address language barriers in internet governance. He suggests that AI-powered translation tools could significantly improve participation by making discussions more accessible to non-English speakers.
Major Discussion Point: Role of Technology in Empowering End-Users
Differed with: Ellen Helsper
Differed on: Role of technology in empowering end-users

Importance of making new technologies affordable and accessible to disenfranchised groups

Explanation: Olivier Crepin-Leblond emphasizes the need to make new technologies, including AI, affordable and accessible to disenfranchised groups. He argues that this is crucial for ensuring these groups can participate meaningfully in shaping the future of the internet.
Evidence: Example of widespread smartphone use in India, including by tuk-tuk drivers
Major Discussion Point: Role of Technology in Empowering End-Users

Amrita Choudhury

Speech speed: 161 words per minute
Speech length: 921 words
Speech time: 342 seconds

Importance of creating narratives to engage end-users on issues that affect them

Explanation: Amrita Choudhury emphasizes the need to create compelling narratives that help end-users understand how internet governance issues affect them. She argues that this understanding is crucial for motivating meaningful participation from diverse user groups.
Evidence: Example of the Free Basics controversy in India
Major Discussion Point: Improving Multi-Stakeholder Processes
Differed with: David Souter
Differed on: Approach to engaging end-users

Need to strengthen legitimacy of civil society stakeholders beyond tokenism

Explanation: Amrita Choudhury argues for strengthening the role of civil society stakeholders in internet governance beyond mere tokenism. She emphasizes the importance of ensuring that civil society voices are not only included but also heard with legitimacy in decision-making processes.
Major Discussion Point: Future Directions for Internet Governance
Agreed with: David Souter, Olga Cavalli, Carol Roach, Wolfgang Kleinwachter, Ellen Helsper
Agreed on: Need for more inclusive and representative internet governance

Pari Esfandiari

Speech speed: 0 words per minute
Speech length: 0 words
Speech time: 1 second

Upcoming WSIS+20 review as opportunity to reaffirm multi-stakeholder approach

Explanation: Pari Esfandiari highlights the upcoming WSIS+20 review as a crucial opportunity to reaffirm and strengthen the multi-stakeholder approach in internet governance. She emphasizes the importance of ensuring that end-users' perspectives remain central to decision-making processes.
Major Discussion Point: Future Directions for Internet Governance

Agreements

Agreement Points

Need for more inclusive and representative internet governance
Speakers: David Souter, Olga Cavalli, Carol Roach, Wolfgang Kleinwachter, Ellen Helsper, Amrita Choudhury
Arguments: Digital divide between governments and other stakeholders; Barriers like language, finances, and lack of information; Need to avoid putting people in boxes/categories; Importance of having channels for everyone to express opinions; Underrepresentation of vulnerable groups and Global South; Need to strengthen legitimacy of civil society stakeholders beyond tokenism
Summary: Speakers agreed on the need to address various barriers to participation and ensure more diverse representation in internet governance processes.

Improving multi-stakeholder processes
Speakers: David Souter, Wolfgang Kleinwachter, Carol Roach
Arguments: Need to disaggregate and expand stakeholder categories beyond current model; Need for clear procedures on how multi-stakeholder collaboration works in practice; Importance of being more results-oriented in collaboration
Summary: Speakers agreed on the need to refine and improve multi-stakeholder processes to make them more effective and inclusive.

Similar Viewpoints

Summary: Both speakers emphasized the importance of understanding and including younger generations and underrepresented groups in internet governance discussions.
Speakers: Olga Cavalli, Ellen Helsper
Arguments: Need to understand how new generations use information and media; Underrepresentation of vulnerable groups and Global South

Summary: Both speakers highlighted the need for a more balanced approach to power and accountability among different stakeholders in internet governance.
Speakers: David Souter, Carol Roach
Arguments: Power imbalances between stakeholders need to be addressed; Need for accountability from all stakeholders, not just governments

Unexpected Consensus

Role of technology in addressing participation barriers
Speakers: Olivier Crepin-Leblond, Ellen Helsper
Arguments: Potential of AI to help overcome language barriers and improve participation; Caution about AI models being built on experiences of those already most represented online
Summary: While Olivier was optimistic about AI's potential to improve participation, Ellen cautioned about potential biases. However, both recognized the significant role of technology in shaping participation, which was an unexpected area of alignment given their different perspectives.

Overall Assessment

Summary: The main areas of agreement centered around the need for more inclusive and representative internet governance, improving multi-stakeholder processes, and recognizing the role of technology in both enabling and potentially hindering participation.
Consensus level: There was a moderate level of consensus among speakers on the need for change and improvement in current internet governance processes. This consensus suggests a shared recognition of existing challenges and a willingness to explore new approaches, which could potentially lead to more inclusive and effective internet governance frameworks in the future.

Differences

Different Viewpoints

Role of technology in empowering end-users

Olivier Crepin-Leblond

Ellen Helsper

Potential of AI to help overcome language barriers and improve participation

Caution about AI models being built on experiences of those already most represented online

While Olivier Crepin-Leblond sees AI as a potential solution to overcome barriers in participation, Ellen Helsper cautions against the potential biases in AI models that could perpetuate existing inequalities.

Approach to engaging end-users

David Souter

Amrita Choudhury

Need to disaggregate and expand stakeholder categories beyond current model

Importance of creating narratives to engage end-users on issues that affect them

David Souter advocates for a more nuanced categorization of stakeholders, while Amrita Choudhury emphasizes the importance of creating compelling narratives to engage end-users.

Unexpected Differences

Responsibility for end-user participation

Ellen Helsper

Yik Chan Chin

Need to counter disempowering discourse around technology

We need some kind of a self-motivation in that respect

While not directly contradicting each other, Ellen Helsper’s emphasis on countering disempowering narratives and Yik Chan Chin’s call for self-motivation from end-users present an unexpected difference in approach to end-user empowerment. This highlights a tension between institutional responsibility and individual initiative in internet governance participation.

Overall Assessment

summary

The main areas of disagreement revolve around the role of technology in empowering end-users, approaches to engaging end-users, and the balance of responsibilities between institutions and individuals in promoting participation.

difference_level

The level of disagreement among the speakers is moderate. While there are clear differences in perspectives and approaches, there is also a significant amount of common ground, particularly in recognizing the need for more inclusive and effective multi-stakeholder processes. These differences in viewpoints contribute to a rich discussion that highlights the complexity of internet governance issues and the need for diverse perspectives in addressing them.

Partial Agreements

Both speakers agree on the need for improved accountability and clarity in multi-stakeholder processes, but differ in their focus. Carol Roach emphasizes accountability from all stakeholders, while Wolfgang Kleinwachter stresses the need for clear procedures in collaboration.

Carol Roach

Wolfgang Kleinwachter

Need for accountability from all stakeholders, not just governments

Need for clear procedures on how multi-stakeholder collaboration works in practice

Both speakers agree on the need to address power imbalances and underrepresentation in internet governance, but they approach it from different angles. David Souter focuses on general power structures, while Ellen Helsper specifically highlights the underrepresentation of vulnerable groups and the Global South.

David Souter

Ellen Helsper

Power imbalances between stakeholders need to be addressed

Underrepresentation of vulnerable groups and Global South

Similar Viewpoints

Both speakers emphasized the importance of understanding and including younger generations and underrepresented groups in internet governance discussions.

Olga Cavalli

Ellen Helsper

Need to understand how new generations use information and media

Underrepresentation of vulnerable groups and Global South

Both speakers highlighted the need for a more balanced approach to power and accountability among different stakeholders in internet governance.

David Souter

Carol Roach

Power imbalances between stakeholders need to be addressed

Need for accountability from all stakeholders, not just governments

Takeaways

Key Takeaways

The multi-stakeholder approach is critical for navigating the complexities of internet governance, but faces challenges in meaningful inclusion of end-users and underrepresented groups.

There is a need to evolve and improve multi-stakeholder processes to be more inclusive, results-oriented, and reflective of diverse stakeholder interests.

Governments play an important role but all stakeholders need to be held accountable in internet governance.

Technology like AI has potential to improve participation, but also risks perpetuating existing inequalities if not carefully implemented.

The upcoming WSIS+20 review is an important opportunity to reaffirm and strengthen the multi-stakeholder approach in internet governance.

Resolutions and Action Items

Work to develop clearer procedures for how multi-stakeholder collaboration functions in practice

Improve efforts to engage and include young people and underrepresented groups in internet governance processes

Explore ways to disaggregate and expand current stakeholder categories to better reflect diverse interests

Unresolved Issues

How to effectively balance power dynamics between different stakeholder groups

Best methods for including end-user perspectives without placing undue burden on individuals

How to ensure AI and other new technologies are developed and implemented in an inclusive manner

Specific mechanisms for improving accountability of all stakeholders in internet governance processes

Suggested Compromises

Combining elements of multilateral and multi-stakeholder approaches, as referenced in the Sao Paulo declaration

Using tools like citizen assemblies to gather input from a wider range of voices without requiring extensive time commitment from individuals

Developing targeted strategies to engage different stakeholder groups based on their interests and capacities

Thought Provoking Comments

We need to be multisectoral in thinking about it. The internet is not the end in itself, in other words, it’s means to an end.

speaker

David Souter

reason

This comment shifts the focus from technology to its societal impacts, challenging the technocentric view often prevalent in internet governance discussions.

impact

It broadened the scope of the discussion to include considerations of how internet governance affects various sectors of society and everyday lives of people.

We tend to group them a lot. So you find that the barriers that you find offline are the same type of barriers that you would find online.

speaker

Carol Roach

reason

This insight highlights how digital inequalities often mirror and amplify existing social inequalities, adding nuance to the discussion of inclusion.

impact

It prompted further discussion on the multifaceted nature of digital exclusion and the need for more nuanced approaches to inclusion.

AI will help me in that. And I’ll develop a tool for this for my own means. And I’m sure you will all be able to develop your own tools that will help you and the people around you in taking part in these issues and these discussions.

speaker

Olivier Crepin-Leblond

reason

This comment introduces a provocative perspective on how AI could potentially democratize participation in internet governance.

impact

It sparked a debate about the role of AI in governance processes, with subsequent speakers both building on and challenging this optimistic view.

We should be thinking about what kind of internet and what kind of technology we want for the future and that future should include all these experiences.

speaker

Ellen Helsper

reason

This comment reframes the discussion from reactive governance to proactive shaping of technology, emphasizing inclusivity.

impact

It shifted the conversation towards considering long-term visions and values in internet governance, rather than just immediate technical concerns.

We need to stop looking at people as being one dimensional and review how we label boxes and how we label people.

speaker

Carol Roach

reason

This insight challenges the oversimplification often present in stakeholder categorizations in internet governance.

impact

It led to further discussion on the complexity of user identities and the need for more nuanced approaches to representation in governance processes.

Overall Assessment

These key comments collectively shifted the discussion from a narrow focus on technical governance to a broader consideration of societal impacts, inclusion, and long-term vision. They challenged simplistic categorizations of stakeholders and users, emphasized the need for proactive shaping of technology’s future, and sparked debate about the potential role of AI in governance processes. The discussion became more nuanced, considering the multifaceted nature of digital inclusion and the complex interplay between online and offline inequalities. Overall, these comments pushed the conversation towards a more holistic, forward-looking, and inclusive approach to internet governance.

Follow-up Questions

How can we develop more effective procedures for multi-stakeholder collaboration in internet governance?

speaker

Wolfgang Kleinwachter

explanation

Wolfgang highlighted that while civil society and users are now recognized as stakeholders, clear procedures for how multi-stakeholder collaboration works in practice are still missing. Developing these procedures is crucial for the effectiveness of the multi-stakeholder approach.

How can we better disaggregate and represent the diverse interests within each stakeholder group?

speaker

David Souter

explanation

David argued that the current model of four stakeholder groups (government, business, civil society, technical community) is too simplistic and doesn’t capture the complexity of interests, especially the differences between supply and demand sides of the internet.

How can we create more effective channels for end-users to express their voices in internet governance?

speaker

Wolfgang Kleinwachter

explanation

Wolfgang emphasized the importance of having channels for everybody to express their opinions and be heard in internet governance processes.

How can we ensure AI and other emerging technologies are developed and governed in ways that represent the interests of underrepresented groups?

speaker

Ellen Helsper

explanation

Ellen raised concerns about AI models being built on the lived experiences of those most present online, potentially excluding vulnerable and underrepresented groups.

How can we better involve young people, especially from the Global South, in internet governance processes?

speaker

Ellen Helsper

explanation

Ellen pointed out that young people, particularly in the Global South, make up a majority of the population but are underrepresented in internet governance discussions.

How can we create more effective mechanisms for filtering up local and national concerns to global internet governance forums?

speaker

Ellen Helsper

explanation

Ellen suggested the need for better mechanisms to ensure local voices are heard at higher levels of internet governance.

How can we address the power inequalities in shaping the internet, its infrastructure, content, and platforms?

speaker

Ellen Helsper

explanation

Ellen highlighted the need to address the significant power imbalances in who shapes the internet, including the role of big tech companies.

How can we evolve the multi-stakeholder model to be more flexible and inclusive of diverse perspectives?

speaker

Carol Roach

explanation

Carol suggested the need to evolve the multi-stakeholder processes to be more effective, result-oriented, and inclusive of diverse perspectives, such as media.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #59 Towards a Greener Future with E-Waste Management


Session at a Glance

Summary

This discussion focused on the global challenge of e-waste management and potential solutions. The Digital Cooperation Organization (DCO) presented their initiative to develop a framework for addressing e-waste issues, emphasizing the need for collaboration between governments, businesses, and individuals. Key points included the rapid growth of e-waste, with projections showing it will more than double by 2030, and the low global recycling rate of only 20%.

Participants highlighted several barriers to effective e-waste management, including lack of consumer awareness, data privacy concerns, and the complexity of the supply chain. The importance of collection systems, consumer education, and achieving economies of scale in recycling were stressed. The discussion also explored the potential for reusing and redeploying electronics to bridge the digital divide in underserved communities.

Cross-border collaboration was identified as crucial for addressing e-waste challenges, particularly for smaller countries with limited resources. Participants discussed various initiatives, such as government-led collection programs and partnerships with the informal sector. The need for better data collection and standardized metrics for measuring e-waste was emphasized.

The DCO presented a framework for governments, focusing on regulation and policies, financial instruments, awareness and capability building, and infrastructure development. The discussion concluded with a call to action for all stakeholders to take responsibility and contribute to creating a more sustainable digital economy.

Keypoints

Major discussion points:

– The growing problem of e-waste and its environmental/economic impacts

– Challenges around e-waste collection, consumer awareness, and data privacy concerns

– Potential for reusing and recycling e-waste to bridge the digital divide

– Need for collaboration between governments, private sector, and NGOs to address e-waste

– Developing policies, infrastructure, and economic incentives for e-waste management

The overall purpose of the discussion was to raise awareness about the e-waste challenge, gather input from diverse stakeholders on potential solutions, and promote collaboration to develop more effective e-waste management practices globally.

The tone of the discussion was informative and collaborative. It started out more formal with presentations on the e-waste issue, but became more interactive and participatory as attendees were encouraged to share their perspectives and ideas. There was an emphasis on collective responsibility and finding practical solutions together.

Speakers

– Alaa Abdulaal: Representative from the Digital Cooperation Organization (DCO)

– Syed Iftikhar: Representative from DCO

– Arianna Molino: Sustainability specialist at Kearney, collaborating with DCO

– Mohamed Mashaka: From United Republic of Tanzania

– Ayman Arbiyat: From Jordan

Additional speakers:

– Noia: From Tuvalu (Pacific island country)

– Dr. Nagwa: From the Academy in Egypt

– Abdul Aziz: From CST (KSA regulator), mentioned but did not speak directly

Full session report

E-Waste Management: Global Challenges and Collaborative Solutions

Introduction

The Digital Cooperation Organization (DCO) hosted a discussion on the critical global challenge of e-waste management, bringing together representatives from various countries and organizations. The interactive dialogue, which included audience participation through Slido polls, focused on the rapid growth of e-waste, projected to reach 74 million tons by 2030, and the current low global recycling rate of only 20%. Participants examined barriers to effective e-waste management and explored potential solutions, emphasizing the need for collaboration between governments, businesses, and individuals.

Key Challenges in E-Waste Management

1. Growing Environmental and Health Risks

Speakers, including Alaa Abdulaal from DCO and Arianna Molino from Kearney, highlighted the increasing volume of e-waste and its negative impacts on the environment and human health. E-waste was identified as a significant contributor to climate change and pollution.

2. Lack of Consumer Awareness

Mohamed Mashaka from Tanzania emphasized the critical issue of public awareness, noting that many citizens are unaware of the impact of e-waste on various initiatives. This lack of awareness was identified as a major barrier to proper e-waste disposal and management.

3. Data Privacy Concerns

Ayman Arbiyat from Jordan raised the issue of data privacy concerns when disposing of electronic devices, recognized as a significant obstacle preventing individuals from properly recycling their e-waste.

4. Complexity of Supply Chains

Arianna Molino highlighted the complexity of e-waste supply chains as a major challenge, based on participant responses. This complexity makes it difficult to track and manage e-waste effectively throughout its lifecycle.

5. Unique Challenges for Small Island Nations

An audience member from Tuvalu brought attention to the specific challenges faced by small island nations due to their geographical isolation, emphasizing the need for tailored solutions in different contexts.

Proposed Solutions and Strategies

1. Comprehensive E-Waste Strategies

Mohamed Mashaka called for the development of comprehensive e-waste strategies and guidelines to address the multifaceted nature of the problem.

2. Improved Data Collection and Measurement

Dr. Nagwa from Egypt highlighted the importance of accurate data collection and measurement in e-waste management, recognizing its critical role in effective policy-making and understanding complex e-waste supply chains.

3. Promoting Reuse and Repair

Arianna Molino advocated for promoting the reuse and repair of electronic devices to extend their lifespans and reduce e-waste generation. Audience members raised concerns about the quality and reliability of refurbished electronics.

4. Leveraging Technology and AI

An audience member suggested exploring the use of artificial intelligence and other technologies to support e-waste management.

5. Multi-stakeholder Collaboration

Speakers emphasized the importance of collaboration between governments, the private sector, and NGOs to address e-waste challenges effectively. The need for cross-border and regional collaboration was also highlighted.

6. Policy and Regulatory Frameworks

Participants discussed the need for e-waste-specific regulations and standards, including extended producer responsibility policies and financial incentives for proper e-waste management. The importance of harmonizing cross-border e-waste regulations was noted.

7. Consumer Education and Awareness Campaigns

Multiple speakers stressed the need for increased consumer awareness and education to address concerns and promote proper e-waste disposal.

8. Cross-border Initiatives

The discussion included potential initiatives for global e-waste management, such as global regulation and responsible recycling certification.

The Role of the Digital Cooperation Organization (DCO)

The DCO presented a framework for governments focusing on four key components:

1. Regulation and policies

2. Financial instruments

3. Awareness and capability building

4. Infrastructure development

This framework aims to guide governments in developing comprehensive strategies to address e-waste management challenges. The DCO emphasized its role in facilitating collaboration and knowledge sharing among member states to tackle the global e-waste problem.

Specific E-Waste Management Initiatives

Participants shared examples of ongoing e-waste management efforts, including:

– The KSA government working with the social sector to collect devices and ensure privacy

– Tanzania’s efforts to develop comprehensive e-waste guidelines

Closing Remarks

Alaa Abdulaal and Syed Iftikhar from DCO concluded the discussion by emphasizing the importance of collective action in addressing the e-waste challenge. They encouraged participants to take personal responsibility for proper e-waste disposal and to contribute to creating a more sustainable digital economy. The speakers reiterated the DCO’s commitment to supporting member states in developing effective e-waste management strategies and fostering international collaboration on this critical issue.

Session Transcript

Alaa Abdulaal: issue. As the digital economy continues to grow, connecting billions of people, this progress also has pressing environmental consequences. The rapid growth of e-waste is really becoming more and more, with more consumers of electronics and mobile phones and home devices. All of this leaving us with this challenge that we want to tackle and we really need to look at it as a shared responsibility, not only by government but by individuals, by ourselves. It is the responsibility upon everyone in this room and even listening to us to take this challenge and really think about it. Because imagine that there is a lot of e-waste that is not being recycled. Look at the lost opportunity, not only from an economic perspective but also from an environmental aspect. There is a lost opportunity here of all of this e-waste not being recycled and having those devices reaching to places where there is a need for it. We are also now facing a challenge of affordability of devices. So why not seize this opportunity and look at this e-waste and see how it can be recycled, how it can be managed. And again, as I said, it’s not a government responsibility or a private sector responsibility. I believe it’s a shared responsibility on each individual. We are the ones who are consuming those electronics, those mobiles, those devices. What are we doing with them? If I ask to raise hand, how many devices do we have more than one in the room? Who has more than one device in the room? Yeah, a majority of the room is raising their hand. What did you do with your old devices? How many times are you buying new devices? So it’s just to think about this. And for us as the digital cooperation organization, because we are looking at how to have that inclusive and sustainable growth of the digital economy, we saw that this is an opportunity for us to gather stakeholders and to look at this challenge and see what we can do. And specifically from a cross-border e-waste management. 
Because DCO is really committed to this mission. And through our e-waste management program initiative, we aim to foster circular economy in the ICT sector, advance cross-border solution and leverage technology to mitigate environmental harm. And today in this workshop, we want to share our work and we want to hear from you and to give us insights on what we are doing. Because we believe in a multi-stakeholder approach. We believe in learning from other and listening from experts like the one in the room. And for us to have that comprehensive solution. Because again, we as DCO, we really believe that we want to give a fair opportunity for each person, each nation, each business to prosper in a cross-border inclusive digital economy. So thank you, everyone, for being here today. And looking forward to hear from you to be engaged in this interactive workshop. I want to give the floor to my colleague,

Syed Iftikhar: Dr. Sayed Iftikhar, to have his word. Thank you. Thank you, Ms. Salah, for giving more insight about DCO and particularly the e-waste management initiatives. So basically, first of all, I give some introduction about the DCO more in detail because DCO is established in November 2020 and we are aggressively working on different aspects. So DCO is a unique, multilateral, intergovernmental organization. And considering to support the stakeholders, particularly the government businesses and the individuals, for emerging areas, particularly in terms of digital economy. So one of the challenges is the sustainability. And we’re working, cooperating with our member states and countries of the world on how we tackle this challenge. So DCO, as per the structure, we have 16 member states. We also have observers, more than 40. These observers are from international organizations, private sectors, and NGOs as well. So we also have a partnership with international organizations like UN, World Economic Forum. So this is somehow our overall structure. So we have core functions. We are an information provider. We are educators. We are facilitators. So this is somehow the digital cooperation organization structure. As I said, we have 16 member states and we represent 800 million populations. And notably, the 70% of the population is youth. So as we have youth, and youth are more focused on the digital areas, particularly the digital devices. So that’s why we need to care about more on the sustainability aspect from environmental aspect, from the health aspects. Our member states have a GDP about USD 3.5 trillion. So you see on the screen first, you see a word DSA, Digital Space Accelerator. Basically, it’s a working group of think tank, researcher, policy makers, and individuals. And we gather this working group on different international forums. In this year, we also organized different roundtables. What exactly the objective of this DSA? 
DSA is to focus on what exactly the emerging issues in digital economy and how we solve these challenges. We keep all these stakeholders on board to discuss, to co-create the possible solution to tackle the emerging challenges in digital economy. And one of the challenges is sustainability. So why we keep this DSA program? Basically, it has impact. We want to create impactful solutions. Another, we want to present our organizations as a credible source of information. And we also want, because our name is more focused on the cooperation, we want to expand the cooperation as well. In this year, we have different topics, and one of the topics is on e-waste management, and particularly focus on the cross-border e-waste management. In 2024, DCO focused on sustainability, and from sustainability, we are focusing on e-waste management program. So this program, as I told you, the DCO is more focused to concentrate on to address these challenges through stakeholders. And how we manage this challenge? So there are different ways. One of the things is we want to reduce this e-waste, number one. Second is that we want to leverage the economic value of the e-waste. And definitely, we also focus on the digital inclusions. So the scope of this project is mainly to analyze the best practices and to utilize these practices to tackle the challenge of e-waste. And the second major objective is to co-create the framework, holistic framework. It covers all aspects, regulatory aspects, capacity building aspects, financial dimensions, and how we promote the digital inclusion. So this is also one of the key scope of this project. So I stop here. I invite our experts, Ari, to give you more insight about the detailed projects. Thank

Arianna Molino: you. Ari, please. Hello, everyone. Nice to meet you. I’m Arianna Molino. I’m a sustainability specialist at Kearney. And I’m helping and collaborating with the DCO on this, I think, very passionate and very interesting initiative. Because as also Ms. Ala was mentioning, it’s an urgent problem. We love technologies. I think we’re addicted to technologies. And our kids will definitely also use more and more of it. So if we want really to have a digital economy, we need to tackle these issues and making sure that it’s sustainable. So for the today’s session, what we would like to do, it’s really have an engagement with you, as mentioned, giving your expertise. And also I see I hope that from different parts of the world, it would be great to get your perspective. So on one side, we want to discuss the importance of e-waste in your countries or for your sector. The second objective would be to also try to link the environmental issues and the social benefits. Right? So how do you see the reuse of devices to tackle digital to bridge the digital divide? And then third, we believe in collaboration. I mean, it’s a little bit obvious because DCO is known for cooperation. But we really believe that you cannot do a work independent. It’s a complex ecosystem. If you start working and alone, you will definitely not reach economy of scales. Profitability is an issue. And so this is where we really encourage to have discussions and to understand your lesson learned. So through collaboration, what did you gain? And what did you learn? And also the challenges and how we can collaborate together. Miss Hala mentioned it’s national focus, but also cross-border. Because I think it would be great here in this forum, where it’s global, to understand if it’s possible to have collaboration between countries. Cross-border e-waste trade is under the Basel Convention. It’s regulated. And we definitely need to be responsible in how we trade e-waste. 
But also we believe that there is a lot of potential in order to leverage technologies and making sure that we don’t duplicate infrastructure around the world if we can work together to reach efficiency. So these are the main objectives. We wanted to have introduction, but I see that maybe we are a little bit too many. We will have a poll online, where we will start understanding from which country you are, from which also sector, et cetera. But I really encourage you, if you want to intervene, please do. Because really, we see that if you start talking to each other, at the end, it would be really amazing if you start bouncing ideas among you. We are here just to facilitate. But really, if you go out from this session with more energy and more hope for setting up a business or really scaling up your efforts in terms of policy, that would be amazing. I totally agree with Arianna. It is important. We are here to listen and hear from you. And without your interventions and feedback, this session will not be successful. So yes, I really encourage you to be interactive and share your insights with us. Yes. And of course, we have microphones. So we can also be a star with microphones and sending it around. So also to set up a little bit the agenda, after this brief introduction, we will talk about e-waste, probably you know already the basics. But just to remind us about the big numbers, the big pictures, to kind of phase out and understand what is the environmental impact, the social impact, but also the economic potential. And then looking into the value chain. Because we know that the issue is not just recycling, right? It’s like collection. It’s kind of sorting. It’s really taking the private sector together with the social sector and the informal one. Because we need to remember that at the end, each country is going through a certain level of involvement. So we really would like to see and discuss with you the value chain itself. 
Second part would be on digital divide and e-waste. I think we need to remind ourselves, but before recycling, we can reuse and redeploy, right? Because circular strategies, you need to close the loop early on. And it’s a great potential to use this in order to then create a new market, to give access to a potentially part of the global economy. Part of the population that cannot buy a new iPhone every two years, right? So we’ll talk a little bit about that. And finally, cross-border and the national collaboration. Here would be great. We will see if maybe we can split into groups or if we have enough energy. We can also see among you if you want to raise the hands in terms of potential ideas that you have grabbed or that you have also experienced and you are doing in your organization or country. So again, raise the hands if you want to intervene. I think everything is very valuable. So really, don’t shy away. So e-waste. This figure was already mentioned by Ms. Ala, but just to tell you about the growth, right? In 2010, it was estimated. Of course, there is no really precise data, but estimated $34 billion. This is projected to more than double in 20 years. So I’m not sure if you have kids or not, but I started to do the projection for my kids and was like, OK, if I don’t start to tackle it now, on one side, we will have plenty of waste, but also we will not have raw material. So I’m not sure if my kids will have a phone when it will be 30, 40. We can debate if it’s good or not for kids not to have phones, but in general, the growth is very, very scary. On the other side, you might say, OK, but if the recycling rates are going up, at least we are tackling it. Unfortunately, that is not really a good news because recycling rates are not going up as the e-waste is going up. So now, on average, in the world, we have almost 20% recycling rate. The ambition is to arrive to at least 60%, 80%. 
So just to clarify, we know that 100% definitely will not happen in five years, right? So the international organization are really trying to push their mission at least to 60% and 80%. And we will see what is the impact there. So if we then say, OK, why we are doing it? On one side, environment. If you do not recycle well, it’s not just that you pollute the air, the land, the soil, the water, but also there is risk for the well-being. So that is where you have the environmental impact that is affecting both the environment and the society. And here, there are estimates of 145 billion of CO2 emissions in the environment. Is it 50% of the global emission? No, but this really is contributing to it. And we have high potential to lower it down to making sure that we close the loop in this ICT sector. There are informal workers that are affected, 11 million. I want really to hear your opinion on informal sector, because talking with UNITAR and with different organizations, they will say, like, informal sector is great, because at the end, they are really embedded in the society. They are really working in different areas of the city. So they are not per se wrong, or you need really to dismantle it. But you need to help them to follow probably some compliance rule or any way to be careful of the environment and the people. So here, it was mainly the environmental impact. But then also, if we think about the social impact in terms of digital divide, and we try to close the loop early on, we really redeploy in just 1% of the smartphones. And we are like 5 billion smartphones in one year. 5 billion, right? Like, it’s just to let it sink. We can really help 50 million people, at least. So that is where we are in this forum. There is a lot of talks about bridging digital divide, how to bring internet in the rural areas, et cetera. And it is part of it. Another part is the device. 
Of course, if you have the device and you don't have internet, you're only partially solving the issue. But in this workshop, it would be great to also get your opinion on this topic. Again, people are a little bit skeptical about it, because they say: normally, you pretend it is a redeployed device and you send it to the south of the world, but in reality it's just e-waste, right? So also, how do you make sure that you are doing this upcycling in the right way, so that it reaches the people in need? And then the economy. Because on one side, being driven by environmental issues and social causes is important, but on the other side, I think it's also good to remind ourselves that we are talking about GDP impact. If you have more people affected in terms of health, they will go to the hospital, and you have a cost for that. If you have an impact on the soil, in the long term you will also have an impact on agriculture, on your economy. If you have climate change, you will have an impact through the different disasters that we see now, where sometimes, especially if you have islands or cities that are not set up for these climate disasters, you have a human impact and an economic impact. So here, in the Global E-Waste Monitor 2024, you can see an interesting calculation: at the moment, with this 20% recycling rate, we are losing 37 billion dollars globally. But as soon as we start recycling more, we can have a net positive impact. Why? Because on one side you are creating a market, you're creating economic activity. Then fewer people are harmed, so fewer people go to the hospital and have to pay for medical care. And on the other side you have less impact on the environment, so less pollution and greenhouse gas emissions. It is true that this type of concept relates more to government, right? Because in the end, the government is the one that will allocate funds for health care, funds for the environment, et cetera. 
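The cost/benefit logic described here can be illustrated with a minimal toy model. Only the headline figures come from the source (a roughly 20% recycling rate and a roughly $37 billion global net loss, citing the Global E-Waste Monitor 2024); the per-rate value and cost coefficients below are invented purely to make the example run:

```python
# Toy version of the e-waste cost/benefit logic described above.
# Only the headline numbers (about 20% recycling, about a $37B net
# loss) come from the talk; the coefficients are illustrative.

def net_impact_billion_usd(recycling_rate: float) -> float:
    """Net economic impact (billions USD) vs. recycling rate.

    Higher recycling recovers more material/market value and avoids
    more health and environmental costs; unmanaged e-waste carries
    externalities. Linear illustrative model calibrated so that a
    20% rate gives roughly a $37B loss.
    """
    recovered_value = 105.0 * recycling_rate          # recovered materials, market activity
    externality_cost = 72.5 * (1 - recycling_rate)    # health care, pollution, soil damage
    return recovered_value - externality_cost

print(net_impact_billion_usd(0.20))  # approx. -37.0: net loss at today's rate
print(net_impact_billion_usd(0.60))  # positive: net benefit once recycling scales
```

The coefficients are calibrated only so that the 20% case reproduces the cited $37 billion loss; the real relationship is certainly not linear, so treat this strictly as an illustration of why the sign flips as the recycling rate rises.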
So this is really also about arriving at the government level and telling them: investing even more, together with the private sector and social sector, doesn't stop at e-waste management, but goes beyond it, to your national and global economy. So here we have done a couple of workshops. One in London last week, where we had mostly private sector and NGOs coming in and telling us their point of view on the policies and their challenges. We had one in Singapore as well, with the private sector and policymakers discussing how e-waste is handled in Malaysia and in Singapore. And finally, also here in Riyadh with the GCC countries, where we focused mainly on policies. And what we have identified are five main best practices. On one side, the collection system is important. We saw some countries that invested in recycling infrastructure, but they don't have feedstock, and then your profitability will not stand. So collection is important. Another point: if you raise awareness with consumers but don't give them access to recycling, somewhere to bring the device, then it's like, OK, you are telling me to do it, but I don't know how to do it, and you lose the momentum. So the second one is consumer awareness, once you have at least set up some access points, right? And as Ms. Alaa was also saying, the consumer is also us. I was discussing with my colleagues that, I'm not sure about you, but I have a bag at home with all the cables of old phones, et cetera, and with this project I started asking: OK, where do I have to put it, how, what is the impact, et cetera. But it's not something that everybody knows. And especially the collection system: all these systems will really change from Spain to Saudi, for example. Economy of scale. 
In a lot of countries, both the private and the public sector told us that they want to build recycling facilities, but they don't have enough feedstock to make them profitable. And that is where, on one side, if you increase the collection, most probably you will be profitable. But on the other side, this is where you maybe need to think broadly about collaboration, and how potentially you can work with other countries, or other parts of the municipality, et cetera, to consolidate, making sure that you're not setting up small facilities around the territory where none is profitable and then everybody needs to close or live on subsidies, right? So economy of scale is important. Private sector. Talking with the private sector, they really want to get involved. Of course, there are a lot of opinions about policies: this we need to change, this is about legacy technologies and has to be updated, this is too strict, this is too broad, of course. But I think the key message that came from these workshops is that government alone cannot centralize, because e-waste is also spread across the territory. So you need to enable, and then the private sector also has the appetite to come and build a strong supply chain and value chain. And last but not least, regional collaboration. Here is where, depending also on the regional policies and the regional relationships, you can really think about how to work, for example, between Saudi Arabia and Oman: making sure that if I specialize in recycling one type of e-waste or one type of battery, then I will receive that feedstock, but I don't have to do the recycling of other types of material, because I can collaborate and send those there. So this exchange and this promotion of cross-border e-waste trade is important, I think. But also sharing lessons learned, because there is no one system that fits all. 
First of all, we are very different in terms of society. Doing policies in Europe is totally different from doing policies in Ghana or doing policies here in Saudi Arabia, right? So this is where collaboration between the GCC countries can help: saying, OK, what did you do that worked? What did you do that didn't work? Maybe we can try to test it together? Can I try to learn from your governance, your policies, your EPR implementation? So this is really a call to action on collaboration. Now I will stop talking. Maybe before doing the Slido, I just want to open the floor for reactions. I know that you need to be brave to be the first one talking, because everybody is like... as soon as someone shares... OK, perfect, we have a volunteer.

Mohamed Mashaka: Thank you. Hello, yes, my name is Mohamed Mashaka from the United Republic of Tanzania, an East African country back there. One of the areas where I think we are really facing a challenge is the literacy of our citizens, because most of these citizens are not aware of the impact of this e-waste on the different initiatives they are doing. Maybe the issue is that we are trying to look at case studies where countries have strategies for e-waste, and probably guidelines for the safe usage of ICT and all infrastructure, so that people can be aware of it. So one of the biggest challenges that I've seen is the literacy level of the people, and how effectively we are really going to do this. So I think there is a need to have an e-waste strategy, and probably a guideline for e-waste as well, and an awareness campaign that goes to the people. Because, as you have mentioned, the low-income people, the people who are in the informal sectors, are the ones who are most affected by this. So there is a need to put much more effort into it. So we really appreciate that. That's the comment I wanted to add. Thank you. Thank you very much. Maybe we have someone else who wants to…

Ayman Arbiyat: Hello. Good morning. My name is Ayman Arbiyat, from Jordan. First of all, I am not an expert in e-waste, but I like the concept. So I would like to ask you about the best practices, on the last slide, please, because I have the same issue: I have some e-waste, but I don't know where I can send it, or to which organization; what is an effective collection system? The next best practice is about consumer awareness, and maybe I will raise some privacy concerns. Because, let's say, when I talk to my wife or to my colleagues, they say this phone has my photos, even if I delete them. We still believe that the device holds many photos or a lot of personal information, and for that reason we keep it at home instead of recycling it. Thank you.

Audience: Does anyone want to… Yes. Okay. I wanted to say ladies first, but we are two ladies, so… Hello. My name is Noia. I'm from the small Pacific island country of Tuvalu. Very nice presentation. You know, the Pacific is very vulnerable to climate change, and you mentioned something about climate change and how tackling e-waste contributes to solving that. We have very unique challenges due to our geographic location and our isolation; we have very limited infrastructure, and resources are a constraint as well. My question is around recycling and reprocessing. Are there any cost-effective recycling solutions suitable for small-scale and more decentralized systems in the Pacific? Also, can e-waste be repurposed or upcycled locally to create some economic opportunities? Because we are too far away from where the e-waste is processed, and shipping is also a challenge for small Pacific Island countries. So reprocessing and upcycling of this waste is something we are looking at leveraging as an economic opportunity. Thank you. Thank you. My name is Dr. Nagwa. I'm from the Academy, from Egypt. Actually, my question is about whether there is any intention in your organization to measure e-waste, especially since, as I noticed, all the figures presented are estimates, and global ones. It would be useful to think about following a methodology to measure e-waste, as well as to see the impact when you set policies or strategies and implement them: you can see how much improvement there is due to the implementation of these policies. So I think this is very important, not only for the Kingdom but, I think, for the region as well, because after this you can also share it with others, whether at the international level or at the regional level. 
Thank you very much, and congratulations on such wonderful work, to Engineer Alaa and to you, Arianna. Thank you.

Alaa Abdulaal: Thank you. And also, thank you for all the different questions. I will try to address them very briefly. During the presentation we will also see other examples of initiatives, and I also encourage you to reply to one another, right? So if you know of some awareness campaign that works, please raise a hand and say, in our country we did this and it was successful, et cetera. So I think we have one question about how you do consumer awareness. Another one is about collection: how does it work with data privacy? And then, what are the different recycling processes that make sense for a small community or small economy? Can you hear me? Yeah. So, consumer awareness. Short answer: it takes time. It's not something that you can do from one day to the next. I like the sentence "repetition is communication": you need to hear one message once, twice, three times before really understanding it, and before it reaches all the population. I think what works well is government awareness campaigns, but mainly working with the NGOs that are on the ground. That is where you need to access different types of channels. So it's, of course, the internet, since a lot of us are now on the web. It's also about on-the-ground workshops, events of the type we are doing, and then going face to face with the informal sector to try to make them aware of the challenges. Of course, the more on the ground you are, the more practical the examples you will need to give. You cannot go to a small village in Zambia and say, you know, the impact of e-waste is 58 billion. That will not work, right? You can give an example: if you don't recycle, it will impact the health of your lungs, et cetera. That might be a little more concrete. So yeah, it takes time, and you need to leverage the social sector with its different channels. The collection system: great question. So on one side: private sector, social sector, and government. 
Private sector: now, with some EPR policies, extended producer responsibility, a lot of companies need to take your phone back. So the first thing is: you want a new phone, you give the old phone back. I'm talking about the iPhone, but sorry, Samsung too: you want a phone, you need to give the other phone back. And this is reverse logistics, right? We will not go into the discussion of whether it makes economic sense for the producers, but this is one channel. The second one, again for the private sector: they sometimes put out booths where you can drop off the technology. Of course, if there are data security concerns (we'll talk about that later), you might not be comfortable just putting a phone in a box, but with other things, like cables, you might be fine. Social sector: there are now organizations that, on behalf of the producer, organize the collection. That is where it is important, again, that this is spread across the territory, and that it is very clear who does it and what the purpose is. So that is NGOs, which of course is connected with consumer awareness, but also accessibility. And then the last one is government. I saw that earlier we had here Abdul Aziz, who is from CST, from the KSA

Arianna Molino: regulator, and they did a great initiative on device recycling that tackled both of the questions you had. First of all, they saw that in KSA, but I guess in most countries, there is a lot of concern about privacy. Where will my photos go, right? Even if I delete them, I also don't know; I'm not sure they aren't still there somewhere, right? There's always some magic trick that resuscitates stuff. At least when I delete, I hope that there is someone who will really do it for me. So they saw that in some countries the government has the trust, and consumers are okay with handing the device to the government. And so they will reassure you that the data will be erased in a secure way first, before the device is handed over to another actor in the supply chain, right? Now, this doesn't work in China, for example. In China, they don't trust giving it to the government; they trust the private sector more than the government, correct. So then you have to understand what the culture is in your country, right? I'm Italian; I'm not sure I would trust my government, right? But the private sector is questionable too. For example, in China, we know that the private sector enjoys better trust. In the London workshop, there was one company specifically dedicated to erasing data and making it secure. So that is where you need to build trust, you need to build the technologies, et cetera. But long story short, this is a pain point, and it needs to be addressed. We cannot just assume that once you know where to bring your phone, you're happy to do it. From the privacy perspective, there are some government policies as well. If we look at some advanced countries, they have policies for when they export e-waste, particularly electronics. They mention specific clauses, like the need to discard the hard disk, because this is where the data is stored. 
So usually, in government and national-level policies, they categorically say not to export the storage devices. Yeah, indeed. So it starts with policies and then with implementation, right? As always, there are people who ask how you make sure that the policies really get enforced, right? That is another topic that we can discuss later. But then, going to the question of what small economies or small islands can do: I think upcycling and recycling is the first thing that needs to be done. First of all, because you close the loop earlier and you extend the life of the device. Of course, there are some cultures where secondhand is better accepted than in others. We know that in Africa, for example, they are really super good at it, right? While in Italy, to be honest, if I have something broken, I will always go to my grandpa, because that whole generation knows how to do it, and I'm the one who is not always comfortable fixing different devices. So first of all, it's upcycling and recycling. And then, which recycling business is more profitable or makes sense? I would say focus on dismantling, because then you can extract the plastic, and the plastic can go into any plastic recycling.

Audience: Or you extract the glass, so the glass can go not only into e-waste recycling but also into glass recycling for other industries. Probably you will not have the volume to do batteries, for which you would need much more. But the good news is that critical raw materials are more and more of a hot topic nowadays, so you can translate that into an economic advantage: I can sell you that. Because soon, if we continue like this, we will not have raw material. In order to make batteries for a new e-vehicle, we will not have lithium, et cetera. So you will need to recycle, and this is where they will come to you for feedstock. Of course, logistics might be an issue, but that can be, again, something to evaluate, or at least to look at the numbers in order to build the business case. And last but not least, data. You're talking with a consultant who needs to do data collection, so I feel the pain, right? We try to go to different countries and different representatives and say, OK, let's go to the source: you as a government that looks at e-waste, can you please give us country data? And that is where you understand that, first of all, there is no common understanding of e-waste. On the definition, you always have to step back and say, no, no, no, it's not just phones, right? There are six categories, et cetera. So, ah, OK. And this is where maybe it's not one department; maybe there are two or three departments. And then, to be honest, in this region they are still taking their first steps towards it. KSA, in the last two years, has been launching an initiative for data collection, but also for mapping the whole supply chain on waste in general. So that is where you can start having a little bit more granularity. But you can also see some efforts using technologies, because, again, I think ICT can really be leveraged for this type of problem. 
And so they have platforms where maybe you're not really tracking the data across the whole value chain, but you're putting together the collectors and the recyclers that have to exchange e-waste.

Arianna Molino: So at least you see all the transactions, and you start really understanding what's going on, at least in that part of the supply chain. So this is happening; there are different countries that are doing it. And also e-waste trade, because shipments have to enter and exit your borders, so at least in those transactions you can start to have some control. Now, one issue: classification. Some shipments are not classified as e-waste, also because most of the time it's illegal (there are bans on export, bans on import), so they are sometimes wrongly declared as electronics. That is where, of course, you will need more enforcement. But yeah, we are also thinking about it. So with Ms. Alaa and Dr. Saeed, we are trying to tackle this too, and maybe share some lessons learned in 2025. So, conscious of time, thank you for that; I hope we can also have a deeper discussion later on. Now, Dr. Saeed told me that we are a little bit late. Italian style, I'm talking a lot. So now, we are at the IGF, so yes, now you can take out your phone. I think some have already anticipated this moment; I already have my phone out. Slido: you can scan this code and access it. Ideally it will not fail; that would be magic. And the first question is pretty simple: I think everybody knows the country they are from. Now, if you are from one country and you're representing another one, it's up to you what you want to put. And we will see the different results. I'm really happy that we see different countries represented here: Thailand, Maldives, Tonga, Netherlands. Maybe I know who is from the Netherlands, and Saudi. We have five participants; I think in this room there are a little bit more than five, so I encourage everyone, if possible, to participate. It's anonymous, as you can see. So this is really to get started; later on, we will try to connect the different solutions and different issues. Sorry. Two votes. Two from the Netherlands. Good. Rwanda, Tanzania. 
Perfect. OK, and you can join the poll later as well; there will be a couple of questions. So the next question is: which sector are you representing? Private sector, public sector, or social/third sector? This will help us understand whether you're more interested in policies, in solution implementation, or in awareness and collaboration for different kinds of impact. Great. OK, we have the public sector, which is almost half of the audience, and then also the private sector. The social sector is a little bit smaller. So here, since there are policymakers on one side and businesses on the other, I really encourage you to discuss, because that is where we see the most interesting debate. Of course, policymakers need to arrive at some compromises in defining the regulations, but then you have here some of the businesses that really have to implement them, right? And you're from different countries, so it's not that they are pointing at you specifically. So, great. So, e-waste: if you have to think of one or two words about e-waste, after these 40 minutes of discussion, what do you think? Probably there will be some repetitions. This is really to test which part of e-waste you are more concerned or passionate about, if you want. I like "infinity loops". Wow, the author of "infinity loops" needs to do this type of workshop, because that wording is very, very good for awareness campaigns. And: used electronic material, collection, environmental justice. I like that: just transition, environmental justice, expand lifespan. As I was mentioning, it's not just a matter of recycling; we need to extend the life of the devices. Great. So: reuse, recycle. Great. Perfect. So now we want to test the issue: how important do you think e-waste is in your country? 
And it’s not that you have to say, ah, it’s horrible, because I did the presentation, right? If you also feel that it’s tackling or there’s not enough volume, really here. So it’s not yet significant, but it’s increasing and moderately significant. So you see, after this presentation, at least I think it’s not that everybody thought, OK, it’s super significant. So I think that this is a good kind of takeaway that probably the urgency is not there yet, either because of facts or perception. Then let’s see. Of course, we need data to arrive at some conclusion, but this we will probably for each country will understand case by case. So yeah, it’s not significant, but it will come. So better to be prepared and proactive rather than reactive. Right. So here about collaboration. Are you collaborating or not? Because when we were in other workshop, this was a big debate, like super complex. I want to collaborate, but I don’t know with whom. Other are like, no, no, I base my business on collaboration because it’s very complex and I need to be interrelated with other organization in order to be successful. Yeah, as also other workshop, like I don’t collaborate, but it would be great to do it. And this is also for the policymakers, right? Because, for example, in Ghana, they are kind of mature in terms of policies. they have a lot of discussions with the informal sector and private sector. Now there are critics that say that like they are they did a lot of policies but then the private sector is not able to implement or it cannot enforce it right but there is this kind of culture of different workshop different working group to enable the collaboration. On the other there’s other countries we’re like for example with Oman that they are still starting to work with the private sector. 
Okay, great. So you want to start, and you have a lot of people here who are potentially passionate about it. Okay, probably we will not do a waste trade between, say, Tanzania and the Netherlands; that would probably not be the most effective, also in terms of emissions impact. But still, feel free afterwards to find each other, get to know each other, and see if there is any potential option. Maybe just going back to "I have already explored and activated some": I think it says a lot that no one has many collaborations active, right? This tells a lot about the maturity of the system and how much we are still working in silos. Not sure if someone wants to talk about a collaboration they have set up so far, and with whom, and mainly why, because the driver is important. Why are you doing it? Is it because of economic profitability? Is it operational, because you need feedstock? Or is it about designing the solution together? No volunteer? Okay, maybe later on someone will be adventurous. Now, barriers. Barriers to scaling up collaboration. We said, okay, maybe I want to start, maybe I want to do more (I think there was at least one person who said "I'm not interested"), but for those who would like to really scale up collaboration: why are you not doing it? Is it because you don't have funds, or because there is no financial mechanism that helps you collaborate? Is there a lack of awareness or policies? Do you lack data, so you don't know where to turn? Yeah, I think here "lack of infrastructure and ICT enablers" doesn't seem to be... okay, it's jumping up, so I will maybe hold the conclusions until everybody has replied. 
Just to comment on what we have seen: financial drivers or mechanisms are not really aligned with the profitability of the value chain. A comment that we had in London last week was that, financially, it's not really profitable to, for example, reuse and repair: it costs more to reuse and repair, so what you do is just recycle. And that is something that, as a government, you need to adjust, because you want to close the loop early on. And again, if someone wants to raise their hand for questions or comments, really, please do. Okay, so, awareness. I think, as with the first comment here, awareness is the first barrier, together with investment. And yeah, that is where, around the world, there is still a lot to do. So then, I think we have just a couple more questions and then we can move on to the digital divide. What potential initiatives would you recommend? And here, if it's too long to write, we can also just open the floor for discussion. Just to give some examples: again, there were collaborations in KSA where the government worked with the social sector, which would collect the devices, and the government would be the one to guarantee that privacy is respected, and then would also connect with the producers, right? Or for the telco companies, to ensure that they also start doing awareness with their consumers, to tell them: okay, you have a device, a modem; bring it back and I will give you a new one. So that is one collaboration that we saw in KSA. Okay, so here, potential solutions or potential initiatives. One solution is, yeah: collaborate, collaborate, and collaborate. Another is about the reward system, and that is interesting because it ties back to the financial side, the second challenge that we discussed. So, rewards: as always, in this economy you need to have a reward in order to really be motivated, also because in the end you need financial sustainability, as you will not do it as a 
business. So, in the end, this is why EPR sometimes works: because you give the responsibility to the producers, at the front of the value chain. That is where they have to increase their prices or costs in order to manage the end of life, and that can help fund some reward when you give back the phone and get some money back, right? Of course, after three years of using a device, you cannot value it at $100, but still, that small reward can be pennies and for some people it's good enough. Like in Germany: you return the bottles and you get a few cents back; still, it's something that works for consumers. "Using AI to support that": I think we can also open the floor on how technologies and AI can help. I don't know who wrote that, if he or she wants to elaborate more. In the workshop in Singapore, there was a lot of discussion on how to integrate AI and other technologies in the warehouses of the different companies, to understand the value of products as they depreciate, to bring them back, et cetera, to optimize. And yeah, that was also quite a stimulating conversation. Doing policies, yes. Public awareness and behavioral change: launch campaigns to educate consumers, encourage a cultural shift towards repair and reuse. Great, thank you. Thank you, everyone. I think now I want to link this to the digital divide. I think we have already mentioned before how we can promote e-waste management and also promote the reuse of devices among the part of the population in need. So, if you think about reuse of electronics: you as a company, or you as a policymaker, why would you do it? Is it because you think about the environment? Because of the social side? Because of the brand and corporate social responsibility, so the governance? Or because of the economics? So that's ESG plus profitability. So here I really want to 
test the drivers: why would you reuse a device, or encourage reusing or repairing one? I think you can choose up to two. Yeah. So, one driver or reward, to connect with the previous question, is CSR or brand: the private sector can really use that to communicate its commitment to sustainability, right? Of course, it should not just be greenwashing; it should be substantiated by real initiatives. But I think, to be honest, it's good enough: if this helps you as a business to be positioned in the market as someone that cares about the environment, et cetera, I think it's more than fair to leverage that. I'm surprised here about the social impact: it's very prominent, at least a third chose it. Sometimes, if you talk to people about e-waste, the social divide doesn't come naturally to mind, right? But if you start talking about reuse, et cetera, they start thinking about it. I think in the south of the world the association is stronger; if you go to Europe, et cetera, I don't think it's very prominent there. So again, some cultural differences. Economic and job creation, together with environment, is the least chosen. So maybe, on one side, the environment is linked to awareness, right? You don't know what the impact of reuse is, and that is where I think awareness will definitely help. And economics and job creation: a lot of companies were saying that it's not profitable. Why should I do it, right? It's better to recycle. I pay to collect this device, I have to repair it, and then I probably have to ship it around the world, and the margins are not high, right? So there are no economic incentives for it. So do you use either policies or a reward system to make sure that people are incentivized? People and businesses, right? Great. 
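The deposit-style reward mechanism discussed above (EPR pricing end-of-life handling into the sale, with a small refund at return, like Germany's bottle deposit) can be sketched as a minimal model; every amount in it is hypothetical:

```python
# Minimal sketch of a deposit-refund (reward) scheme under EPR.
# All amounts are hypothetical; the point is the incentive structure:
# the producer prices in end-of-life handling at sale, and the
# consumer gets a small refund only when the device is returned.

DEPOSIT = 5.00      # added to the sale price (producer responsibility)
REFUND_RATE = 0.60  # share of the deposit paid back on return

def sale_price(base_price: float) -> float:
    """Price at checkout: base price plus the end-of-life deposit."""
    return base_price + DEPOSIT

def refund_on_return(returned: bool) -> float:
    """Reward paid when the device reaches a collection point."""
    return DEPOSIT * REFUND_RATE if returned else 0.0

print(sale_price(400.00))       # 405.0
print(refund_on_return(True))   # 3.0: small, but a concrete incentive
print(refund_on_return(False))  # 0.0: no return, no reward
```

In real deposit schemes, unredeemed deposits typically stay in the system and help finance the collection infrastructure, which is one way the "financial sustainability" requirement mentioned above can be met.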
So I’m moving on a little. I think we have two more questions, and then maybe we can divide into groups, but given the time, and since I don’t see a lot of sharing of ideas, we will see how to structure that. Here, rank your concerns and barriers about this topic. Are you concerned about the complexity of the value chain? About demand: I don’t know how many repaired devices I will sell, so why should I promote it, why should I go into this business? Some are concerned about quality: with a secondhand microwave, you may worry that it will explode in your hands, or your kids’ hands. Others mentioned export and import, because sometimes you cannot export or import used devices or waste; that is where there are barriers to bringing devices from Europe, for example, to the South. There are also barriers around critical materials: there is now a new regulation in Europe that promotes critical-material recycling within Europe, so even if a device could be reused or repaired, it may stay in Europe because the critical materials take priority. Okay, so here the complexity of the supply chain is the winner, and the second is quality and reliability. The complexity, I think, also comes from data, from awareness, and probably from the maturity of the market. For quality and reliability, you need standards, because without standards you don’t really know whether a device was recycled or repaired properly. For a phone you’re probably okay-ish, most will not explode in your hands, but for other devices you may not want to take the risk because of those concerns. And unethical trade and dumping. Okay, we go to the third one. This is something we have tested, especially for trade.
I can give away my device, and also my clothes, but then will they just go to the landfill anyway? Do I trust it? It’s a matter of trust, of transparency, of awareness of whether what is claimed will really happen. Great. So here again, maybe initiatives for cross-border e-waste. I think we can also talk on the microphone; it’s a lot to type. If you have already done something, or if you think there is an initiative you believe is valuable, what could it be? Between collectors and producers, for example. Maybe also thinking about Egypt, about the islands, or about KSA. What might it be? Maybe it’s lunchtime and everybody is looking forward to a break. I promise it’s almost done: the last ten minutes of brainpower. I think there is one participant typing, so we will wait for the brave one, and I promise I will not call you out; you’re safe. Oh, two? Okay. To share an initiative: UNITAR told us to go really on the ground and follow the informal sector, the pickers, to collect the data, which was one of the points mentioned before, and to really understand the complex supply chain. Great. So: global regulation, and then responsible recycling certification. That is nice; certification or standards will reassure you that the quality is there. Having collection points, running campaigns, and EPR. Yes, EPR is something that a lot of countries in the world are not doing. And on global regulation, to connect to that: at the DCO we are working on a framework for governments, targeting governments rather than the private sector, telling them the key components they need to put in place. So maybe ending with that. The key components of the framework are these. First, we believe governments need to think about regulation, policies, and strategies, based, of course, on data, because if you base them on a finger in the air, they will most probably not be that effective.
Second, financial instruments. Again, rewards: you need to understand what comes into your pocket and what goes out, and you need to balance it. In order to really foster and incentivize the private sector and the social sector, you need, for example, EPR strategies and EPR fees that make sense for the ecosystem. Then awareness and capability building: awareness for consumers but also for businesses, because sometimes businesses are not aware that a component is really e-waste and they end up dumping it. On capability building, the government also needs to learn about this; it is still a relatively new topic. For example, people in government need to understand vapes: there was a big debate last week in London about vapes being electronic devices, with rapidly increasing volumes, and legislators do not know how to manage that big topic. So again, capability building everywhere. And finally, infrastructure and technologies. You need infrastructure for recycling, and also for dismantling; it’s not just a matter of mechanical or chemical recycling, it’s also all the infrastructure for collection, dismantling, and so on. And technologies, ICT: in this forum, I think ICT can also be a great tool. With this, we were planning to have a discussion in groups, but given the time, the setup of the room, and the level of participation, maybe I can just give the microphone to either Ms. Alaa or Dr. Syed for their final remarks. Thank you everyone, you made it almost to one o’clock. Thank you very much for your active participation. I think it was very much a two-way communication, and we got a lot of information from you. And as

Syed Iftikhar: we told you, with this initiative we are co-creating the framework, and we will definitely incorporate your feedback into it. This framework is not only for the DCO member states; it is intended for all countries of the world. So thanks again for your feedback and participation. Thank you.

Alaa Abdulaal: Thank you everyone. Just one thing: if we can leave this session with the sense of responsibility that this challenge, and its solution, is in the hands of each one of us, then for us a great goal will have been achieved. This is a call to action to be part of the solution, to build a greener, more sustainable economic future for everyone. Thank you for joining us, and we hope to hear from you and receive your feedback. Thank you so much.

Alaa Abdulaal

Speech speed

150 words per minute

Speech length

1201 words

Speech time

479 seconds

Growing e-waste volumes pose environmental and health risks

Explanation

The rapid growth of e-waste is becoming a significant challenge due to increasing consumption of electronics and mobile devices. This issue has pressing environmental consequences and needs to be addressed as a shared responsibility.

Evidence

Billions of people are connected through the digital economy, leading to more consumers of electronics and mobile phones.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Mohamed Mashaka

Arianna Molino

Agreed on

E-waste is a growing environmental and social challenge

E-waste management requires shared responsibility across sectors

Explanation

Addressing the e-waste challenge is not solely the responsibility of governments or the private sector. It is a shared responsibility that involves individuals, governments, and businesses working together to tackle the issue.

Evidence

The speaker asks the audience to consider how many devices they own and what they do with old devices, emphasizing individual responsibility.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Syed Iftikhar

Arianna Molino

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Mohamed Mashaka

Speech speed

138 words per minute

Speech length

246 words

Speech time

106 seconds

Lack of consumer awareness about e-waste impacts

Explanation

One of the main challenges in addressing e-waste is the low level of awareness among citizens about its impact. Many people are not aware of how e-waste affects various initiatives and the environment.

Evidence

The speaker mentions that this is a challenge faced in Tanzania, an East African country.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Ayman Arbiyat

Arianna Molino

Agreed on

Consumer awareness and education are key to improving e-waste management

Need for comprehensive e-waste strategies and guidelines

Explanation

There is a need for countries to develop e-waste strategies and guidelines for the safe usage of ICT infrastructure. These strategies would help raise awareness and provide direction for managing e-waste effectively.

Evidence

The speaker suggests looking at case studies of countries that have implemented such strategies.

Major Discussion Point

E-waste management strategies

Differed with

Arianna Molino

Differed on

Approach to engaging informal sector workers in e-waste management

Ayman Arbiyat

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Data privacy concerns when disposing of devices

Explanation

Consumers have concerns about their personal data remaining on devices even after deletion. This fear of privacy breaches leads people to keep old devices at home instead of recycling them.

Evidence

The speaker mentions that people believe their phones still contain photos and personal information even after deletion.

Major Discussion Point

Consumer concerns and barriers

Agreed with

Mohamed Mashaka

Arianna Molino

Agreed on

Consumer awareness and education are key to improving e-waste management

Lack of accessible e-waste collection systems

Explanation

There is a lack of knowledge about where to send e-waste for proper disposal or recycling. This lack of accessible collection points hinders effective e-waste management.

Evidence

The speaker expresses uncertainty about where to send their own e-waste.

Major Discussion Point

Consumer concerns and barriers

Arianna Molino

Speech speed

134 words per minute

Speech length

6619 words

Speech time

2950 seconds

E-waste contributes to climate change and pollution

Explanation

E-waste has significant environmental impacts, contributing to CO2 emissions and pollution of air, land, soil, and water. This affects both the environment and human well-being.

Evidence

The speaker cites an estimate of 145 billion CO2 emissions related to e-waste.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Alaa Abdulaal

Mohamed Mashaka

Agreed on

E-waste is a growing environmental and social challenge

Promoting reuse and repair to extend device lifespans

Explanation

Extending the lifespan of devices through reuse and repair is an important strategy in e-waste management. This approach helps close the loop earlier in the product lifecycle and can create new markets for refurbished devices.

Evidence

The speaker mentions that redeploying just 1% of smartphones could help 50 million people.

Major Discussion Point

E-waste management strategies

Need for cross-border and regional collaboration on e-waste

Explanation

Regional collaboration is crucial for effective e-waste management. Countries can work together to share lessons learned, implement policies, and create efficient recycling systems.

Evidence

The speaker gives an example of potential collaboration between Saudi Arabia and Oman for specialized recycling of different types of e-waste.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Alaa Abdulaal

Syed Iftikhar

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Engaging informal sector workers in e-waste management

Explanation

The informal sector plays a significant role in e-waste management in many countries. Rather than dismantling this sector, efforts should focus on helping informal workers follow compliance rules and environmental safety practices.

Evidence

The speaker mentions that 11 million informal workers are affected by e-waste management.

Major Discussion Point

Collaboration and stakeholder engagement

Differed with

Mohamed Mashaka

Differed on

Approach to engaging informal sector workers in e-waste management

Quality and reliability concerns with refurbished electronics

Explanation

Consumers have concerns about the quality and reliability of refurbished or repaired electronic devices. This perception can be a barrier to promoting the reuse of electronics.

Evidence

The speaker mentions that some people worry about refurbished devices malfunctioning or being unsafe.

Major Discussion Point

Consumer concerns and barriers

Agreed with

Mohamed Mashaka

Ayman Arbiyat

Agreed on

Consumer awareness and education are key to improving e-waste management

Complexity of e-waste supply chains

Explanation

The e-waste supply chain is complex, involving multiple stakeholders and processes. This complexity can be a barrier to effective e-waste management and collaboration.

Evidence

The speaker notes that this was identified as the top concern in a poll conducted during the session.

Major Discussion Point

Consumer concerns and barriers

Implementing extended producer responsibility policies

Explanation

Extended Producer Responsibility (EPR) policies are an important tool for managing e-waste. These policies make producers responsible for the entire lifecycle of their products, including end-of-life management.

Evidence

The speaker mentions that EPR is being implemented in various countries and can provide incentives for proper e-waste management.

Major Discussion Point

Policy and regulatory frameworks

Syed Iftikhar

Speech speed

130 words per minute

Speech length

653 words

Speech time

301 seconds

Importance of public-private partnerships for e-waste initiatives

Explanation

The Digital Cooperation Organization (DCO) emphasizes the importance of collaboration between governments, businesses, and individuals in addressing e-waste challenges. This multi-stakeholder approach is crucial for developing comprehensive solutions.

Evidence

The speaker mentions that DCO has 16 member states and over 40 observers from international organizations, private sectors, and NGOs.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Alaa Abdulaal

Arianna Molino

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Developing financial incentives for proper e-waste management

Explanation

Financial instruments and incentives are crucial for fostering effective e-waste management. Governments need to understand the financial implications and balance costs and benefits to encourage private and social sector participation.

Evidence

The speaker mentions the need for EPR strategies and fees that make sense for the ecosystem.

Major Discussion Point

Policy and regulatory frameworks

Audience

Speech speed

151 words per minute

Speech length

723 words

Speech time

285 seconds

Small island nations face unique e-waste challenges due to isolation

Explanation

Small Pacific island countries face unique challenges in e-waste management due to their geographic isolation and limited resources. This situation requires innovative solutions for recycling and reprocessing e-waste locally.

Evidence

The speaker from Tuvalu mentions limited infrastructure and resource constraints as challenges.

Major Discussion Point

E-waste challenges and impacts

Importance of data collection and measurement for e-waste

Explanation

Accurate data collection and measurement of e-waste volumes are crucial for effective management and policy-making. Current figures are often estimates, which can hinder the development of targeted strategies.

Evidence

The speaker from Egypt notes that all figures presented in the session were estimated and global, suggesting a need for more precise local data.

Major Discussion Point

E-waste management strategies

Leveraging AI and technology for e-waste management

Explanation

Artificial Intelligence and other technologies can play a significant role in supporting e-waste management efforts. These technologies can help optimize processes and improve decision-making in the e-waste value chain.

Evidence

A participant suggested using AI to support e-waste management in response to a question about potential initiatives.

Major Discussion Point

E-waste management strategies

Need for e-waste-specific regulations and standards

Explanation

There is a need for specific regulations and standards governing e-waste management. These would help ensure proper handling, recycling, and disposal of electronic devices and components.

Evidence

A participant suggested implementing global regulations and responsible recycling certifications.

Major Discussion Point

Policy and regulatory frameworks

Harmonizing cross-border e-waste regulations

Explanation

There is a need for harmonized regulations across borders to facilitate effective e-waste management on a global scale. This would help address challenges related to e-waste trade and ensure consistent standards across countries.

Evidence

A participant suggested the need for global regulations in response to a question about cross-border e-waste initiatives.

Major Discussion Point

Policy and regulatory frameworks

Agreements

Agreement Points

E-waste is a growing environmental and social challenge

Alaa Abdulaal

Mohamed Mashaka

Arianna Molino

Growing e-waste volumes pose environmental and health risks

Lack of consumer awareness about e-waste impacts

E-waste contributes to climate change and pollution

Multiple speakers emphasized the increasing volume of e-waste and its negative impacts on the environment and human health, highlighting the urgent need for action.

Multi-stakeholder collaboration is crucial for effective e-waste management

Alaa Abdulaal

Syed Iftikhar

Arianna Molino

E-waste management requires shared responsibility across sectors

Importance of public-private partnerships for e-waste initiatives

Need for cross-border and regional collaboration on e-waste

Speakers agreed that addressing e-waste challenges requires collaboration between governments, businesses, individuals, and across borders.

Consumer awareness and education are key to improving e-waste management

Mohamed Mashaka

Ayman Arbiyat

Arianna Molino

Lack of consumer awareness about e-waste impacts

Data privacy concerns when disposing of devices

Quality and reliability concerns with refurbished electronics

Multiple speakers highlighted the need for increased consumer awareness and education to address concerns and promote proper e-waste disposal.

Similar Viewpoints

Both speakers emphasized the importance of involving multiple stakeholders, including the informal sector, in e-waste management efforts.

Alaa Abdulaal

Arianna Molino

E-waste management requires shared responsibility across sectors

Engaging informal sector workers in e-waste management

Both speakers highlighted the need for comprehensive strategies and incentives to guide and encourage proper e-waste management practices.

Mohamed Mashaka

Syed Iftikhar

Need for comprehensive e-waste strategies and guidelines

Developing financial incentives for proper e-waste management

Unexpected Consensus

Importance of data collection and measurement for e-waste

Audience

Arianna Molino

Importance of data collection and measurement for e-waste

Complexity of e-waste supply chains

There was unexpected consensus on the critical need for accurate data collection and measurement in e-waste management, with both the audience and speakers recognizing its importance for effective policy-making and understanding the complex e-waste supply chain.

Overall Assessment

Summary

The main areas of agreement included recognizing e-waste as a growing challenge, the need for multi-stakeholder collaboration, the importance of consumer awareness and education, and the necessity of comprehensive strategies and incentives for proper e-waste management.

Consensus level

There was a moderate to high level of consensus among speakers on the key challenges and necessary actions for e-waste management. This consensus suggests a shared understanding of the issues, which could facilitate the development of coordinated strategies and policies to address e-waste challenges globally.

Differences

Different Viewpoints

Approach to engaging informal sector workers in e-waste management

Arianna Molino

Mohamed Mashaka

Engaging informal sector workers in e-waste management

Need for comprehensive e-waste strategies and guidelines

While Arianna Molino suggests integrating informal workers into the e-waste management system, Mohamed Mashaka emphasizes the need for comprehensive strategies and guidelines, potentially overlooking the role of the informal sector.

Unexpected Differences

Focus on data collection and measurement

Arianna Molino

Audience

Complexity of e-waste supply chains

Importance of data collection and measurement for e-waste

While Arianna Molino focuses on the complexity of e-waste supply chains, an audience member unexpectedly emphasizes the importance of accurate data collection and measurement, highlighting a potential gap in the discussion.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to engaging informal workers, the balance between promoting reuse and addressing quality concerns, and the prioritization of data collection in e-waste management strategies.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing e-waste challenges, there are differing perspectives on specific strategies and priorities. These differences highlight the complexity of e-waste management and the need for multifaceted approaches tailored to different contexts.

Partial Agreements

Partial Agreements

Both speakers agree on the need for better e-waste management, but while Alaa Abdulaal emphasizes shared responsibility across sectors, Ayman Arbiyat focuses on the specific issue of accessible collection systems.

Alaa Abdulaal

Ayman Arbiyat

E-waste management requires shared responsibility across sectors

Lack of accessible e-waste collection systems

Both recognize the importance of reuse and repair in e-waste management, but while Arianna Molino promotes it as a strategy, the audience raises concerns about the quality and reliability of refurbished devices.

Arianna Molino

Audience

Promoting reuse and repair to extend device lifespans

Quality and reliability concerns with refurbished electronics

Similar Viewpoints

Both speakers emphasized the importance of involving multiple stakeholders, including the informal sector, in e-waste management efforts.

Alaa Abdulaal

Arianna Molino

E-waste management requires shared responsibility across sectors

Engaging informal sector workers in e-waste management

Both speakers highlighted the need for comprehensive strategies and incentives to guide and encourage proper e-waste management practices.

Mohamed Mashaka

Syed Iftikhar

Need for comprehensive e-waste strategies and guidelines

Developing financial incentives for proper e-waste management

Takeaways

Key Takeaways

E-waste volumes are growing rapidly, posing environmental and health risks globally

There is a lack of consumer awareness about e-waste impacts and proper disposal

E-waste management requires shared responsibility across government, private sector, and individuals

Collaboration and partnerships are crucial for effective e-waste management

Data collection and measurement are important for developing effective e-waste strategies

Promoting device reuse and repair can help extend lifespans and reduce e-waste

Small island nations face unique e-waste challenges due to geographic isolation

Consumer concerns like data privacy and device quality are barriers to proper e-waste disposal

Policy frameworks and financial incentives are needed to promote proper e-waste management

Resolutions and Action Items

DCO to develop a framework for governments on key components of e-waste management

Participants encouraged to take personal responsibility for proper e-waste disposal

DCO to incorporate participant feedback into their e-waste management framework

Unresolved Issues

How to effectively engage and regulate the informal e-waste sector

Specific strategies for small island nations to manage e-waste given their unique challenges

How to address the lack of profitability in device repair and reuse

Methods to improve data collection and measurement of e-waste volumes

How to harmonize cross-border e-waste regulations globally

Suggested Compromises

Balancing data privacy concerns with the need for proper e-waste disposal through secure data erasure services

Developing standards and certifications for refurbished electronics to address quality concerns

Implementing extended producer responsibility policies while ensuring economic viability for businesses

Thought Provoking Comments

One of the areas where I think we are really facing a challenge is literacy among our citizens, because most citizens are not aware of the impact of this e-waste on the different initiatives that they are doing.

speaker

Mohamed Mashaka

reason

This comment highlights the critical issue of public awareness and education regarding e-waste, which is often overlooked in technical discussions.

impact

It shifted the conversation to focus more on the importance of public awareness campaigns and strategies for educating citizens about e-waste impacts.

Are there any cost-effective recycling solutions suitable for small-scale and more decentralized systems in the Pacific? Also, can e-waste be repurposed or up-cycled locally to create some economic opportunities?

speaker

Noia

reason

This question brings attention to the unique challenges faced by small island nations and introduces the idea of local economic opportunities through e-waste management.

impact

It broadened the discussion to consider solutions for different geographical contexts and economic scales, prompting thoughts on decentralized and localized approaches to e-waste management.

Is there any intention in your organization to measure e-waste? Especially as I noticed that all the figures presented are estimates, and global ones.

speaker

Dr. Nagwa

reason

This comment addresses a crucial gap in e-waste management – the lack of accurate measurement and data collection.

impact

It led to a discussion about the importance of data in policy-making and implementation, highlighting the need for better measurement methodologies in the e-waste sector.

Infinity Loops

speaker

Anonymous participant (via Slido)

reason

This concise response encapsulates the concept of circular economy in e-waste management, showing a deep understanding of the topic.

impact

While brief, this comment reinforced the importance of viewing e-waste management as a continuous cycle rather than a linear process, influencing the subsequent discussion on reuse and recycling.

Complexity of the supply chain is the winner. And the second is quality and reliability.

speaker

Arianna Molino

reason

This summary of participant responses highlights the key challenges in e-waste management as perceived by the audience.

impact

It focused the discussion on addressing supply chain complexities and quality concerns in e-waste management, shaping the direction of potential solutions discussed.

Overall Assessment

These key comments shaped the discussion by broadening its scope from technical aspects to include crucial elements like public awareness, geographical context, data accuracy, circular economy principles, and supply chain challenges. They prompted a more holistic view of e-waste management, considering various stakeholders and contexts. The discussion evolved from a general overview to addressing specific challenges and potential solutions, emphasizing the need for collaborative, data-driven, and context-specific approaches to e-waste management.

Follow-up Questions

What are effective e-waste collection systems?

speaker

Ayman Arbiyat

explanation

Understanding effective collection systems is crucial for addressing the e-waste problem at its source.

How can privacy concerns related to data on electronic devices be addressed in e-waste collection?

speaker

Ayman Arbiyat

explanation

Addressing privacy concerns is essential for encouraging people to recycle their electronic devices.

What are cost-effective recycling solutions suitable for small-scale and decentralized systems in small island countries?

speaker

Noia from Tuvalu

explanation

Finding appropriate solutions for small island nations is important for global e-waste management.

How can e-waste be repurposed or up-cycled locally to create economic opportunities in small island countries?

speaker

Noia from Tuvalu

explanation

Exploring local economic opportunities from e-waste can incentivize better management practices.

How can we improve the measurement and data collection of e-waste globally and regionally?

speaker

Dr. Nagwa from Egypt

explanation

Accurate data is crucial for understanding the scale of the problem and measuring the impact of policies.

How can artificial intelligence be used to support e-waste management?

speaker

Unidentified participant (via Slido)

explanation

Exploring the potential of AI in e-waste management could lead to more efficient and effective solutions.

What are effective awareness campaigns and behavioral change strategies to encourage e-waste recycling and reuse?

speaker

Unidentified participant (via Slido)

explanation

Public awareness and behavior change are crucial for improving e-waste management practices.

How can we develop and implement global regulations and responsible recycling certifications for e-waste?

speaker

Unidentified participant (via Slido)

explanation

Standardized regulations and certifications could improve the quality and reliability of e-waste recycling globally.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #209 Multistakeholder Best Practices: NM, GDC, WSIS & Beyond


Session at a Glance

Summary

This discussion focused on multi-stakeholder best practices in internet governance, particularly in the context of recent initiatives like NetMundial Plus10 and the Global Digital Compact. Participants explored the challenges and opportunities in strengthening multi-stakeholder engagement using the Internet Governance Forum (IGF) and other processes.


Key points included the need for balanced representation and meaningful participation from all stakeholder groups, including governments, civil society, the private sector, and technical communities. Panelists emphasized the importance of inclusive processes that give voice to diverse perspectives, especially from developing countries and underrepresented groups.


The discussion highlighted tensions between multilateral and multi-stakeholder approaches, with some noting the challenges governments face in global forums. Participants stressed the need for coherent stakeholder processes within groups to enable effective collaboration across sectors.


Several speakers pointed out that multi-stakeholder processes require significant resources and time to be truly effective. The importance of transparency, clear guidelines, and mechanisms to ensure authentic engagement was emphasized.


The role of the IGF in capturing learning and applying best practices was explored, with suggestions to better utilize IGF messages in other forums and improve host country selection. Panelists also discussed how to make processes like the Global Digital Compact more inclusive while recognizing the challenges of balancing different stakeholder interests.


Overall, the discussion underscored the complexity of multi-stakeholder internet governance and the ongoing need to refine and improve collaborative approaches to address evolving digital policy challenges.


Keypoints

Major discussion points:


– The role and effectiveness of multi-stakeholder processes in internet governance


– Challenges and opportunities for improving multi-stakeholder collaboration


– The relationship between multilateral and multi-stakeholder approaches


– How to make the Internet Governance Forum (IGF) more impactful and inclusive


– Implementing lessons from processes like NetMundial and the Global Digital Compact


The overall purpose of the discussion was to examine best practices for multi-stakeholder engagement in internet governance, particularly in light of recent processes like NetMundial+10 and the Global Digital Compact. The goal was to identify gaps, challenges, and opportunities to strengthen multi-stakeholder approaches, especially through the IGF.


The tone of the discussion was largely constructive and reflective. Participants acknowledged both the value and limitations of multi-stakeholder processes. There was a sense of cautious optimism about improving these approaches, balanced with frank discussion of challenges. The tone became more solution-oriented towards the end as participants suggested concrete ways to enhance the IGF and other multi-stakeholder initiatives.


Speakers

– Anriette Esterhuysen: Chair of the Global Network Initiative


– Bruna Martins: Civil society representative, member of the IGF Multi-Stakeholder Advisory Group


– Isabelle Lois: Representative from Ofcom, the Swiss government’s office on communications


– Flavia Alves: Director of International Organizations for META


Additional speakers:


– Tijani Benjama: Civil society representative from Tunisia


– Lina: Representative from Search for Common Ground and the Council on Tech and Social Cohesion


– Dana Kramer: Representative of Youth IGF Canada


– Manal Ismail: Works at the National Telecom Regulatory Authority of Egypt, former chair of ICANN’s Governmental Advisory Committee


– Aziz Hilali: From Morocco


– Arjun Singh Vizoria: Founder of Vizoria Foundation, a civil society organization in India


Full session report

Multi-stakeholder Best Practices in Internet Governance: A Comprehensive Analysis


This discussion focused on multi-stakeholder best practices in internet governance, particularly in the context of recent initiatives like NetMundial Plus10 and the Global Digital Compact. Participants explored the challenges and opportunities in strengthening multi-stakeholder engagement using the Internet Governance Forum (IGF) and other processes.


Role and Effectiveness of Multi-stakeholder Processes


The participants unanimously agreed on the importance of multi-stakeholder processes in internet governance. Bruna Martins emphasized that these processes bring diverse perspectives together and serve to bring civil society voices to the table. Isabelle Lois argued that governments should use their convening power to ensure inclusive processes. Anriette Esterhuysen stressed that multi-stakeholder processes need to survive even when there is serious disagreement, highlighting the need for resilience in these approaches.


However, the effectiveness of current multi-stakeholder processes was a point of contention. Flavia Alves critiqued the Global Digital Compact process for insufficient non-governmental participation, while Anriette Esterhuysen viewed it as an attempt by multilateral institutions to be more inclusive, despite imperfections. This difference in perspective underscores the ongoing challenges in implementing truly effective multi-stakeholder approaches.


NetMundial Plus10 and Sao Paulo Guidelines


Several speakers highlighted the importance of the NetMundial Plus10 initiative and the Sao Paulo guidelines as frameworks for effective multi-stakeholder engagement. Bruna Martins suggested that these guidelines provide a roadmap for implementation and could serve as a ‘litmus test’ for evaluating the effectiveness of multi-stakeholder processes. The Sao Paulo guidelines, in particular, were noted for their emphasis on inclusivity, transparency, and accountability in internet governance processes.


Challenges and Opportunities for Improvement in the IGF


Participants identified several areas for improvement in the IGF. Isabelle Lois argued that IGF messages should be better utilized in other forums and decision-making processes, highlighting the need to increase the impact of these discussions. Bruna Martins emphasized the importance of selecting host countries that ensure safety and inclusivity for all participants. Flavia Alves stressed the need for mapping evolving issues to keep multi-stakeholder processes relevant, addressing the need for adaptability in these approaches.


An audience member raised an important question about the participation of small-scale organizations in the IGF, highlighting the challenges faced by smaller entities in engaging with global internet governance processes. This sparked a discussion on the need for more inclusive and accessible participation mechanisms.


Relationship Between Multi-stakeholder and Multilateral Processes


A key point of discussion was the tension between multilateralism and multi-stakeholderism, particularly in processes like the Global Digital Compact. Bruna Martins highlighted this tension, while Anriette Esterhuysen noted that governments are more comfortable with multi-stakeholder approaches nationally than globally. An audience member pointed out the gradual opening up of previously closed governmental processes, as seen in ICANN, suggesting potential for progress in this area.


Manal Ismail and Isabelle Lois emphasized the crucial role of governments in multi-stakeholder processes, noting that their involvement is essential for implementing outcomes and ensuring inclusive participation.


Thought-Provoking Insights


Several comments during the discussion challenged conventional thinking and added nuance to the conversation. Anriette Esterhuysen cautioned against romanticizing past multi-stakeholder processes, reminding participants of the difficulties in reaching consensus even within stakeholder groups. This insight highlighted the complexity of these processes and the need for coherent internal processes within stakeholder groups.


Lina from Search for Common Ground raised a provocative question about the honesty of discussions regarding the relationships between governments, the UN, civil society, and big tech. She suggested that litigation, regulations threatening fines, or extreme reputational damage were the primary drivers of change, challenging the effectiveness of multi-stakeholder forums.


Isabelle Lois offered an important perspective on inclusion, emphasizing that including more stakeholders does not remove power from those already involved. This reframing of inclusion as a non-zero-sum game could potentially make the concept more palatable to those resistant to change.


Unresolved Issues and Future Directions


Despite the productive discussion, several issues remained unresolved. These include balancing multilateral and multi-stakeholder approaches in global internet governance, ensuring authentic multi-stakeholder processes rather than just rhetoric, addressing power imbalances between different stakeholder groups, making multi-stakeholder processes more resource-efficient while maintaining effectiveness, and better integrating perspectives from developing countries and smaller organizations in global processes.


The discussion generated several suggestions for future action, including using the Sao Paulo multi-stakeholder guidelines as an evaluation tool, better utilizing IGF messages in other forums, reviewing and refining the IGF’s intersessional work models, and starting early preparations for the upcoming review of the IGF mandate.


Conclusion


The discussion underscored the complexity of multi-stakeholder internet governance and the ongoing need to refine and improve collaborative approaches to address evolving digital policy challenges. While there was broad agreement on the importance of multi-stakeholder processes, the conversation revealed nuanced perspectives on their implementation and effectiveness. Moving forward, the challenge lies in translating these insights into concrete improvements in multi-stakeholder engagement, ensuring that these processes remain relevant, inclusive, and impactful in shaping the future of internet governance.


Session Transcript

Anriette Esterhuysen: We are starting two minutes late, which is unacceptable, but I hope you forgive us. It’s day three of the IGF, so I think we are fading a little bit. My name is Anriette Esterhuysen. This session is organised by the Global Network Initiative, along with other partners. None of them are here, but I hope we do them justice. The topic that we’ll be discussing is one that I’ve certainly heard in multiple sessions at this IGF: multi-stakeholder best practices, and particularly how we can understand multi-stakeholder best practices in the context of NetMundial Plus10, which took place in May this year and which produced this document. It’s not an official document produced by the UN, but it’s a document that was created in a bottom-up way, which is, I think, really owned by all the people that were part of that process. I’m going to ask some of our panellists. I’m going to stray a little bit from the script; I hope I have your permission. And then also the Global Digital Compact, this new process, which became formalised at the Summit of the Future in September this year in New York, and which gives very strong endorsement to the multi-stakeholder approach. And then, of course, the World Summit on the Information Society, the UN process that I think consolidated and mainstreamed the idea of multi-stakeholder collaboration as being, I think, generally a good idea. But in the case of the Information Society, the Internet and digitalisation, it’s kind of non-negotiable. You can’t actually do anything really effectively in development, digital, human rights and inclusion if you do not have effective collaboration and participation from the private sector, the technical community, governments and civil society, and, though we don’t always have them as a separate group, the academic and research community, think tanks and researchers all around the world. So, on what we want to achieve: I want to check if Ramsha from GNI is online yet.
Ramsha, I’m looking for you online. I don’t see you yet. But just to emphasize what the goal of this workshop is: we really want to look at where the gaps are, what the key challenges are, and what opportunities we can take from all these processes that I’ve just mentioned to really strengthen multi-stakeholder engagement, using the IGF and coordinating and synergizing how we strengthen multi-stakeholder engagement. I think what the NetMundial guidelines told us is that it’s not just in this multi-stakeholder arena that we need to strengthen our processes. Oh, fantastic, Flavia, welcome. It’s also in the multilateral space. But just to get us started, and I think also to make sure there’s a level playing field: I will open to the audience, and I want people to raise their hands. We are on day three of the IGF; you’ve discussed many of these issues. So I’m going to ask people in the room to interrupt. If you want to say something at any point, put up your hand. As long as you’re brief, it’s absolutely fine, so that we can have very dynamic interaction between the panelists and the room, assuming that’s okay. But let’s start, and I want to ask Bruna. I said earlier that NetMundial Plus 10 and the Sao Paulo guidelines have very strong ownership from those who created them. They might also have had gaps, but can you just tell everyone in very brief terms: what was NetMundial Plus 10? What are the Sao Paulo guidelines? And why is there, on the one hand, strong ownership, but on the other hand, also a feeling that it’s not official enough?


Bruna Martins: Thanks, Anriette, and thanks for the invitation. I think that looking back at NetMundial Plus 10, it was a community-oriented and community-steered process, right? NetMundial was multistakeholder from its very beginning. It’s an initiative that was stewarded by the Brazilians, by NIC.br, and in the previous edition by the government; this year we again had huge support from the government. In terms of the ownership, I think it’s because it set for itself the challenge of addressing all of the gaps we perceived from the process and attempted to improve on that. The Sao Paulo guidelines are a set of principles and process steps, a how-to for effective implementation of multistakeholderism in internet governance and digital policy, right? And in doing so, we must also look forward to implementing openness, inclusiveness, and agility in internet governance, as well as the need for all stakeholders to be well-informed. So, in very brief words, I think the ownership comes from that. It is a community bottom-up initiative. It relies a lot on the success of the first edition of the initiative. And, last but not least, it aims at addressing all of the gaps that we saw throughout the GDC process. Thanks.


Anriette Esterhuysen: Thanks very much for that, Bruna. And I didn’t introduce my panel; I was waiting for Flavia. Bruna is from civil society, and she can tell you more about herself. She’s a member of the IGF Multi-Stakeholder Advisory Group, so one of the people that organized this event that we are part of, and she also served on the High-Level Executive Committee of NetMundial Plus 10. So, to jump into the substance of multi-stakeholder collaboration. I mean, Isabelle, it’s become quite a buzzword. We talk about it. Some governments are more explicit about how they support it and how they use it. Some governments are more cautious about how and when they use it. Isabelle is with Ofcom, the Swiss government’s office on communications, so very much right inside government, but very active in multiple multi-stakeholder processes. From your perspective, what do you think governments should do to deliver on this promise and potential of multi-stakeholder processes? And, certainly in your experience, what are they doing well, and what do you think they’re not doing well?


Isabelle Lois: Thank you, Anriette. That is a great question and a very difficult question to answer. I think governments can and should play a vital role in assuring that we have inclusive and open processes. I think the main point, and our main responsibility as governments, is to ensure that all of the relevant voices are being heard, listened to, and taken into account. But the really difficult part is how we can do that, and where our capacity to do that lies. Governments often have a very strong convening power. We can sometimes set the terms on who will be included in a room, in a discussion, and this is a power that we should use to make sure that everybody can be part of the conversation. So that means, on one part, being present in multi-stakeholder spaces, for example at the IGF, be that the international IGF, the regional ones, or the national ones; being very active in those processes; making sure that governments are also there, and that it’s not just other stakeholders talking between themselves; being part of the conversation; but also making sure, within other structures and other forums where there is a space and a need to include stakeholders, that they are all in the room with us. So I think that would be, for me, the main role and the main possibility for governments, and of course this is much easier said than done, but there is a way forward. I think at least Switzerland tries its best to include all stakeholders in our discussions and conversations, making sure that if we are planning a panel, we are not just inviting governments to speak, and that we are using our convening power as best as we can. There is more that we can do, and I think we should push for that and include that in all processes. That is the main point I would make here.


Anriette Esterhuysen: I want to ask you just a follow-up question, and anyone else is welcome to respond as well. What is the difference between a government facilitating multi-stakeholder cooperation and a government living up to its constitutional obligations for public participation in policymaking?


Isabelle Lois: I think that is a very good question. A lot there comes down to how a government includes this perspective nationally, and how it makes sure that this is also included internationally. And I think this is a distinction that is not always very easy to navigate for governments, within their country and then internationally. In Switzerland we have a very strong will and we have a lot of public participation at the national level. We do a lot of consultations, we have a semi-direct democracy, so we vote on many issues and we have everyone being part of the conversation at the national level. This is something we strive and work for. And then on the international plane, this is where it becomes a bit more complicated. Because, I mean, first we have to find an agreement between governments and between stakeholders: who should be included, how should we include them, what is meaningful, how do we make sure everybody’s in the room. And this is where we think that the Sao Paulo Multi-Stakeholder Guidelines are a very useful tool to not just talk the talk, but actually walk the walk. They are a way for us to see what the main questions are that we should ask ourselves, and how we can actually use these principles that we find valuable and necessary. So whom should we include, how should we include them, and how do we make sure that if there is a power imbalance, we have thought about it and tried to mitigate it as best as possible. Of course, it will never be perfect, but we can do better. And now we have a sort of roadmap on how we can do better. I think this is very useful, and this is why it’s such an important document to read and to include in our processes.


Anriette Esterhuysen: Thanks very much for that response, Isabelle. I think, speaking from civil society and as someone who does a lot of, not all, but a lot of my work in Africa, what you said actually mirrors my experience. Many governments who have some reservations about the multi-stakeholder approach don’t really have them about working at national level. They’re much more comfortable there; they work very closely with the private sector at national level. Civil society sometimes thinks they work too closely with the private sector at national level. And they also collaborate with civil society and grassroots organizations. It’s when you get to a global forum that there’s more caution about that multi-stakeholder approach. And I think that’s exacerbated by the fact that many developing country governments already feel fairly disempowered in global arenas. And when they feel they have to not just be effective and influential in relation to countries that are much more powerful, rich and influential than they are, but also deal with the multi-stakeholder community, it is quite challenging. But Bruna, as a member of the MAG, what are the lessons? And the MAG is the Multi-Stakeholder Advisory Group. It’s supposed to be perfect. And it’s been going for a long time. This is, what, the 19th IGF? What do you think we can learn from the IGF? In applying the multi-stakeholder approach, what are we doing wrong, and what are we doing right?


Bruna Martins: Maybe I’ll start by saying that I think 2024 has been one of those inflection points, right? One of those years where the internet governance space was all sorts of crazy, or dynamic in that sense. Everyone was talking about the GDC and the Pact for the Future, what is going to happen with the IGF, how ICANN is going to react to those spaces, what happens with the IETF, and many of those things, and how all of those missions or questions would be integrated, right? We had a lot of meaningful processes, and we had all of them taking place at the same time. And I think that some of those spaces, like the GDC, in my personal opinion, have presented a rather serious risk for the way we do things at the IGF, which is bottom-up, multi-stakeholder, and ensuring everyone has a say and has a microphone above all, right? And coming back to the IGF, I would say that this is the main value of this space: everyone is here, gets to come here, or gets to join sessions remotely, given that the remote participation is working. And at the same time, this is a space that relies a lot on the diversity of perspectives, and not just in terms of the difference of opinions, but the difference in terms of backgrounds and expertise. This is a space where you hear people from the Pacific, from Brazil like me, or from Tanzania, talking about different aspects of internet governance. And to me, that’s one of the core aspects: the big diversity of stakeholders, perspectives, backgrounds, and so on. I would say that this is what makes the IGF one of the primary spaces for internet governance and digital cooperation related issues. Because over the course of almost 20 years, the space has been leveraging this vast community of experts and expertise in order to move forward and to evolve its model.
Back to the challenges, to put it more bluntly, I would say that the tension the GDC seems to have created between multilateralism and multi-stakeholderism is one of the challenges, right? And that is because there has always been somehow a clear ask from some member states for more siloed discussions or exclusive mechanisms. And the point is that we need to balance those two expectations. It’s something that the Sao Paulo multi-stakeholder guidelines try to do by sending some signals to the multilateral spaces, but one should not overcome the other. We should strive for balanced spaces in that sense. So maybe I’ll stop here, Anriette. Thanks.


Anriette Esterhuysen: Thanks very much. Did you want to add something? Oh, sorry. Thanks a lot. Flavia, let’s go to you. Flavia is from META, and META has really invested time and people in participating in many of these spaces. Picking up from what Bruna said about this being a year of, I guess you used the term, inflection point: we had the culmination of the Summit of the Future, the Global Digital Compact, we have the WSIS Plus 20 process well underway now, and of course we have the IGF. How have you participated as a company in these processes, and from your perspective, what works and what doesn’t work?


Flavia Alves: Sure. Thanks. Hi, everyone. I’m Flavia Alves, Director of International Organizations for META, and I have been doing internet governance and multi-stakeholder processes for the past 20 years. Of the 19 IGFs, I think I have been at half of them, so maybe a good chunk of IGFs. As META, we believe it’s important to have a level playing field, and that all stakeholders should be part of processes that deal with internet governance. Historically, and I agree completely with Bruna, we are at the peak of diverse multilateral and multi-stakeholder processes dealing with issues related to internet governance. If we go back to WSIS 2006, or WSIS Plus 10, or before WSIS Plus 10, there was a clear division between multistakeholderism and multilateralism, and between what we do in multistakeholderism as opposed to multilateralism. One of those things was internet governance, and that’s why the IGF was created. Internet governance issues were supposed to be treated on a level playing field where the technical community, civil society, the private sector, governments, and international organizations would all have a voice. Right now, we are dealing with processes where the private sector might not have had a voice, nor the technical community and civil society. We are looking forward to seeing what the UN Global Digital Compact implementation is going to be, but the reality, and I should have said this at the beginning, is that there is a difference between multilateral processes, where you take input from other stakeholders, and genuinely multistakeholder processes. In the case of multilateral processes, I believe that the Global Digital Compact could have had a little bit more participation from civil society, the private sector, and the technical community, and then been transparent on how those comments were taken up by the Global Digital Compact.
We participated in WSIS Plus 10 in the past, and there was an open process for consultations. There were consultations that were taken into account in a final document, and then a final document for folks to comment on. Everything is still on the UN website. We have dates and meetings. We were in the room during WSIS Plus 10. I would have hoped that the Global Digital Compact was similar, and I’m hoping WSIS Plus 20 will be similar. We now have an opportunity, and I think META is going to try as much as possible, together with the technical community and civil society, to be part of the WSIS Plus 20 process, to work with the co-facilitators to make sure there are consultations, that we are in the room, and that we are providing comments on the several documents that are going to come. And I think we have those opportunities through the several other multi-stakeholder processes that we have. Soon there is the ICANN meeting, there will be other conferences, and there is also the IGF in Norway before the WSIS. And so I would invite this community to work closely together on how and what we want from WSIS Plus 20. There is also the discussion of whether we are going to be able to build upon what WSIS Plus 10 agreed in that resolution. Are we open for new comments? And then we need to map out the issues that we want to address as a community, as a multi-stakeholder community. What are the issues that are there that we should be re-addressing now? There are issues on which, unfortunately, I think we’re going to have a lot of challenging conversations before we are able to agree, to say the least, one on internet governance and another on human rights. However, there is the renewal of the IGF, and obviously we all want to renew the IGF, I would assume.
The question here for this group is also how we make the IGF even more relevant for others, so that we can have more governments present, more civil society, and even more colleagues of ours, other tech peers and the private sector community, present here. META is committed to the IGF, and through the years, as the IGF changed location and so on, we increased our participation again after COVID. This year we had a globally diverse delegation from all over the regions, Europe, Asia, LATAM, NORAM and so on, as well as content experts. So we had safety, privacy, and AI experts talking to stakeholders in every little corner here, because we believe it’s important for us to exchange, and the power of the IGF, above all, is the convening power that it brings. So I hope that we can continue that spirit and continue to invest in those processes, but together. We shouldn’t go in silos. Just as Bruna said, governments sometimes want to work in silos; I think the other communities should come together: the technical community, civil society, the private sector, just as we do at the ITU, a multilateral forum where we all have a seat. I guess I’ll stop there, otherwise I could go on for years.


Anriette Esterhuysen: It all makes sense, but I think what I don’t hear is: what does it mean to not work in silos? What does it mean to all come together? Do we all just come to the IGF? It’s a multi-stakeholder space; we all sit together, we talk together, but are we really effectively able to engage on where there’s a common interest and where there’s a divergent interest? And do we come together as sectors, or do we come together as individual companies, individual governments, individual civil society organizations? I think we sometimes romanticize the past of the WSIS and the wonderful WSIS multi-stakeholder process. What we forget, those of us who were there at the time, like Tijani and myself, is that we had bureaus. We had a civil society bureau, we had a private sector bureau, and governments of course had to negotiate with one another. And within civil society, before every opportunity to give an input on an item of the agenda, we had to reach internal consensus, and it was very, very difficult. But we had to, and we were given the space by the WSIS process to meet, and we were forced to reach consensus, and then our consensus statement was given to governments, and governments took our consensus statements quite seriously. The same thing with the private sector: you did not have individual companies submitting their views; businesses had to work together and decide, these are our priorities. And I think we sometimes forget that to have effective multi-stakeholder collaboration, you need coherent stakeholder processes within those stakeholder groups. And I think the same applies for regional multi-stakeholder processes. For Africa to have a strong voice in the global IGF or in the WSIS, Africa has to have a strong regional multi-stakeholder process, but it also needs a strong regional multilateral process.
So I’m trying to unpack this a little. I think we all believe in this modality, we believe in the multi-stakeholder approach, but I think we recognize that it needs to be better. I think NetMundial and the Sao Paulo Guidelines are trying to make us do that. And I guess my final challenge, and I want you all to react to this, is that it takes resources. I think sometimes we look at the multi-stakeholder approach as a more cost-effective approach, because we put everyone in the same space, but are effective multi-stakeholder processes not also actually quite resource- and time-intensive? I’ve now challenged the panel, but I want to open it to the room and also online. If there’s anyone who wants to ask a question or make a comment, do so, and then we’ll go back to our panel. And Tijani, please. Can we have a mic? Can we ask one of our… excuse me, the volunteer on her cell phone in the back of the room. Sorry, can you help us with the microphone, please? Thank you so much. Tijani, just introduce yourself and be brief.


Audience: Okay, my name is Tijani Benjama. I am from Tunisia, civil society, from the beginning. Anriette, I really thank you for asking what it means to work in silos. We know that governments want to work in silos, but what about the other stakeholders? Is there any kind or any aspect of multi-stakeholderism in their work? Do they consult with civil society? Do they consult with governments? This is a very important point. When we speak about the multi-stakeholder model, we speak about it for all the stakeholders, not only for the governments. Thank you.


Anriette Esterhuysen: Any other comments? Any other questions? Please, go ahead, from the floor. Is that working?


Audience: Hi, my name is Lina. I’m with Search for Common Ground, a peace-building organization, and the Council on Tech and Social Cohesion. I wonder whether or not we’re being honest enough about the relationship between governments, the UN, civil society, and big tech. Because it feels like the only things that are actually making things move are litigation, certain regulations that threaten fines, or extreme reputational damage. And sometimes I’m just not sure that these kinds of forums are really raising the issue. And I think it has changed, right? We have billions of dollars in lobbying funds going to countries that are trying to move the needle on certain regulations, so that those regulations don’t happen. And I’m not necessarily seeing that big tech wants coherence from a regulatory standpoint. Just to give an illustration of what I’m talking about: we’ve seen that when it comes to online safety and the protection of women and children, the kinds of things that are on many panels here, this information has been known by the companies for a long time. And yet they wait until regulations in Europe force them to do things differently. Meanwhile, the Global South is not benefiting from any of those changes and protections. So I’m really trying to see whether or not the multi-stakeholder model is being threatened by this, and whether we are being honest about that. Thank you. Hello, Dana Kramer, for the record, representing Youth IGF Canada. I’m curious about the panel. Can you hear me? Okay, sorry, it seems to be cutting in and out on my end. I’m curious if the panel can speak to implementation of the GDC and where it could be implemented. So, building off of the last question about IGFs, are we seeing that practical element? 
And I’m wondering if the panel can maybe speak to whether the IGF would be the best place to implement the GDC, so that there’s an action-oriented outcome for some of those principles within the document, for a safer internet. Sorry, I’m just building off of your question there. But where can we see some of this impact for multi-stakeholderism? Because, as mentioned earlier with the resource constraints, when we’re all coming together here anyway, this would seem the most appropriate venue. Thank you. Thanks, Dana. And we have one more comment from online, and then I’ll ask you to respond. Manal Ismail, let me just see if I can unmute you. I can unmute you. So please go ahead and introduce yourself. Manal is someone with a very deep track record in the multi-stakeholder process. No, I can’t hear you. Can anyone else hear Manal? We have a remote speaker trying to speak, Manal Ismail. We can’t hear her. I have unmuted her. Manal, try now. Sometimes the audio goes to the table but not to the speakers here, so if you can change that; and Manal, you can type your comment and I will read it. Okay, I will look in the chat in the meantime. Oh, she’s speaking, but we can’t hear you. Let’s see if I can unmute you again, one last try. And we have one more question in the room. Manal, just type your comment; I’m so sorry we can’t hear you. The remote participants can hear you, but those of us in the room can’t. Whose hand was it, Aziz? Please go ahead. Yes. I am Aziz Hilary from Morocco. I just want to add one quick question: what mechanism or criteria can ensure that multi-stakeholderism is authentic, and not just rhetoric, just words? What mechanism and criteria can we apply to ensure that multi-stakeholderism is authentic?


Anriette Esterhuysen: Those are really tough; that’s a tough question. And it’s okay, I think Manal says we should go ahead. But I really do ask our technical team to try and make sure that our remote participants can participate. There’s quite a wide range of reactions and challenging questions there. Who wants to go first?


Bruna Martins: I guess I’ll go to Lina’s point about multi-stakeholderism and whether or not it works or has been implemented. Brazil is one of those countries, right, that has been championing the multi-stakeholder model in policymaking, in law enforcement, and in some of those things. But again, we must not conflate the issues: the IGF is not a regulatory body; the IGF is a convening space for the discussion of ideas. It’s interesting that they are muting everyone now; can we please stop the interference on the microphone? But just to say that the IGF’s general, initial idea is to be a convening space for different thoughts and different approaches. In any case, Brazil has been implementing that, and just to quote two examples, we have the civil rights framework for the internet and also our Data Protection Act, which were discussed and co-written by a group of stakeholders convened by the rapporteurs in parliament, and whose main idea was to make sure everyone had their position heard. And the point that I always mention when there is this kind of tension between big tech and the rest at the table is that when we talk about policy processes, governments talk to business because of the financial interest. Governments talk to business, to other stakeholders, to the technical community, because of different interests, but there is nothing that makes them talk to civil society, depending on where you’re coming from and what country you’re in. Obviously, if you have strong participation mechanisms and so on, that’s one thing, but there’s literally nothing that obliges governments to go to end users, and the multi-stakeholder model serves its purpose of bringing civil society to the table and making sure that it’s not a financial interest that’s at play, but the need to include everyone above all. 
So maybe that’s kind of where I’d go,


Anriette Esterhuysen: but yeah, I’ll stop. Flavia. Hi, thank you.


Flavia Alves: There are a couple of issues that I would like to address, just picking up on this last one first. First of all, we at Meta, and I won’t speak for other tech companies, are highly supportive of harmonization and an interoperable approach on key critical issues. We also comply with regulation around the world, and we appreciate processes that are either interoperable or harmonized. You might be very well familiar with the GDPR, the EU AI Act, the Digital Services Act, and the Digital Markets Act; for the DMA and DSA, we were part of the process as they were being developed, and now we are working together with governments to try to implement them as much as possible. So I would say we are supportive of regulatory processes that are open and that provide avenues for us to give our comments and together develop documents or regulations. Meta has always been proactively supportive of regulation, particularly because we don’t want to be the ones having to determine what we should or should not have on the internet. On safety: here at the IGF, we have had a child safety group for years, which I think might now have closed, and from that we developed a community that today continues working together on safety matters, with several different groups addressing online safety, particularly child safety issues. I can send you some of those details; my digital safety colleague is around here too, but that’s something that I wanted to make sure you understand. This convening helps the community understand what the issues are and how we can address them together with tech companies. We have several other groups as well. From the IGF perspective, and coming back to Anriette and your comment, I do agree that sectors need to come to consensus together. 
For me, I cannot picture, because I wasn’t at WSIS, another process that worked as well as NetMundial, the first and the second: NetMundial 2014 and NetMundial+10. I remember perfectly, most of us were there, having the head of civil society, having everyone on the same level, and even in the negotiations we each had a room, we each had processes, and then we had to come to consensus on a single document, with governments on the same level. Now the document exists, and we reviewed it last year. I think we should use this as a base for other processes, and perhaps that’s where we want to go. In fact, an interesting thing about NetMundial is that, imagine a room like this but very, very big, with different microphones: one for civil society, one for government, one for business, one for the technical community, and people have to line up. But of course there are only so many speaking slots, and civil society usually has loads of speakers, so as a result we as civil society had to negotiate. We had a WhatsApp group, we had a Google doc, so that we could prioritize: we only had three opportunities to speak, so we had to prioritize. So I think in a way it did capture that combination of stakeholders having to collaborate, as well as being on a level playing field. But you know,


Anriette Esterhuysen: I’m going to give the mic to Manal, but I also want to say, to respond to the comment from Search for Common Ground: isn’t the true test of an effective multi-stakeholder process that it should survive even if there is serious disagreement on how to regulate, what to regulate, and by whom? Isn’t that ultimately what shows us that our multi-stakeholder processes have matured, that we don’t abandon them when we reach points of conflict? The same with governments that have different perceptions and different understandings of human rights and compliance with human rights: should we stop working with them because we disagree? But that’s another challenge. And Manal, please, I think it works now; the team has sorted the problem, so please go ahead and share your experience. Hello everyone, can you hear me now, Anriette? We can.


Audience: Excellent, thank you. So, just very quickly, I was triggered by your comment that governments partner with civil society. Please introduce yourself, sorry, I didn’t introduce you yet. Sure, sure, I’m sorry. This is Manal Ismail. I work at the National Telecom Regulatory Authority of Egypt, and I have participated in almost all the IGF meetings. In ICANN, I represented Egypt on the Governmental Advisory Committee in different capacities, the last of which was chairing the committee. I just wanted to share the experience of governments’ participation in the Governmental Advisory Committee of ICANN. As I said, I was triggered by your comment, Anriette, that governments collaborate in partnership with civil society and the private sector at the national level but are more cautious globally. I think this could be attributed to the fact that if I’m participating in an individual capacity, it’s easier and more flexible to just speak my mind; when someone is participating on behalf of their country, it is more difficult to speak up without being prepared and having consulted at the national level. At the very first meeting I tried to participate in, I found the room was closed with a key, so it was a really closed governmental meeting. But over the years we started opening up gradually. We opened up certain sessions, then all the sessions except the communiqué drafting. Now all the meetings are open, including the communiqué drafting. A few things that helped were, for example, sharing the topics and everything in advance, so people can prepare and consult at the national level before they come and can speak more freely in public; and also availing real-time interpretation, because sometimes language is a barrier and people are very careful in choosing each and every word, because it’s going to be attributed to their governments and their country. 
I’m cautious of time, so I’ll leave it at this, but I just wanted to share that after having meetings closed with a key, we now have all the meetings open, and we are also engaging with other stakeholder groups at ICANN, thus benefiting from the multi-stakeholder nature of the organization. Previously, all the stakeholders were meeting in silos, not in one meeting. I’ll leave it at this. Thank you, Anriette, for the opportunity.


Anriette Esterhuysen: Thanks very much for sharing that, Manal, and I think it’s a very good example of how one learns incrementally from processes. We have about 15 minutes left, and we want to look at what role the IGF can play in capturing this learning, capturing best practices, and applying them. I think we should also reflect: there’s been a lot of talk about the Global Digital Compact, but I’ve also heard many people say that it’s one of the most inclusive and collaborative processes that has been run from within the UN General Assembly. I felt frustrated by it, but I also sometimes talked to the co-facilitators and saw how much additional work they had beyond normally just facilitating a negotiation between members of the UN General Assembly; they also tried to get in all this other stakeholder input. It was imperfect, but there was a serious attempt. How do you think we can make these processes better? How do we use the Sao Paulo guidelines? What do you see the IGF doing concretely? Maybe it demonstrates, but maybe it can also innovate, making us get away from the happy, wonderful multi-stakeholder community to actually having deeper engagement that produces more concrete policy outcomes, outcomes that might not always be consensus-based, but that serve the broader public interest and the internet in the best possible way. So yes, I know it’s a very long question, but I know you’ve been thinking about this and you are in these spaces. So let’s start with you, Isabelle.


Isabelle Lois: Thank you. Okay, thank you for the question. It’s very, very long and there are a lot of things in it. I just wanted to add one point on what was said before that I think is very important. When we talk about inclusion, inclusion does not remove power from the people who are already in the room; opening up to more stakeholders, more people, is not taking away from those who are already there. I think this is something we need to be aware of, and something we have to remember and underline in all of these processes. So that was my first little point, which I think is important to highlight. On what we can do and how we can use what has been done: we have 20 years of experience of trying to be as multi-stakeholder as possible and trying to be better. We now have some guidelines on how to make it effective, and I think we should use them so that we don’t have moments where we believe a process is multi-stakeholder just by name, because we call it that, without it actually being so; a sort of whitewashing, using the buzzword without living up to it. With the Sao Paulo guidelines we now have a sort of litmus test we can use to check: is this truly multi-stakeholder, or is it just called a multi-stakeholder process? So that is one of the points we can emphasize. For the IGF specifically, it is difficult to say what it could do, and there are probably many, many ideas. But one point I would like to highlight is the messages. We have messages at the end of every IGF, and of course these are not adopted by consensus; they are a sort of summary of what has been discussed. But they give us very good knowledge of what has been shared, what issues were raised, and what opinions were expressed in the different rooms. 
And I think we could do much better in using those messages in other forums: bringing them in, highlighting them, saying, okay, this was discussed at the IGF, these issues were identified, and then bringing them into the other conversations, into the other rooms where there might be regulation or decision-making. Because the IGF is not a regulatory body; this is very important to highlight. But we are coming up with new ideas that might otherwise just be lost in a document that is not read as much. So this is somewhere we can act: we have the opinions of the different stakeholders, they are concretely written down, and we should use this more. That would be my little point. I’m happy to hand over to you.


Bruna Martins: I think I’ll start with the idea that for upcoming hosts, we should make sure that the host country selection process takes into account the safety of participants, the comfort of participants, and whether or not the selection will result in one part of the community being less present. My stakeholder group is one that’s not present this year, or present in much smaller numbers, and it’s one of the main stakeholders within the IGF space. To anyone here for the first time: this space is usually much more lively. I do miss my colleagues from Latin American civil society and many other spaces in this broader conversation. So maybe looking at the IGF and making sure that the host country selection takes into account aspects like the safety of participants is one thing. And lastly, I would echo the suggestions issued by the MAG Working Group on Strategy, because we just issued a vision document for the IGF looking into 2025. A couple of the recommendations concern making sure that next year’s event takes up discussions on how to improve the IGF mandate; making sure that the IGF has a track for GDC follow-up and implementation, bringing GDC follow-up and implementation discussions into the workshops, main sessions, and everything that takes place; working on the development of relationships between the IGF and some of the WSIS partner institutions; continuing some of the MAG discussions on NetMundial+10 alignment; and, last but not least, reviewing and refining the intersessional work models. I’ll just wrap up by saying that if we don’t have every single group and stakeholder group at the table, this doesn’t work. And this goes both ways, for civil society, the private sector, academia. 
And, you know, many other parts of government, many other parts of this community: there is no multi-stakeholder model where one of them is missing, or where we don’t like one of them. And that’s the perk of it all, and the joy of the IGF space.


Anriette Esterhuysen: Thanks. Thanks. You want to make a comment? I just want to react quickly to what Bruna said. Yes, all stakeholders at the table, but I think what the NetMundial Sao Paulo guidelines tell us is: scope the issue that’s being discussed, and based on that issue, identify who’s affected by that particular discussion, and then bring in those stakeholders. If it’s about Meta and content regulation and gender-based violence, you bring together Meta, you bring together feminist organizations, you bring together data brokers, regulators, and freedom-of-expression people, because any kind of content regulation might impact freedom. So you have to be quite focused and targeted as well, and I think NetMundial gives us steps to help do that. I said you could interrupt us, so you can, but you’re going to have to get up and come and fetch your own microphone, because if I get up, I’m going to drop something. Thanks, Flavia. Yeah. Hello.


Audience: Thank you so much for the mic. Good afternoon, everyone. My name is Arjun Singh Vizoria. I am from India, representing a civil society organization called the Vizoria Foundation, which I founded in 2016. We work in India’s rural sector on digital literacy. Now my question is: how can a small-scale organization work with the IGF, and is there any space for small-scale organizations to work with the IGF in India? And my second question is to Meta; sorry for the direct question. I just want to know how Meta is dealing with cyberbullying. Sometimes when I’m using Facebook or another platform, I see certain messages that are not relevant to me, direct messages where somebody is targeting me. So how are you dealing with that?


Anriette Esterhuysen: Yeah. So take that, and then make your final remarks as well. And then you can answer the question about participating in national IGFs. Let’s talk on the side about this.


Flavia Alves: I am not an expert on cyberbullying. Sure, sure. We are addressing it, and our head of digital safety is here, so we can discuss it with you. I guess most of the points I was going to make, particularly with regard to multi-stakeholderism and the IGF, were made by my excellent co-panelists. One thing that I keep hearing is that we need to map the issues. We need to ask: what are the issues? And this needs to be an evolving process; there is no rule set in stone. The issues we were discussing 10 years ago are different from the issues we are discussing now. At that time, we wouldn’t discuss content; today we have a whole DSA on content regulation issues, we have the UNESCO information integrity work, we have the UN information integrity matters. So I think we should take this into account as we prepare for the upcoming review of the IGF mandate. I would love for us to start the process early for the IGF next year in Norway, particularly with the host country, to make sure we bring stakeholders from all groups. My group is also not very present here, so I think we could partner in trying to bring others to the space. And with Norway, we also agreed to try to make the list of participants available earlier, so others have more incentive to be present, and to try to bring small businesses, small organizations, and small developing countries, and to make sure remote participation is there as well. So I would stop there; I know we are out of time.


Anriette Esterhuysen: Thanks very much, Flavia. And maybe you can talk to Bruna about how to participate in the national IGFs; absolutely, national IGFs are completely open to any organization of any size. Well, it feels again like we’re only scratching the surface of this, but I think we should take this experience of the Global Digital Compact. It’s an important experience where a multilateral institution tried hard to be consultative. The results might not be what we are used to or expect from the multi-stakeholder space, but that doesn’t mean there wasn’t good intention; it demonstrates how difficult this is. So let’s look at that process, and work with multilateral processes to make processes that originate from within the United Nations system more inclusive. I think you’ve outlined very clearly how the IGF can become more effective. And for my closing: this workshop was convened, among others, by the Global Network Initiative, and I want to quote Rebecca MacKinnon. She’s not here, but she was the founder of the Global Network Initiative, and she always says it takes different types of initiative. There’s no one fix to all of this; there’s no one perfect process. If we look at how we’re making progress in using the multi-stakeholder approach to have more accountable, democratic, inclusive digital and internet governance, it takes all these different types of processes, and it’s the combined imperfections of all of them, I think, that sometimes make us more effective and more inclusive. So thanks everyone for joining us, and thanks to our panel. Thanks to the remote participants, sorry about the tech issues, and thanks very much, Manal, for your contribution as well. Thank you.


B

Bruna Martins

Speech speed

161 words per minute

Speech length

1395 words

Speech time

517 seconds

Multi-stakeholder processes bring diverse perspectives together

Explanation

Bruna Martins emphasizes that the IGF’s value lies in its ability to bring together diverse perspectives from different backgrounds and expertises. This diversity of stakeholders, opinions, and experiences is what makes the IGF a primary space for internet governance discussions.


Evidence

The IGF allows people from various regions like the Pacific, Brazil, and Tanzania to discuss different aspects of internet governance.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Agreed with

Isabelle Lois


Flavia Alves


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


Multi-stakeholder model serves to bring civil society voices to the table

Explanation

Bruna Martins argues that the multi-stakeholder model is crucial for ensuring civil society participation in policy processes. She points out that while governments often engage with businesses due to financial interests, there’s no inherent obligation for them to consult with civil society or end users.


Evidence

Brazil’s implementation of the civil rights framework for the internet and the Data Protection Act, which were co-written by various stakeholders convened by parliament rapporteurs.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


NetMundial Plus 10 and Sao Paulo guidelines provide a roadmap for effective implementation

Explanation

Bruna Martins highlights the importance of the NetMundial Plus 10 initiative and the Sao Paulo guidelines. She explains that these documents provide principles and process steps for effectively implementing multi-stakeholder approaches in internet governance and digital policy.


Evidence

The Sao Paulo guidelines aim to address gaps perceived in previous processes and improve implementation of openness, inclusiveness, and agility in internet governance.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


All stakeholder groups need to be present for multi-stakeholder processes to work

Explanation

Bruna Martins emphasizes the importance of having all stakeholder groups present for the multi-stakeholder model to function effectively. She argues that the absence of any group, whether it’s civil society, private sector, academia, or government, undermines the process.


Evidence

Bruna notes the reduced presence of her stakeholder group (likely civil society) at the current IGF, affecting the liveliness and diversity of discussions.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Host country selection for IGF should consider safety and inclusivity of all participants

Explanation

Bruna Martins suggests that the IGF host country selection process should take into account the safety and comfort of all participants. She emphasizes the importance of ensuring that the selection doesn’t result in underrepresentation of any part of the community.


Evidence

Bruna mentions the reduced presence of her stakeholder group and colleagues from Latin America at the current IGF.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Isabelle Lois


Flavia Alves


Agreed on

Need for improvement in multi-stakeholder engagement


I

Isabelle Lois

Speech speed

182 words per minute

Speech length

1148 words

Speech time

378 seconds

Governments should use their convening power to ensure inclusive processes

Explanation

Isabelle Lois argues that governments have a responsibility to use their convening power to ensure inclusive and open processes. She emphasizes that governments should ensure all relevant voices are heard, listened to, and taken into account in discussions.


Evidence

Lois mentions Switzerland’s efforts to include all stakeholders in discussions and use its convening power to ensure diverse participation in panels and conversations.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Agreed with

Bruna Martins


Flavia Alves


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


IGF messages should be better utilized in other forums and decision-making processes

Explanation

Isabelle Lois suggests that the messages produced at the end of each IGF should be better utilized in other forums and decision-making processes. She argues that these messages provide valuable insights into the issues discussed and opinions raised during the IGF.


Evidence

Lois points out that the IGF messages, while not adopted by consensus, provide a summary of what has been discussed and shared during the forum.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Flavia Alves


Agreed on

Need for improvement in multi-stakeholder engagement


F

Flavia Alves

Speech speed

169 words per minute

Speech length

1735 words

Speech time

613 seconds

Global Digital Compact process could have had more participation from non-governmental stakeholders

Explanation

Flavia Alves expresses that the Global Digital Compact process, while attempting to be inclusive, could have benefited from greater participation from civil society, private sector, and the tech community. She suggests that the process could have been more transparent about how stakeholder input was incorporated.


Evidence

Alves compares the Global Digital Compact process to previous processes like WSIS+10, which had more open consultations and transparent incorporation of stakeholder input.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Isabelle Lois


Anriette Esterhuysen


Agreed on

Importance of multi-stakeholder processes in internet governance


Differed with

Anriette Esterhuysen


Differed on

Effectiveness of the Global Digital Compact process


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant

Explanation

Flavia Alves emphasizes the need to continually map and update the issues being discussed in multi-stakeholder processes. She points out that the topics of discussion have evolved over time and that this evolution needs to be taken into account in preparing for future IGF mandates.


Evidence

Alves gives examples of how discussion topics have changed, such as the emergence of content regulation issues and information integrity matters that weren’t prominent 10 years ago.


Major Discussion Point

Challenges and opportunities for improving multi-stakeholder engagement


Agreed with

Bruna Martins


Isabelle Lois


Agreed on

Need for improvement in multi-stakeholder engagement


A

Anriette Esterhuysen

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Governments are more comfortable with multi-stakeholder approaches nationally than globally

Explanation

Anriette Esterhuysen observes that many governments are more comfortable with multi-stakeholder approaches at the national level than in global forums. She suggests that this discomfort at the global level is exacerbated by the power imbalances between developed and developing countries.


Evidence

Esterhuysen notes that governments often work closely with the private sector and civil society organizations at the national level, but are more cautious about multi-stakeholder approaches in global arenas.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Agreed with

Bruna Martins


Isabelle Lois


Flavia Alves


Agreed on

Importance of multi-stakeholder processes in internet governance


Multi-stakeholder processes need to survive even when there is serious disagreement

Explanation

Anriette Esterhuysen argues that the true test of an effective multi-stakeholder process is its ability to survive and continue even when there are serious disagreements among participants. She suggests that mature multi-stakeholder processes should be able to handle conflicts and divergent interests.


Major Discussion Point

The role and effectiveness of multi-stakeholder processes in internet governance


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process

Explanation

Anriette Esterhuysen acknowledges that multilateral institutions like the UN are making efforts to be more consultative, as demonstrated by the Global Digital Compact process. She suggests that while the results might not meet all expectations, there was a genuine attempt to be more inclusive.


Evidence

Esterhuysen mentions that the co-facilitators of the Global Digital Compact had to do additional work beyond their usual role of facilitating negotiations between UN General Assembly members to incorporate stakeholder input.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Differed with

Flavia Alves


Differed on

Effectiveness of the Global Digital Compact process


A

Audience

Speech speed

137 words per minute

Speech length

1280 words

Speech time

559 seconds

Gradual opening up of previously closed governmental processes is possible, as seen in ICANN

Explanation

An audience member (Manal Ismail) shares the experience of government participation in ICANN’s Governmental Advisory Committee. She describes how the process has gradually opened up over the years, moving from closed meetings to fully open sessions, including communiqué drafting.


Evidence

The speaker mentions that ICANN meetings started with closed government meetings, but over time opened up certain sessions, then all sessions except communiqué drafting, and finally all meetings including communiqué drafting.


Major Discussion Point

The relationship between multi-stakeholder and multilateral processes


Agreements

Agreement Points

Importance of multi-stakeholder processes in internet governance

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


Anriette Esterhuysen


arguments

Multi-stakeholder processes bring diverse perspectives together


Governments should use their convening power to ensure inclusive processes


Global Digital Compact process could have had more participation from non-governmental stakeholders


Governments are more comfortable with multi-stakeholder approaches nationally than globally


summary

All speakers emphasized the importance of multi-stakeholder processes in internet governance, highlighting the need for diverse perspectives and inclusive participation.


Need for improvement in multi-stakeholder engagement

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


arguments

Host country selection for IGF should consider safety and inclusivity of all participants


IGF messages should be better utilized in other forums and decision-making processes


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant


summary

Speakers agreed on the need to improve multi-stakeholder engagement through various means, including better host country selection, utilization of IGF messages, and continuous mapping of evolving issues.


Similar Viewpoints

Both speakers emphasized the importance of civil society participation in multi-stakeholder processes, particularly at the global level where governments may be more hesitant.

speakers

Bruna Martins


Anriette Esterhuysen


arguments

Multi-stakeholder model serves to bring civil society voices to the table


Governments are more comfortable with multi-stakeholder approaches nationally than globally


Unexpected Consensus

Recognition of efforts by multilateral institutions to be more inclusive

speakers

Flavia Alves


Anriette Esterhuysen


arguments

Global Digital Compact process could have had more participation from non-governmental stakeholders


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process


explanation

Despite criticism of the Global Digital Compact process, both speakers acknowledged the efforts made by multilateral institutions to be more inclusive, which is an unexpected area of consensus given the typical divide between multi-stakeholder and multilateral approaches.


Overall Assessment

Summary

The main areas of agreement centered around the importance of multi-stakeholder processes in internet governance, the need for improvement in multi-stakeholder engagement, and the recognition of efforts by multilateral institutions to be more inclusive.


Consensus level

There was a moderate level of consensus among the speakers on the importance and challenges of multi-stakeholder processes. This consensus suggests a shared understanding of the value of diverse participation in internet governance, but also highlights the ongoing challenges in implementing effective multi-stakeholder approaches, particularly at the global level. The implications of this consensus point towards a continued push for more inclusive and effective multi-stakeholder processes in internet governance, while also recognizing the need for improvement and adaptation to evolving issues.


Differences

Different Viewpoints

Effectiveness of the Global Digital Compact process

speakers

Flavia Alves


Anriette Esterhuysen


arguments

Global Digital Compact process could have had more participation from non-governmental stakeholders


Multilateral institutions like the UN are trying to be more consultative, as seen in the Global Digital Compact process


summary

While Flavia Alves critiques the Global Digital Compact process for insufficient non-governmental participation, Anriette Esterhuysen views it as a genuine attempt by multilateral institutions to be more inclusive, despite imperfections.


Overall Assessment

summary

The main areas of disagreement revolve around the effectiveness of current multi-stakeholder processes, particularly the Global Digital Compact, and the specific roles different actors should play in improving these processes.


difference_level

The level of disagreement among the speakers is relatively low. While there are some differences in emphasis and perspective, there is a general consensus on the importance of multi-stakeholder processes and the need for their improvement. These minor disagreements are constructive and contribute to a more nuanced understanding of the challenges and opportunities in implementing effective multi-stakeholder approaches in internet governance.


Partial Agreements

Partial Agreements

All speakers agree on the importance of inclusive multi-stakeholder processes, but they emphasize different aspects: Bruna Martins focuses on the presence of all stakeholder groups, Isabelle Lois highlights the role of governments in ensuring inclusivity, and Flavia Alves stresses the need for continual updating of discussion topics.

speakers

Bruna Martins


Isabelle Lois


Flavia Alves


arguments

All stakeholder groups need to be present for multi-stakeholder processes to work


Governments should use their convening power to ensure inclusive processes


Mapping of evolving issues is needed to keep multi-stakeholder processes relevant


Takeaways

Key Takeaways

Multi-stakeholder processes are valuable for bringing diverse perspectives together in internet governance, but need improvement to be truly effective


The NetMundial Plus 10 and Sao Paulo guidelines provide a roadmap for more effective implementation of multi-stakeholder approaches


There is tension between multilateral and multi-stakeholder processes that needs to be navigated carefully


The IGF plays an important role in facilitating multi-stakeholder dialogue, but could improve in translating discussions into concrete outcomes


Inclusivity and safety of all stakeholders is crucial for effective multi-stakeholder processes


Resolutions and Action Items

Use the Sao Paulo multi-stakeholder guidelines as a ‘litmus test’ to evaluate if processes are truly multi-stakeholder


Better utilize IGF messages in other forums and decision-making processes


Review and refine the IGF’s intersessional work models


Start early preparations for the upcoming review of the IGF mandate


Make participant lists available earlier for IGF events to encourage broader participation


Unresolved Issues

How to balance multilateral and multi-stakeholder approaches in global internet governance


How to ensure authentic multi-stakeholder processes rather than just rhetoric


How to address power imbalances between different stakeholder groups


How to make multi-stakeholder processes more resource-efficient while maintaining effectiveness


How to better integrate perspectives from developing countries and smaller organizations in global processes


Suggested Compromises

Combine strong regional multi-stakeholder processes with regional multilateral processes to strengthen voices in global forums


Use targeted, issue-specific multi-stakeholder engagement rather than always trying to include all stakeholders in every discussion


Balance the need for inclusive processes with the need for concrete outcomes and decision-making


Work with multilateral institutions to make their processes more consultative and inclusive, while recognizing their distinct nature


Thought Provoking Comments

I think sometimes we romanticize the past of the WSIS and the wonderful WSIS multi-stakeholder process. What we forget, those of us who were there at the time, like Tijani and myself, is that we had bureaus. We had a civil society bureau, we had a private sector bureau, and governments of course have to negotiate with one another. And we had within civil society before every opportunity to give an input on an item of the agenda, we had to reach internal consensus, and it was very very difficult.

speaker

Anriette Esterhuysen


reason

This comment challenges the idealized view of past multi-stakeholder processes and introduces nuance about the difficulties of reaching consensus within stakeholder groups.


impact

It shifted the conversation to consider the internal dynamics within stakeholder groups and the need for coherent processes within those groups for effective multi-stakeholder collaboration.


I wonder whether or not we’re being honest enough about the relationship between governments, the UN, civil society, and big tech. Because it feels like the only things that are actually making things move is litigation, certain regulations that threaten fines, or extreme reputational damage.

speaker

Lina from Search for Common Ground


reason

This comment challenges the effectiveness of multi-stakeholder forums and raises critical questions about power dynamics and motivations for change.


impact

It prompted panelists to address the role of regulation and litigation in driving change, and to defend the value of multi-stakeholder processes while acknowledging their limitations.


When we talk about inclusion, inclusion does not remove the power from the people who are already in the room, opening up to more stakeholders, more people, is not removing from those who are already there.

speaker

Isabelle Lois


reason

This insight highlights an important aspect of inclusion that is often overlooked – that it’s not a zero-sum game.


impact

It reframed the discussion on inclusion to focus on expanding participation without threatening existing stakeholders, potentially making the concept more palatable to those who might resist change.


Isn’t the true test of an effective multi-stakeholder process that it should survive, even if there is serious disagreement on how to regulate, and what to regulate, by whom?

speaker

Anriette Esterhuysen


reason

This question redefines the measure of success for multi-stakeholder processes, emphasizing resilience in the face of disagreement rather than just consensus.


impact

It prompted reflection on the maturity and robustness of multi-stakeholder processes, shifting the focus from achieving agreement to maintaining dialogue despite differences.


Overall Assessment

These key comments shaped the discussion by challenging idealized views of multi-stakeholder processes, introducing critical perspectives on power dynamics, and reframing concepts of inclusion and success. They moved the conversation beyond surface-level agreement on the value of multi-stakeholder approaches to grapple with the complexities and challenges of implementing them effectively. The discussion became more nuanced, acknowledging both the potential and limitations of these processes, and considering how they might evolve to better address power imbalances and maintain relevance in the face of disagreement.


Follow-up Questions

How can we make multi-stakeholder processes more effective and resource-efficient?

speaker

Anriette Esterhuysen


explanation

This question addresses the challenge of implementing multi-stakeholder approaches in a cost-effective and time-efficient manner, which is crucial for their sustainability and widespread adoption.


What mechanism or criteria can ensure that multi-stakeholder processes are authentic and not just rhetoric?

speaker

Aziz Hilary


explanation

This question highlights the need for concrete measures to evaluate the genuineness and effectiveness of multi-stakeholder processes, which is important for maintaining trust and credibility in these approaches.


How can the implementation of the Global Digital Compact (GDC) be improved, and where could it be implemented?

speaker

Dana Kramer


explanation

This question addresses the practical aspects of implementing the GDC and suggests exploring the IGF as a potential venue, which is important for turning principles into action.


How can we ensure that all stakeholder groups, including those less represented this year, are present and actively participating in future IGFs?

speaker

Bruna Martins


explanation

This question addresses the need for inclusive participation in the IGF, which is crucial for maintaining its multi-stakeholder nature and effectiveness.


How can the IGF better utilize its messages and outcomes in other forums and decision-making processes?

speaker

Isabelle Lois


explanation

This question explores ways to increase the impact and relevance of IGF discussions in other policy-making arenas, which is important for the IGF’s influence and effectiveness.


How can small-scale organizations work with the IGF, particularly at the national level?

speaker

Arjun Singh Vizoria


explanation

This question addresses the need for inclusivity of smaller organizations in IGF processes, which is important for diverse representation and grassroots participation.


How can we better map and address evolving issues in internet governance through multi-stakeholder processes?

speaker

Flavia Alves


explanation

This question highlights the need for adaptability in multi-stakeholder processes to address new and emerging issues in internet governance, which is crucial for maintaining relevance and effectiveness.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

DC-IoT & DC-CRIDE: Age aware IoT – Better IoT

Session at a Glance

Summary

This discussion focused on age-aware Internet of Things (IoT) and how to create better IoT systems that respect children’s rights and safety. Participants explored various aspects of this topic, including data governance, age assurance technologies, AI’s role, and capacity building.


The conversation highlighted the importance of considering children’s evolving capacities when designing IoT systems and policies. Experts emphasized the need for a more nuanced approach to age verification that goes beyond simple chronological age limits. They discussed the challenges of balancing children’s protection with their rights to privacy, access to information, and participation.


The role of AI in IoT systems was examined, with participants noting both its potential benefits for personalized learning and its risks in terms of data collection and user profiling. The discussion touched on the need for ethical AI development that considers children’s best interests.


Labeling and certification of IoT devices were proposed as ways to empower users and parents to make informed choices. Participants stressed the importance of global standards and the potential role of public procurement in driving adoption of child-friendly IoT practices.


The conversation also addressed the need for capacity building among various stakeholders, including parents, educators, and industry professionals. Experts called for more inclusive discussions that involve children and young people in the development of IoT policies and technologies.


Throughout the discussion, participants emphasized the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children. They concluded by highlighting the importance of ongoing dialogue and the need for practical, enforceable solutions to protect children’s rights in the evolving IoT landscape.


Keypoints

Major discussion points:


– The importance of age-aware IoT and developing good practices to protect children while allowing them to benefit from technology


– The need for better data governance, labeling, and certification of IoT devices to empower users and protect privacy


– The role of AI in adapting IoT environments to users’ abilities and needs


– The importance of capacity building and education for children, parents, and other stakeholders about IoT


– The tension between innovation, regulation, and corporate responsibility in developing safe IoT for children


Overall purpose:


The goal of this discussion was to explore how to develop age-appropriate and safe Internet of Things (IoT) technologies that serve people, especially children, while addressing potential risks and ethical concerns.


Tone:


The tone was collaborative and solution-oriented, with experts from different fields sharing insights and building on each other’s ideas. There was a sense of urgency about addressing these issues, but also optimism about finding ways to harness technology for good. Towards the end, the tone became more pointed about the need for corporate accountability and including children’s voices in future discussions.


Speakers

– Maarten Botterman: Chair of the Dynamic Coalition on Internet of Things (DC IoT)


– Sonia Livingstone: Professor at London School of Economics, expert on children’s rights in digital environments


– Jonathan Cave: Senior teaching fellow at University of Warwick, Turing Fellow at Alan Turing Institute, former economist member of British Regulatory Policy Committee


– Jutta Croll: Representative of Dynamic Coalition on Children’s Rights in the Digital Environment


– Torsten Krause: Role not specified, helped moderate online comments


– Pratishtha Arora: Expert on AI and children’s engagement with technology


– Abhilash Nair: Legal expert on age assurance and online regulation


– Sabrina Vorbau: Representative of Better Internet for Kids initiative


Additional speakers:


– Helen Mason: Representative from Child Helpline International


– Musa Adam Turai: Audience member who asked a question


Full session report

Age-Aware Internet of Things: Protecting Children’s Rights in a Connected World


This comprehensive discussion explored the challenges and opportunities of creating age-aware Internet of Things (IoT) systems that respect children’s rights and safety. Experts from various fields, including child rights, economics, law, and technology, convened to address the complexities of developing IoT technologies that serve people, especially children, while mitigating potential risks and ethical concerns.


Key Themes and Discussions


1. Data Governance and Age-Aware IoT


The conversation highlighted the nuanced nature of data collection in IoT systems, recognising that it can be both beneficial and harmful to children. Jonathan Cave emphasised that static age limits may not be appropriate given the evolving capacities of children. Sonia Livingstone stressed the need to consider broader child rights beyond just privacy and safety, arguing for a more holistic approach to children’s rights in digital environments. She emphasised the importance of consulting children in the design of technologies and policies.


2. Labelling and Certification of IoT Devices


Several speakers agreed on the importance of labelling and certification for IoT devices as a means of empowering users and protecting privacy. Maarten Botterman suggested that such measures could enable users to make informed choices about the technologies they adopt. Jutta Croll proposed leveraging public procurement to drive the adoption of standards, while Abhilash Nair noted that certification could help mitigate literacy issues for parents and caregivers.


Jonathan Cave expanded on this idea, suggesting that public procurement could serve as a complement to self-regulation or formal regulation, potentially incentivising industry compliance with safety standards. This approach was seen as a novel policy tool for promoting child-safe technologies.


3. The Role of AI in Age-Aware IoT


The discussion explored the dual nature of AI in IoT systems, acknowledging its potential to both facilitate and potentially distort children’s development. Jonathan Cave highlighted this duality, while Pratishtha Arora emphasised the importance of developing age-appropriate AI models and interfaces. Arora also raised the crucial point of considering impacts on children who may not be direct users of IoT devices but are nonetheless affected by them.


4. Capacity Building and Awareness


Participants stressed the need for translating research into user-friendly guidance for parents and educators. Sabrina Vorbau discussed the Better Internet for Kids initiative, which aims to create a safer and better internet for children and young people. She emphasised the importance of involving children and youth in developing these resources. Jutta Croll mentioned the EU ID wallet as a potential tool for age verification in digital environments.


Helen Mason from Child Helpline International advocated for including civil society and frontline responders in discussions, noting that data from child helplines could provide valuable insights into children’s experiences with online technologies.


5. Corporate Responsibility and Regulation


A significant portion of the discussion focused on the need to place more responsibility on industry rather than users for ensuring child safety in IoT environments. Sonia Livingstone argued strongly for this shift, while Jonathan Cave suggested that personal liability for executives might drive more attention to child safety issues. Abhilash Nair supported this idea, noting that it could lead to more proactive measures from companies.


The conversation also touched on the tension between free speech rights and child protection, particularly in conservative societies, as raised by audience member Musa Adam Turai. This highlighted the need for nuanced approaches that balance various rights and cultural contexts.


Thought-Provoking Insights


Several comments sparked deeper reflection and shifted the discussion:


1. Jonathan Cave’s challenge to static age-based protection measures, encouraging more nuanced approaches based on digital maturity.


2. Sonia Livingstone’s emphasis on considering the full spectrum of children’s rights, not just safety and privacy.


3. An audience member’s suggestion to focus more on media information literacy rather than access restrictions.


4. Livingstone’s critique of the term “user” and how it can lead to overlooking children’s specific needs and rights in technology development and policy discussions.


Conclusion and Future Directions


The discussion concluded with a call for ongoing dialogue and practical, enforceable solutions to protect children’s rights in the evolving IoT landscape. Participants emphasised the shared responsibility of industry, governments, and civil society in creating a safer and more empowering digital environment for children.


Key takeaways included the need to consider children’s evolving capacities in IoT design, the potential of labelling and certification to empower users, the importance of involving children in technology development processes, and the need for greater industry responsibility.


Moving forward, participants suggested involving children and young people in future IGF sessions on this topic, developing more user-friendly guidance on age assurance, and considering the use of public procurement to drive adoption of child safety standards in IoT. Jutta Croll noted the upcoming high-level session on children’s rights in the digital environment at the UN, highlighting the growing importance of this topic on the global stage.


Session Transcript

Maarten Botterman: Oh, you cannot unmute. Jonathan, you should be able to…


Jonathan Cave: Oh, now I can. Yes, I am allowed to unmute. I can’t turn on my camera, but at least I can speak.


Maarten Botterman: Okay, that’s excellent. Thank you. Sonia will check you to…


Sonia Livingstone: Hello. Yes, I can speak now. Thank you. And it would be lovely to have my camera on, if that’s possible.


Maarten Botterman: We’re checking. Thank you. Can we put on the camera for those speakers? Can we put on the camera for Sonia and Jonathan? Yes. Oh, excellent.


Sonia Livingstone: Thank you.


Maarten Botterman: And Jonathan Cave, right? And Jonathan Cave. Gentlemen, Jonathan Cave as well. So, unmute your camera. Yes, there I am. Great. Thank you. Jonathan and Sonia, you can mute and unmute yourselves. So, if you’re not speaking, maybe best to mute yourself.


Sonia Livingstone: Okay, sounds good.


Maarten Botterman: You’re now both co-hosts. Shall we begin? Can we begin? Okay, good morning, everybody. Welcome to the session from DC IoT and DC CRIDE, which is focused on age-aware IoT and better IoT. Can you hear me well in the room? Good. So, this session will take us through the landscape of evolving technology and how it relates to how we deal with people, socially and in age, and how we can make sure technology serves the people, with a specific focus on age. There are so many opportunities to make everyday life more convenient, safer and more efficient. But there are also threats that come with that, and we want to get the best out of it. This is why the dynamic coalitions throughout the year explore how to develop good practice in the best possible way and address risks: the processing of data, the provision of information that may be inappropriate or even harmful to individuals, or the initiation of processes that are based on false assumptions. And one of the ways to counter these risks is by categorizing the users. If the devices in the surroundings can categorize users, the Internet of Things can adequately adapt to their needs and take the specific measures to serve that user. So, this is why Jutta and I discussed bringing the two dynamic coalitions together and focusing on what this will look like. A little bit on the background of DC CRIDE. Can you share that, Jutta?


Jutta Croll: Yes, of course I can share that. You gave me the perfect segue to that when you mentioned evolving technologies, because the dynamic coalitions are talking about the evolving capacities of children, which is one of the general principles of the UN Convention on the Rights of the Child. I do think both dynamic coalitions started their work very early in the process of Internet governance; the one on children’s rights was then called the Dynamic Coalition on Child Online Safety. And that started in 2007. I do think IoT started the same year as well?


Maarten Botterman: 2008.


Jutta Croll: 2008. So, a very long-standing collaboration between these two dynamic coalitions in a certain way. And several years later, when General Comment No. 25 came out, which is dedicated to children’s rights in relation to the digital environment, we renamed the dynamic coalition to Children’s Rights, as Maarten has already said. And we found some similarities in the work that we are doing and also in the objectives that we want to achieve, because we know that children are the early adopters of new and emerging technologies, and that’s always where we have to look whether their rights are ensured and whether they can benefit from these technologies. And IoT is one area that can help people, that can help children, to benefit from the digital environment. Having said that, I hand over to you, Maarten, again.


Maarten Botterman: Thank you so much. So, basically, the DC IoT (2008, in Hyderabad, was the first time) has been talking over time about how IoT can serve people: what good global practice guidelines should be adopted by people around the world, because this technology is used everywhere. So, like the Internet, the Internet of Things doesn’t stop at the border. Very practically, because products come from all over the world, but also because, for instance, the more mobile IoT devices, like in cars, in planes, or what you’re carrying with you when you travel, cross borders all the time as well. So, an understanding of global good practice would also help governments to take it into account when they develop legislation, being more aware of what the consequences could be and what to think of. Global business could take it into account in the design and development of devices and systems. By doing that from the outset, innovation can be guided by these insights, even when it’s not law yet. So, the Internet of Things good practice aims at developing IoT systems, products and services that take ethical considerations into account from the outset, in the development phase, deployment phase, use phase, and waste phase of the life cycle, for a sustainable way forward. It’s using IoT to help create a free, secure, and enabling rights-based environment with a minimal ecological footprint, and that for a future we want for us and future generations. And when we talk about an Internet we want, in which IoT as we want is to be developed, it’s crucial that we get clear what that means for us, and that we do take action to make something happen there. Because otherwise, still remembering very much what Vint Cerf, the chair of the high-level panel here in Kobe, said: if we don’t make the Internet we want, we may get the Internet we deserve, and we may not like that.
So, with that, I really look forward to the discussions today, for which we have a number of excellent speakers in the room and online. We will talk first about the data governance aspects that underlie this, then we go into labeling and certification of IoT devices, as that helps with transparency about these devices and what they can do, and empowers users to be more informed in their choices. Every session so far, I think, I've heard the word AI, so let me be the first one to mention it here. Of course, it matters how AI makes IoT environments work and how selections can be made to adapt them to the abilities of people. And then, last but not least, in all this, the kind of horizontal layer is: how do we develop capacity? Because IoT may be developed all over the world, but to apply it locally, you need to have local knowledge. So where does that come together? How can we work on that? I'm very, very happy to have Sabrina here to talk more about that. With that, I'd love to give the word to Jonathan Cave, a senior teaching fellow in economics at the University of Warwick and a Turing Fellow at the Alan Turing Institute, well known for its work on ethics in the digital space. He was also a former economist member of the British Regulatory Policy Committee. Jonathan, can you dive into the data governance aspect, why this is so important, and what we need to do about it? You will be followed by Sonia.


Jonathan Cave: Yes, thank you, Martin, and thank you, everyone, for showing up. This is a very important topic, and I’m going to largely limit these first remarks to matters dealing with data. But one thing I want to point out is that this idea of evolving capacity applies not only to the technologies which are changing and collecting more and more data, but also applies to the evolving capacities of the individuals involved, in particular children.


Maarten Botterman: Jonathan, can you improve your microphone? Not really. Okay, you’re understandable, but just not great. If you don’t have an easy trick, let’s continue. Sorry about that.


Jonathan Cave: Okay, let me just try.


Maarten Botterman: Closer to the device may help. Okay, well, let me try.


Jutta Croll: Now that you’re so close to the device, it’s better if you just go close.


Jonathan Cave: No, actually, the device is attached to my ears, so there’s no way of going closer without changing my face geometry. But I’ve switched to another microphone on my camera. Is that better?


Jutta Croll: Yes. Yes, it’s much better.


Jonathan Cave: Okay, thank you. One can never tell with these technologies. I think it's very interesting that much of our law and many of our policy prescriptions around child safety are predicated on the idea of chronological age: that people below a certain age should be kept safe and people above that age lose that protection. But of course, particularly as children have more and more experience of online environments, and the people making the rules have less and less experience of the new technologies, that static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that. First of all, I think it's important to remember the data governance issues. One element of this is that the data themselves can be a source of either safety or of risk and harm to young people. The reason we care about that is both the immediate harm and the collective, progressive, or ongoing harm to which early exposure to inappropriate content, which includes manipulation of individuals by priming and profiling, can expose people, which then changes the way they think as individuals or as groups. Now, in that respect, the question becomes: which data should people have available to them? One particular element of this is that we have a lot of privacy laws, and many of these privacy laws set age limits for people's exposure to, or ability to consent to, certain kinds of data collection or processing. Mostly these are predicated on what we would consider sensitive data, but in the online environment, particularly social or gaming environments, many more data are collected whose implications we only dimly understand, and this is where AI comes in. It's not obvious which data may be harmful.
So, instead of imposing rules and asking industries simply to comply with those rules, we may need, and increasingly in areas like online harms we're moving in the direction of, a sort of duty of care, where we make businesses and providers responsible for measurable improvements and not for following static codes. So it's harm reduction rather than compliance. So there's the question about which data are collected. There's also a more minor issue: children are exempt from certain kinds of data collection, but those may be the same data needed to assess either what their true chronological age is or their level of digital maturity. So it may be that some of the rules we have in place make it difficult to keep pace with the evolving technologies. Okay. I think, probably, rather than going on, I should turn over to Sonia at this point, and comments can come when we return to this.


Maarten Botterman: Fantastic. Thank you. Yes, Sonia, please go on.


Sonia Livingstone: Okay. Brilliant. Thank you very much, and thank you for the preceding remarks, which set the scene. I did want to begin by acknowledging what an interesting conversation this promises to be, because we're bringing together two constituencies, those concerned with the Internet of Things and those concerned with children's rights, that haven't historically talked together. It's really valuable that we're having this conversation now. In a kind of Venn diagram of child rights and IoT experts, I think the overlap currently is relatively modest, and I hope we can widen the domain of mutual understanding. I think it's even there in some of the… and age assurance is a brilliant topic to illustrate some of both the overlaps and the differences. From a child rights perspective, a starting point is to say that it's very hard to respect children's rights online, or in relation to IoT, if the digital providers don't know which user is a child. So having that knowledge seems a prerequisite to respecting children's rights. And yet, as some of us have been investigating, it is far from clear that age assurance technologies as they currently exist do themselves respect children's rights. So the worry is that we might bring in a non-child-rights-respecting technology to solve a serious child rights problem. And I think this challenge is amplified in relation to the Internet of Things, because now we're talking about technologies that are embedded, that are ambient, and that the users may not even know are operating or collecting their data, processing… but also introducing risks. So a child rights landscape always seeks to be holistic. Privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so there are also costs, potentially, to children's rights, as well as benefits.
…perspective that is crucial. So it's always important that we think about privacy and safety, if you like, hygiene factors: how do we stop the technologies introducing problems? But we need to think about those also in relation to the rest of children's rights. What am I thinking of? I'm thinking of consulting children in the design and making of both the technologies and also the policies that regulate them. I'm thinking of children's right to access information and right to participation, rather than being excluded through age limits, or perhaps through delegating the responsibility for children to parents, which means parents might exclude children. We're seeing a lot of this in various parts of the world at the moment. As Jutta said, I'm thinking about evolving capacities. This is not just a matter of age limits, where underneath children are excluded and above they are placed at risk, as it were. This is a terrible binary, if that's where we're heading. But we're also thinking of appropriate provision for children who are users or may be impacted by technology. They may not be the named user. They may not be signed up to the profile, or signed up to the service, or paying for the service, but they may be in the room, in the car, in the street, in the workplace, in the school. They may be impacted by the technology. I'm thinking also about best interests: that overall, the balance should always be in the child's best interests. That's what every state in the world, except America, has signed up to when it ratified the convention. And I'm thinking of child-friendly remedy, so that when something goes wrong, children themselves can seek remedy, not necessarily through adults. So I think a child rights approach brings a broader perspective, but also one that is already embedded in and encoded in established laws, policies and obligations on institutions and states, to ensure that these new areas of business respect children's rights during and as part of their innovative process.
And so I'll end with the mention of child rights by design, if you like, to give a broader focus to questions of privacy by design.


Maarten Botterman: Thank you. I see no… Oh, in the chat. I will ask the host. No, there's no comment. Torsten?


Torsten Krause: Yeah, we have one comment in the chat. I will read it out as it was written by Godzway Kubi, I hope I pronounced that correctly. Godzway Kubi wrote: two ways age-aware IoT can contribute to a better IoT ecosystem for children are by prioritizing private devices designed for children, such as smart toys and learning tools, and by using age-appropriate interfaces and content filters to enhance usability and safety.


Maarten Botterman: Okay, thank you for that very appropriate remark. By the way, Sonia, the first time we came together on age was actually on children's toys and IoT, six years ago. So… Jonathan, please.


Jonathan Cave: Yeah, just a small follow-up. Those are extremely useful remarks. There's one thing I wanted to say about the use of technologies to identify children's ages, whether chronological or, let's call them, digital ages, which is that these, like all other age verification technologies, can be bypassed. And one particular concern we should have is that when these bypass approaches become known to children, or to groups of children, there may be an adverse selection, in the sense that those most likely to bypass those protections may be those most at risk, either as individuals or in the groups through which these practices are shared and disseminated. I remember when I was growing up, the drinking age in our county was 21, the neighboring county was 18, and fake IDs were in widespread circulation among certain social networks. So there is this issue about whether, although the solution may be very good, the path to the solution may be more harmful than where we started. And so, yeah.


Maarten Botterman: I think that's a very good point, and one of the big topics later on. So with that, I would like to move on to the second topic. Oh, Sonia, do you want to respond to that?


Sonia Livingstone: Well, I was just going to say, thank you. Very briefly, in response to Jonathan, and thinking of a paper that I worked on as part of the EU Consent Project, also with Abhilash: when we consult children and families, they actually value those kinds of workarounds. It provides a little flexibility. A parent might say, I don't think my 15-year-old should use an ID to drink, but perhaps a 19-year-old could. And it's that little bit of flexibility around hard age limits that many families and children say is important to them, just that bit of flexibility for where they know their child is a bit less mature or a bit more mature. So encoding everything in legislation and technology can in itself take away some agency from families. And I think that's a challenge to consider.


Jonathan Cave: I'd also say that it takes away some agency from the children themselves, who have to learn how to navigate. There is this tension between asking whether the environment can be made safe and, if not, whether denying access, as for example in Australia, is the appropriate approach, or whether some more active form of engaged filtration is better. But then you have to move away from the binary, of course, because you do have to not only gauge how mature children are, but provide them with curated experiences, perhaps under parental or peer control, that enable them to become capable of safely navigating these environments.


Maarten Botterman: So I think there are the hard, legally coded limits, and I think online tools can set more useful and practical limits than hard-coded legal limits can, unless you involve obligatory registration.


Jutta Croll: Yes. So far, what we know, for example, from the GDPR, is a hard-set limit of between 13 and 16: in every country in Europe it's a hard limit, either 13, 14, 15, or 16. What we are talking about here is more about age brackets, which would mean a certain age range, saying between 13 and 15, or 16 to 18, or something like that, so that you have a bit of a range. And there comes in the issue of maturity. You might have a 13-year-old that is as mature as a 15-year-old, and a 15-year-old that is only like someone who's 14, or 11, or 12. So when we are not talking about an exact age threshold, but about age brackets, we get some of this flexibility. And it also plays into the concept of maturity, so that it makes it more flexible. Thank you.
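The bracket idea described here can be sketched in code. This is a minimal illustration, not taken from any regulation: the bracket boundaries, function names, and the small maturity offset are all hypothetical, chosen only to show how ranges plus a maturity adjustment differ from a single hard threshold.

```python
# Hypothetical sketch: age brackets instead of one hard age threshold.
# Bracket boundaries are illustrative only, not drawn from the GDPR.

BRACKETS = [
    (0, 12, "under-13"),
    (13, 15, "13-15"),
    (16, 17, "16-17"),
    (18, 150, "adult"),
]

def bracket_for(age: int) -> str:
    """Return the bracket label containing the given age."""
    for low, high, label in BRACKETS:
        if low <= age <= high:
            return label
    raise ValueError(f"age out of range: {age}")

def effective_bracket(age: int, maturity_offset: int = 0) -> str:
    """Allow a small maturity adjustment, e.g. a mature 13-year-old
    treated like a 15-year-old, clamped to at most two years either way."""
    adjusted = max(0, age + max(-2, min(2, maturity_offset)))
    return bracket_for(adjusted)
```

The clamp on `maturity_offset` captures the point made above: the flexibility stays within a range rather than overriding the system entirely.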


Maarten Botterman: Yes, thank you very much. And I see another comment, from Doherty, in the chat. Torsten, can you read it out, please?


Torsten Krause: Doherty Gordon wrote a comment, repeating a little of what Jonathan said. She wrote: denying access will always encourage teens to look for workarounds, and to engage in dangerous behavior because they have no guidance. Why do we not put more emphasis on media and information literacy, so that users understand how to protect themselves?


Maarten Botterman: Yes, thanks for that. And Pratishtha, can you come in on that, and on the brackets as well? Please.


Pratishtha Arora: Yeah, I want to speak on that point. Looking into all aspects of children there, it's equally important that when we're defining age for the accessibility of technology for children, we ask what category of children we're talking about, because it might just differ as per their sense of understanding about technology, and also their sense of understanding about using the technology.


Maarten Botterman: Yes, thanks. That was Pratishtha Arora, who will be speaking shortly on AI as well. Thank you for that. So, how can we help ensure that parents, children, and their environments know how IoT devices can serve them? That is the next topic we would like to dive into. Basically, we have all these goods from all over the world, with different capacities, and we found in the past that, for instance, the security of these devices was sometimes limited: devices being set with default passwords like "admin" is not useful. We also found that in the past some devices, for instance, sent data back to the factory or wherever without users being aware of that. This asks for more clarity. At the same time, legislation is per country. So how can we together get a good tool here that helps us understand what we actually buy and what we actually start using? From the IoT perspective, we had a big discussion last year in Kobe where it was made clear that labeling of devices and of services is crucial. It even needs to be dynamic, because with upgrades of software the label may change. So a label that can be linked to an online repository would be crucial. And certification of that, of course, is important, because one could claim anything there, and how do we know that it's true? So some certainty needs to be built in. And there are different certification schemes; this is not a session to go very deep into the differences between those. At this moment, these labeling and certification schemes are also being discussed and put in place around the world. A framework has been put in place in Singapore, for instance; action has been taking place in other places, and in Europe as well, of course, as part of the Digital Services Act. And what we see is that now the diplomatic go-around is beginning.
These countries are talking to each other about how we can do it in such a way that we can recognize each other's certifications, so that the labels of other countries are useful for us as well. And this is the beginning, but we're not there yet. The deeper intent of labeling and certification, where labeling is about what the device is and certification is about how I know that this is correct, is basically that it empowers users to make smarter choices. Next to security, which we discussed last year, it should also be about data security: where are the data streams going? And I think what has come up over the years since is more and more emphasis also on how much energy a device uses, like IEEE is already doing for electronic devices. All this together helps clarify what IoT devices can do and can offer. And I'm very interested to hear your perspective, Abhilash, on what this can do for age awareness and appropriateness.
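The dynamic label described above, tied to an online repository and re-versioned as firmware changes, can be sketched as a simple record. Everything here is hypothetical: the field names, the example URL, and the rule that certification lapses after an update are illustrative assumptions, not any existing labeling scheme.

```python
# Hypothetical sketch of a dynamic IoT device label. The label points to an
# online repository entry so it can stay current as the device's software
# changes; field names and the re-certification rule are invented for
# illustration only.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class DeviceLabel:
    device_id: str
    firmware_version: str
    repository_url: str              # where the live label is maintained
    default_password_changed: bool   # no shipped "admin" passwords
    data_destinations: list = field(default_factory=list)  # where data streams go
    energy_rating: str = "unknown"
    certified_by: Optional[str] = None  # certification body, if any

    def update_firmware(self, new_version: str) -> None:
        """A firmware update may change what the device does, so the label
        records the new version and drops its certification until the
        device is re-assessed (an assumed policy, for illustration)."""
        self.firmware_version = new_version
        self.certified_by = None

label = DeviceLabel(
    device_id="example-cam-001",
    firmware_version="1.0.0",
    repository_url="https://labels.example.org/example-cam-001",
    default_password_changed=True,
    data_destinations=["eu-cloud"],
    energy_rating="B",
    certified_by="ExampleCertScheme",
)
label.update_firmware("1.1.0")
```

The point of the sketch is the coupling: the label is not a static sticker on the box but a record that must track the software it describes, which is exactly why an online repository is needed.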


Abhilash Nair: Thank you. I want to talk a little bit about why age assurance matters from a legal perspective. As a starting point, we know that the law requires some form of age assurance for various content services and the sale of goods online. Some of these laws have explicit requirements of age verification or age assurance; in others it is implied. But in practice, there is very little out there. For decades, really, we've had laws that have not been enforced properly because they have not been complemented with appropriate age assurance tools. The notable exception is probably online gambling, where the law seems to have worked in some jurisdictions, such as under the UK Gambling Commission. But part of the reason why it was successful is that it's not just about age assurance; it's also about identity verification, where people need to be identified so that they can be offered support against problem gambling, and so on. In almost all other cases, when we looked, in the EU Consent Project that Sonia mentioned earlier, at all member states in the EU plus the UK, we found that there was very little out there in terms of age verification tools that can actually help implement legislation. And content was the most problematic of all, not least because there are cultural variations even within Europe as to what is acceptable content, even for children within the sub-18 category. But there was also a wider problem: a disconnect between the principle of self-regulation that the EU has advocated, especially for content for minors, on the one hand, and legislation that suggests the adoption of age assurance tools on the other. This has not led to a happy situation: the law on the one hand requires age assurance to protect children, but the practical reality is that there hasn't been any useful means of enforcing that legislation in practice.
There is a legal principle which suggests that if a law cannot be enforced, it is unlikely to succeed; it is unlikely to command the respect of the people who are bound by it. You can see a good example in copyright infringement online: it's not because people think copyright law is wrong that they infringe copyright, it's because they can do it without consequence. Unfortunately, that is the case with most laws that require age assurance, most notably with regard to content, as I've already mentioned. So what I'm trying to say here is that it's important that age assurance, or effective age assurance, complements legislation for the legislation to work, and that is a starting point. Jonathan already mentioned that we've got too many rules; the solution is not to have more legislation. The real starting point is to make sure that what we have is enforceable and practical. Now, things are changing lately, with more specific legislation coming onto the books, specifically mandating age assurance and imposing specific obligations on platforms and websites for non-compliance. But it's not without problems. The fundamental problem with age assurance, in my view, is that it has essentially been a debate about children out and adults in, and that's not how it should be. Sonia has already talked about evolving capacity, so we cannot just classify everyone under the age of 18 into one category and use age assurance on them, and have adults on the other side. But there's also the other binary debate, between adults who feel strongly about privacy and their ability to access the internet without any restrictions, and keeping children safe. That balance also has not been struck appropriately thus far.
And there's also the other issue that the age threshold for children accessing content services or purchasing goods can vary across nations and across cultures, even within the same country; there are cultural variations in that. But the law does not always factor in the evolving capacities of children. The EU Audiovisual Media Services Directive, which refers to a notion of risk of harm as the basis for adopting appropriate safeguards and measures for age assurance, is a good example, because in principle it recognises the evolving capacities of children: an 11-year-old is very different to a 16-year-old. But not all legislation does that. I've already talked about the cultural variations as to age thresholds, even for what is generally perceived as harmful content for children. Even within Europe, there are variations in the age threshold for accessing porn, even under the sub-18 category. So, that's where I stand on age assurance laws. We have toyed with the idea of self-regulation, especially in the online space, for more than three decades, and it hasn't worked. What I'm saying is we don't need new age assurance laws; we already have age assurance laws. What we need is workable legislation. And like you said, labelling and certification can be mandated by law or could be voluntary, but they obviously have to go hand in hand with and complement legislation. I do believe that measures like certification and labelling give more consumer choice, more parental or caregiver autonomy, and also children autonomy, but they cannot be a substitute for legislation, is how I feel about it.


Maarten Botterman: Thank you for that. Comments online on this subject, Torsten?


Torsten Krause: There are discussions online,


Maarten Botterman: but let's first go to Jonathan and Sonia, because they raised their hands. Other comments? There are comments, but not on this. Jonathan, please, your thoughts on this.


Jonathan Cave: Okay. Thank you. Thank you very much. And thanks for that.


Maarten Botterman: I can’t hear you right now. Am I still inaudible? One moment. Yes. Jonathan? Yes. No.


Jonathan Cave: No. I’m still inaudible.


Maarten Botterman: I see the technical section working on it. Yeah. It will be the same issue for her because it’s settings here in the room. Can you say something? Something? Something? Say something. We can’t hear the online speakers. We can’t hear the online speakers. Okay. One moment. Yes. We can hear you now. You’re back.


Jonathan Cave: Okay. I'll be very brief because of the technical delays. I think one of the things that we learned with age verification in relation to pornography is that the very existence of the single market, or the global use of these technologies, makes it very difficult to maintain differences between jurisdictions. Even attempts to tackle the problem by regulating payment platforms, because you couldn't regulate the content providers on the platforms, sort of failed, because the content was coming from outside the jurisdiction, and the fact that it was banned within the jurisdiction merely increased the profitability, or the price, of external supplies of this kind of potentially harmful content. Another thing is that I completely agree that some mixture of self- and co-regulation and formal regulation, backed by a concept of a duty of care or harm-based regulation, something more exacting, is required to keep pace, not just with the evolution of technology and people's understanding, but with how it reacts to existing bans and protections in regulations. And the final point was to say that we should probably also be aware that certification schemes and other forms of high-profile regulation can convey a false sense of security. But by the same token, it may be the case that some of the harms against which we regulate are no longer really harmful, because people have evolved away from the point where they're vulnerable to them. In that sense, I just point out that in relation to disinformation, misinformation, and malinformation, there's evidence that a lot of the younger generations, Gen Alpha and Gen Z in particular, are less vulnerable to these harms than their unrestricted, unregulated adult counterparts. So it may be that some of the harms we worry about cease to be harms, or are no longer appropriate to be tackled by legal means. Okay, those are my comments.


Maarten Botterman: Thank you for that, Jonathan. Sonia, please.


Sonia Livingstone: Thank you. I wanted to acknowledge the conversation in the Zoom chat for the meeting, identifying the range of stakeholders involved. Maybe we should have said at the very beginning that in facing this new challenge of age assurance in relation to IoT, a whole host of actors are crucial; they all play a role, and there are difficult questions of balance which will vary by culture. So yes, we need to empower children and make sure that these services are child-friendly, that they speak to them and are understandable by them; we need to address parents; we need to address educators and involve media literacy initiatives in exactly this domain. But I wanted to make two points there. One is that we can only educate people, the public, in school and so forth, insofar as the technologies are legible, insofar as people can reasonably be expected to understand what the harms are, where the harms might come from, and then what the levers are, what kind of resources are available for the public, the users, to address those. And we're not there yet. So on the question of balance, I think the spotlight for IoT is rightly on the industry, and on the role of the state, as Abhilash said, in bringing legislation. And on that point, we've been doing some work trying to make the idea of child rights by design real and actionable. We've been doing some work with the industry, the stakeholder group that is kind of most … And so I just want to open up the black box of industry a little. Because what we're finding is that from the CEO, through the legal department, to the marketing department, the design department, the developers, the engineers, all the different professionals and all the different experts that make up the development of a technology, for the most part, most are not aware of the child user who may or may not be at the end of the process.
Most of them have in mind a different kind of individual: not a family that might share passwords and share technologies, and by and large a relatively invulnerable or resilient user, rather than one with a range of vulnerabilities, including the children that we've talked about. So let's look into this notion of the industry and think about where ethical principles, a duty of care, legal requirements and child rights expectations will land within a company, whether it's a small startup that is completely hard-pressed and has no idea of these concerns, all the way through to an enormous company that has a lot of trust and safety focus at a level in the organization where it goes relatively unheard, and a lot of engineers and innovators who are pushing forward without the kind of knowledge and awareness that we're discussing today. So pointing to industry, and to governments regulating industry, just opens up the next set of challenges about who and how to address these issues.


Maarten Botterman: Very well said, and also an excellent segue to our next sections, because basically this is about, maybe, the adaptation of the equipment through AI, and for sure about capacity building for parents, children and their environments, on which we will talk in the last session. I'd like to invite Pratishtha Arora to start by explaining the role for AI that she sees in this interaction. Thank you.


Pratishtha Arora: Yes, thank you for that. I think the impact of AI is putting a lot of emphasis on children and their engagement with devices. In terms of impact, I see both the positive and the negative. In terms of the positive, it's also a learning platform for many children, who may be slower developers, watching videos, learning and building their own capacities. On the contrary, when I talk about technology and its advancement, the impact on children is also a big challenge, in that children are being given all the rights to use a device without oversight. Think of the smart speakers children are using to call their parents, to voice out what they feel, to engage with the device, which also gives them a freedom of expression to learn, in that the device answers back when the child asks a question. But the speaker, the device, is unable to understand what the age of the child is: whether it's an 8-year-old asking the question, or a 13-year-old, or a 15-year-old. That gap is not being identified as of now. So that's where I feel these technological advancements are playing a very big role. On the point of how it leads to a negative impact: there is over-dependence on these technological tools as well, because for every small thing we go up to the device and ask it to solve the problem, and that also affects the development of skills, the physical development of a child and the mental development of a child, because we become totally dependent on what the device responds to us.
In terms of standards, I feel that we need more defined standards for where children have access to devices, and the engagement of parents has a big role in terms of at what point children get that access. We also need other stakeholders involved from the industry perspective, so that they follow these rules when any device is being designed from a child's perspective. Because, as Sonia mentioned, developers are not able to figure out whether a device has been designed from a child's perspective. So that thought has to be ingrained, to the point that any technology which is being designed needs to be child-friendly as we advance with technology now. Of course, reinforcing the point again and again that safety by design is a key concept, and that in the future all these aspects are taken into consideration, so that any child and every child is looked after, irrespective of their age and irrespective of their own skills and their own learning capabilities as well. Coming from India, I have a varied range of observations of how technology is being used by children, and also how it is misleading them in terms of their engagement in online spaces, whether in the space of online gaming or in the space of social media interactions. Because somewhere or the other, with the Internet of Things and these devices, when we see development on one side of technology, there is the flip side of it, the misuse of technology. So we need to keep the right checks and balances. I think that's what has been coming up again and again in the conversation as well: we need to have the right checks and balances. Also because the Internet of Things is quite an alien term when we talk about it in India. It's sad.
We need to somewhere or the other, we need to also break down this concept of Internet of things to people to simplify the understanding of what exactly it means. Like when we talk about trust and safety, it is again an alien term. Somewhere we are advancing with the technology in the technological tools only with certain sector of people who are involved into this whole game of designing and developing tools. But for the larger or for the masses, it is still an alien concept. What is? How the standards need to be defined. That is why I think that’s the missing gap when we talk about child friendly or safety by design as a concept. Also now because somewhere or the other, technology has been knocking everybody’s door. So as a smart device or a device in terms of a phone is in everybody’s house. But in terms of having other gadgets is again more about what section of society engage. But larger is more about the difference in the economic backgrounds of the families. That it is more with the privileged section that they are encountering this problem and challenge about devices, the engagement over there. While the phone, a smart phone is in everybody’s house. I feel there, this is also global attention that a phone device which is in a capacity that any child can use because there has been an emphasis about that we need to have devices as well. So I think I’ll stop there. And with the last point that how data governance is playing a very big role over there. Because whenever you’re setting up any device and you’re giving out the data, you’re also ending up giving your child data. So there, what is the governance about data, the privacy aspect of children’s data over there?


Maarten Botterman: Okay. Thank you very much. Some good points made. Jonathan, I’ll leave the floor to you.


Jonathan Cave: Okay. Thank you. And thank you for that discussion. I just have a few other points I'd like to introduce. I'm an educator. One of the things that AI does is that it not only facilitates education and children's development in the ways we normally understood it, but it also preempts or distorts it. It has an influence on the way people think. One aspect of this is that the AI devices that children use learn about the child, but in the process they also, as it were, program the child: they teach the child. Now, one of the things they teach is to rely on the system for certain kinds of things. We outsource our memory to our AI devices, and we ask for things that in the past we would have thought about. When a child searches for information online or asks a question — in the past they would have had to read a book, for example. They would have read things they weren't specifically looking for, and they would have had to think about them to develop an answer to the question. If the AI gets very good at simply answering the question that was asked, the educational aspect of that is somewhat lost, and the child's dependence on the device becomes in a certain sense deeper: the child becomes an interface that sets the device in motion. Now, this is something we have to deal with. We might say what we need to do is to prevent it, but my students say to me, with respect to the use of AI to write essays and so on, that it is a transferable skill: the world into which they will grow will make use of these technologies, and learning how to use the technologies may be more important than learning to do, without them, the things that we used to ask them to do. So there is a question here, a deep question I think, of what experience is best for children to help them become the kind of adults who can successfully work in this environment.
And there are some technical things we can do along the way, like developing specific or stratified large language models, or even small language models for specific children to use, or using synthetic data or digital twins to put a sandbox around children's experience of using the technologies. But I think the general lesson is that these technologies are used in this way to serve people, and if they're oriented towards solving past problems — and developers often tend to do that — developers need to be required to think about the consequences of what it is that they're doing. And that requires a continuous conversation involving children, developers, parents, and the rest of society, one that doesn't just stop when the device is released into use. And a point on games. It's certainly true that games, particularly immersive online games, have a kind of reality or salience to a person that is even greater than the salience of real experience. They can cut through to the way in which we think in ways that normal contact doesn't always do. We know that from neuroscience experiments as well as from everyday experience, which suggests that these games could actually be used to help people navigate this new world, to promote ethical development instead of attenuating ethical and moral sensitivities among children. And then there is a difference, and this was very compellingly brought out by the experience from India, between technologies which are designed for or used by elites — whether privileged elites in the sense of money, or trained elites who can navigate and understand the risks and benefits — and those same technologies when used by everyone else, where the uses become different and evolve away from those the developers originally intended. So I think that is a fundamental issue that needs to be dealt with at the development and deployment level.
So, oh, then the final thing is to say that, of course, one of the things that AI can do is police the problems that AI creates. One of the things that one would expect a machine learning model, a deep neural net model, to do is to keep track of how these technologies are changing our children and to respond. So the solution to the problems created by AI is, I don't know, more AI? I would hesitate to actually endorse that, because then we do give up our human agency. But those are my concluding thoughts on AI in this respect.
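Jonathan's idea of "putting a sandbox around children's experience" of an AI system can be made a little more concrete. The sketch below is purely illustrative and not anything presented in the session: the class name, the blocked-topic list, and the `model_fn` callable are all hypothetical stand-ins for whatever policy layer and model a real deployment would use.

```python
# Illustrative sketch: a minimal "sandbox" wrapper between a child and a
# text-in/text-out language model. All names here are hypothetical.
from typing import Callable

BLOCKED_TOPICS = {"violence", "gambling", "self-harm"}  # placeholder policy list

class ChildSandbox:
    def __init__(self, model_fn: Callable[[str], str], reading_age: int):
        self.model_fn = model_fn          # any text-in/text-out model
        self.reading_age = reading_age    # used to shape the tutoring prompt

    def ask(self, question: str) -> str:
        lowered = question.lower()
        # Policy gate: refuse topics on the blocked list before the model sees them.
        if any(topic in lowered for topic in BLOCKED_TOPICS):
            return "Let's ask a grown-up about that together."
        # Rather than answering directly, prompt the model to scaffold thinking,
        # preserving the educational step Jonathan worries is lost.
        prompt = (
            f"You are tutoring a child with a reading age of {self.reading_age}. "
            f"Do not give the final answer outright; give a hint first.\n"
            f"Question: {question}"
        )
        return self.model_fn(prompt)

# Usage with a stand-in "model" (an echo function) instead of a real LLM:
echo_model = lambda p: f"[model sees] {p}"
box = ChildSandbox(echo_model, reading_age=8)
print(box.ask("Tell me about gambling"))   # blocked by the policy list
print(box.ask("Why is the sky blue?"))     # forwarded with the tutoring prompt
```

The point of the sketch is only the placement of responsibility: the filtering and the pedagogical framing live in a layer around the model, which is one way to read the "sandbox" and "digital twin" suggestions above.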


Maarten Botterman: Thank you very much for that. Yes, that interplay is ongoing and is forming us, with a clear call for being aware of the dependency that may grow with AI, for safety for kids from the outset — by design — and the human agency warning: we may want to keep that in some way or another. Jonathan?


Jonathan Cave: Yeah. The other thing I would just add to that is Piaget's dictum that play is a child's work: that sense of engaging with these technologies in play allows us to develop in ways that doing so in anger, or for serious reasons, does not. And there's some really interesting work going on at Oxford on play as a state of mind when we engage with technologies. OK.


Maarten Botterman: Thank you so much. Someone from the room?


Torsten Krause: No comment on the current block, but there was a discussion, or a hint, that it's not only necessary for children to understand how IoT works and what the functionality behind it is — parents must also know how it works and how it could influence their children. But maybe that's an aspect we can add to the next block, too.


Maarten Botterman: Yes. We will come back to that at the end, because it's not only about children and parents, but also the babysitters, the social environment. So we will take that back to the end. Jutta, do you want to come in? Any other specific questions on the AI impact? I mean, it's clear, right? We're learning with AI, we're learning from AI, but AI is also learning while we do it. And let's make sure that it learns in the right ways, taking into account the values that we share around the world. And those may not all be values you got from your parents: inclusiveness, the recognition and valuing of diversity, human agency, and privacy — I think these are some clear examples of values that we share. For new tools and new developments to take these into account is one thing; let's also keep in mind all the old stuff that is already out there, and evolve that. Now, on standards: we heard about legislation, we heard about industry practice. And in a way, if we look at, for instance, electrical safety, we've got the IEEE standards — global standards. For internet standards, we've got the IETF standards that set certain rules. They are voluntary, but they're industry standards, adopted and at least agreed and discussed around the world. And more of this is likely to come. Jutta, please.


Jutta Croll: Yes, if I may come in, since you got back to the standards. I was considering what Abhilash was saying, that labeling and certification schemes should be mandated. That would be the first step to go further with IoT: having a mandatory certification scheme or labeling. But then it also needs to be accepted. And one example that we have known for about 20 years now is Section 508, which at that time obliged the US federal administration to have accessibility as a precondition in procurement. From that time on, any product bought by the administration in the United States needed to be accessible for people with disabilities. That, of course, set off the whole process of having a broad range of products certified to be accessible, and it also brought the prices down. The products became affordable because the administration was obliged to buy only those products that are accessible. And if we could come to that next stage — not only labeling and certification, but having it as a procurement precondition — that, I do think, would really help to bring forward labeled IoT. Do you understand what I mean? It came to my mind as a really good example that we could follow up on.


Maarten Botterman: Yes, I see Jonathan clapping his hands, because we have talked a lot about this. Basically, we've got standards and we've got legislation. The problem with legislation is that it's per jurisdiction; you can then start to harmonize across jurisdictions, and that takes time. But at least if there are principles that are globally recognized, you have something to look to. And organizations like the IEEE, the IETF, and ISO have a role in that. I know Jonathan is much more of an expert in this than I am, and I see he has raised his hand. Jonathan?


Jonathan Cave: Yes, thank you very much. That's a brilliant point. The idea of using public procurement as a tool, as a complement either to self-regulation or to formal regulation, is, I think, one that's worked in a number of areas. One of the things it can do, as Jutta mentioned, is to set a floor under the kind of capability we wish to give an economic stimulus to: the things have to be accessible, they have to have certain capabilities. But it can also create a kind of direction for competition. When you specify a procurement, part of it is the requirements that the proposed solutions have to meet; the other part is the things on which you award additional scores. So procurement tenders, written appropriately, can also stimulate innovation to come up with better and more effective solutions. There's that aspect that puts money into developments that might not yet have a market home, that might not yet be profitable for people to provide, but which, with certain development or certain economies of scale, might become profitable. And you can do that without putting governments in the position of saying "this is what we need", because governments are particularly bad at picking winners and specifying solutions. What they can do is move the whole industry in the direction of providing these things. And this also happens — the final point on this — when it comes to the adoption of standards within procurement. European standards, although not developed by the EU, are often incorporated into public procurement tenders, with the idea being that you have to show compliance with the standard or equivalent performance. And that introduces into this market-based alternative to regulation something which looks like an outcome-based or principles-based criterion as to what's acceptable and what isn't.
In other words, you either have to do the thing which is there in the code — you have to comply — or, better still, you have to show that you can do better. And if you do that, you harness the ability to give the customer some say in the matter, rather than just a negotiation with a procurement officer inside a government bureaucracy. So I think it's a really profitable approach.


Maarten Botterman: Yes. A clear example of that is that, for internet security, there are standards that patch the flaws in the current system, like DNSSEC and RPKI. These are standards that can be implemented and adopted by service providers. Now, these standards are, again, global, but they're voluntary. In the Netherlands, for instance, the Dutch administration does include them in its tendering for services, and with that they ensure that the service providers — which I as a citizen can also go to for those services — have those basics in place. So that's one of the examples. And yes, if government isn't sure, at least it can help with the direction. So Jutta, thanks for raising it; Jonathan, thanks for bringing it home. And you wanted to comment on that?


Abhilash Nair: Yes, thank you. Just wanted to follow up on what you just said about laws mandating this — laws could encourage rather than, you know, always mandate; I recognize there are some instances where mandating is not possible. One other thing to add is that it might also help mitigate the literacy problem of parents or caregivers, because policymakers often assume that parents and caregivers are always educated and that every child comes from a two-parent middle-class household. And that is not the case, especially in countries with varying literacy rates, let alone digital literacy rates. With that kind of certification in place, parents might still be able to choose safe products for children.


Jutta Croll: You gave the perfect segue for handing over to Sabrina, I would say. Yes, go ahead.


Maarten Botterman: And yes, we also have a remark from Dorothy online, who says there are so many people who are not online yet — how do we make sure that they don't miss the boat? So with the focus on what we can have technology do, and on developments with AI and standards, in the end it's about the people. How can we make sure that people use it well? Sabrina, please.


Sabrina Vorbau: Yeah, thank you. Good afternoon, everyone. I kept quiet for the moment because I think it makes sense for me to come in at the very end, to complement the various aspects that have been raised: how we can indeed build a bridge from the information and knowledge we have to the end users, which are of course primarily children and young people, but not exclusively — also parents, caregivers, teachers, and, not to forget, other stakeholders such as policymakers and industry. I want to come in with a concrete example, representing the Better Internet for Kids initiative, which is funded by the European Commission under the Digital Europe programme. The initiative aims to create a safer and more empowering online environment for children and young people. In the European Union we have the Better Internet for Kids Plus strategy, which is based on three core pillars: child protection, child participation, and child empowerment. So, as was mentioned already, we try to empower children and young people to become agents of change, but in order for them to do this, they need us as adults to provide a responsible space. That is the goal of the Better Internet for Kids initiative: to promote responsible use of the internet, protecting minors from online risks such as harmful and inappropriate content, but also to provide resources for parents, educators, and other stakeholders to better support them on aspects such as online safety and digital literacy. And, of course, Better Internet for Kids also addresses the very prominent topic of age assurance, to ensure that children and young people engage with age-appropriate content and are protected from harmful content.
And to give some concrete examples of materials that you will find on the Better Internet for Kids portal: earlier this year, together with the University of Leiden in the Netherlands, we published a mapping of age assurance typologies, accompanied by a comprehensive report that gives an overview of the different approaches to age assurance and the associated legal, ethical, and technical considerations that were picked up by my fellow panellists. Just to touch on some key areas: first of all, a diverse approach to age assurance — really the view that there is no one-size-fits-all solution; the crucial importance of the privacy and data protection concerns that were also highlighted by Jonathan; and the balancing act between effectiveness and user experience. As I said, this is a very comprehensive research report, and as with existing laws and policies, we need to ensure that this knowledge and information is translated into user-friendly guidance, so that we pass this expertise on to educators and parents, who are really crucial in this process, and also look at how we can build capacity and make sure it is properly implemented at the local level. That is why, on the Better Internet for Kids portal, which you can find at betterinternetforkids.eu, we very much put age assurance in the spotlight, focusing specifically on two stakeholder groups: first of all, educators and families, to provide resources that help with proper awareness raising, but also knowledge sharing to foster digital literacy.
I think there was also a comment in the chat earlier about how we can ensure proper media literacy education, and that is why we developed an age assurance toolkit that includes age assurance explainers. Just to give you some examples of what users can find in the toolkit: first of all, an explanation of what age assurance is in the first place — as was mentioned before, age assurance may be a term that is not so accessible for many people. That is why the toolkit also provides concrete examples of where age assurance comes in, why it is so important, and how it can actually protect children. I think that is also important for carers and parents: to understand why this topic is so important and how it can protect their child. In addition — and I have a printed copy here, so you can see it is a much lighter report — we also designed this together with children and young people. We really try to work on these resources together with the end users, because ultimately it is for them, so it is very important to involve them in the process. And I think it is always very eye-opening, because we are very much used to certain terminology that seems self-explanatory to us, but for some people it is not so accessible; so it is important to really follow this co-creation process. And then, touching on what was said earlier about the black box of the industry: with the Better Internet for Kids initiative, we really try to bridge the conversation and also have industry and policymakers around the table when we discuss certain topics.
On the website, we also have resources aimed at digital service providers to check their own compliance, in the form of a self-assessment tool, manual, and questionnaire that we also developed in the Netherlands. The aim here is for the service provider to critically reflect on their services and how these may intersect with the protection of children and young people. What is important to note, of course, is that it only provides guidance; it is not a legal compliance mechanism. And here again, as was mentioned before, it is not one-size-fits-all when we talk about online service providers — we talked about the gatekeepers, the search engines — so I think we also need to acknowledge their diversity. Then, just to conclude, I want to highlight on behalf of the European Commission that a lot of focus and work is being done in this space, complemented by the work the European Commission is doing on age verification. Following a risk-based principle, the European Commission, together with the EU member states, is developing a European approach to age verification, and the Commission is preparing an EU-wide interoperable and privacy-preserving short-term age verification solution before the European Digital Identity Wallet is offered, as of 2026, in the European Union. So I think conversations like today's are really important — really trying to pull the different strings together and bring different stakeholders to work together. Hopefully, in future settings of the IGF, we will have children and young people, but also educators, participating in such conversations, because we really need them; we really need to understand their needs so that we can act properly and, as I said, really build this bridge to share the knowledge we have on the different aspects and make sure it is translated properly at national and local level.


Maarten Botterman: Thank you very much, Sabrina. I know this is diverse by definition, because it's 27 member states that are all finding their way in this. Globally, this may be very good input. For me, the experience we have of capacity building in general is that we have examples of good practice from all over the world: legislative examples, teaching examples, practical examples. But how to apply them best in your region is for the people in the region. This is why capacity building isn't only about using the same guide around the world; it's also about understanding the whys and the hows, and making sure it's applied for India, for the different regions in Africa — Africa isn't one region either — for Latin America, et cetera, so you can adapt to that and learn from that. The same goes for the relationship between children and parents around the world. And we need to recognize that we can't set one standard for all, but we can have some principles that are valid for all. So with that — I see, Jutta, you grabbed the microphone.


Jutta Croll: Yes, I grabbed the microphone because I saw a comment, or a question, in the chat that I would like to address. Sabrina mentioned the EU ID wallet that is due to come into place in 24 months' time, though the European Commission has already acknowledged it will need more time. The EU ID wallet is an instrument for ID verification, so it is more than age verification, but the wallet is also foreseen to make age verification alone possible. It needs to have an option that you can use only to verify your age, without giving away your identity. And that is very important in regard to the privacy aspects, the data protection aspects, that were already mentioned by Jonathan Cave. The question was whether the Commission is developing its own age verification tool. I would not say the Commission will develop it itself, but in October this year they issued a tender for the development of an age verification instrument that should be white-labeled, so that whatever age verification instrument is available in any country — in Europe or globally — should have an open interface to that white-label tool the Commission has tendered for. And they did so because they gave priority to age assurance and would not wait these 24 months for the EU ID wallet to come. That also shows how important and how topical the thing we are talking about here is. Age assurance is very topical, and not only for the Commission: we've heard several sessions talking about it already here at the Internet Governance Forum. And we are pretty sure that train has been put on the rails, I would say.
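To make the "age-only" idea concrete: the privacy property described here — proving an age attribute without revealing identity — can be illustrated with a toy attestation. This sketch is hypothetical and is not how the EU wallet actually works (real credentials use public-key signatures and selective-disclosure schemes, not a shared HMAC key); it only demonstrates the data-minimization principle that the relying party sees one boolean claim and a signature, never a name or birth date.

```python
# Toy illustration of an age-only attestation (NOT the real wallet protocol).
import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-secret"  # placeholder; a real issuer signs with a private key

def issue_age_token(full_record: dict) -> dict:
    """Issuer derives an age-only claim from the full identity record."""
    claim = {"over_18": full_record["age"] >= 18}   # nothing else is disclosed
    payload = json.dumps(claim, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return {"claim": claim, "tag": tag}

def verify_age_token(token: dict) -> bool:
    """Verifier checks authenticity of the claim -- it never learns identity."""
    payload = json.dumps(token["claim"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and token["claim"]["over_18"]

record = {"name": "Alice Example", "age": 20, "address": "…"}
token = issue_age_token(record)
assert "name" not in token["claim"]      # identity never leaves the issuer
print(verify_age_token(token))           # True: age proven, identity withheld
```

The design point mirrors the wallet's age-only option: the verification step consumes a minimal, signed claim, so the service learns "over 18: yes" and nothing else about the person.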


Maarten Botterman: Thank you very much for that. So with that, I think we've had a pretty good cycle, and I'd like to ask people, in the Zoom room or here in the room, if you have any final questions. If you have any, raise them and then we'll do a final round. Yes, I was looking at — Dorothy has been very active in the chat, and Fabrice has been very active in the chat. Do either of you want to speak as well? Otherwise, we go to Sonia. As you're not raising your hands — Sonia, please.


Sonia Livingstone: Thank you. I just wanted to make a point that hasn't perhaps been made yet — a political point. I'm very struck by how much the industry innovates in relation to complex and challenging technologies and then introduces them into the marketplace. We're seeing this with AI now: suddenly it pops up in all of our search and our social media in ways that were not necessarily asked for. And the same, of course, will happen with IoT. And then worthy groups like us sit around and say parents must do this, educators must do that. Of course, we want them to. But this is a major shift: innovation in commerce placing an obligation on ordinary people and on the public sector. And this, I think, is why the conversations about regulation, certification, standards, and obligations on industry are so crucial, because otherwise the burden of industry making a profit on one side really does fall on those who are already extremely hard pressed. So let's keep up the pressure on the industry, without in any way undermining the argument that, of course, media literacy and public information and awareness are crucial.


Maarten Botterman: Yes, thank you very much for that. Of course, regulatory innovation is also an approach within Europe — I'm European too — where the European Commission, for instance with the AI Act, without immediately moving to regulate, first invites the industry to discuss: what should we talk about, what should we regulate, what should good practice look like? And regulation doesn't only need to come from countries; it can also come from industry and self-regulation. I learned this from Jonathan. Please, Jonathan.


Jonathan Cave: I just wanted to applaud Sonia's point, really, because with a lot of these things there is a transfer of responsibility from industry — well, from the developers in the tech part of the industry — to the service providers and the comms providers and the others who are already regulated, and from them to us. To a certain extent, society is being used as a kind of beta tester, or alpha tester, for these technologies. They're spat out; the ones that succeed, succeed, and the ones that don't, don't. Maybe a regulatory structure grows around them to make them more robust, but the irreversible changes that take place will nonetheless have taken place and cannot be undone, even if we later come to regret them. So some element of, A, a precautionary principle, and, B, an appropriate placement of responsibility is important. And when I say appropriate placement — these things are uncertain, so where responsibility lies should reflect some mixture of being able to understand the uncertainties, being able to do something about them, and being able, particularly financially, to survive the disruption involved in getting from where we are now to a solution that we can not only live with but actually accept and understand. And I think simply "provide and protect", or responding to industry by shoring up the crash barriers and so on, encourages industry to take less and less responsibility for the consequences of what it does, or to define them in narrower and more technocratic terms and to say "this is safe because in lab tests it works out safely". We saw this with medicine; this is why real-world evidence in the use of drugs is so important. A drug may survive a randomized clinical trial, but put it in the real world and it doesn't work like that. So there needs to be some way of joining this up, so that industry at all levels, people, and government are partners in something, and not each sitting on a predefined responsibility.
So anyway, thanks for making it as political as it may have been.


Maarten Botterman: Thank you very much for that. We’ve got a lady. Can you introduce yourself in the room?


Helen Mason: Thank you, and thank you for a very interesting session, which I unfortunately came a little late to. Nevertheless, I'm picking up on a few points.


Maarten Botterman: What is your name?


Helen Mason: My name is Helen Mason. I'm from Child Helpline International in the Netherlands. We work in 132 countries to provide child helplines, 24/7, to children and young people via a variety of channels. I have two points, really. First, we must include civil society and the first-line responders in these kinds of discussions, because they are the people who are actually talking to children and young people and dealing with reports of harms that have happened online. Building the capacity of those frontline responders is absolutely crucial: to be able to respond adequately, to report and know where to report to, and to have proper alliances and referral protocols with law enforcement and regulators, for example. Our work at Child Helpline International is really advancing this particular aspect, to make sure that all of our members are well equipped to respond to all kinds of incidents that might occur online. And we have much data that shows an increase, for example, in areas like extortion — children and young people not knowing where they should report, whether a crime has been committed, what they should do next, whether they should delete the evidence, et cetera. So having those frontline responders capacitated to respond adequately is really vital for us. The second point I want to make is that the data generated by the child helplines themselves, as a result of the conversations they have with children and young people, is a really unique resource. I would encourage all stakeholders to have a look at the information that we collect: it covers prevalence, help-seeking behaviour, and trends, and there is case material with a lot of information about the actual experiences of children and young people. Of course, it is all handled very safely and anonymized, and working together with people like Sonia, we can really use this information to feed back into policy. So I'd really encourage all stakeholders to take a look at the information that we're publishing online. Thank you.


Maarten Botterman: Thank you so much for your remark. With that — please.


Abhilash Nair: Thank you. Thanks for those comments, very useful indeed. I just wanted to follow up on what Sonia said about corporate liability, or imposing obligations on the industry. We discussed this at a different panel yesterday — I wasn't on the panel, but I heard about it: to what extent should that liability extend, and who should be held accountable? Should it just be financial penalties, or should executives be sent to prison for gross negligence and other lack of action? I wondered if you had any thoughts on that, Sonia, because I don't think it is for want of obligations on websites or platforms or providers that things haven't worked so far. Financial penalties are sometimes too little: even if they sound like a lot of money to the average person in the street, for a large company, a tech company in particular, it's not a lot of money. Would introducing criminal sanctions for corporate executives make a difference? It's a thought rather than a question, really.


Maarten Botterman: Yes, thank you for that. This connects to several remarks made throughout the session: companies will behave when they know attention is being paid. I dare to hope, as an optimist, that some companies really care and do it from the outset. I know there are companies who do, and I think these will be the companies that succeed in the long run, not on short-run profit. Accountability is key, and what to be accountable for is the thing we need to be clearer on. We are not going to mandate from this little group what a parent may or may not say to their child, what an industry may or may not put on the market, or what a child may or may not do. But we can help by making clear what needs to be taken into account, and capacity development around the world is important in that. We discussed early on that if you protect children by not allowing them to use any of it, at some point they will be allowed to use it, and then they dive straight into the deep end. We see the same problem with internet access in Africa: the biggest challenge in Africa is to get online, but as soon as you are online you face both trouble and a lot of opportunities, so you need to be aware beforehand. The same is true for children. Capacity development is also needed for all stakeholders: legislators, administrations, companies. Another example: in Europe there is the Corporate Sustainability Reporting Directive, which makes companies aware of the ecological footprint of what they do and so moves them slowly towards more responsible behaviour. Something similar should be obvious here: there is legislation saying you cannot harm children, so let's make sure it is understood what that means in the context of the new digital world as well. Last but not least, the ability to act is something that needs to be brought in, together, in reasonable ways. From the IoT perspective this is also a very important part of how you deal with things, with users and so on. I really appreciate, after a couple of years, working together with Jutta and with all of you on something where this comes together, because in the end technology is also for the people, and to serve the people, as my colleague Jonathan and I tend to believe. So I'll go to Jutta, and then to Jonathan and Sonia for the last word. And Musa, okay, you're still with us. After my attempt to round off we will now open up again, and then Jutta will round off. Musa, I'm unmuting you, please go ahead. Yes, we're listening, and we can even hear you.


Musa Adam Turai: Okay, my question is... sorry, I can't hear you very well, that's the problem. My question is this: how can defenders of free expression in these regions address the tension between protecting cultural and religious values and upholding the universal right to free speech, particularly in deeply conservative societies? I'm listening.


Maarten Botterman: Yes, I'm listening. I'm trying to comprehend the question behind this remark. Jonathan, you got it? Yeah, please answer, and then continue with your final statement.


Jonathan Cave: Okay, yes. If I understand the question well, there is a tension between free speech rights, in particular the exercise of those rights by children, and the need to protect children, not only in their own rights of self-expression but from the harmful consequences of that self-expression, and of the expression of others, in societies where freedom of speech is heavily restricted, where you have freedom of speech but only in certain directions, and certainly under surveillance. Take the right to be forgotten, for example. When I was very young I said very many politically intemperate things. Later in my life I went through a period where I was very glad those things had been forgotten, and later still I came to a point where it was very important to my image of myself that those things be remembered. Fortunately the consequences for me were non-existent or minimal, but we have seen that the consequences can be very great. What that says to me is that when we talk about child safety and child protection, it is not just protection from the content they see online, but from the social, legal, political, even terrorist consequences of using those online platforms. There is an issue there where safety goes beyond safety within the online environment. So I take the point; it's a hard question and I don't have an answer for it, of course. What I wanted to say by way of rounding off was really just on this last point about how we make corporations, and actually governments, pay serious and sustained attention to these issues. I remember that in the antitrust environment, when the US passed the Sherman Antitrust Act, the liability on a company that broke competition law fell only on the company as an economic person. It was only with the Clayton Act, where personal liability was brought in, that the big trusts began to sit up, pay attention and change their behavior. So personal liability does make a difference.
In the Guardian today, there was a call for companies to be held responsible. This is the second aspect: not just for the harms they have done in the past, but also for producing improvements into the future. What we're seeing today with things like the Grenfell Inquiry or the Post Office Inquiry is that when something goes wrong, people are held to account. They're supposed to stand up and say, we're sorry, we've learned lessons, and so on, until the next thing comes along. This doesn't really help when the problems are systemic and cannot be remedied by somebody saying sorry, or by paying an amount of money to somebody for something. We need something that is more continuously engaging. And finally, it is commonly the case, as we saw with Paula Vennells in the Post Office Inquiry, that the people who are supposed to bear the responsibility evade it, or claim they didn't have the information. Now, in many criminal contexts, the concept of "what I knew when I took the action" is replaced by something which says: you were sitting in this position of responsibility, you had certain privileges, like a universal service obligation, and this is what you knew or what you should have known. If you are not aware of these things, that by itself is a black mark against you. It is only the fact that these things went wrong that caused the light of day to shine upon them. So I think we should take the issue more seriously. With politicians, this happens: they come into office, they say things about children and online risk, and the box has been ticked as far as the newspapers are concerned, but it doesn't become part of the culture. The safety of children doesn't become the kind of cultural value on which we act, one that actually changes what we do when we have new decisions to make. Okay, so that's my call to arms, and now I'll shut up.
Thank you.


Maarten Botterman: Thank you very much. As a certified board director myself, I must say I've seen that ongoing trend, and I know I'm personally liable for not doing the right things and not asking the right questions, within reason. If I exercise my fiduciary duties in the right way, then I can make mistakes too. But I fully appreciate your point, and that attention is a crucial point. It is also a call for boards to be aware of what they tell their CEO to do: make more money, or make sure that you do it in the right way. I see a lot of nodding heads here, and I even see Sonia's smile. So Sonia, to you, and then the last word to Jutta, please.


Sonia Livingstone: Brilliant, thank you very much. Lots of really great things said. I wanted to come back to the question of children and the way in which their rights can be heard and acknowledged. The word "user" is a really problematic word, and I think if we talk about users we can quickly forget that there are children. By and large, in relation to IoT and other innovations, children are not the customer. They don't pay. They don't always set up the profile, especially for IoT. They don't seek remedy unless we scaffold that. They don't bring lawsuits. They don't get to speak at the IGF. They are uniquely dispossessed in relation to these debates, and yet they are one in three of the world's internet users, and one in three of the world's population. If I continue my statistics: one in three of the users are children, one in three are women and one in three are men. We have to rethink who the user is and recognise their diversity. My last word might be to mention, as hasn't yet been mentioned, that in General Comment 25 the UN Committee on the Rights of the Child has set out how the Convention on the Rights of the Child applies to the digital environment, including to IoT, to the different technologies, and to a whole range of digital innovations. In so doing, it maps out and tries to look within the industry, and at all of those who provide the checks and balances around the industry, as well as speaking to the state. So when we want companies to be aware, or board members to instruct their CEO, or perhaps executives to get arrested when they land at Heathrow or wherever, I think we're trying to recognise that there are people within this sector, very many agents, who can be part of the process of making things better. I would include those in the engineering schools who are training tomorrow's engineers, and the data scientists who think they're just processing anonymised data and it has nothing to do with them, and the marketers who are creating a certain vision of the user and of how the technology might be used when they promote it, and so on. It is great that we've talked about procurement, which I think is really critical. And I would like the next session at the IGF on this topic, if I may be so bold, to include representation and the voices of children and young people in the room, and to begin with a more disaggregated vision, both of children and of the actors who are shaping this technology of the future. Thank you.


Maarten Botterman: Thank you, very beautifully said.


Jutta Croll: Thank you for giving me the last word. I don't think I need to do any more wrapping up, because everything has already been said. Just to reiterate what Sonia said at the beginning: children are being impacted by the Internet of Things even though they might not be users as we have understood users so far, and if developers keep that in mind, I do think that is very important. And to reflect on what Jonathan said about the politicians, I just need to mention that yesterday, for the first time ever in 19 years, children's rights were on the high-level agenda: there were only five high-level sessions set by the United Nations, and one of those five was dedicated to children's rights in the digital environment. So awareness is being raised. We have come a long way, and we have a long way to go, but these steps are milestones, I do think. People will remember that, and we will bring it forward to Norway next year. Thank you so much for being here, for listening and for taking part. Thank you.


Maarten Botterman: Yes, thank you very much. I just want to applaud this too. Thank you so much, everybody, for attending and for contributing in any way, shape or form. I really appreciated the session, not only as a DC-IoT person, but as a father and even a grandfather. So, see you around.



Jonathan Cave

Speech speed

148 words per minute

Speech length

3517 words

Speech time

1421 seconds

Data collection can be both beneficial and harmful to children

Explanation

Jonathan Cave points out that data collected from children can be a source of both safety and potential harm. The immediate and long-term effects of exposure to inappropriate content or manipulation through profiling are concerns.


Evidence

Example of privacy laws setting age limits for data collection and processing.


Major Discussion Point

Age-aware IoT and data governance


Static age limits may not be appropriate given evolving capacities of children

Explanation

Cave suggests that using chronological age as the sole basis for protecting children online may not be the most appropriate approach. Children’s digital maturity and experience with online environments should be considered.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Sonia Livingstone


Pratishtha Arora


Agreed on

Need for age-appropriate design in IoT and AI


Differed with

Sonia Livingstone


Differed on

Approach to age verification and assurance


AI can both facilitate and potentially distort children’s development

Explanation

Cave discusses how AI can aid in children’s education and development, but also potentially distort it. He points out that AI devices not only learn about the child but also ‘program’ the child in certain ways.


Evidence

Example of how AI answering questions directly may reduce the educational aspect of children searching for information themselves.


Major Discussion Point

Role of AI in age-aware IoT



Sonia Livingstone

Speech speed

145 words per minute

Speech length

2061 words

Speech time

847 seconds

Need to consider broader child rights beyond just privacy and safety

Explanation

Livingstone emphasizes that a child rights approach should consider more than just privacy and safety. She argues for a holistic approach that includes rights such as access to information, participation, and appropriate provision.


Evidence

Mentions the UN Convention on the Rights of the Child and the concept of best interests of the child.


Major Discussion Point

Age-aware IoT and data governance


Differed with

Jonathan Cave


Differed on

Approach to age verification and assurance


Importance of consulting children in design of technologies and policies

Explanation

Livingstone stresses the importance of involving children in the design and development of technologies and policies that affect them. This ensures that children’s perspectives and needs are taken into account.


Major Discussion Point

Age-aware IoT and data governance


Agreed with

Pratishtha Arora


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI


Need to place more responsibility on industry rather than users

Explanation

Livingstone argues that the burden of ensuring safety and appropriate use of technology should not primarily fall on users, especially children and parents. She emphasizes the need for industry to take more responsibility in this area.


Major Discussion Point

Corporate responsibility and regulation


Differed with

Sabrina Vorbau


Differed on

Focus of responsibility in ensuring child safety online


Children are often overlooked as stakeholders in tech development

Explanation

Livingstone points out that children are often not considered as primary stakeholders in technology development, despite being one-third of internet users globally. She argues for greater recognition of children’s diverse needs and experiences in tech development.


Evidence

Statistic that one in three of the world’s Internet users are children.


Major Discussion Point

Corporate responsibility and regulation



Maarten Botterman

Speech speed

129 words per minute

Speech length

3732 words

Speech time

1727 seconds

Labeling and certification can empower users to make informed choices

Explanation

Botterman argues that labeling and certification of IoT devices can help users understand what they are buying and using. This transparency enables users to make more informed decisions about the technology they adopt.


Evidence

Examples of past issues with IoT devices, such as default passwords and undisclosed data sharing.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Jutta Croll


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices



Jutta Croll

Speech speed

129 words per minute

Speech length

1120 words

Speech time

518 seconds

Public procurement can be used to drive adoption of standards

Explanation

Croll suggests that using public procurement as a tool can encourage the adoption of standards for IoT devices. By making certain standards a requirement for government purchases, it can stimulate the market for compliant products.


Evidence

Example of Section 508 in the US, which required accessibility features in products purchased by the government.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Abhilash Nair


Agreed on

Importance of labeling and certification for IoT devices



Abhilash Nair

Speech speed

143 words per minute

Speech length

1200 words

Speech time

502 seconds

Certification could help mitigate literacy issues for parents/caregivers

Explanation

Nair suggests that certification of IoT devices could help address literacy issues among parents and caregivers. This would make it easier for them to understand and manage the technology their children are using, regardless of their educational background.


Major Discussion Point

Labeling and certification of IoT devices


Agreed with

Maarten Botterman


Jutta Croll


Agreed on

Importance of labeling and certification for IoT devices



Pratishtha Arora

Speech speed

137 words per minute

Speech length

1038 words

Speech time

453 seconds

Need to consider impacts on children who may not be direct users of IoT devices

Explanation

Arora points out that IoT devices can impact children even when they are not the primary users. This includes situations where children are in environments with IoT devices, such as smart homes or connected cars.


Major Discussion Point

Role of AI in age-aware IoT


Importance of developing age-appropriate AI models and interfaces

Explanation

Arora emphasizes the need for AI models and interfaces that are appropriate for different age groups. This involves considering children’s varying levels of understanding and maturity when designing AI-powered IoT devices.


Major Discussion Point

Role of AI in age-aware IoT


Agreed with

Sonia Livingstone


Jonathan Cave


Agreed on

Need for age-appropriate design in IoT and AI



Sabrina Vorbau

Speech speed

137 words per minute

Speech length

1178 words

Speech time

513 seconds

Need to translate research into user-friendly guidance for parents/educators

Explanation

Vorbau stresses the importance of making research findings accessible to parents and educators. This involves creating user-friendly resources that help adults understand and navigate the complexities of children’s online experiences.


Evidence

Example of the Better Internet for Kids initiative developing toolkits and resources for educators and families.


Major Discussion Point

Capacity building and awareness


Differed with

Sonia Livingstone


Differed on

Focus of responsibility in ensuring child safety online


Importance of involving children/youth in developing resources

Explanation

Vorbau highlights the value of involving children and young people in the creation of resources about online safety and digital literacy. This ensures that the materials are relevant and understandable to their target audience.


Evidence

Mention of co-creation process with children and young people in developing resources for the Better Internet for Kids initiative.


Major Discussion Point

Capacity building and awareness



Helen Mason

Speech speed

175 words per minute

Speech length

375 words

Speech time

128 seconds

Civil society and frontline responders should be included in discussions

Explanation

Mason argues for the inclusion of civil society organizations and frontline responders in discussions about children’s online safety. These stakeholders have direct experience with children’s issues and can provide valuable insights.


Evidence

Example of Child Helpline International’s work in 132 countries providing support to children.


Major Discussion Point

Capacity building and awareness


Data from child helplines is a valuable resource on children’s experiences

Explanation

Mason points out that data collected by child helplines can provide unique insights into children’s online experiences and challenges. This information can be valuable for policymakers and researchers.


Evidence

Mention of increasing reports of online extortion and children not knowing where to report issues.


Major Discussion Point

Capacity building and awareness



Musa Adam Turai

Speech speed

84 words per minute

Speech length

60 words

Speech time

42 seconds

Tension between free speech rights and child protection in some societies

Explanation

Turai raises the issue of balancing free speech rights with child protection, particularly in conservative societies. This highlights the cultural and societal differences in approaching online safety for children.


Major Discussion Point

Corporate responsibility and regulation


Agreements

Agreement Points

Need for age-appropriate design in IoT and AI

speakers

Sonia Livingstone


Pratishtha Arora


Jonathan Cave


arguments

Importance of consulting children in design of technologies and policies


Importance of developing age-appropriate AI models and interfaces


Static age limits may not be appropriate given evolving capacities of children


summary

The speakers agree on the importance of considering children’s evolving capacities and involving them in the design process to ensure age-appropriate IoT and AI technologies.


Importance of labeling and certification for IoT devices

speakers

Maarten Botterman


Jutta Croll


Abhilash Nair


arguments

Labeling and certification can empower users to make informed choices


Public procurement can be used to drive adoption of standards


Certification could help mitigate literacy issues for parents/caregivers


summary

The speakers agree that labeling and certification of IoT devices can empower users, drive adoption of standards, and help address literacy issues for parents and caregivers.


Similar Viewpoints

Both speakers emphasize the need for a more nuanced approach to children’s rights and protection online, considering their evolving capacities rather than relying solely on static age limits.

speakers

Sonia Livingstone


Jonathan Cave


arguments

Need to consider broader child rights beyond just privacy and safety


Static age limits may not be appropriate given evolving capacities of children


Both speakers advocate for including diverse stakeholders, particularly children and those working directly with them, in discussions and decision-making processes related to online safety and technology design.

speakers

Sonia Livingstone


Helen Mason


arguments

Importance of consulting children in design of technologies and policies


Civil society and frontline responders should be included in discussions


Unexpected Consensus

Corporate responsibility in technology development

speakers

Sonia Livingstone


Jonathan Cave


Abhilash Nair


arguments

Need to place more responsibility on industry rather than users


AI can both facilitate and potentially distort children’s development


Certification could help mitigate literacy issues for parents/caregivers


explanation

Despite coming from different perspectives, these speakers unexpectedly converged on the idea that the tech industry should bear more responsibility for ensuring safe and appropriate technology for children, rather than placing the burden primarily on users or parents.


Overall Assessment

Summary

The main areas of agreement include the need for age-appropriate design in IoT and AI, the importance of labeling and certification for IoT devices, and the necessity of involving diverse stakeholders in discussions and decision-making processes.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This consensus suggests a growing recognition of the complexities surrounding children’s rights in the digital environment and the need for multi-stakeholder approaches to address these challenges. The implications of this consensus could lead to more collaborative efforts in developing age-aware IoT solutions and more comprehensive policies that consider children’s evolving capacities and rights.


Differences

Different Viewpoints

Approach to age verification and assurance

speakers

Jonathan Cave


Sonia Livingstone


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


summary

While Cave emphasizes the limitations of static age limits, Livingstone advocates for a more holistic approach considering various child rights beyond age verification.


Focus of responsibility in ensuring child safety online

speakers

Sonia Livingstone


Sabrina Vorbau


arguments

Need to place more responsibility on industry rather than users


Need to translate research into user-friendly guidance for parents/educators


summary

Livingstone emphasizes industry responsibility, while Vorbau focuses on empowering parents and educators with user-friendly guidance.


Unexpected Differences

Role of AI in children’s development

speakers

Jonathan Cave


Pratishtha Arora


arguments

AI can both facilitate and potentially distort children’s development


Importance of developing age-appropriate AI models and interfaces


explanation

While both speakers discuss AI’s impact on children, Cave unexpectedly highlights potential negative effects on development, whereas Arora focuses more on the need for age-appropriate design without explicitly addressing potential distortions.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to age verification, the distribution of responsibility between industry and users, and the role of AI in children’s online experiences.


difference_level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of protecting children online, speakers differ in their proposed approaches and emphasis. These differences reflect the complexity of the issue and suggest that a multifaceted approach, incorporating various perspectives, may be necessary to effectively address age-aware IoT and child protection online.


Partial Agreements

All speakers agree on the need for more nuanced approaches to child protection online, but differ in their proposed solutions: Cave suggests moving away from static age limits, Livingstone advocates for a broader rights-based approach, and Botterman proposes labeling and certification as tools for informed decision-making.

speakers

Jonathan Cave


Sonia Livingstone


Maarten Botterman


arguments

Static age limits may not be appropriate given evolving capacities of children


Need to consider broader child rights beyond just privacy and safety


Labeling and certification can empower users to make informed choices



Takeaways

Key Takeaways

Age-aware IoT needs to consider children’s evolving capacities rather than using static age limits


Labeling and certification of IoT devices can empower users to make informed choices


AI in IoT can both facilitate and potentially distort children’s development


Capacity building and awareness efforts should involve children/youth and translate research into user-friendly guidance


There is a need to place more responsibility on industry rather than users for child safety in IoT


Children are often overlooked as stakeholders in tech development despite being 1 in 3 internet users


Resolutions and Action Items

Involve children and young people in future IGF sessions on this topic


Develop more user-friendly guidance on age assurance for parents and educators


Consider using public procurement to drive adoption of child safety standards in IoT


Unresolved Issues

How to balance free speech rights with child protection, especially in conservative societies


Extent of corporate liability and accountability for child safety issues in IoT


How to effectively implement age assurance across different cultural contexts


How to ensure IoT benefits reach children who are not yet online


Suggested Compromises

Use age brackets rather than hard age limits to allow for flexibility in maturity levels


Develop ‘white-labeled’ age verification tools that can interface with different systems


Balance precautionary principle with allowing children to learn to navigate online risks


Thought Provoking Comments

We need to stay aware that the static perspective of protecting people on the basis of age may not be the most appropriate, and we need to stay aware of that.

speaker

Jonathan Cave


reason

This challenges the conventional approach of using chronological age as the sole basis for online protection measures.


impact

It shifted the discussion towards considering more nuanced, evolving approaches to protecting children online based on their digital maturity rather than just age.


A child rights landscape always seeks to be holistic. So, privacy is key, as has already been said. Safety is key, as has already been said. But the concern about some of the age assurance solutions is that, as Jonathan just said, they introduce age limits. And so, there are also costs, potentially, to children’s rights, as well as benefits.

speaker

Sonia Livingstone


reason

This comment broadens the perspective beyond just safety and privacy to consider the full spectrum of children’s rights.


impact

It prompted a more comprehensive discussion of the trade-offs involved in age assurance technologies and their potential impacts on children’s rights and development.


Denying access will always encourage teens to look at work around, to not engage in dangerous behavior because they have no guidance. Why do we not put more emphasis on media information literacy, so that users understand how to protect themselves?

speaker

Doherty Gordon


reason

This comment challenges the effectiveness of access restriction approaches and suggests an alternative focus on education.


impact

It sparked discussion about the importance of digital literacy and education as complementary or alternative approaches to technical solutions for online safety.


The idea of using public procurement as a tool, as a sort of complement either to self-regulation or to formal regulation is, I think, one that’s worked in a number of areas.

speaker

Jonathan Cave


reason

This introduces a novel policy approach to incentivizing industry compliance with safety standards.


impact

It shifted the conversation towards considering economic incentives and government purchasing power as tools for promoting child-safe technologies.


Users, the word user is a really problematic word. And I think if we talk about users, we can quickly forget there are children. By and large, in relation to IoT and other innovations, children are not the customer. They don’t pay. They don’t always set up the profile, especially for IoT. They don’t seek remedy unless we scaffold that. They don’t bring lawsuits. They don’t get to speak at the IGF.

speaker

Sonia Livingstone


reason

This comment highlights how children are often overlooked in discussions about technology users and policy.


impact

It prompted reflection on the need to explicitly consider children’s perspectives and interests in technology development and policy discussions.


Overall Assessment

These key comments shaped the discussion by broadening its scope beyond simple age-based protections to consider more holistic approaches to children’s rights in the digital world. They challenged participants to think about the complexities of balancing protection with other rights, the role of education and literacy, economic incentives for industry compliance, and the importance of explicitly considering children’s perspectives in technology development and policy. The discussion evolved from focusing on technical solutions to exploring a multi-faceted approach involving education, policy, industry incentives, and children’s participation.


Follow-up Questions

How can we ensure proper media literacy education to help users understand how to protect themselves online?

speaker

Doherty Gordon (audience member)


explanation

This is important to empower users, especially children and teens, to navigate online risks safely rather than relying solely on access restrictions.


How can we develop age assurance technologies that themselves respect children’s rights?

speaker

Sonia Livingstone


explanation

This is crucial to ensure that solutions meant to protect children’s rights online do not inadvertently violate those rights in the process.


How can we develop more flexible approaches to age verification that account for children’s evolving capacities rather than relying on strict age limits?

speaker

Sonia Livingstone and Jonathan Cave


explanation

This is important to create more nuanced and effective protections that align with children’s actual developmental stages rather than arbitrary age cutoffs.


How can we ensure that age assurance and online safety measures account for children who may not be the primary user or customer of a service but are still impacted by it?

speaker

Sonia Livingstone


explanation

This is crucial to protect children who may be indirectly affected by IoT and other technologies, even if they are not the intended users.


How can we better incorporate the perspectives and experiences of children and young people into discussions and policymaking around online safety and IoT?

speaker

Sonia Livingstone


explanation

This is important to ensure that policies and technologies are truly responsive to children’s needs and experiences.


How can we address the tension between protecting children online and upholding rights to free expression, particularly in conservative societies?

speaker

Musa Adam Turai (audience member)


explanation

This is important to balance child protection with other fundamental rights across different cultural contexts.


How can we create more effective corporate accountability measures for online child safety that go beyond financial penalties?

speaker

Abhilash Nair and Jonathan Cave


explanation

This is crucial to ensure that companies take their responsibilities towards child safety seriously and make it a core part of their operations.


How can we better integrate civil society organizations and frontline responders into discussions and policymaking around online child safety?

speaker

Helen Mason (Child Helpline International)


explanation

This is important to ensure that policies and technologies are informed by real-world experiences and data from those directly working with affected children.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.