WS #184 AI in Warfare – Role of AI in upholding International Law

Session at a Glance

Summary

This discussion focused on the role of AI in warfare and its implications for international law and ethics. Experts from various fields explored the challenges and responsibilities associated with AI in military applications.

The speakers emphasized the importance of compliance with international humanitarian law and human rights law in the development and use of AI in warfare. They highlighted the need for a comprehensive global governance framework for AI that addresses both civilian and military applications due to the dual-use nature of the technology.

Key issues discussed included the principles of distinction, proportionality, and necessity in warfare, and how AI systems might struggle to adhere to these principles. The question of liability and accountability for AI actions in conflict situations was raised, with concerns about who bears responsibility when AI systems make mistakes or cause harm.

Ethical considerations such as data bias, privacy concerns, and the need for human oversight in AI decision-making were explored. The speakers stressed the importance of incorporating international law considerations from the early stages of AI development, promoting a “compliance by design” approach.

The discussion also touched on the need for multi-stakeholder engagement, including input from industry, civil society, and academia, in shaping AI governance in the military domain. The speakers called for increased awareness of the current use of AI in conflict situations and the urgent need for effective regulation and oversight.

Overall, the discussion underscored the complex challenges of balancing technological advancement with ethical and legal considerations in the use of AI in warfare, emphasizing the critical importance of maintaining human control and accountability in life-and-death decisions.

Keypoints

Major discussion points:

– The role of AI in warfare and its implications for international law

– Challenges in ensuring AI systems comply with principles of international humanitarian law

– Issues of accountability and liability for AI-enabled weapons systems

– The need for human oversight and control in AI-powered military applications

– Ethical considerations and potential biases in AI systems used in conflict

Overall purpose/goal:

The purpose of this discussion was to explore the complex issues surrounding the use of AI in warfare and military applications, with a focus on how to ensure compliance with international law and ethical principles. The speakers aimed to raise awareness of current challenges and discuss potential governance frameworks and solutions.

Tone:

The tone was primarily serious and academic, reflecting the gravity of the topic. Speakers approached the issues analytically, drawing on their expertise in law, ethics, and technology. There was an underlying sense of urgency about addressing these challenges, but the tone remained measured and constructive throughout. Towards the end, there were some more optimistic notes about the potential for responsible development and use of AI in this domain.

Speakers

– Bea Guevarra: Moderator/Organizer, Netmission.Asia

– Qurra Tul Ain Nisar (Annie): Online moderator, senior year law student, governance and policy analyst, Netmission.Asia

– Yasmin Afina: Representative from United Nations Institute for Disarmament Research (UNIDIR)

– Jimena Sofia Viveros Alvarez: Commissioner at Global Commission on the Responsible Use of AI in the Military Domain

– Anoosha Shaigan: Technology lawyer, human rights expert

– Mohamed Sheikh-Ali: Representative from International Committee of the Red Cross (ICRC)

Additional speaker:

– Abeer Nisar: Civil Society, Asia-Pacific Group

Full session report

The Role of AI in Warfare: Legal, Ethical, and Governance Challenges

This discussion brought together experts from various fields to explore the complex issues surrounding the use of artificial intelligence (AI) in warfare and its implications for international law and ethics. The speakers, including representatives from the United Nations Institute for Disarmament Research (UNIDIR), the Global Commission on the Responsible Use of AI in the Military Domain, and the International Committee of the Red Cross (ICRC), addressed the challenges and responsibilities associated with AI in military applications.

International Law and AI Governance

Yasmin Afina from UNIDIR emphasized that international law should be a core component of AI governance in the military domain. She introduced UNIDIR’s RAISE program (Roundtable for AI, Security and Ethics) and mentioned an upcoming global conference on AI security and ethics. Afina stressed the importance of translating legal requirements into technical specifications for AI systems and advocated for a “compliance by design” approach.

Jimena Sofia Viveros Alvarez, Commissioner at the Global Commission on the Responsible Use of AI in the Military Domain, argued for a broader, coherent global AI governance framework addressing both civilian and military applications. She highlighted the transfer of discussions from Group of Governmental Experts (GGEs) to the UN General Assembly and called for binding treaties aligned with international law to govern AI use in warfare.

Anoosha Shaigan, a technology lawyer with a background in human rights law, discussed specific legal issues such as liability, command responsibility, and developer liability in the context of AI in warfare. She emphasized the importance of international humanitarian law principles like distinction, proportionality, and necessity. Shaigan also mentioned the Outer Space Treaty in relation to AI-guided satellites and suggested developing an international military AI tribunal.

Ethical Considerations and Challenges

The discussion delved into several ethical challenges posed by AI in warfare. Anoosha Shaigan raised concerns about data bias and model drift in AI systems, using the example of potentially discriminatory targeting based on appearance. She also addressed the challenges posed by generative AI, deep fakes, and disinformation in military contexts.

Privacy concerns in conflict zones were addressed, with speakers noting the challenge of balancing military needs with civilian privacy rights when deploying AI technologies. The concept of explainable AI for autonomous weapons systems was introduced, emphasizing the importance of human understanding and oversight of AI decision-making processes in warfare.

Accountability and Human Control

A significant point of agreement among the speakers was the necessity of maintaining human control and accountability in AI-powered warfare systems. Mohamed Sheikh-Ali from the ICRC stressed that human oversight and control are essential for weapons systems, particularly for life-and-death decisions. This view was strongly supported by other speakers, who emphasized the need for human responsibility and accountability in the use of AI in military contexts.

The discussion touched on the complex issue of liability for AI actions in warfare. Anoosha Shaigan highlighted the need to clarify who bears responsibility when AI systems make mistakes or cause harm, whether it be the operator, commander, developer, or the state itself.

Multi-stakeholder Engagement and Corporate Responsibility

Yasmin Afina introduced the importance of multi-stakeholder engagement in shaping AI governance in the military domain. This approach calls for input from industry, civil society, and academia, in addition to government actors.

The role of private sector companies developing AI technologies for military use was emphasized by both Anoosha Shaigan and Mohamed Sheikh-Ali. They agreed on the need to engage tech companies from the design stage and ensure corporate accountability for military AI suppliers. Sheikh-Ali specifically mentioned the ICRC’s engagement with technology companies in Silicon Valley and China.

Future Developments and Recommendations

Looking towards the future, the speakers offered several recommendations:

1. Develop binding treaties aligned with international law to govern AI use in warfare (Jimena Sofia Viveros Alvarez)

2. Create specific standards for military AI that incorporate legal and ethical considerations (Anoosha Shaigan)

3. Engage technology companies from the early stages of AI development for military applications (Mohamed Sheikh-Ali)

4. Implement a “compliance by design” approach, incorporating international law considerations from the outset of AI system development (Yasmin Afina)

5. Establish an international military AI tribunal to address legal issues arising from AI use in warfare (Anoosha Shaigan)

Conclusion

The discussion underscored the complex challenges of balancing technological advancement with ethical and legal considerations in the use of AI in warfare. While there was a high level of consensus on core principles, such as the importance of international law and human control, the speakers differed in their specific approaches and areas of emphasis. This reflects the multifaceted nature of the issue and highlights the need for continued dialogue and collaboration among various stakeholders to develop comprehensive and effective governance frameworks for AI in warfare.

The urgency of addressing these challenges was evident throughout the discussion, as speakers called for increased awareness of the current use of AI in conflict situations and the pressing need for effective regulation and oversight. As AI technologies continue to advance, the international community faces the critical task of ensuring that their use in warfare remains within the bounds of law, ethics, and human control.

Session Transcript

Bea Guevarra: First of all, thank you for joining the session, AI in Warfare, Role of AI in Upholding International Law, workshop number 184. And this session is, you know, about exploring the sensitive nature of AI in the warfare domain in a format that fosters an open, frank discussion, as a 60-minute roundtable discussion, but let’s see how we can manage the time anyway. So I will not consume so much time on that. So we are now having online moderator, Annie, and also one of the organizers, Abeer, with us today. So I will pass the floor to Annie for introducing the organizers. Just a quick introduction. Welcome, everyone. We can’t hear you, Annie. Is there any technical issue? We can’t hear you. You can hear me? Yeah, I can hear you right now.

Qurra Tul Ain Nisar: Amazing. Then. Yeah, so I was saying that it’s an immense pleasure to have you all with us today. And I am Qurra Tul Ain Nisar, you can call me Annie. And I am a senior year law student, as well as a governance and policy analyst. I have with me the other organizers, Bea and Abeer Nisar. So, firstly, I want to, you know, quickly thank them all for their constant support and, you know, immense help in finding all the speakers and the experts on this topic. I’m, like, very grateful for your insight and to have you on board. We all are aware that AI has reshaped how we are living in this world, and warfare is no separate aspect from that. So, I would love if we can quickly start the session because we have already, apologies for that, taken, you know, a lot of time. So, back to you, Bea, for, you know, introducing our speakers.

Bea Guevarra: Thank you, Annie. In this session, we are going to have three speakers: Ms. Anoosha from civil society, Yasmin Afina from the intergovernmental organization, and also Ms. Jimena Sofia, who is joining on site here. So, as an opening of this session, I would like to ask the speakers about their own take on the topic of AI in warfare. So, I will ask Yasmin: how do you see the future of AI and warfare?

Yasmin Afina: Yeah, perfect. Hi, thank you, everyone. It’s nice to meet you. My name is Yasmin Afina from the United Nations Institute for Disarmament Research, or UNIDIR. Thank you so much for the organizers of this panel, for inviting me today, and I’m so sorry for not joining you in Riyadh in person due to personal circumstances. I could not travel in time for the workshop, so I know that you wanted me to speak a little bit about the future of AI in warfare, but if you would allow me, I might just share a few slides, if I may. Is it correct? Is it okay? So, let me just… Yeah, I hope that you can see my screen. Perfect. So I know that you wanted me to speak about the role of AI in warfare and its role in upholding international law, specifically from a responsible AI perspective. But please allow me to twist the framing a little bit and instead look at international law as a key and central facet of responsible AI in the military domain. So in the first half of 2024, UNIDIR took part in regional consultations with states and experts in Asia Pacific, the Middle East and Central Asia, Africa, Europe, Latin America, and the Caribbean. And based on these consultations, we have identified and established a number of facets of responsible AI in the military domain based on what states have shared during these consultations. And one of them relates to compliance with national and international law, as you can see in the top right of the diagram. And in fact, the overwhelming majority of states across regions place compliance with international law as a central component of their governance approaches to AI in the military domain and wider security domains. And there is this shared sentiment that international law is an important framework that must be upheld throughout the lifecycle of AI technologies meant for deployment and use in defense and security and thus including in the context of warfare. 
So international law considerations must be considered from the earliest stages, from the design, development, testing, and evaluation, which would require efforts to translate international law obligations into technical requirements in order to frame and shape the pre-deployment stages of these technologies in such a way that they will somewhat be compliant by design. And I’ll get back to that later in my concluding remarks. And so in addition, international law, and in particular international humanitarian law and international human rights law, must inform or even shape and frame procurement processes, as many states are increasingly considering purchasing AI-enabled capabilities; this concerns not only states that are developing AI, but also those that are purchasing. And so from a policy standpoint, however, it’s also important to note that while there is this overall shared sentiment that international law is important, it does not mean that states approach it in a uniform way. And there are nuances across regions in states’ approach to AI in the military domain and the applicability of international law. So for example, states in Latin America and the Caribbean generally dedicate more attention and efforts to foster compliance with and uphold international human rights law. And this approach is somewhat reflective of the regional security landscape, where transnational efforts at combating organized crime prevail, and in the light of international human rights law’s applicability both in and outside of conflict. And while of course states in all the regions acknowledge the importance of international human rights law, international humanitarian law tends to be overwhelmingly dominating the policies and discourse of states in other regions. Although our findings were also such that the African region would also dedicate more attention to international human rights law, particularly within the framework of the African Charter on Human and Peoples’ Rights.
And there’s more of these findings in the report that I launched back in September, which I invite you to download and read from UNIDIR’s website by following the QR code on the slide or by going to unidir.org slash Kaleidoscope AI. So now that we’ve established that states around the world see international law and compliance as an important component of responsible AI in the military domain, I wanted to add another layer to our discussions: the role of the multi-stakeholder community. And in fact, in the report that I previously mentioned, one of the other key areas of nuanced convergence that we have identified is the importance of multi-stakeholder engagement. And states, in fact, generally recognize the value of multi-stakeholder and cross-sectoral engagement to promote responsible AI in the military domain, but states generally disagree on how such engagement should be conducted. And so UNIDIR, in our capacity as an independent research institute within the UN ecosystem, and with a mandate of informing member states, launched earlier this year in March a program of work called the Roundtable for AI, Security and Ethics, or RAISE, in partnership with Microsoft. And we’ve been engaging very closely with a group of industry representatives, including big tech and startups, and consultancy organizations, civil society, and academics. And we basically ask them, what are the main themes that should be prioritized in the context of AI governance in security and defense? And as a small parenthesis, I just wanted to note that as part of the RAISE program of work, UNIDIR will be holding the inaugural global conference on AI security and ethics on the 27th and 28th of March in Geneva. It’s open to all. We’ll soon be issuing a call for abstracts for you to present your insights to the international diplomatic community in Geneva in the UN. So please do mark your calendars and let me know if you’d like to be kept in the loop.
So coming back to RAISE, the group has identified six themes that must be prioritized for the governance of AI in security and defense. And across all of the six themes, international law came across as a recurrent pattern. So for example, the second priority theme was trust building. And one of the key recommendations put forward was that in order to enable this trust building, there’s a need to clarify the interpretation of applicable laws. And so for this group, states should develop clear national positions on how to interpret and apply international law in the context of AI applications in the military domain, thus ultimately contributing to building this trust between states. And another example is the third priority theme, which pertains to unpacking the human element in the development, testing, deployment, and use of AI systems in the military domain. And so clarifying how international law applies can help clarify what level of human element is required at each stage of the lifecycle of AI technologies in the military domain, and on what basis in international law. So again, all of this can be found in the report that we’ve published on UNIDIR’s website at unidir.org slash governance AI, or you can do so by scanning the QR code on the slide. And so finally, to conclude, I wanted to circle back to something that I mentioned earlier on how all of these initiatives basically can contribute to efforts towards compliance by design in the development, testing and evaluation of AI technologies in the military domain, while also acknowledging, addressing and mitigating some of the risks that these technologies can present with regards to international law. So anecdotally, last month I submitted my PhD manuscript specifically looking at how international humanitarian law considerations should frame the development, testing and evaluation of AI technologies for military targeting.
That is, anything that is going on before the deployment in the battlefield. And the thesis has been drafted with the assumption and acceptance, on my personal side, not UNIDIR’s, that AI in the military domain is happening already. And without, of course, prejudice to possible instruments in the future that may prohibit and outlaw some applications. But at this stage, it’s important to dedicate efforts and research towards ensuring that whatever technology will come out of the lab for warfare, it has been developed with compliance in mind instead of as an afterthought. So earlier I mentioned the need to translate legal requirements into technical requirements. One example that I looked at in my thesis is the use of proxy data for the training and testing of AI technologies. And I argued that while proxy data can to a certain extent be necessary by virtue of the rule of precautions, due to the messy and uncertain nature of warfare, it cannot be separated from direct indicators that instead should be seen as a natural part of the ecosystem of intelligence needed for military decision making. So all of this is to say that with the right efforts, dedicated resources, and political will, compliance with international law should in principle be at the heart of the development of AI technologies for the military domain. And this is not about coding international law into algorithms, but rather about identifying and prioritizing practical measures for the implementation of international law, and ensuring that the deployment and use of AI in warfare upholds international law from the outset and does not jeopardize it by remaining an afterthought. Because at some point you just lose the right to say oops. And on that note, thank you very much. And I look forward to our Q and A.

Bea Guevarra: Thank you, Yasmin. And I would like to ask another speaker, who is joining on site: what do you think of international law and the future of AI in warfare? Could you also please give an insight based on your experience as well? Hello, thank you. Can you all hear me?

Jimena Sofia Viveros Alvarez: Perfect. Well, first of all, thank you to the organizers for inviting me. I think I don’t like to just circumscribe this conversation to warfare, because these technologies are being used to attack civilians also during peacetime. So I like to call it the peace and security spectrum of things. And also because they’re not only used by military actors, but also civilian actors, both state and non-state. So state actors that are civilian law enforcement or border controls. Whereas with non-state actors, it depends on the context. As Yasmin very well pointed out, for example, in Latin America, where I’m from, organized crime is a big threat. In other regions, it’s terrorism. In other regions, it’s mercenaries. So all of these actors are using the same technology. So it’s important to acknowledge the different implications and the different treatment under international law of each one of these. Because when we’re talking about AI in the peace and security domains, we are talking about many different sets of rules, right? So we have obviously IHL, international humanitarian law. We have international human rights law, which applies to both wartime and peacetime, and also to civilian actors, and it also kind of involves state responsibility. So it also comes to public international law, which deals with this, but which also deals with jus ad bellum. So that’s the use of force, the right to use of force or self-defense type of considerations that can stem out of the use of these technologies. We also have international criminal law. We also have national and regional regulations and laws around different types of liability modes and compliance and procurement, and all the different mechanisms that apply to the entire life chain of these technologies. So it is quite a broad spectrum to talk about the future of international law, because we’re also seeing it in the present.
It’s not just a future situation, especially right now when we’re living in a world where international law is blatantly violated, and with complete impunity, unfortunately. So we’re living in a voluntarist world where compliance seems to be optional, and that’s really not how it should be, because we’re seeing very dire consequences for civilians in different types of contexts around the world. So what we need to do, everyone, is advocate, promote, and foster a coherent AI global governance framework. And I’m saying AI in general, because by its dual-use nature we cannot really divide by civilian, by military, precisely because of the distinction that I made at the beginning: the convergence of actors, the convergence of moments of use and types of use, and so on. So we all really need to strive for this global AI governance framework to materialize, to be binding, and to have the correct mechanisms for implementation, because that will be crucial. And this obviously requires enforcement mechanisms, which, you know, is going to be even harder, but we need to be ambitious, because this is a very ambitious goal: to preserve international peace and security at this time. So what we have in the current governance landscape, in this particular domain: we obviously have the GGEs, the Group of Governmental Experts in Geneva, under the Convention on Certain Conventional Weapons, which I think is a little bit ironic, because these are the least conventional weapons, autonomous weapons. We also have REAIM and the Global Commission on the Responsible Use of AI in the Military Domain, where I’m a commissioner. We also have RAISE, as Yasmin mentioned, and we’re now seeing the development of the transfer from the GGEs to the General Assembly, with resolutions that are coming out by the initiative of different states, for example the Netherlands and Korea, Austria, you know, which are leading this conversation, amongst others of course. So these are very welcome steps that we are building towards, but we still need to create a lot more awareness about the fact that these are situations that are going on right now; they’re not future, eventual possibilities. And we also need to be very mindful, because there is a tendency to try to separate out alleged pros of these technologies, like, okay, well, they will be more precise, they will be more accurate, there will be less bias, but, you know, we know. That’s why it needs to be all comprehensive within the same global governance framework for AI, because we all know the problems with AI itself, right? So the bias, the brittleness, the hallucinations, the misalignment, etc.
So these are very welcome steps that we are building towards, but we still need to create a lot more awareness about the fact that these are situations that are going on right now, they’re not future, eventual possibilities. And we also need to be very mindful, because there is a tendency to try to separate either alleged pros of these technologies, like okay, well there will be more precise, there will be more accurate, there will be less bias, but you know, we know. That’s why it needs to be all comprehensive within the same global governance framework for AI, because we all know the problems with AI itself, right? So the bias, the brittleness, the hallucinations, the misalignment, etc. et cetera. So those two cannot be dissociated when we’re looking at what the actual consequences and effects of the use of these technologies in this space will be. And also the differentiation between offense and defense capabilities, it’s completely illusory because the same technology is just interchangeably used. So any type of defense is an office in itself. So that’s something we should be mindful when we’re having this conversation. And I will leave it there for now. And again, also looking forward for the Q&A. Thank you.

Bea Guevarra: Thank you, Ms. Jimena Sofia. It is very insightful to understand the dynamics of the emerging technology and the challenges that we are facing. So I also would like to ask Anoosha, who is joining here online: from the civil society perspective, how do you see AI in warfare, and how could we facilitate that collaboration and make sure there is responsibility in the conversation among the community? So Anoosha, are you going to share the screen? Is there any PowerPoint? No? Yeah, go ahead, please. Ms. Anoosha, can you share your screen and have the slides on the screen, if that’s possible? Sure, sure. Thank you, thank you. Is that, does that work? Yes, okay.

Anoosha Shaigan: So thank you everyone for organizing this and thank you for having me; it’s such a privilege to be here and talk about such an important issue. I’m quickly going to go over some of the points, and the slides are just bullet points of some of the issues that I would like to touch upon, so you can follow along. So thank you to the speakers for setting the stage for international collaboration; that is, of course, you know, the first and foremost thing that we need to do. But let’s also look at some of the very specific issues. So I am a technology lawyer by profession. I started my career in human rights, in international human rights law, working on treaties, and I was part of the team responsible for bringing the first seven core human rights treaties to Pakistan. So we got the government to sign these treaties and then we started working on them. So my association goes a long way back, to when the SDGs were called MDGs. So, you know, we’ve come a long way since then, and I’d like to touch upon some very specific legal issues. The aim is not to give you more anxiety about these issues but, you know, maybe help you form an opinion, because, you know, as civil society experts, as lawyers, as development professionals, your opinion matters as well. This is a very new area, and as we go into the future, digital technologies become more and more decentralised, which means that governments have to rely on civil society, academia, the development sector, and the private sector, and not just technology companies, to be able to start forming these principles and guidelines moving forward. So let me just. I’m going to touch upon AI and international humanitarian law, some of the very specific issues, and then I’ll go into some of the ethical considerations as well.
So when we talk about the key principles of international humanitarian law, they can be found in UN principles, the Geneva Conventions, the ICRC’s handbook. If you’re a person of faith, they could be part of your religion as well. And they just make common sense, right? So there are the principles of distinction, proportionality, and necessity. I’m going to talk about proportionality and necessity first, since we might be more familiar with those. So specifically talking about Gaza: are the military responses towards the civilian population proportional? Are they excessive? These are some of the things that we’ve already been talking about for the past year, so you might be more familiar with this. Do you think autonomous AI systems, or weapons, or autonomous drones, or any other kind of robots, do you think they would be able to make these kinds of proportional responses? That is something to consider. As we have seen in the past one year, they’ve not been doing that. Then there’s the principle of necessity. It obviously concerns whether a military response is necessary. When it comes to AI, there have been calls to simulate certain situations first and then see and verify whether, you know, they warrant an actual military response. So these are, like, some of the principles that you might be familiar with. As far as distinction is concerned, you know, there are laws available as well at the international level, perhaps not at the state or domestic level, where states are supposed to distinguish between civilian and military targets, or civilian or military figures or entities, and that has somehow, you know, translated into applying to AI targets as well. But do you think AI will be able to make that distinction? Let’s hold that thought, and we’ll come back to it when we discuss ethical issues. So liability is of course a very important issue that we’ve seen with autonomous weapons.
If an AI shoots, you know, somebody down, a target which was perhaps civilian, perhaps a hospital or a school, who is going to bear responsibility for that? Will that be the person who was operating the AI? Will that be the AI itself? Will that be the commander of the person or the agent representing, you know, a certain team, or will that be an entire state? So command responsibility, you know, there are rules around that, but they have to be applied in the context of AI. State responsibility, of course, means that the state can be held responsible for the actions of its agents; I believe the principles are laid down under ARSIWA. Then there’s developer liability: somebody who developed an AI system that, you know, did not work, and now it’s being reviewed. It will go back to whether they followed all the protocols, whether they followed government guidelines or international humanitarian guidelines, whether they tested these systems, and whether they removed glitches and made sure that all the laws were followed. And this will be familiar for those who know how legal compliance works in highly regulated industries like nuclear power plants, or especially those working in the energy sector or in climate. Trainers can also be held responsible if the training was inadequate. If you didn’t document things properly, or if you did not impart adequate training, your trainers could be held responsible as well. If somebody did not know how to use an AI system, their trainers could be held responsible as well. By responsible and accountable and liable, we also mean that this would include monetary compensation towards the victims and their families. Now, there have been calls for developing an international military AI tribunal in particular. Coming from Pakistan, we do not believe in military trials in principle, but just to let you know that this is a form of accountability. But do we need additional forums when we already have international courts and when we have these other international tribunals? How would they impact states individually? Would states have to sign treaties? Would they have to incorporate them into their domestic laws? These are some of the considerations. Then, of course, there is an area that I specialised in, so I really want to touch upon that as well. Of course, there are laws around the Outer Space Treaty as well, which mandate peaceful use of space and sharing resources
But do we need additional forums when we already have international courts and these other international tribunals? How would they impact states individually? Would states have to sign treaties? Would they have to incorporate them into their domestic laws? These are some of the considerations. Then, of course, there is an area I specialised in, so I really want to touch upon that as well. There are laws around the Outer Space Treaty, which mandate peaceful use of space, sharing resources, and keeping things clean and debris-free. But then there are, of course, issues with AI-guided satellites. If there are issues, who is going to resolve them? Is it going to be the International Space Station? Is it going to be the United Nations? Is it going to be the state being affected, or the state that actually launched that AI satellite? And how do their actions work? So oversight might be a questionable issue here. Then let's quickly touch upon ethical frameworks. Data bias and model drift are the main concerns with AI models. Data bias arises if you train your AI with biased data: for example, with skewed data or discriminatory patterns like "kill all the dark-looking people", "kill all the brown people", or "kill anybody who doesn't look white or Caucasian". If AI picks up on these stereotypes, the actions that autonomous or AI-based weapons take can be very indiscriminate, especially during military action. So the data sets need to be checked for bias and audited, and there are algorithmic checks that can fix those as well. But constant and regular oversight is very necessary. Then, of course, there's the issue of model drift: when you overstuff and overfit your AI with so much data that it starts behaving unpredictably.
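The dataset-auditing step described here can be made concrete. The sketch below is illustrative only, not any mandated auditing standard; the record fields `group` and `flagged` are assumptions. It computes per-group positive-outcome rates in a labelled dataset and the ratio of the lowest to the highest rate, one of the simplest checks an auditor might run for skewed labels:

```python
from collections import defaultdict

def disparate_impact(records, group_key, outcome_key):
    """Per-group positive-outcome rates plus the ratio of the lowest
    to the highest rate. A ratio far below 1.0 suggests the labels
    are skewed against some group and the data needs review."""
    totals = defaultdict(int)
    positives = defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        if record[outcome_key]:
            positives[record[group_key]] += 1
    rates = {g: positives[g] / totals[g] for g in totals}
    lo, hi = min(rates.values()), max(rates.values())
    return rates, (lo / hi if hi else 1.0)
```

An audit process along these lines could rerun the check after every retraining and block deployment whenever the ratio crosses an agreed threshold.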
So when people say it's like a child or a person, and you keep feeding it information and training it, and one day it will start making better and wiser decisions: personally, I don't think that's quite accurate, because at the end of the day it's still a machine, something technical or technological. If you look at the language of some AI systems, the coding language, there are zeros and ones, which means they're very black and white, very exact and very specific. So overstuffing it with data can actually lead to unpredictable outcomes, where it becomes so confused that you don't know how it's going to act. And then who takes responsibility? That's the issue we've been discussing, so audits and monitoring are important. Then, of course, there's the issue of privacy, especially in conflict zones. Surveillance of the civilian population is an issue. Is facial recognition software allowed during a conflict, especially when it is used to target civilians or pinpoint somebody's exact identity? We've seen in the Gaza conflict that very specific individuals, such as doctors and journalists, have been targeted, so privacy becomes very crucial in such scenarios. We need a solution for this, perhaps along the lines of the GDPR in Europe. But do we need another international regulation, or can we come up with a general framework or specific standards that all countries that are part of the United Nations must follow, without having to sign additional treaties or pass additional laws within their countries? This is another issue. Then, of course, autonomy versus human oversight is a concern as well. Human oversight, or additional oversight, is important when AI is being used in conflict zones.
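The "audits and monitoring" point is often operationalised by comparing the distribution of a model's live inputs or scores against the distribution it was validated on. A minimal sketch, assuming scores are plain floats, using the population stability index; the roughly 0.2 alert threshold mentioned in the comment is a common rule of thumb, not a fixed standard:

```python
import math

def population_stability_index(baseline, live, bins=10):
    """Compare two samples of model scores bin by bin. Values near 0
    mean the live distribution matches the baseline; values above
    roughly 0.2 are commonly treated as significant drift."""
    lo = min(min(baseline), min(live))
    hi = max(max(baseline), max(live))
    width = (hi - lo) / bins or 1.0

    def bin_fraction(sample, i):
        left = lo + i * width
        right = lo + (i + 1) * width
        n = sum(1 for x in sample
                if left <= x < right or (i == bins - 1 and x == hi))
        return max(n / len(sample), 1e-6)  # floor avoids log(0)

    return sum(
        (bin_fraction(live, i) - bin_fraction(baseline, i))
        * math.log(bin_fraction(live, i) / bin_fraction(baseline, i))
        for i in range(bins)
    )
```

A monitoring loop could compute this on a rolling window and escalate to a human reviewer whenever the index crosses the alert level, rather than letting a drifted model keep acting.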
I think one of the areas we could be following for developments around autonomous robots and technologies is the autonomous vehicle economy. Some of the cases that have been going on around autonomous vehicles are going to help us determine very minute details and very specific issues particular to autonomous vehicles and robots. They could apply to drones as well, for example, or to other autonomous weapons that have been used in conflict zones. Again, human in the loop is another solution, where you always make sure that human oversight is present when an autonomous system is being used. Then, of course, you can create ethics committees as well, which will constantly monitor these developments. Corporate accountability is also important: just as we require corporations to submit transparency reports on how they're doing on climate change, we could ask the same of military suppliers and contractors who are part of the corporate world. And if they lack training, or did not follow certain laws or standards, they could be held liable for that as well. There is, of course, a proposition to develop certain standards for military AI, which I think is still a very nascent, developing area, so it could be interesting to follow. And at the end, I would like to touch upon generative AI as well. We've seen a lot of issues surface related to deepfakes and disinformation. With generative AI, we should be able to use detection and prevention tools to spot it, especially on social media, and especially for people operating AI-based weapons or AI systems within conflict zones.
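The human-in-the-loop idea mentioned above is, at its core, a gating rule: the system may recommend, but may never act without explicit human approval. A minimal sketch; the `Recommendation` fields and the confidence floor are illustrative assumptions, not any fielded interface:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    target_id: str
    confidence: float
    rationale: str  # recorded so the decision can be audited later

def engagement_decision(rec: Recommendation,
                        human_approve: Callable[[Recommendation], bool],
                        confidence_floor: float = 0.99) -> bool:
    """Fail-safe gate: action requires BOTH a high-confidence
    recommendation AND explicit human approval. Every other path
    defaults to no engagement."""
    if rec.confidence < confidence_floor:
        return False  # too uncertain to even present to the operator
    return bool(human_approve(rec))  # the human keeps the final say
```

The important design choice is the default: a dropped link, a timeout, or an undecided operator should all resolve to "do not engage", which is exactly the property a committee or auditor would look for.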
So developing these tools is very important to counter disinformation and deepfakes, because at the end of the day our ultimate goal is to save human lives. I think that is what we're all here for. We're not here to talk about how to make profits or money; the ultimate goal is to prevent civilian casualties and maintain a dignified regard for human life. With that, I would like to end the presentation, and I'm open to questions. Thank you.

Bea Guevarra: Thank you, Ms. Anoosha. Your presentation was very informative, covering data bias and model drift, privacy, autonomy versus human oversight, corporate accountability, and the adequate use of AI. Very informative and insightful. I note that we have to end this session very soon, since we are running a bit late due to a technical error. But before moving to the open floor, I see that Mr. Mohamed is here in the room with us, so I would also like to invite him to give any comment he has.

Mohamed Sheikh-Ali: Thank you very much to the organisers and to the Kingdom for hosting this. Being from the ICRC, and acknowledging that we are running late, I will just focus on a few things and not duplicate what has been said by the other colleagues. AI and autonomous weapons should comply with and respect international humanitarian law: proportionality, distinction, and precaution. Can an AI-controlled or autonomous weapon that has been tasked to execute an operation abort the operation autonomously if it sees a child, a civilian, or a fighter who is no longer capable of participating in the conflict? A soldier who was in a frontline operation, participating in the conflict, once they are injured and no longer part of the conflict, is protected under international humanitarian law. So will these autonomous systems comply with those basic principles of international humanitarian law? That is a huge concern that we have. And therefore, at the ICRC today, we have a specialist in Silicon Valley and a delegation in China who are discussing with the technology companies contributing to the development of these systems and having this kind of conversation. I absolutely agree with the notion of engaging with the tech companies, and those developing these technologies, from the design stage. That is quite key. We are also calling for human oversight and control over any kind of weapon. A decision to kill, a life-and-death decision, should not be made by a tool or by an algorithm. At the very least, the final engagement or discharge of the munition should be controlled by a human being. That is quite important. And we are also convening, as some of my colleagues have already mentioned, discussions and dialogue on how to incorporate and integrate international humanitarian law in the development of autonomous weapons and artificial intelligence-controlled warfare.
And regulations are needed, but international humanitarian law applies in any kind of warfare, whether it is carried out by a human or by an autonomous weapon; that is very clear. Where we have to seek clarity is in who assumes responsibility. Is it the developer? Is it the commander? That has to be clarified, and therefore we are convening discussions. Just last year there were two workshops in Geneva with experts discussing these kinds of issues, and the recommendations and reports are out there on our website. Treaties that are binding and aligned with international humanitarian law are actually necessary and ethical. As one of the colleagues mentioned, ethics, dignity, and the preservation of human life are the ultimate goal, and that is what international humanitarian law is eventually about. Thank you very much.

Bea Guevarra: Thank you, Mr. Mohamed. For the next session, I want to pass to our online moderator, Annie. Annie, the floor is yours. Just a reminder that we only have about nine minutes left.

Qurra Tul AIn Nisar: Oh that’s okay right thank you so much Mr. Mohamed Sheikh Ali and I completely don’t want to forget Mr. Neem’s efforts in bringing Mohamed Ali on board with us. Thank you so much. So I love how this session is not only about identifying problems but also about practical solutions and exploring how exactly AI can be used as a force of good. So I would really quickly want to you know move our discussion towards our last policy question because you guys have effectively answered all other policy questions in your presentations. So just just quickly bring and bringing the discussion towards it. Can explainable AI technologies be effectively applied to autonomous weapon system to ensure human oversight and understanding of how targeting decisions are made. So I understand how there was compliance by design discussed and also biases in AI systems discussed by Anusha and also Ms. Yasmine. I would really appreciate if any of the speakers present on-site or online would like to take this question up.

Yasmin Afina: Sure, I'm happy to have a first stab at it, and colleagues, please feel free to complement. Just from the top of my head: first of all, thank you for the very interesting question. It's very pertinent, and it's something diplomats in Geneva are grappling with every day. Based on what I've seen and heard engaging with stakeholders, ranging from state representatives to civil society and industry, explainable AI generally is an issue the AI community is trying to grapple with, but in the military domain there are quite a few additional implications. In particular, under IHL you have the legal duty to investigate violations of international law. With machine learning-based systems you have a black box: you know the input and you know the output, but you don't know how it went from the input to the output, for example why the system specifically recommended targeting this particular person. So if that output led to a potential violation of international humanitarian law, there may be issues as to how you could effectively conduct an investigation when the system has a black box. At the same time, there are growing research efforts trying to circumvent that problem. In the military domain, you also have to understand that when a commander authorizes the use of force, that decision is held, or supposed to be held, to a very high standard. The commander will always maintain some level of responsibility over their decision, even when the commander decides to follow the recommendation of a weapon system or an AI-based decision support system, and even when the commander does not know how the system came up with the output.
And then there are also recommendations related to best practices with regards to documentation in the military domain. Even when AI is not being used, that is something that is increasingly being looked at. So there is growing research into this, but I don't think there is any silver bullet for this question. It depends on how many resources are dedicated to it, and how much political willingness there is to dedicate those resources. Thank you.
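One family of the "growing research efforts" mentioned above treats the black box as a callable and probes it from the outside. A minimal sketch of permutation importance, which estimates how much a model's output depends on a single input feature without opening the box; the toy interface `model(row) -> label` and tuple rows are assumptions for illustration:

```python
import random

def permutation_importance(model, rows, labels, feature_idx,
                           trials=30, seed=0):
    """Shuffle one feature column and measure the average drop in
    accuracy; a large drop means the black-box model leans heavily
    on that feature. `rows` are tuples, `model` is any callable."""
    rng = random.Random(seed)

    def accuracy(data):
        return sum(model(r) == y for r, y in zip(data, labels)) / len(labels)

    base = accuracy(rows)
    drops = []
    for _ in range(trials):
        column = [r[feature_idx] for r in rows]
        rng.shuffle(column)
        shuffled = [r[:feature_idx] + (v,) + r[feature_idx + 1:]
                    for r, v in zip(rows, column)]
        drops.append(base - accuracy(shuffled))
    return sum(drops) / trials
```

An investigator could use probes of this kind to ask, after the fact, which inputs actually drove a recommendation, even when the model's internals are unavailable, which speaks directly to the investigation duty raised above.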

Qurra Tul AIn Nisar: Thank you so much, Ms. Yasmin. And I would love it if the on-site speakers could also add their insights on this question.

Mohamed Sheikh-Ali: All I can add is that, until the technology is advanced enough, which, in our opinion, from the international humanitarian law perspective, it will never be, human responsibility and accountability are necessary. I think there is a technical issue, but all I am saying is that human control, oversight, and accountability are ultimately necessary, even if the technology is very advanced. That is our position at the moment, and there is an expert here.

Jimena Sofia Viveros Alvarez: Well, I agree entirely. I believe that unless one can completely understand and control the entirety of the effects of a technology, one should not be using it, especially when human lives are at stake. And I also don't like the term LAWS, lethal autonomous weapons systems, because the harm is not only lethal: other types of physical harm, harm to integrity, or targeting for detention for other purposes are also quite harmful, and should also be encompassed in the regulation of these technologies and their uses. I would like to end with a quote from Amina Mohammed, Deputy Secretary-General of the United Nations, who said this at the Arab Forum held in March in Beirut: there can be no sustainable development without peace. That is something we should keep in mind, because without peace, we really have nothing. Thank you.

Qurra Tul AIn Nisar: Amazing. Thank you so much to the on-site speakers as well as the online ones. I understand we still have Ms. Anoosha left to answer this question, but we quickly need to wrap up. I love how, in a very short time, we covered accountability, global coordination and collaboration, and compliance; the compliance-by-design part has got to be my favourite. We have had some great key takeaways from this session, and I really hope it inspires further action and discussion, because we need it at this time. The report of this session will be shared right away. And I would quickly request all the speakers present on site to come closer to the screen, so we can have a group photo together. Feel free to come, and we can pin Ms. Yasmin and Ms. Anoosha on the screen for a quick group photo.

Bea Guevarra: Next, please pin Beer, who is also one of our organizing members. Unfortunately, we have to end this session; it is time, so thank you, everyone. I know you have comments and questions; maybe you can approach the on-site speakers later. Sorry about that. Could you please pin Beer on the screen? Thank you. Okay. Speakers, feel free to join if you would like to; otherwise there will be no one in the picture. I guess, Annie, you can take a quick screenshot of us, and then we are good to leave the session today. I think you're on mute, Annie. So I only have the online participants, including Phil, with me, so I'm taking the screenshot. Hold your best poses. Three, two, one. Got it. Thank you so much. Have a good day. Bye.

Y

Yasmin Afina

Speech speed

170 words per minute

Speech length

2017 words

Speech time

711 seconds

International law as central component of AI governance

Explanation

Yasmin Afina argues that international law is a crucial element in governing AI in the military domain. She emphasizes that states across regions view compliance with international law as a central component of their approaches to AI governance in defense and security.

Evidence

Findings from regional consultations with states and experts in Asia Pacific, the Middle East and Central Asia, Africa, Europe, Latin America, and the Caribbean.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Differed with

Jimena Sofia Viveros Alvarez

Differed on

Scope of AI governance in warfare

Translating legal requirements into technical requirements

Explanation

Afina suggests that international law considerations should be translated into technical requirements for AI technologies in the military domain. This approach aims to ensure compliance with international law from the early stages of development and testing.

Evidence

Reference to her PhD research on framing the development, testing, and evaluation of AI technologies for military targeting based on international humanitarian law considerations.

Major Discussion Point

Future Developments and Recommendations

Explainable AI for autonomous weapons systems

Explanation

Afina discusses the challenges of explainable AI in the military domain, particularly for machine learning-based systems with a ‘black box’ nature. She highlights the implications for investigating potential violations of international law when the decision-making process of AI systems is not transparent.

Evidence

Mention of ongoing research efforts to address the ‘black box’ problem in AI systems used in the military domain.

Major Discussion Point

Ethical Considerations and Challenges

J

Jimena Sofia Viveros Alvarez

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Need for coherent global AI governance framework

Explanation

Jimena Sofia Viveros Alvarez emphasizes the importance of developing a comprehensive global AI governance framework. She argues that this framework should be binding and include proper implementation mechanisms to address the challenges posed by AI in peace and security domains.

Evidence

Reference to current governance initiatives such as GGEs, RE-AIM, RAISE, and UN General Assembly resolutions.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Anoosha Shaigan

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Differed with

Yasmin Afina

Differed on

Scope of AI governance in warfare

Need for binding treaties aligned with international law

Explanation

Alvarez stresses the importance of creating binding treaties that align with international humanitarian law. She argues that these treaties are necessary and ethical to ensure the preservation of human life and dignity in the context of AI and warfare.

Major Discussion Point

Future Developments and Recommendations

Agreed with

Mohamed Sheikh-Ali

Agreed on

Human control and accountability in AI warfare systems

A

Anoosha Shaigan

Speech speed

137 words per minute

Speech length

2080 words

Speech time

909 seconds

Compliance with international humanitarian law principles

Explanation

Anoosha Shaigan emphasizes the importance of AI systems in warfare complying with key principles of international humanitarian law. She specifically mentions the principles of distinction, proportionality, and necessity as crucial considerations for AI in conflict situations.

Evidence

Reference to UN principles, Geneva Conventions, and the ICRC’s handbook as sources for these principles.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Jimena Sofia Viveros Alvarez

Mohamed Sheikh-Ali

Agreed on

Importance of international law in AI governance

Data bias and model drift in AI systems

Explanation

Shaigan highlights the ethical concerns of data bias and model drift in AI systems used in warfare. She explains that biased training data can lead to discriminatory actions by AI-based weapons, while model drift can cause unpredictable behavior in AI systems.

Evidence

Examples of potential biases in AI training data, such as discriminatory targeting based on appearance.

Major Discussion Point

Ethical Considerations and Challenges

Privacy concerns in conflict zones

Explanation

Shaigan raises concerns about privacy issues in conflict zones, particularly related to surveillance and the use of facial recognition technology. She emphasizes the need for privacy protections, especially when targeting specific individuals like doctors or journalists.

Evidence

Reference to the Gaza conflict where specific individuals have been targeted.

Major Discussion Point

Ethical Considerations and Challenges

Clarifying liability for AI actions in warfare

Explanation

Shaigan discusses the complex issue of liability for actions taken by AI systems in warfare. She outlines various potential responsible parties, including operators, commanders, states, and developers, and emphasizes the need for clear accountability frameworks.

Evidence

Reference to existing legal principles such as command responsibility and state responsibility.

Major Discussion Point

Accountability and Responsibility

Corporate accountability for military AI suppliers

Explanation

Shaigan proposes that military AI suppliers and contractors should be held accountable for their products. She suggests implementing transparency reports similar to those used for climate change compliance in the corporate world.

Major Discussion Point

Accountability and Responsibility

Developing standards for military AI

Explanation

Shaigan mentions the proposition to develop specific standards for military AI. She acknowledges that this is a nascent area but suggests it could be an important development to follow in the future.

Major Discussion Point

Future Developments and Recommendations

M

Mohamed Sheikh-Ali

Speech speed

107 words per minute

Speech length

569 words

Speech time

318 seconds

Human oversight and control necessary for weapons systems

Explanation

Mohamed Sheikh-Ali emphasizes the necessity of human oversight and control in AI-powered weapons systems. He argues that life-and-death decisions should not be made solely by algorithms or tools, but must involve human judgment.

Evidence

Reference to ICRC’s position on the need for human control in autonomous weapons systems.

Major Discussion Point

International Law and AI in Warfare

Agreed with

Yasmin Afina

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Agreed on

Importance of international law in AI governance

Human control needed for life-and-death decisions

Explanation

Sheikh-Ali reiterates the ICRC’s position that decisions to kill or use lethal force should not be made by AI alone. He stresses that the final engagement or discharge of munitions should always be controlled by a human being.

Major Discussion Point

Ethical Considerations and Challenges

Agreed with

Jimena Sofia Viveros Alvarez

Agreed on

Human control and accountability in AI warfare systems

Human responsibility and accountability necessary

Explanation

Sheikh-Ali maintains that human responsibility and accountability are ultimately necessary in the use of AI in warfare. He argues that even with advanced technology, human control and oversight remain essential from an international humanitarian law perspective.

Major Discussion Point

Accountability and Responsibility

Agreed with

Jimena Sofia Viveros Alvarez

Agreed on

Human control and accountability in AI warfare systems

Engaging tech companies from design stage

Explanation

Sheikh-Ali mentions ICRC’s efforts to engage with technology companies developing AI systems for military use. He emphasizes the importance of having these conversations from the design stage of the technologies.

Evidence

Reference to ICRC’s specialist in Silicon Valley and delegation in China discussing with technology companies.

Major Discussion Point

Future Developments and Recommendations

Agreements

Agreement Points

Importance of international law in AI governance

Yasmin Afina

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Mohamed Sheikh-Ali

International law as central component of AI governance

Need for coherent global AI governance framework

Compliance with international humanitarian law principles

Human oversight and control necessary for weapons systems

All speakers emphasized the crucial role of international law in governing AI in warfare and military applications, stressing the need for compliance with existing legal frameworks and principles.

Human control and accountability in AI warfare systems

Jimena Sofia Viveros Alvarez

Mohamed Sheikh-Ali

Need for binding treaties aligned with international law

Human control needed for life-and-death decisions

Human responsibility and accountability necessary

Both speakers strongly advocated for maintaining human control and accountability in AI-powered warfare systems, particularly for critical decisions involving the use of lethal force.

Similar Viewpoints

Both speakers highlighted the need to develop specific technical standards or requirements for military AI that align with legal and ethical considerations.

Yasmin Afina

Anoosha Shaigan

Translating legal requirements into technical requirements

Developing standards for military AI

Both speakers emphasized the importance of involving and holding accountable the private sector companies developing AI technologies for military use.

Anoosha Shaigan

Mohamed Sheikh-Ali

Corporate accountability for military AI suppliers

Engaging tech companies from design stage

Unexpected Consensus

Comprehensive approach to AI governance beyond warfare

Jimena Sofia Viveros Alvarez

Anoosha Shaigan

Need for coherent global AI governance framework

Privacy concerns in conflict zones

Both speakers unexpectedly broadened the discussion beyond just warfare, emphasizing the need for a comprehensive AI governance approach that addresses various contexts including peacetime and civilian applications.

Overall Assessment

Summary

The speakers generally agreed on the importance of international law in AI governance, the need for human control and accountability in AI warfare systems, and the necessity of developing specific standards for military AI. There was also consensus on involving and regulating private sector companies in the development of military AI technologies.

Consensus level

High level of consensus on core principles, with some variations in specific focus areas. This strong agreement suggests a solid foundation for developing international norms and regulations for AI in warfare, but also highlights the complexity of implementing these principles across different contexts and stakeholders.

Differences

Different Viewpoints

Scope of AI governance in warfare

Yasmin Afina

Jimena Sofia Viveros Alvarez

International law as central component of AI governance

Need for coherent global AI governance framework

While Afina focuses on international law as a central component of AI governance in the military domain, Alvarez argues for a broader, coherent global AI governance framework that encompasses both military and civilian uses due to the dual-use nature of AI technologies.

Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the scope and approach to AI governance in warfare, the extent of human control required, and the specific mechanisms for ensuring accountability and compliance with international law.

difference_level

The level of disagreement among the speakers is moderate. While they share common concerns about the ethical and legal implications of AI in warfare, they differ in their proposed solutions and areas of emphasis. These differences reflect the complex and multifaceted nature of the issue, highlighting the need for continued dialogue and collaboration among various stakeholders to develop comprehensive and effective governance frameworks for AI in warfare.

Partial Agreements

Partial Agreements

All speakers agree on the need for human oversight and accountability in AI-powered weapons systems, but they differ in their approaches. Afina suggests translating legal requirements into technical ones, Shaigan focuses on clarifying liability frameworks, while Sheikh-Ali emphasizes maintaining human control over life-and-death decisions.

Yasmin Afina

Anoosha Shaigan

Mohamed Sheikh-Ali

Translating legal requirements into technical requirements

Clarifying liability for AI actions in warfare

Human oversight and control necessary for weapons systems

Similar Viewpoints

Both speakers highlighted the need to develop specific technical standards or requirements for military AI that align with legal and ethical considerations.

Yasmin Afina

Anoosha Shaigan

Translating legal requirements into technical requirements

Developing standards for military AI

Both speakers emphasized the importance of involving and holding accountable the private sector companies developing AI technologies for military use.

Anoosha Shaigan

Mohamed Sheikh-Ali

Corporate accountability for military AI suppliers

Engaging tech companies from design stage

Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

I wanted to add another layer to our discussions, the role of the multi-stakeholder community. And in fact, in the report that I previously mentioned, one of the other key areas of nuance convergence that we have identified is the importance of multi-stakeholder engagement.

speaker

Yasmin Afina

reason

This comment broadened the scope of the discussion beyond just governments to include other stakeholders, highlighting the complexity of AI governance.

impact

It shifted the conversation to consider a more holistic approach to AI governance, leading to discussion of various stakeholder perspectives and initiatives.

I think I don’t like to just circumscribe this conversation to warfare. Because these technologies are being used to attack civilians also during peacetime. So I like to call it the peace and security spectrum of things.

speaker

Jimena Sofia Viveros Alvarez

reason

This reframing challenged the narrow focus on warfare and expanded the discussion to consider broader implications of AI in security.

impact

It prompted consideration of AI’s impact across different contexts and actors, leading to a more comprehensive examination of legal and ethical issues.

Data bias and model drift are the main concerns with AI models. Data bias is, of course, if you train your AI with biased data. For example, if you train it with skewed data or discriminatory patterns like kill all the dark-looking people or kill all the brown people or kill anybody who doesn’t look white or Caucasian.

speaker

Anoosha Shaigan

reason

This comment brought attention to specific technical challenges in AI systems that have serious ethical implications, especially in conflict situations.

impact

It deepened the discussion on the ethical considerations of AI in warfare, leading to further exploration of oversight and accountability measures.
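The model-drift concern Shaigan raises can be made concrete: drift is commonly detected by comparing the distribution of a model's live inputs against its training-time distribution. A minimal sketch using the population stability index (the data, bin values, and the 0.25 threshold are illustrative conventions, not anything stated in the session):

```python
import math

def psi(expected: list[float], observed: list[float]) -> float:
    """Population Stability Index between two binned distributions
    (each a list of bin proportions summing to 1). Higher = more drift."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (o - e) * math.log((o + eps) / (e + eps))
        for e, o in zip(expected, observed)
    )

# Training-time vs. live distribution of some input feature, binned.
train = [0.25, 0.25, 0.25, 0.25]
live = [0.05, 0.15, 0.30, 0.50]
score = psi(train, live)

# A common rule of thumb treats PSI > 0.25 as significant drift,
# warranting human review before the system keeps operating.
print(score > 0.25)
```

In a military AI context, such a check would be one ingredient of the human oversight the speakers call for: a drifting model is flagged for review rather than trusted to keep deciding.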

Can an AI-controlled or autonomous weapon that has been tasked to execute an operation abort autonomously the operation if they see a child or a civilian or a fighter who is no longer capable of participating in the conflict?

speaker

Mohamed Sheikh-Ali

reason

This question highlighted a crucial ethical and practical challenge in implementing AI in warfare while adhering to international humanitarian law.

impact

It focused the discussion on the specific challenges of ensuring AI systems can comply with the nuanced requirements of international law, leading to consideration of human oversight and control.

Overall Assessment

These key comments shaped the discussion by expanding its scope beyond just warfare to consider broader security implications, highlighting the importance of multi-stakeholder engagement, addressing specific technical and ethical challenges of AI systems, and emphasizing the need for human control and oversight. The discussion evolved from a general overview of international law and AI to a more nuanced exploration of practical challenges, ethical considerations, and governance frameworks across various contexts and stakeholders.

Follow-up Questions

How can international law obligations be effectively translated into technical requirements for AI systems in military applications?

speaker

Yasmin Afina

explanation

This is crucial for ensuring AI technologies used in warfare are compliant with international law from the design stage.

How can a coherent global AI governance framework be developed and implemented?

speaker

Jimena Sofia Viveros Alvarez

explanation

A comprehensive framework is needed to address the dual-use nature of AI and its applications across civilian and military domains.

How can liability be determined when AI systems are involved in military actions that violate international law?

speaker

Anoosha Shaigan

explanation

Clarifying responsibility (e.g., operator, commander, developer, or state) is essential for accountability in AI-enabled warfare.

How can data bias and model drift in AI systems used in military contexts be effectively monitored and mitigated?

speaker

Anoosha Shaigan

explanation

Addressing these issues is critical to prevent discriminatory or unpredictable actions by AI in warfare.

How can privacy and surveillance concerns be addressed when using AI technologies in conflict zones?

speaker

Anoosha Shaigan

explanation

Balancing military needs with civilian privacy rights is a key challenge in AI-enabled warfare.

Can AI-controlled or autonomous weapons reliably comply with core principles of international humanitarian law, such as distinction and proportionality?

speaker

Mohamed Sheikh-Ali

explanation

This is fundamental to ensuring AI weapons can be used in accordance with international law.

How can explainable AI technologies be applied to autonomous weapon systems to ensure human oversight and understanding of targeting decisions?

speaker

Qurra Tul Ain Nisar

explanation

This is important for maintaining human control and accountability in AI-enabled warfare.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #225 Gender inequality in meaningful access in the Global South

Session at a Glance

Summary

This discussion focused on the gender gap in digital access and usage across different regions, particularly in low and middle-income countries. Presenters from Research ICT Africa, CETIC Brazil, and GSMA shared findings from their surveys on internet and mobile phone usage. Key points included that while overall internet access has increased in many countries, significant gender gaps persist, especially in Africa and South Asia. The gaps widen at each stage from basic access to regular, diverse internet use.

Barriers to women’s digital inclusion include device affordability, lack of digital skills, and safety concerns. Even when women have access, they often use the internet less diversely than men, particularly for economically beneficial activities. The presenters emphasized the importance of collecting gender-disaggregated data through household surveys to understand these nuanced gaps and inform targeted policies. They discussed various models for funding and conducting such surveys, including partnerships with national statistical offices.

The discussion highlighted that progress in closing digital gender gaps is not guaranteed and requires sustained, targeted efforts informed by data. Participants stressed the need for more funding and support for data collection, especially in low-income countries where statistics offices are often underfunded. They also emphasized the importance of making survey data relevant and accessible to policymakers to drive evidence-based interventions aimed at achieving universal and meaningful connectivity for all.

Keypoints

Major discussion points:

– There are significant gender gaps in internet access and usage, particularly in low and middle-income countries

– Data collection and analysis is crucial for understanding and addressing these gender gaps

– Barriers to internet access and usage for women include affordability, lack of digital skills, and safety/security concerns

– Collaboration between researchers, policymakers, and other stakeholders is important for collecting relevant data and using it to inform policies

– More funding and support is needed for data collection efforts, especially in low-income countries

The overall purpose of the discussion was to examine gender gaps in internet access and usage across different regions, share findings from various research efforts, and discuss the importance of data collection for informing policies to address these gaps.

The tone of the discussion was informative and collaborative. Panelists shared insights from their research in a factual manner, while also emphasizing the importance of working together and with policymakers to address the issues identified. There was a sense of urgency about the need for more data and funding, but the overall tone remained constructive and solution-oriented throughout.

Speakers

– Relebohile Mariti: Research Fellow, Research ICT Africa

– Fabio Senne: Project Coordinator at the Regional Centre of Studies on Information and Communication Technologies under the auspices of UNESCO (Cetic.br)

– Claire Sibthorpe: Head of Digital Inclusion in the Mobile for Development (M4D) team at the GSMA

Full session report

Gender Gaps in Digital Access and Usage: A Comprehensive Analysis

This discussion focused on the persistent gender gaps in digital access and usage across different regions, particularly in low and middle-income countries. Presenters from Research ICT Africa, Cetic Brazil, and GSMA shared findings from their surveys on internet and mobile phone usage, highlighting the complexities of the digital divide and the importance of data-driven approaches to address these inequalities.

Key Research Projects and Methodologies

Research ICT Africa presented findings from their After Access project, which covered 20 countries in Africa, Asia, and Latin America between 2017 and 2022. Cetic Brazil shared insights from their ICT Households survey, which uses a multi-stakeholder model to define indicators and surveys individuals aged 10 and older. GSMA discussed their annual Mobile Gender Gap Report and State of Mobile Internet Connectivity Report, which provide global insights into mobile internet adoption and usage.

Key Findings on Gender Gaps

Claire Sibthorpe from GSMA emphasised that progress in closing the mobile internet gender gap is fragile and not guaranteed, underscoring the vulnerability of digital inclusion efforts to external factors and the disproportionate impact on women.

Fabio Senne from Cetic Brazil introduced a crucial distinction between basic access and meaningful connectivity. He stated, “Although we have 88% or 90% that had some access to the internet, when it goes to the meaningful connectivity, we can say that today in Brazil, only 22% of the population has a meaningful connectivity, and being 30% are in the 0 to 2 of this scale.” This insight reveals a much larger digital divide than raw access numbers suggest.

Sibthorpe highlighted that gender gaps widen at every stage of internet adoption and usage, with specific figures varying by region. For example, in South Asia, women are 41% less likely than men to use mobile internet.
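Figures like "41% less likely" are relative gender gaps: the difference between male and female usage rates expressed as a share of the male rate. A minimal sketch of that calculation (the function name and the input rates are illustrative, not GSMA's published data):

```python
def gender_gap(male_rate: float, female_rate: float) -> float:
    """Relative gender gap: how much less likely women are than men
    to use a service, as a fraction of the male rate."""
    return (male_rate - female_rate) / male_rate

# Illustrative: if 58% of men but 34% of women in a region use
# mobile internet, women are ~41% less likely than men to use it.
print(round(gender_gap(0.58, 0.34), 2))  # → 0.41
```

Because the gap is relative, a region can have higher overall adoption than another yet still show a wider gap, which is why the speakers report both the rates and the gap.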

Barriers to Women’s Digital Inclusion

The discussion identified several key barriers to women’s internet access and usage. Relebohile Mariti from Research ICT Africa emphasised affordability of devices and data as a primary obstacle. She also highlighted the lack of digital skills and awareness as major barriers, providing concrete evidence: “So when looking at don’t know what the Internet is, we see that 23% of females say that they don’t know what the Internet is, and this is slightly lower for males at only 19%.”

Claire Sibthorpe added that safety and security concerns significantly limit women’s internet use. She also pointed out that social norms and structural inequalities exacerbate barriers for women.

Importance of Gender-Disaggregated ICT Data

All speakers emphasised the critical importance of collecting and analysing detailed, gender-disaggregated data to understand digital gaps and inform effective policies and interventions. Fabio Senne advocated for a multi-stakeholder approach to ensure relevant data collection, while Claire Sibthorpe focused on using data to inform evidence-based policies and interventions.

Challenges in ICT Data Collection

The discussion highlighted several challenges in collecting ICT statistics, including limited funding, especially in developing countries, and difficulties in partnering with national statistical offices. Fabio Senne pointed out the challenge of keeping surveys relevant as technology rapidly changes and raised the sensitive issue of collecting data on topics like online violence, which requires careful approaches.

Policy Implications and Recommendations

The speakers agreed on the need for targeted interventions to address specific barriers women face in digital inclusion. Specific recommendations included:

1. Focusing on affordability, skills development, and creating enabling environments

2. Lowering mobile-specific taxes

3. Implementing handset financing initiatives

4. Increasing investment in gender-disaggregated ICT data collection

The discussion also touched on the importance of engaging policymakers effectively to use ICT data for decision-making.

Emerging Issues and Future Directions

The discussion highlighted the need to consider children’s access to smartphones and internet, with an audience member raising concerns about the appropriate age for children to have unrestricted internet access. This prompted a discussion about the importance of disaggregating data by age groups and considering the unique needs and risks for different demographics.

The speakers also emphasised the need for qualitative studies to complement quantitative data, especially for sensitive topics like online violence. The moderator noted the challenges in asking questions about controlling behavior in household surveys, highlighting the complexity of addressing certain aspects of the digital divide.

In conclusion, this discussion underscored the complex nature of the digital gender divide and the crucial role of comprehensive, gender-disaggregated data in understanding and addressing these inequalities. It emphasised the need for sustained, targeted efforts informed by data to achieve universal and meaningful connectivity for all, particularly women and children in low and middle-income countries.

Session Transcript

Moderator: Asia-Pacific, on the other hand, the overall percentage is a bit lower, but the gap is smaller as well. It’s 68 against 64. The big gap, though, is in Africa, where 43% of the male population is using the internet against only 31% of women. And Rele will talk to us about that later and come up with a few explanations, maybe, and some ideas how we can address this. We also have data on the percentage of the population owning a mobile phone, where we can see the overall percentages are a bit higher, but the gap is the same. 82% of the male population globally owns a mobile phone against 77% of the female population. Earlier, I spoke about the difference by region, but, of course, regions are very heterogeneous. There are rich countries, poor countries, different types of countries. If you look at income, income is a very big factor in these differences. There’s a big correlation. I’m not going to say causation, but there’s a big correlation between income level of a country and gender gap. And here we can see that in high-income countries, almost everyone has a mobile phone, both men and women. In upper-middle-income countries, it’s 86% of male against 84% of women, so there’s a small gap. Lower-middle, it’s 77% against 66%. And in low-income countries, 60% of the male population against 41% of the female population, so a huge gap there as well. I will stop here with the barrage of data, which is a bit hard to follow, especially when spoken without any slides. In our session today, we’re going to look deeper into these issues based on data. And eventually, what we’re trying to answer are the following policy questions, or at least that’s where we should be leading, and maybe we can get something out of this session. Policy questions are, what are the most binding
What policy interventions will create a more even playing field where women are as able as men to derive socio-economic benefits from digital interaction. So how can we solve it? How can policymakers solve it? And then which countries have provided evidence of providing an enabling environment for equal participation in the digital economy? And what were the key factors which led to this success? So learning from others. So these are the three policy questions. I would now like to give the floor to Relebohile Mariti from Research ISC Africa to talk about the situation in Africa. Rele, the floor is yours.

Relebohile Mariti: Thank you and thank you to the participants for taking your time to join us today. As Martin has already said, I will be sharing the evidence from… Okay, so I’ll be presenting our findings from the interesting and important work we have been doing at Research ICT Africa. Okay, apologies, we’re trying to move the slide. Okay, so what we see from the data is that despite the increased digitalization around the world and even in Africa, the policies in most African countries have failed to close digital inequalities. And so as a result we still see a lot of disparities in how these technologies are adopted and used across countries in Africa and within countries. And so what we find is that those digital inequalities are mainly driven by structural inequalities. These are the differences in income and education between different groups within countries. And at the same time we see that COVID-19 has exacerbated the structural inequalities by widening the digital inequalities. And so what we have been doing at Research ICT Africa is to collect data on the adoption of digital technologies and how individuals use them. And we look at what barriers they face. And for those who already have access, what limitations they face in trying to use these technologies in the most productive way. And so we conclude that for African countries to achieve universal and meaningful connectivity, there’s a need for an effectively regulated, competitive, and innovative industry that will respond to consumers’ needs. So to just give an overview of the After Access project: this is the only nationally representative household and individual survey that looks at the adoption and use of digital technologies across multiple African countries.
And Research ICT Africa has been collecting this data, and so the first round of the After Access Surveys took place between 2005 and 2008, and it covered 17 African countries. The second was between 2010 and 2012, and it covered 13 African countries, and the third was between 2017 and 2018, and it covered only 10 African countries. And you see that the number of countries we are covering is going down, and this is because we are an organization that’s funded by other organizations, so we really need to invest more in this. So today I’ll be presenting the findings from the fourth round of the After Access Surveys, which took place between 2022 and 2023, and we covered seven African countries. But today I only have results for six African countries. So the survey provides a detailed account of the adoption of digital technologies, how individuals use these technologies, and what barriers and limitations they face. And for each of the countries surveyed, the household and individual survey is accompanied by micro-enterprise surveys. So the individual and household surveys are nationally representative, but the micro-enterprise surveys are not nationally representative, because we don’t have national statistics on micro enterprises in different countries. But then we were able to cover a large number of micro enterprises in each of the countries, so because of this the insights are really important.
For instance, in 2022, 72% of the adult population in Nigeria did not have access to a smartphone, and this is even higher in countries like Ethiopia and Uganda, where we find that 84% of the adult population do not have access to a smartphone, and this limits access to the internet: in those countries where we have low levels of smartphone adoption, we have low levels of internet access. More than 50% of the adult population do not have access to the internet, and when looking across countries, we see different gaps. We see significant gaps across countries, and also there are significant gaps within countries. We find that men are still more likely than women to use the internet, and the gender gaps are more significant in countries where we have low levels of internet access. So the main barriers to internet access are the price of devices, the lack of digital skills, and the lack of awareness of what the internet is. So when we talk of lack of digital skills, it’s the basic literacy required to navigate the internet. So we still see individuals who say they don’t know how to use the internet, and those who say they don’t know what the internet is. And also when doing the supply side analysis, we find that most of those countries that we have, Ethiopia, Nigeria, and Uganda, rank in the top 10 countries in Africa that have the cheapest prices of data. But those online indicate that they’re not able to use the internet as much as they would love to because of high data prices. So low data prices do not equate to improved access and use, because of the structural inequalities. And also when doing the supply side analysis, we find that when looking at the quality of internet that is experienced by end users, there are disparities. Those in urban and capital centers enjoy better quality than others.
So because most internet users access the internet through smartphones, it is important that we understand what the level of smartphone ownership is within countries and how that has evolved over time. So overall, there has been an increase in mobile phone ownership across countries. And we see that in some countries, 90% of the adult population have access to a mobile phone. So when looking at the bar, the red part of the bar shows smartphone ownership and the gray part shows basic phone or feature phone use. So where we have the red part being bigger than the gray part, that means in that country mobile phone ownership is dominated by use of smartphones. But if we have the gray part being larger than the red part, it means mobile phone ownership is dominated by use of basic or feature phones. So we still see that in some countries we have low levels of adoption of smartphones. For instance, when you look at Ethiopia and Nigeria and Uganda, you see that mobile phone ownership in those countries is dominated by use of basic phones, which limits access to the Internet. Only 16% of the adult population in Nigeria, I’m sorry not Nigeria, in Ethiopia and Uganda had access to a smartphone, and this was 28% in Nigeria. So not only do we have devices being inaccessible, but even the level of access differs across groups within the same country. So when looking across gender, we find that men are more likely than women to have access to a mobile phone, but those gaps are more pronounced when looking at smartphone ownership specifically. So when looking at the bars, the right bar for each of the countries represents mobile phone ownership among males, and you find for some of those countries there is a difference, and an exception is South Africa where we see that there’s almost parity in smartphone use.
So these low levels of access to smartphones among females limit their access to Internet-enabled services and opportunities. So then when looking across all countries, we find that the main barrier to smartphone ownership is the price of these devices. So across all countries, a significant share of individuals who do not have access to smartphones indicate that those devices are too expensive for them. So when looking at the trends in internet access, we see that internet access has been growing across all countries, and some of the countries have reached more than half of the adult population. However, despite the increase in internet access across all countries, what we see is that the level of access differs across countries, and even the rate of growth of internet access also differs across countries. It is growing faster in some countries than in others. For instance, when you look at the level of internet access in 2018 in Ghana, Kenya, and Nigeria, you find that they had relatively similar levels of internet access. But between 2018 and 2022, we see that in Kenya and Ghana, the level of internet access doubled, and in Nigeria, there was a marginal increase. So then, because of this positive trend in internet access, the gender gaps have also been declining, but they still remain significant, with males being more likely than females to have access to the Internet. These gender gaps are more pronounced in countries where we have low levels of Internet access. So then when looking at the barriers to Internet access across all countries, we see that the lack of access to devices comes out as the main barrier. But even though this is the main barrier, we still see the lack of digital skills and the lack of awareness of what the Internet is as major barriers.
For instance, when looking at don’t know what the Internet is, we see that 23% of females say that they don’t know what the Internet is, and this is slightly lower for males at only 19%. So there are slight differences across countries. For instance, when you look at Ethiopia, Nigeria, and South Africa, the lack of digital skills, that is individuals who say they don’t know how to use the Internet, is the main barrier. But when looking at countries like Ghana, Kenya, and Uganda, we see that the lack of access to Internet-enabled devices remains the main barrier. So when looking at how individuals who are already online are using the Internet, we see that across all countries the Internet is predominantly used for social networking. So nearly all Internet users report using the Internet for social networking. Very few report using the internet for online activities that have direct economic benefits like government services and online work. We only have 17% of internet users saying they use the internet for online work, and 26% say they use the internet for government services. So we still see that the use of the internet for meaningful activities remains low. So when looking across gender, we see that when it comes to activities that enhance leisure, there are no differences in how males and females use the internet for those activities. But when we look at those online activities that have direct economic benefits, like government services, online work, and using the internet to access the news, we see significant gaps, with males being more likely to use the internet for those activities in comparison to females. We also had a question that asked internet users if they are able to use the internet as much as they would love to. And what we find is that the majority of them say they feel limited in the extent to which they can use the internet.
And the main factor leading to this is the prices of data. So we see that across all countries, the main limitation to internet use is data prices. So when looking at the use of digital technologies by micro enterprises, we find that in most of the countries, mobile phone ownership is dominated by use of basic phones. So in Nigeria and Ethiopia, we see that of the micro enterprises that report using mobile phones for business activities, most of them only have basic phones as their most advanced phones, and very few have access to smartphones. When looking across different groups, we see that female-owned micro enterprises, those located in rural areas, and informal micro enterprises are the least likely to have mobile phones. And these differences are more pronounced when we look at smartphone ownership. And when we look at South Africa and Ghana, we find that mobile phone ownership amongst micro enterprises in those countries is dominated by use of smartphones, and there are slight differences across groups. So then we see the differences that we have in mobile phone ownership and smartphone ownership are also reflected in internet access. So what we see is that in the countries that have low levels of smartphone ownership, we also have micro enterprises having low levels of internet access. So in Ethiopia, only 5% of the surveyed micro enterprises were using the internet for business activities. And this was only 13% in Nigeria. And there are significant gaps across gender and locations as well as formality,
But they are higher when compared to other countries, but overall, across all countries, the use of internet for business activities by micro-enterprises is low. Because when looking across all those countries, you find that 60% of the established micro-enterprises were not using the internet for business activities. So also then, what we find is that we have micro-enterprises that have smartphones, but are not using the internet for business activities. So when we look at this specific group, we find that the main barrier to internet access is data prices. So for instance, when you look at Ethiopia, 26% of micro-enterprises reported using smartphones for business activities, but only 5% of them were using the internet. And so then, this says there is a need to look at data prices. So in conclusion, the analysis that we have emphasises the importance of having demand-side data. Without demand-side data, we are not able to determine the social and economic factors which limit the adoption and use of data. of digital technologies, and these factors are invisible to the supply side, so it is very important that we invest in demand-side data so that we can be able to monitor progress and identify gaps that are existing and be able to know what targeted interventions are required in order to promote universal and meaningful connectivity. And in doing this, we need to pay more attention to those at the intersection of these inequalities, particularly these are least educated females who are living in a poor rural household. So if we want to have universal and meaningful connectivity, there is a need to pay attention to this group. Thank you. I’ll hand over to you, Martin.

Moderator: Thank you very much. That was very interesting, very rich as well. At this point, I would like only questions for clarification. The general debate will come after the three presentations. So are there any questions for clarification in the room? I have one question, though. How do you define micro-enterprises? So in this study, micro-enterprises are those enterprises that have at most 10 employees, that is 10 or less, and are not part of a franchise. Okay, thank you very much. With that, I think we can move to another part of the world. We are moving to Brazil, and Fabio will tell us about their survey in Brazil. Fabio.

Fabio Senne: Hello. Good afternoon. Thank you very much for the invitation. It’s a pleasure to be here also with Martin and the other colleagues. I think I’ll be standing to see my slides, and if a colleague can help me. So, thank you. So, I’m from Cetic.br, which is a research center based in Brazil that is responsible for collecting ICT data in Brazil for the past 20 years. So, we have been monitoring this field for the past 20 years, and I don’t need to repeat Martin and Rele to say why we need this demand-side data and how useful it is to have this data, especially in a data-driven society where we are discussing AI and how to train models with data, but we still need to have at least some equality in the way the data is collected, and these types of inequalities are very important to monitor. So, my presentation here, if you can go to the next slide, if I have one main question or main message, it will be that we also need to innovate, of course, in data collection, but also in the data analysis of what we collect, in order to understand the gaps and the consequences and the correlations that we want to address with policy. So, I would like to show some examples of what we are doing in terms of measuring meaningful access and meaningful connectivity in Brazil, so that we can see what can be done with this type of demand-side data when we have it available. So, I’ll talk about more or less what we are doing and then talk about how the measurement is structured in Brazil. On the next slide, just to mention that in the case of Brazil, as we saw in Africa, there was a very fast change in scenario for the internet in the past 20 years. So, just to compare, if you compare 2008 to 2025, we went from having 42% of households connected to the internet to almost 98% of households connected.
In the past, 48% of internet users accessed the internet outside the home, in cyber cafes and other environments; now it’s just 7%. And with mobile phone connections, we went from 40% in 2008 to almost 88% now. So it’s a very fast-changing scenario in the country. And if you look at the differences, at least in individual access to mobile phones, you can see that there is not much difference between males and females when you compare the two figures, 2008 and 2024. So we could argue that there is no relevant gender gap when you compare just access, whether access to smartphones or access to the internet in Brazil. But our point is: apart from access, now that we have almost 90% of the population connected, how do you measure how significant, how meaningful, this access is to people? So, on the next slide, we decided to develop a very simple scale based on nine items, which we classified using ITU’s framework and other organizations’ frameworks for defining meaningful connectivity. We classified these nine indicators into four dimensions: affordability, so whether the connection is affordable to the people who have it; access to devices, so whether there are devices capable of benefiting from the internet; quality of connection, including download speed and so on; and the usage environment, whether you have internet in different spaces, at work, at home, at school, and so on. We use a normal sample survey that we have in Brazil, and we classify these nine items. For each person in the population, we compute a scale of 0 to 9 points, where 0 means very low meaningful connectivity and 9 means the minimum conditions for meaningful connectivity are met across affordability, devices, connection quality, and usage environment.
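The scoring just described can be sketched in code. The indicator names below are illustrative assumptions, not the actual wording of the nine CETIC.br survey items, but the grouping into four dimensions and the 0-to-9 sum follow the description above:

```python
from typing import Mapping

# Hypothetical indicator names; the nine real survey items differ,
# but they are grouped into these four dimensions (1 + 2 + 3 + 3 = 9).
DIMENSIONS = {
    "affordability": ["connection_is_affordable"],
    "devices": ["has_capable_device", "has_computer"],
    "quality": ["adequate_download_speed", "always_on_connection",
                "unlimited_data"],
    "environment": ["uses_at_home", "uses_at_work_or_school",
                    "uses_on_the_move"],
}

def connectivity_score(answers: Mapping[str, bool]) -> int:
    """Sum of the nine binary indicators: 0 (very low meaningful
    connectivity) to 9 (minimum conditions met on every item)."""
    return sum(
        int(answers.get(item, False))
        for items in DIMENSIONS.values()
        for item in items
    )
```

A respondent meeting every condition scores 9; one meeting none scores 0, which matches the 0-to-2 "very low" band and the full-score reading of meaningful connectivity described above.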
And when you calculate this, on the next slide, you can see very large inequalities: although 88% or 90% have some access to the internet, when it comes to meaningful connectivity, we can say that today in Brazil only 22% of the population has meaningful connectivity, and 30% are in the 0-to-2 range of this scale. Traditional differences appear; for instance, urban and rural areas are very different in terms of meaningful connectivity. The regions in Brazil also vary a lot, with the poorest regions having less meaningful connectivity. But take a look at male versus female: while there is no difference in access, when it comes to meaningful connectivity, almost 10 percentage points more males than females have meaningful connectivity in the country. So this is very important: when you build compound indicators and do more sophisticated analysis of the data you have, you see very large inequalities, and you can understand how to face those gender inequalities. On the next slide, please. Using the same scale, we compared those who have higher-quality connectivity with those who have low-quality connectivity: what happens with their activities online? For social media, instant messaging, or watching videos, the more communication- and entertainment-oriented activities, there are some differences, but they are smaller. Compare that with more transactional activities, such as public services, financial services, and studying online, and it is more or less the same as what happened in Africa, if you compare with the data shown before. So when it comes to doing the more important activities online, the ones with more benefits for people, low meaningful connectivity is correlated with low performance of those types of activities. And also the skills, which are very important.
Those who report fewer digital skills are also those with low levels of digital connectivity. Of course, these things are correlated; you don’t know what comes first, whether you lack skills and therefore don’t go for connectivity, or the opposite. But this is important for understanding the situation and making policy recommendations in the field. And the next slide, just to say that, as was mentioned before, very traditional inequalities also affect the online world. This is still happening, and that’s why we need disaggregated data: you need to look not just at the big picture but also at disaggregated data. Here you can see that when you break the data down by level of education, for instance, and compare the list of digital skills that the ITU recommends countries measure, there is a very large gap between those with more education and those with less education when it comes to digital skills. So this is another point arguing that we need more sophisticated and disaggregated data to understand digital inequalities. The next slide, please. Here, just to say, we have another survey with children, and I have just one figure to mention. It is interesting to see among children how the capability of making sense of the digital world still needs to be developed, discussed, and enhanced. When we ask children whether they think everyone finds the same information when they search for things online, or whether the first result of an internet search is always the best source of information, more than half of children aged 9 to 17 agree with these statements. So although you can have access and be online through social media, it is another type of skill to understand how algorithms and the digital world work. This is another example of how we can be more sophisticated in the analysis.
And to sum up, just a few words about CETIC. We are a center based in São Paulo, Brazil, that has been producing data for the past 20 years on internet access by different parts of the population, as well as by companies, governments, schools, and healthcare facilities, and we are also a UNESCO Category 2 Center cooperating with Latin American and African countries to produce these types of comparative data. All of this information is available online if you want it. And on the next slide, just to mention that we also have some interesting capacity-building programs for researchers to apply these types of surveys and to produce comparable data and information on the internet. But I won’t take more time, so we can have more discussion in the discussion phase. Thank you very much.

Moderator: Thank you very much, Fabio. Again, a very interesting presentation, very rich in detail, in the types of data you’re collecting, and in what you can do with it. Same procedure: are there any questions for clarification at this point? Nothing. Then I suggest we move on to Claire, who’s online and will talk to us about the GSMA results for low- and middle-income countries. Claire, I hope the connection works and the floor is yours.

Claire Sibthorpe: Thank you. Can you hear me? Very good, very well. Okay, perfect. And I’m sharing some slides. So I’m Claire from GSMA. I lead our digital inclusion programs, including our Connected Women program. One of the things we do is publish an annual report on the mobile gender gap, for which we conduct nationally representative surveys on women’s access to and use of mobile internet, and the barriers they face, across low- and middle-income countries. I thought it would be useful to start by highlighting the trends. As you can see from this slide, we started measuring the gender gap in 2017, and the mobile internet gender gap had been consistently narrowing up until about 2020, when it stood at 15%. Can you put it in full screen? Yeah, sorry, let me do that. Is it better now? Is it showing up as full screen for you now? Yes, it is. Thank you. Okay, great. So this was good news: it was a high gender gap, but it had been reducing. In fact, during the first phase of COVID, when there were lockdowns and people were stuck at home, we saw a real reduction in the gender gap, as women were having to go online to educate their children and such. But the fact is, and I think it was highlighted in the presentation by Rhea, for two years after that lockdown period ended, progress had stalled. There was a slowdown in digital inclusion for women, and progress on the mobile internet gender gap stalled, because women were disproportionately negatively impacted by the immediate aftermath. In fact, the gap widened to 19% in 2022. Last year, when we published our latest data, we showed the gender gap had narrowed back down to 15%. So women are 15% less likely than men to use mobile internet, bringing us back to where we were in 2020.
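The 15% figure is a relative gap: GSMA’s reports typically define the mobile gender gap as the difference between male and female usage rates as a share of the male rate. A minimal sketch of that calculation (the example rates below are illustrative, not figures from the report):

```python
def mobile_gender_gap(male_rate: float, female_rate: float) -> float:
    """Relative gender gap: how much less likely women are than men
    to use mobile internet, as a proportion of the male usage rate."""
    if male_rate <= 0:
        raise ValueError("male usage rate must be positive")
    return (male_rate - female_rate) / male_rate

# Illustrative: if 50% of men and 42.5% of women use mobile internet,
# the gap is (50 - 42.5) / 50 = 0.15, i.e. a 15% gender gap.
print(f"{mobile_gender_gap(50.0, 42.5):.0%}")
```

Because the gap is relative, the same percentage-point difference yields a wider gap in regions where overall adoption is lower, which is one reason Sub-Saharan Africa and South Asia show much larger gaps than Latin America.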
And I really wanted to highlight this trend because it shows that progress in closing the mobile gender gap is fragile and not guaranteed. What we really need is concerted, targeted effort. By having this data and showing the trends, we can see how different global events affect it. But while the recent narrowing is a promising shift compared to two years ago, it is absolutely not clear that women’s adoption will continue to increase and the gender gap will continue to narrow. And I think it’s also important to note that even though we’re back to 2020 levels, it is still quite a wide gender gap. According to our data, there are still 265 million fewer women than men using mobile internet in low- and middle-income countries. And I’m talking specifically about mobile because that’s the primary way most people in these regions access the internet. So now I’m going to go into what this means in different regions, because there are a lot of differences in the gaps depending on the country you’re in, the region you’re in, and where you are within a country. That overall 15% masks really big regional and national gender gaps. The majority of the women who are not using mobile internet, 60%, live in Sub-Saharan Africa and South Asia, and these are the regions with the biggest gender gaps: 31% in South Asia and 32% in Sub-Saharan Africa, compared to 0% in Latin America and the Caribbean. A lot of the reduction in the overall gap has been driven by South Asia; in Sub-Saharan Africa, there have been some changes year on year, but no really big difference in the gender gap from what it was in 2017: it was 34% in 2017 and is 32% now.
I think that also highlights that it can be difficult to make big differences, and, as I highlighted on the previous slide, any reductions are not guaranteed. And as was mentioned previously, these gaps also vary within countries: in rural areas, the gender gaps are much higher than in urban areas. But it isn’t just about whether women are adopting the internet; it is about whether they are able to use it to meet their life needs. And again, we see that there are big gaps. We have created a sort of high-level journey, recognizing it isn’t necessarily linear, from owning a phone, to being aware of the internet, to using it, to using it for diverse purposes. What we see is that the gender gaps widen at every stage. So even where there might not be a gender gap in mobile internet adoption, there is typically a gender gap in regular, diverse use of the internet. So it’s important to understand these. We see in our survey that once men and women become internet users, the vast majority tend to use it every day, but often for a limited range of purposes. In fact, in our recent research report, we asked whether people would like to use it more, and women were more likely than men to report that they would like to use it more than they currently do. This was true for more than half of mobile internet users in some of the countries we surveyed, like Ethiopia, Kenya, Bangladesh, India, and Pakistan. So in terms of addressing these gaps, it’s really important to understand what stops men and women from adopting and using the internet, and it’s not a surprise that our data is very consistent with the research from Research ICT Africa.
What we’re being told in our surveys is that once women are aware of mobile internet, the top barriers stopping them from adopting it are affordability, primarily of internet-enabled handsets, and lack of literacy and digital skills. These are the same barriers men face, but more women face them than men, because women are more likely to be offline, and they experience these barriers more acutely due to social norms and structural inequalities, like disparities in education and income. Then last year, for the first time, we asked what barriers stop men and women from using it more. Those barriers aren’t as clear-cut. For mobile internet adoption, handset affordability and literacy and skills are very clearly the top barriers across countries; for further use, it’s not as clear-cut and varies more by country. But overall, safety and security was a top reported barrier; in fact, it was one of the top three barriers in all the surveyed countries. Concerns here include the reliability of information found online, scams, fraud, information security, unwanted contact from strangers, and fear of being exposed to harmful content. The second was affordability, primarily of data but also of handsets. To build on what was said before, we had also previously done research looking at female micro-entrepreneurs and what stops them from using mobile. Similar to Research ICT Africa’s findings, female micro-entrepreneurs are much less likely than male micro-entrepreneurs to use mobile for their businesses. And even when they were using some of these services in their personal lives, they often were not using them for business, because they weren’t even aware that they could be.
So again, this highlights that these barriers differ depending not only on the country but on your context and who you are, and we shouldn’t be painting women as a homogeneous group. So that’s the high level. We have a lot more data in our report, and I’ll share that, as well as the report we published on female micro-entrepreneurs. But to get to what we think we can do: at a very high level, and again we have more recommendations, we really need to focus on this and set real targets. As I showed on the trend slide, we can’t be complacent; it’s not guaranteed that the gender gaps are going to continue to decrease. And as has been highlighted many times, gender-disaggregated data is absolutely critical to measuring and informing the policies, investments, and action needed to do this. So while a number of us have surveys, as we’ve been saying, a lot more data is needed to understand women’s mobile use, their needs, their barriers, and how these differ in different contexts and for different groups of women. We definitely need more data. And when it comes to designing products, services, and policies, we really need to consider women’s needs, circumstances, and the different barriers they face. For example, affordability can be improved by policies and initiatives that lower the upfront costs of internet-enabled handsets, such as lowering mobile-specific taxes or handset-financing initiatives. The same goes for literacy and skills: there is a lot that can be done to address that. But our experience is that these barriers need to be addressed holistically; you need to think about affordability, skills, social norms, and all those things together when you’re trying to tackle it.
And finally, we all need to work together and partner with different stakeholders; no one group can do this on their own. So, to conclude, I think we need more informed, targeted action and investment by stakeholders. I shared earlier that the gap is big and not always reducing, so I wanted to end on a slightly more positive note: it is possible to make a difference. We do feel that informed, targeted action can make a difference. We have mobile operators who have made Connected Women commitments to reduce the gender gap in their mobile internet and mobile money customer bases, set clear targets for doing so, and then tackled these barriers. Since 2016, when these commitments first started to be made, they have succeeded in reaching over 70 million additional women with mobile internet and mobile money services. So with the data and that targeted action, it is possible to make a difference. I hope we can continue the conversation here on what we all need to do. I will stop sharing now. That was me.

Moderator: Thank you very much, Claire. That was very insightful as well. So we had three presentations, three different contexts, three different ways of collecting data, all basically arriving at the same issues. First of all, are there any questions for direct clarification to Claire? If not, I suggest we move to a general debate. I would like to ask the audience if there are any issues they would like to raise, any questions or suggestions on how to make the step from data to policy, or any policy examples, including online. Yes, there’s a question there. Can we get a microphone there? If you can, please also introduce yourself.

Audience: Hello, my name is Papa Seck, from UN Women, Research and Data. Hi, Martin. Good to see you again. Good to see you, too. Really interesting presentations and a lot to digest. Just a couple of questions from me. One is just to understand, because each of you presented work that you’re doing, how much of this is linked to official data through the national statistical system. Do you work with the national statistical offices in the countries where you work at all? And if so, what has been your experience? I’m asking because our experience working with NSOs has had good things and bad things as well, and sometimes using new data sources, including citizen-generated data, has really helped. So I was just wondering what your experience has been. And the second point: have you seen in your research, or did you collect data on, monitoring behavior? This is linked to violence against women and controlling behavior, where, for example, a husband may want to control what his wife does online. We’ve seen this when we try to measure violence against women, online and offline, as part of monitoring behavior. I was just wondering whether this was something you’ve looked at in your studies, and if so, what did you find? Thank you.

Moderator: Thank you very much. Those are two excellent questions. I have something to contribute on the first question, but I will first go through our panel, maybe starting the other way around. So Claire, maybe you can tell us how you collect the data and whether there are any interactions with the NSO. We’ll take the questions one by one: first the first question, then the second.

Claire Sibthorpe: I’ll answer. So on the first one: the data I’ve shared is from our own nationally representative surveys. We don’t collect data from the statistics offices, but we do refer to and look at the ITU data referenced earlier, which does come from governments. So we don’t directly engage with those departments.

Moderator: Thank you, Fabio.

Fabio Senne: Thank you for your question. Yes, in our case, the NSO participates in our expert groups. We developed a model that I think is very interesting for highlighting these issues: a multi-stakeholder model for defining what to measure and what types of indicators are needed. So we have groups in which the NSO participates. The ICT statistics are produced by Cetic, but the NSO also collects a few indicators and some data on ICTs itself. So we work in partnership, and the NSO validates and participates in the data we produce.

Relebohile Mariti: Thank you. What we do is work with national statistical offices to collect data. We go to individuals and households, as well as micro-enterprises, to learn how they use these technologies and what barriers they face. In our experience working with national statistical offices, we had good working relationships in some countries, and, like any other journey, there are countries where we encountered challenges. So I think that is the overall experience. And who’s conducting the surveys? The national statistical offices hire field workers to do the surveys.

Moderator: Okay, thank you very much. From the ITU perspective, we actually do not do surveys ourselves. We only collect the data that are collected by the countries, and usually it is the NSO; in more and more countries, it’s the regulator that is either funding the survey or, in some cases, even conducting it. There are advantages and disadvantages. The advantage is that the regulator usually has money, because they have funding from the operators, so they can fund the survey; but you need the experience of an NSO for the sample design and all the statistical work. It’s absolutely crucial that you work with NSOs, or that NSOs do the work, because where we don’t get data, we make estimates. But we only make estimates for the high-level number of people using the internet, with the gender breakdown; we cannot go into any detail about what people are doing online, what barriers there are, or what skills they have. It’s impossible to make estimates for that, and even if we did, the estimates would be bad. And we see how important it is to go beyond the headline number and into the disaggregated numbers. So that really is a crucial point, and we need countries to get involved and NSOs to get funded for surveys. Now, on the second question, let me again go through the panel and see if they have addressed online violence in their surveys. The other way around this time; Rele, you can start.

Relebohile Mariti: Okay, thank you. And on the last question, just to clarify: when I say the national statistical offices do the hiring, they run the hiring process, but Research ICT Africa pays for the labor costs. On online experiences, we did reports for individual countries, and in some countries we find that males are more likely than females to experience online threats; but what we find is that this is linked to what kind of information they share online. In countries where females are less likely to share their real name, their gender, and their political views, that is where we find males more likely than females to experience online threats. But in countries where there are no differences in what information individuals share on social media, we find that females are more likely than males to experience online threats. Thank you.

Fabio Senne: Thank you. In our case in Brazil, we don’t have a specific survey on this topic, although we have a few qualitative studies with more in-depth analysis. It’s a very difficult topic to include in a typical survey, because respondents tend to underestimate the rates of those cases, so we need to combine qualitative and quantitative analysis. We do have experience with children: since 2012 we have run the Kids Online survey, a survey of children aged 9 to 17, and in this case we have a strategy for sensitive data collection using self-administered questionnaires with children, so we have some data on things related to violence online involving children. Regarding gender, we see some trends. In the case of Brazil, we don’t see many differences in the risks children feel online, for instance regarding sexual violence, but when you look at parental mediation, you can see that parents are more worried about girls than boys: rates of mediation are higher for girls in the country, and girls also report more cases of feeling bad online or having a problematic experience of usage. So we have some experience with children, but not with the overall adult population.

Claire Sibthorpe: Thank you. Claire, do you have anything on this subject? Yes, we do. We have a whole series of questions that we ask, both on safety and security issues and on whether this is a concern stopping you from going online or from using the internet more. We also have questions on whether family approval is an issue in terms of going online. And last year we asked some extra questions to look at this. We see that safety and security is a concern for both men and women, and it becomes more of a concern once you are online. So we asked last year whether people had personally experienced safety and security issues, or whether it was a concern, and there weren’t any big gender differences. But I would say there are two aspects to this in terms of preventing women from going online. There is the reality of it happening, and you can see from our last report how many women versus men report this actually happening to them; and there is the concern that it will happen. In our research, we’ve seen that some of the gatekeepers or family members are concerned about their wives or daughters going online because of these risks. So it’s both the concern and the reality, but it is certainly an issue that is stopping women from going online, and some of these concerns are limiting their online use, which is restricted in certain ways in an effort to address this perceived risk. I should also say we did some research a number of years ago that looked at digital skills, and we found that women were much less likely to know how to protect themselves online: they didn’t know there were things like privacy settings on Facebook and such, and their skill levels in some of these ways of keeping themselves protected online were lower than for men. So that’s also an issue. Thank you.

Moderator: The only thing I would like to add on this question is that I see some issues in actually asking the question, especially in a household survey, where the survey may pass by the head of household, who is actually controlling it. So either you don’t get answers, or you may not get the correct answers, because the women may not feel free to say whatever they want. That is always a difficulty with this type of sensitive issue: you have to find a way to reach your target population in a way that they are free to answer. But it’s a very important subject, worth investing in. Are there any other questions in the room? Please… Thank you very much.

Audience: I would like to ask whether your research was based only on adults’ access to mobile phones, or also on children’s. Because in most southern countries, within households you find that children have access to smartphones even more than their parents, so their access to the internet may be higher than their parents’. So I’d like to know if your research was only for adults.

Moderator: Yes, very good question. Fabio, since you have the mic, you may answer it.

Fabio Senne: Yes. Well, in our case, the survey I showed covers individuals aged 10 and older. We screen the household and randomly select one person living in the household who is 10 years old or older. That is what this data covers. We also have data on what younger children are doing online, reported by adults: we ask adults whether the children are using the internet, so we also have estimates for the youngest population. But you’re right that young people use the internet more intensively than adults, and we can also disaggregate all of this information by age group.

Relebohile Mariti: Okay, thank you. In our case, we only conducted interviews with individuals who are at least 15 years old.

Moderator: Claire?

Claire Sibthorpe: Yeah, for our Mobile Gender Gap Report, it is adults. But I’ve just put a link in the chat: we have our State of Mobile Internet Connectivity Report, which looks at connectivity and does separate out adults versus those who are under 18, and also covers shared phone access and such. So it has a bit more detail on that point.

Moderator: And in the case of the ITU, we collect whatever the countries are doing, so we depend on the surveys done in countries. We have a manual, and we recommend starting at the age of five, because, as we know, children have different attitudes and behaviors than older people. But there are also legal issues in some countries, where you can only start surveying people at 15 or 16. In the EU, for example, the age range is often 15 or 16 to 74, although countries are now voluntarily also going below and above that. Certainly, 15- to 24-year-olds are much more likely to use the internet than other age groups; below 15, it’s actually different. So there is really a point where all the children are online and a point where not all the children are online, and it’s very important to cover all those age groups so that you can see this in your data. Thank you for that question. Any other questions? Still no questions online. Then let me ask a question, because we keep saying that we need this data, and we do need it for policy purposes, and Claire gave a few directions for where policy should go. But it would also be good to know whether the data are actually used by policymakers, or whether there are barriers there. Maybe Brazil is a good case here, because you do work with policymakers, I understand. How is the interaction with policymakers? How are your data used by them, and how do policymakers ask you for specific data? Fabio.

Fabio Senne: Thank you, Martin. Yes, we have this multi-stakeholder model that I think is useful, because we do need to conduct surveys that are really useful for policymaking. So what we do, before going to the field in each iteration of the survey, is to convene an expert group composed of policymakers, academics, the private sector, and civil society. And they go deep into the survey. Some of the indicators, of course, we keep because they are recommended by the ITU or by international standards, so we can have a time series of fixed indicators. But we also develop new indicators on new aspects that are relevant for policymakers in the field, so that we can be more responsive. I think this is useful not just because the results are used more by the policymakers, but also because it gives legitimacy to the survey process and helps with the funding part of it: the more they think the results are useful, the more you can argue that there should be a guaranteed fund for keeping the survey running. And I think another important part of this is being very dynamic, because the field changes very dynamically, and what people do online changes. We have to be very fast in incorporating these new trends in the survey so that we can give results that are more relevant to the policymakers. That’s why this more participatory way of consulting the communities is good for keeping the survey relevant.

Moderator: Thank you. Rele, do you work with policy makers or how are the data used by policy makers?

Relebohile Mariti: So once we are done with the reporting, we do presentations and disseminate our findings to the policymakers in their respective countries. And they have shown interest in using the data. After we have done the presentations, they get to know what policy interventions they can implement to address challenges in their respective countries.

Moderator: Thank you. Claire, I know you have some issues in actually hearing us, and you already mentioned some of the streams of policy interaction, but maybe you can elaborate a bit more or repeat what you said before about the interaction with- I think the question was, how do we engage with policymakers on this issue, if I’m correct?

Claire Sibthorpe: So, yeah, obviously we share our data with anybody who wants it. We’re keen to make sure people are doing evidence-based policies and programming, and we support governments on their policies. We run free training courses with policymakers on both the digital gender divide and digital inclusion in general, where we, again, share our data and recommendations. We have a whole report which outlines policy recommendations specifically in this space. So we’re very engaged, and we’re very keen that the data we are lucky enough to be able to collect is available and accessible as much as possible to all stakeholders working in this space, so that we’re all evidence-based in our work.

Moderator: Thank you. That’s excellent. I still see no questions, so I’m going to ask one last question, and then we will wrap up. It’s been mentioned already a few times how important it is to have this data. And yet we see so many gaps in data collection, especially in low- and middle-income countries. As I mentioned, the NSOs are usually underfunded, and ICT statistics are not a priority for them. So two questions. First, how did you manage to fund your data collection? And second, how can we make this attractive to donors, or how can we increase funding, especially to NSOs, to do this kind of data collection? I think I will start with the positive example here. So Fabio, I’m going to hand it to you first, and then to the other two panelists.

Fabio Senne: Thank you, Martin. Just to present the Brazilian model: the center where I’m from, Cetic.br, is funded by NIC.br, which is the registry for the .br country code domain. So we use the funds that come from .br to provide society with more information, including surveys on the use of the internet. This is a unique model, and I think other country code domain registries also invest in surveys, but this is something that can be done. But I do think that, as I mentioned, keeping the relevance of the indicators is very important. It’s very interesting to see that there are now many new strategies to measure the AI readiness or AI capabilities of countries, and all of them, in a sense, include data on connectivity and on the capability of people to be online and use the internet. So the agenda changes, but we do need this very basic information on how people are dealing with these technologies. Now that we have generative AI, we also expect to see some inequalities in the use of this type of tool, if you have surveys and data like this. So keeping the surveys relevant and promoting more stakeholder engagement throughout the process are, I think, good solutions and good ideas for making this type of data more available. Thank you.

Moderator: Maybe I first go to Claire.

Claire Sibthorpe: Sure. For our data: we feel that this data is really, really needed, and we can’t really know what we’re doing or support our members without having it. So, because of the lack of this data, especially gender-disaggregated data, GSMA is funding our consumer survey in the core countries that allow us to do the modeling of the usage gap and gender gaps, because we absolutely need this data. And then we’re fortunate to have support from some of our donors to add countries that they’re interested in, so we can compare across more countries and also do the modeling of these gender gaps. That’s the UK Department for International Development and UK Aid, and the Swedish International Development Agency, which support us in adding additional countries and doing this modeling in the reports we publish, with the core countries funded by GSMA. But, you know, I think it would be great if this data was just there, done in more countries and by more people. We all need this data.

Moderator: Yes, indeed. Thank you very much. Rele?

Relebohile Mariti: So Research ICT Africa is a donor-funded organization, and the latest round of this survey was funded by the World Bank and the Bill and Melinda Gates Foundation. So there is really a need for investment in this kind of data. To make it attractive to funders, I believe what we can do is to stress the importance of this data and also show how it can be used to create value. And again, as Fabio has already said, there is a need for collaboration and for making the data easily accessible. The data that we have is publicly available on DataFirst for everyone to access. Thank you.

Moderator: Thank you very much for that. I can add something from the ITU perspective. We collect data from countries; we don’t run or fund our own surveys, and we’re not a funding agency. We have tried a couple of times to see if donor agencies are interested in funding surveys in countries, but they always say that the request has to come from the countries, not from the ITU, and that’s reasonable. But it is important that there is also an intrinsic demand for the data from the countries themselves, from the policymakers, as we discussed. We now have a project funded by the EU called Promoting and Measuring Universal and Meaningful Connectivity. In that project, we’re really trying to make the connection between the policymakers and the statisticians. We do a lot of workshops and a lot of advocacy. We explain what UMC, universal and meaningful connectivity, is, how to measure it, and also why it’s important for policymakers. And we’re trying to get a dialogue going at the country level between the policymakers and the statistics people. So hopefully that will help as well. Before wrapping up, I’m going to give a last chance to the audience or online for a last question. I see none. So before thanking the panel, I would like to conclude that there is a difference between genders in how men and women access the internet, how they use it, and how many use it. And once they are online, men seem to make more of it than women, doing more activities. How do we know that? We know that because of the surveys done in countries. And if we want to address it, if we want to move to a world where there’s gender equality in access and use of ICTs, we need to have the data to be able to address it. So we need more data, we need better data, we need survey data. And once we have the data, we need to analyze it at a detailed level, like we’ve seen in the presentation from Brazil, and everyone can do that.
It’s not a difficult analysis, but you need to have the data. So the data is fundamental. There needs to be more funding for data, from donors but also from countries themselves, so that they see the importance of the data and some government funding can go to data collection. There are different models and different ways of getting that, and we heard some of them here. But fundamentally, countries need to have the data. With that, I would like to warmly thank the panelists: Rele from Research ICT Africa, Fabio from Cetic.br, Claire from GSMA, Zahra for the online moderation, and the organizers here. Let’s give a big hand for everyone, and thank you very much, everyone. Thank you.

M

Moderator

Speech speed

137 words per minute

Speech length

2844 words

Speech time

1236 seconds

Persistent gender gaps exist in internet access and usage globally

Explanation

The moderator highlights that there are ongoing disparities between men and women in terms of internet access and use worldwide. This is presented as a key issue in the discussion on digital inclusion.

Evidence

Data showing 63% of the male population using the internet globally, compared to 57% of the female population.

Major Discussion Point

Gender gaps in internet access and usage

Agreed with

Relebohile Mariti

Claire Sibthorpe

Fabio Senne

Agreed on

Gender gaps exist in internet access and usage

Detailed, gender-disaggregated data is crucial for understanding gaps

Explanation

The moderator emphasizes the importance of collecting and analyzing gender-specific data on ICT usage. This data is essential for identifying and addressing disparities between men and women in digital access and use.

Major Discussion Point

Importance of gender-disaggregated ICT data

Agreed with

Relebohile Mariti

Claire Sibthorpe

Fabio Senne

Agreed on

Importance of detailed, gender-disaggregated data

Funding for ICT statistics is often limited, especially in developing countries

Explanation

The moderator points out that there is a lack of financial resources for collecting ICT statistics, particularly in less developed nations. This funding shortage hinders the ability to gather comprehensive data on digital access and usage.

Major Discussion Point

Challenges in ICT data collection

More investment needed in gender-disaggregated ICT data collection

Explanation

The moderator calls for increased funding and resources to be allocated to collecting gender-specific ICT data. This investment is seen as crucial for understanding and addressing digital gender gaps.

Major Discussion Point

Policy implications and recommendations

R

Relebohile Mariti

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Gender gaps are wider in low-income countries and rural areas

Explanation

Relebohile Mariti points out that the disparity in internet access and usage between men and women is more pronounced in less developed nations and rural regions. This highlights the intersection of gender inequality with other socioeconomic factors.

Evidence

Data showing lower levels of internet access in countries like Ethiopia, Nigeria, and Uganda, with significant gender gaps.

Major Discussion Point

Gender gaps in internet access and usage

Agreed with

Moderator

Claire Sibthorpe

Fabio Senne

Agreed on

Gender gaps exist in internet access and usage

Affordability of devices and data is a key barrier

Explanation

Mariti identifies the cost of devices and internet data as a major obstacle to internet access and usage, particularly for women. This economic barrier contributes significantly to the digital gender gap.

Evidence

Survey results indicating that a significant share of individuals who do not have access to smartphones cite the devices as too expensive.

Major Discussion Point

Barriers to women’s internet access and usage

Agreed with

Claire Sibthorpe

Agreed on

Affordability as a key barrier

Differed with

Claire Sibthorpe

Differed on

Primary barriers to internet access and usage

Lack of digital skills and awareness is a major obstacle

Explanation

Mariti highlights that many individuals, especially women, lack the necessary digital literacy and awareness to effectively use the internet. This skills gap is a significant barrier to meaningful internet usage.

Evidence

Survey data showing that a substantial portion of respondents, particularly women, report not knowing how to use the internet or what it is.

Major Discussion Point

Barriers to women’s internet access and usage

Nationally representative surveys provide key insights

Explanation

Mariti emphasizes the importance of conducting comprehensive, nationally representative surveys to gather accurate data on internet access and usage. These surveys offer crucial insights into digital disparities and trends.

Evidence

Description of the After Access Surveys conducted across multiple African countries, providing detailed data on digital technology adoption and use.

Major Discussion Point

Importance of gender-disaggregated ICT data

Agreed with

Moderator

Claire Sibthorpe

Fabio Senne

Agreed on

Importance of detailed, gender-disaggregated data

Policies should focus on affordability, skills, and creating enabling environments

Explanation

Mariti recommends that policymakers prioritize making internet access more affordable, improving digital skills, and fostering environments that encourage internet adoption and use. These areas are seen as key to addressing the digital gender gap.

Major Discussion Point

Policy implications and recommendations

C

Claire Sibthorpe

Speech speed

164 words per minute

Speech length

2776 words

Speech time

1012 seconds

Progress in closing mobile internet gender gap is fragile and not guaranteed

Explanation

Sibthorpe warns that advancements in reducing the gender gap in mobile internet usage are not stable or assured. This highlights the need for ongoing efforts and vigilance in addressing digital gender inequalities.

Evidence

Data showing fluctuations in the mobile internet gender gap over time, including a widening of the gap after initial progress.

Major Discussion Point

Gender gaps in internet access and usage

Agreed with

Moderator

Relebohile Mariti

Fabio Senne

Agreed on

Gender gaps exist in internet access and usage

Gender gaps widen at every stage of internet adoption and usage

Explanation

Sibthorpe points out that gender disparities become more pronounced at each level of internet engagement, from basic access to advanced usage. This suggests that addressing the gender gap requires interventions at multiple stages of the digital journey.

Evidence

Data showing increasing gender gaps in internet adoption, regular use, and diverse use of online services.

Major Discussion Point

Gender gaps in internet access and usage

Agreed with

Relebohile Mariti

Agreed on

Affordability as a key barrier

Safety and security concerns limit women’s internet use

Explanation

Sibthorpe identifies safety and security issues as significant factors restricting women’s internet usage. These concerns include fears about online harassment, privacy breaches, and exposure to harmful content.

Evidence

Survey results indicating safety and security as top reported barriers to internet use, especially for women.

Major Discussion Point

Barriers to women’s internet access and usage

Differed with

Relebohile Mariti

Differed on

Primary barriers to internet access and usage

Social norms and structural inequalities exacerbate barriers for women

Explanation

Sibthorpe highlights how existing societal norms and systemic inequalities compound the challenges women face in accessing and using the internet. These factors intensify the impact of other barriers like affordability and lack of skills.

Major Discussion Point

Barriers to women’s internet access and usage

Data needed to inform evidence-based policies and interventions

Explanation

Sibthorpe emphasizes the crucial role of data in shaping effective policies and interventions to address the digital gender gap. She argues that evidence-based approaches are essential for creating meaningful change.

Evidence

Examples of how GSMA data has been used to inform policy recommendations and support mobile operators in reducing gender gaps.

Major Discussion Point

Importance of gender-disaggregated ICT data

Agreed with

Moderator

Relebohile Mariti

Fabio Senne

Agreed on

Importance of detailed, gender-disaggregated data

Targeted interventions needed to address specific barriers women face

Explanation

Sibthorpe advocates for tailored approaches to tackle the unique obstacles that prevent women from fully engaging with digital technologies. This suggests a need for nuanced, context-specific solutions rather than one-size-fits-all policies.

Major Discussion Point

Policy implications and recommendations

F

Fabio Senne

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Meaningful connectivity shows larger gender gaps than basic access

Explanation

Senne points out that when examining more comprehensive measures of internet use, such as meaningful connectivity, the gender disparities are even more pronounced than in basic access statistics. This highlights the need to look beyond simple access metrics to understand true digital inclusion.

Evidence

Data from Brazil showing that while there may be no significant gender gap in basic internet access, there is a 10 percentage point gap in meaningful connectivity between men and women.

Major Discussion Point

Gender gaps in internet access and usage

Agreed with

Moderator

Relebohile Mariti

Claire Sibthorpe

Agreed on

Gender gaps exist in internet access and usage

Multi-stakeholder approach helps ensure relevant data collection

Explanation

Senne advocates for involving various stakeholders, including policymakers, academics, and civil society, in the data collection process. This approach helps ensure that the data collected is relevant and useful for policy-making and addressing real-world issues.

Evidence

Description of Brazil’s multi-stakeholder model for ICT surveys, which involves consulting with various groups to determine survey content and indicators.

Major Discussion Point

Importance of gender-disaggregated ICT data

Agreed with

Moderator

Relebohile Mariti

Claire Sibthorpe

Agreed on

Importance of detailed, gender-disaggregated data

Keeping surveys relevant as technology rapidly changes is difficult

Explanation

Senne highlights the challenge of maintaining the relevance of ICT surveys in the face of rapid technological advancements. This requires constant adaptation of survey methodologies and questions to capture new trends and uses of technology.

Major Discussion Point

Challenges in ICT data collection

Collecting data on sensitive topics like online violence requires careful approaches

Explanation

Senne points out the difficulties in gathering accurate data on sensitive issues such as online violence. This requires specialized methodologies and considerations to ensure respondents feel safe and comfortable providing honest answers.

Evidence

Example of using self-administered surveys for children to collect data on sensitive online experiences.

Major Discussion Point

Challenges in ICT data collection

Multi-stakeholder collaboration is key for effective policymaking

Explanation

Senne emphasizes the importance of collaboration between various stakeholders in developing effective digital inclusion policies. This collaborative approach ensures that policies are informed by diverse perspectives and address real-world needs.

Major Discussion Point

Policy implications and recommendations

Agreements

Agreement Points

Gender gaps exist in internet access and usage

Moderator

Relebohile Mariti

Claire Sibthorpe

Fabio Senne

Persistent gender gaps exist in internet access and usage globally

Gender gaps are wider in low-income countries and rural areas

Progress in closing mobile internet gender gap is fragile and not guaranteed

Meaningful connectivity shows larger gender gaps than basic access

All speakers agree that significant gender gaps exist in internet access and usage, with these gaps being more pronounced in developing countries, rural areas, and when considering meaningful connectivity rather than just basic access.

Importance of detailed, gender-disaggregated data

Moderator

Relebohile Mariti

Claire Sibthorpe

Fabio Senne

Detailed, gender-disaggregated data is crucial for understanding gaps

Nationally representative surveys provide key insights

Data needed to inform evidence-based policies and interventions

Multi-stakeholder approach helps ensure relevant data collection

All speakers emphasize the critical importance of collecting and analyzing detailed, gender-disaggregated data to understand digital gaps and inform effective policies and interventions.

Affordability as a key barrier

Relebohile Mariti

Claire Sibthorpe

Affordability of devices and data is a key barrier

Gender gaps widen at every stage of internet adoption and usage

Both speakers identify affordability of devices and data as a significant barrier to internet access and usage, particularly for women.

Similar Viewpoints

Both speakers highlight how lack of digital skills, awareness, and social norms create additional barriers for women in accessing and using the internet effectively.

Relebohile Mariti

Claire Sibthorpe

Lack of digital skills and awareness is a major obstacle

Social norms and structural inequalities exacerbate barriers for women

Both speakers advocate for targeted, collaborative approaches involving multiple stakeholders to address the specific barriers women face in digital inclusion.

Claire Sibthorpe

Fabio Senne

Targeted interventions needed to address specific barriers women face

Multi-stakeholder collaboration is key for effective policymaking

Unexpected Consensus

Challenges in collecting data on sensitive topics

Fabio Senne

Claire Sibthorpe

Collecting data on sensitive topics like online violence requires careful approaches

Safety and security concerns limit women’s internet use

Both speakers unexpectedly highlight the challenges and importance of addressing sensitive topics like online violence and safety concerns in data collection and analysis, despite their different regional focuses.

Overall Assessment

Summary

The speakers show strong agreement on the existence of gender gaps in internet access and usage, the importance of detailed gender-disaggregated data, and the need for targeted interventions to address barriers. They also agree on affordability and lack of digital skills as key obstacles.

Consensus level

High level of consensus among speakers, implying a shared understanding of the challenges in digital gender inclusion and the importance of data-driven, collaborative approaches to address these issues. This consensus suggests potential for coordinated global efforts to bridge the digital gender divide.

Differences

Different Viewpoints

Primary barriers to internet access and usage

Relebohile Mariti

Claire Sibthorpe

Affordability of devices and data is a key barrier

Safety and security concerns limit women’s internet use

While Mariti emphasizes affordability as the main barrier, Sibthorpe highlights safety and security concerns as significant factors limiting women’s internet use.

Overall Assessment

Summary

The main areas of disagreement revolve around the primary barriers to internet access and usage, as well as the most effective approaches to data collection and policy development.

Difference level

The level of disagreement among the speakers is relatively low. They generally agree on the existence of gender gaps in internet access and usage, the importance of data collection, and the need for targeted interventions. The differences mainly lie in the emphasis placed on various factors and approaches, which could actually complement each other in addressing the digital gender gap comprehensively.

Partial Agreements

Partial Agreements

All speakers agree on the importance of data collection, but they differ in their approaches. Mariti emphasizes nationally representative surveys, Senne advocates for a multi-stakeholder approach, while Sibthorpe focuses on using data to inform evidence-based policies.

Relebohile Mariti

Fabio Senne

Claire Sibthorpe

Nationally representative surveys provide key insights

Multi-stakeholder approach helps ensure relevant data collection

Data needed to inform evidence-based policies and interventions

Similar Viewpoints

Both speakers highlight how lack of digital skills, awareness, and social norms create additional barriers for women in accessing and using the internet effectively.

Relebohile Mariti

Claire Sibthorpe

Lack of digital skills and awareness is a major obstacle

Social norms and structural inequalities exacerbate barriers for women

Both speakers advocate for targeted, collaborative approaches involving multiple stakeholders to address the specific barriers women face in digital inclusion.

Claire Sibthorpe

Fabio Senne

Targeted interventions needed to address specific barriers women face

Multi-stakeholder collaboration is key for effective policymaking

Takeaways

Key Takeaways

Significant gender gaps persist in internet access and usage globally, especially in low-income countries and rural areas

Key barriers for women include affordability of devices/data, lack of digital skills, and safety/security concerns

Gender-disaggregated ICT data is crucial for understanding gaps and informing evidence-based policies

Progress in closing the gender digital divide is fragile and not guaranteed

Meaningful connectivity shows larger gender gaps than basic access metrics

Multi-stakeholder collaboration is important for effective data collection and policymaking

Resolutions and Action Items

More investment is needed in gender-disaggregated ICT data collection

Policies should focus on addressing affordability, digital skills, and creating enabling environments for women’s internet access and use

Stakeholders should work together to keep ICT surveys relevant as technology rapidly changes

Unresolved Issues

How to sustainably fund ICT data collection, especially in developing countries

Best approaches for collecting data on sensitive topics like online violence against women

How to effectively engage policymakers to use ICT data for decision-making

Suggested Compromises

Partnering with national statistical offices, despite challenges, to conduct ICT surveys

Using multi-stakeholder models to fund and design ICT data collection efforts

Balancing the need for consistent indicators with incorporating new trends in ICT surveys

Thought Provoking Comments

Although we have 88% or 90% that had some access to the internet, when it goes to the meaningful connectivity, we can say that today in Brazil, only 22% of the population has a meaningful connectivity, and being 30% are in the 0 to 2 of this scale.

speaker

Fabio Senne

reason

This comment introduces the crucial distinction between basic access and meaningful connectivity, revealing a much larger digital divide than raw access numbers suggest.

impact

It shifted the discussion from focusing solely on access to examining the quality and usefulness of that access, prompting deeper analysis of digital inequalities.

We see that the gender gaps widen at every stage. So even if there might not be a gender gap in mobile Internet adoption, there is a gender gap in regular diverse use of the Internet, typically.

speaker

Claire Sibthorpe

reason

This insight highlights how gender gaps persist and even widen beyond initial adoption, revealing the complexity of digital inclusion.

impact

It expanded the conversation to consider not just access, but ongoing usage patterns and barriers, leading to discussion of more nuanced policy interventions.

So when looking at don’t know what the Internet is, we see that 23% of females say that they don’t know what the Internet is, and this is slightly lower for males at only 19%.

speaker

Relebohile Mariti

reason

This statistic provides concrete evidence of a fundamental awareness gap between genders, pointing to deeper societal issues.

impact

It prompted discussion on the need for basic digital literacy and awareness programs, especially targeted at women.

After COVID, you know, for two years after that kind of lockdown period ended, we saw that progress had stalled. There was a slowdown in digital inclusion for women and progress in the mobile internet gender gap had stalled because women were being very disproportionately negatively impacted by the immediate aftermath.

speaker

Claire Sibthorpe

reason

This observation highlights the fragility of progress in digital inclusion and how external events can disproportionately affect women.

impact

It led to discussion about the need for sustained, targeted efforts to close the digital gender gap and the importance of considering broader societal factors.

Overall Assessment

These key comments collectively shifted the discussion from a focus on basic internet access to a more nuanced examination of meaningful connectivity, persistent gender gaps, and the fragility of progress. They highlighted the complexity of digital inclusion issues, emphasizing the need for targeted interventions, sustained efforts, and consideration of broader societal factors. The comments also underscored the importance of detailed, disaggregated data in understanding and addressing digital inequalities.

Follow-up Questions

How can we increase funding for ICT statistics collection, especially for National Statistical Offices in low and middle-income countries?

speaker

Moderator (Martin)

explanation

This is important because many countries lack sufficient data on ICT usage, particularly gender-disaggregated data, which is crucial for evidence-based policymaking to address digital inequalities.

How can we make ICT statistics data collection more attractive to donors?

speaker

Moderator (Martin)

explanation

Securing funding is critical for conducting comprehensive surveys and ensuring consistent data collection over time to track progress in closing digital divides.

How can we better measure and address online violence and controlling behavior related to women’s internet use?

speaker

Audience member (Papa Second)

explanation

Understanding these issues is crucial for developing policies to ensure women’s safe and equitable access to digital technologies.

How can we improve data collection on children’s access to and use of mobile phones and the internet?

speaker

Audience member (unnamed)

explanation

Comprehensive data on children’s digital access and usage patterns is important for understanding overall household connectivity and developing targeted policies for youth.

How can we enhance collaboration between researchers, National Statistical Offices, and policymakers to ensure ICT statistics are relevant and used effectively?

speaker

Fabio Senne

explanation

Stronger partnerships can improve data quality, relevance, and utilization in policymaking to address digital inequalities.

How can we develop more sophisticated analysis techniques to uncover hidden inequalities in ICT access and use?

speaker

Fabio Senne

explanation

More nuanced analysis, such as the meaningful connectivity scale presented, can reveal disparities that are not apparent from basic access statistics alone.

What policy interventions are most effective in creating an even playing field for women’s participation in the digital economy?

speaker

Moderator (Martin)

explanation

Identifying successful policy approaches is crucial for replicating and scaling efforts to close gender gaps in ICT access and use.

Which countries have provided evidence of an enabling environment for equal participation in the digital economy, and what were the key success factors?

speaker

Moderator (Martin)

explanation

Learning from successful examples can inform policy development in other countries seeking to address digital inequalities.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #75 The Portuguese Speaking Community as a case study on digital


Session at a Glance

Summary

This discussion focused on digital cooperation among Portuguese-speaking countries, highlighting the importance of the Portuguese language in the digital world. Participants from various Portuguese-speaking nations and organizations shared insights on collaborative efforts to promote digital inclusion, enhance digital skills, and strengthen the presence of Portuguese online.

Key themes included the significance of Portuguese as one of the most widely spoken languages globally, particularly in the Southern Hemisphere, and its potential as a unifying force for digital cooperation. Speakers emphasized the need to address digital divides, both within and between Portuguese-speaking countries, through capacity building, knowledge sharing, and technology transfer.

The discussion highlighted several initiatives, such as the Lusophone Internet Governance Forum and the Association of Portuguese Speaking Registries (LUSNIC), which aim to foster collaboration and promote Portuguese language content online. Participants stressed the importance of developing technologies, including AI and large language models, in Portuguese to ensure cultural representation and combat linguistic biases in emerging technologies.

Regulatory cooperation was also addressed, with examples of how agencies like ANACOM and ANATEL are working together to share best practices and develop common approaches to digital governance. The speakers emphasized the value of multi-stakeholder engagement and the need for inclusive policies that consider diverse perspectives within the Lusophone community.

The discussion concluded with a call for continued and strengthened cooperation among Portuguese-speaking countries in the digital realm, recognizing the shared language as a powerful asset for fostering innovation, economic development, and cultural exchange in an increasingly interconnected world.

Keypoints

Major discussion points:

– The importance of Portuguese as a common language for digital cooperation among Lusophone countries

– Efforts to enhance digital skills and reduce access inequalities in Portuguese-speaking nations

– Collaboration and knowledge sharing between regulators and organizations in Lusophone countries

– The need to increase Portuguese language content and representation in emerging technologies like AI

– Promoting inclusivity and addressing the digital divide in the Global South

Overall purpose/goal:

The discussion aimed to highlight digital cooperation initiatives among Portuguese-speaking countries and explore how this linguistic and cultural community can work together to address shared challenges in the digital realm.

Tone:

The tone was largely positive and collaborative throughout. Speakers expressed enthusiasm about existing partnerships and a shared desire to strengthen cooperation in the future. There was a sense of pride in the Portuguese language as a unifying force and asset for the community. The tone became more urgent when discussing the need to address inequalities and increase representation in emerging technologies.

Speakers

– Moderator: Panel moderator

– Sandra Maximiano: Chairwoman of the Board of Directors of ANACOM

– Luísa Ribeiro Lopes: President of the board of directors of DNS.pt Association

– Bianca Kramer: Counselor of CGIBR, visiting professor and research lead at the Center of Technology and Society

– Marta Moreira Dias: Board member of .PT and president of LUSNIC

– Mozart Tenório: Advisor to the presidency of ANATEL, Brazil’s national telecommunication agency

– David Gomes: Executive Secretary of ARCTEL-CPLP, Senior Advisor of the Multisectoral Regulatory Authority of Cape Verde

Additional speakers:

– Leonilde Santos: Chairwoman of ARCTEL-CPLP (appeared via video message)

Full session report

Digital Cooperation Among Portuguese-Speaking Countries: A Comprehensive Overview

This discussion focused on digital cooperation among Portuguese-speaking countries, highlighting the importance of the Portuguese language in the digital world. Participants from various Lusophone nations and organisations shared insights on collaborative efforts to promote digital inclusion, enhance digital skills, and strengthen the presence of Portuguese online.

Importance of the Portuguese Language

A key theme throughout the discussion was the significance of Portuguese as one of the most widely spoken languages globally. Bianca Kramer, Counselor of CGIBR, noted that Portuguese is the fifth or sixth most spoken language worldwide, while Marta Moreira Dias, Board member of .PT and president of LUSNIC, stated that it is the fifth most spoken. The speakers agreed that the Portuguese language serves as a unifying force for collaboration and an asset for economic development.

Digital Cooperation Initiatives

The discussion showcased several initiatives aimed at fostering collaboration among Portuguese-speaking countries:

1. LUSNIC (Association of Portuguese Speaking Registries): Established in 2015, LUSNIC includes member countries such as Portugal, Brazil, Angola, Mozambique, Cape Verde, and Sao Tome and Principe. It focuses on sharing best practices and governance models among ccTLD registries.

2. Lusophone Internet Governance Forum: Marta Moreira Dias highlighted this as a key collaboration platform. The first event was held in Sao Paulo in 2023, with the second held in Praia, Cape Verde, in 2024.

3. ANACOM’s cooperation activities: Sandra Maximiano mentioned capacity-building programmes focusing on various regulatory topics and sharing of network assessment technologies.

4. ARCTEL-CPLP initiatives: David Gomes, Executive Secretary of ARCTEL-CPLP, emphasised the importance of updating the digital agenda for Portuguese-speaking countries and implementing the Sustainable Village for Development project.

These initiatives demonstrate a shared commitment to knowledge sharing and collaborative problem-solving within the Lusophone community.

Addressing Digital Divides

A significant portion of the discussion focused on efforts to reduce digital inequalities. Luísa Ribeiro Lopes cited a UNESCO report stating that while 93% of people in developed countries are connected to the internet, only 27% are connected in developing countries. This highlighted the urgency of addressing the digital divide.

Speakers agreed on several approaches to combat this issue:

1. Promoting digital skills to combat exclusion

2. Implementing projects to bring internet to underserved areas

3. Fostering inclusion and equal representation in ICT

4. Harmonising regulatory approaches across different regions

These strategies reflect a comprehensive approach to digital inclusion, addressing both infrastructure and skills development.

Future Opportunities and Challenges

The discussion also looked towards future opportunities and challenges for digital cooperation:

1. Developing AI and language models in Portuguese: Bianca Kramer emphasised the importance of this to ensure cultural representation and combat linguistic biases in emerging technologies, including examples of Cape Verdean Creole and Brazilian regional slang.

2. Addressing cybersecurity collaboratively: Mozart Tenorio, Advisor to the presidency of ANATEL, highlighted this as a key area for future cooperation, mentioning ANATEL’s involvement in ITU and CGI.br.

3. Promoting Portuguese content on the internet: Speakers stressed the need to increase the presence of Portuguese-language content online.

4. Strengthening cooperation amid global instability: Sandra Maximiano noted the importance of maintaining strong partnerships in the face of global challenges.

Regulatory Cooperation and Multi-stakeholder Engagement

The discussion touched upon the importance of regulatory cooperation and harmonisation of procedures across different economic regions. Throughout the event, speakers stressed the importance of multi-stakeholder engagement in digital cooperation efforts, ensuring diverse perspectives within the Lusophone community are considered when developing policies and initiatives.

Women’s Empowerment

A notable aspect of the discussion was the strong representation of women in leadership roles among the participants, reflecting a commitment to gender equality in the field of digital governance and cooperation.

Conclusion

The discussion concluded with a call for continued and strengthened cooperation among Portuguese-speaking countries in the digital realm. Speakers recognised the shared language as a powerful asset for fostering innovation, economic development, and cultural exchange in an increasingly interconnected world.

The overall tone was positive and collaborative, with speakers expressing enthusiasm about existing partnerships and a shared desire to strengthen cooperation in the future. As the digital landscape continues to evolve, the Lusophone community is poised to leverage its linguistic and cultural ties to foster inclusive growth and innovation, addressing both the progress made and the challenges ahead in the digital realm.

Session Transcript

Moderator: Okay, so good afternoon and welcome to the Open Forum 75, the Portuguese-speaking community as a case study on digital cooperation. This Open Forum intends to be a space of dialogue between different stakeholders from different regions on digital cooperation in the Portuguese-speaking countries’ community, which spans four different continents. The Open Forum will discuss this case study on how such digital cooperation is taking place and what meaningful results it has been delivering. So as speakers, we have a very interesting panel, I believe: Leonilde Santos, Chairwoman of ARCTEL-CPLP; Sandra Maximiano, Chairwoman of the Board of Directors of ANACOM; Luísa Ribeiro Lopes, Chairwoman of the Board of Directors of .PT; Marta Moreira Dias, Chairwoman of LUSNIC; Mozart Tenório, Advisor of the ANATEL Presidency; Bianca Kramer, Counselor of CGIBR; David Gomes, Executive Secretary of ARCTEL-CPLP. So with no further delay, I will pass the floor: we will see a video directly from the beautiful Cape Verde of Leonilde Santos. Leonilde Santos, as I said, is the Chairwoman of the Board of the Communications and Telecommunications Regulator of the Community of Portuguese Language Countries, ARCTEL-CPLP. She is also the chairwoman of the board of directors of the multisectoral regulatory authority of Cape Verde. So let’s watch the message from Leonilde. Thank you very much for coming. Thank you. Thank you.

Sandra Maximiano: Thank you. And now I had the subtitles in Portuguese, so that’s a pity. So I’ll give you some information as well that was in the video. But first of all, let me just tell you that, with my background as a professor, as an academic, we always work in cooperation. We never do research alone. And most of the scientific knowledge and advancements that we see nowadays come from cooperative work, from working together with researchers from all over the world. And it’s that same spirit that we, and myself in particular, try to bring to ANACOM, to foster cooperation with other countries, the Portuguese-speaking countries in particular, but also European countries and all the rest of the world as well. So let me just tell you that, of course, I want to emphasize this collaboration between Portuguese-speaking countries in fostering a robust digital ecosystem. The digital landscape is very complex and is rapidly evolving, and we have new challenges. Of course, artificial intelligence and cybersecurity bring new challenges for all of us. In this context, cooperation is not only beneficial, but essential, crucial. You cannot live without it. And the Portuguese-speaking world is vast and diverse. And together, of course, we share a common language which serves as a powerful bridge for communication and understanding. This shared linguistic heritage allows us to collaborate effectively, exchange ideas, and promote innovation and connectivity in a collective way. So it’s extremely important. So let me just tell you a little bit about what was in the video. The cooperation that ANACOM establishes is through bilateral and multilateral cooperation, of course, and mostly through cooperation protocols. And these cooperation protocols establish a mechanism for technical and institutional cooperation on different matters related to the activities of national regulatory bodies.
In particular, ANACOM has focused on cooperation with Portuguese-speaking countries and, as I said, with EU countries on many diverse matters. And we are really dedicated to fostering digital cooperation within the Lusophone community, and that can be seen through the various collaborative initiatives that were mentioned in the video. And these partnerships involve a wide range of activities, including capacity-building programs, and these capacity-building programs touch on many different issues like finance, human resource management, accounting, information exchange in diverse areas of expertise, technical support, technical field visits, institutional visits, and also the organization of high-level coordination meetings. And the capacity-building and information exchange programs cover a broad spectrum of topics: for instance, regulatory economics, statistics, consumer protection, security, equipment, spectrum management, and supervision, including coverage and quality of service measurement. So we have a wide range of topics that we work on in these capacity programs, and most of these programs are nowadays online, which allows a wide range of participation. But we still insist on and foster in-person programs: 30% of these capacity-building activities are conducted in person within the beneficiary countries. This approach allows a greater number of regulatory authority staff to participate in these exchange sessions. And every year, hundreds of staff members from regulatory authorities engage in this exchange. And we are extremely proud of these efforts. So, I’m not so sure if I’m over time or not, Manuel.

Moderator: Yeah, well, I think we are fine now. We can move on to the next speaker. Thank you very much, Sandra. So, Luísa Ribeiro Lopes. Luísa is currently president of the board of directors of the DNS.pt Association, the entity responsible for managing the national top-level domain. She is a member of the European Association of Systems. The floor is yours. I believe you have a presentation, presentation number three. I’m saying to people there, hopefully this time we won’t have an issue. Yeah, right. Now? Yes.

Luísa Ribeiro Lopes: Yeah, okay. First, I would like to begin by thanking the IGF and ANACOM for organizing this important panel. I also thank you for inviting me and .pt to join the debate about the Portuguese language on the Internet. This panel shows the significance of what we have all reaffirmed in recent years: the need to include Portuguese as an official language of the United Nations. This is essential for all of us, for our countries and for our organizations, as we work together to build a digital environment enriched by this important asset, the Portuguese language. Since 2007, .pt, along with other registries like .cv and .br present in this panel, has joined forces with other Portuguese-speaking countries to create an association with a different and inclusive vision for internet governance and for our top-level domains. We have seen this collaboration in action during the events held in Brazil last year and in Cape Verde this year with the Lusophone Internet Governance Forum, but about LUSNIC, my colleague Marta, the chair of this association, will share more details. For now, I’m here to present .pt and our commitment to improving digital skills and achieving gender balance in ICT. So, switching to my presentation: .pt is the registry of the top-level domain for Portugal on the internet. Our organization is a non-profit association representing the whole national digital ecosystem: the government, represented by FCT; the consumers, represented by the Portuguese Consumer Protection Association, DECO; and the digital companies, represented by the Digital Economy Association. We have, as you can see in the slide, a multi-stakeholder governance model with more than 20 entities from different economic, social, and cultural areas represented on the advisory board, such as ANACOM.
Our vision, yes, our vision, it’s a new vision from last month, approved by the General Assembly last month, is to promote the free and secure use of the Internet by providing services of recognized excellence to the community, partners and peers, while projecting Portugal’s identity in the international digital ecosystems. And for this, we have our principles, as you can see; our vision is aligned with the principles that we also advocate here in the Internet Governance Forum: security, accountability, trust, ethics, inclusion, and my presentation is all about inclusion, responsibility, independence, globalization, cooperation, innovation, and impact. In numbers: we now have more than 1.9 million domain name registrations under .pt, but .pt is not just numbers. We promote and partner with a lot of projects to improve the digital skills of the Portuguese population. We work with young people, we work also with workers who need more digital skills, helping them to upskill and reskill, as well as with older individuals to combat digital exclusion. And we are proud to be partners in initiatives centered around gender equality in ICT. We recognize the huge gap that exists for women in these fields, and we strive to create opportunities that foster inclusivity and equal representation. Because an image is worth a thousand words, we brought a small video from .pt. .pt is more than 200 locations around the world, which means that, regardless of its location, there will always be a server nearby. With .pt, we want to be more digital and more inclusive. And that’s why we also say that Barra Barra, which is the headquarters of .pt, is the home of the national internet. For digital nomads, it is an aggregator space, a resource that we want to make available to the community, a digital web. There is a discussion about the issues of the internet of the future.
And we want to make it available to the whole society, so that they can use our physical space, which is a bridge to digital inclusion for all those who are contributing to it. So, all of these activities promote the Portuguese language on the internet. The more people have digital skills, in Portugal and in all our countries, the more our language will be represented on the Internet, because we are not only users or consumers, but we are also producers of content. For now, Manuel, this is my presentation.

Moderator: Thank you. Thanks, Luísa. Very inspiring. So I’ll pass the floor now to Bianca Kramer, a counselor of CGIBR, a visiting professor and research lead at the Center of Technology and Society, Faculdade Getulio Vargas de Direito, Rio de Janeiro, a member of the Legalite Research Center, Pontifícia Universidade Católica do Rio, and currently a member of the Brazilian Internet Steering Committee, CGIBR, as a representative of the third sector. Bianca, welcome, and the floor is yours.

Bianca Kramer: Thank you so much, Manuel, for the kind introduction. Thank you so much for the invitation to be here and to represent, as a member, the CGIBR, which is, for those who don’t know us, a multi-stakeholder organization in Brazil with members from the government, from the corporate sector, the third sector, and also the academic community. To introduce a little bit of my concerns on these topics, I would like to talk about two questions that must be taken into consideration to address the Portuguese-speaking community as a case study on digital cooperation. These questions actually concern successful objectives and successful initiatives we have been building together, and I would like to address them. The first one is the importance of debating technology issues from the perspective of a common language, which is, for us, of course, Portuguese in particular; the second is the importance of initiatives of capacity building and exchanges of experience between Portuguese-speaking countries, okay? So, these are the questions that I have been intending to address, and to that end, I would like to raise awareness of the fact that, as Sandra’s video was showing, unfortunately, we didn’t have the audio to make it even more fruitful, but as the video was already saying, Portuguese is the sixth most spoken language in the whole world, and that is something that should be taken into consideration for capacity-building purposes among us. So, we had a very successful opportunity to develop the first Lusophone Internet Governance Forum in person. We had already had it online, but in person, we had the opportunity to do this in Cape Verde, in Africa. So, I thought it was a very good opportunity for us to understand what we have in common, and what do we have that is more important?
We have the intention to develop our countries in terms of industrialization of technology, in terms of strengthening ourselves politically, economically, and socially as well, and we have so much in common on this topic, because when we accept English as the major language for all developing technologies, we are so much weakened in terms of opportunities worldwide as well, and it undermines our capacity to build technology in a sovereign way. So, this is the first topic that I would like to consider the most. Because when we have one of the most spoken languages and we don’t talk about the importance of this — in the Lusophone Forum, we had an opportunity to observe and to exchange in a horizontal way, in a very respectful way, a means of hearing each other’s necessities and the opportunities we want to build, and to address not only the ways we are weakened in the worldwide field of technology development, as we can say, but also to observe other lacks of opportunity that we shouldn’t have, or that we didn’t have when we were not together. This is the first topic I would like to raise. I know I have very little time, but just to address the major topics of this conversation: it is important to understand that the presence of the Portuguese language on the Internet is not reflective of the dimension of our presence on the Internet. It is considerable, because we observe a gap of opportunities and also a gap in addressing things from our own perspectives. I will give an example that really touched me in the Lusophone Forum that we have been building. We had the presence of a Portuguese professor who used the term regalo, which could be translated as a gift, and regalinho, which could be translated as a little gift. In Brazil, we don’t say regalo or regalinho, but I can tell you that I understood the emotionality of the language.
And I come from a region of Brazil where I’m sure that in Portugal, in Cape Verde, or in any other Lusophone partner country, they wouldn’t understand what tamec is. Which is: it is cool, it’s okay, in the favela language. So, this is important for us to understand: when we shut down the cultural heritage and the cultural importance of emotionality in developing technologies and addressing the topics of technology, we also shut down opportunities for self-development and for economic development as a bloc. And that’s it. We have to understand that bringing the Portuguese language together in these topics is to address the linguistic and cultural diversity we have among ourselves and also compared to other countries, especially those from the North that develop these technologies using English as the major and, why not, the only language that should be addressed. I see China doing the opposite. I just came from a panel where I heard Chinese researchers say that they develop their own LLMs in Chinese, offering Mandarin, of course. But this is important for us first to understand. What is the importance? We are seeing examples of other countries that are developing high-level technologies in languages other than English. Why shouldn’t the Portuguese-speaking community do this? So, this is an invitation for us to see the success of the Lusophone Forum, why we should carry it forward and how we could improve it, respecting our differences but raising awareness and centralizing what we have in common, which is the desire for self-development, economically, socially, and also politically.

Moderator: Thanks a lot, Bianca. Very inspiring words and a very relevant reflection. So, I’ll pass the floor now to Marta Moreira Dias. Marta is a board member of .PT and president of LUSNIC, the Association of Portuguese Speaking Registries. Currently, she serves as Vice Chair of the Internet Governance Liaison Committee within the ccNSO at ICANN. Marta, the floor is yours, and I believe now the presentation.

Marta Moreira Dias: Yeah, I do have a presentation. So good afternoon, everyone. Thank you for having me here today. It is a pleasure to share this panel with colleagues and friends, and most of all, thank you for setting the scene, so the job will be much easier for me. But I have to confess, and it’s my first disclaimer here, that my presentation is huge, so prepare yourselves. So I’m going to present LUSNIC. Luísa did a brief presentation of this association, which was established back in 2015 in Lisbon, Portugal. It is an association that gathers the registries from the Lusophone countries, and as you see there, we are seven registries for the moment, and the figures just show how our governance models differ. Even though we are all ccTLD managers, we have three ccTLDs within the national regulatory communication authority; two not-for-profit associations, .BR and .PT — Luísa already presented .PT, we are a private association, and .BR is managed by CGI.br; and we have two ccTLDs, Angola and Mozambique, within the national government. So as we can see here, we have different realities, a different geographic localisation, a different population, a different digital authority and so on. But so what do we have in common? We are ccTLD managers, and most importantly, we have this common asset, the Portuguese language, which was in fact behind the creation of this association. From here, I can identify the first challenge, if I can say that: to serve as a driving force, as a unifying force, in order to bring those different realities together to work and try to achieve their own purposes, which are similar in terms of governance and maintenance of the respective ccTLDs.
I’m talking about training, collaboration, also capacity building, and also the question that was raised here today, the importance of promoting Portuguese language content on the internet, and the idea of collaborating, building and sharing knowledge, and promoting, of course, the development of common policies and DNS best practices. So, in short, if I can talk about the value proposition of LUSNIC: we want to bridge gaps, we want to be a network facilitator, we want to create awareness regarding internet governance topics, and I’m talking about AI, data protection, human rights, consumer protection and so on. Those are global and transnational topics that should be approached in this global way, with coordination and collaboration between different actors. So, what do we have here? We asked ChatGPT to prepare a good image of one of our major purposes. What you see there is a picture of one very important poet, a Portuguese poet from the 16th century, Camões, and probably you know, but the Portuguese language is also referred to as the language of Camões. So, we asked ChatGPT to combine this image, this ex-libris of the Portuguese heritage, with one of our main concerns, which is the future. I can say that it’s more than a concern. It’s a future that we, LUSNIC members, do envision, which is to have a broader representation in development across the world. So again, the Portuguese language; again, the fact that we are here, all collaborating, all working together in order to promote the Portuguese language. So if we look at the numbers, in fact, I heard here a reference to the sixth most spoken language in the world, but the data that I have is that we are the fifth. I’m not sure who is right, but well, it doesn’t matter. What matters is that we have more than 215 million speakers from four different continents, and we are the most spoken language in the Southern Hemisphere.
So this common asset that I talked about previously was the basis of the creation of LUSNIC, and we think that we should showcase it in numbers, because it’s much easier to look at this map and understand the value and the power of the Portuguese language. So if we look at ongoing activities, you can see there the sets of initiatives that we are promoting within LUSNIC, and again, the cooperation in terms of training, in terms of capacity building, and of course, the organization of the Lusophone Internet Governance Forum. The first one was in Sao Paulo in 2023, and this year it was in Praia, Cidade da Praia, in Cape Verde. We have two outcomes from the Lusophone Internet Governance Forum, and I would like to invite you all to visit our webpage and consult them. You have there two charters with a set of commitments that all the Lusophone members had the opportunity to set forth. So, if you have any curiosity about it, please visit the website to read about the set of commitments that I was talking about. Of course, collaboration is fundamental, and we have to collaborate among our members, but it is fundamental to collaborate with other entities. In particular, we are founding members of the Coalition for Digital Africa, an initiative set up in 2020. We want somehow to help Africa take its rightful place, the place that the continent should have in the digital world, in the usage of the internet. And then we have regional entities, like TLD, AFTLD, organizations that we work with. So, training, awareness, sharing knowledge, and building common positions are our ongoing activities. And if I present you ongoing activities, I have to talk about the pressure points. We do have pressure points. We are far away from each other. We have a political, social, and economic distance. We have different realities regarding internet usage. We have different strategic priorities.
We have different budgets allocated to ICT, the internet, and connectivity, and different levels of connectivity and access, but we work together. We have, I’m sorry, that was the most important part of that slide: the engagement. We work together, we engage, and we try to build bridges. That’s one of the main purposes of this association. So the future is the most important thing to look at now. We are now organizing the third Lusophone Internet Governance Forum, which will be held in Mozambique next year, in Maputo. We are still deciding on the final date, but probably in September. We are doing a good job, I would say, of showcasing LUSNIC; that’s what we are doing here today. It’s important to spread the word. It’s important to explain to people that we work together, that we are together, that we engage together, and, of course, the importance of safeguarding the inclusion of the Portuguese language in the context of emerging technologies, mainly in the AI world. So we foster Lusophone community cooperation, training, and shared knowledge: the present, but also the future. Those are our contacts; please reach out and work with us. We want to collaborate and participate in all the events where you believe we could add value. So, Manuel, I think my time is up, so back to you, and thank you.

Moderator: Okay, thanks a lot, Marta, a very interesting and very relevant presentation. Thanks for your insights. So, Mozart Tenório serves as an alternate board member of the Brazilian Internet Steering Committee, CGI.br, a full member of the Audit Committee of NIC.br, and an advisor to the presidency of Anatel, Brazil’s national telecommunications agency, and he will speak in that capacity in this session. So, Mozart, the floor is yours.

Mozart Tenório: Thank you, Manuel. First of all, I would like to thank you for the invitation; it is a pleasure for us at Anatel to help in any way we can, and it’s an honor and a pleasure for us to engage with the Portuguese-speaking community every time we can. First, I would like to mention ARCTEL, our association of telecommunications regulators of the Portuguese-speaking community, whose presidency we recently passed to Cape Verde, to Leonel de Santos, who spoke a little at the beginning of this panel; we held the presidency before that, and we hold very dear our engagement with the Portuguese-speaking community of regulators. And recently we had a very pleasant development, which is the creation of the Lusophone IGF, as we call it. As members of CGI.br, the Brazilian Internet Steering Committee, we are very glad to take part as much as we can, and I would like to say that it was very good to see that we have different roles, different subjects to deal with, and different issues to cope with in a different, and I would say broader, community. It’s very good to see that we can join with other partners like .pt, LUSNIC, and other entities from all around Africa, Europe, South America, and Asia. We are very excited about the Lusophone IGF, and we would like to say very clearly that we are open to participating as much as we can. It’s only natural, and it’s a necessity as well: regulators need to be engaged with the digital ecosystem more and more, day after day. So, once more, thank you for the opportunity, and I’m looking forward to the questions ahead. Thank you.

Moderator: Thank you, Mozart. Very interesting, also, your insights. So, I was told that we can now play the video from Leon Yildir. Yeah, we can play. Well, it’s quite unfortunate; the video is not working, so we have to move on, and we will also have to be a little bit telegraphic in the Q&A part of the session. Moving on: Sandra Maximiano, tell us how Anacom, as a regulator, can enhance digital cooperation to reduce access inequalities among Portuguese-speaking countries, and how Anacom is benefiting from this cooperation. Sandra, the floor is yours.

Sandra Maximiano: So I want to complement what you cannot really see, but I can refer to work that we have been doing to assess the quality levels of telecommunications networks. We have been developing technology that allows for the verification and comparison of the performance of mobile and fixed networks, and we share this assessment technology, developed and used by Anacom, extensively with the Portuguese-speaking countries. But this cooperation goes way beyond technology, and there is proof of that: I was just making some signatures, and one of them was for the next cooperation program that we are going to have with ARM, a capacity-building program on communication strategies. So that’s proof that the cooperation is very dynamic and happens with high frequency. Related to this assessment technology, the cooperation also includes the sharing of knowledge, procedures, and problem-solving approaches, and this allows the national regulatory authorities to have a voice in decision-making processes and to develop capacities to assess and analyze, but also to generate reports on user experience quality, particularly concerning mobile networks. This is really good, because all these best practices are extremely important: fast, reliable internet is essential for greater digital literacy and, of course, for reducing inequalities among people. So this is extremely important, and this is one particular activity that I would like to mention. As for the second part of the question, what we gain from this cooperation, let me tell you that it is not a one-way transaction. Not only do the countries we cooperate with benefit from this cooperation, but we ourselves at Anacom also benefit a lot from it. 
So our technicians gain many insights by doing this work, working in the field in different realities, completely different from ours, where they are exposed to different technologies and different equipment, but also to different problems, and this makes them think outside the box and question some established realities or certainties. So we gain a lot, and this cooperation also motivates our technicians and collaborators who go and participate in these capacity-building programs: it gives them a sense of achievement, a real sense of public mission, and they see things happening in a concrete territory or village, improving the lives of citizens. So it has a very rewarding effect within our teams. It’s never a one-way transaction. We gain a lot from this cooperation, and of course we truly believe that Portuguese-speaking countries also gain a lot from it, so we aim to continue and, of course, expand our realm of intervention. Thanks a lot.

Moderator: Thanks, Sandra. Thanks a lot. So, Luisa, you have been dedicating your research to how digital skills act as a driver of digital inclusion. Can you tell us more about that?

Luísa Ribeiro Lopes: Yes. Thank you, Manuel. I think that inclusivity is one of the most important aspects of the digital partnership we share between Portuguese-speaking countries. As we saw on Monday in the open session, in the great presentation given by the Saudi Arabian Minister of Digital, we face an unequal digital divide if we compare the Northern and Southern Hemispheres. And as we all know, Portuguese is the fifth or sixth most spoken language in the world, but the first one in the Southern Hemisphere, with more than 200 million speakers. Many of these individuals are excluded from the digital world. As we heard yesterday during the presentation of the UNESCO report, 93% of people are connected to the internet in developed countries, while only 27%, yes, 27%, are connected in developing countries. I believe it is the responsibility of all of us, as a Lusophone community, to do more to combat the digital divide. People who don’t have access to digital skills will not have access to the opportunities of the digital world. Digital skills are a priority for all of us. Portugal has been progressing quickly in this field, and we are now in line with the European Union in digital skills and the use of the internet. But Portugal and all the Portuguese-speaking countries have wonderful examples of projects run by governments, companies, the third sector, or civil society that have been helping communities acquire these skills and use digital tools in their daily lives. Our partnership also presents many opportunities for us to exchange information and best practices, to share examples of what works in our countries, and to help others implement similar projects. This is what we need to do, all together. Cooperation is so important, and we will only achieve our purpose if we cooperate with each other. Thank you, Manuel.

Moderator: Thanks a lot, Luisa. Very interesting. So, Bianca, I’ll question you now: why do you think it’s so important to debate technology-related issues from the perspective of a common language, Portuguese?

Bianca Kramer: Manuel, if you allow me, I would just like to observe that we didn’t hear the relevant contributions of David Gomes. If you allow me, I would like to ask for his presentation and for him to share his thoughts with us.

Moderator: Yeah, well, David, yes, it’s true. The way the session was organized, his contribution was supposed to be in the video part, but you are totally right. So, David Gomes serves as a Senior Advisor to the Multi-Stakeholder Regulatory Authority of Cape Verde, and he acts as Executive Secretary of Arctel-CPLP. So, David, if you would like to also provide us with your insights. Thank you.

David Gomes: Thank you, Manuel. Can you hear me? Yes. Yes, thank you. Thanks a lot. First of all, I would like to congratulate the IGF and Anacom for this initiative. As you know, Arctel is an association of nine national regulatory authorities of the community, all in the same sector as Anacom: the communications sector, meaning postal services and telecommunications services. Well, Arctel can enhance digital cooperation among our people by focusing on the following initiatives. The first one: we are updating our digital agenda, which is approved by the Council of Ministers of Telecommunications, and we are now working with the CPLP organization to implement it. The second: we are currently planning to implement in 2025, let’s see, the second phase of the Sustainable Village for Development project on my island, in Cabo Verde. This project aims to bring internet access to the most under-served areas. Just to open an observation here: this project is being carried out with a partner organization, and we hope that by the end of its implementation my small island will be the first island totally connected by this project. Another very important issue: we have a statistical working group that monitors sector data and shows us which weaknesses need to be addressed in our association. We are also preparing to be present on the international scene, again to promote the creation of partnerships and to bring investment towards content in our community. And we are working on a strategy to close the digital divide in our countries, particularly the gender digital divide. 
Finally, let me just say that, given the international nature of our association, a lot of work has been developed through the new communication platforms, making our day-to-day work more agile and enhancing the exchange of information in real time, which is very important, and the results of our work are expected to be faster and more efficient. The growing importance of technology, digital platforms, and internet infrastructure has already created an opportunity for cooperation among our Portuguese-speaking countries. This is what I can say at the moment. Thank you.

Moderator: Yes, thank you, David. And so I’ll come back to you again, Bianca: why is it important to debate technology-related issues from the perspective of a common language, Portuguese?

Bianca Kramer: Thank you very much for this interesting question, Manuel. I’ll be very brief due to our time, but the major point is that it strengthens the Portuguese-speaking community. It enables partnerships, it enables collaboration, and we can also share experiences and lessons that we learn from each other. One example we can observe: the majority of discussions on AI happen very far away from the Portuguese-speaking community and the Portuguese language, and in the training of models we observe biases, language biases, not to mention the lack of cultural representation in these technological spaces. I can only speak from the Brazilian perspective, but, for example, talking to Cape Verdean communities, we could see that they express themselves, and feel they belong to each other, in Crioulo, and you can say that much better than I can. There is also the slang we have in Brazil in different regions; I know Mozart is from a different region of Brazil. As you can see, we are a continental island, as people say. We have a famous actress running for the Oscar this year, Fernanda Torres, who said this in an interview: Brazil is a continental island, isolated by the language. Why not strengthen what we have in common, taking language as an asset to move us forward in these discussions, and also in achieving our purposes in economic terms? So this is something I would like to mention. When you ask why it is important to debate, it is important because the major language models are being trained in English, and you can observe it. So, to answer very briefly: by raising awareness of the importance of Portuguese in society and worldwide, and by focusing on this topic, we have so much to gain.

Moderator: Thanks a lot, Bianca. So now, Mozart, I would like to ask you: what is Anatel doing in terms of cooperation in the digital ecosystem? And you can also tell us what the future holds for Anatel.

Mozart Tenório: Sure. First of all, Manuel, just after this wonderful speech from Bianca, I would just like to point out that we have at this table, alongside a member of council in Bianca, four chairwomen. Coming from the cultural background of a Christian and Catholic Portuguese-speaking community, to see such powerful women having so much success is really very interesting. I just hope they don’t feel like making a women-only Portuguese-speaking community, otherwise we’d be in a bad situation, I guess. Moving forward: Anatel is very active, for example, in the ITU, and the ITU is also following a path into the digital environment, so we try to engage with that in the international arena. And through CGI.br in Brazil, of which we are part, we also try to engage and help as much as we can in building this capacity, this digital landscape, in Brazil and in the Portuguese-speaking community. And referring to what Bianca just said: when I hear such things, Bianca, I feel, and it’s not very often that we can say this, that I’m glad we speak Portuguese when we are contributing to each other, when we are talking, when we are exchanging experiences. It’s amazing how language can bind us together, because we come from such different parts of the world, and someone from Timor-Leste or Macau can talk to us and instantly relate in a very empathic way. It’s amazing. So I believe this kind of forum is very important, and I’m very glad that Anatel is increasingly taking part in it, Manuel, and we would like to do so even more. Thank you for the question.

Moderator: Thanks a lot, Mozart. So, in my script, I now have to come back to David, just to complement what was said before: how can Arctel contribute to enhancing the dialogue among Portuguese-speaking countries? Maybe I need a microphone, because this one is not working. Okay, now it’s fine. So yeah, David, to complement your first intervention.

David Gomes: Yeah, thanks. Thank you, Manuel. To be brief, let me just list the topics we are working on, based on our digital agenda. The first topic is digital transformation and inclusion, which we have been hearing about here from everybody. The second is cross-border data and cybersecurity, which we also heard about from our colleagues from Brazil. The third is innovation and technological cooperation, which is what we are doing here now: developing digital cooperation and also technological cooperation. I think the example with Anacom in this matter is a very good one, showing that we can exchange experiences but also cooperate in terms of technology. Another very important issue for our community, as a regulatory organization: as you know, we are perhaps the only association with members from four different continents, situated in different economic regions, Latin America, the EU with Portugal, Brazil, Angola, Mozambique, the ECOWAS region, but what we are trying to do now is harmonize our regulatory views and procedures. I think we can harmonize them. The last one, as we heard here, is capacity building and training. To echo what Marta said, our engagement in this process is very important. Thank you, Manuel.

Moderator: Thank you, David. Well, we don’t have time for more questions, so I’ll just make a final round of one minute each. I would start with... okay, Marta, it’s you.

Marta Moreira Dias: All right. So thank you. I think it was a very fruitful discussion. What I would like to emphasize here is the idea of cooperation and collaboration. I believe we all heard the United Nations Secretary-General, the Portuguese António Guterres, in the opening session call for a collaborative approach to governance, together with an open, free, sustainable, human-centered, and affordable internet. So he was emphasizing collaboration as a main theme. In fact, the Pact for the Future and the Global Digital Compact also emphasize collaboration as one of the most important commitments. So what we are doing here, the cooperation, the collaboration, and the engagement, is, I would say, closely linked to what the functioning of the multi-stakeholder model is, or should be. The multi-stakeholder model relies on collaboration and cooperation, and that’s what we are doing here today. So thank you so much for this opportunity, and I hope to see you all soon.

Moderator: Yeah, thank you. Mozart, final remarks? One minute, we are being pressed for time.

Mozart Tenório: I would like to say, as a final remark, that we at Anatel would be glad to help and do everything within our powers to foster the decisions and outcomes of the Portuguese-speaking community in any forum we take part in, because we think it’s really fruitful and important, and we are very excited about the future. For now we have somewhat limited powers, because we are just a telecom regulator, but anything we can do, now or in the future, we are very open to achieving together with the whole community. Thank you.

Moderator: Thanks a lot, Mozart. Luisa?

Luísa Ribeiro Lopes: Okay, just a minute. .pt is engaged with LUSNIC and with the whole community because, as Marta said, we need a digital world with a humanistic purpose. In the Lusophone world, we also need to make a difference by building a digital world that is more equal, democratic, open, and free in the Portuguese language. Thank you.

Moderator: Thanks a lot, Luisa. Bianca?

Bianca Kramer: Extremely briefly, I would just like to share my happiness at our fruitful contributions, which go back a long time, and I hope we can move forward with them, stronger and more powerful, engaged in our common development. Thank you so much.

Moderator: Thanks a lot Bianca. And finally Sandra.

Sandra Maximiano: I would just like to say that the world is very unstable now, with very big geopolitical challenges, so dialogue and cooperation among nations are becoming more difficult and sometimes non-existent. As I said, in this very difficult context, this creates an opportunity within the Lusophone community that we should be able to seize. We need to show that cooperation is vital, is key, is crucial. And we have new challenges ahead in this digital ecosystem: artificial intelligence, bringing Portuguese into large language models, cybersecurity. So we need to keep working together and show the entire world that this is the way. Thanks.

Moderator: Yeah. Thanks a lot, Sandra. Let me just draw some takeaways from these discussions, thanking you again for your participation and your really inspiring insights. Three takeaways. First, women empowerment: as was said, we had five top leaders with us, which is a very good sign that the Lusophone community is working well on that matter. Second, the asset: we have the asset of this dispersed community crossing different continents, and the language, the fifth or sixth most spoken language in the world, the happiness of speaking that language, and the language as a UN official language as well. Finally, doing it together: the development of new technologies, the question of LLM developments and so on, and the question of sovereignty as well. The global divide was another element, as well as engagement in international forums. And finally, the future: this is a panel where we are not satisfied with what we have achieved; we want to collaborate further in the future, so let’s work on that on future occasions. Thanks a lot to everyone.

Sandra Maximiano

Speech speed

115 words per minute

Speech length

1128 words

Speech time

584 seconds

Portuguese is the 5th/6th most spoken language globally

Explanation

Sandra Maximiano highlights the global significance of the Portuguese language. This emphasizes the importance of Portuguese in digital cooperation efforts.

Evidence

Over 215 million speakers from four different continents

Major Discussion Point

Importance of Portuguese Language in Digital Cooperation

Agreed with

Marta Moreira Dias

Bianca Kramer

Luisa Ribeiro Lopes

Agreed on

Importance of Portuguese language in digital cooperation

Differed with

Marta Moreira Dias

Differed on

Ranking of Portuguese language globally

Capacity building programs on various regulatory topics

Explanation

ANACOM conducts capacity building programs covering a wide range of regulatory topics. These programs aim to enhance knowledge and skills in telecommunications regulation.

Evidence

Programs cover regulatory economics, statistics, consumer protection, security, equipment, spectrum management, and supervision

Major Discussion Point

Digital Cooperation Initiatives

Agreed with

Marta Moreira Dias

David Gomes

Agreed on

Need for capacity building and knowledge sharing

Sharing of network assessment technologies

Explanation

ANACOM shares technology for assessing the quality of telecommunications networks with Portuguese-speaking countries. This cooperation enables better evaluation and comparison of mobile and fixed networks.

Evidence

Technology allows verification and comparison of the performance of mobile and fixed networks

Major Discussion Point

Digital Cooperation Initiatives

Marta Moreira Dias

Speech speed

121 words per minute

Speech length

1368 words

Speech time

673 seconds

Language serves as a unifying force for collaboration

Explanation

Marta Moreira Dias emphasizes that the Portuguese language acts as a common asset for collaboration among diverse countries. This shared linguistic heritage facilitates cooperation despite different realities and geographic locations.

Evidence

LUSNIC association brings together seven registries from Portuguese-speaking countries with different governance models and realities

Major Discussion Point

Importance of Portuguese Language in Digital Cooperation

Agreed with

Sandra Maximiano

Bianca Kramer

Luisa Ribeiro Lopes

Agreed on

Importance of Portuguese language in digital cooperation

Lusophone Internet Governance Forum as a collaboration platform

Explanation

The Lusophone Internet Governance Forum serves as a platform for collaboration among Portuguese-speaking countries. It allows for sharing knowledge and building common positions on internet governance issues.

Evidence

Two Lusophone Internet Governance Forums held in São Paulo (2023) and Cape Verde (2024)

Major Discussion Point

Digital Cooperation Initiatives

Agreed with

Sandra Maximiano

David Gomes

Agreed on

Need for capacity building and knowledge sharing

Promoting Portuguese content on the internet

Explanation

Marta Moreira Dias highlights the importance of promoting Portuguese language content on the internet. This effort aims to increase the representation and visibility of Portuguese-speaking communities in the digital world.

Major Discussion Point

Future Opportunities and Challenges

Bianca Kramer

Speech speed

131 words per minute

Speech length

1347 words

Speech time

613 seconds

Debating tech issues in Portuguese strengthens the community

Explanation

The speaker emphasizes the importance of discussing technology-related issues in Portuguese. This approach strengthens the Portuguese-speaking community and enables partnerships and collaboration.

Evidence

Lack of representation and cultural biases in AI and language models trained primarily in English

Major Discussion Point

Importance of Portuguese Language in Digital Cooperation

Agreed with

Sandra Maximiano

Marta Moreira Dias

Luisa Ribeiro Lopes

Agreed on

Importance of Portuguese language in digital cooperation

Developing AI and language models in Portuguese

Explanation

The speaker advocates for developing AI and language models in Portuguese. This would address the current lack of representation and cultural biases in existing models trained primarily in English.

Evidence

Example of China developing their own LLMs in Mandarin

Major Discussion Point

Future Opportunities and Challenges

Luísa Ribeiro Lopes

Speech speed

108 words per minute

Speech length

1064 words

Speech time

587 seconds

Portuguese language is an asset for economic development

Explanation

Luisa Ribeiro Lopes views the Portuguese language as a valuable asset for economic development. She emphasizes the importance of leveraging this shared linguistic heritage to foster digital cooperation and growth.

Major Discussion Point

Importance of Portuguese Language in Digital Cooperation

Agreed with

Sandra Maximiano

Marta Moreira Dias

Bianca Kramer

Agreed on

Importance of Portuguese language in digital cooperation

Promoting digital skills to combat exclusion

Explanation

Luisa Ribeiro Lopes emphasizes the importance of promoting digital skills to combat digital exclusion. She argues that access to digital skills is crucial for participating in the digital world and its opportunities.

Evidence

Portugal’s progress in digital skills, now in line with the European Union

Major Discussion Point

Addressing Digital Divides

Agreed with

David Gomes

Agreed on

Addressing digital divides

Fostering inclusion and equal representation in ICT

Explanation

Luisa Ribeiro Lopes highlights the importance of fostering inclusion and equal representation in ICT. She particularly emphasizes the need to address the gender gap in these fields.

Major Discussion Point

Addressing Digital Divides

David Gomes

Speech speed

97 words per minute

Speech length

577 words

Speech time

355 seconds

Updating digital agenda for Portuguese-speaking countries

Explanation

David Gomes mentions that ARCTEL is updating its digital agenda for Portuguese-speaking countries. This agenda is approved by the Council of Ministers of Telecommunications and aims to guide digital development in these countries.

Evidence

Collaboration with CPLP organization to implement the digital agenda

Major Discussion Point

Digital Cooperation Initiatives

Agreed with

Sandra Maximiano

Marta Moreira Dias

Agreed on

Need for capacity building and knowledge sharing

Implementing projects to bring internet to underserved areas

Explanation

David Gomes discusses ARCTEL’s plans to implement projects that bring internet access to underserved areas. This initiative aims to reduce the digital divide in Portuguese-speaking countries.

Evidence

Planned implementation of the second phase of the Sustainable Village for Development project in Cape Verde in 2025

Major Discussion Point

Addressing Digital Divides

Agreed with

Luisa Ribeiro Lopes

Agreed on

Addressing digital divides

Harmonizing regulatory approaches across different regions

Explanation

David Gomes highlights ARCTEL’s efforts to harmonize regulatory approaches across different regions. This initiative aims to create a more consistent regulatory environment among Portuguese-speaking countries despite their diverse economic contexts.

Evidence

ARCTEL members come from four different continents and various economic regions (Latin America, EU, ECOWAS)

Major Discussion Point

Addressing Digital Divides

Mozart Tenório

Speech speed

122 words per minute

Speech length

686 words

Speech time

336 seconds

Addressing cybersecurity collaboratively

Explanation

Mozart Tenório mentions the importance of addressing cybersecurity issues collaboratively among Portuguese-speaking countries. This collaborative approach aims to enhance the overall cybersecurity posture of the community.

Major Discussion Point

Future Opportunities and Challenges

Agreements

Agreement Points

Importance of Portuguese language in digital cooperation

Sandra Maximiano

Marta Moreira Dias

Bianca Kramer

Luisa Ribeiro Lopes

Portuguese is the 5th/6th most spoken language globally

Language serves as a unifying force for collaboration

Debating tech issues in Portuguese strengthens the community

Portuguese language is an asset for economic development

Speakers agree on the significance of the Portuguese language as a unifying force for digital cooperation and economic development in the Lusophone community.

Need for capacity building and knowledge sharing

Sandra Maximiano

Marta Moreira Dias

David Gomes

Capacity building programs on various regulatory topics

Lusophone Internet Governance Forum as a collaboration platform

Updating digital agenda for Portuguese-speaking countries

Speakers emphasize the importance of capacity building programs, knowledge sharing platforms, and collaborative initiatives to enhance digital cooperation among Portuguese-speaking countries.

Addressing digital divides

Luisa Ribeiro Lopes

David Gomes

Promoting digital skills to combat exclusion

Implementing projects to bring internet to underserved areas

Speakers agree on the need to address digital divides by promoting digital skills and implementing projects to improve internet access in underserved areas.

Similar Viewpoints

Both speakers emphasize the importance of technological cooperation and promoting Portuguese content online to strengthen the Lusophone digital ecosystem.

Sandra Maximiano

Marta Moreira Dias

Sharing of network assessment technologies

Promoting Portuguese content on the internet

Both speakers advocate for developing technologies and fostering inclusion to ensure better representation of Portuguese-speaking communities in the digital world.

Bianca Kramer

Luisa Ribeiro Lopes

Developing AI and language models in Portuguese

Fostering inclusion and equal representation in ICT

Unexpected Consensus

Harmonization of regulatory approaches

David Gomes

Mozart Tenorio

Harmonizing regulatory approaches across different regions

Addressing cybersecurity collaboratively

Despite representing different organizations, both speakers emphasize the need for harmonizing regulatory approaches and collaborative efforts in addressing digital challenges, particularly in cybersecurity.

Overall Assessment

Summary

The speakers show strong agreement on the importance of the Portuguese language in digital cooperation, the need for capacity building and knowledge sharing, and addressing digital divides. There is also consensus on promoting technological cooperation and fostering inclusion in the digital world.

Consensus level

High level of consensus among speakers, indicating a shared vision for digital cooperation in the Lusophone community. This agreement suggests potential for effective collaboration in implementing digital initiatives and addressing common challenges in Portuguese-speaking countries.

Differences

Different Viewpoints

Ranking of Portuguese language globally

Sandra Maximiano

Marta Moreira Dias

Portuguese is the 5th/6th most spoken language globally

Portuguese is the fifth most spoken language in the world

There is a slight discrepancy in the global ranking of the Portuguese language, with Sandra Maximiano mentioning it as 5th/6th and Marta Moreira Dias stating it as the 5th most spoken language.

Unexpected Differences

Overall Assessment

Summary

Disagreement among the speakers was minimal, centering on slight differences in approach rather than fundamental disagreements.

Difference level

The level of disagreement among the speakers was very low. Most speakers shared similar views on the importance of Portuguese language in digital cooperation, the need for capacity building, and addressing digital divides. This high level of agreement suggests a strong foundation for collaborative efforts in digital cooperation among Portuguese-speaking countries.

Partial Agreements

Partial Agreements

All speakers agree on the importance of collaboration and capacity building, but they propose different approaches or platforms to achieve this goal. Sandra Maximiano focuses on regulatory topics, Marta Moreira Dias emphasizes the Lusophone Internet Governance Forum, and Bianca Kramer advocates for debating tech issues in Portuguese.

Sandra Maximiano

Marta Moreira Dias

Bianca Kramer

Capacity building programs on various regulatory topics

Lusophone Internet Governance Forum as a collaboration platform

Debating tech issues in Portuguese strengthens the community

Takeaways

Key Takeaways

The Portuguese language is a valuable asset for digital cooperation among Lusophone countries, being the 5th/6th most spoken language globally

Digital cooperation initiatives like capacity building programs and the Lusophone Internet Governance Forum are strengthening collaboration

Addressing digital divides through skills promotion and infrastructure projects is a key focus

Future opportunities include developing AI/language models in Portuguese and addressing cybersecurity collaboratively

Women’s empowerment is evident in the leadership roles held by female participants

Resolutions and Action Items

Continue organizing the Lusophone Internet Governance Forum, with the next event planned in Mozambique

Update and implement the digital agenda for Portuguese-speaking countries

Expand cooperation on network assessment technologies and regulatory harmonization

Increase efforts to promote Portuguese language content on the internet

Unresolved Issues

Specific strategies for integrating Portuguese into large language models and AI development

Detailed plans for addressing cybersecurity challenges collaboratively

Concrete steps to reduce digital divides between developed and developing Lusophone countries

Suggested Compromises

None identified

Thought Provoking Comments

Portuguese is the sixth language most spoken in the whole world, and it is not something that shouldn’t be taken into consideration for capacity building purposes among us.

speaker

Bianca Kramer

reason

This comment highlights the significance of the Portuguese language globally and frames it as an asset for development and cooperation.

impact

It set the tone for discussing the importance of linguistic and cultural diversity in technology development, leading to further exploration of this theme by other speakers.

We want somehow to help Africa having its right place, the place that the continent should be in the digital world, in the usage of the internet.

speaker

Marta Moreira Dias

reason

This comment introduces the idea of digital equity on a continental scale, emphasizing the role of linguistic communities in promoting development.

impact

It broadened the discussion from linguistic cooperation to addressing global digital divides, influencing subsequent comments on digital inclusion.

93% of people are connected to the Internet in developed countries, while only 27%, yes, 27% are connected in the developing countries.

speaker

Luisa Ribeiro Lopes

reason

This statistic starkly illustrates the digital divide between developed and developing nations, providing concrete data to support the discussion.

impact

It reinforced the urgency of digital cooperation and inclusion efforts, leading to more focused discussion on strategies to combat the digital divide.

Our technicians, they gain many insights by doing this work, by working in a field in different realities, which are completely different from ours, and they are exposed to different technologies, different equipment, but also different problems, and this makes them think outside the box, and question some established realities or certainties.

speaker

Sandra Maximiano

reason

This comment highlights the mutual benefits of cooperation, showing how even more developed countries gain from collaborating with diverse partners.

impact

It shifted the perspective on cooperation from a one-way transfer to a mutually beneficial exchange, enriching the discussion on the value of diverse partnerships.

When you ask why is it important to debate, it is important because the major language models are being trained in English, and you can observe it. Why not improve our society raising awareness of the importance of Portuguese in society and worldwidely.

speaker

Bianca Kramer

reason

This comment connects the linguistic discussion to cutting-edge technology development, highlighting potential biases and missed opportunities in AI.

impact

It introduced a new dimension to the discussion, linking language preservation to technological sovereignty and representation in emerging technologies.

Overall Assessment

These key comments shaped the discussion by progressively expanding its scope from linguistic cooperation to broader themes of digital equity, mutual benefit in partnerships, and technological representation. They highlighted the multifaceted nature of digital cooperation in the Portuguese-speaking world, emphasizing both challenges and opportunities. The discussion evolved from focusing on language as a shared asset to exploring its role in addressing global digital divides and ensuring equitable representation in emerging technologies. This progression deepened the conversation, connecting linguistic identity to broader issues of development, inclusion, and technological sovereignty.

Follow-up Questions

How can we improve the representation of Portuguese language content on the internet?

speaker

Bianca Kramer

explanation

The presence of Portuguese language on the internet is not reflective of the actual number of Portuguese speakers worldwide. Addressing this gap could lead to more opportunities and better representation for Portuguese-speaking countries.

How can we develop high-level technologies in Portuguese, similar to how China is developing LLMs in Mandarin?

speaker

Bianca Kramer

explanation

Developing technologies in Portuguese could help strengthen the position of Portuguese-speaking countries in the global technological landscape and promote technological sovereignty.

How can we safeguard the inclusion of Portuguese language in the context of emerging technologies, particularly AI?

speaker

Marta Moreira Dias

explanation

Ensuring Portuguese is included in emerging technologies is crucial for the digital inclusion and representation of Portuguese-speaking communities in the future technological landscape.

How can we address the digital divide between developed and developing Portuguese-speaking countries?

speaker

Luisa Ribeiro Lopes

explanation

There is a significant gap in internet connectivity between developed and developing countries, which needs to be addressed to ensure digital inclusion for all Portuguese-speaking communities.

How can we expand and improve capacity-building programs for digital skills across Portuguese-speaking countries?

speaker

Sandra Maximiano

explanation

Enhancing digital skills is crucial for reducing inequalities and promoting digital inclusion among Portuguese-speaking countries.

How can we harmonize regulatory procedures across Portuguese-speaking countries in different economic regions?

speaker

David Gomes

explanation

Harmonizing regulatory views and procedures could lead to better cooperation and more efficient governance across Portuguese-speaking countries.

How can we address language biases and lack of cultural representation in AI and large language models for Portuguese?

speaker

Bianca Kramer

explanation

Ensuring proper representation of Portuguese language and culture in AI models is crucial for fair and inclusive technological development.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

DC-DNSI: Beyond Borders – NIS2’s Impact on Global South

DC-DNSI: Beyond Borders – NIS2’s Impact on Global South

Session at a Glance

Summary

This discussion focused on AI and data governance from the perspective of the global majority, exploring challenges and opportunities in various regions. The panel, organized by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to discuss the impact of AI on human rights, democracy, and economic development in the Global South.

Key themes included the need for regional approaches to AI governance, the importance of inclusive frameworks, and the challenges of implementing AI in healthcare and other sectors. Speakers highlighted the potential of AI to address social issues but also raised concerns about data privacy, labor exploitation, and the widening technological gap between developed and developing nations.

Several presenters discussed specific regional initiatives, such as Brazil’s and Chile’s efforts to establish AI regulatory bodies, and Africa’s continental strategy on AI. The discussion also touched on the environmental and social costs of AI development, including issues of embodied carbon and the exploitation of workers in the Global South.

Innovative approaches were proposed, including reparative algorithmic impact assessments and the development of AI tools that prioritize the needs of the global majority. Speakers emphasized the importance of capacity building, knowledge transfer, and international cooperation in bridging the North-South divide in AI governance.

The discussion concluded by highlighting the complexity of AI governance issues in the Global South and the potential for collaborative solutions. Participants agreed on the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few.

Keypoints

Major discussion points:

– AI governance frameworks and policies emerging in different regions of the global majority (e.g. Africa, Latin America, Asia)

– Challenges of AI development and deployment in the global south, including issues of data colonialism, labor exploitation, and unequal access

– Environmental and social impacts of AI, particularly on marginalized communities

– Need for inclusive AI development that incorporates diverse perspectives and addresses local needs

– Proposals for more equitable AI governance, such as reparative algorithmic impact assessments

Overall purpose:

The goal of this discussion was to highlight perspectives on AI governance and development from the “global majority” (developing nations and the global south). It aimed to showcase both challenges and potential solutions for more equitable and inclusive AI systems that serve the needs of diverse populations worldwide.

Speakers

– Luca Belli: Professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where he directs the Center for Technology and Society (CTS-FGV) and the CyberBRICS project

– Ahmad Bhinder: Policy and Innovation Director at Digital Cooperation Organisation

– Ansgar Koene: Global AI Ethics and Regulatory Leader at EY

– Melody Musoni: Digital Governance and Digital Economy Policy Officer at the European Centre for Development Policy Management.

– Bianca Kremer: Assistant Professor and Project Leader at the Faculty of Law of IDP University (Brazil)

Full session report

AI Governance from the Global Majority Perspective: Challenges and Opportunities

This comprehensive discussion, organised by the Data and AI Governance coalition of the IGF, brought together experts from diverse backgrounds to explore the challenges and opportunities of AI governance from the perspective of the global majority. The panel focused on the impact of AI on human rights, democracy, and economic development in the Global South, highlighting the need for inclusive frameworks and regional approaches to AI governance.

Key Themes and Discussion Points

1. AI Governance Frameworks and Approaches

The discussion emphasised the importance of developing inclusive AI governance frameworks that consider the perspectives of the global majority. Ahmad Bhinder highlighted the need for regional AI strategies and policies, discussing the Digital Cooperation Organization’s (DCO) work on AI readiness assessment and ethical principles. He mentioned the development of a self-assessment tool that will be made available to member states, covering different dimensions of AI readiness, including governance and capacity building.

Melody Musoni stressed the importance of creating inclusive frameworks for the global majority, mentioning the African Union’s continental strategy on AI and data policy framework. This initiative aims to provide a unified approach to AI governance across the African continent.

Elise Racine proposed the implementation of reparative algorithmic impact assessments to address historical inequities. This novel framework combines theoretical rigour with practical action, offering a potential solution for creating more equitable AI systems.

Guangyu Qiao Franco addressed the gap between North and South in military AI governance, highlighting the need for an inclusive AI arms control regime. She provided specific statistics on participation in UN deliberations, emphasizing the underrepresentation of Global South countries in these discussions.

2. AI Ethics and Human Rights

Ethical considerations and human rights protections emerged as crucial aspects of AI development and deployment. Bianca Kremer provided a stark example of AI bias in Brazil, stating that “90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown.” This statistic underscores the urgent need to address AI bias and its societal implications, especially in diverse societies.

Kremer also discussed her research on the economic impact of algorithmic racism in digital platforms, highlighting how these biases can perpetuate and exacerbate existing inequalities.

3. AI Impact on Labour and Economy

The discussion explored the significant impacts of AI on labour, the economy, and the environment. Amrita Sengupta examined the impact of AI on medical practitioners’ work, emphasising the need to prioritise AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare.

Avantika Tewari analysed the exploitation of digital labour in AI development, highlighting how platforms like Amazon Mechanical Turk outsource tasks to workers in the global majority, often underpaying and undervaluing their contributions. She also discussed India’s Data Empowerment and Protection Architecture, providing context for data sharing models and digital labor issues in the country.

4. Environmental Concerns in AI Development

Rachel Leach examined the environmental and social costs of AI development, including the issue of embodied carbon in AI technologies. She highlighted that current regulations are furthering AI development without properly addressing environmental harms, emphasising the need to balance AI advancement with environmental sustainability. Leach also discussed the techno-solutionist approach of countries like Brazil and the U.S., which often overlooks the environmental impact of AI technologies.

5. AI in Content Moderation and Misinformation

Hellina Hailu Nigatu addressed challenges in AI-powered content moderation for diverse languages, while Isha Suri focused on developing policy responses to counter false information in the age of AI. Suri emphasized the need for collaborative efforts between governments, tech companies, and civil society to address the challenges posed by AI-generated misinformation.

6. AI in Judicial Systems

The implementation of AI in judicial systems was discussed by Liu Zijing and Ying Lin, who provided insights into China’s AI initiatives in the judicial system. They presented information about specific AI systems like Faxin, Phoenix, and the 206 system, which are being used to assist judges and improve efficiency in Chinese courts. However, they also raised concerns about transparency and fairness in AI-assisted judicial decisions.

7. Regional Perspectives on AI Development

The discussion provided insights into AI development and regulation across various regions, including Russia, Latin America, and Africa. Luca Belli provided a Brazilian perspective on AI and cybersecurity, noting that while Brazil has adopted various sectoral regulations, implementation remains “very patchy and not very sophisticated in some cases.” This observation highlighted the gap between formal regulations and actual implementation, revealing a critical issue in AI governance, especially in developing countries.

8. AI and Disabilities

The discussion also touched on the intersection of AI with disabilities, educational technologies, and medical technologies. This highlighted the potential for AI to improve accessibility and support for individuals with disabilities, while also raising concerns about ensuring inclusive design in AI systems.

Agreements and Consensus

Key areas of agreement included:

1. The need for inclusive AI governance frameworks

2. The importance of addressing biases and discrimination in AI systems

3. Consideration of the hidden costs of AI development, including environmental and labour impacts

4. The development of region-specific AI strategies

This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.

Differences and Unresolved Issues

While there was general agreement on the need for inclusive AI governance, differences emerged in approaches to specific issues:

1. Approaches to AI regulation varied, with some favouring cautious development (e.g., Russia) and others establishing specialised regulatory bodies (e.g., Latin America).

2. The focus of AI governance differed, with some emphasising ethical principles and others prioritising environmental concerns.

3. Addressing biases in AI systems revealed different priorities, such as algorithmic racism in law enforcement versus content moderation challenges for diverse languages.

Unresolved issues included:

1. Balancing AI development with environmental sustainability

2. Addressing the exploitation of digital labour in AI development

3. Resolving disparities in military AI governance between global North and South

4. Determining liability in AI-assisted medical decisions

5. Ensuring fairness and transparency in AI-powered judicial systems

6. Developing effective content moderation systems for diverse languages and contexts

Proposed Solutions and Action Items

The discussion yielded several proposed solutions and action items:

1. Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority

2. Implement reparative algorithmic impact assessments to address historical inequities

3. Create open repositories and taxonomies for AI cases and accidents

4. Develop original AI solutions tailored to regional languages and contexts

5. Increase capacity building and knowledge transfer in AI between global North and South

6. Incorporate environmental justice concerns comprehensively in AI discussions and policies

7. Enhance collaboration between governments, tech companies, and civil society to address AI-generated misinformation

Conclusion

This discussion highlighted the complexity of AI governance issues in the Global South and the potential for collaborative solutions. It emphasised the need for continued dialogue and research to ensure that AI development benefits all of humanity, not just a privileged few. The variety of regional perspectives contributed to a collaborative, global-minded approach to addressing the challenges and opportunities presented by AI in the context of the global majority.

Session Transcript

Luca Belli: Morning, good afternoon actually to everyone. I think we can get started. So we have a very intense and long list of panelists today. These are only a part of the panelists; we also have online panelists joining us, due to the fact that we have a lot of co-authors for this book that we are launching today. So this session on AI and data governance from the global majority is organized by a multi-stakeholder group of the IGF called the Data and AI Governance (DAIG) coalition, together with the Data and Trust coalition, which is another multi-stakeholder group. So we have merged our efforts. This report is the annual report of the Data and AI Governance coalition, which I have the pleasure to chair. My name is Luca Belli. Actually, pardon my lack of politeness, I forgot to introduce myself. My name is Luca Belli. I’m a professor of digital governance and regulation at Fundação Getulio Vargas (FGV) Law School, Rio de Janeiro, where I direct the Center for Technology and Society (CTS-FGV) and the CyberBRICS project. I’m going to briefly introduce the topic of today and what we are doing here, and then I will ask each panelist to introduce him or herself, because with such an enormous list of panelists I might spend five minutes just reading their resumes. So, in the interest of time management, it is better if everyone introduces themselves; I will of course call on each speaker, and they can then introduce themselves. So, are you hearing well? All right. So the reason for the creation of this group that is leading this effort on data and AI governance is to try to bring into data and AI governance debates the ideas, problems, challenges, and even solutions of the Global South, the global majority. And this is why this year’s report is dedicated precisely to AI from the global majority, and as you may see we have a pretty diverse panel here, and even more diverse if we consider also the online speakers. 
Our goal is precisely to assess and gather evidence and engage stakeholders to understand to what extent AI and data-intensive technologies can have an impact on individuals’ lives, on the full enjoyment of human rights, on the protection of democracy and the rule of law, but also on very essential things like the fight against inequalities, against discrimination and biases, and against disinformation, and the need to protect cybersecurity and safety. All these things are explored to some extent in this book. We also launched another book last year, on AI sovereignty, transparency and accountability. I see that some of the authors of last year’s book are also here in the room, and all the publications are freely available on the IGF website. Let me also note that the books we are launching here are preliminary versions. Although they have a very nice design and are printed, they are preliminary versions that are later officially published with an editor, which takes more time; the AI sovereignty book is going to be released in two months with Springer. This version will be consolidated, so if you have any comments, we are here also to receive your feedback so that we can improve the work in a cooperative way. 
I had the pleasure to author a chapter on AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI. This is actually a very interesting case study, because Brazil is an example of a country that has climbed cybersecurity rankings like the ITU cybersecurity index, now ranking third in the Americas, while at the same time being among the top three most cyber-attacked countries in the world. This is interesting because it means that even if a country formally climbs the cybersecurity index by adopting a lot of sectoral cybersecurity regulation, in data protection, in the telecoms sector, in the banking sector, in the energy sector and so on, the implementation can remain very patchy and not very sophisticated in some cases. So one of the main takeaways of the study, and I will not enter into details because I hope you will read it together with the others, is precisely to adopt a multi-stakeholder approach, not to pay lip service to the idea of all the stakeholders joining hands and finding solutions, but because it is necessary to understand to what extent AI can be used for offensive and defensive purposes, and to what extent geeks can cooperate with policymakers to identify the best possible tools, but also what kind of standardization can be implemented to specify the very vague elements that we typically find in laws, like what a reasonable or adequate security measure is. Reasonable and adequate are the favorite words of lawyers. I say this as a lawyer, because they can mean pretty much anything, and you can charge hefty fees to your clients to discuss what is reasonable and adequate. If you don’t have a regulator or a standard that tells you what a reasonable or adequate security measure is, it’s pretty much impossible to implement. 
Now, I’m not going to go into this too much; I hope you will check it. I would like to give the floor to our first speaker, hoping that everyone will respect the five-minute time limit each, save those who are splitting a presentation, who will have three minutes per person. So we will start with Ahmad Bhinder, Policy and Innovation Director at the DCO.

Ahmad Bhinder: Thank you very much, Dr. Luca; I’m really feeling overwhelmed to be surrounded by such knowledgeable people. My name is Ahmad Bhinder, and I represent the Digital Cooperation Organization, an intergovernmental organization headquartered in Riyadh. We have 16 member states, mainly from the Middle East and Africa, a couple of European countries, and from South Asia, and we are in active discussions with new members from Latin America, Asia, etc. So we are a global organization, but the countries that we represent come from the global majority. We are focusing horizontally on the digital economy, and all the relevant digital economy topics, including AI governance and data governance, are very relevant to us. I will very quickly introduce some of our work, which is at a preliminary level, and then how we plan to action some of that work. We have developed two agendas: one is the data agenda, and since data governance is the bedrock of AI governance, we have something on the AI agenda as well. Very quickly: we are developing a tool for the assessment of AI readiness for our member states. This is a self-assessment tool that we will make available to the member states in a month’s time, covering different dimensions of their AI readiness, which includes governance but goes beyond it to a lot of other dimensions, for example capacity building and the adoption of AI. That assessment is going to help the member states assess themselves, and it will recommend what needs to be done for the adoption of AI across their societies. Another tool that we are working on is quite an interesting one, and I am actually working actively on that. A lot of what has been covered in the AI domain so far is coming up with ethical principles. 
So there’s kind of a harmonization from a lot of multilateral organizations on what the ethical principles should be, for example explainability, accountability, etc. We’ve taken those principles as a basis, and we have done an assessment for the DCO member states on how AI intersects, under those principles, with basic human rights. We’ve created a framework that I presented in a couple of sessions earlier, so I will not go into the details, but, looking for example at data privacy, which is an ethical AI principle, we identify the risks that AI systems pose to it, and then map those risks against the corresponding basic human rights, such as the basic human right to privacy. Once we take that through this framework, we will make it available as a tool to AI system deployers and developers in the DCO member states and beyond, to answer a whole lot of detailed questions and assess their systems against those ethical principles. So basically, we are trying to put the principles that have been adopted into practice, and the tool will also give recommendations on how AI systems can improve. This is on AI. Very, very quickly, I think I have a minute left. We are also focusing on data privacy, and we are drafting DCO data privacy principles that take a lot of inspiration from the principles that are out there, while taking the changed realities of AI into consideration. And we are developing an interoperability mechanism for trusted cross-border data flows across the DCO member states, along with some foundations for what could go into that interoperability mechanism, for example some model contractual clauses, et cetera. 
All of that, in a meaningful multilateral way, would facilitate trusted cross-border data flows and, of course, serve as a foundation for AI governance. I could say a lot more, but my time is over. Thank you very much.

Luca Belli: Fantastic, thank you very much, Ahmad. And now, as you were speaking about ethics and AI: Ansgar Koene, you have been leading EY's work on AI ethics globally, so I would like to give you the floor to provide us with some punchy remarks on the challenges and the possibilities for dealing with this.

Ansgar Koene: Certainly, thank you very much, Luca. It is a pleasure and an honor to join the panel today. My name is Ansgar Koene and I am the Global AI Ethics and Regulatory Leader at EY. As a globally operating firm, we try to help organizations, public or private sector, in most countries around the world with setting up their governance frameworks around the use of AI. One of the big challenges for organizations is to clearly identify what particular impacts these systems are going to have on people, both those who are directly using the system and those who will be indirectly impacted by it. One example that is probably of particular concern for the global majority is the question of how these systems will impact young people, the global majority of course being a space with a lot of young people. Many organizations do not fully understand how young people are interacting with their systems, be they systems provided through online platforms or integrated into other kinds of tools. They do not know who is engaging with these platforms, from what ages, or what particular concerns they need to take into account. A different dimension of concern is how to make sure that, as we operate in the AI space, often with systems produced by a technology-leading company but deployed by a different organization, the obligations, regulatory or otherwise, fall onto the party that has the actual means to address them. Often the deploying party does not fully know what data went into creating the system, does not fully know the extent to which the system has been tested or whether it is biased against one group or another, and does not have the means to find out; it must rely on a supplier.
Do we have the right kind of review processes as part of procurement to make sure that, as these systems are taken on board, they actually benefit the users?

Luca Belli: That was excellent, and also fast, which is even more excellent. So we can now pass directly to Melody Musoni, who is Policy Officer at ECDPM and former Data Protection Advisor at the Southern African Development Community Secretariat. Melody, the floor is yours.

Melody Musoni: Thank you, Luca. When I was preparing for this session, I was looking back at my interventions at the IGF last year, and it seems a lot has happened since then in terms of what Africa has been doing. So I will speak about the developments in AI governance in Africa while trying to answer one of our policy questions: how can AI governance frameworks ensure equitable access to, and promote development of, AI technologies for the global majority? This year has been an important and very busy year for policymakers in Africa. At the beginning of the year, we saw the African Union Development Agency publish a white paper on AI, which gave a lay of the land of the expectations at the continental level and the priorities the continent has for the development of AI. Then in June this year, we saw the African Union adopt a continental strategy on AI, in response, I guess, to conversations we have at platforms like this: at least if we have a continental strategy, it can direct and guide us on the future of AI development in Africa. Apart from those two frameworks, we also have a data policy framework, in place since 2022, which supports member states in unlocking the value of data. It covers not only personal data but also non-personal data, and issues of data sharing and cross-border data flows are quite central to it. And we are moving towards the finalization of the African Continental Free Trade Area protocol on digital trade, which also emphasizes the need for AI development in Africa and the need for data sharing.
Some of the important issues the continent is prioritizing: the first I will touch on is human capital development. There is a lot of discussion around how best to skill the people of Africa, so that we have more people with AI skills and more people working in STEM fields, and a lot of initiatives are going towards building our own human capital. For people who are already late in their careers, there is also the question of how best to re-skill them, and that is where we mostly need the support of the private sector: helping people who are advanced in their careers to gain new skills relevant to the age of AI. Another important pillar for Africa is infrastructure. We have been talking about global digital divides and the need for access to digital infrastructure, and that is still a big challenge for Africa. So it is not just about AI; it comes back to the foundational steps. We need access to the internet and to basic infrastructure, and, building on that, with AI there are discussions around computing power and how to have more data centers in Africa to support AI innovation. I will not dwell on the enabling environment, since that is more about regulatory issues and I am sure we have been discussing how best to regulate. But just to emphasize: apart from regulating AI and personal data, there are discussions on how best to design laws, be they intellectual property laws, taxation laws, or different incentives, to attract more innovation to the continent. And then, I guess most important for the continent, is building the AI economy.
How do we go about it in a way that brings actual value to African actors and African citizens? There are promises, but it is still not clear how we will get there. I see I am running out of time, so just one last point: the importance of strategic partnerships. We cannot do this by ourselves, we are aware of that, and we need to see how best we can collaborate with international partners to help us develop our own AI ecosystem.

Luca Belli: Fantastic. These are exactly the points that apply across the full spectrum of global South countries, and it is very, very important to raise them. Let us now move to another part of the world, which is close to you, Professor Bianca Kremer. She is a member of the board of CGI.br, the Brazilian Internet Steering Committee, and I also have the pleasure of having her as a colleague at FGV Law School Rio. Please, Bianca, the floor is yours.

Bianca Kremer: Thank you, Luca. I will take off my headphones because they are not working very well and I do not want to disturb the conference. Thank you so much for inviting me; it is a pleasure to be here. This is my first IGF, although I have been working with AI and tech for the last 10 years. I have been a professor, an activist in Brazil, and also a researcher on AI and algorithmic racism and its impact in our country, looking as well at other perspectives for improving, developing, and using technology from our own perspectives, on our own terms. And this is something we have to consider when we talk about the impacts of AI and other new technologies, because we do not have only AI. AI is the hype right now, but there are other sorts of technology that impact us socially and economically. So I have been concerned with this specific topic of algorithmic bias for the last 10 years. From 2022 to 2023, I was thinking about how to raise awareness of the problem in our country, developing research and understanding its impacts on our society. But this year I have changed my perspective a little: having spent the last years raising awareness, I thought it was important to take the research a step further. So I have been developing research that is partially funded. One part of my research, on data and AI and on the impact of the Brazilian data protection law on economic platforms, I have been developing at FGV with Professor Luca. But personally, I have been working on the economic impact of algorithmic racism on digital platforms, which is something very complex to do.
We have to build indicators to understand the economic impact, so that when we can see and observe the specificities of these impacts, we can propose changes to our environment, our legislation, and our public policies. So this is what I have been up to. Just to explain a little why this is a concern for us: until last year, I was working specifically on one type of technology, facial recognition, so let me clarify how algorithmic racism works in Brazil. We have seen a huge number of acquisitions of facial recognition technologies in the public sector, specifically for public security purposes. Our research found that 90.5% of those arrested in Brazil today with the use of facial recognition technologies are black and brown; brown people in Brazil are called pardos. So more than 90% of the people affected are being biased against by the use of this technology. And this is not trivial, because Brazil today has the third largest incarcerated population in the world; we are behind only China and the United States. So this is an important topic for us. And what are the economic impacts of these technologies? What do we lose when we incarcerate this number of people? What are the economic losses for the person, for the ethnic group that is arrested, and for society? What legacies do we feel now, through these pervasive technologies, that come down from our colonial heritage? This is what I have been working on: trying not only to raise awareness but also to understand the actual economic impacts, using economic metrics. It is ongoing work, but it is something we have to understand. Thank you so much, Luca, for the space and the opportunity.
I'm looking forward to hearing a little more from my colleagues on their topics. Thank you.

Luca Belli: Fantastic. Thank you very much, also for being on time. And indeed, as the human rights arguments are ones we have been repeating for some years, the economic arguments might perhaps be more persuasive with policymakers. Now let's go directly to the next panelist: we have Liu Zijing from Guanghua Law School of Zhejiang University.

Liu Zijing: Hello, everyone. I'm Liu Zijing from Zhejiang University in China, and this is my co-author, Ying Lin; we also have another co-author, who is in China now. We would love to share the Chinese experience of artificial intelligence utilization. Our report is about building smart courts through large language models: the experience from China. China's smart court reform dates from 2016, but even before that, in the 1980s, China's leadership was already considering how to use computers to modernize court management and legal work. In 2016, the Chinese government officially launched a program called the smart court reform to digitalize court management, and this year it has entered its third phase, the AI phase. This year, China's courts launched their own large language models, which is very impressive, so we would like to share some experience from China. Between 2016 and 2022, the Supreme People's Court of China launched a system named Faxin, driven by large legal language models, which helps judges with legal research as well as legal reasoning. At the local court level, in Zhejiang province for example, the Zhejiang High Court launched its own language model, named Phoenix, along with an AI copilot named Xiaozhi, which is used in the courts especially for pre-litigation mediation, itself a distinctive feature of Zhejiang province. And in Shanghai, the Shanghai High Court launched the 206 system, designed especially for criminal cases. So you can see there are many distinctive features in China's use of large language models, especially in the judicial sector. We have also identified several features behind China's progress. The first is a very strong and sustained top-down policy.
The second is that there is weaker resistance within the judicial sector. And one of the most important features is the close cooperation between the private and public sectors in developing these large language models. We have seen this year that many judges around the world also use AI tools such as ChatGPT, but the Chinese courts developed their own large language models, which is quite unique. I will share my remaining time with my co-author.

Ying Lin: Hello, everyone. I'm Ying Lin from the Free University of Brussels. I would like to continue from my colleague with the challenges and some initial suggestions. There are three main concerns for us. The first is development. As we know, advanced AI requires substantial financial resources, and only a few developed regions, such as Shanghai, as we mentioned before, can afford it. This calls for special funds for less developed regions, to foster equitable access to AI-powered judicial resources. There are also issues with public-private partnerships. The biggest problem is public input but private output: what if private companies use this data and similar products for their own benefit? What if they come to dominate the relationship and exert great influence over judicial decisions? Robust oversight mechanisms are needed to prevent undue influence and ensure transparency. The second concern is fairness. On the one hand, AI assistance raises concerns about transparency and due process. Can the judge really know how the algorithm works, and whether the decision is really made by the AI or by a human being? Delegating decision-making authority to AI assistants blurs the lines of responsibility, potentially weakening judicial accountability, and because of this automated process there is also the question of whether all parties in a case can represent themselves fully. This emphasizes the importance of transparency and explainability. On the other hand, there are substantive fairness issues: AI systems are biased and sometimes make things up, so we need a human in the loop. Integrating ethical frameworks and guidelines into AI systems would help, as would ongoing dialogue between legal experts and AI developers. The last concern is data. Judicial decision-making involves massive processing of sensitive personal data.
We need strict data security protocols, technical safeguards, and recognition of government data assets used by private partners and governments. And when smart courts are developed at a national level, national security risks arise, so robust cybersecurity measures against unauthorized access and data breaches are essential to ensure the integrity and security of the smart court system in China. Thank you, that's all.

Luca Belli: Thank you very much, also for being perfectly on time, and for raising at least two very important issues. First, even if we build AI, it then has to run on something: it is not only the model, the compute also matters. Second, it needs to be transparent, because probabilistic systems like LLMs are frequently very opaque, and from a due process and rule of law perspective it is not acceptable to be unable to explain how a decision was reached; these systems need to be explainable. All right, fantastic. Let's get to the last couple of speakers in person: Rodrigo Rosa Gameiro and Catherine Bielick from MIT. Please, the floor is yours.

Rodrigo Rosa Gameiro: Hello, can you hear me? Okay, my name is Rodrigo. I'm a physician and also a lawyer by training. I grew up in Brazil, but I currently live in the US, and I work at MIT with Dr. Bielick here, where we do research on AI development, alignment, and fairness. One question I had in mind while thinking about this panel is: how do we make sense of where we stand with AI globally today? I often find myself turning to literature for perspective, and there is one line from Dickens' A Tale of Two Cities that feels especially fitting: "It was the best of times, it was the worst of times." Because for some, this is indeed the best of times. AI can work and does work in many cases. In healthcare, AI has enabled us to make diagnoses that were simply not possible before, and it is enabling us to accelerate drug development and transform our understanding of medicine in ways we never imagined. The problem is that this is also the worst of times. The benefits of AI remain largely confined to a handful of nations with robust infrastructure, while the global majority is pushed to the sidelines. And even within countries that lead AI development, these technologies often serve only a privileged few. We have documented, for instance, AI systems recommending different levels of care based on race, and vast regions of the world where these technologies do not reach communities at all. The digital divide is not just about access; it is about who gets to shape these technologies, who benefits from them, and who bears their risks. So how do we ensure that AI upholds human rights for everyone? How do we build AI that truly serves every population, AI that follows the principles of non-maleficence, beneficence, autonomy, and justice? I would argue that the answer lies in the title of this panel and of this book, because there can be no AI for the global majority if it is not from the global majority.
And this brings me to our chapter in the book, From AI Bias to AI by Us. At our lab at MIT, led by Dr. Leo Celi, who unfortunately could not be here today, we have made efforts to move beyond just talking about these issues: we have created concrete ways to measure progress and drive change. And what we have learned is powerful: when you give everyone a seat at the table, innovation flourishes. Let me share a little story that illustrates this. Through our work, we connected with researchers in Uganda. We did not come as saviors or teachers; we came as collaborators. As a result of our collaboration, the team there has built their own dataset and developed their own algorithms to solve their own local challenges, and this also secured international funding. In fact, they taught us much more than we taught them. And this is not an isolated story. Through PhysioNet, our platform for sharing healthcare data, we have enabled collaboration across more than 20 countries. We have hosted datathons that bring together multidisciplinary local talent and leadership worldwide to collaborate on solving local problems. The results: more than 2,000 publications with 9,000 citations, but most importantly, AI solutions that actually work for the communities they serve. But here is what we have learned above all else: our approach is not the only answer. Effective AI governance needs more than individual initiatives; it requires all stakeholders working together towards shared goals. My colleague, Dr. Bielick, will explain this further. Thank you.

Catherine Bielick: So my name is Dr. Catherine Bielick. I'm an infectious disease physician, an instructor at Harvard Medical School, and a scientist at MIT studying AI, outcome improvement for people with HIV, and bias reduction. I work here at MIT Critical Data. We are publishing here as a case study, but we are just one group, in one country, from one perspective, in one professional field, healthcare artificial intelligence, and this discussion is about so much more. One way I would like to think about international governance of AI from the global majority is through historical precedent and context, because we do not want to reinvent the wheel. We do not think that everyone around the world should be doing what MIT Critical Data is doing; individual countries have individual needs. And I think there is already a precedent that we would contend is a good framework to emulate going forward for AI from the global majority: the Paris Agreement, the climate accords, where nearly 200 countries came together to agree on one common goal, with individual commitments per country based on their own unique populations. There are five core features I want to take from the Paris Agreement and carry over to AI from the global majority. The main thing is that this is a global response to a crisis of what I will call inequitable access to responsible AI, and I think all those words carry a lot of meaning and weight. The first core feature is a collective international response with differentiated responsibilities, where the wealthier nations carry more of the burden of open leadership and knowledge sharing. The second, maybe the most important, is localized flexibility.
There are nationally determined contributions in the Paris Agreement, and these carry over to AI from the global majority: each country defines its AI priorities for its own people, and then we come together, put them together, and agree on a global standard. Implementation domains differ in so many areas, in healthcare, agriculture, disaster response, education, law enforcement, job displacement, economic sustainability, and environmental and energy needs; there is just no one size fits all. What comes with that is a core feature of transparency and accountability, which is also accounted for in the Paris Agreement: there are regular reviews for every country, and there are domain-specific non-negotiables, such as reducing carbon emissions by a quantifiable amount per country. In our case, there could be a federated auditing system, similar in spirit to federated learning, in a way that protects privacy. The last two features: first, financial support and channeling, where developing nations must have resources channeled to them, so that they can not only use those resources and shared technology to develop and implement their own AI tools, but also build the infrastructure to evaluate their outcomes, which is just as important, if not more so. And lastly, the "global stocktake", a term used a lot in the Paris Agreement. The key here is that specific outcomes are determined by specific groups and specific countries, and we can then aggregate them into a single tracking of progress. With this unified vision for the future, it takes us out of the picture, because I do not think we can or should be prescribing what the global majority wants or needs from Harvard or MIT or wherever. Every stakeholder needs to have an equal voice in this.
And that, I think, is the pathway to international governance with those core features. Why can't a meeting like this be talking about the equivalent of an international agreement, where we all have the same equal voice in participating towards the same common goal? We are all here. There is no shortage of beneficence, non-maleficence, equity, and justice among all of you; these are the pillars of medical ethics. And there is no shortage of resources when we come together in a unified partnership.

Luca Belli: Thanks. Fantastic. We already have a lot to think about, so I would also like to ask the people in the room to start thinking about their comments or questions, because what we are trying to do is to then have a debate with you. Let's now pass to the online panelists, of whom there are also quite a few, and I really hope they will strictly respect their three minutes each. We should already have a lot of them online, so this is the moment where our remote moderation friends should be supportive. The first should be Professor Sizwe Snail Ka Mtuze.

Sizwe Snail Ka Mtuze: Thank you very much, Dr. Belli, and thank you very much, delegates and everyone in the room. Indeed, IGF time is always a good time, and it is always a good time to collaborate. I have had the pleasure of working this year with two lovely colleagues, Ms. Morihe, one of the attorneys at the firm, and Ms. Nzemande, a paralegal, looking at the evolving landscape of artificial intelligence policy in South Africa on the one hand, and at possibly drafting artificial intelligence legislation on the other. I am mindful of the three minutes allocated to us, so I will fast-forward and say that in South Africa the topic of artificial intelligence has been discussed over the last two to three years on various levels. On one level, there was a presidential commission: the president of South Africa constituted a panel that made certain recommendations on how the fourth industrial revolution should be perceived and what interventions should be made with regard to aspects such as artificial intelligence. Then it went a bit quiet; Covid came and went, and data protection was the big issue. However, artificial intelligence is back. It is the elephant in the room, and South Africa has been trying to keep up with what is happening internationally. On the one hand, South Africa drafted what it called the South African draft AI strategy, published earlier this year, and the strategy received both very warm and very cold comments. Some authors and jurists in South Africa were very happy with it, saying it is a good way forward, while other jurists said: but this is just a document, it is 53 pages, why are we having this? South Africa then responded in early August, after all the critique, with a national artificial intelligence policy framework.
This document has been reworked; it looks much better, it has objectives, and it has been trimmed down from the 53-page document. Looking at what is happening in Africa as well, I think it is in line with some of what people want to achieve in Africa with regard to artificial intelligence and its regulation. And it looks like I am running out of time, so that is my contribution to this session.

Luca Belli: All right, thank you very much for having respected the time. Again, we are mindful that every short presentation provides only a teaser of the broader picture, but we encourage you to read the full report. Our next speaker is actually the speaker from our partner organization, the Coalition on Data and Trust.

Stefanie Efstathiou: Fantastic. I'm happy to be here. As mentioned, I'm an IP and dispute resolution lawyer based in Germany, in-house counsel, and a PhD candidate researching AI. However, I'm here today in my capacity as a member of the EURid Youth Committee. So, ladies and gentlemen, esteemed colleagues, I would like to draw attention today to the transformative and urgent discourse on regional approaches to AI governance, as highlighted in the recent report, AI from the Global Majority. This report underscores that while artificial intelligence promises to reshape our societies, it must do so inclusively and equitably. From Latin America to Africa and Asia, regional efforts, as we see in the report, demonstrate resilience and innovation. Latin American nations are forging frameworks inspired by global standards yet rooted in local realities, emphasizing regulatory collaboration. And in Africa, the RISE governance framework exemplifies a vision for integrated data governance, emphasizing cooperation, accountability, and enforcement. These efforts reflect not only unique socio-political contexts but also a shared aspiration to ensure AI serves as a tool for empowerment, not exploitation. A key dimension often overlooked is the role of youth in shaping AI's trajectory. The younger generation, across but not limited to the global majority, should not only adapt to regional frameworks but actively participate in and lead the change. Youth should be more in focus and participate as a stakeholder, since they have a unique inherent stake: they are the ones who will have to adapt more than any other generation to the change, and they will effectively live in a different world from the generations before them. This involvement can take various forms; starting from data-protection-driven policies ensuring student data privacy in Africa to youth-led innovation hubs in Latin America is a good way to go.
Nonetheless, it is our duty to amplify these voices and incorporate their ideas into policymaking processes, just as it is the duty of the youth to actively participate and immerse themselves in the sphere of responsible AI innovation and policymaking. The energy and creativity of the younger generation signal a brighter future for AI governance. However, challenges persist, and we have seen this: digital colonialism, data inequities, and systemic biases threaten to widen the divides. As the report highlights, it is imperative to address these disparities by adopting inclusive frameworks, fostering regional cooperation, and prioritizing capacity-building initiatives tailored to each region's needs, while maintaining a minimum common global understanding, similar to what Dr. Bielick described earlier. As we move forward, and I want to close with this, let us reaffirm our commitment to an AI future that embodies fairness, sustainability, and human-centered innovation, grounded in regional diversity without causing fragmentation, and inspired by the vision and drive of youth. Thank you very much.

Luca Belli: Thank you very much, Stefanie. That was actually a very good introduction to this first slot of online presentations, dedicated to regional approaches to AI: what kinds of approaches are emerging at the regional level in various regions of the world. Our next speaker, Dr. Yonah Welker, based at MIT, a former tech envoy now leading multiple EU-sponsored projects, has worked quite a lot on this, and he also has a short presentation for us. I hope our technical support can confirm that he can share his presentation.

Yonah Welker: Yes, my pleasure to be here. Excellent, welcome. I am back in Riyadh, where I serve as an envoy and advisor to the Ministry of AI. I would love to be mindful of the time and address the issue of disability, educational and medical technologies, which is an extremely complex area. It is almost one year since 28 countries signed the Bletchley Declaration, and unfortunately this area is still underrepresented. The complexity lies not only in the landscape, with over 120 companies currently working on assistive technology, but also in the models involved: we have biases related to supervised, unsupervised and reinforcement learning, and issues of recognition, cues and exclusion. So I would love to quickly share the outcomes: what we can do to actually fix it. First, I believe we should work on regional solutions. We can’t just generalize ChatGPT, because for most regional languages we have 1,000 times less data; we need to build our regional solutions, and not only LLMs but also SLMs, with maybe fewer parameters but with more specific objectives and efficiency. Second, we should work together to create open repositories of cases and taxonomies, covering not only use cases but also what we call accidents, which is work we do with the OECD. Third, dedicated safety models: additional agents that help improve accuracy, fairness and privacy, along with dedicated safety environments and oversight, including specific simulation environments for complex and high-risk models. We are also actively working on more specific intersectional frameworks and guidelines with UNESCO and UNICEF: for instance, digital solutions for girls with disabilities in emerging regions, WHO work on AI in health, or OECD work on disability in the AI accidents repositories. And finally, we should understand that all of the biases we have in technology today are actually a reflection of historical and social issues. 
For instance, even beyond AI, only 10% of the world’s population has access to assistive technologies, and 50% of children with disabilities in emerging countries are still not enrolled in school. So we can’t fix this through one policy, but through a combination of AI, digital, social and accessibility frameworks. Thank you so much.

Luca Belli: Thank you very much, Yonah, for respecting the time. Now let’s move to our next speaker.

Luca Belli: We have our friend Ekaterina Martynova from the Higher School of Economics. She was a researcher with us in Rio last year. Very nice to see you again, Katya, even if only online. Please, the floor is yours.

Ekaterina Martynova: Yes, thank you so much, Professor Luca. I will be very brief, just to give an overview of the current stage of AI development here in Russia. The first thing I should note is the increase in budget spending, an actually unprecedented level, as the development of AI is one of the key priorities of the state. The approach in terms of regulation, though, is still quite cautious: it seems that the priority is to develop the technology, not to hinder its development. So we still don’t have a comprehensive legal act, such as a federal law on AI; we have national strategies as pieces of subordinate legislation, and also some market-driven self-regulation. In terms of practical application, AI is being used quite intensively in public services, and we have some sandboxes, especially here in Moscow, first of all in the public health care system and, of course, in the field of public security and investigation. Here I come to the main concerns with using AI in these fields. The first, obvious one is the human rights concern, which has already been raised and is very acute for Russia; it was also a question considered by the European Court of Human Rights in terms of the procedural safeguards provided to people detained through the use of facial recognition systems. We still very much need to develop our legislation here to provide more safeguards, and we look very closely at the Council of Europe Framework Convention on AI and Human Rights, Democracy and the Rule of Law. Though Russia is not currently a member of the Council of Europe, we consider that its provisions on standards of transparency, accountability and remedies can be useful for our national development, and maybe for the development of some common basis within the BRICS countries or with our partners in the Shanghai Cooperation Organization. 
The second problem is data security. Here we have a special center created under the auspices of the Ministry of Digital Development to serve as the central hub for data sanitization and data minimization, especially for the biometric personal data used in the digitalization of public health care services. And finally, as Luca mentioned in the opening speech, there is the problem of AI and cybersecurity, the particular topic I research: AI-powered cyberattacks, which Russia has been targeted by in recent years. We are considering which legal mechanisms can be developed to hinder such malicious uses of AI in cyberspace by state and non-state actors. Here, of course, we need joint efforts at the international level to develop a framework for the responsible use of AI by states, rules of responsibility, and rules for attributing these types of attacks to the states that may sponsor such operations. So I will stop here, and thank you very much for your attention.

Luca Belli: Thank you very much, Katya, for these very good remarks.

Luca Belli: Now let’s conclude this first segment of the online contributions with Dr. Rocco Saverino from the Free University of Brussels.

Rocco Saverino: Thank you, Professor Luca Belli. I’m not yet a doctor, I’m a PhD candidate, but thank you. Yes, of course, I am one of the authors of the paper we submitted with my colleagues here at the Free University of Brussels, and to respect the time I’m going to wrap up the key points of our paper. We look at global trends and how Latin American countries are incorporating AI rules into their data protection frameworks, influenced by these global trends, particularly the new digital regulations; this has also led to the emergence of AI regulations in Latin America. Because of this, we analyzed in particular the cases of Brazil and Chile, which are establishing specialized AI regulatory bodies, reflecting the region’s awareness of the complex issues of AI technologies. We looked at Brazil’s approach with Bill 2338 of 2023, though here we should make a disclaimer: as many of you know, on 28 November another proposal was presented, which we could not cover because our paper had already been submitted. We analyzed the previous one, where the role of the data protection authority was very important. We also looked at Chile’s approach, because Chile is advancing its AI governance model, proposing an AI technical advisory council and a data protection agency to enforce the AI law. Of course, when we talk about AI regulation we also talk about data governance, and data governance is a key factor in shaping AI oversight, with a focus on transparency, accountability and the protection of fundamental rights. This leads to challenges and opportunities. 
Latin American countries face challenges such as the need for coordination among regulatory bodies, developing specialized expertise, and allocating sufficient resources. But there are also opportunities, because the region can shape AI governance proactively by adopting a risk-based approach and integrating AI governance into existing data protection frameworks. We believe that Latin American countries can contribute to the global AI governance discourse by developing regulatory models that reflect their unique social, economic and cultural contexts.

Luca Belli: Excellent, fantastic. We have now concluded our regional perspectives and can enter into the social and economic perspectives. The first presenter is Rachel Leach, who has co-authored one of the papers on AI’s environmental, economic and social impacts. Dr. Leach, please, the floor is yours.

Rachel Leach: Thank you. Our project is an exploratory analysis of AI regulatory frameworks in Brazil and the United States, focusing on how the environment, particularly issues of environmental justice, is considered in regulations in these countries. Broadly, we found that regulations in both countries are furthering the development of AI without properly interrogating the role AI itself and other big data systems play in causing harms to the environment, particularly in exacerbating environmental disparities within and across countries. For example, in July 2024, the Brazilian federal government launched the Brazil Plan for Artificial Intelligence, investing BRL 4 billion and hoping to lead AI regulation in the global majority. The plan centered the benefits of AI with the slogan “AI for Good for Everyone” and invested in the use of AI to mitigate extreme weather, including a supercomputer system to predict such events. Additionally, in the U.S., President Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence operates under the assumption that AI is a tool with the potential to enable the provision of clean electric power, again without examining the environmental issues raised by the technology itself. These examples are just a snapshot of the trend we identified: both of these countries take a largely techno-solutionist approach to understanding AI. What this means is that their regulations tend to operate under the assumption that there is a technological solution to any problem. This approach leads to regulations that vastly under-consider the externalities and harms of technology, and that center technological solutions even in instances where that may not be the best approach. Okay, so turning now to the solutions we wanted to highlight. 
First, when considering the environmental and social costs of AI, it’s crucial to consider embodied carbon, meaning the environmental impact of all the stages of a product’s life. As many people have discussed, developing and using AI involves various energy-intensive processes, from the extraction of raw materials and water, to the energy and infrastructure needed to train and retrain these models, to the disposal and recycling of materials. Often these environmental costs fall much harder on the global majority, particularly when US-based companies are siting many of their data centers in Latin America, for instance, exacerbating issues such as droughts in that region. The second action we wanted to highlight is the importance of centering environmental justice concerns comprehensively across all discussions about AI, from curriculum to research to policy. We think this is really important in order to interrogate the assumption that AI technology can necessarily solve social and environmental problems. So yeah, thank you again for having us.

Luca Belli: Excellent. Also very good that you are almost all on time. Next is Avantika Tewari, a PhD candidate at the Center for Comparative Politics and Political Theory at Jawaharlal Nehru University in New Delhi. Do we have Avantika?

Avantika Tewari: Yes. Hi, can you hear me?

Luca Belli: Perfect, yes, very well. Welcome.

Avantika Tewari: Thank you so much. Great to be here with all of you.

Avantika Tewari: So I’m just going to start without much ado. And I think just to give you a little bit of context about this paper, in India we have something called the Data Empowerment and Protection Architecture, which essentially all the debates around AI governance are also hinged on the control and regulation and distribution of data. So there has been an emphasis on consent-based data sharing models, and that’s devised to basically make a data-empowered citizenry. So it is in the context that I have written this paper, and I want to foreground that while these technologies, such as chat GPT and generative artificial intelligence technologies appear to be autonomous, their functionality depends on vast networks of human labor, such as data annotators, moderators, and data laborers, hidden behind the polished facade of machinic intelligence. Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation through the cheap appropriation of land and labor, resources for compute technologies, and digital infrastructure. Similarly, digital platform users are framed as empowered participants, with their likes, shares, and posts generating immense profits for tech giants, all without compensation. This represents the double bind of digital capitalism, where the unpaid participation of users is reframed as agency, and the labor precarity is disguised as opportunity, with the global majority bearing the brunt of both. The platform economy built on twin pillars of fragmented attention and compulsive participation rebrands user exploitation as agency and convenience. 
By embedding individuals in digital enclosures, it transforms participatory cultures into systems of unpaid labor, commodifying interactions which were previously non-commodified, such as the social relations of interaction and communication. What emerges is what I term an undead dimension of social enjoyment: a relentless pursuit of meaning, success, and community that is inherently mediated by algorithms. Yet the promise of satisfaction remains elusive, ensnaring individuals in a loop of alienation and exploitation while making their engagement complicit in the production of data analytics and AI. Data is thus fetishized as a commodity, retroactively imbued with meaning as valuable information fueling market expansion, diversification, and stackification, which is paradoxically framed as a governance model where data is a resource that can be reclaimed as an extension of the self or as a social knowledge commons. Yet this transformation conceals a deeper reality: the labor upon which these platforms depend is increasingly fragmented into gig-based, task-based work. This labor sustains the development of AI technologies that paradoxically aim to automate the very low-skilled tasks on which they rely. The shift towards low-skilled, task-based, on-demand work is not merely a strategic adaptation by platforms, but an ideological reconfiguration of labor relations, which is what I call the ideology of prosumerism in the paper. Increasingly, this fragmentation is an attempt by capital to overcome its own dependency on labor. So what I want to really foreground in this paper is that the real paradox lies not in whether technology can empower us, but in how monopoly capital’s drive to overcome its dependence on labor leads to a fragmentation of the global division of labor, which then disproportionately impacts the global majority. 
This results in the partialization of work and the automation of tasks, produced by severing labor’s embeddedness within the production process through the fragmentation of work processes. So I’ll stop here, and thank you again.

Luca Belli: Thank you very much, Avantika, for bringing these considerations about labor, the difference between consumer and prosumer, and this kind of antagonism that you very well situated. Now staying in India, next speaker is Amrita Sengupta from the Center for Internet and Society, soon to be one of our incoming fellows at the FGV Law School. Please, Amrita, the floor is yours.

Amrita Sengupta: Thank you so much, Professor Belli. I’m also joined by my co-author, Shweta Mohandas, who is also online. Our essay, The Impact of AI on the Work of Medical Practitioners, is actually part of a larger mixed-methods empirical study that we did, trying to understand the AI data supply chain in health care in India. In this particular essay, through primary research with medical professionals, comprising a survey of 150 medical practitioners and in-depth interviews, we looked at the current use of AI by medical practitioners in their research and practice, as well as some of the new challenges and perceived benefits. Through this, we also raise certain concerns about its current use, and about the costs and benefits of the work that doctors and medical professionals now have to put into AI systems as these systems are developed. There are four big issues we want to raise. The first is that, in the short term, doctors have to put in additional time and effort in preparing data through labeling and annotation, but also in learning these technologies and providing feedback on AI models. These are real costs that need to be considered before we burden an already overburdened health care system. For example, in our survey, nearly 60% of medical practitioners cited the lack of AI-related training and education as a big barrier to the adoption of AI systems. Doctors also raised concerns about the effort and infrastructure required on their side to digitize health reports, given the nascent stage of digital health data in India’s health care system today. 
The second issue we want to foreground is the current use of AI in private health care, and much less so in public health care, which is where there is a far larger need for meaningful interventions, efficiency, time savings, and meaningful health outcomes. This raises the question: whom does it serve, and whom is it privileging, in the way it currently operates? The third issue, and a critical one, is liability. Academics and medical professionals in our study flagged the issue of liability: for instance, who would be liable for an error in diagnosis made by an AI application that aids medical professionals? A common concern we heard from doctors and academics was that AI was meant to assist doctors, but often enough doctors felt the pressure that AI could take their place, or was threatening to. The last issue we want to raise is the longer-term impact of AI. In our survey, 41% of medical professionals suggested that AI could be beneficial in saving time, but also in improving clinical decisions. The question we ask is: what risks does this raise through over-reliance on AI, leading, say, to a loss of clinical skills, or to the representational biases that AI models may present because of where the data is coming from, the problems of reliance on global north data, and so on. Lastly, we argue that if we are to prioritize AI, we should prioritize areas where it could be of most benefit and in the larger public interest, with the least disruption to existing workflows, and be considerate of whether the benefits actually outweigh the costs.

Luca Belli: Excellent. Now we are going to start to see how the global majority is reacting to AI, and what kind of innovative thinking and solutions are being put forward, in our last section. Then we will hopefully open the floor for debate. As we started with some minutes of delay, I hope our colleagues will indulge us and give us five extra minutes. We have now Elise Racine from the University of Oxford. Do we have Elise here? Yes, please go ahead.

Elise Racine: Hi, everyone. I shared a presentation PDF in the chat.

Elise Racine: I’m Elise Racine, a doctoral candidate at the University of Oxford. I study artificial intelligence, including reparative practices. AI really does promise transformative societal benefits, but it also presents significant challenges in ensuring equitable access and value for the global majority. Today, I’ll introduce reparative algorithmic impact assessments, a novel framework combining robust accountability mechanisms with a reparative praxis to form a more culturally sensitive, justice-oriented methodology. The problem is multifaceted. The global majority remains critically underrepresented in AI design, development, deployment, research and governance. This leads to systems, as we’ve discussed, that not only inadequately serve but often harm large portions of the world’s population. For example, AI technologies developed primarily in Western contexts often fail to account for diverse cultural norms, values, and social structures. While traditional algorithmic impact assessments provide valuable accountability mechanisms, they often fall short in ameliorating injustices and amplifying marginalized and minoritized voices. Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action. First, socio-historical research, delving into the context and power dynamics that shape AI systems. Second, participant engagement and impact-harm co-construction that goes beyond tokenism and redistributes power. Third, sovereign and reparative data practices that incorporate decolonial, intersectional principles while ensuring communities retain control over their information. Fourth, ongoing monitoring and adaptation, focused on sustainable development and adjusted based on real-world impacts. And the fifth and last step is redress: moving beyond identifying issues to implementing concrete, actionable plans that address inequities. 
To illustrate these steps in practice, consider a US-based company deploying an AI-powered mental health chatbot in rural India. A reparative approach may, for instance: employ information specialists with data curation and archival expertise to ground the socio-historical research in actual reality; implement flexible participation options with fair compensation and mental health support to drive meaningful community engagement; establish community-controlled data trusts; develop new evaluation metrics that incorporate diverse cultural values and priorities; and partner with local AI hubs and research institutes that empower communities to develop their own AI capabilities. These are just several examples; there are a few more in the PDF, as well as in the report. Through this comprehensive approach, I want to emphasize how reparative algorithmic impact assessments move beyond merely avoiding harm to actively redressing historical, structural, and systemic inequities, including colonial legacies in their algorithmic manifestations. That was a large focus of the paper. By doing so, we can foster justice and equity, ultimately ensuring AI truly serves all of humanity, not just a privileged few. Thank you very much.

Luca Belli: Thank you very much. We are almost done with our speakers. We have now Hellina Hailu Nigatu from UC Berkeley. Please, Hellina, the floor is yours.

Hellina Hailu Nigatu: Thank you. I am going to share my screen real quick. Okay. Hi, everyone.

Hellina Hailu Nigatu: My name is Hellina, and today I’ll briefly present our work with my collaborator, Zirak. Social media platforms such as YouTube, TikTok, and Instagram are used by millions of people across the globe. And while these platforms certainly have their benefits, they are also a playground for online harm and abuse. Research showed that in 2019, the majority of the content posted on YouTube was created in languages other than English. However, non-English speakers are hit the hardest with content that they, quote, “regret watching.” Social media platforms have also resulted in physical harm: Facebook faced backlash in 2021 for its role in fueling violence in Ethiopia and Myanmar. With this in mind, we take a look at how platforms protect their users. Platforms rely on automated content moderation systems or human moderators. For instance, Google reported that 81% of the content flagged for moderation is self-detected, that is, detected by automated systems and then redirected to human reviewers. Additionally, Google uses machine translation tools in its moderation pipeline. However, automated systems do not work well for all languages: research shows that the intersection of social, political and technological constraints results in disparate performance for languages spoken by the majority of the world’s population. In terms of human moderators, the Google Transparency Report states that about 81% of human moderators operate in English, and of the non-English moderators, only 2% operate in languages other than the highly resourced European ones. “Majority world” is a term coined by Shahidul Alam to refer to what were mostly called third-world or developing nations, global south communities, et cetera. The term “global majority” emphasizes that collectively these communities comprise the majority of the world’s population. 
And as with their size, these communities are very diverse in terms of race, ethnicity, economic status, culture and languages. Within NLP, the majority world is exposed to harm and marginalization because they are excluded from state-of-the-art models and research. They are hired for pennies on the dollar as moderators with little to no mental or legal support. They’re exposed to harmful content when conducting their jobs as moderators and they are harmed by the failures of existing moderation pipelines. With the cycle of harm we see, there are two major lines of argument on including or not including these languages and their communities in AI. Either you are included in the current technology and as a result are surveilled, or you are left in the trenches with no protection or support. We argue that this is a false dichotomy in our paper and ask that if we remove the guise of capitalism that currently dominates content moderation landscape, is there a way to have moderation with the power primarily residing in the users?

Luca Belli: Thank you so much. Excellent. Now we have only two speakers to go. Isha Suri is the Research Lead at the Center for Internet and Society. Please, the floor is yours.

Isha Suri: Thank you, Professor Luca. I’ll just quickly share my screen. I’m joined by my co-author Shiva Kanwar, and we looked at countering false information and policy responses for the global majority in the age of AI. I’ll quickly give you a teaser and a rundown, and we’d be happy to take any questions. Sorry, something is wrong with my screen sharing. As background and context: the World Economic Forum recognizes false information, including misinformation and disinformation, as the most severe global risk anticipated over the next two years. Multiple studies have demonstrated that social media is designed to reward and amplify divisive content, hate speech and disinformation. For instance, an internal Facebook study revealed that its newsfeed algorithms exploit the human brain’s attraction to divisiveness and, if left unchecked, would feed users more and more divisive content to gain user attention and increase time on the platform. One of the factors that emerged is that these integrated structures and profit-maximizing incentives ensure that platforms continue to employ algorithms recommending divisive content. For instance, a team at YouTube tried to change its recommender systems to suggest more diverse content, but realized that their engagement numbers were going down, which was ultimately impacting their advertising revenue, and they had to roll back some of these changes. This, as we found, leads to a lot of harmful, divisive content being promoted on these systems. We then delve into the regulatory responses emerging from global majority countries, which we found fell into one of three large categories. One was amendments to existing laws, including penal codes, civil law, electoral law, and cybersecurity law, where the focus was largely on ascribing criminal liability in cases where false information is defined broadly. 
We later found that this carries significant risks of censorship. In our paper, we also go into an India-specific case study, where empirical research has demonstrated that platforms over-comply, leading to a chilling effect on freedom of speech and expression. Another emerging aspect is that legislative proposals are transferring obligations to internet communication corporations; largely, the intermediary liability regime is being tinkered with. Legislation is being tied to the size of a platform. The German example comes to mind, where a for-profit platform with more than 2 million users has additional obligations to take down manifestly illegal and other illegal content. There are also ex-ante obligations on intermediaries, such as under the Digital Services Act in the EU. The Digital Services Act is an important one, I think, because it is one piece of legislation that really places an obligation on platform providers to be more transparent about how their algorithms work. In addition to regulatory responses, fact-checking initiatives have also emerged as a way to counter false information. Meta’s fact-checking initiative is the one that has probably gained the most prominence, but again, it raises questions of inherent conflict. There are also concerns about the payment methods, how Meta pays or reimburses these fact-checkers, and a lack of clarity about whether there is sufficient independence within the organization as such. We also see a trend among global majority countries to mimic EU or global north regulations, also known as the Brussels effect. With this, I’ll segue into our conclusions and recommendations and tie together what we’ve discussed in the past few minutes. This is the broad table that we have in the essay. 
I won’t dwell on it, but just to give you an overview: we categorized these countries and looked at what the instrument and response is, what the criminal sanctions are, whether they have introduced an intermediary liability framework, and whether they have introduced transparency and accountability obligations. The European Union and Germany are given as examples because we felt they have additional transparency and accountability requirements, as opposed to some of the other countries you see on your screen.

Isha Suri: So I’ll stop here, and thank you so much.

Luca Belli: Fantastic. And now, last but of course not least, Dr. Guangyu Qiao-Franco from Radboud University. Dr. Qiao-Franco, the floor is yours.

Guangyu Qiao-Franco: Thanks, Professor Belli. And thanks for staying around for my presentation.

Guangyu Qiao-Franco: So my contribution is co-hosted with Mr. Mahmoud Javadi of Free University Brussels, who is also present online today. So our research is on military AI governance. And in our paper, we highlight the concerning and widening gap between the North and the South in military AI governance. One striking observation is the limited and decreasing participation of global South countries in UN deliberations on military AI. Between 2014 and 2023, fewer than 20 developing countries contributed to UNCCW meetings on lethal alternative weapons systems on a regular basis. Our interviews indicate different priorities in AI governance. While the global North emphasizes security governance and ethical frameworks, the global South prioritizes economic development and capacity building. The North and the South also diverge in their preferred approaches to military AI governance. Most developing countries prefer a legal ban on autonomous weapon systems, while the North favors soft law approaches represented by the re-aimed blueprint for action and the US political declaration. However, these North-led frameworks have received limited endorsement from global South countries. And notably, none of the BRICS member states, key players in global innovation, have endorsed these documents. Global South’s participation in military AI governance is further complicated by the dual user nature of AI technologies, geopolitical tensions, stricter access control led by the global North, and concerns about hindering AI development for security reasons have contributed to disengagement among global South nations. So we want, in our paper, and also want to use this opportunity to call for the building of an inclusive AI arms control regime that begins with a thorough assessment of the distinct needs and priorities of both the North and the South, fostering international dialogue, building trust, and promoting partnerships are essential to bridging the divide. 
Capacity building and knowledge transfer must also be prioritized to incentivize responsible technology use and encourage broader, more active engagement. So I will stop there, and thanks for your attention. Thank you very much for this.

Luca Belli: This has been an incredible marathon, very intense. We have a lot of food for thought. I am pretty sure people in the audience who have been with us over the past hour and a half have comments. If we can, I would take one or two very quick questions or comments from the room, if we have them. Otherwise, we can have them over coffee. Is there any? Yes, I see one, only one comment. Good, fantastic. So can anyone give a mic? Otherwise, I can lend mine. You can borrow mine. Hello. We’ll work very quickly.

Audience: Thiago Moraes, PhD fellow from VUB. Some of my peers are here today, both on site and online, which is great, and also several colleagues whom I like a lot. So, going very quickly, and seeing the work that the IGF has been doing: last year, I was able to be an author of the last edition, so it’s very nice to see the initiatives that are being discussed in the document. What I was thinking here: there have been some ongoing discussions through the IGF about how regional IGFs could contribute to the global discussion. And I was just thinking, maybe the cases, especially from the global majority, could be discussed in these regional IGFs, and we could try to find some way of showcasing them and then making these connections, especially now that we’re discussing the WSIS, like the renewal of the mandate and how we could make multi-stakeholderism a bit more concrete in action. Maybe that could be an interesting way. We can talk more later. I know we don’t have time, so I’ll stop here.

Luca Belli: All right, fantastic. Thank you. Can you hear me? OK, so this is not working. All right, so thank you very much, everyone, for the comments, the presentations, and for respecting the time. Just to remind you, there are still four copies of the book available here, and people are eager to have them. So you can still have four, or actually five, as I don’t need one. And you can download it for free. Actually, as there will be only six months between now and the next IGF, and we have carefully presented this volume and also the volume of last year, we might use the occasion of the next IGF to have a debate building on what we have presented this year and last year. Maybe, for those who are interested, that could mean building a paper on the achievements that we have showcased in this volume and in the volume of last year. This could also be a good way of building upon what Thiago was mentioning: trying to connect the dots, showing the rationale behind all this, and showing the complexity. If something is clear from the very intense debate and presentations of today, it is that there are not only a lot of problems, but also a lot of thinking and a lot of potential solutions that can come from the global South. A lot of challenges, of course, but there is also a lot of room to improve things and to collectively identify problems and potential common solutions. So let me thank everyone for the very insightful work, which I urge everyone to read in this volume, and for the excellent presentations of today. Thank you very much.

A

Ahmad Bhinder

Speech speed

134 words per minute

Speech length

741 words

Speech time

330 seconds

Developing regional AI strategies and policies

Explanation

Ahmad Bhinder discusses the Digital Cooperation Organization’s efforts to develop AI readiness assessment tools and data privacy principles for member states. The organization is working on creating frameworks to evaluate AI systems against ethical principles and human rights considerations.

Evidence

DCO is developing an AI readiness assessment tool for member states and drafting data privacy principles that consider AI implications.

Major Discussion Point

AI Governance Frameworks and Approaches

Developing ethical AI principles and assessment tools

Explanation

Ahmad Bhinder describes the DCO’s work on creating frameworks to assess AI systems against ethical principles. They are developing tools to help AI developers and deployers evaluate their systems’ compliance with ethical considerations.

Evidence

DCO is creating a framework that maps AI ethical principles to basic human rights and developing a tool for AI system developers to assess their systems against these principles.

Major Discussion Point

AI Ethics and Human Rights

Differed with

Rachel Leach

Differed on

Focus of AI governance

A

Ansgar Koene

Speech speed

150 words per minute

Speech length

386 words

Speech time

153 seconds

Addressing biases and discrimination in AI systems

Explanation

Ansgar Koene discusses the challenges of identifying and addressing the impacts of AI systems on different groups, particularly young people. He emphasizes the need for organizations to understand how their AI systems affect users and to address potential biases.

Evidence

Koene mentions that organizations often do not fully understand how young people interact with their AI systems or what particular concerns need to be taken into account.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Bianca Kremer

Agreed on

Addressing biases and discrimination in AI systems

M

Melody Musoni

Speech speed

145 words per minute

Speech length

810 words

Speech time

333 seconds

Creating inclusive AI governance frameworks for the global majority

Explanation

Melody Musoni discusses recent developments in AI governance in Africa, including the adoption of a continental strategy on AI. She highlights the priorities for AI development in Africa, including human capital development, infrastructure, and building an AI economy.

Evidence

Musoni mentions the African Union’s adoption of a continental strategy on AI and the development of a data policy framework to support member states in utilizing data.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreed with

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

B

Bianca Kremer

Speech speed

149 words per minute

Speech length

725 words

Speech time

290 seconds

Addressing biases and discrimination in AI systems

Explanation

Bianca Kremer discusses her research on algorithmic racism and its impact in Brazil. She highlights the need to understand and address the economic impacts of algorithmic bias, particularly in the context of facial recognition technologies used in public security.

Evidence

Kremer cites research showing that 90.5% of those arrested in Brazil using facial recognition technologies are black and brown individuals.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Ansgar Koene

Agreed on

Addressing biases and discrimination in AI systems

Addressing the economic impact of algorithmic racism

Explanation

Bianca Kremer emphasizes the importance of understanding the economic impacts of algorithmic racism. She is conducting research to assess the economic losses for individuals, ethnic groups, and society as a whole due to biased AI systems in law enforcement.

Evidence

Kremer mentions her ongoing research on the economic impact of algorithmic racism in digital platforms, focusing on developing indicators to measure these impacts.

Major Discussion Point

AI Impact on Labor and Economy

L

Liu Zijing

Speech speed

133 words per minute

Speech length

424 words

Speech time

190 seconds

Implementing AI in smart court systems

Explanation

Liu Zijing discusses China’s implementation of AI in its judicial system, including the development of large language models for legal research and reasoning. The presentation highlights the use of AI in various aspects of the judicial process, from pre-litigation mediation to criminal cases.

Evidence

Liu mentions specific AI systems implemented in Chinese courts, such as the Faxin system by the Supreme Court and the Phoenix system in Zhejiang province.

Major Discussion Point

AI in Judicial Systems

Y

Ying Lin

Speech speed

122 words per minute

Speech length

366 words

Speech time

179 seconds

Addressing transparency and fairness concerns in AI-assisted judicial decisions

Explanation

Ying Lin discusses the challenges and concerns related to the use of AI in judicial systems. She highlights issues of transparency, due process, and the potential weakening of judicial accountability when decision-making authority is delegated to AI assistants.

Evidence

Lin raises questions about judges’ understanding of AI algorithms and the potential for AI to make up information, emphasizing the need for human oversight and explainable AI in judicial processes.

Major Discussion Point

AI in Judicial Systems

L

Luca Belli

Speech speed

150 words per minute

Speech length

2525 words

Speech time

1008 seconds

Adopting a multi-stakeholder approach to AI governance

Explanation

Luca Belli emphasizes the importance of a multi-stakeholder approach in AI governance, particularly in addressing cybersecurity challenges. He argues that cooperation between technical experts and policymakers is necessary to identify the best tools and standardization measures for AI governance.

Evidence

Belli cites his research on AI and cybersecurity in Brazil, highlighting the need for multi-stakeholder cooperation to implement effective security measures.

Major Discussion Point

AI Governance Frameworks and Approaches

R

Rodrigo Rosa Gameiro

Speech speed

160 words per minute

Speech length

599 words

Speech time

223 seconds

Ensuring equitable access to AI technologies

Explanation

Rodrigo Rosa Gameiro discusses the dual nature of AI development, highlighting both its benefits and challenges. He emphasizes the need to ensure that AI technologies serve all populations and uphold human rights principles for everyone.

Evidence

Gameiro mentions examples of AI benefits in healthcare, such as enabling new diagnoses and accelerating drug development, while also pointing out the digital divide and unequal access to these technologies.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

C

Catherine Bielick

Speech speed

167 words per minute

Speech length

739 words

Speech time

264 seconds

Creating inclusive AI governance frameworks for the global majority

Explanation

Catherine Bielick proposes using the Paris Agreement as a model for international AI governance. She suggests adopting a framework that allows for collective response with differentiated responsibilities, localized flexibility, and regular reviews to ensure accountability and progress.

Evidence

Bielick outlines five core features from the Paris Agreement that could be applied to AI governance, including nationally determined contributions and a global stocktake mechanism.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreed with

Melody Musoni

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

S

Sizwe Snail ka Mtuze

Speech speed

115 words per minute

Speech length

450 words

Speech time

232 seconds

Exploring AI development and challenges in Africa

Explanation

Sizwe Snail ka Mtuze discusses recent developments in AI policy and strategy in South Africa. He highlights the country’s efforts to create a national AI strategy and policy framework, while also noting the mixed reception these initiatives have received.

Evidence

Snail ka Mtuze mentions the South African draft AI strategy published earlier in the year and the subsequent national artificial intelligence policy framework released in August.

Major Discussion Point

Regional Perspectives on AI Development

S

Stefanie Efstathiou

Speech speed

128 words per minute

Speech length

501 words

Speech time

233 seconds

Ensuring equitable access to AI technologies

Explanation

Stefanie Efstathiou emphasizes the need for inclusive AI governance that serves the global majority equitably. She highlights the importance of youth participation in shaping AI’s trajectory and calls for amplifying diverse voices in policymaking processes.

Evidence

Efstathiou mentions examples such as data protection-driven policies for student privacy in Africa and youth-led innovation hubs in Latin America.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Agreed on

Need for inclusive AI governance frameworks

Y

Yonah Welker

Speech speed

135 words per minute

Speech length

374 words

Speech time

165 seconds

Ensuring equitable access to AI technologies

Explanation

Yonah Welker discusses the challenges and opportunities in developing AI technologies for people with disabilities. He emphasizes the need for original solutions tailored to specific languages and contexts, rather than relying on generalized models like ChatGPT.

Evidence

Welker mentions that there are over 120 companies working on assistive technology and highlights the need for dedicated safety models and environments for complex and high-risk AI applications.

Major Discussion Point

AI Ethics and Human Rights

Agreed with

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Agreed on

Need for inclusive AI governance frameworks

E

Ekaterina Martynova

Speech speed

145 words per minute

Speech length

545 words

Speech time

224 seconds

Examining AI development and regulation in Russia

Explanation

Ekaterina Martynova discusses the current state of AI development and regulation in Russia. She highlights the government’s increased spending on AI development and the cautious approach to regulation, focusing on developing technology rather than hindering it through strict laws.

Evidence

Martynova mentions the use of AI in public services, healthcare, and public security in Russia, as well as the development of sandboxes for AI testing.

Major Discussion Point

Regional Perspectives on AI Development

Differed with

Rocco Saverino

Differed on

Approach to AI regulation

Protecting human rights in AI development and deployment

Explanation

Ekaterina Martynova discusses the human rights concerns associated with AI use in Russia, particularly in public security and facial recognition systems. She emphasizes the need for more safeguards and transparency in AI deployment.

Evidence

Martynova mentions a case considered by the European Court of Human Rights regarding procedural safeguards for people detained through facial recognition systems in Russia.

Major Discussion Point

AI Ethics and Human Rights

R

Rocco Saverino

Speech speed

117 words per minute

Speech length

408 words

Speech time

207 seconds

Analyzing AI governance trends in Latin America

Explanation

Rocco Saverino discusses the emerging AI regulations in Latin America, focusing on Brazil and Chile. He highlights the establishment of specialized AI regulatory bodies and the integration of AI governance into existing data protection frameworks.

Evidence

Saverino mentions Brazil’s Bill 2338 of 2023 and Chile’s proposed AI technical advisory council and data protection agency to enforce AI laws.

Major Discussion Point

Regional Perspectives on AI Development

Differed with

Ekaterina Martynova

Differed on

Approach to AI regulation

R

Rachel Leach

Speech speed

159 words per minute

Speech length

444 words

Speech time

166 seconds

Examining the environmental and social costs of AI development

Explanation

Rachel Leach discusses the environmental and social impacts of AI development, particularly in Brazil and the United States. She argues that current regulations are furthering AI development without properly addressing the harms caused by AI and big data systems to the environment.

Evidence

Leach mentions Brazil’s $4 billion BRL investment in AI development and the U.S. Executive Order on AI, which both focus on the benefits of AI without fully examining its environmental impacts.

Major Discussion Point

AI and Environmental Concerns

Differed with

Ahmad Bhinder

Differed on

Focus of AI governance

Considering embodied carbon in AI technologies

Explanation

Rachel Leach emphasizes the importance of considering embodied carbon in AI technologies. This includes the environmental impact of all stages of an AI product’s lifecycle, from raw material extraction to energy consumption for training and retraining models.

Evidence

Leach mentions that environmental costs often fall harder on the global majority, citing examples of U.S.-based companies locating data centers in Latin America, exacerbating issues such as droughts.

Major Discussion Point

AI and Environmental Concerns

A

Avantika Tewari

Speech speed

138 words per minute

Speech length

604 words

Speech time

262 seconds

Analyzing the exploitation of digital labor in AI development

Explanation

Avantika Tewari discusses the hidden human labor behind AI systems, particularly in data annotation and moderation. She argues that this labor, often outsourced to workers in the global majority, is underpaid and unacknowledged, perpetuating digital exploitation and colonial legacies.

Evidence

Tewari mentions platforms like Amazon Mechanical Turk that outsource tasks to workers in the global majority, reducing them to fragmented, repetitive tasks.

Major Discussion Point

AI Impact on Labor and Economy

A

Amrita Sengupta

Speech speed

183 words per minute

Speech length

583 words

Speech time

190 seconds

Examining the impact of AI on medical practitioners’ work

Explanation

Amrita Sengupta discusses the challenges and benefits of AI adoption in healthcare, based on a study of medical practitioners in India. She highlights issues such as the additional time and effort required for data preparation, concerns about liability, and the potential long-term impacts on clinical skills.

Evidence

Sengupta cites survey results showing that 60% of medical practitioners expressed lack of AI-related training as a barrier to adoption, and 41% suggested AI could be beneficial for time-saving and improving clinical decisions.

Major Discussion Point

AI Impact on Labor and Economy

E

Elise Racine

Speech speed

137 words per minute

Speech length

444 words

Speech time

194 seconds

Implementing reparative algorithmic impact assessments

Explanation

Elise Racine introduces the concept of reparative algorithmic impact assessments as a framework to address inequities in AI development and deployment. This approach combines accountability mechanisms with reparative practices to create a more culturally sensitive and justice-oriented methodology.

Evidence

Racine outlines five steps in the reparative algorithmic impact assessment process, including socio-historical research, participant engagement, sovereign data practices, ongoing monitoring, and concrete redress plans.

Major Discussion Point

AI Governance Frameworks and Approaches

H

Hellina Hailu Nigatu

Speech speed

153 words per minute

Speech length

480 words

Speech time

187 seconds

Addressing challenges in AI-powered content moderation for diverse languages

Explanation

Hellina Hailu Nigatu discusses the challenges of content moderation on social media platforms, particularly for non-English content. She highlights the limitations of automated systems and human moderators in effectively moderating content in languages spoken by the majority of the world’s population.

Evidence

Nigatu cites research showing that the majority of content posted on YouTube in 2019 was in languages other than English, yet 81% of human moderators operate in English.

Major Discussion Point

AI in Content Moderation and Misinformation

I

Isha Suri

Speech speed

162 words per minute

Speech length

750 words

Speech time

277 seconds

Developing policy responses to counter false information in the age of AI

Explanation

Isha Suri examines regulatory responses to false information in global majority countries. She discusses various approaches, including amendments to existing laws, transferring obligations to internet communication corporations, and fact-checking initiatives.

Evidence

Suri provides examples of regulatory responses from different countries, such as Germany’s approach to regulating platforms with more than 2 million users and the EU’s Digital Services Act requiring more transparency in platform algorithms.

Major Discussion Point

AI in Content Moderation and Misinformation

G

Guangyu Qiao-Franco

Speech speed

122 words per minute

Speech length

329 words

Speech time

161 seconds

Addressing the gap between North and South in military AI governance

Explanation

Guangyu Qiao-Franco highlights the concerning gap between the global North and South in military AI governance. She discusses the limited participation of global South countries in UN deliberations on military AI and the divergent priorities and approaches to governance between the North and the South.

Evidence

Qiao-Franco mentions that fewer than 20 developing countries contributed regularly to UNCCW meetings on lethal autonomous weapons systems between 2014 and 2023, and notes that none of the BRICS member states have endorsed North-led frameworks for military AI governance.

Major Discussion Point

AI Governance Frameworks and Approaches

Agreements

Agreement Points

Need for inclusive AI governance frameworks

Melody Musoni

Catherine Bielick

Stefanie Efstathiou

Creating inclusive AI governance frameworks for the global majority

Creating inclusive AI governance frameworks for the global majority

Ensuring equitable access to AI technologies

These speakers emphasize the importance of developing AI governance frameworks that are inclusive and consider the needs of the global majority, including youth participation and localized flexibility.

Addressing biases and discrimination in AI systems

Ansgar Koene

Bianca Kremer

Addressing biases and discrimination in AI systems

Addressing biases and discrimination in AI systems

Both speakers highlight the need to identify and address biases and discrimination in AI systems, particularly their impacts on different groups and in specific contexts like facial recognition technologies.

Similar Viewpoints

Both speakers address the hidden costs of AI development, with Leach focusing on environmental impacts and Tewari on labor exploitation, particularly in the global majority countries.

Rachel Leach

Avantika Tewari

Examining the environmental and social costs of AI development

Analyzing the exploitation of digital labor in AI development

Both speakers discuss challenges related to content moderation and misinformation in the context of AI, particularly focusing on the needs of diverse language communities and global majority countries.

Hellina Hailu Nigatu

Isha Suri

Addressing challenges in AI-powered content moderation for diverse languages

Developing policy responses to counter false information in the age of AI

Unexpected Consensus

Importance of regional and localized AI strategies

Ahmad Bhinder

Melody Musoni

Sizwe Snail ka Mtuze

Ekaterina Martynova

Rocco Saverino

Developing regional AI strategies and policies

Creating inclusive AI governance frameworks for the global majority

Exploring AI development and challenges in Africa

Examining AI development and regulation in Russia

Analyzing AI governance trends in Latin America

Despite representing different regions and contexts, these speakers all emphasize the importance of developing localized AI strategies and governance frameworks tailored to specific regional needs and priorities.

Overall Assessment

Summary

The main areas of agreement include the need for inclusive AI governance frameworks, addressing biases and discrimination in AI systems, considering the hidden costs of AI development, and developing region-specific AI strategies.

Consensus level

There is a moderate level of consensus among speakers on the importance of considering the needs and perspectives of the global majority in AI development and governance. This consensus suggests a growing recognition of the need for more inclusive and equitable approaches to AI governance globally, which could lead to more collaborative efforts in developing AI policies and frameworks that address the diverse needs of different regions and populations.

Differences

Different Viewpoints

Approach to AI regulation

Ekaterina Martynova

Rocco Saverino

Examining AI development and regulation in Russia

Analyzing AI governance trends in Latin America

Martynova discusses Russia’s cautious approach to AI regulation, focusing on developing technology rather than strict laws, while Saverino highlights Latin American countries’ efforts to establish specialized AI regulatory bodies and integrate AI governance into existing frameworks.

Focus of AI governance

Ahmad Bhinder

Rachel Leach

Developing ethical AI principles and assessment tools

Examining the environmental and social costs of AI development

Bhinder emphasizes developing ethical AI principles and assessment tools, while Leach argues that current regulations are furthering AI development without properly addressing environmental harms.

Unexpected Differences

Economic impact of AI

Bianca Kremer

Avantika Tewari

Addressing the economic impact of algorithmic racism

Analyzing the exploitation of digital labor in AI development

While both speakers address economic impacts of AI, their focus is unexpectedly different. Kremer examines the economic consequences of algorithmic racism in law enforcement, while Tewari highlights the exploitation of digital labor in AI development. This difference shows the diverse economic challenges posed by AI in different contexts.

Overall Assessment

Summary

The main areas of disagreement include approaches to AI regulation, focus of AI governance, addressing biases in AI systems, and economic impacts of AI.

Difference level

The level of disagreement among speakers is moderate. While there are differing perspectives on specific issues, there is a general consensus on the need for inclusive AI governance and addressing the challenges posed by AI technologies. These differences reflect the diverse contexts and priorities of different regions and stakeholders, highlighting the complexity of developing global AI governance frameworks that address the needs of the global majority.

Partial Agreements

Both speakers agree on the need to address biases in AI systems, but they focus on different aspects: Kremer emphasizes algorithmic racism in law enforcement, while Nigatu highlights content moderation challenges for diverse languages.

Bianca Kremer

Hellina Hailu Nigatu

Addressing biases and discrimination in AI systems

Addressing challenges in AI-powered content moderation for diverse languages

Both speakers advocate for inclusive AI governance frameworks, but they propose different approaches: Musoni focuses on regional strategies in Africa, while Bielick suggests adapting the Paris Agreement model for international AI governance.

Melody Musoni

Catherine Bielick

Creating inclusive AI governance frameworks for the global majority

Creating inclusive AI governance frameworks for the global majority

Takeaways

Key Takeaways

AI governance frameworks need to be inclusive and consider perspectives from the global majority

There are significant disparities in AI development and governance between the global North and South

AI has major impacts on labor, the economy, and the environment that need to be addressed

Ethical considerations and human rights protections are crucial in AI development and deployment

Regional approaches to AI governance are emerging, with varying priorities and challenges

Content moderation and countering misinformation are key challenges in the age of AI

AI is being implemented in judicial systems, raising concerns about transparency and fairness

Resolutions and Action Items

Develop more inclusive AI governance frameworks that incorporate perspectives from the global majority

Implement reparative algorithmic impact assessments to address historical inequities

Create open repositories and taxonomies for AI cases and accidents

Develop original AI solutions tailored to regional languages and contexts

Increase capacity building and knowledge transfer in AI between global North and South

Incorporate environmental justice concerns comprehensively in AI discussions and policies

Unresolved Issues

How to effectively balance AI development with environmental sustainability

Addressing the exploitation of digital labor in AI development

Resolving disparities in military AI governance between global North and South

Determining liability in AI-assisted medical decisions

Ensuring fairness and transparency in AI-powered judicial systems

Developing effective content moderation systems for diverse languages and contexts

Suggested Compromises

Adopting a co-regulatory approach to AI governance, balancing government oversight with industry self-regulation

Developing AI tools for content moderation that are inclusive of diverse languages and contexts

Balancing the need for AI development with environmental and social costs through comprehensive impact assessments

Implementing human-in-the-loop systems for AI in judicial decision-making to balance efficiency with fairness

Thought Provoking Comments

AI meets cybersecurity, exploring the Brazilian perspective on information security with regard to AI… even if formally it has climbed the cybersecurity index because it has adopted a lot of cybersecurity sectoral regulation like in data protection, like in telecoms sector, in the banking sector and so on, in the energy sector but the implementation is very patchy and not very sophisticated in some cases

speaker

Luca Belli

reason

This comment highlights the gap between formal regulations and actual implementation, revealing a critical issue in AI governance.

impact

It set the tone for discussing practical challenges in implementing AI governance frameworks, especially in developing countries.

We are developing a tool for assessment of AI readiness for our member states. This is a self-assessment tool, and we will make it available in a month’s time to the member states. It covers different dimensions of their AI readiness, including governance, but goes beyond governance to many other dimensions, for example capacity building and the adoption of AI

speaker

Ahmad Bhinder

reason

This introduces a concrete tool for assessing AI readiness, moving the discussion from theory to practical implementation.

impact

It shifted the conversation towards actionable steps countries can take to prepare for AI adoption and governance.

90.5% of those who are arrested in Brazil today with the use of facial recognition technologies are black and brown. The brown people in Brazil are called pardos. So, we have more than 90% of the population being biased with the use of technology.

speaker

Bianca Kremer

reason

This statistic starkly illustrates the real-world impact of AI bias, particularly on marginalized communities.

impact

It brought the discussion to focus on the urgent need to address AI bias and its societal implications, especially in diverse societies.

Platforms like Amazon Mechanical Turk outsource these tasks to workers in the global majority, reducing them to fragmented, repetitive tasks that remain unacknowledged and underpaid. These workers sustain AI systems that disproportionately benefit corporations in the global north, transforming colonial legacies into new forms of digital exploitation

speaker

Avantika Tewari

reason

This comment exposes the hidden labor behind AI systems and the exploitation of workers in the global majority.

impact

It broadened the discussion to include labor rights and global inequalities in AI development.

Reparative algorithmic impact assessments address these challenges through five steps that combine theoretical rigor with practical action.

speaker

Elise Racine

reason

This introduces a novel framework for addressing AI inequities, combining theory with practical steps.

impact

It moved the conversation towards concrete solutions and methodologies for creating more equitable AI systems.

Overall Assessment

These key comments shaped the discussion by highlighting the complex interplay between AI governance, societal impacts, and global inequalities. They moved the conversation from theoretical frameworks to practical challenges and potential solutions, emphasizing the need for inclusive, culturally sensitive approaches to AI development and governance. The discussion evolved from identifying problems to proposing concrete tools and methodologies for addressing these issues, particularly focusing on the perspectives and needs of the global majority.

Follow-up Questions

How can AI governance frameworks ensure equitable access to and promote development of AI technologies for the global majority?

speaker

Melody Musoni

explanation

This is a key policy question that needs to be addressed to ensure AI benefits are distributed fairly globally.

What are the economic impacts of algorithmic racism in digital platforms?

speaker

Bianca Kremer

explanation

Understanding the economic consequences could provide compelling arguments for policymakers to address algorithmic bias.

How can we develop more specific intersectional frameworks and guidelines for AI in healthcare and education, particularly for underserved populations?

speaker

Yonah Welker

explanation

This would help ensure AI applications in critical sectors like health and education are inclusive and beneficial for diverse populations.

How can we develop AI regulatory models that reflect the unique social, economic and cultural contexts of Latin American countries?

speaker

Rocco Saverino

explanation

This would allow Latin American countries to shape AI governance proactively in a way that suits their specific needs and contexts.

How can we prioritize AI development in areas that provide the most public benefit with the least disruption to existing workflows in healthcare?

speaker

Amrita Sengupta

explanation

This approach could help maximize the positive impact of AI in healthcare while minimizing potential negative consequences.

Is there a way to have content moderation with power primarily residing in the users, rather than being dominated by capitalist interests?

speaker

Hellina Hailu Nigatu

explanation

This could lead to more equitable and culturally sensitive content moderation practices.

How can we build an inclusive AI arms control regime that addresses the distinct needs and priorities of both the global North and South?

speaker

Guangyu Qiao Franco

explanation

This is crucial for developing effective global governance of military AI applications.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Main Session 3: Internet Governance and elections: maximising potential for trust and addressing risks

Session at a Glance

Summary

This discussion focused on Internet governance and elections, particularly addressing the challenges of maintaining information integrity and trust in the democratic process in the digital age. Panelists from various sectors and regions shared insights on the experiences of the 2024 “super election year” and discussed strategies to protect election integrity.

Key issues highlighted included the spread of misinformation and disinformation, the impact of artificial intelligence and deep fakes, and the need for better regulation of digital platforms. Panelists emphasized the importance of media literacy, fact-checking, and collaboration between stakeholders to combat these challenges. The discussion also touched on the specific difficulties faced by the Global South, including digital inequality and limited access to information.

Several initiatives were discussed, such as partnerships between tech companies and fact-checkers, training programs for journalists, and the development of AI detection tools. The role of civil society and NGOs in promoting digital literacy and resilience was stressed. Panelists agreed on the need for a multi-stakeholder approach to address these complex issues.

The discussion explored governance principles and mechanisms to protect electoral processes while upholding human rights. Suggestions included improving transparency in political advertising, strengthening data protection laws, and developing global standards for content moderation. The importance of balancing innovation with integrity was emphasized.

Participants highlighted the potential of the Internet Governance Forum (IGF) to facilitate global dialogue and cooperation on these issues. They called for a more coordinated approach between regional and global IGFs to maximize impact. The discussion concluded with a recognition of the ongoing nature of these challenges and the need for sustained efforts beyond election periods to safeguard democratic processes in the digital age.

Keypoints

Major discussion points:

– The challenges of misinformation, disinformation and foreign interference in elections in the digital age

– The need for multi-stakeholder collaboration and governance frameworks to protect election integrity

– The importance of media literacy, journalist safety, and access to reliable information

– The role of social media platforms and technology companies in addressing online harms

– The potential of the Internet Governance Forum to facilitate global cooperation on these issues

The overall purpose of the discussion was to examine the challenges to election integrity in the digital age and explore potential governance principles, tools and mechanisms to protect democratic processes while upholding human rights.

The tone of the discussion was largely serious and concerned about the threats to democracy, but also constructive in proposing solutions. There was a sense of urgency about addressing these issues, balanced with cautious optimism about the potential for multi-stakeholder cooperation. The tone became more action-oriented towards the end as participants offered final recommendations.

Speakers

– Pearse O’Donohue: Moderator

– Tawfik Jelassi: Director from UNESCO

– Lina Viltrakiene: Representative from the Lithuanian government

– William Bird: From Media Monitoring Africa

– Rosemary Sinclair: Chief Executive Officer of auDA (the .au Domain Administration)

– Daniel Molokele: Member of Parliament from Zimbabwe

– Sezen Yesil: Director of Public Policy at Meta

– Elizabeth Orembo: Researcher at the International Stakeholder Relations of ICT Africa

Additional speakers:

– Giacomo Mazzone: Member of EDMO (European Digital Media Observatory)

– Bruna Martins dos Santos: Organizer of the session

– Maha Abdel Nasser: From the Egyptian parliament

– Alexander Savnin: From Primorsky University in Russia

Full session report

Expanded Summary of Discussion on Internet Governance and Elections

Introduction:

This discussion, moderated by Pearse O’Donohue, brought together in-person and online panelists from diverse sectors and regions to explore the critical intersection of internet governance and election integrity in the digital age. The panel examined challenges, successful initiatives, and potential governance mechanisms to protect democratic processes while upholding human rights.

Key Challenges to Election Integrity:

1. Misinformation and Disinformation:

Multiple speakers, including Tawfik Jelassi, William Bird, Sezen Yesil, and Lina Viltrakiene, identified the spread of misinformation and disinformation as a significant threat to election integrity. This includes coordinated inauthentic behaviour on social platforms and the use of AI and deepfakes to create misleading content.

2. Attacks on Electoral Bodies and Journalists:

William Bird and Tawfik Jelassi highlighted the serious issue of attacks and intimidation against journalists and electoral management bodies, recognising it as a significant threat to press freedom and election integrity. Jelassi specifically noted the increased violence against women journalists.

3. Digital Inequality:

Elizabeth Orembo raised concerns about digital inequality limiting access to reliable information, particularly in the Global South. She also highlighted challenges related to data sharing and the need for proactive information from election management bodies.

4. Emerging Technologies:

Lina Viltrakiene and Sezen Yesil emphasised the threat posed by AI and deepfakes in creating misleading content. Yesil acknowledged these risks and discussed measures taken by platforms to address them.

5. Untrained Influencers:

Daniel Molokele pointed out the rise of influential but untrained social media personalities and podcasters affecting election integrity in Africa, highlighting the lack of regulation for these new media actors.

Successful Initiatives and Best Practices:

1. Multi-stakeholder Collaboration:

Several speakers emphasised the importance of collaboration between various stakeholders, including tech platforms, fact-checkers, authorities, and civil society.

2. Media Literacy and Digital Skills Education:

Tawfik Jelassi highlighted the importance of media literacy and digital skills education programmes in combating misinformation, mentioning UNESCO’s role in training journalists on election coverage and AI’s impact on elections.

3. Technical Measures:

Sezen Yesil discussed technical measures implemented by Meta, including detecting manipulated media, removing inauthentic accounts, and providing transparency in political advertising.

4. Public Reporting Platforms:

William Bird mentioned the development of public reporting platforms for online harms and suggested more nuanced labels to understand different types of misinformation.

5. Consolidated Monitoring Systems:

Lina Viltrakiene described Lithuania’s initiatives, including a consolidated monitoring system and collaboration between business and academia to address digital threats to elections.

6. European Digital Media Observatory (EDMO):

Giacomo Mazzone highlighted EDMO’s role in monitoring European elections and coordinating fact-checking efforts across the continent.

Governance Principles and Mechanisms:

1. Balancing Innovation and Integrity:

Rosemary Sinclair stressed the need to balance innovation with integrity and human rights protections in the digital sphere, emphasising the technical community’s role in maintaining DNS availability during elections.

2. Global Cooperation:

Lina Viltrakiene called for increased global cooperation and information sharing between democracies to address digital threats to elections.

3. Standardisation of Information Quality:

Daniel Molokele suggested the standardisation of quality information and news across regions, particularly in Africa.

4. Platform Accountability:

Lina Viltrakiene advocated for establishing clear legal responsibilities and potential penalties for digital platforms, while Sezen Yesil emphasised voluntary collaboration between platforms and authorities.

5. Information as a Public Good:

Tawfik Jelassi proposed treating information as a public good rather than a public hazard.

6. Ongoing Efforts:

William Bird stressed the importance of continuous efforts to combat misinformation outside of election periods.

Role of the Internet Governance Forum (IGF):

Rosemary Sinclair emphasised the potential of the IGF to facilitate global dialogue and cooperation on election integrity issues. She called for clarifying and strengthening the IGF’s role in addressing information integrity issues globally, developing more coordinated efforts between national, regional, and global IGFs, and potentially contributing to a global governance architecture.

Unresolved Issues and Future Directions:

1. Regulation of Influential Social Media Personalities:

The discussion highlighted the need for effective regulation of influential social media personalities and content creators, particularly in regions like Africa.

2. Addressing the Digital Divide:

Participants recognised the ongoing challenge of addressing the digital divide that limits access to reliable information in some regions.

3. Balancing Free Speech and Combating Misinformation:

The discussion touched on the complex issue of balancing free speech protections with the need to combat harmful misinformation.

4. Global Platform Accountability:

Questions remained about how to hold global platforms accountable across different national jurisdictions.

5. Standardised Definitions:

The need for developing common definitions and standards for identifying misinformation/disinformation was identified as an area for future work.

6. Internet Voting Systems:

An audience member raised concerns about the use of internet voting systems in some countries and the potential risks associated with them.

Conclusion:

The discussion underscored the complex and evolving nature of protecting election integrity in the digital age. While there was broad consensus on the challenges faced, the panelists emphasised the need for continued multi-stakeholder collaboration, enhanced digital literacy efforts, and the development of nuanced governance frameworks to address these critical issues. The role of the IGF in facilitating ongoing global dialogue and cooperation on these matters was highlighted as a key avenue for future progress. The moderator’s final remarks emphasised the importance of the multi-stakeholder process in addressing these challenges effectively.

Session Transcript

Pearse O’Donohue: Good afternoon. Welcome to this open session, the main session on Internet Governance and Elections. We want to focus on the issues around elections, maximising the potential for trust and addressing the risks to the democratic process. Already on Sunday morning, in day zero of this Internet Governance Forum here in Saudi Arabia, we had a session on misinformation, and in that session we also discussed the role of stakeholders in protecting election integrity and the right to information. This session will therefore be a discussion on the role of stakeholders in protecting information and election integrity, and in protecting citizen participation while mitigating the risks to electoral integrity. So for that, I would like to introduce our great panel of speakers, and I would like to start by saying hello to Ms. Sezen Yesil, who is Director of Public Policy at Meta. We also have Mr. William Bird, who is from Media Monitoring Africa, and Mr. Tawfik Jelassi from UNESCO. You’re welcome, Tawfik. And then we have online Ms. Rosemary Sinclair, who is the outgoing Chief Executive of auDA. You are both welcome online and it’s great to see you; we can see you on stage here. I will move to the seat. I beg your pardon. I am so sorry, Your Excellency. This is the problem of not having paper in front of me. I’m still not adapted. So we also have with us a representative from Zimbabwe, a Member of the Parliament of Zimbabwe, the Honourable Mr. 
Daniel Molokele. So the way we’re going to proceed with this panel is that I’m going to allow each of the panel members to make a brief opening statement in relation to a question which I will now ask. They’ll have three minutes to respond, and in the good tradition of the IGF we will then immediately allow for input from you, the audience, both here and online, on those questions, before I go back to the panel with some more detailed questions for which we have chosen specific subjects. That’s how we’d like to proceed, so, as I say, get ready: we would really like to encourage your participation, so that the output of this session will actually include some well-informed, actionable measures which can be taken. And I will say, in the context of the IGF, we know that there is so much that the multi-stakeholder platform this represents can do on such an important issue. So, to get us going, I’m going to ask the following question to all of our panel members. With more than 65 countries going to the polls, 2024 was marked by the biggest number of simultaneous elections in history, so some have called this the year of democracy. But looking now in retrospect, at the end of the year, how do you think it has gone? What worked and what didn’t work? So perhaps I can turn to you first, please.

Sezen Yesil: Thank you so much. Hello everyone, thanks a lot for hosting Meta on this panel. Internally at Meta we called it the year of elections too, so we knew it was coming and we prepared well. Before each election we make a risk assessment specific to that election, and this assessment informs our election integrity work at Meta. In 2024 we ran a number of election operations centers to monitor issues on our platforms continuously and to take action swiftly as needed. I can share a few observations from this year’s elections. First of all, in our actions we try to strike a balance between protecting voice and keeping people safe, and I must admit that it is one of the hardest jobs in the world. We have many policies, or rules, on what is and is not allowed on Meta platforms, and we remove content which violates those rules or policies. Throughout this year we decided to update some of our policies. For example, we updated our penalty system, per the feedback of the Oversight Board, to treat people more fairly and to give them more free expression. Secondly, we updated our policy on violence: people of course have every right to speculate on election-related corruption, but when such content is also combined with a signal of violence, we remove it. And I can say that those updates worked very well during this year’s elections. The second observation is about the prevention of foreign interference. In this year alone we removed about 20 coordinated inauthentic behavior (CIB) networks. Those networks consisted of hundreds of Facebook and Instagram accounts and pages, and they worked to mislead people and, unfortunately, to spread disinformation. We observed that some of the networks we disrupted moved to other platforms with fewer safeguards than ours. The last observation is about the impact of generative AI. 
So at the beginning of this year many people were very concerned about the potential negative impact of generative AI content on elections, such as deepfakes or AI-generated disinformation campaigns. To address these risks we took a lot of technical measures, and we signed an AI elections accord with other major tech companies to cooperate in combating threats coming from the use of AI in elections. We observed that the risks did not materialize in a significant way; the impact was modest and very limited in scope. For example, less than 1% of the fact-checked misinformation was AI-generated.

Pearse O’Donohue: That’s time, thank you. Sorry, Sezen, but you’re the first to suffer from the fact that, so we can hopefully have a good discussion, I’ll keep the speaking time short. Now I’ll go to the other end of our list of speakers here, to Mr. Tawfik Jelassi, Director from UNESCO. We’ll be very happy to hear your views on that same question: really, what do you think, how did the year go, and what worked and what didn’t work?

Tawfik Jelassi: Thank you very much, Mr. Chair. So you reminded us that this is the super election year, with 75 elections being held, involving half of the population of the world, and obviously this is a major test for democratic systems around the globe. What has worked well, to answer your question: I think there were some global efforts to protect election integrity from a process point of view. The second thing that worked well is the involvement of the youth and first-time voters in elections around the world, especially in countries where half of the population, sometimes even 60% of the population, is under the age of 25. We saw this major engagement; that’s good. What has not worked well is the exponential spread of disinformation and hate speech derailing the integrity of electoral processes, and perhaps casting doubt on, or eroding trust in, election outcomes and democratic institutions. Another thing that did not work well, which is a major challenge, is the safety of journalists covering elections. Many attacks happened against them, and there is a relatively high impunity rate for violence or crimes committed against journalists. The third thing that did not work well is the huge digital inequality that still exists, especially for marginalised groups, including women and persons with disabilities, who face major barriers to participation in public spaces. That is why we need to change the path forward. I think we need stronger regulatory frameworks to address harmful online content while protecting freedom of speech; when I say regulation, I am not referring to censorship, which is why I say while safeguarding free speech online. 
Second, we perhaps need to expand media and information literacy in the digital age, especially among young people and citizens. Finally, I would say that UNESCO is contributing to this global effort on media and information literacy in the digital age, but also through the UNESCO Guidelines for the Governance of Digital Platforms, published a year ago.

Pearse O’Donohue: Thank you very much, and some of those subjects that you’ve raised we will come back to in our detailed questions, but it’s a very clear view as to the main points that we must address, including, of course, intimidation and violence against journalists, and the digital gaps which themselves have an impact on the derailment of these elections. So, thank you. If I could now ask the same question to the first of our online panellists, Ms Elizabeth Orembo, who is a researcher at the International Stakeholder Relations of ICT Africa. I’d like to hear your views on that same question about how things went and what worked and what didn’t work. Please, Elizabeth.

Elizabeth Orembo: Thank you for the floor, and thank you for inviting me to this very important discussion. In my reflection, I would say that there are things that went well and things that didn’t go well, as far as I’m concerned. You might hear some chicken sounds behind me. So one thing that did go well is that stakeholders, even locally, even in Africa, because I work in the context of Africa, knew that this was coming. And with the rapid changes in technologies, they were aware that they needed to come together and tackle some of these risks. So some of those risks were tackled. But the challenges of the free flow of information itself, and with that I also talk about data, remained a problem. And when the free flow of information is not there, with challenges of policy, with challenges of infrastructure, with challenges also of media, then people don’t access information the same way, and it breeds a very fertile ground for misinformation and inequality. There’s also not that culture of data sharing, especially in the context of elections, and this brings that unevenness of access to information itself, and also misinformation. But that problem continued. It also meant that trust in election management bodies went down somewhat, because people are yearning for information, truthful information, and at the same time they’re getting mixed information. But also, at the same time, media is not equipped; it’s also a struggling industry trying to get important information to people. So that also breeds a fertile ground for misinformation. So data and information flow, I would say, was a major problem to me. But also, as much as the stakeholders came together to tackle misinformation, there was a bit of a challenge in bringing all stakeholders together. 
Because with data becoming more available, we also need more capacity to crunch data to get it to people. And those capacities differed as well, and were sometimes a challenge. So there was sometimes data availability, but challenges in making use of that data. Another persistent challenge, especially for us in the Global South, is reaching the tech companies. And with that, we also experienced regulatory challenges when it comes to crises during elections that can sometimes lead to internet shutdowns. I will stop there for fear of being time-limited.

Pearse O’Donohue: Well, thank you. And a very interesting perspective, including, not least, that last point with regard to the particular issues of the Global South. Hopefully we can come back to some of those questions as well. But now if I could turn to the next of our speakers here, William Bird from Media Monitoring Africa. Please, William. Thank you.

William Bird: It’s been a big year, but I want to just ask if people genuinely feel better about democracy having had 65, 70, 75 elections. Because the sense that I get from speaking to people is that, despite it being a year that should be a celebration of democracy, we don’t feel good about democracy, and I think that speaks to some fundamental changes. The first is the rise of fascism, and this is a very real problem for us in that I think it’s deepening polarization. It’s framing people that believe in and support human rights as left-wing extremists, just because you are talking about fundamental equality and dignity for all. And there’s something that’s happened, I think, that we also need to accept as a point of departure, about power structures. We’re no longer in a place where power is determined and messaging and narratives are framed by one or a few central entities. There’s now this wonderful possibility that almost anyone can have a view, and as much as that’s a good thing, we mustn’t throw the baby out with the bathwater, as the expression goes, right? Because we do need to make sure that there are certain common things that we can at least agree on. So, in terms of things that worked well, I was thinking about it last night and I came up with MECA (it seems appropriate), which stands for Media, Electoral Management Bodies, Civil Society, Collaboration and Adaptability. Some colleagues have touched on that sense of adaptability, of organizations and entities adapting to the emerging challenges. I think for media, we saw them facing huge problems across the continent, particularly in Southern Africa, but we also developed some mechanisms to start to assess how they perform and how they contribute. 
As for Electoral Management Bodies, in countries where there were big shifts of political power, like in South Africa and Botswana for example, we saw that where you’ve got a stronger, more credible Electoral Management Body, they’re able to still contribute and function despite being subjected to significant attacks. Civil Society, I think, worked really well, certainly in our experience in South Africa. They came up with research projects, they worked with universities. There was a reporting mechanism, Real 411, a public complaints platform. And they worked together, which is the next point, collaboration (can I finish?): we worked with the social media platforms Google, Meta, and TikTok, and with the electoral management body, and that did something really positive.

Pearse O’Donohue: Okay, thank you, William, for a new acronym, but at least a way of analysing the different issues. We will come back to that also. And now, I’m certainly not going to forget him this time, our next speaker is the Honourable Daniel Molokele, who is a Member of Parliament from Zimbabwe, please.

Daniel Molokele: Thank you so much. I will speak more from the African point of view. It was also a very big election year for Africa in 2024. I would say, as we end the year as a continent, we are generally happy with the election processes across Africa. We had largely peaceful and successful elections in countries such as South Africa, Madagascar, Botswana, and very recently Ghana. And we also managed to benefit from innovation around media and technologies, especially harnessing the youth population in elections. Generally, young people in Africa are very averse to elections; there is apathy. But I think this year we saw higher participation of young people as voters. We still need to see more young people as candidates or as elected representatives. We also saw the use of social media in a much more progressive way to mobilize people towards voter registration and, more importantly, to turn out as voters, including platforms such as TikTok, WhatsApp, Facebook, and X. So Africa is harnessing media technologies to also improve access to elections for average citizens. We also end the year on a very difficult note in countries such as Mozambique, where there is no peace at the moment. The post-electoral violence continues to escalate with no solution in sight. Last time I checked, over 100 civilians had died, mostly at the hands of security officials like the police and army in Mozambique. The election remains disputed, and we need a solution to that. Interestingly enough, there has been huge use of media technology, or innovative approaches to the use of media: the opposition leader is actually not in Mozambique at the moment, but he is able to provide leadership in Mozambique every day, and people are using access to media technologies to respond. It can be a bad thing, it can also be a good thing, but that’s the situation at the moment in Mozambique. Thank you.

Pearse O’Donohue: Thank you very much. So between what Daniel Molokele has said and William before him, we are faced with a number of issues where we need to consider the role of international online data and communications: William mentioned the rise of extremists as a result of the elections, and in Daniel’s case the actual fact of violence as part of the elections, leading even, terribly, to the death of citizens and individuals. And to what extent is misinformation, or digital or online information or platforms more broadly, contributing to those serious issues? So the next speaker is here with us: it is Ms. Lina Viltrakienė from the Lithuanian government. Please.

Lina Viltrakiene: Thank you very much and good afternoon everybody. Indeed, I would like to say that Lithuanians significantly contributed to this year of democracy by participating in three elections this year. We had presidential elections, we had elections to the European Parliament, and we also had the national parliamentary elections. So from the government perspective it was a challenge, and a lot of governmental institutions, including Lithuania’s Central Electoral Commission and a number of other institutions, worked hard and consolidated all their efforts in order to make these elections go smoothly and make them reliable. Particular attention was paid to ensuring that only legal sources of funding are used for electoral campaigns, that transparency is maintained with regard to the real expenditure of political parties and individuals on the media, that effective communication channels with the media are maintained, and that appropriate channels to detect disinformation and a comprehensive system to mitigate risks are established, to mention just a few. Some of these and other important requirements are covered by Lithuanian and European legal acts, like the election code, the criminal code, the political party law, and the Law on Provision of Information to the Public, to mention some of them. Thus, a solid legal environment is the first thing I would like to mention in the list of what worked. Another action which I would include in the same list is the established collaboration of responsible state institutions with the media, including with social platforms, which no doubt enlarged the public space and reinvigorated public debate during the election campaign. But on the other hand, all around the world we faced unprecedented scales of lies and disinformation, and deep fake statements of top politicians appearing especially on social platforms.
And this increased the threat of influencing people’s choices, seeding distrust in society and eroding trust in democratic institutions. You may know that in the EU, the Romanian and Bulgarian elections experienced significant interference by foreign actors via social media platforms, especially TikTok and Telegram. This shows us that we need to work further on continuous collaboration of platforms with state institutions, and that regulatory frameworks perhaps should be improved; as a model, I would like to refer to the EU’s Digital Services Act, which could really encourage that thinking. Thank you.

Pearse O’Donohue: Thank you very much, Lina. And if I could just add, working in the European Union for the European Commission, we also put in place a number of measures for monitoring the health of the European Parliament elections. We are still doing that assessment, but it is clear that some problems were avoided. But you did mention, among a number of other problems, for the first time the appearance of deep fakes, which can be very influential and turn people against an individual or a tendency or a party, and be very damaging even if they are very quickly identified as being fake, because sometimes the initial damage is done. Thank you. So now our last speaker who is in line, and thank you very much for your patience, is Ms. Rosemary Sinclair, who is the Chief Executive Officer of Australia’s auDA. Rosemary, the floor is yours.

Rosemary Sinclair: Thank you, Pearse, and many thanks for the opportunity to bring a technical community perspective to the panel. And I’d like to start with just a technical reminder, really, about the internet. It is, of course, a network of networks, 70,000 in total. It operates on open standards and common protocols to enable global interoperability. It’s made useful by the unique identifiers, the names and numbers, which are coordinated by ICANN, which in itself is an independent technical community that uses a multi-stakeholder approach. So I’m part of that technical community, and I’m responsible for auDA, which is the small company that administers .au, the country code for Australia. We focus on technical operations and performance and our domain name licensing rules. And we’re very strong supporters of the multi-stakeholder model of internet governance. When I think about 2024 and what worked and what didn’t work in that year of so many elections, the first point I want to make is that technically the internet worked. In Australia, we delivered 100% availability to users during the year. Every time a user wanted to access the domain name system, they could. Why was that important? It’s because the internet worked to share information, to provide communication and commerce, of course, to grow economies and standards of living. But there are a number of harms, and many of those have been mentioned just now. Misinformation and disinformation and fraud and others, and they are key challenges, particularly in such an election year. So the harms, of course, need policy work, and that’s what we’re here to talk about. And the tensions, as we see it, are between open information, secure identity and privacy for individuals. And the question really is how to balance those things. So practically speaking, during elections, we sometimes see at auDA increased requests from people to take down the websites of their political opponents.
And those requests are often made with claims of misinformation or disinformation. Those claims must be assessed by others who are authorised by law and skilled to make those judgements. Our response can only be based on our .au licensing rules and not on the political nature of the content or the requester. We’ve not yet seen the impact of AI on elections in Australia, but we’re expecting to have a national election next year, and we think that AI will be something that we need to watch during that process. So the policy work that we all have to contribute to is really a work in progress, and we see the Internet Governance Forum as the place for those discussions to take place across all the different perspectives, including our own technical perspective. Thank you.

Pearse O’Donohue: Thank you, Rosemary. And indeed, thank you for giving us the views of the technical community, and in particular referring to ICANN, but also the importance of the DNS in relation to the issues that we’re talking about, and again, of course, the need for independent verification and moderation with regard to any attempt to take down websites. It’s a two-edged sword. So thank you. So now, thank you to all of the panellists for that first round, and as I said, we are now going to see if anybody from the audience here in the conference room or, for that matter, online, would wish to make any inputs. I will ask that they are short, and to do so in time-honoured fashion, if that is the case, you need to come up to the front and use one of the microphones. So if anyone wants to do so, could you please identify yourself and the organisation you represent and please keep your input very short, two minutes as a very maximum. Thank you.

Giacomo Mazzone: Thank you very much. Giacomo Mazzone. I am a member of EDMO, the European Digital Media Observatory, that you know very well. I’m here reporting what we discussed in the workshop on day zero that was organised by EDMO, about the task force that worked on monitoring the integrity of the European election last year, compared with what happened in the US election and the South African election. The contribution that we can give you is that the assessment of what happened during the European election was very good, because there was a successful example of cooperation with the platforms, made in a multi-stakeholder way, in the sense that in a unique place, that is EDMO, you have academia, you have fact-checkers, you have institutions working together. Through the code of practice that the European Commission signed with a certain number of platforms, disinformation was brought to the attention of the platforms and the platforms would immediately react and behave. So we have been successful in removing things without enforcement, based on goodwill and cooperation. Unfortunately, what was reported by US friends was not exactly the same. They said that the level of cooperation in the US was not the same, and also that they lived through a very worrying experience, and this is important for our UNESCO people here, of pressure and intimidation on fact-checkers, trying to silence them and push them out of the public discourse. And in South Africa…

Pearse O’Donohue: Sorry, I’m going to have to ask you just to wrap up, please.

Giacomo Mazzone: Yes, the last point, to be complete, is about South Africa’s experience. They reported that any intervention by legislation is seen as censorship, which shows that situations differ: you need to find different ways to act in different cultural contexts, according to the situation. Thank you very much.

Pearse O’Donohue: Thank you, and thank you for those insights, and indeed as well the very useful workshop that took place on Sunday. We have another speaker, please. Again, your name and organization. Thank you.

Audience: Hello, Alexander Savnin, Primorsky University, from Russia. I would like to point out that amid this misinformation and data spread, the Internet may already be used by some governments for voting. In Russia, this year there were two sets of elections, one of which was actually a presidential election for Mr. Putin, and systems implementing Internet voting were used in these elections. Without the possibility of multi-stakeholder discussion on the implementation of this system, without the possibility to check trust, this system actually undermines the results of the elections altogether. Unfortunately, the implementation of these systems and the results of the elections are not very well observed or seen by the global community, but this brings another dimension to undermining trust and increasing the risk to fair elections. Thank you very much.

Pearse O’Donohue: Thank you, indeed. I’m just looking to see, do we have any online inputs? Anybody who’d like to take the floor or make a comment? And this is the way of giving the spotlight to Bruna, who has done all the organization for this session.

Bruna Santos: I would just echo a comment from Mokabedi, so just reading it out loud. Hi everyone, I’m Mokabedi from the Iranian academic community. Some cross-border digital platforms refuse to cooperate with the competent authorities of independent countries in immediately dealing with disinformation that meaningfully affects election results and harms public trust during elections, due to reasons and excuses including political reasons and sanctions. They even refuse to establish legal representation. My question to the panel is: what can be the legal and political solutions to solve this challenge and the double standards of digital platforms? Should maintaining the health and safety of online elections in different countries have a different degree of importance? That’s the one we have here. Thanks.

Pearse O’Donohue: Thank you, Bruna. And I will ask the panelists if there’s anything from what we’ve heard so far, particularly that last question, if you want to incorporate that in the responses when we come back to you for a discussion. Now we have a final participant from the floor, please. Thank you.

Audience: Thank you very much. My name is Maha Abdel Nasser. I’m from the Egyptian parliament. Actually, the problem is not just during elections, but it gets worse during the elections. We find those, what they call them, the electronic flies and so on; they attack anything we post, they spread a lot of disinformation, and they try to bring us down by all means, whether those people are backed by the regime or by opponents or by anyone. And even when we report, it takes a very long time for any action to be taken, if it is taken at all. So my question is: is there a possibility to have a platform, or anything between all those people, to report such attacks or such harassment, especially for politicians, and women politicians of course, so that action can be taken in a rapid way and we can get rid of these things, or not? Thank you.

Pearse O’Donohue: Thank you. Again, I hope that that question can be addressed. I will allow myself just to very briefly give a partial answer, but it is not the full answer: in the European Union, particularly now with the introduction of the Digital Services Act, we do have a requirement for the individual very large online platforms to have a facility for the reporting of such activities, and also centralized databases monitoring these issues. And by the way, verbal and online violence against women, and particularly female politicians, is something that we are particularly concerned about, as it is insidious and has long-term effects, as well of course as the effects on the individual. So these are issues which we must address; in the case of the European Union, we do see this as a necessity, the ability to report such incidents and hopefully to see quick action. But I’m sure that there are other experiences from around the world and we’re always willing to learn. So for that, thank you for your participation. We will have another slightly longer section at the end, and I hope that we have more participation here in the room and online, but we’re going to move on now to the second set of questions. Here we’ve broken them down between our expert panellists, and I’m going to start with William Bird and Liz Orembo. You’ve got the hardest job, because I’m going to ask both of you two questions and give you five minutes each to answer both of them. We’ve put them on screen and I hope that you can see them. Certainly: what evidence has come to light of information integrity being weakened through human rights or tech harms? How should the weakening of election integrity through these and other risks be identified? And that’s for William. And then, Liz, when we come to you, the question I’d like to ask you is: what are the implications or consequences of such risks to information integrity in elections? But we’ll come back to you, Liz, in a moment.
First of all, I’d like to hear William on the first question and you have five minutes, please. Thank you.

William Bird: So I love the point from one of the other people that a lot of these things occur outside of elections. What we see is these things occurring at a heightened level in an election period, but attacks against women online, for example, don’t stop just because it’s not an election period. So I think there are three things where we saw information integrity being weakened in South Africa specifically. Firstly, attacks against the electoral management body. These were multi-pronged and straight out of a disinformation playbook: they targeted the entity and its decisions, they spread rumours and mis- and disinformation, then they targeted individuals in there, and then they laced these various campaigns with kind of pseudo-legal challenges. And then they rely on a willing platform partner to scale the dirty work. In that instance, most of these things we saw in South Africa on the platform that was X, which was not part of our collaboration, and unsurprisingly so. The second issue is attacks against journalists and human rights defenders and those bodies. As an example, on X, over a two-week period, we saw over a thousand attacks against journalists, most of those actually against one journalist in particular. So clearly organized network behaviour, including issues linked to incitement.

William Bird: And then thirdly, the bigger impact of the decimation of the media, as we’ve seen them being systematically undermined as trusted systems. That feeds into that idea of media and polarization, that sense of people not knowing what’s actually going on, and then being unable to actually operate. So how should they be identified? You spoke about what’s happening in the EU. In South Africa, we’ve got a platform, Mars, where people can report attacks against journalists so that there’s a public archive of them, and we’ve also got the same thing for other online harms: mis- and disinformation, threats, and hate speech. And that’s also, again, a public platform that operates independently of the state so that the public begin to have faith in it. And critically, it applies the same standard, because what we found problematic is that what’s okay on one platform isn’t okay on another. And so that leaves the public thinking, well, what do I do here? If I want to report on X, nothing happens. If I report on Meta, it’s this process. If I report on this platform, it’s another whole process. So we’ve got a system that allows people to report any platform, and then you can take action.

Pearse O’Donohue: Thank you very much. And of course, consistency in application, and the individual’s confidence that, whatever the platform, they will have the ability to seek redress or at least to have the issue examined, is very important. Thank you. Now, turning to you, Liz: just to repeat, the question is, what are the implications or consequences of these risks to information integrity in elections, including the risks to civil and political rights, or interference by foreign actors, and so on? Please.

Elizabeth Orembo: Thank you. Well, I’d begin by first looking at the media environment. Much of the information in the media comes from the online environment, and vice versa. When there’s no information integrity on online platforms, it means that the media has to respond to a lot in the public interest: finding the misleading information out there and demystifying some of this misinformation for the public. There are also competing narratives: a lot of information coming online means that the media has to go through all that information and spotlight what the public should focus on, because people can also get overwhelmed with information coming from different media. But then again, we see the capacity issue of the media as well, because of the shift of advertisement and revenue to online spaces. So the media is also challenged there. What it means for human rights and civic rights is that people don’t vote from an informed position, because they miss a lot of information that can really be detrimental or be useful for voting for the right candidate. It also means that this will impact development issues, and development is a right that would enable them to enjoy also first-generation rights like freedom of expression, and that’s a problem there. The other one is incitement. Of course, when there’s no information integrity, there’s a lot of polarization happening online and offline that also has effects on marginalization. People who are further marginalized, and you mentioned women and girls: women who have been active change-makers at the grassroots level, when they try getting into the spaces of governance, face a lot of violence online and offline, and this really discourages them from pursuing government office or electoral office. That means that we are widening the inequalities there.
I would also like to point out that the African continent faces very different challenges and also very different contexts. We are at different levels of development and different levels of democratic progress, and that means that policies by the big platforms cannot just be applied blanketly, because some will not apply in some countries, given tech development contexts and democratic contexts that differ from others. Sometimes we see that there is not much investment, or tech companies get overwhelmed, when it comes to giving special attention to special contexts. This year, what we’ve seen, especially with Mozambique as the situation continues, is not really that tech platforms are not engaging there, but that there is no structured engagement to respond faster to the situation on the ground. Those are the challenges that we are seeing in most African countries: even when there’s attention, there isn’t that specialized attention on the ground, because most of these tech companies are not domiciled there. The other thing is, when we talk about information integrity and trust in electoral management bodies, sometimes there is a focus on electoral management bodies maintaining their reputation. But at the same time, for them to get trust from the public, there also needs to be an environment where proactive information comes from election management bodies, especially in the context of how they manage elections. Now, because of the different media access situations in Africa, either connectivity is uneven or access to media, even traditional media, is uneven. That means that even when they try to communicate on whatever platform, it doesn’t really reach people. That unevenness in information access also provides fertile ground for misinformation. Like I said, it also touches on what William Bird had mentioned. On this, I’d also like to touch on what we try to do at RIA.

Pearse O’Donohue: Just as quick as you can, please. Thank you.

Elizabeth Orembo: Yes. We are working on Mozambique, Ghana, and Tanzania, which is having elections next year. Our focus is on media coalitions and also on access to data for research. Another thing that we are seeing right now are the dilemmas around data sharing, data sovereignty, and whether to host elections data in the country or outside the country. I think I will stop there.

Pearse O’Donohue: Okay, I’m sorry that I had to interrupt you, but that was a very interesting analysis, and you identified quite a number of issues that need to be addressed, describing the consequences in some detail and obviously some lived experiences as to what happens. With that in mind, we’re now going to move on to the next set of panellists. This time the format is slightly different: we have one question and I’m going to ask that question to three panellists, and hopefully you can feed off one another. So I will start with Daniel Molokele. And the question is: what initiatives have successfully responded to challenges posed to information integrity in elections? And how is this success measured? And are such initiatives specific to a given time or place, or could they be used more widely around the world? Mr. Molokele, please.

Daniel Molokele: Thank you so much. Yes, there are several initiatives; most of them are just starting. But I wanted to highlight a very continental one which occurred in September. We met in Senegal as Africans at the Forum on Internet Freedom in Africa. One of the key pillars of this conference, with hundreds of delegates from across the continent, was access to information from the perspective of elections, especially knowing that in some instances in Africa we have seen governments using strategies such as internet shutdowns, where they create a complete blackout during the campaign period to force an advantage against the opposition. We’ve also seen instances where social media platforms like WhatsApp are restricted in terms of operation to make it difficult for people to access information. There is also the over-reliance on state media at the expense of independent media, and the shutting down of alternative media platforms, especially media houses that are seen to be sympathetic to the opposition. So we have started an annual meeting in which we will be able to get presentations and research and assessments on electoral processes and access to information. And also related to that, there is a parallel process around challenging policy frameworks and legislative frameworks that make it harder for people to access information, especially civil society, political parties that are not the ruling party, and journalists who are covering elections. Access to information laws in Africa are there, but some of them are designed in such a way that they create a more bureaucratic process. Ostensibly, they are supposed to increase access to information, but at the same time they make it harder for someone to access information. We also have such laws in Zimbabwe, where I come from, like the Official Secrets Act.
The Official Secrets Act can also be used to make it difficult to access specific information if it doesn’t create an advantage for the ruling party. So there is a lot that is happening, and we are seeing not just civil society coming into the space, but also research coming from universities, from schools that teach journalism and media studies, and that also helps us to have a more robust view around access to information and electoral integrity. Some of the ideas that are coming out are mostly unique to Africa, because Africa is also in a situation where there is a great digital divide with the rest of the world. The majority of people in Africa have no easy access to the internet and no easy access to mainstream media, so at the end of the day they are subjected to misinformation and disinformation, and a lot of state-funded propaganda. At the end of the day, it’s such a huge disadvantage; it makes it difficult for election systems to be free and fair, because without being properly informed you cannot make informed choices as a voter, and in most instances it favours the ruling elite in the continent. Thank you so much.

Pearse O’Donohue: Thank you. So, in suggesting some of the solutions, you’ve also identified one or two further problems that need to be addressed, some arising from your experience. So now I’d like to ask the same question to Lina. I will abridge it: what initiatives have successfully responded to the challenges, and are such initiatives specific to a given time or place, or could they be used more widely? Lina, please.

Lina Viltrakiene: Well, thank you very much. Indeed, measuring the impact of counter-disinformation initiatives is a really challenging task, but I would like to share with you several good practices which we developed in Lithuania and which could really be replicated worldwide. I will refer to three of them. First, in Lithuania we created a truly consolidated system for monitoring and neutralizing disinformation. We take a comprehensive whole-of-society approach to monitoring, analyzing and countering disinformation, involving not only state institutions but also a whole vibrant ecosystem of non-governmental organizations, media and business, which really helps to create resilience in society and also trust. In this context I would like to particularly stress the importance of NGOs in analyzing and countering disinformation, but also particularly in promoting digital and media literacy, including journalists working for or writing to audiences of national minorities, and developing learning programmes and different devices for vulnerable groups. We have the NGO Civic Resilience Initiative, which worked a lot on that. We have an important non-governmental organization, debunk.org. This institution also researches disinformation and runs educational media literacy campaigns. So indeed, developing media literacy and critical thinking is key to resilience against foreign information manipulation and interference. Another important element I would like to mention is the collaboration between business and academia to develop technical solutions. In Lithuania, we have a lot of collaboration between business and academia, and we have technologies, such as AI-driven tools, that can detect manipulated media, bots, and coordinated inauthentic behaviour; here, the collaboration between science, academia and business is really, really important.
We also have collaborations around reporting platforms, and so in Lithuania we really have a lot of people, a lot of members of society, participating in countering this disinformation. We have a very nice initiative, the Lithuanian Elves initiative, with many volunteers participating in countering disinformation, and this really works very well. The second practice I wanted to share with you, which is very much related to the first one, is a cross-sectoral approach to fighting disinformation, cooperating really closely at the national level. For this reason, we have brought together a team of experts under the framework of the National Crisis Management Centre, which indeed helps with the quick detection of, and rapid response to, disinformation or information incidents which could have a big influence. This National Crisis Management Centre coordinates strategic communications and also provides guidelines for possible responses to different information incidents. And our experts from this centre are really willing to share, and are sharing, their experiences of this effectively functioning cross-institutional framework with other countries. And finally, that brings me to my third point: sharing experiences among democratic states is really, really important. One such initiative we have in Lithuania is the Information Integrity Hub, operated by Lithuania and the OECD, which provides training for officials worldwide. This is a training programme offering an opportunity for OECD and non-OECD public officials to peer-learn and strengthen their capacities to detect, suppress and prevent foreign influence and disinformation.
And indeed, that is very effective when experts gather together and share the cases of disinformation they face; perhaps that could also form a kind of inventory of bad practices, which would then be easier to recognize when experts are working together, discussing and sharing that. Thank you.

Pearse O’Donohue: Thank you very much, Lina. So, now, the same question to Sezen Yesil. You’ve been waiting a long time since you last spoke, so again: what initiatives have successfully responded, and can they be used elsewhere? Please.

Sezen Yesil: Thank you so much. Oopsie. I hope that my answer will also address the questions from the audience, the one from the online participant and the one from my sister from Egypt. I know that women politicians are especially vulnerable, unfortunately, and we have special protections in place; after this session, if she kindly stays and meets me, I would like to explain in more detail. But I can say that we as Meta have a very well-established playbook on election integrity, and we keep improving it according to the lessons learned after major elections. Our measures are globally applicable, but we make a risk assessment for each election, specific to that country, and adjust our measures if needed. The online participant said that we don’t have a local representation, et cetera; that doesn’t matter, because all our measures are globally applicable. We have about 40,000 employees working on safety and security, and we have invested more than $20 billion in this area since 2016. There are five pillars in our election integrity work. The first is that we do not allow fake accounts. Our automatic detection tools block billions of accounts, often within a few minutes of creation. Second, we disrupt bad actors. We have taken down more than 200 coordinated inauthentic behaviour networks since 2017. As you know, those networks are used to mislead people, especially during election times. And we work in collaboration with law enforcement and security agencies, and with academia, researchers, et cetera, to identify those actors. Third, we fight against misinformation. It is a really tough issue, because nobody agrees on the definition of misinformation. For example, let’s say a politician says that they have the best economy in the world. What if the indicators do not agree with him? Are we going to remove that content and label it as misinformation? That won’t be appropriate. So we have a three-part strategy: remove, reduce, and inform.
Under remove, we do not allow misrepresentation of voting dates, voting locations, and times. We do not allow misrepresentation of who can vote, who can participate in elections, what documents are required, etc. Under reduce, we work with more than 90 third-party fact-checkers around the world, and they cover 60 languages to identify and rate viral misinformation. Rated content is not recommended in our systems, and its distribution is reduced. And under inform, we put labels like “false information” on content rated by the third-party fact-checkers, and we provide more context to the users if they want to have more information on why it was misinformation, etc. Under the fourth pillar, we increase transparency. Especially for political ads, we have an obligatory authorization process. Advertisers, political parties for example, have to prove who they are and where they are located. They can only target an audience in the country where they are based. And we put a paid-for-by disclaimer on the ad so that people can understand who is funding that political advertisement, to give more transparency. Also, political ads are kept in our ad library for seven years, so, for example, researchers use it a lot. It is publicly available and free, and you can see all the information like the amount spent on ads, who is funding it, etc. Also, if an ad is created with AI apps, the advertisers have to disclose it to us. They have to say it. And we put a label on the content, like “digitally created,” so that people understand it is a photorealistic video or photo or something. And the fifth and last pillar is about partnerships. We work with local trusted partners to receive timely insights on the ground. So okay, final comments. User education is also very important. We do campaigns with third-party fact-checkers and academia to raise awareness on how to fight disinformation and misinformation. Thanks so much.

Pearse O’Donohue: Thank you very much for that. So we heard, particularly in the answers from Daniel and from Lina, references already to civil society, to NGOs, to the stakeholders and the multi-stakeholder process, as having an important role with regard to what could be successful responses and how we learn to share initiatives across countries and regions. That will be one element of the next question that I’m going to pose to our final two panellists. Again, thank you for your patience. And that question, again, it’s on screen, is: what are the governance principles, tools and mechanisms that could be applied in order to help protect the integrity of electoral processes and information in the digital age, while upholding human rights and democratic principles? And then, are there specific roles for particular stakeholders that need to be highlighted? So I’m going to put that question, first of all, to Tawfik Jelassi, please.

Tawfik Jelassi: Thank you very much, Mr. Moderator. I think we all agree that ensuring that information is trustworthy and accurate is a very critical challenge today, maybe more than ever before, especially during elections. And here I would like just to quote Maria Ressa, the 2021 Nobel Peace Prize winner, who said, without facts, there is no truth. Without truth, there is no trust. And without trust, there is no shared reality. I think this is a very powerful quote that reminds us that fact-checked information is the basis not only for democracy, but for society and for communities to live together. So it’s a major challenge. But then, a second and final quote, this one from a journalist, Carl Bernstein, who said, what we do as real journalists is to give our readers the best obtainable version of the truth. It’s a simple concept, but it’s very difficult to achieve and especially elusive in the age of social media. We know the power of digital influencers, who today have 50-plus million followers per digital influencer. Our recent study shows that more than half of the content they post online is not fact-checked, is not verified. This is a new challenge that we need to deal with. So, the dilemma is there, and the pursuit of truth is especially challenging in this digital age, where false information spreads rapidly, far faster than objective information. A recent MIT study shows that false information travels 10 times faster than fact-checked information. So, it’s a real challenge, and as I said, this is at the heart of preserving democratic processes. So, the question is, what can we do about this? And here, let me say that at UNESCO, we are deeply committed to advancing our mission of protecting the integrity of information. 
And here, I must say that we are honored at UNESCO to have been asked last month by the G20 Summit to become the secretariat for a global initiative on information integrity and to administer the global fund allocated to it by the G20, the 20 most important economies of the world. So, I think information integrity is at the heart of what we are discussing, especially also when it comes to climate change. How can we combat climate disinformation when we try to resolve the environmental crisis? So, this is part of our mission. Now the next question is how do we go about it, and our approach has been all along anchored in international human rights standards. We developed the guidelines I mentioned a few minutes ago, the guidelines for the governance of digital platforms, again based on human rights, but also to promote transparency, accountability, and inclusivity. One third of women journalists have had to quit because of online harassment and, as I said, sometimes physical violence as well. So this is what we have been doing to protect women journalists. You didn’t ask about this, you asked about women politicians, and our panelist has addressed that. So again, one final note maybe to mention: we believe that true empowerment starts with education. Education is at the heart of the matter, and some of the panelists mentioned media and information literacy in the digital age. Literacy, again, in reference to education. Our program on that is a cornerstone of our strategy. We want not only to have guidelines for digital platforms and for regulatory authorities, that’s on the supply side of information, but we have to work on the demand side of information and the usage. And our aim through our educational program is to make the users of digital platforms become media and information literate by developing a critical mindset, so they can distinguish, hopefully, between fact-checked, objective information and, obviously, falsehood. 
This is something that we believe is very important. We want them to raise a few questions: who created this information I came across online? Why was it shared? And what evidence supports it? Because otherwise, the users of information online become themselves amplifiers of misinformation; they like and they share that information. And finally, to say it’s a collective effort. I mentioned what UNESCO is trying to do, but of course, it’s a collective effort. We need governments to create policies that protect human rights, safeguard freedom of expression, and put in place the right regulation, maybe, for digital platforms. We want tech companies to adhere to full transparency and accountability and to proper content moderation and curation, and we need educators and civil society to empower citizens, in the way I mentioned, to discern facts from fiction. Let me just conclude, because I think my time is up, by saying not only that at UNESCO we remain steadfast in our commitment to this cause, but that we believe that together we can build a digital age that does not divide, but unites; that does not harm, but heals; and that does not undermine democracy, but strengthens it.

Pearse O’Donohue: Thank you very much. A lot to think about there. So finally, we would like to hear from Rosemary Sinclair, who has been waiting patiently, with her views on this same question. Please, Rosemary, the floor is yours.

Rosemary Sinclair: Thanks, Pearse, and it’s a very big question, as I know you know, so just a few thoughts from me. We’ve been focusing in this panel session on elections, misinformation, and disinformation, but I think we’re really talking more broadly about information, and that means we’re really talking about trust and confidence in an online world. And we’re having this discussion right at the point where we have the possibility to secure amazing innovation, which can benefit individual people, their communities, and their economies. So this is a conversation really worth having. For a long time, we’ve been focused on practical connectivity, and there’s a way to go, I know, particularly in the Global South. More recently, we’ve started to think about cultural connectivity, so efforts focused on digital inclusion through language. But I really want to stress that our focus must be on building, or in some cases rebuilding, confidence online. In Australia, we at .au do annual research into the digital lives of Australians. And for the first time this year, that research told us that Australians are starting to think about doing less online because of the harms that they are experiencing, right at a time when, for productivity, efficiency, and innovation reasons, our policy makers and others want them to do more online. So I think we’ve got to get back to a point where technology is seen as a tool and not as something that is somehow beyond policy. And when we’re thinking about policy, we’ve got to balance innovation and integrity. And I think we need some very big thinking, and we’ve done some of that at .au. We forced ourselves to do it using a scenario process. And if the scenarios are of interest to anybody, they’re available for free use on our website. But there are two scenarios in there that are pertinent to this discussion. And I’m going to summarize them in about six words. One of them says, government is in charge of information. 
And the other of them says, private sector is in charge of information. And when we dug into those scenarios, what we found were really some shared issues about the rights of individuals to privacy and to choice, the importance of integrity and impartiality around information. There’s a whole set of issues around the importance of the security of people’s identity. We explored the role of the internet, open, free, secure, and globally interoperable. And we really thought about integrity and the assurance processes that would need to be put in place to assure people of integrity. So in answer to the question, we need governance principles, tools, and mechanisms in all of those areas. Getting back to our topic today, which is elections, I wanted to make the point that really democracy now is a team sport. And more than that, it’s actually a global team sport. And who we need on the playing field with the voters and the politicians, we need civil society, we need the technical community, we need the private sector, media, technology companies, the platforms, we need government, public service officials, we need the combination of judiciary and regulators to actually implement and enforce policies, laws, regulations, the people who are accountable for election oversight and the like. In addition, I want to be bold enough to suggest that we might need some philosophers on the playing field as well, to think about the limits of markets as Michael Sandel has done, to think about big questions around values and ethics and culture. My final point, in fact, I’ve got two final points, but the first one is I’m finding it very interesting that organizations that have usually been concentrating on economic policies and competition and the like are becoming very interested in these issues too. 
And if I just give you one little quote from the OECD’s report, Facts Not Fakes: Tackling Disinformation, Strengthening Information Integrity, that report says, informed individuals are the foundation of democratic debates and society. And the report also goes on to make the comment that a multi-stakeholder approach is required to address the complex global challenges of information integrity. More locally in Australia, our ACCC, which is our competition authority, has been conducting an inquiry into digital platforms. And in its final report, it says, this inquiry has highlighted the intersection of privacy, competition, and consumer protection considerations. Privacy and data protection laws can build trust in online markets. So, the fact that these bodies are thinking about these issues for the purpose of economic and societal outcomes, I think, is very interesting.

Pearse O’Donohue: Sorry, Rosemary, I’m going to have to ask you to wrap up now, please.

Rosemary Sinclair: And my final point, please, is just that we need to have a global governance architecture. I think the Internet Governance Forum has a role to play, and I’m really hoping that through the processes next year, the role of the IGF is made clear and permanent so that it has the certainty to help do this work.

Pearse O’Donohue: Thank you. Thank you very much, Rosemary, and thank you for that very clear enumeration and explanation of the principles that we need to revisit in the work that we’re doing with regard to the Internet as a whole, and then specifically with regard to election integrity. And, of course, to Tawfik for his analysis, and again, the worrying facts of violence against journalists, particularly female journalists; there is a direct and very thick line between that and election integrity. If the journalists, if the free press, are intimidated into silence, then we are already losing the electoral integrity process. So, something that we must think of, and also the effects of the digital elements of that. So now, as I’ve said, we want to again open up the floor to questions, but particularly statements, because on this occasion we’re going to make you work a little bit harder. So this is to participants here in the room, but also, of course, to online participants. We actually have a couple of questions for you, so if anyone would like to answer those questions, address those questions, or address points made by our panellists in their very rich responses to that set of questions that we put to them. So, it’s simply this: how do you think the broader Internet governance debate intersects with electoral information integrity discussions, and how can the IGF discussions, the multi-stakeholder approach, contribute to improving and strengthening information integrity in elections and in the election space? So, do we have anyone who’d like to take the floor on this, or make comments on what has been heard from the floor? If so, please come to the microphones, one or other, at the head of the room. And I’m also looking at Bruna, if there is anybody online. Okay, well, we’ll keep going, because we have been very disciplined, I have to say. 
I’ve been nudging one or two of you, but I would like to thank all the panellists for being so disciplined with time while giving us such rich responses. But now we have the opportunity, perhaps, to open the debate to you, to react to everything that your co-panellists have said in answer to the questions we put to them: what are the problems, what evidence has come to light, what initiatives have worked and what hasn’t worked, and what are the principles that we need to apply. So now I’m giving the floor to you, but I would also like to put to you the question that I just posed, and you can tackle any or all of them as you see fit: how can the broader Internet governance discussion and debate intersect with this issue of electoral integrity, and how can the multi-stakeholder approach contribute to improving and strengthening the situation? So now, the floor is open. Who’d like to take the floor? Please, Tawfik.

Tawfik Jelassi: Thank you. You remind us that the focus is elections, of course, and reporting on elections in a fact-checked, objective way requires proper training of journalists covering elections. UNESCO has been doing this in many countries recently, to provide the training needed by journalists, because, of course, the information they bring to the fore is so important, especially in this era of misinformation. Second, the impact of emerging technologies on elections, such as the impact of AI on elections. This is another training that we developed. It’s an online course on the impact of AI and generative artificial intelligence on election processes. So this is part of awareness creation, awareness raising, advocacy, because we need to have in place an enabling environment for elections to take place in a fair, free, and democratic way.

Pearse O’Donohue: Very good. Please, William, you were next.

William Bird: So what struck me is, despite us all coming from radically different perspectives, just how similar the issues we’re facing are, and in fact how similar the approaches to dealing with them are, which says that often these things are, as I said at the beginning, part of a bigger question of how we deal with this new information-chaos environment, where power dynamics have shifted so dramatically. And that seems to be a common question that all of us are grappling with to varying degrees. The second thing is the critical importance of digital literacy. This is mentioned at every single one of these events that I go to. The thing that is consequently still missing, in massive amounts, is effective and properly resourced plans to actually implement these things. So we can come here and say all these good things, but there’s no real meaningful action. And then, how do we deal with the outliers? Elon Musk being one of those outliers. X’s power is diminishing, but just because it’s diminishing, the harm that it’s causing in very real terms is still significant. And it seems we don’t really have an answer to that. We’ve just seen one of the major world superpowers buddy up to this man, who openly used his platform to spread misinformation and, in the case of South Africa, happily allowed it to spread attacks against journalists and incite violence and hate speech. And we need an answer to that. IGF and all of us, we need to be able to say how we’re going to deal with this. Thanks.

Pearse O’Donohue: Thank you. Daniel Molokele, please.

Daniel Molokele: Thank you so much. I wanted to speak on something that I feel we have not addressed that affects electoral integrity from an information point of view: the issue around the need for standardization and professionalism. You will see that there is a rise of new social media platforms or media technologies that are highly influential in shaping political opinions, especially for the electorate. Some people have blogs, some people have podcasts, some of them are live. Some people have X pages or Twitter pages, they have Facebook pages, and they can go live at any time and millions of potential voters tune in. In those live broadcasts, there are untested facts around the elections that are said, or even allegations, for example around rigging or cheating in elections. Because the audience trusts the person behind the podcast or the show, it then affects everything in terms of the integrity of the entire election process. Yet most of these people who conduct these live sessions and so on are actually not trained journalists. They do not practice any form of ethics, and they have no form of qualification or certification. And at the end of the day, there is no emphasis on professional research and standardization of content. Also driving them is the fact that at the end of each month, they get a paycheck, and it’s based on the amount of interaction or interactive use of that blog or pod. So the more people are emotionally tuned in, the more viewership, the more interactions, and the more currency at the end of the month. The net effect of that is that a single person or two people can actually shape the narrative, depending on the side which they are on. And it then allows people with money, maybe business people, for example, who have an interest, maybe in public tender systems, to fund these people unofficially and influence the electoral process. 
Because at the end of the day, they would want the government that is in power after the elections to be aligned to their business interests. So it’s a major concern. And the main media houses or professional institutions that practice journalism to standards are normally overrun by this kind of live transmission. Then I also wanted to zero in on artificial intelligence. For us as Africans, we are coming from a position of being left behind. I think it is still very difficult for the average person, especially in Zimbabwe, where I come from, to distinguish a story that is AI-generated from one that is real, because if you look at the videos, if you look at the pictures, they look so real. And if you can come up with content that is misleading or misinforming or disinforming, an average voter will take it seriously. By the time clarifications are done, follow-ups are done, it’s too late, and it then affects the credibility of the election system. Thank you so much.

Pearse O’Donohue: Thank you very much, indeed, I fully agree, I see the same. Now, I just wanted to check, I don’t know if we have a hands-up function, but I want to make sure that if either Rosemary or Liz wanted to come in on what we’ve heard in the panel discussion, and also, of course, this question about the broader internet governance debate and the IGF, would either of you like to come in on this?

Elizabeth Orembo: I can come in and make a few short remarks on how the elections discussion can integrate with internet governance. From my view, we are looking at the governance of infrastructure and the governance of content, as far as the discussion of internet governance is concerned. But then again, when it comes to elections and information integrity, I don’t think any society has really quite agreed what is disinformation or what is misinformation, what is good information that should be encouraged online and what should be discouraged. But what we are also seeing is that people are saying there should be plurality of information, and some of those plural sources oppose each other, and in shaping our narratives, we try to label the other information as misinformation. Some misinformation is outright misinformation, put out there to push some unfair narrative, which is also harmful to society. But now we are talking about a plurality of information that sometimes causes tensions in society, and it is not really intentional in harming any citizenry, but still, it harms our democratic process. I think as a society we need to reflect on such dangerous misinformation, but if regulatory action is taken, then a certain group will feel offended by it. On the internet governance discussion, and this is my last point: I think we need to go broader, to accommodate and appreciate what’s really happening within the elections environment, because it also touches on the wider issues of development. When the right people, or not the right people, are put in place in governance, it really affects how a society will develop democratically. In some elections, violence also erupts. 
It also means that economic consequences follow, because in cases where countries face an election aftermath, even three years after the election, contesting elections, going to court, and even being passive in applying policies put in place by a government seen as illegitimate, it means there is slow economic growth in those countries, and people are not able to prosper. So I think we need to be wider in how we think about internet governance and democracy, going beyond just content moderation and infrastructure governance to the underlying issues which, as other panelists have said, play out in between elections but actually rise to the surface when it comes to the elections themselves. Thank you.

Pearse O’Donohue: Thank you. And Rosemary, I saw you wanted to come in.

Rosemary Sinclair: Sorry, Pearse, I’m having trouble with my mute button. Yes, I was wondering if we could pursue the idea that I think Lina put on the table earlier. And I just wondered if we could hear a little bit more about that work in Lithuania.

Pearse O’Donohue: Okay, well, Lina, I think that’s an invitation to you. Please.

Lina Viltrakiene: Well, of course, as I already presented, we have a quite comprehensive system established in Lithuania on countering disinformation. But perhaps, as we are now moving to the end of the discussion, I wanted to react very briefly to your question about how IGF discussions could indeed lead us to something more specific. So, we are now in the process of defining the responsibility of social media platforms, and perhaps finding some legal tools to enforce that. So, we really think that we could establish clear responsibilities, legal obligations, and sometimes even penalties for platforms that fail to prevent the spread of organized disinformation campaigns. But perhaps there are also inspiring mechanisms and architectures, such as for digital markets and information distribution, which we could develop in discussions in this multi-stakeholder format, where all views are heard and all of us are on board. That is the time to have these kinds of discussions.

Pearse O’Donohue: I am going to come to you all now. I see unfinished business, but also a great opportunity to take this forward, because I’m going to ask all of the panellists now to give their final views. I would like to start by saying thank you to all of you for being here. I know that you are all very committed, and you know that I’m tough, but you have been very disciplined. What I’d really love is for each of you to come up with a recommendation, a request, or a best practice that we could take as a takeaway from this discussion. The way in which a person, a user, can address themselves to a platform and hope to have action taken is one of those issues, and how we can adapt that, I’m sure there are different models, but if it were applied throughout the system, it would be great. But anyway, I’ll ask you to do that now. Two minutes maximum, and I will cut you off so that everyone gets their last word. And I will start with you, Sezen, please.

Sezen Yesil: Okay, thanks so much. So, the problems we have discussed today, like disinformation and misinformation, are probably as old as the history of democracy. But of course, the use of the internet brings those to another level. We all accept that. Also, the problems are not specific to one country or to one platform only. Bad actors, for example, can work from country X to target people in country Y, and they use all the platforms they can. So, I believe that, like all other global problems, election integrity-related problems can be tackled best in collaboration with all the stakeholders: private sector, public sector, academia, and civil society. The beauty of the IGF is that it brings us all together, and we hear each other. That’s great. So, I really appreciate all the esteemed panelists’ views and comments on the matter. I’m taking this as my homework. I will feed those as input to our election integrity work back at META. So, at META, we understand our responsibility, and we try to improve ourselves. We are already leading and participating in many collaborative efforts, and we will be more than happy to expand our collaborative efforts to the other stakeholders, including governments, UNESCO, etc. Thanks so much for this opportunity. Thank you very much.

Pearse O’Donohue: And now I’ll turn to you, William. Your two minutes. Yes. Don’t worry. I’m being random for a purpose.

William Bird: Okay. So, my points would be, I think, to call for resources outside of election periods, because what we see is that elections approach, and suddenly everyone’s excited, and then elections go, and we all say, yes, these were bad, and then suddenly there’s no work. We need to see these as ongoing societal challenges. The second point is that the intersection of online harms needs to be dealt with comprehensively, and then we need to see some action. It’s not enough for us to just say, oh, yes, the attacks against women online are very bad, and we really must do something. Let’s do something. Let’s hold some of these people accountable. We can’t take Elon Musk to court in South Africa because they don’t have anything there, but why aren’t gender-based violence groups taking him to court in the United States, where he’s domiciled? We need to be holding accountable the people who continue these things. We can’t leave it as is any longer. And then the third thing, I think, is that the terms mis- and disinformation are thrown around, and you’re right, everyone said there’s no common definition. For us, we reference public harm as one of the elements of mis- and disinformation, and I think one of the things that we could and should be looking at is more nuanced labels for understanding mis- and disinformation, because it isn’t all the same thing, and we already accept some things as problematic. Thank you.

Pearse O’Donohue: Thank you very much. I’ll now turn to Tawfiq please for your two minutes worth. Thank you.

Tawfik Jelassi: Thank you very much. Not to repeat myself, but the title here is how to maximise the potential for trust while addressing the risks. For me, and I repeat myself, the number one risk is dis- and misinformation. It’s not by chance that the Davos World Economic Forum this year put mis- and disinformation as the number one global risk for 2024 and 2025. This is the super election year, and it will continue in 2025. Disinformation for me is at the heart of the battle, and if we can address it, if we can minimise it, even if we cannot totally eliminate it, I think we will be able to maximise trust and address the risk that it represents.

Pearse O’Donohue: Thank you very much. Liz, are you still with us? We see a very interesting photo of you. If not, then I will turn for the moment to Rosemary, please. What would be your last comments? Now we see you, Liz, but we’ll come back to you in a moment. Thank you. Rosemary, go ahead.

Rosemary Sinclair: Yes, thank you. I’d like to make two comments, really. One is to re-emphasise what I said very briefly: I would like to see the role of the IGF clarified and made permanent, so that we have a forum for large multi-stakeholder discussions about matters of importance. The second thing is that, practically, I would like to see the tapestry of internet governance forums knitted together much more closely. So in Australia we have the local IGF, the AUIGF, then we have the Asia-Pacific regional IGF, and then we come to the global IGF. If we could imagine a world where all of that effort was focused on a topic area, perhaps a centralised clearinghouse or reporting, perhaps how to deal with platforms; if we could somehow maximise that effort and bring that work to the global IGF for consideration and discussion by the multi-stakeholder community, then I can see a possibility of progress. Thank you.

Pearse O’Donohue: Thank you very much. Now, Liz, we’d like to hear from you. No, we don’t hear you. Keep trying to unmute. Yes, now I succeeded in unmuting, thank you.

Elizabeth Orembo: I think my one point is that, positively, we’ve seen a lot of efforts on strengthening information integrity around the elections. This year came as a golden opportunity because of the many elections happening, and in Africa we’ve seen a lot of partnerships, different kinds of partnerships, stakeholders coming together to fight disinformation, to map disinformation risks and fight them proactively. But this has happened, not really in silos, but in different parts of the partnerships, while also leaving out important stakeholders. As one panellist said, it’s a big team effort and we should expand it further. The other thing is that we should not leave the work here. We should connect these different efforts to each other to get a full picture of what really happened this year and what to anticipate in the elections next year and even the years after. So that connection, connecting the dots from the partnerships in civil society with the tech people through to the data enthusiasts. There are people who are working with data: what really happened there, and how can we connect the dots there. Thank you.

Pearse O’Donohue: Thank you so much. Now I’d like to turn to Daniel, please.

Daniel Molokele: Thank you so much. I think in 2024, we saw democracy continue to grow and take root in Africa. The elections that we had all across Africa gave us a significant opportunity, as Africa, to showcase ourselves and rebuild our reputation as a continent. To that end, access to information, from the perspective of electoral integrity, is very important to Africans. And as a parliamentarian, I think one of the issues we need to focus on is making sure that there is standardisation in the quality of information and news across Africa, especially during election campaigns and results announcements. And we must make sure that the policy and legislative frameworks across Africa are modelled and standardised so that they benefit democracy in Africa. Because information has the potential to build our democracy, the potential to make our electoral systems accepted and conventional, but at the same time, it has the potential to undermine our electoral integrity. So it’s important that we create model laws from a continental point of view that will enhance access to quality information and promote electoral integrity. Thank you.

Pearse O’Donohue: Thank you. And now, Lina, please.

Lina Viltrakiene: Well, from my side, I would like to leave you with the message that elections are the test of democracy. And indeed, democracy is not something we can take for granted. So if we want to live in a democratic society upholding liberty, human rights, the rule of law and other democratic values, we all need to work together to strengthen democracy. And in this task, I firmly believe that all of our societies need to be on board; that is the only way to build trust and to take comprehensive action on strengthening resilience and critical thinking. And again, I believe that critical thinking and resilience are key, indeed, in all our efforts to ensure smooth, reliable, democratic elections free from malign interference. And perhaps one more thing I wanted to mention is how important it is to coordinate among ourselves, among democracies. That is crucial, indeed, to ensure that we respond appropriately and effectively to foreign information manipulation and interference, and that we prevent hostile actors from manipulating and hijacking the information space. Thank you.

Pearse O’Donohue: Thank you, Tawfik. You were very, very short in your-

Tawfik Jelassi: 30 seconds.

Pearse O’Donohue: So you have 30 seconds that you didn’t use. Go ahead, please.

Tawfik Jelassi: The reason I ask for the floor again is that your question had two parts, and I did not answer in my first intervention what the IGF can do about it. Twenty years ago, the IGF did not foresee the rise of digital platforms, nor the harmful online content that we suffer from today. I believe that, going forward, the IGF has to ensure that information is a public good, not a public hazard, not a public harm.

Pearse O’Donohue: Thank you. So, I want to draw the meeting to a close. I’m not going to draw formal conclusions. That would not be appropriate, and it would be very subjective and impressionistic, and I’ll tell you in a moment about what we’re going to do. But first of all, I think we should show our appreciation for the fantastic insights and analysis from our panel, physically present and online. Please, a round of applause. And they have made my job very easy, one, by really focusing on the questions, but also by allowing the discussion to continue by limiting their time. I know it’s very frustrating. The only censorship that is allowed during the multi-stakeholder process is your speaking time. Everything else is just not allowed. So what I would like to say is that we have a clear set of well-informed views which show that, yes, experience tells us the threats are real, that the challenges have been felt across a number of countries and regions, and we would expect that they will get worse unless action is taken. While, largely, disasters were avoided in 2024, some very stark examples have been given to us of where serious problems have arisen. Across almost all domains, countries have seen some level of disinformation, certainly misinformation, going all the way to the use of deepfakes, as well as the suppression of opposing views. So we are united in diversity, and it might not always be the case that I am happy with the result of an election. My side didn’t win. That’s not the point. That’s democracy. The question is: did the side that won do so on the basis of the democratic process, which we all welcome, or did they do so because they used digital technologies to misinform, to disinform, or to actively prevent another voice from being heard? That is the line we must follow with regard to information and with regard to election integrity.
I think we’ve had some great insights. As I said at the start, what we hope will come next is, in listening to the stakeholders, to share our experiences, find inspiration, make suggestions, and give actionable insights to guide stakeholders and the actions they can take, including and particularly in the IGF. We have the IGF coming to us next June, and I think this work can continue. For that, we have a rapporteur from this session who will help us, and I’d like to thank him, Jordan Carter, for his contribution to organising the session as well. But as well as thanking our panellists, I must single out in particular Bruna Martins dos Santos, who has been the driving force in organising this event. I saw Bruna in action at NetMundial, and we’re very grateful for all the work she is doing on the MAG. By the way, Jordan is also on the MAG, and we really think this is an issue we will need to continue to focus on, and one where the IGF and the multi-stakeholder process it represents is the only forum in which we can find consensual responses to the challenges of digital while, and I think this is also what Tawfik wanted to say, embracing all the good that digital technologies can bring to societies across the world. Thank you for your presence, thank you to those online, thank you again to the speakers, and I wish you a great continuation of the IGF.

Tawfik Jelassi

Speech speed: 137 words per minute
Speech length: 1363 words
Speech time: 593 seconds

Spread of misinformation and disinformation online

Explanation

Disinformation and misinformation are major challenges to election integrity in the digital age. They spread rapidly online and can significantly impact public trust and democratic processes.

Evidence

MIT study shows that false information travels 10 times faster than fact-checked information.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

William Bird

Sezen Yesil

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Violence and intimidation against journalists, especially women

Explanation

Journalists, particularly female journalists, face violence and intimidation when covering elections. This poses a serious threat to press freedom and election integrity.

Evidence

One third of women journalists have quit due to online harassment and physical violence.

Major Discussion Point

Challenges to election integrity in the digital age

Media literacy and digital skills education programs

Explanation

UNESCO is focusing on education programs to improve media and information literacy in the digital age. These programs aim to help users develop critical thinking skills to distinguish between fact-checked information and falsehoods.

Evidence

UNESCO’s program on media and information literacy is a cornerstone of their strategy.

Major Discussion Point

Successful initiatives and best practices

Training journalists on election coverage and emerging technologies

Explanation

UNESCO provides training for journalists on covering elections and the impact of emerging technologies like AI. This helps ensure more accurate and responsible reporting during election periods.

Evidence

UNESCO has developed an online course on the impact of AI and generative artificial intelligence on election processes.

Major Discussion Point

Successful initiatives and best practices

Treating information as a public good, not a public hazard

Explanation

The IGF should focus on ensuring that information is treated as a public good rather than a public hazard. This approach is crucial for addressing the challenges of harmful online content and protecting democratic processes.

Major Discussion Point

Governance principles and mechanisms needed

William Bird

Speech speed: 155 words per minute
Speech length: 1485 words
Speech time: 572 seconds

Attacks on electoral management bodies and journalists

Explanation

There have been multi-pronged attacks on electoral management bodies and journalists, following a disinformation playbook. These attacks target the entities, their decisions, and individuals within them, often using pseudo-legal challenges.

Evidence

Over a two-week period, there were over a thousand attacks against journalists on X, with most targeting one journalist in particular.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

Sezen Yesil

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Public reporting platforms for online harms

Explanation

South Africa has implemented public platforms for reporting attacks against journalists and other online harms. These platforms operate independently of the state and apply consistent standards across different social media platforms.

Evidence

South Africa has platforms called Mars and Real 411 for reporting attacks against journalists and other online harms like misinformation and hate speech.

Major Discussion Point

Successful initiatives and best practices

Daniel Molokele

Speech speed: 138 words per minute
Speech length: 1200 words
Speech time: 519 seconds

Lack of regulation for influential social media personalities

Explanation

There is a rise of influential social media personalities who can shape political narratives without proper journalistic training or ethics. This lack of regulation and standardization can significantly impact election integrity.

Evidence

Examples of podcasts, blogs, and live broadcasts that can reach millions of potential voters with untested facts or allegations about elections.

Major Discussion Point

Challenges to election integrity in the digital age

Standardization of quality information and news across regions

Explanation

There is a need for standardization in the quality of information and news across Africa, especially during elections. This includes developing model laws and policy frameworks to enhance access to quality information and promote electoral integrity.

Major Discussion Point

Governance principles and mechanisms needed

Elizabeth Orembo

Speech speed: 124 words per minute
Speech length: 1719 words
Speech time: 831 seconds

Digital inequality limiting access to reliable information

Explanation

Digital inequality in Africa leads to uneven access to information, creating fertile ground for misinformation. This inequality affects people’s ability to make informed choices during elections.

Evidence

Challenges in policy, infrastructure, and media access in African countries.

Major Discussion Point

Challenges to election integrity in the digital age

Sezen Yesil

Speech speed: 138 words per minute
Speech length: 1647 words
Speech time: 711 seconds

Coordinated inauthentic behavior on social platforms

Explanation

Meta has identified and removed numerous networks engaged in coordinated inauthentic behavior. These networks spread disinformation and mislead people, particularly during election periods.

Evidence

Meta removed about 20 coordinated inauthentic behavior networks in 2024 alone.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

William Bird

Lina Viltrakiene

Agreed on

Misinformation and disinformation as major threats

Collaboration between platforms, fact-checkers and authorities

Explanation

Meta collaborates with third-party fact-checkers, local trusted partners, and authorities to combat misinformation. This multi-stakeholder approach helps in receiving timely insights and taking appropriate actions.

Evidence

Meta works with more than 90 third-party fact-checkers around the world, covering 60 languages.

Major Discussion Point

Successful initiatives and best practices

Agreed with

Lina Viltrakiene

Rosemary Sinclair

Agreed on

Need for multi-stakeholder collaboration

Differed with

Lina Viltrakiene

Differed on

Approach to regulating digital platforms

Technical measures to detect manipulated media and inauthentic accounts

Explanation

Meta employs various technical measures to detect and remove fake accounts and manipulated media. These measures help maintain the integrity of the platform during elections.

Evidence

Meta’s automatic detection tools block billions of fake accounts, often within minutes of creation.

Major Discussion Point

Successful initiatives and best practices

Lina Viltrakiene

Speech speed: 121 words per minute
Speech length: 1451 words
Speech time: 719 seconds

Use of AI and deepfakes to create misleading content

Explanation

The use of AI and deepfakes to create misleading content, such as fake statements from top politicians, poses a significant threat to election integrity. This technology can influence people’s choices and erode trust in democratic institutions.

Evidence

Experiences from Romanian and Bulgarian elections where significant interference by foreign actors via social media platforms was observed.

Major Discussion Point

Challenges to election integrity in the digital age

Agreed with

Tawfik Jelassi

William Bird

Sezen Yesil

Agreed on

Misinformation and disinformation as major threats

Multi-stakeholder approach to monitoring and countering disinformation

Explanation

Lithuania has implemented a consolidated system for monitoring and neutralizing disinformation. This system involves various stakeholders including state institutions, NGOs, media, and businesses to create societal resilience against disinformation.

Evidence

Lithuania’s Civil Resilient Initiative and debunk.org work on analyzing and countering disinformation, as well as promoting digital and media literacy.

Major Discussion Point

Successful initiatives and best practices

Agreed with

Sezen Yesil

Rosemary Sinclair

Agreed on

Need for multi-stakeholder collaboration

Clear responsibilities and accountability for digital platforms

Explanation

There is a need to establish clear responsibilities, legal obligations, and potential penalties for digital platforms that fail to prevent the spread of organized disinformation campaigns. This approach aims to improve the governance of digital platforms during elections.

Major Discussion Point

Governance principles and mechanisms needed

Differed with

Sezen Yesil

Differed on

Approach to regulating digital platforms

Global cooperation and information sharing between democracies

Explanation

Coordination among democracies is crucial to effectively respond to foreign information manipulation and interference. This cooperation can help prevent hostile actors from manipulating the information space during elections.

Major Discussion Point

Governance principles and mechanisms needed

Rosemary Sinclair

Speech speed: 119 words per minute
Speech length: 1509 words
Speech time: 756 seconds

Balancing innovation with integrity and human rights protections

Explanation

There is a need to balance innovation in the digital space with integrity and human rights protections. This involves developing governance principles that address issues of privacy, security, and trust in the online world.

Evidence

Research in Australia shows that people are starting to do less online due to the harms they are experiencing.

Major Discussion Point

Governance principles and mechanisms needed

Strengthening the role of IGF in addressing information integrity

Explanation

The role of the Internet Governance Forum (IGF) should be clarified and made permanent to provide a forum for multi-stakeholder discussions on important issues like information integrity. This could help in developing more effective global governance mechanisms.

Major Discussion Point

Governance principles and mechanisms needed

Agreed with

Sezen Yesil

Lina Viltrakiene

Agreed on

Need for multi-stakeholder collaboration

Agreements

Agreement Points

Misinformation and disinformation as major threats

Tawfik Jelassi

William Bird

Sezen Yesil

Lina Viltrakiene

Spread of misinformation and disinformation online

Attacks on electoral management bodies and journalists

Coordinated inauthentic behavior on social platforms

Use of AI and deepfakes to create misleading content

Multiple speakers identified the spread of misinformation and disinformation as a significant threat to election integrity, highlighting various forms and channels through which this occurs.

Need for multi-stakeholder collaboration

Sezen Yesil

Lina Viltrakiene

Rosemary Sinclair

Collaboration between platforms, fact-checkers and authorities

Multi-stakeholder approach to monitoring and countering disinformation

Strengthening the role of IGF in addressing information integrity

Several speakers emphasized the importance of collaboration between various stakeholders, including tech platforms, fact-checkers, authorities, and civil society, to effectively address election integrity issues.

Similar Viewpoints

Both speakers highlighted the serious issue of attacks and intimidation against journalists, recognizing it as a significant threat to press freedom and election integrity.

Tawfik Jelassi

William Bird

Violence and intimidation against journalists, especially women

Attacks on electoral management bodies and journalists

Both speakers addressed issues related to information quality and access in Africa, emphasizing the need for better regulation and infrastructure to ensure reliable information during elections.

Daniel Molokele

Elizabeth Orembo

Lack of regulation for influential social media personalities

Digital inequality limiting access to reliable information

Unexpected Consensus

Importance of digital literacy and education

Tawfik Jelassi

William Bird

Lina Viltrakiene

Media literacy and digital skills education programs

Public reporting platforms for online harms

Multi-stakeholder approach to monitoring and countering disinformation

Despite coming from different backgrounds (UNESCO, civil society, and government), these speakers all emphasized the importance of digital literacy and education in combating misinformation and protecting election integrity.

Overall Assessment

Summary

The main areas of agreement included recognizing misinformation and disinformation as major threats to election integrity, the need for multi-stakeholder collaboration, the importance of protecting journalists, and the value of digital literacy and education programs.

Consensus level

There was a moderate to high level of consensus among the speakers on the key challenges facing election integrity in the digital age. This consensus suggests a shared understanding of the problems, which could facilitate more coordinated and effective responses to these challenges. However, there were some differences in the specific solutions or approaches proposed, indicating that while there is agreement on the problems, there may be diverse views on how best to address them.

Differences

Different Viewpoints

Approach to regulating digital platforms

Lina Viltrakiene

Sezen Yesil

Clear responsibilities and accountability for digital platforms

Collaboration between platforms, fact-checkers and authorities

Lina Viltrakiene advocates for establishing clear legal responsibilities and potential penalties for digital platforms, while Sezen Yesil emphasizes voluntary collaboration between platforms, fact-checkers, and authorities.

Unexpected Differences

Focus on AI and deepfakes

Lina Viltrakiene

Sezen Yesil

Use of AI and deepfakes to create misleading content

Technical measures to detect manipulated media and inauthentic accounts

While Lina Viltrakiene emphasizes the threat of AI and deepfakes in creating misleading content, Sezen Yesil surprisingly downplays this concern, stating that the risks did not materialize significantly in recent elections. This unexpected difference highlights varying perceptions of the immediate threat posed by AI in election integrity.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to regulating digital platforms, the focus on AI and deepfakes as immediate threats, and the most effective methods for combating misinformation and improving information quality.

Difference level

The level of disagreement among speakers is moderate. While there is a general consensus on the importance of addressing misinformation and protecting election integrity, speakers differ on the specific strategies and priorities. These differences reflect the complex nature of the issue and the need for a multi-faceted approach, potentially complicating efforts to develop unified global strategies for protecting election integrity in the digital age.

Partial Agreements

All speakers agree on the need to improve information quality and combat misinformation, but propose different approaches: Tawfik Jelassi focuses on education programs, William Bird on public reporting platforms, and Daniel Molokele on standardization of news quality.

Tawfik Jelassi

William Bird

Daniel Molokele

Media literacy and digital skills education programs

Public reporting platforms for online harms

Standardization of quality information and news across regions

Takeaways

Key Takeaways

The integrity of elections is facing significant challenges in the digital age, including misinformation, disinformation, and attacks on electoral bodies and journalists

Successful initiatives to protect election integrity include multi-stakeholder collaboration, media literacy programs, and technical measures by platforms

Governance principles needed include balancing innovation with integrity, global cooperation between democracies, and treating information as a public good

The Internet Governance Forum (IGF) has an important role to play in addressing information integrity issues globally

Resolutions and Action Items

Continue discussions on election integrity at future IGF meetings

Clarify and strengthen the role of the IGF in addressing information integrity issues

Develop more coordinated efforts between national, regional and global IGFs on key topics like election integrity

Expand collaborative efforts between platforms, governments, civil society and other stakeholders

Unresolved Issues

How to effectively regulate influential social media personalities and content creators

Addressing the digital divide that limits access to reliable information in some regions

Balancing free speech protections with the need to combat harmful misinformation

How to hold global platforms accountable across different national jurisdictions

Developing common definitions and standards for identifying misinformation/disinformation

Suggested Compromises

Balancing innovation in digital technologies with the need for integrity and human rights protections

Finding a middle ground between government regulation of platforms and industry self-regulation

Developing nuanced labels and categories for different types of problematic content, rather than broad definitions

Thought Provoking Comments

We must address this. Already on Sunday morning, on day zero of this Internet Governance Forum here in Saudi Arabia, we had a session on misinformation. And we also had a session on the role of stakeholders in protecting election integrity and the right to information.

speaker

Pearse O’Donohue

reason

This comment set the stage for the entire discussion by framing it within the broader context of the IGF and highlighting the key themes of misinformation and stakeholder roles in protecting election integrity.

impact

It focused the discussion on the intersection of internet governance and election integrity, prompting panelists to address these specific issues throughout their remarks.

Throughout this year we decided to update some of our policies. For example we updated our penalty system per feedback of the oversight board to treat people more fairly and to give them more free expression and secondly we updated our policy on violence.

speaker

Sezen Yesil

reason

This comment provided concrete examples of how a major tech platform is adapting its policies to balance free expression with preventing harmful content, particularly in the context of elections.

impact

It sparked discussion about the role of tech platforms in moderating content and the challenges of balancing different rights and interests.

What has not worked well is the exponential spread of disinformation and hate speech derailing the integrity of electoral processes, and maybe casting some doubt or trust in election outcomes and democratic institutions.

speaker

Tawfik Jelassi

reason

This comment highlighted a major challenge facing election integrity in the digital age, pointing to the broader implications for democratic institutions.

impact

It shifted the conversation to focus more on the negative impacts of disinformation and hate speech, prompting other panelists to address these issues in their remarks.

Because with data becoming more available, we also need more capacity to crunch data to get it to people. And those capacities were different as well, and sometimes challenging.

speaker

Elizabeth Orembo

reason

This comment introduced the important issue of data literacy and capacity, particularly in the context of the Global South.

impact

It broadened the discussion to include considerations of digital inequality and the need for capacity building in data analysis and interpretation.

It’s been a big year, but I want to just ask if people genuinely feel better about democracy having had 65, 70, 75 elections. Because the sense that I get from speaking to people is that, despite it being a year that should be a celebration of democracy, we don’t feel good about democracy.

speaker

William Bird

reason

This comment challenged the assumption that more elections necessarily lead to stronger democracy, introducing a more nuanced perspective on the state of global democracy.

impact

It prompted a deeper reflection on the quality of democracy beyond just the quantity of elections, influencing subsequent comments on the challenges facing democratic processes.

We still need to see more young people as candidates or as elected representatives. We also saw the use of social media in a much more progressive way to mobilize people to voter registration and, more importantly, to turn out as voters, including media platforms such as TikTok, WhatsApp, Facebook, and X.

speaker

Daniel Molokele

reason

This comment highlighted the positive potential of social media in engaging young voters, while also pointing out the need for greater youth representation in politics.

impact

It shifted the discussion to consider the role of social media in political engagement and the importance of youth participation in democratic processes.

Thus, this shows us that we need to work further on continuous collaboration of platforms with state institutions. And while regulatory frameworks perhaps should be improved, and as a model, I would like to refer to the EU’s Digital Services Act, which could really encourage the thinking.

speaker

Lina Viltrakiene

reason

This comment introduced the idea of regulatory frameworks as a potential solution to challenges in digital election integrity, specifically referencing the EU’s Digital Services Act.

impact

It prompted discussion about the role of regulation in addressing digital challenges to election integrity and the potential for international cooperation in this area.

So practically speaking, during elections, we sometimes see at auDA increased requests from people to take down the websites of their political opponents. And those requests are often made with claims of misinformation or disinformation. Those claims must be assessed by others who are authorised by law and skilled to make those judgements.

speaker

Rosemary Sinclair

reason

This comment provided a concrete example of the challenges faced by technical operators during elections, highlighting the complexity of content moderation decisions.

impact

It grounded the discussion in practical realities and emphasized the need for clear guidelines and authorized bodies to make content moderation decisions during elections.

Overall Assessment

These key comments shaped the discussion by highlighting the multifaceted challenges facing election integrity in the digital age, from disinformation and hate speech to digital inequality and youth engagement. They prompted a nuanced exploration of the roles and responsibilities of various stakeholders, including tech platforms, governments, civil society, and international bodies. The discussion evolved from identifying problems to considering potential solutions, including policy updates, capacity building, regulatory frameworks, and multi-stakeholder collaboration. Throughout, there was a tension between the potential of digital technologies to enhance democratic participation and the risks they pose to election integrity, reflecting the complex nature of internet governance in relation to democratic processes.

Follow-up Questions

How can we develop more nuanced labels and definitions for misinformation and disinformation?

speaker

William Bird

explanation

Current definitions are too broad and don’t account for different types and levels of harm. More precise categorization could help in addressing these issues more effectively.

How can we create a centralized platform for reporting online harassment and attacks, especially against politicians and journalists?

speaker

Maha Abdel Nasser

explanation

A unified reporting system could help address online harassment more quickly and effectively, particularly during election periods.

What legal and political solutions can address the challenge of digital platforms refusing to cooperate with authorities in independent countries?

speaker

Mokabedi (online participant)

explanation

This is important to ensure consistent enforcement of policies across different countries and platforms.

How can we standardize the quality of information and news across Africa, especially during elections?

speaker

Daniel Molokele

explanation

Standardization could help improve the integrity of electoral information and strengthen democracy across the continent.

How can we better connect and synthesize the work of different partnerships and stakeholders working on election integrity?

speaker

Elizabeth Orembo

explanation

Connecting these efforts could provide a more comprehensive understanding of election integrity issues and more effective solutions.

How can we ensure consistent implementation of content moderation policies across different social media platforms?

speaker

William Bird

explanation

Consistency across platforms is crucial for effective management of online harms and misinformation.

How can we better address the challenges posed by non-professional content creators (e.g., podcasters, bloggers) in spreading election-related misinformation?

speaker

Daniel Molokele

explanation

These new media sources have significant influence but often lack professional standards or oversight, potentially impacting election integrity.

How can we improve digital literacy efforts, particularly in the Global South, to help users distinguish between AI-generated and real content?

speaker

Daniel Molokele

explanation

As AI-generated content becomes more prevalent, the ability to identify it is crucial for maintaining election integrity.

How can the role of the Internet Governance Forum be clarified and made permanent to address ongoing issues of online information integrity?

speaker

Rosemary Sinclair

explanation

A clearer, permanent role for the IGF could provide a consistent forum for addressing these evolving challenges.

How can we better integrate local, regional, and global Internet Governance Forums to address issues like election integrity more effectively?

speaker

Rosemary Sinclair

explanation

Better integration could lead to more coordinated and effective responses to global challenges in online information integrity.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Networking Session #74 Digital Innovations Forum- Solutions for the Offline People


Session at a Glance

Summary

This discussion focused on efforts by various organizations to improve internet access and digital inclusion globally, particularly in developing regions. Representatives from donor organizations, government agencies, and foundations shared their initiatives and perspectives on challenges in this area.


Key themes included the need for affordable broadband access, digital skills development, and addressing widening digital divides. Many speakers emphasized the importance of coordinating efforts between donors and stakeholders to avoid duplication and maximize impact. Innovative approaches mentioned included using local NGOs and digital ambassadors to reach underserved communities, integrating digital literacy into formal education, and employing blended financing models.


Challenges highlighted included regulatory barriers, geopolitical issues limiting work in certain countries, and ensuring projects maintain market competitiveness. Several participants noted the difficulty in reaching the most vulnerable populations due to sanctions or security concerns. The need to address the digital gender divide was also raised as a priority.


Looking ahead, speakers suggested focusing on areas like AI governance, cloud transformation for countries, and completing partially implemented projects. There was broad agreement on the need for continued dialogue and knowledge sharing between funders and implementers. Suggestions included creating simplified grant processes, taking a whole-of-government approach to digital transformation, and finding ways to work within legal frameworks to support restricted areas.


Overall, the discussion underscored the complex, multifaceted nature of expanding meaningful internet access globally and the ongoing need for collaboration and innovation in this space.


Keypoints

Major discussion points:


– Challenges in improving internet access and digital inclusion, including affordability, digital literacy, and reaching underserved populations


– The need for better coordination and collaboration between donors and organizations working on digital development projects


– Innovative approaches to funding and implementing digital inclusion initiatives, such as working with local partners and simplifying grant processes


– The importance of addressing widening digital divides and emerging issues like AI governance


– Barriers to funding certain regions due to sanctions and geopolitical issues


The overall purpose of the discussion was to bring together representatives from donor organizations, regulators, and foundations to share their work on digital inclusion projects, discuss challenges, and explore opportunities for collaboration and improved coordination of efforts.


The tone of the discussion was collaborative and solution-oriented. Participants were open in sharing both successes and challenges in their work. There was a sense of urgency around addressing digital divides and a willingness to consider new approaches. The tone became more action-oriented towards the end as participants discussed concrete next steps and ways to continue the dialogue.


Speakers

– Amrita Choudhury: Moderator


– Franz von Weizsäcker: Representative from GIZ


– David Hevey: Representative from Australian Department of Foreign Affairs


– Zhang Xiao: Representative from CNNIC (China Internet Network Information Center)


– Sarah Armstrong: Representative from ISOC Foundation


– Rajnesh Singh: CEO of APNIC Foundation


– Samia Melhem: Representative from World Bank


Additional speakers:


– Ekaterina (Katrina): Commissioner from Georgia


– Shadia (Sharir): Representative from Islamic Development Bank


Full session report

Revised Summary of Digital Inclusion Discussion


This report summarizes a discussion on efforts to improve internet access and digital inclusion globally, with a focus on developing regions. Representatives from donor organizations, government agencies, and foundations shared their initiatives and perspectives on challenges in this area.


Key Themes and Speakers’ Contributions


1. David Hevey (Australian Government):


– Discussed infrastructure investments in Pacific Island countries


– Emphasized the importance of affordable broadband access


– Suggested developing cloud transformation roadmaps for countries like PNG and Vanuatu


– Stressed the need for regional coordination among donors to avoid overloading recipient countries


2. Franz von Weizsäcker (GIZ):


– Highlighted social insurance and digital skills programs in Southeast Asia


– Emphasized the importance of funding local organizations for better context understanding


– Advocated for a decentralized decision-making approach in project implementation


– Proposed simplifying grant processes to better support local innovation


3. Sarah Armstrong (Internet Society Foundation):


– Discussed digital literacy programs for women with disabilities


– Emphasized the need to address the digital gender divide


– Highlighted the foundation’s focus on innovative technologies, digital skills training, and internet economy


4. Zhang Xiao (CNNIC):


– Emphasized the importance of digital literacy programs for elderly populations


– Highlighted the need to address AI governance in the context of digital development


5. Rajnesh Singh (APNIC Foundation):


– Raised concerns about widening digital divides across multiple layers of society


– Mentioned an innovation fund for inclusion, knowledge, and infrastructure


– Highlighted challenges related to sanctions and geopolitical issues limiting aid to certain economies


– Emphasized the need for multi-modal approaches rather than relying on a single technology


6. Samia Melhem (World Bank):


– Discussed initiatives in digital public infrastructure and sectoral applications


– Emphasized the importance of local knowledge and capacity building


– Stressed the need to complete and perfect started projects rather than initiating new ones


7. Shadia (Islamic Development Bank):


– Highlighted the bank’s focus on digital transformation and innovation


– Discussed initiatives to support member countries in digital development


8. Ekaterina (Commissioner from Georgia):


– Shared information about a broadband development program in rural Georgia


– Discussed digital literacy initiatives implemented in the country


Challenges and Considerations


1. Affordability: Multiple speakers emphasized the need for affordable broadband access, particularly in developing regions.


2. Digital Literacy: Various digital literacy programs were discussed, targeting different demographics such as elderly populations, women with disabilities, and rural communities.


3. Widening Digital Divides: Concerns were raised about current approaches potentially exacerbating inequality rather than reducing it.


4. Cybersecurity: The need for robust security measures alongside increased access was highlighted.


5. Regulatory and Geopolitical Barriers: Sanctions and geopolitical issues often limit the ability to provide aid to certain economies, presenting a significant challenge to digital inclusion efforts.


6. Market Competitiveness: Balancing market competitiveness with funding rural access initiatives was identified as a challenge.


7. Donor Coordination: The need for better coordination among donor organizations to avoid duplication of efforts and ensure more effective use of resources was emphasized.


Future Focus Areas and Considerations


1. Implementing digital literacy in formal education


2. Addressing the digital gender divide more effectively


3. Supporting cloud transformation roadmaps for countries


4. Improving monitoring and evaluation of digital development projects


5. Addressing AI governance in digital development contexts


6. Facilitating partnerships between large organizations and NGOs for project funding and implementation


7. Ensuring follow-up projects to complete and perfect initiatives that have been started


8. Exploring ways to work within legal frameworks to support people in countries where direct funding is challenging due to sanctions or regulations


9. Simplifying grant application processes to make them more accessible to local organizations


The moderator suggested creating a mailing list for continued dialogue among donors, highlighting the ongoing need for collaboration and information sharing in this space.


In conclusion, the discussion underscored the complex, multifaceted nature of expanding meaningful internet access globally. While challenges remain, particularly in terms of coordination and reaching the most vulnerable populations, there was a clear commitment among participants to finding innovative solutions and improving the effectiveness of digital inclusion efforts.


Session Transcript

Amrita Choudhury: Hi everyone, and thank you for coming this afternoon. My name is Amrita and I will be the moderator of this session, and I hope you can hear me. Okay. Franz has gone to get some water. It’s an unusual setup, but we have to make do with it. So this is primarily an interactive, open discussion which we are having today. And we have a few donor organizations and regulators here, and what we want to do is have a discussion on the kind of projects they are doing, why they’re investing in these kinds of projects, and what challenges they see. And whether there could be a way ahead in which there could be some synergies between the different entities. Oops, sorry. Sorry. There could be some exchange of information, or even gap analysis as to what could be done better, to explore collaborations, if possible. So with us today, and this is going to be open: we’ll have some questions, and I’ll throw them to all the speakers here, and then you can ask questions or even give comments. You can raise your hand. We have with us Franz from GIZ. Then we have David, who is from the Australian Department of Foreign Affairs. We have Zhang Xiao from CNNIC. We have Sarah Armstrong from the ISOC Foundation. We are supposed to have Samia from the World Bank, but I think she’s not here yet. We have Ekaterina, who is a commissioner from Georgia. Yes, that’s about it for now. We may have one or two more colleagues who may be joining this discussion. So without much ado. Sorry, and I forgot the organizers: Raj Singh is here, who is the CEO of the APNIC Foundation. And sorry, Raj, for this. I don’t want to waste much time; of the 60 minutes, I think five have already gone. So my first question to all the panelists would be: are you supporting projects to improve internet access and inclusion? And if yes, what do you perceive, from your perspective, are the two main issues that need to be addressed?
And since David is just next to me, David, you’re the first one.


David Hevey: Thank you very much for that. Sorry about that, the microphone keeps cutting out. Thank you for the question. I know it’s supposed to be an informal roundtable, but I’ve got some notes that I’ve been told to read off as well, so bear with me. Australia is working with Indo-Pacific regional partners to achieve greater connectivity and inclusive internet access. For example, through our Australian Infrastructure Financing Facility we’ve been partnering with Pacific Island countries, or PICs as we call them, on investments to support all PICs having primary telecommunications cable connectivity by the end of 2025. That’s been a really big focus of that facility; since 2018, we’ve committed 350 million Australian dollars to it. Again, it keeps cutting out, apologies. The facility has also made further investments in end connectivity to build out a secure, resilient and reliable digital ecosystem, including investment in terrestrial infrastructure, as we’ve seen in PNG and other Pacific Island countries. So that’s the infrastructure aspect covered. In terms of the work we’re supporting for secure and inclusive internet access, we also have our 2023–2030 Cyber Security Strategy in Australia. In line with that, we’ve got a capacity-building program across both Southeast Asia and the Pacific regions, focused on cybersecurity, including improving resilience in cybercrime and online scam response. So that’s what we’ve been focused on there. The program has also supported the work of Australia’s eSafety Commissioner. And having been at the Australia booth here today: if you haven’t already, please go there, it’s koala photo time soon.
But we’ve also focused on online safety work for digital inclusion. Another important thing, and I’m almost finished, please bear with me, is digital trade. Digital trade has also been a key part; we recognise it as an important inclusionary tool. So we’ve been advocating digital trade rules to achieve trust in the online environment, including online consumer protection, but also facilitating cooperation so that trading partners can actually make the most of digital trade. So I’ll leave it at that there, and thank you. Challenges? Challenges, okay. Well, the two main issues or challenges that we think need to be addressed: again, I touched on it before, but the first is the cyber resilience piece. We talked about the rapid response capability we’ve set up for deploying incident response. We set that up after Vanuatu and Tonga had cyber incidents, to which Australia deployed some assistance in late 2022 and early 2023; that’s all publicly out there and acknowledged. We saw that there was a genuine need to work with our regional partners to assist with incident response, which is why we set up that facility under the federal budget a couple of years ago. And the reason why we focus on cybercrime among the priorities is that cybercrime and cyber-enabled crime, as many of us know, are increasing. And because they can impact individuals and small to medium enterprises, they can actually have a much more profound impact as well. Those are some of the challenges we see there. For example, where we’re putting some rubber on the road, so to speak, we’ve partnered with New Zealand and also Identity Care Australia for some trial funding over this year, where we’ve got a support service to work with impacted individuals and businesses in PNG and Fiji, delivering tailored cybercrime and online scam response assistance. So that gives a measure of what we’re trying to prioritise as a key challenge. And covering off, I’m realising I’m running out of time, but the second main issue that we see needs addressing is prioritising regional coordination. We have so many actors and donors out there; look at us all sitting up on the stage here, thank you for joining me. I used to work on some capacity-building assistance programs with DFAT in the Pacific, and, for example, you’d have one person that might have three or four different roles who would also then have to go to four or five different trainings. So there’s always a challenge in ensuring that we’re not overloading the people that we’re actually trying to help in country. That’s why Australia is working with the Partners in the Blue Pacific, so that’s Australia, New Zealand, the UK, US, Canada, Japan, Germany and Korea. That’s why, through the Partners in the Blue Pacific, we set up the Pacific Cyber Capacity and Coordination Conference, so I’ll take a breath after that, apologies. And one of the key outcomes of the initial and intersessional meetings was around ensuring better coordination. I’ll leave it at that. Thank you.


Amrita Choudhury: Thank you so much. Raj, you will come last; I’m not going to ask you to speak right now. So I’ll go to Franz. Franz, over to you. GIZ has been doing a lot of work, so is there some synergy? They’re working a lot on cybersecurity, and David also mentioned regional coordination between donors, et cetera. But over to you.


Franz von Weizsäcker: Definitely. I mean, I would be speaking for a very long time if I was going to go through all the project lists that we have, with over 1,000 projects of which maybe 30% have some relevance to digital transformation. But focusing on the Asia region, you could look at digital inclusion, access to the internet, access to affordable internet, at different levels. We don’t, obviously, invest in the telecommunications industry; that’s the private sector’s job. In some cases, we have advisory projects with regulatory bodies, also at the regional level, at the ASEAN level in Jakarta. But what we mainly focus on is a different level of inclusion. We have a very big program in Cambodia as well as in Indonesia on digital inclusion for social safety and security, and that forms the very basis of inclusivity, also in the less developed parts of the country where internet affordability is lower. Because generally in many parts of Asia we have pretty large coverage, with a high percentage of the population in principle reached, but affordability is a major issue. Maybe India is the best positive example in terms of having a very capable regulator and a very competitive telecommunications industry that allows the price per gigabyte to drop lower than anywhere else in the world. And then there are some negative examples; Central Africa, I think, has the highest price per gigabyte in the world. The reason for that is that it’s not a good investment environment: it’s not safe, there’s no good rule of law, no good regulatory environment, no good competition, and so on. So that’s the very basis of affordable and inclusive internet connectivity. But in GIZ, we’re also addressing a few of the soft enablers that come on top of that. One is the general inclusivity of society.
That’s why we have social insurance programs in Indonesia and Cambodia, and a lot of the other programs focus on institutions for education, both at the level of general education and of vocational and technical training institutes, including those focusing on digital skills. There are also a few regional projects that we focus on in Asia. That is the GIZ focus, and when we come to challenges, well, there are challenges at many different levels. But if we focus on the core of internet connectivity, the usual challenge is that telecommunications regulation is a very political landscape. This is a billion-dollar business, and of course a lot of interests are involved. How the regulation is shaped influences whether companies can be profitable or not. So that is maybe the key challenge, and in many cases we have seen rural access funds being used for other purposes and in some cases not resulting in actual connectivity. In any public sector funding you always have silos; that’s a very typical situation. And that’s why maybe we should look at a big meta-study done by, I think, USAID, which looked into aid effectiveness and noticed that aid is much more effective when it’s channeled into local organizations, and that the most effective projects are those that have a large part of the budget allocated locally rather than having the big international implementers implementing all by themselves. So maybe one approach in terms of coordination is not only the big donors talking with each other but also the funding lines being dispersed to local organizations. Thank you.


Amrita Choudhury: Thank you, Franz, for bringing in the regulatory environment and the use of rural access funds. India has done good things, but the rural access fund has not been fully used; universal service funds like the USOF exist everywhere but are not delivering success. Zhang Xiao, over to you.


Zhang Xiao: Well, actually, this year is a very particular year for China. In 1994 the internet was introduced into China, so it has been exactly 30 years, and the number of internet users is huge. We are a registry, but we also do policy research and statistics on the internet coverage rate. The penetration rate is 78%, but if we exclude children under 10 years old, in line with the more international methodology, the penetration rate is over 91%. It’s going to be 1.1 billion users at the end of the month. The mobile penetration rate is nearly 100%, so I can just take my smartphone to go anywhere; I don’t need to take a card, my keys or anything else. So the internet penetration rate is huge. For .cn, we record some data, and from the data we can see a lot of things going on. So from my view, there are two challenges for internet inclusion. The first is elderly people. The penetration rate is good and we have 1.1 billion users already, a huge number of people, but we are entering an ageing society: 18.7% of Chinese people are 60 and above, and that is going to be 20%; within no more than 20 years, 35% of the total population is going to be 60 and above. Like my father: he couldn’t use a smartphone well, and a lot of smart appliances he couldn’t use well either. So he has access, but he can’t use it well. That’s a big problem for an ageing society, and not just in China; I think in Europe, Japan and some other countries as well. The second question for us is that there are still some 2.6 billion people with no access to the internet. How could we help them? I think we have best practices.
We have a lot of cases to share, and we could call for investment; there are a lot of things we can do to help them. Also in China, we still have 10% with no access. But really, if you look at the reason why they have no access, it’s not just investment in telecoms. It’s because they have no digital literacy: they don’t recognize the characters, they can’t read or write. Another reason is that they have no awareness that it’s important; they feel it has nothing to do with their life. So I think we still have a long way to go. Thank you.


Amrita Choudhury: I have a follow-up question for you. What is CNNIC doing? Is CNNIC playing a role?


Zhang Xiao: Yes, yes. Actually, we operate .cn, and it’s huge; we have 20 million registrations actually. I don’t want to put it too technically, but we also do policy research on internet usage. So with our data and our research, we support policymaking. For example, how many women are using the internet? If you look at the gender split, it’s quite balanced in China: 49% of users are women and 51% are men. We can also see the age classification. So with these results, we can support policymaking and what we should do next; I think the telecoms and all the governments are very interested in it.


Amrita Choudhury: Thank you so much. Sarah, the ISOC Foundation is investing a lot in various projects. What is it you are investing in? ICT projects, primarily; that’s where your focus is. And what do you see as the main challenges at this point?


Sarah Armstrong: Okay. So the Internet Society Foundation is a supporting organization for the Internet Society, and we are responsible for giving grants. We do this throughout the world. In fact, our first operational year was 2020, and since that time up until now, our fifth operational year, we have distributed over $63 million in funding. We’ve issued more than a thousand grants and we are working, or have worked, in 121 countries. Specifically as it relates to the APAC region, we’ve done nearly $5 million at this current time, and we have 37 active grants in the region. Now, these are some overall statistics about the Internet Society Foundation, but I’d like to give you some specific examples to answer your question about the things we are funding. So we are, again, a funding organization. We work with organizations throughout the world and we are definitely interested in the issue of connectivity access. And then we also care a lot about how people can benefit from the internet and how they can be upskilled in order to learn the things they need to do to increase their economic opportunities and their education. As an example, in Indonesia we have a project called Kota Kita. It’s part of our skills program; we have 11 different programs, and I won’t go into them all, but I will just say this is a skills program about building digital literacy. They are actually working with women with disabilities to help them with social enterprises. So that is a growth opportunity focused on training that we think is really important. And we have many training programs, but that’s an example in the APAC region. We have another grantee, the Digital Empowerment Foundation, that works in India. This program right now is aiming to reach 50,000 people across 100 communities worldwide, and they are working with tea tribes throughout an area of India. They’re also doing a resiliency grant.
This is a program where we help communities prepare for disasters that we know will come, so that they’re better equipped and ready to get back online and communicate. The small island states are certainly an area we believe in targeting, and we will be doing more of that in 2025 and onwards. We are also funding the Institute of Electrical and Electronics Engineers (IEEE). This is a resiliency grant, again a grant to help communities prepare for the inevitable disasters and be ready, currently aiming to directly impact 20,000 people. The final project to share in the APAC region, and this is not the only set of grants we have, but I wanted to give you an idea of what we are funding to help with this situation, is a Beyond the Net large grant providing literacy skill-building to allow citizens to participate in e-government services provided by the government of Kyrgyzstan. Very important: again, it’s not just getting access, it’s knowing what to do once you have that access. There’s also a research grant where they are creating an open and secure IoT infrastructure for monitoring and preventing emergencies in landlocked mountainous communities. So we’re doing a lot in APAC, and a lot throughout the world. We don’t do the work ourselves; we fund the work. So we are a funder. And to talk about the biggest issues: affordable, meaningful access, and how we can achieve it, we find to be extremely important. And then, as I was saying, training is part of a lot of the programs that we do, so that we can be sure people are benefiting to the fullest. Those are the areas that we know are challenged. And I guess the biggest challenge, quite frankly, is the need to connect the 2.6 billion people who are still not connected.


Amrita Choudhury: In some places giving grants is difficult; like in India, you can’t give to many people. That is also a challenge. But Ekaterina, I will come to you; I will first talk to the donors, and we have some here. Thank you for joining, and Shadia has come in. Samia, if you could share: the World Bank is involved in a lot of things, but what ICT projects are you supporting?


Samia Melhem: It’s so good to be here with all our partners and friends from the UN system. The World Bank has been doing a lot on digital, and we are scaling up our products and services. As you know, the World Bank provides financing support either through loans or grants, and for the low and really low-income countries, most of the assistance is through what we call IDA grants. We are seeing an unprecedented increase in IDA grants; we have around 100 billion mobilized for this round. And unlike the last 10 years, digital has become a big priority at the World Bank. We’ve been reorganized, and digital became one big vice-presidential unit on par with human capital, sustainable development and infrastructure. So it’s really big for us in terms of the attention, the mandate and the resources being made available to support digital acceleration. For one reason, we are worried about that big digital divide. We are seeing the impact it’s causing on attaining the SDGs; you know very well that countries that are adopting digital are much more likely, 45% more likely in fact, to achieve the SDGs on time. We are also seeing the big gap in the job markets, how the high-end, high-value jobs are almost monopolized by countries with strong capacity in STEM and digital skills, and that never happens without a strong digital public infrastructure. If these kids grow up with no connection, and by the time they’re connected they’re 50, they will have missed out on a lot of job opportunities. We’re seeing that even more now with AI, which needs a lot of good data if we want it to be useful and ethical, and it also needs data in all the spoken languages we are talking about, specifically, as we mentioned here, in Asia in general or South Asia. So there is really a lot to do, and we are pushing to accelerate. Our first focus, as all my colleagues said, is affordable broadband for all.
The second one is financing, with government and the private sector, not only telecom but also digital public infrastructure: the government networks, digital ID, shared services, authentication and security, and then the sectoral applications for health, education, transport, security, and so on. The last one, which you put so well, is digital skills. What is the use of putting in millions and billions if people cannot use it, or if they depend on the North or the West, or whatever we call it, to provide that? Everything everybody said here is really music to my ears, and I completely agree with all the focus on digital skills, building local capacity, investing in NGOs. Look, as you transfer capacity to a local entity, whether it's a government or a private-sector one, are they going to be as great in the beginning as the top consulting firms? No, but they know the local context, they know who does what, they have the local intel that these big firms many times don't have, and the big firms fail just because of that: they have the know-how but they don't know the context. So I think I have outlined the challenges. If I can just focus on one thing which is very dear to me, which you all mentioned: cooperation at a country level and at a regional level, making sure that we put all that good know-how and financing into coherent pieces, so that one day we will be sitting here and the 2.6 billion unconnected people will have been connected with meaningful access. Thank you.


Amrita Choudhury: So well said. Raj, I'll come to you and then Sharya. Tell us what the APNIC Foundation is doing, and what do you see as challenges?


Rajnesh Singh: Thank you Amrita, and it's nice to have all of you join this session, so thank you for making time for it. In terms of what the APNIC Foundation does, I'm going to go out on a limb here and say that no one knows the Asia-Pacific better than we do. We've been around for over 30 years. We've built most of the internet infrastructure in the region in some way or form, and we've helped with training, with capacity building, and so on. The Foundation itself is the development arm of APNIC, which is the Regional Internet Registry. We have the longest-running innovation fund in the region, the Information Society Innovation Fund, which has been running for over 16 years now. We have funded programs across inclusion, knowledge, and infrastructure, which are the three pillars of the program. Whether it's senior citizens, or upskilling women, or improving gender diversity in the workforce, we do all of that. We're a small organization, but we do a lot of work. So one of the things we like to say is that we are about action rather than words. I'll get to some of the issues and challenges I see. One is something that has concerned me for quite a while, and if you've heard me speak before, I keep repeating the same thing, because hopefully someone's going to listen. Samia, you mentioned some of this, actually. It's the widening digital divides we're creating. So it's not just the digital divides, it's the widening of those divides. That's got to do with infrastructure, with the devices people use, with digital literacy; we can go down through the layers and define where those divides are widening. It also comes down to something very simple and technical: whether an economy, a country or an organization is using IPv6 or still using IPv4.
So there are just so many layers of these widening digital divides, because what people have access to, and what they can leverage, will determine what they can do with the connectivity they have. We talk about how wireless, for example LEO satellites, is changing the landscape. Yes, it is, but there are still challenges with that as well: there are legislative or regulatory issues at play, and there's also an affordability angle. So we shouldn't look at one form of technology as the solution to fix everything; it has to be multi-modal in nature. The second problem, and in fact I'm going to mention three problems here, is the level of prioritization that exists between governments, within governments, and within regions. Unfortunately, very few governments have a whole-of-government approach to digital transformation. Time and again this keeps coming up: you've got an IT ministry, someone set up a digital ministry, then there's the finance ministry, then there's home affairs or foreign affairs who also want in. I'm not taking a shot at you, but if you don't have a whole-of-government approach, you're going to be working in silos, as Franz said. And then the third thing is the coordination between donors. There is just so much duplication of work, and it distresses me when I see multiple organizations giving funding to do the same thing that someone else has already done. Yes, you've got to tick off some KPIs or tick some boxes on your delivery, but if you want to bring about holistic, transformative change, you have to consider what's already out there and ask: where do I go and plug the gaps? That's what the APNIC Foundation does; we're more interested in plugging the gaps. So if any of you want to come and help and support us and plug gaps with us, please talk to us. Thanks, Amrita.


Amrita Choudhury: Thanks so much, Raj. Coordination is something I'm hearing about everywhere, in most of these discussions. And Shari, over to you.


Panelist: Thank you. Thank you very much. I think my job has been made easier by Raj, by Samia and by ISOC. We do a combination of what has been said, but the Islamic Development Bank can be seen as a smaller version of what the World Bank does. We finance digital development. We are more on the digital inclusion side, and we have a digital inclusion strategy, launched last year for four years, with four key areas that we focus on. The first focus is basically smart policy, because without private-sector intervention there can be no bridging of the digital divide; instead we end up talking about a widening digital divide. And now we are hearing in this conference that we will have an AI divide pretty soon, which we will be able to monitor, capture and see. The other aspect we cover is, of course, capacity building, which is digital literacy not only for policymakers but for end users as well, so that once internet connectivity is provided, people use the services. Then there is our traditional aspect: what we have been doing is financing enabling digital infrastructure, working on the upstream side, financing submarine cables and fiber-optic backbone. We have done several of them in Southeast Asia, East Africa and West Africa. And last but not least, what we have really been focusing on very recently is mainstreaming technology into development sectors: smart education and how we can use technology in education, telemedicine and health services, e-agriculture, smart cities. The idea is to make our development operations more effective, more impactful, and, last but not least, more sustainable. Low resources are a challenge that we face; we don't have the luxury of the large grants that the World Bank has, so we have a very limited amount of grants.
We have a limited amount of financing, and our member countries, which are in the Global South, also have challenges in borrowing. So we have to come up with innovative financing instruments, such as blended finance, so that it is affordable for the country to absorb this financing. That is also why we insist on developing a business case, so that a project is commercially viable and self-sustainable: once we have an exit policy and the funding is finished, there is a model that will sustain the whole operation of the intervention. I'll share one more thing, sorry for taking a bit longer. Just three weeks ago we launched a digital policy program for the Africa region in partnership with the ITU. This program had never been done before; we co-designed it because what we are trying to promote is a concept of government ownership. We encourage policymakers to come up with innovative programs that an international financing institution could finance to help bridge the digital divide. Normally we expect these innovative solutions from the private sector, from SMEs, but what we are trying to promote is government ownership: we tell policymakers that we will help you have the right capacity so that you think out of the box and come up with programs that will help us bridge the digital divide correctly. We are doing a similar program with UNDP, which we call the digital stewardship community: we are building a community of policymakers who have digital capacity.
So we not only run capacity-building programs for them; they then encourage other policymakers within their ministries to come up with projects, and we help them brush up. Ultimately, when we do the MCPS, which is the member country partnership strategy, the country engagement framework, those policymakers are invited to share the experiences of what they have learned. So we are not only doing capacity-building programs, we are hand-holding them to come up with larger projects, and then ultimately either we or our partners come in and finance those interventions. In Indonesia, we are working with the Ministry of Villages, financing an AI tool that will help them build the capacity to do their infrastructure service-delivery planning. In Indonesia, if you're aware, the villages use community engagement to set up a yearly plan. Initially it was manual, but they converted it into a service innovation platform, and now we are financing the AI-embedded tool of that platform, which will utilize the available information so that every participatory stakeholder is involved and can give their input, and we can come up with better infrastructure services, be it water, roads or energy supply in the villages. These are a couple of examples that I wanted to share, and we are happy to collaborate with others, because we will be replicating this in other countries and scaling it within countries as well. Thank you.


Amrita Choudhury: Thank you so much, Sharir. I'll go to Ekaterina. You've been hearing the donor community speak a lot. From a government perspective, what do you see as the gaps when projects are being implemented in a country? From your perspective, on the other side, where are the gaps?


Panelist: Thank you so much. I think this is the perfect moment for me, because I will give you another perspective, from the implementation side, of how it evolved, and also take you to another part of the world, the South Caucasus, while still working closely with the World Bank and other donor organizations like ISOC, GIZ, and the European Investment Bank. Let me mention the major state program, supported by the World Bank, which is the State Broadband Development Program, bringing high-speed broadband to the villages of Georgia. We are a small but mountainous country, and we still have a rural-urban gap. We all agree that today's economy and sustainable economic development are absolutely unthinkable without citizens having access to affordable, high-speed, high-quality internet. So we were supported to build the middle-mile connectivity: 5,000 kilometers of broadband, with up to 1,000 villages to be covered. And it's very important, when we speak about the challenges, that for the regulator and the state, a funded project should still maintain the competitiveness of the market. The market players might tell you that if you have enough funds to simply give subsidized internet to all the villages, they might step back and leave the market. So this is one of the challenges: to be very precise about where those white spots are where you definitely need this funding, so that it will not breach competitiveness overall, so that the country's digital ecosystem evolves to be more competitive, and so that you do not scare off investors in, for example, innovative technologies. But most important for the state is to protect your citizens and give them affordable access to high-speed internet. So this was the first step. The second important step, where the role of ComCom, the regulator, was also broader, is the second component of the project.
The first was the infrastructure build-out, and the second was the literacy component: bringing awareness of how to use the internet. That is another challenge, because you need to come to each small village, find the community, and make people confident that it's really interesting to hear how you can use the internet to grow your business or your household and gain economic benefits. You also need to reach out to people from different ethnic groups; we are a small country, but we have different ethnic groups. So it was ComCom's role to find the proper communities in the regions and villages, reach out, and start these media literacy trainings, and ComCom has been given the nationwide role of developing media literacy in the country. With this broader state-defined role we were empowered; we were supported by the Ministry of Education to work with schools and universities, and also to reach out to people with disabilities, for whom access to digital technologies is most important nowadays. It is a big enabler for people with disabilities to be reached and to learn how to turn this access into a benefit for their lives, for improving their next day. It is also very popular in Georgia, for example, for micro and small businesses to really understand how they can digitize their services and use this high-speed broadband for their day-to-day activities. So I think the challenges are very similar around the world; let's keep going with this strategy. Thank you so much.


Amrita Choudhury: Thank you so much. Here is a question for all the speakers to think about, though we'll also take some questions from the floor; I'm coming to you. The question I would like you to respond to later: what are two areas you will be focusing on from 2025 and beyond? But first we have some questions. We'll take two or three together and then you can respond.


Audience: Sorry, this is more of a comment than a question. I'm from the Internet Society Foundation, and I just wanted to say, going back to your comments about not being able to fund India, and Raj's point about the need for cooperation and peer dialogue among funders: one thing working at the Foundation that I think is another challenge is not being able to fund a lot of the people you want to fund, and this is something we recently experienced with Georgia. So I would just ask, or put out there, whether there are ways for funders and organizations to work within legal frameworks but also find ways to support people in countries where they might not legally be able to receive funding from other sources, even just by sharing ideas, or sharing partners you work with that you know are good in the region.


Amrita Choudhury: Thank you so much. So it's again about coordination, better coordination, and how you can use it. Yes, please.


Jordanka Tomkova: Hi, Jordanka Tomkova from Innova Bridge Foundation, Switzerland. I'm curious about the innovation in the interventions you have made and the funding you're providing. Do you have any good examples of innovative approaches, whether centralized or decentralized, since that is the title of today's talk? So not what has been done and what is standardly done, but what is innovative about it. Thank you.


Amrita Choudhury: Thank you so much. Any other questions from the room? If not, do any of the speakers want to talk about innovative approaches to funding, and whether there could be more exchange within the community here, to share best practices on reaching communities that need funds but that you currently cannot reach because of regulatory challenges? Anyone want to take a stab?


Samia Melhem: I'll be very quick, because, yes, innovation is a very good question. I think the first innovation is in the approach: how do you really get to those who are not connected? There is no way to reach them, and many don't even have an ID. So there's the idea of using local NGOs and communities. We have digital ambassadors in most of our projects, thousands of them, hand-picked, whom we empower, train and compensate in different ways, and they are from the areas that are newly connected. We have that in Congo, in Zambia, in Bangladesh, in Pakistan. So really using the youth on the ground, which we didn't in the past. The other element is to have more participation and crowdsourcing in the design of projects. These projects are getting bigger, and they really need the participation of stakeholders, so more design thinking as we plan these projects. Last but not least, and it's not an innovation, but really working more amongst one another and with the private sector to understand where the jobs are, and helping universities, academic institutes and learning institutions reform their supply of programs to really align with the job market.


Rajnesh Singh: Yeah, thanks Amrita. Samia covered some of the approaches they take, which some of us take as well. One thing I do want to bring up, just pointing out the elephant in the room: a lot of the reason we can't deploy funds in certain economies is a thing called sanctions, and another thing called geopolitics. That has to be taken into account, because what we find is that where the greatest intervention is required, where we could have the most impact, are those economies, and the people in those economies who are suffering, not because of their own doing; it may be the political system or whatever else exists in that economy. So how do you ensure that we can help the people who need the most help? That has been one of the challenges we've had to face. Our approach has been to find partners who can actually go and work in those economies. Sometimes we can go ourselves, and sometimes we are limited by government sanctions and/or legislation, but there are ways that can be addressed as well. However, what concerns me is that we don't seem to talk about this enough, because we want to do things here and there and everywhere else, but the people who could benefit the most from digital technologies and all that the internet can provide, sometimes we just don't go there, because we can't even send money there, or we can't send people because there's a security issue, for example. So I think there is scope to work together on some innovative approaches to address that issue, and I'm happy to discuss it further, given that we are out of time.


Amrita Choudhury: Thank you. I know Sarah is going to go next. Raj would not say it, so I'll say it: Afghanistan, for example, is one place where putting in money is difficult for most funders, but they need it the most, women especially. Sarah, you first, and then Shadia.


Sarah Armstrong: Yes, just briefly. Some of the examples that I mentioned here, we feel, are really putting forth innovative solutions. The organization I mentioned earlier that is using IoT to help with the detection of emergencies in the climate area is our Kyrgyzstan chapter; I think that's a really good example of being innovative. We also have the IEEE working in India, and we're seeing elements of that there as well. And in order to continue to find innovative solutions, we are encouraging our grantees to learn from one another, to experience what others are doing and how they're doing it, and how it can be more innovative and responsive to the cultures in which they're working. That's a very important part of the type of foundation we are and would like to continue to be.


Panelist: Yeah, maybe just one more thing that I wanted to share, about a new method we came up with very recently. Traditionally the Bank has operated by receiving a request from a member country and then addressing the needs. When we were developing our digital inclusion strategy, which was launched here in Riyadh last year, we actually went through a very detailed consultative process of over one and a half years, involving policymakers from 14 member countries and 10 international organizations. I can see Mr. Sharif sitting here, and he was part of that discussion. We had something called the IsDB Digital Inclusion Technical Working Group, and we reached out to policymakers from different countries for in-person and hybrid workshops in different parts of our member-country constituency. Fast forward one year: when we came up with the strategy, we said, okay, we have a strategy, but what would be a catalyst for certain immediate programs that would kick-start things on the ground? So we created the Digital Inclusion Strategic Partnership Program, where we encouraged all the partners who were with us throughout the journey of developing the strategy to come up with pilot programs with some sort of scalability criteria, or the appetite for scale, that would encourage more people, more international organizations and even the private sector to come in and finance the scale-up. We are now financing four pilot programs: a smart village in Pakistan, a pilot program in Indonesia, one in the Maldives, and work with different partners on digital ID. The aim is to de-risk it for the government: if we can deliver some immediate results with a known amount of financing, that would encourage potential financiers or investors to come in and fill these gaps.
These are some of the innovative mechanisms, in addition to the one I already mentioned, blended finance: we combine a grant portion with some soft lending, especially for low-income countries that otherwise would not be able to afford the financing for the immediate needs where they would like to have some impact in the country. This is what I wanted to share. Thank you.


Amrita Choudhury: I do have one question, and I see a five-minute alert. There is one question I have to give to her; she's from Myanmar. Then I will bring back my earlier question: what can we do next? One minute each, or even half a minute, be quick.


Audience: Yeah, I just wanted to agree with Raj, and thank you for raising the communities, like Vanuatu, that are suffering from loss of internet. That is also a challenge for communities like Myanmar and Afghanistan: receiving funding to build capacity in our communities. I would really like to request all grantors and funders to think about how, sometimes, to break through and see the community, not the sanctions or the geopolitical situation. Thank you very much.


Amrita Choudhury: Thank you, a very relevant point. So my question to all of you is: what do we do next? We've discussed it, we've said coordination is important, but what should our next steps be if we really want to take this discussion forward? Anyone want to take a jab first? I have the mic.


Panelist: In addition to what I've just mentioned, we now have a pipeline of 72 programs with the potential for over $1 billion worth of digital development projects. Of course, we cannot finance all of it ourselves, so we are encouraging partners to come in and chip in on developing these programs. The idea is that over the next three years, we will not only do capacity building for policymakers and the countries themselves, but we will help them go from a concept note to a bankable project that would ultimately be available for financing by the private sector or any MDB. There are a certain number of programs for which we need to do the seed funding, and we look forward to those kinds of interventions and are happy to collaborate and work together towards achieving collective goals. Thank you. Sorry. Okay, thank you so much. I'll try to be very, very quick. I want to mention one step that will still be ongoing: implementing media and digital literacy in formal education. This is one of the components that is crucial for the country, and I think it will complete the circle of the success story of the whole broadband development, with the literacy component and the safe use of the internet, so we will make it even broader in schools and universities. And a second topic on our agenda: since digital is a cross-cutting stream, involve more sectors in this. When you bring the broadband infrastructure and literacy, you also need to bring in other economic stakeholders to make a real success story out of your investment. Thank you.


Franz von Weizsäcker: All right, on what to do next: for GIZ, as a big organization, our answer is that we take our decisions very decentrally. Probably 90% of our budgets are decided by people who live and reside in the country where the work is being implemented, and that is one of the answers on innovation, because when you want to source good and applicable innovations, that needs to happen locally. That is also where most of the coordination should happen going forward. Another recommendation for all the granting mechanisms and calls for proposals: one big piece of feedback we got, which grantees very much appreciated, is to keep all these processes very simple. Have the pitch done on one sheet of paper, then a subsequent pitching session, and do not overburden grantees with bureaucracy. That's the best way to be effective, get good value for money, and source the real innovations, not just the organizations that are very good at checking all the donors' boxes. So don't attach too many strings; rather, make it simple and fit for purpose.


Rajnesh Singh: So I'll just repeat what I said before: don't duplicate. If you want to do something in the Asia-Pacific, come and talk to us, come and talk to me. Thank you.


Sarah Armstrong: Okay, just to add on: absolutely, we are continually looking for ways to improve the Foundation and the way in which it works. It's five years old and we are just launching our next five-year strategy, so we are full of new ideas. We know more simplicity for our grantees is extremely important, so we're finding ways to do that. We are also looking at a stronger gender focus; we think the digital gender divide is an important thing for us to be addressing, so that's another area we'll focus on going into the future. And we will continue to find innovative ways, because some of the environments in which we're working are very difficult, as we've talked about, and to see if we can identify grantees that are able to come up with solutions to some of the problems we've encountered. We are going to move forward with a lot of enthusiasm and possibly a fair amount of change.


David Hevey: Thank you. Beyond what I already said about focusing on capacity building for cybersecurity and cybercrime, an honorable mention on data: as my colleague from the World Bank said before, data is critical in a number of things, and one thing we are looking forward to is supporting cloud transformation roadmaps for countries, particularly in PNG and Vanuatu. So that's another thing we focus on. Also, taking stock of things is really important; the monitoring and evaluation piece really matters. It's all well and good that we have approaches which have worked and that we continue to innovate, but we must ensure that what we're doing is actually hitting the mark and doing what we need it to do. And having been in Foreign Affairs and worked with the APNIC Foundation, I support Raj's plug to partner with the APNIC Foundation. There we are.


Zhang Xiao: Yeah. Personally, I would like to focus on AI governance, because digitalization is a process and the internet is the foundation, but AI is going to change every field, more or less. So I want this talk, this dialogue, to continue, because as a group of people, we are going to make sense of it. Thank you.


Samia Melhem: Thank you. Yes, you've said it all. If I can complement that with two actions: the first is for big organizations. The Saudi government, for example, wants to fund a lot of these projects. How do we make sure it gets done and approved without being too complicated for NGOs, et cetera? So make partnership much easier. And second, on the client side, with governments: we oftentimes start big projects, but they're not completed. We do an ID system, and 10% of the people of that country are in it. What about the other 90%? Make it easy to have follow-up projects to complete what we started and keep perfecting it. Thank you.


Amrita Choudhury: Thank you so much. We will be ending now, but as Xiao mentioned, this dialogue needs to continue. May I, as moderator, suggest a mailing list of sorts where like-minded donors could share? The World Bank, for example, has tremendous knowledge, like what you shared about projects not being completed, or about making things easier for hosts. The bank has ideas, governments have ideas. Could there be some kind of mailing list where you share experiences? As Franz was saying, let's not duplicate projects, and let's see how the knowledge that is there can be used. For example, if GIZ is making a simpler form, could others look at it as inspiration? I'm not saying copy-paste it. So perhaps someone can think of setting up a mailing list. I don't know, Raj, would the APNIC Foundation want to host one if others are interested? It could be an informal way for you all to exchange, so this discussion can continue and other people can join it or not. That's just a suggestion; you can take it or leave it. And we would like to have a group photo, first of the speakers. Thank you so much, it was really good. We would have loved a 90-minute session, but we ran out of time. If we could get just three minutes to take a photograph of the speakers and then of the group? Thank you. Thank you.



David Hevey

Speech speed

163 words per minute

Speech length

1063 words

Speech time

390 seconds

Cyber resilience and cybercrime response

Explanation

David Hevey emphasizes the importance of addressing cyber resilience and cybercrime response. He highlights these as key challenges in improving internet access and inclusion.


Evidence

Australia has set up a rapid response facility for deploying incident response after cyber incidents in Vanuatu and Tonga. They are also partnering with New Zealand and Identity Care Australia to provide cybercrime and online scam response assistance in PNG and Fiji.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Infrastructure investments in Pacific Island countries

Explanation

David Hevey discusses Australia’s investments in infrastructure to improve connectivity in Pacific Island countries. This includes both submarine cable connectivity and terrestrial infrastructure.


Evidence

Australia has committed 350 million Australian dollars since 2018 to support all Pacific Island countries having primary telecommunications cable connectivity by the end of 2025.


Major Discussion Point

Current Projects and Initiatives


Need for regional coordination among donors

Explanation

David Hevey emphasizes the importance of coordination among donors to avoid overloading recipient countries. He points out the challenge of multiple organizations providing similar trainings or assistance.


Evidence

Australia is working with partners of the Blue Pacific to set up the Pacific Cyber Capacity and Coordination Conference to improve coordination.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

Rajnesh Singh


Franz von Weizsäcker


Agreed on

Need for better coordination among donors


Cloud transformation roadmaps for countries

Explanation

David Hevey mentions supporting cloud transformation roadmaps for countries as a future focus area. This initiative aims to help countries modernize their digital infrastructure.


Evidence

He specifically mentions plans to support cloud transformation roadmaps for PNG and Vanuatu.


Major Discussion Point

Future Focus Areas



Franz von Weizsäcker

Speech speed

139 words per minute

Speech length

818 words

Speech time

352 seconds

Affordability of internet access

Explanation

Franz von Weizsäcker highlights affordability as a major challenge in internet access and inclusion. He notes that while coverage may be high in many parts of Asia, affordability remains a significant issue.


Evidence

He contrasts India’s competitive telecommunications industry and low prices with Central Africa, which has the highest price per gigabyte globally due to poor investment environment and regulatory issues.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Social insurance and digital skills programs in Southeast Asia

Explanation

Franz discusses GIZ’s focus on social insurance and digital skills programs in Southeast Asia. These programs aim to improve digital inclusion and skills development.


Evidence

He mentions specific programs in Cambodia and Indonesia focused on digital inclusion for social safety and security.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Sarah Armstrong


Zhang Xiao


Samia Melhem


Agreed on

Importance of digital literacy and skills development


Importance of funding local organizations

Explanation

Franz emphasizes the importance of channeling aid through local organizations for greater effectiveness. He suggests that local organizations have better understanding of the context and local intelligence.


Evidence

He references a meta-study by USAID which found that aid is much more effective when channeled into local organizations.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

David Hevey


Rajnesh Singh


Agreed on

Need for better coordination among donors


Simplifying grant processes and sourcing local innovations

Explanation

Franz recommends simplifying grant processes and focusing on sourcing local innovations. He suggests that this approach leads to more effective and innovative solutions.


Evidence

He advises keeping processes simple, such as using one-page pitches and subsequent pitching sessions, to avoid overburdening grantees with bureaucracy.


Major Discussion Point

Future Focus Areas



Zhang Xiao

Speech speed

153 words per minute

Speech length

664 words

Speech time

258 seconds

Digital literacy for elderly populations

Explanation

Zhang Xiao identifies digital literacy for elderly populations as a significant challenge. He points out that while internet penetration is high in China, many elderly people struggle to use digital technologies effectively.


Evidence

He mentions that 18.7% of Chinese people are over 60, expected to reach 35% in the next 20 years, and many struggle with using smartphones and smart appliances.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Agreed with

Franz von Weizsäcker


Sarah Armstrong


Samia Melhem


Agreed on

Importance of digital literacy and skills development


AI governance

Explanation

Zhang Xiao expresses a desire to focus on AI governance in the future. He sees AI as a transformative technology that will impact various fields.


Major Discussion Point

Future Focus Areas



Sarah Armstrong

Speech speed

141 words per minute

Speech length

1031 words

Speech time

437 seconds

Affordable and meaningful access

Explanation

Sarah Armstrong emphasizes the importance of affordable and meaningful access to the internet. She highlights this as a key challenge in improving internet inclusion.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Digital literacy programs for women with disabilities

Explanation

Sarah Armstrong discusses the Internet Society Foundation’s support for digital literacy programs, particularly those targeting women with disabilities. These programs aim to improve digital skills and economic opportunities.


Evidence

She mentions a project in Indonesia called Kota Kita that works with women with disabilities to help them with social enterprises.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Franz von Weizsäcker


Zhang Xiao


Samia Melhem


Agreed on

Importance of digital literacy and skills development


Gender focus and addressing the digital gender divide

Explanation

Sarah Armstrong mentions a future focus on addressing the digital gender divide. The foundation plans to increase its emphasis on gender-focused initiatives.


Major Discussion Point

Future Focus Areas



Rajnesh Singh

Speech speed

191 words per minute

Speech length

1023 words

Speech time

319 seconds

Widening digital divides across multiple layers

Explanation

Rajnesh Singh expresses concern about widening digital divides across various layers, including infrastructure, devices, digital literacy, and technical aspects like IPv6 adoption. He emphasizes that these divides determine what people can do with their connectivity.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Lack of whole-of-government approach to digital transformation

Explanation

Rajnesh Singh points out the lack of a whole-of-government approach to digital transformation in many countries. He argues that this leads to working in silos and ineffective implementation of digital initiatives.


Evidence

He mentions the existence of separate IT ministries, digital ministries, finance ministries, and other departments working on digital transformation without proper coordination.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Sanctions and geopolitics limiting aid to certain economies

Explanation

Rajnesh Singh highlights how sanctions and geopolitics limit the ability to provide aid to certain economies. He points out that often the areas most in need of intervention are those affected by these limitations.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Agreed with

Audience


Agreed on

Challenges in funding certain countries due to regulations


Innovation fund for inclusion, knowledge and infrastructure

Explanation

Rajnesh Singh mentions APNIC Foundation’s Information Society Innovation Fund, which has been running for over 16 years. The fund supports programs across inclusion, knowledge, and infrastructure.


Evidence

He mentions that the fund has supported programs for senior citizens, upskilling women, and improving gender diversity in the workforce.


Major Discussion Point

Current Projects and Initiatives


Duplication of work among donor organizations

Explanation

Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.


Evidence

He mentions seeing multiple organizations giving funding to do the same thing that someone else has already done.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

David Hevey


Franz von Weizsäcker


Agreed on

Need for better coordination among donors



Samia Melhem

Speech speed

157 words per minute

Speech length

937 words

Speech time

356 seconds

Digital public infrastructure and sectoral applications

Explanation

Samia Melhem discusses the World Bank’s focus on financing digital public infrastructure and sectoral applications. This includes government networks, digital ID, shared services, and applications in health, education, and other sectors.


Major Discussion Point

Current Projects and Initiatives


Agreed with

Franz von Weizsäcker


Sarah Armstrong


Zhang Xiao


Agreed on

Importance of digital literacy and skills development


Completing and perfecting started projects

Explanation

Samia Melhem emphasizes the importance of completing and perfecting started projects. She points out that many projects are not completed or only partially implemented.


Evidence

She gives an example of an ID system where only 10% of the country’s population is included, questioning what happens to the other 90%.


Major Discussion Point

Future Focus Areas



Panelist

Speech speed

148 words per minute

Speech length

2231 words

Speech time

900 seconds

Maintaining market competitiveness while funding rural access

Explanation

The panelist discusses the challenge of maintaining market competitiveness while funding rural internet access. They emphasize the need to balance government-funded projects with maintaining a competitive market environment.


Evidence

The panelist mentions that players might step back from the market if the government provides funded internet to all villages, potentially reducing overall market competitiveness.


Major Discussion Point

Challenges in Improving Internet Access and Inclusion


Broadband development program in rural Georgia

Explanation

The panelist discusses a state broadband development program in Georgia, supported by the World Bank. The program aims to bring high-speed broadband to rural and mountainous areas of the country.


Evidence

The program involves building 5,000 kilometers of broadband infrastructure to cover up to 1,000 villages.


Major Discussion Point

Current Projects and Initiatives


Implementing digital literacy in formal education

Explanation

The panelist emphasizes the importance of implementing digital literacy in formal education as a future focus area. This approach aims to create a comprehensive digital literacy program integrated into the education system.


Evidence

They mention plans to make digital literacy broader in schools and universities, including components on safe internet use.


Major Discussion Point

Future Focus Areas



Unknown speaker

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Developing partnerships for financing digital projects

Explanation

The speaker discusses the development of partnerships for financing digital projects. This approach aims to leverage resources from multiple partners to fund and implement digital development initiatives.


Evidence

The speaker mentions having a pipeline of 72 programs with the potential for over $1 billion worth of digital development projects, encouraging partners to contribute to the development of these programs.


Major Discussion Point

Coordination and Collaboration Among Donors



Audience

Speech speed

154 words per minute

Speech length

248 words

Speech time

96 seconds

Challenges in funding certain countries due to regulations

Explanation

An audience member raises the issue of regulatory challenges in funding certain countries. This highlights how legal and political factors can limit the ability of donors to support digital development in some regions.


Evidence

The speaker mentions difficulties in funding projects in countries like Myanmar and Afghanistan due to sanctions or geopolitical situations.


Major Discussion Point

Coordination and Collaboration Among Donors


Agreed with

Rajnesh Singh


Agreed on

Challenges in funding certain countries due to regulations


Agreements

Agreement Points

Importance of digital literacy and skills development

speakers

Franz von Weizsäcker


Sarah Armstrong


Zhang Xiao


Samia Melhem


arguments

Social insurance and digital skills programs in Southeast Asia


Digital literacy programs for women with disabilities


Digital literacy for elderly populations


Digital public infrastructure and sectoral applications


summary

Multiple speakers emphasized the importance of digital literacy and skills development programs, targeting various groups including women with disabilities, elderly populations, and general workforce development.


Need for better coordination among donors

speakers

David Hevey


Rajnesh Singh


Franz von Weizsäcker


arguments

Need for regional coordination among donors


Duplication of work among donor organizations


Importance of funding local organizations


summary

Several speakers highlighted the need for better coordination among donors to avoid duplication of efforts, overloading recipient countries, and to ensure more effective use of resources.


Challenges in funding certain countries due to regulations

speakers

Rajnesh Singh


Audience


arguments

Sanctions and geopolitics limiting aid to certain economies


Challenges in funding certain countries due to regulations


summary

Both Rajnesh Singh and an audience member raised concerns about regulatory challenges and sanctions limiting the ability to fund digital development projects in certain countries.


Similar Viewpoints

These speakers all emphasized the importance of affordable and accessible internet infrastructure, particularly in developing regions.

speakers

David Hevey


Franz von Weizsäcker


Sarah Armstrong


arguments

Infrastructure investments in Pacific Island countries


Affordability of internet access


Affordable and meaningful access


Both speakers highlighted concerns about digital divides, particularly focusing on how certain populations (such as the elderly) may be left behind in digital adoption.

speakers

Rajnesh Singh


Zhang Xiao


arguments

Widening digital divides across multiple layers


Digital literacy for elderly populations


Unexpected Consensus

Importance of local context and organizations in project implementation

speakers

Franz von Weizsäcker


Rajnesh Singh


Samia Melhem


arguments

Importance of funding local organizations


Lack of whole-of-government approach to digital transformation


Completing and perfecting started projects


explanation

There was an unexpected consensus on the importance of understanding and working within local contexts, involving local organizations, and ensuring projects are completed and effective within specific country environments. This highlights a shift from top-down approaches to more locally-driven development strategies.


Overall Assessment

Summary

The main areas of agreement included the importance of digital literacy and skills development, the need for better donor coordination, addressing affordability and accessibility of internet infrastructure, and recognizing the challenges posed by regulations and sanctions in certain countries.


Consensus level

There was a moderate to high level of consensus among the speakers on these key issues. This consensus suggests a growing recognition of the complex, multifaceted nature of digital development challenges and the need for collaborative, locally-sensitive approaches. The implications of this consensus could lead to more coordinated efforts among donors, increased focus on digital literacy alongside infrastructure development, and potentially new strategies for overcoming regulatory barriers in challenging environments.


Differences

Different Viewpoints

Approach to funding and implementation

speakers

Franz von Weizsäcker


Rajnesh Singh


arguments

Franz emphasizes the importance of channeling aid through local organizations for greater effectiveness. He suggests that local organizations have better understanding of the context and local intelligence.


Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.


summary

While Franz advocates for channeling aid through local organizations, Rajnesh expresses concern about duplication of work among donor organizations. This suggests a difference in approach to funding and implementation of projects.


Overall Assessment

summary

The main areas of disagreement appear to be around the approach to funding and implementation of projects, with some speakers advocating for local involvement and others focusing on coordination among larger donor organizations.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers seem to agree on the major challenges and goals, with differences mainly in the specific approaches or areas of focus. This suggests that there is potential for collaboration and coordination among the various organizations represented, which could lead to more effective interventions in improving internet access and inclusion.


Partial Agreements

Both speakers agree on the need for better coordination among donors, but they approach it from different angles. David focuses on avoiding overloading recipient countries, while Rajnesh emphasizes avoiding duplication of work.

speakers

David Hevey


Rajnesh Singh


arguments

David Hevey emphasizes the importance of coordination among donors to avoid overloading recipient countries. He points out the challenge of multiple organizations providing similar trainings or assistance.


Rajnesh Singh expresses concern about the duplication of work among donor organizations. He argues that this leads to inefficient use of resources and limits the overall impact of interventions.



Takeaways

Key Takeaways

There are significant challenges in improving internet access and inclusion, including cybersecurity issues, affordability, digital literacy gaps, and widening digital divides.


Donor organizations are implementing various projects to address these challenges, focusing on infrastructure development, digital skills training, and sector-specific applications.


Better coordination and collaboration among donors is needed to avoid duplication of efforts and maximize impact.


Future focus areas should include AI governance, addressing the digital gender divide, implementing digital literacy in formal education, and simplifying grant processes.


Resolutions and Action Items

Explore ways to improve coordination and information sharing among donor organizations


Consider creating a mailing list for donors to share experiences and best practices


Focus on completing and perfecting started projects rather than initiating new ones


Simplify grant processes and funding mechanisms to make them more accessible


Unresolved Issues

How to effectively provide aid to countries affected by sanctions or geopolitical issues


Balancing government-funded projects with maintaining market competitiveness


Addressing the challenges of funding certain countries due to regulatory restrictions


Finding innovative ways to reach and support underserved communities in difficult environments


Suggested Compromises

Using local NGOs and community-based organizations to reach areas where direct funding is challenging


Implementing blended finance approaches to make projects more affordable for low-income countries


Balancing centralized decision-making with decentralized implementation to foster local innovation


Partnering with local organizations and existing regional experts (like APNIC Foundation) to leverage local knowledge and networks


Thought Provoking Comments

We shouldn’t just look at one form of technology as the solution to fix everything. It has to be multi-modal in nature.

speaker

Rajnesh Singh


reason

This comment challenges the tendency to view new technologies like LEO satellites as a panacea, highlighting the need for diverse, context-appropriate solutions.


impact

It broadened the discussion beyond specific technologies to consider more holistic approaches to digital inclusion.


The widening digital divides we’re creating. So it’s not just the digital divides. It’s the widening digital divides we’re creating.

speaker

Rajnesh Singh


reason

This insight highlights how current approaches may be exacerbating inequality rather than reducing it, forcing a critical examination of existing strategies.


impact

It shifted the conversation to focus more on the unintended consequences of digital development efforts and the need to address root causes of inequality.


Look as you transfer capacity to a local entity whether it’s a government or a private sector are they going to be as great in the beginning than the top consulting firms? No, but they know the local context, they know who does what, they have the local intel that many times these big firms don’t have and they fail just because of that.

speaker

Samia Melhem


reason

This comment challenges the conventional wisdom of relying on external expertise, emphasizing the value of local knowledge and capacity building.


impact

It prompted discussion on more sustainable, locally-driven approaches to digital development and capacity building.


A lot of the reasons we can’t deploy funds in certain economies is due to a thing called sanctions and due to another thing called geopolitics.

speaker

Rajnesh Singh


reason

This comment brings attention to often-overlooked political barriers to digital inclusion efforts, highlighting systemic challenges beyond technological or financial constraints.


impact

It broadened the scope of the discussion to include geopolitical factors and prompted consideration of how to work within or around these constraints.


Don’t put too many strings attached, but rather make it simple and make it fit for purpose.

speaker

Franz von Weizsäcker


reason

This insight challenges conventional grant-making processes, suggesting that overly complex requirements may hinder innovation and effectiveness.


impact

It sparked discussion on how to streamline funding processes to better support local innovation and implementation.


Overall Assessment

These key comments shaped the discussion by challenging conventional approaches to digital inclusion and development. They broadened the conversation beyond technological solutions to consider geopolitical factors, local capacity building, and the unintended consequences of current strategies. The discussion shifted towards more nuanced, context-specific approaches that prioritize local knowledge and simplify implementation processes. This led to a more critical and holistic examination of digital inclusion efforts and their impacts.


Follow-up Questions

How can donors work within legal frameworks to support people in countries where they might not be able to receive funding legally from other sources?

speaker

Audience member from Internet Society Foundation


explanation

This is important to find ways to support communities in need despite regulatory challenges or sanctions.


What are some examples of innovative approaches in interventions and funding, whether centralized or decentralized?

speaker

Jordanka Tomkova from Innova Bridge Foundation


explanation

Understanding innovative approaches can help improve the effectiveness and reach of development projects.


How can we address the challenge of deploying funds in economies affected by sanctions or geopolitical issues?

speaker

Rajnesh Singh


explanation

This is crucial for helping people who need the most assistance but are limited by political circumstances beyond their control.


How can we improve coordination and information sharing among donors and implementers?

speaker

Multiple speakers (Rajnesh Singh, Samia Melhem, Franz von Weizsäcker)


explanation

Better coordination can reduce duplication of efforts and improve overall effectiveness of development projects.


How can we simplify grant application processes to make them more accessible to local organizations?

speaker

Franz von Weizsäcker


explanation

Simpler processes can help source real innovations and support organizations that may not have extensive resources for complex applications.


How can we address the digital gender divide?

speaker

Sarah Armstrong


explanation

Focusing on gender-specific issues in digital inclusion is important for ensuring equitable access and opportunities.


How can we support cloud transformation roadmaps for countries, particularly in PNG and Vanuatu?

speaker

David Hevey


explanation

This is important for helping countries modernize their digital infrastructure and services.


How can we improve monitoring and evaluation of digital development projects?

speaker

David Hevey


explanation

Effective monitoring and evaluation is crucial for ensuring that projects are achieving their intended outcomes and for continuous improvement.


How can we address AI governance in the context of digital development?

speaker

Zhang Xiao


explanation

As AI becomes more prevalent, understanding and managing its impacts on various fields is crucial for sustainable and ethical development.


How can we make partnerships easier between large organizations and NGOs for project funding and implementation?

speaker

Samia Melhem


explanation

Streamlining partnerships can help leverage resources and expertise more effectively for development projects.


How can we ensure follow-up projects to complete and perfect initiatives that have been started?

speaker

Samia Melhem


explanation

This is important for ensuring that projects achieve their full potential and reach all intended beneficiaries.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Launch / Award Event #168 Parliamentary approaches to ICT and UN SC Resolution 1373


Session at a Glance

Summary

This panel discussion focused on parliamentary approaches to using information and communication technologies (ICTs) in counterterrorism efforts, in accordance with UN Security Council Resolution 1373. Experts from various international organizations and parliamentary bodies shared insights on the challenges and opportunities presented by ICTs and artificial intelligence (AI) in combating terrorism.


The speakers emphasized the critical role of parliamentarians in developing legislation, allocating resources, and providing oversight for counterterrorism measures involving new technologies. They stressed the importance of balancing security needs with human rights protections and adhering to international law. The discussion highlighted how terrorist groups are exploiting AI and other emerging technologies for propaganda, recruitment, and planning attacks, while also noting the potential for these same technologies to enhance threat detection and prevention efforts by authorities.


Key points included the need for technology-neutral legislation, international cooperation, and public-private partnerships in addressing these challenges. Speakers also emphasized the importance of digital literacy and public awareness campaigns to build societal resilience against online radicalization and disinformation. UN Security Council Resolution 1373 was cited as a foundational document guiding international counterterrorism efforts, with speakers noting the ongoing need to adapt its principles to the evolving technological landscape.


The panel concluded by reiterating the importance of human rights considerations in all counterterrorism measures and the need for continued dialogue and collaboration among parliamentarians, international organizations, and other stakeholders to effectively address the complex challenges posed by terrorist use of ICTs and AI.


Keypoints

Major discussion points:


– The dual nature of information and communication technologies (ICTs) in counterterrorism – both as tools that can be exploited by terrorists and as valuable resources for preventing/countering terrorism


– The critical role of parliamentarians in developing legislation, allocating resources, and providing oversight related to ICTs and counterterrorism


– The importance of UN Security Council Resolution 1373 as a foundational document guiding international counterterrorism efforts


– The need to balance security measures with protection of human rights and fundamental freedoms when regulating ICTs


– The value of international cooperation and public-private partnerships in addressing ICT-related terrorism challenges


Overall purpose:


The goal of the discussion was to explore parliamentary approaches to using ICTs in counterterrorism efforts, in accordance with UN Security Council Resolution 1373. Speakers shared insights on challenges, opportunities, and best practices for parliamentarians to consider.


Tone:


The tone was largely formal and informative, with speakers providing expert perspectives in a professional manner. There was an underlying sense of urgency about the topic, but the tone remained measured and analytical throughout. The discussion concluded on a note of ongoing commitment to addressing the complex issues raised.


Speakers

– Murad Tangiev: Chief of the UNOCT program office on Parliamentary Engagement


– David Alamos: Moderator, Chief of the UNOCT program office on Parliamentary Engagement


– Kamil Aydin: Chair of the Ad Hoc Committee on Counterterrorism of the Organization of Security and Cooperation in the European Parliamentary Assembly


– Ahmed Buckley: Author of the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373, former diplomat and counterterrorism expert


– Emanuele Loperfido: Vice-Chair of the Ad-Hoc Committee on Counterterrorism of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, member of the Italian delegation to the OSCEPA


– Abdelouahab Yagoubi: Member of the People’s National Assembly of Algeria, PAM rapporteur on artificial intelligence


– Jennifer Bramlette: Coordinator for Information and Communication Technology of the United Nations Counterterrorism Committee Executive Directorate


– Akvile Giniotiene: Head of the Cyber and New Technologies Unit at the United Nations Office of Counterterrorism


Additional speakers:


– Dr. Ahmed Al-Muhannadi: Member of the Shura Council of Qatar (mentioned but did not speak)


– Pedro Roque: Vice President of the Parliamentary Assembly of the Mediterranean (mentioned but did not speak)


– Audience member: Badil Badi from Shura Council, Qatar (asked a question at the end)


Full session report

Parliamentary Approaches to Using Information and Communication Technologies in Counterterrorism Efforts


This panel discussion, held in the context of the Internet Governance Forum (IGF), brought together experts from various international organisations and parliamentary bodies to explore the challenges and opportunities presented by information and communication technologies (ICTs) and artificial intelligence (AI) in combating terrorism. The dialogue centred on the role of parliamentarians in developing legislation, allocating resources, and providing oversight for counterterrorism measures involving new technologies.


The Role of Parliaments in Addressing ICT/AI Challenges


Speakers unanimously agreed on the critical role of parliaments in addressing the challenges posed by ICTs and AI in counterterrorism efforts. Parliamentarians are responsible for transposing international commitments, such as those outlined in UN Security Council Resolution 1373, into national laws and allocating resources based on credible threat assessments. David Alamos, the moderator and Chief of the UNOCT programme office on Parliamentary Engagement, emphasised the parliamentary role in allocating budgets and conducting oversight of counterterrorism efforts.


Akvile Giniotiene from the UN Office of Counterterrorism highlighted the importance of establishing legal frameworks for law enforcement to use new technologies effectively. She also discussed UNOCT’s capacity-building efforts to support member states in developing these frameworks.


Dual Nature of ICTs and AI in Counterterrorism


A significant portion of the discussion revolved around the dual nature of ICTs and AI in counterterrorism efforts, acknowledging both the potential benefits for authorities and the risks posed by malicious actors exploiting these technologies.


Challenges:


1. Kamil Aydin, Chair of the Ad Hoc Committee on Counterterrorism of the OSCE Parliamentary Assembly, noted that AI enables sophisticated propaganda and automated recruitment by terrorists.


2. Emanuele Loperfido, Vice-Chair of the Ad-Hoc Committee on Counterterrorism of the OSCE Parliamentary Assembly, warned about the risks of deepfakes in spreading disinformation and eroding public trust.


3. Akvile Giniotiene highlighted how terrorists are exploiting cybercrime-as-a-service on the dark web.


Opportunities:


1. Abdelouahab Yagoubi, Member of the People’s National Assembly of Algeria, pointed out that AI and ICTs can enhance threat detection and analysis for authorities.


2. Jennifer Bramlette, from the UN Counterterrorism Committee Executive Directorate, emphasised the need for digital literacy training to build societal resilience against online threats.


International Cooperation and Public-Private Partnerships


The discussion highlighted the importance of international cooperation and public-private partnerships in addressing the challenges of terrorist use of ICTs. Emanuele Loperfido stressed the significance of public-private partnerships, while Akvile Giniotiene emphasised the need for cross-border cooperation mechanisms.


Abdelouahab Yagoubi highlighted the role of parliamentary assemblies in promoting knowledge sharing, particularly mentioning the Parliamentary Assembly of the Mediterranean’s (PAM) work on AI and emerging technologies. David Alamos noted that UN entities provide capacity-building support to member states.


The speakers agreed that no single entity or nation could effectively combat the terrorist use of new technologies in isolation, making international collaboration crucial. They also discussed the importance of a coordination mechanism among parliamentary assemblies to enhance knowledge sharing and cooperation.


Balancing Security and Human Rights


A recurring theme throughout the discussion was the need to balance security measures with the protection of human rights and fundamental freedoms when regulating ICTs. Emanuele Loperfido particularly emphasised this point, highlighting the ethical considerations that must be taken into account when implementing new technologies in counterterrorism efforts. He also presented the OSCE Parliamentary Assembly’s resolution on AI and counterterrorism, which addresses these concerns.


This balance is especially crucial given the potential for misuse of counterterrorism measures to infringe on civil liberties. An audience question regarding the broad use of the term “terrorism” and its potential misuse further underscored this concern. The speakers agreed that any legislative or policy frameworks developed must have robust safeguards to protect individual rights while still allowing for effective counterterrorism measures.


Conclusion and Future Directions


David Alamos concluded the discussion by reiterating the importance of continued dialogue and collaboration among parliamentarians, international organisations, and other stakeholders to effectively address the complex challenges posed by terrorist use of ICTs and AI. Key areas for future focus include:


1. Updating and improving the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373 to reflect evolving threats and good practices.


2. Developing more effective legislative frameworks to counter the abuse and misuse of AI and emerging technologies by malicious actors.


3. Enhancing parliamentarians’ understanding of new technologies to enable more informed decision-making and oversight.


4. Establishing clear legal mandates and policy frameworks for law enforcement agencies to use new technologies in investigating and prosecuting terrorist offences.


5. Investing in digital literacy and public awareness campaigns to build societal resilience against online radicalisation and disinformation.


The discussion underscored the ongoing need to adapt international counterterrorism efforts to the rapidly evolving technological landscape while maintaining a steadfast commitment to human rights and the rule of law. It also highlighted the critical role of parliamentarians in shaping these efforts and the importance of international cooperation in addressing global challenges.


Session Transcript

Murad Tangiev: Your Excellency, thank you so much for your insightful speech and for your support. And of course, for the support of the Shura Council of the State of Qatar, for all the work that our program office is doing. Now, it gives me a pleasure to invite here at this stage, the chief of the UNOCT program office on Parliamentary Engagement, Mr. David Alamos. David, please.


David Alamos: Thank you very much, Murat. Good morning, excellencies, honorable participants, esteemed colleagues, ladies and gentlemen. It is a great honor to welcome you all to this important event organized on the margins of the Internet Governance Forum here in the beautiful city of Riyadh. I would like to thank the Kingdom of Saudi Arabia and the IGF for hosting and organizing this critical global platform, as well as each of you for your commitment to addressing one of the most pressing challenges of our time: terrorism and its evolving complexities in the digital age. At the outset, I wish to express my heartfelt gratitude to the Shura Council of the State of Qatar for its unwavering and continuous support to the UNOCT Program Office on Parliamentary Engagement in Preventing and Countering Terrorism. I also wish to extend my appreciation to all participants joining us today, including representatives from parliamentary assemblies, members of national parliaments, governments of member states, international organizations, media, academia, and civil society, both in person and online. I would also like to convey our gratitude to our expert panel, comprising distinguished representatives of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, with whom we have co-organized this event, the Parliamentary Assembly of the Mediterranean, the United Nations Counterterrorism Committee Executive Directorate, the UNOCT Global Program on Cybersecurity and New Technologies, and other international experts on counterterrorism, ICT, and artificial intelligence. Excellencies, terrorism remains a persistent global threat, transcending borders, nationalities, and beliefs. The international community, through robust frameworks such as United Nations Security Council Resolution 1373, has provided a roadmap for coordinated action. 
National parliaments are pivotal in this endeavor, serving as the bridge between international obligations and their implementation through effective legislation, oversight, and policies. As we navigate an era of rapid technological advancement, the dual role of information and communication technologies, particularly artificial intelligence, cannot be overstated. These technologies offer unprecedented opportunities to enhance data analysis, improve threat detection, and bolster predictive capabilities in counterterrorism. Yet they also present profound challenges, as terrorist groups increasingly exploit digital tools for recruitment, fundraising, and the dissemination of propaganda and disinformation. The recent UN Summit of the Future underscores the importance of addressing these opportunities and challenges. The Pact for the Future, adopted by the General Assembly in September 2024, highlights the necessity of a multi-stakeholder approach. It calls for enhanced engagement with national parliaments while respecting their legislative mandates and promoting collaboration across all sectors of society. In this context, national parliaments are not just participants, but leaders. By proactively regulating ICT to support counterterrorism strategies, they can ensure that such measures align with UN Security Council Resolution 1373, advance the Sustainable Development Goals, and adhere to principles of inclusivity, human rights, and gender sensitivity. Today's event is a great opportunity to foster dialogue and raise awareness about these critical issues. To conclude, let me reaffirm the UNOCT's unwavering commitment to supporting national parliaments in their efforts to combat terrorism and violent extremism in all its forms. Thank you very much, and I wish us all a productive and insightful session. Thank you very much.


Murad Tangiev: Thank you very much, dear David. Finally, I would like to invite, connecting with us online, Honorable Mr. Kamil Aydin. He is the Chair of the Ad Hoc Committee on Counterterrorism of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, and he will make his welcoming remarks with us today. Honorable Kamil, the floor is yours. Thank you.


Kamil Aydin: Thank you, Murat. Can you hear me? Yes, we can hear you and see you very well. Thank you. Thank you. Dear Excellencies, colleagues, and esteemed participants. Above all, I would like to express that I wholeheartedly wanted to be there with you, but I couldn't make it, as we have been intensively discussing the annual budget for the last 10 days in the Turkish Grand National Assembly. And I would like to say welcome to everybody participating in this very precious organization. Dear Excellencies, distinguished colleagues and guests, on behalf of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe and its Ad Hoc Committee on Countering Terrorism, it is my great pleasure to welcome you all to this launch and award session on Parliamentary Approaches to the Use of Information and Communication Technologies in Counterterrorism in accordance with UN Security Council Resolution 1373, on the margins of this year's Internet Governance Forum in Riyadh. The OSCE is the world's largest regional security organization, devoted to promoting peace and stability across its 57 participating states through cooperative dialogue. In today's increasingly challenging geopolitical landscape, one of the priorities of the OSCE and its Parliamentary Assembly has been developing responses to terrorism and violent extremism that are both effective and well-rooted in human rights. Today's event, co-organized with our partners at the UNOCT, reflects this shared dedication to global efforts against terrorism, while emphasizing the critical role of AI and new technologies in shaping modern security strategies. We must stand together against those seeking to undermine our democratic values and threaten our societies through malicious acts. Information and communication technologies have transformed governance and society, but are increasingly exploited by terrorist groups for recruitment, propaganda and coordination. 
Recent data underscores the urgency of this challenge. The Global Internet Forum to Counter Terrorism reported a 32% increase in AI-enabled extremist content between 2020 and 2023, highlighting the growing use of technology in radicalization and propaganda. 90% of all terrorist propaganda is currently disseminated online, and AI-generated content can significantly enhance the quality and quantity of this material. Terrorist organizations such as Daesh, Al-Qaeda, PKK and far-right violent extremist groups are increasingly leveraging AI in their operations, exploiting AI's capabilities to produce sophisticated propaganda, automate recruitment processes and manipulate social media algorithms to amplify their narratives. These and other threats associated with the potential misuse of AI and new technologies for terrorist purposes, as well as parliamentary approaches to using AI in counter-terrorism, will be the focus of today's discussion. This complex, multifaceted nexus between AI and countering terrorism has been high on the agenda of the OSCE Parliamentary Assembly for some time now, not least since the adoption of our resolution on AI and the fight against terrorism on the margins of our last annual session in Romania. This resolution recognizes the significant threat to international security posed by the potential misuse of AI by terrorists and violent extremists and, at the same time, acknowledges the opportunities that lie in the ethical application of AI in countering terrorism. The document represents the culmination of our efforts to be at the forefront in tackling yet another emerging security threat that needs to be addressed cooperatively. Accordingly, two weeks ago, in cooperation with the UNOCT, we organized a highly relevant parliamentary policy dialogue on countering the misuse of AI for terrorist purposes in Rome, Italy, engaging 13 parliamentary assemblies from around the world and many renowned experts on this emerging issue. 
After all, parliamentarians play a critical role in preventing and countering terrorism, violent extremism and radicalization that leads to terrorism. We act as enablers, shaping national legislation and establishing the mandate of counter-terrorism bodies. We serve as controllers, ensuring that all counter-terrorism measures respect fundamental freedoms. And we bridge diverging views at all levels, facilitating constructive exchanges and ensuring citizens' participation in state affairs. Against this backdrop, I would like to commend our United Nations partners at the Office of Counter-Terrorism. The UNOCT has been at the very forefront in engaging parliamentarians in counter-terrorism affairs, and we are deeply grateful to them for their invaluable support and expertise. It was an honor for our assembly to preside for two constructive years over the work of the new coordination mechanism of parliamentary assemblies on countering terrorism, and we are confident that our efforts have strengthened parliamentary engagement in the field. Misuse of AI for terrorist purposes is an urgent and critical issue, and I am deeply grateful for the expertise and insights gathered in Riyadh today. While I regret not being able to join you in person, I am confident that my colleague and Vice-Chair Emanuele Loperfido will represent the OSCEPA's comprehensive work on this matter effectively. On that note, I wish you all a productive and engaging panel discussion. Thank you, and best wishes from the Grand National Assembly in Ankara. Thank you.


Murad Tangiev: Honorable Aydin, thank you very much for your kind words and for all your support throughout these two years, during which we have had the privilege to work with the OSCEPA. And now allow me to hand the moderation role over to David Alamos to continue this dialogue. Thank you, David, over to you.


David Alamos: Okay. Good afternoon already to everybody, excellencies and honorable participants. I will have the pleasure to moderate this panel of distinguished experts to address the topic of today: how parliaments may approach the use of information and communication technologies in counter-terrorism in accordance with UN Security Council Resolution 1373. And I will just very briefly say that we will cover three key questions during the discussion. What are the challenges and opportunities posed by information and communication technologies in preventing and countering terrorism? What is the role of parliamentarians in addressing these challenges? And of course, how can UN Security Council Resolution 1373 help member states in ensuring that national counterterrorism measures are holistic, inclusive, human rights compliant, gender sensitive, and effective? So without any further delay, in the interest of time, I will give the floor to the first of our speakers, whom I have the pleasure to introduce: Dr. Ahmed Buckley, the author of the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373. Allow me to briefly say that he joined Egypt's diplomatic corps two decades ago. His career has been dedicated to counter-terrorism, including serving as deputy director of the counter-terrorism unit at the Ministry of Foreign Affairs. He was also a member of the Analytical Support and Sanctions Monitoring Team, supporting the UN Security Council Sanctions Committee on ISIL, Al-Qaeda, and the Taliban. His background is impressive: he also holds a Master of Arts in comparative politics of the Middle East and another in terrorism and international security, and he is now undertaking a PhD. So Dr. Buckley, please, the floor is yours.


Ahmed Buckley: Thank you very much, David. And I'd like to extend my deep gratitude to the Shura Council of Qatar and UNOCT for having me here. And I don't think we can say this enough, but thank you also to the government of the Kingdom of Saudi Arabia for graciously hosting this event in this fabulous venue. When we talk about international cooperation on counter-terrorism, I always like to begin by highlighting two points. The first is that despite all of our definitional differences on what is terrorism or who is a terrorist, and all our haranguing over these definitions, we were still, as an international community, able to make large strides on counter-terrorism cooperation. And the bedrock of that cooperation was UN Security Council Resolution 1373 and its descendants. The second point, which is particularly relevant because we're talking in a parliamentary track, is that none of this international cooperation could have taken place, and none of it is sustainable in the future, without the active participation of parliamentarians. Parliamentarians are of course the legislators; they are the ones responsible for transposing all of these international commitments into national laws. But they are also the dispensers of resources. They are the ones who make the correct decisions on appropriations and budgetary allocations to face the threats, based on credible threat assessments from the security agencies. They are also in the best position, as representatives of the electorate, to make sure that before any of these laws or measures are enacted, they are the culmination of a wide-ranging consultative process that takes into account the views of the law enforcement agencies, the private sector, as well as civil society. Finally, of course, they are the bulwark to ensure that all of these measures and laws are commensurate with the Member States' constitutional and international commitments on human rights, as you mentioned, David. 
On the threats, and for the sake of time, I won't delve deep into that. I think they were covered by the Honourable Aydin, and maybe Akvile will also talk about the threats, and we've heard in other workshops what's emanating from AI. You mentioned propaganda. There's also the fear of terrorists using AI to raise funds in the form of scams. If criminal organizations are starting to use AI to raise funds, you can be sure that terrorists will quickly follow on their heels. How has the Security Council addressed the issue of the misuse of ICTs? Well, it goes back to the mother of all resolutions, 1373, which obliged member states to deny safe haven to those who finance, plan, support, or commit terrorist acts, an obligation that also extends to virtual territory: online platforms, end-to-end encryption services, and any other virtual space which has also been used to plan, coordinate, recruit, and raise funds for terrorist acts. There is Resolution 1624, a few years later, which obliged member states to criminalize the incitement and glorification of terrorism, and which explicitly called on member states to take all legal and regulatory measures to prevent the misuse of ICTs in creating propaganda for terrorist organizations. You have Resolution 2322 on global counterterrorism cooperation as well, which laid a roadmap for member states on how to establish robust mechanisms and channels within each member state to gather and disseminate information across borders and to facilitate the drafting, sending and receiving of mutual legal assistance requests regarding ICTs in terrorism. And you have Resolution 2341, which talked about critical infrastructure. 
And while the Security Council did not explicitly define in that resolution what critical infrastructure was for each member state, it was still very cognizant that some member states would consider the internet to be critical infrastructure. And the council called on UN entities to help member states, whether through capacity building or technical assistance, to take the appropriate measures to protect the internet from being misused by terrorists. I give these examples, again, to make two points: that the Security Council was, from the very beginning, aware of the misuse of ICTs and gave it its due attention, but also that most of these resolutions have been drafted in technology-neutral language. And in fact, member states are also encouraged, when they develop their legislation, to do so in this technology-neutral language, which focuses on criminalizing the crime, not necessarily the tool by which that crime was committed. In fact, I think it is safe to say that when we're talking about the threats from artificial intelligence, most countries do not require a substantial overhaul of their legal frameworks. What they need to do is concertedly address raising the capacity of law enforcement agencies to detect, prevent and prosecute these crimes when they are committed with artificial intelligence tools. You mentioned the handbook. Thank you very much for bringing it up. It was a privilege working on it. And the handbook, I think, is a very useful tool. Hashtag shameless self-promotion here; I shouldn't be praising my own product. But I think it is a useful tool, because on one hand it provides a very good overview for parliamentarians of this oeuvre of Security Council resolutions regarding counterterrorism, and it also gives them a sort of checklist of what they need to check to gauge their level of implementation. 
Of course, it's not the definitive guide for gap analysis for member states. That is still the preserve of CTED's technical guide on implementing Resolution 1373. But the handbook is a useful reference for some of these checklists and also for additional resources when member states are drafting legislation. It also covers some parallel legislative concerns that complement counterterrorism legislation. Many member states are now undergoing legislation on personal data protection and on cybersecurity, and you will find in the handbook some concerns, some aspects to take into consideration when you are legislating and taking measures against those types of threats. Now, I don't think that the handbook is complete. I think it should be a living document. It should take into consideration some of the good practices that parliaments have already adopted in this regard, and that's just a heavy hint, David, to say that we still need to work together on improving the handbook as much as we can. And with that, I'll hand it back to you. Thank you very much.


David Alamos: Thank you very much, Dr. Ahmed, for your insightful and comprehensive presentation. I have to say it was also a really big honor for us to work with you on the elaboration of the handbook. This is the handbook, with Excellency Dr. Ahmed Al-Muhannadi. It is also available online, in case anyone wants to check our web page and get it from there. So it is now my pleasure to introduce the next speaker, the Vice-Chair of the Ad-Hoc Committee on Counterterrorism of the Parliamentary Assembly of the Organization for Security and Cooperation in Europe, Honorable Mr. Emanuele Loperfido. Let me just say that Honorable Loperfido is a member of the Italian delegation to the OSCEPA and the principal sponsor of the 2024 OSCEPA Resolution on Artificial Intelligence and the Fight Against Terrorism. He currently serves as Secretary of the Foreign Affairs Committee in the Italian Chamber of Deputies, and he is also an active member of the Defense Committee. Today, Honorable Loperfido will speak about the OSCEPA Resolution, a very important resolution indeed for all of us. Please, Honorable Loperfido, you have the floor.


Emanuele Loperfido: Good morning. Thank you, David. Thank you for the kind introduction, and thank you to all of you who are here to listen to us. Thank you to the OSCEPA staff members who have been working all this year together with us to support parliamentarians in trying to respond to the new challenges that we are facing. The work that the United Nations Office of Counter-Terrorism and the OSCEPA are doing together is very important, because the most important thing is to build a real partnership to face these challenges. So I'm delighted to be here and to contribute to this distinguished panel in my capacity as Vice-Chair of the OSCE Parliamentary Assembly Ad-Hoc Committee on Counterterrorism. Speaking directly about artificial intelligence, we know that it has brought significant advantages across various sectors and holds promising potential for use by authorities in the fight against terrorism. But at the same time, the same technology, when exploited by malicious actors, poses a significant risk to international security. As AI capabilities evolve, so does the potential for them to be used in ways that threaten peace and stability. For example, widely available AI-driven tools could enable individuals or groups to access technologies such as drones that could be misused for surveillance, targeted attacks, or other malicious purposes. Another area of concern is the potential for extremists to harness AI algorithms to identify and target vulnerable individuals, tailoring messaging to exploit fears and biases. These prospects underscore the importance of vigilance, as AI could inadvertently aid in amplifying extremist narratives and online radicalization. A particularly troubling dimension is the rise of deepfake technologies. 
We must consider how the ability to create convincing but fabricated audio and video content could be leveraged by terrorist groups to spread disinformation, incite violence, or erode public trust, which would have far-reaching impacts on social cohesion and national security if left unaddressed. This is why, over the past year, the OSCEPA and the Ad Hoc Committee have made significant strides in response to these ever-evolving challenges. As the world's largest inter-parliamentary forum dedicated to peace and security, our assembly has worked hard to promote more knowledge around this topic in order to inform national and international policymaking. In February, we had a high-level panel discussion in Vienna, bringing together experts from the tech industry and the public sector. These pressing issues were further examined with renowned academics during the official visit to Turkey in early May, organized by our dear President Kamil Aydin, whom I would like to thank for his continued effort to support the assembly in becoming more and more expert in counterterrorism. And, last but not least, we had a conference in Portugal. During the annual session in Bucharest, our committee, together with the OSCEPA, adopted the Bucharest Resolution on Artificial Intelligence and the Fight Against Terrorism, which codified some key findings and represents one of the very first policy attempts to address the dual security impact of the rapid advancement of artificial intelligence. The resolution focuses on mitigating the risks of AI misuse and on strengthening the national legal frameworks that govern AI development and deployment, ensuring robust ethical standards and human oversight. While AI can be a powerful tool in detecting threats and preventing radicalization, its use must always be balanced with respect for privacy and freedom of expression. 
This dual approach not only strengthens public trust, but also ensures that AI innovation remains aligned with our shared values of democracy and security. Italy, for instance, has recently underlined the importance of ethics in AI governance by appointing a theologian and AI expert, a member of the United Nations advisory body on AI, to lead national AI coordination. This choice reflects a broader commitment to ensuring that AI technologies are developed and applied with respect for human dignity and rights. The resolution that we adopted went beyond that. Indeed, it emphasized how these tools can also be used by security agencies to quickly identify potential threats, prevent attacks and detect early radicalization patterns. Additionally, our document stressed the importance of public-private partnerships and of strengthening international cooperation. Lastly, the resolution highlighted the critical role of education and the importance of digital literacy in creating and improving public awareness campaigns that help societies recognize and resist disinformation and manipulation. Ultimately, through this resolution, we aim to foster an environment where AI is secure, ethical and aligned with democratic principles while remaining economically viable. Hopefully, other national parliaments and other international parliamentary assemblies will follow our example, as we did at our last event in Rome, where, together with the UNOCT and the inter-parliamentary community, we directed our efforts toward reinforcing the mechanisms of cooperation among parliamentarians, in order to create legislation that is internationally coordinated, respects the rights I have just mentioned, and helps secure a safer world for the people we represent. 
So our efforts will continue together with the members of the OSCE PA, together with UNOCT and together with all the parliamentarians who will support our efforts against this challenge.


Murad Tangiev:


David Alamos: Thank you very much, Honourable Loperfido, for a highly topical and informative presentation. I would like to also express my gratitude to you, to Honourable Kamil Aydin, online, and to the Parliamentary Assembly of the OSCE for the continuous support and collaboration with our Parliamentary Engagement Office, and especially for these last two excellent years of Presidency of the Coordination Mechanism of Parliamentary Assemblies. Now I would like to give the floor to our next speaker, a representative of the Parliamentary Assembly of the Mediterranean, PAM, and member of the People’s National Assembly of Algeria, Honourable Mr. Abdelouahab Yagoubi. He was elected in Paris, indeed, and sits on the Algerian National Assembly’s Foreign Affairs, Cooperation and Immigration Committee. Honourable Yagoubi has been a member of the Algerian delegation to the Parliamentary Assembly of the Mediterranean since 2021 and is an expert on AI and ICT. He has large experience in the private sector and in international companies, and at present he holds the function of PAM rapporteur on artificial intelligence. Please, Honourable Mr. Yagoubi, you have the floor.


Abdelouahab Yagoubi: Thank you very much, dear David. Good afternoon, everybody. Excellencies, distinguished colleagues, ladies and gentlemen, on behalf of the Parliamentary Assembly of the Mediterranean, I wish to thank UNOCT, the OSCE PA and the Shura Council of Qatar for organising this side event. I am especially pleased to gather here today following the election of PAM to the Presidency of the Coordination Mechanism of the Parliamentary Assemblies on Counter-terrorism, which took place two weeks ago in Rome. In this regard, I wish to strongly reiterate that PAM will work with all international parliamentary assemblies to fulfil its mandate and advance towards a future free of terrorism for the generations to come. The rapid development and expansion of AI and emerging technologies have made it imperative for parliaments to pay attention and to develop more effective legislative frameworks and strategies to counter their abuse and misuse. As was predictable, the accessibility, low cost and efficiency provided by AI and emerging technologies have allowed malicious actors, including but not limited to terrorist and criminal organisations, to exploit them for their purposes. In response to these threats, and in compliance with the provisions of UN Security Council Resolution 1373, a concerted and united international approach is critical to address both the challenges and the opportunities of AI and emerging technologies in preventing and countering terrorism. This coordinated approach must take into consideration the centrality of national and regional parliaments in advancing relevant and dedicated legislation. Moreover, it is always worth highlighting that any framework adopted by States to combat the misuse of AI and emerging technologies must be compliant with international human rights law and respect the fundamental freedoms of individuals, which are equally applicable online as offline. 
Fully aware of this complex landscape, PAM, with the support of its Center of Global Studies (CGS) and in partnership with the UN Security Council Counter-Terrorism Committee Executive Directorate (CTED), recently published a report on the malicious use of AI and emerging technologies by terrorist and criminal groups and its impact on security, legislation, and governance. Among other elements, the report stressed that AI and emerging technologies also play a pivotal role in the fight against terrorism and organized crime. This includes the automatic analysis of vast amounts of data for patterns and trends associated with the malicious use of technological tools, which enables authorities to rapidly identify the most effective approaches and strategies. As a result of its report, PAM established a permanent global parliamentary observatory on AI and ICT, run by the PAM CGS, which has begun the publication of a daily and weekly digest to disseminate news and analysis about trends related to technological advancement in a number of fields, including security and defense. I invite you to reach out to the PAM CGS in order to strengthen our collaboration, multiplying the effectiveness of our work. Thank you for your attention.


David Alamos: Thank you very much, Honorable Yagoubi, for your precise intervention. I would also like to express my gratitude to you and also to His Excellency Pedro Roque, Vice President of the Parliamentary Assembly of the Mediterranean, who is accompanying us here, for your constant support and also for being now, and we are very grateful for that, the newly elected President and Chair of the Coordination Mechanism for Parliamentary Assemblies. Let me now turn to my dear colleague at the UN and friend, Ms. Jennifer Bramlette. Just to let you know, Ms. Bramlette serves as the Coordinator for Information and Communication Technology of the United Nations Counter-Terrorism Committee Executive Directorate. In this role, she focuses on issues relating to preventing and countering the use of ICT and related new and emerging technologies for terrorist purposes. Ms. Bramlette has also served as the Strategic Advisor to CTED’s Executive Director, as CTED Legal Officer, and as the Program Manager and Senior Advisor of UNODC’s Global Program against Money Laundering, Proceeds of Crime, and the Financing of Terrorism. And she also has really large experience, even before the UN, in the US Department of Defense. So please, Ms. Bramlette, you have the floor.


Jennifer Bramlette: Thank you, David. And good morning or good afternoon to everybody. I just want to start off by saying how delighted I was when UNOCT said that they were going to put this parliamentarian handbook together on Resolution 1373. The main reason is that Resolution 1373 is a groundbreaking, forward-thinking, essential document for all of the work that the UN Security Council and other partner agencies are doing on counterterrorism. It set the groundwork for everything that has come since. There have been a number of Security Council resolutions on counterterrorism, 16 of which deal with the issue of information and communication technologies. Resolution 1373 set the groundwork by initiating a requirement for states to share operational communications information. Now, it seems like a pretty small mandate. But that set up operational interactivity between law enforcement agencies, border control agencies, between aspects of government that had never traditionally worked together; usually, operational information was held on the security side of the house. And all of a sudden, you had Ministries of Foreign Affairs, Ministries of Interior, Ministries of Education starting to work together. And so Resolution 1373 was essential as a starting point for all of the work that we’re doing today and what we’re talking about today. Now, my office is a special political mission that supports the United Nations Security Council’s Counter-Terrorism Committee. For them, we conduct assessments of member states’ capacity to counter terrorism in accordance with Security Council resolutions, and particularly Resolution 1373. We also have a mandate to identify gaps in implementation and to facilitate technical assistance so that member states can better implement these resolutions. 
We also have a mandate to look at emerging threats and evolving trends, and to keep an eye on what’s happening in the world so that, again, we can better assist member states to implement Security Council resolutions. I was so delighted with this handbook because Resolution 1373 is our bread and butter. This is where we first started. And when we first started working with this resolution, we broke things down by looking at legal frameworks, because this is where the resolution sets the groundwork for legal frameworks and how states can actually criminalize terrorist acts, with the end goal of bringing terrorists to justice. Resolution 1373 lays out all of these various components, these activities that states need to carry out in order to be able to bring terrorists to justice. The resolution doesn’t tell states how to do it. It just says that you must prevent terrorism financing, you must prevent the arming of terrorists, you must prevent the safe havening of terrorist groups. It doesn’t say how. This is where the regular dialogue with member states and the activities of capacity building and technical assistance come into play, to help member states accomplish these goals. My office is now looking not only at legal frameworks, but also at institutions, how they are mandated and how they coordinate, cooperate, and share information, including operational information, again going back to Resolution 1373. And we’re also looking at whether the practical measures they’re taking are effective or not, looking at good practice and, again, at shortfalls. When it comes to ICT, I think that Buckley made an excellent point about how we think about terrorist use of the internet, social media platforms, alternative online spaces, and new technologies like AI, like virtual and augmented reality, even looking forward into quantum computing. 
And we have to think about it differently, because when we think about terrorism we often think about bombs and buildings, we think about people being injured, we think about real-life harms, and yet there’s this whole other world, whether you call it the cybersphere, the digital world, or online spaces, where terrorism happens. And we were actually asked, why are counter-terrorism bodies here at the IGF? Well, we made a point earlier in an intervention on misinformation that the way misinformation is being written and propagated online is very similar to how terrorists are using online spaces to move their messaging and their propaganda, and to coordinate and operate. How misinformation and harmful content is housed online is very similar to how terrorist material is housed online. And so we have to have this open mindset that the cybersphere, these online spaces, are operational spaces for terrorist organizations, and that everything that’s being discussed here at the IGF is relevant to countering terrorism. Everything being talked about with regard to misinformation, and the way societies need to be inoculated against misinformation and disinformation and also terrorist propaganda, is all similar. So in our assessment work, the challenges we’ve seen are many, and I won’t go into all of them, but I would say that where we’ve seen great success is in states investing in digital and AI literacy training to build resilience in their populations. And this is from children all the way through to elders, to teach them how the internet works, how social media works, and how they can interpret the information they see so they can determine for themselves if it’s true or not and if it’s something they should believe. So this investment in AI and digital literacy training is very important. 
Also important are efforts to work with the tech industry on safety by design, on good programming practices, and on the technical aspects of ensuring that material going onto the internet, and the spaces on the internet themselves, are safe, monitored, and workable for all cultures and all societies. I would reiterate the points made on human rights: human rights cannot be sacrificed in any way. I know many states claim that it’s difficult to balance security and human rights. But I would say that human rights are as applicable online as they are offline, and they cannot be compromised. And so there must be a way to have justice in all aspects of life for users and for states. And that’s a conversation that must continue, with the outcome preserving privacy, data protection, freedom of expression, and all of the other fundamental freedoms that we have come to enjoy and need to maintain. Thank you very much. I’ll stop there.


David Alamos: Thank you very much, dear Jennifer, for your insights, observations, and recommendations, as always highly relevant and valuable, and I really appreciate the collaboration with CTED. This common approach to member states is very important for us. And I would like now to give the floor to our final speaker, our dear colleague from UNOCT, Ms. Akvile Giniotiene. You have full time, because we have been given extra time. It’s like a football match, so we have some extra minutes. So please, you can have your five to seven minutes completely. But let me first say that Ms. Akvile Giniotiene is the head of the Cyber and New Technologies Unit at the United Nations Office of Counter-Terrorism. Prior to joining the United Nations, she served for 25 years in different capacities for the government of the Republic of Lithuania, including as the Deputy Director of the State Security Department and Deputy Chair of the National Security Authority, and in the private sector, where she has been an active participant in international cybersecurity dialogue and capacity building initiatives and has assisted governments in the development of national cybersecurity strategies and critical information infrastructure protection frameworks. Dear Akvile, thank you very much. You have the floor, please. Thank you, David.


Akvile Giniotiene: And good afternoon to all. It’s really a pleasure to be here and to be engaged in the discussion of parliamentary approaches to the terrorist use of ICT. I come not so much from a legal background as from a more operational one. Our program is a capacity building tool to support member states in developing the capacities necessary to respond to both the challenges and the opportunities that new technologies present in countering terrorism. In our work, we are helping member states to understand the threat stemming from terrorist use of new technologies and the opportunities these technologies offer, and also to build the necessary capacities: to protect critical infrastructure against terrorist cyber attacks, to develop the law enforcement capacities needed to use new technologies for the investigation of terrorist offenses, and to develop the policy frameworks necessary to ensure a strategic, whole-of-government approach to new technologies in countering terrorism. And of course, from my capacity building work, I can say that such capacities cannot be built in a vacuum. There should be legal mandates in place for law enforcement to do things online, to use information collected using new technologies for investigation and prosecution. There should be policies in place as well. I also had the pleasure of participating two weeks ago in a parliamentary assembly dialogue in Rome, and I was really, really impressed by the amount of thought given by parliamentarians to how to go about it. My takeaway from all the discussions there was that, to regulate, legislate and deliver proper oversight of new technologies in the counter-terrorism domain, you first need to understand what the threat is, how malicious actors can abuse new technologies, and what the opportunities are for law enforcement and wider communities to use new technologies in this regard. 
And I’m happy that our program supports member states in this regard, in a small way. Three years ago, we published a report on the use of artificial intelligence by terrorist organizations, outlining different ways terrorists could use artificial intelligence in the future: cyber-enabled attacks; physical attacks using self-driving cars or drones equipped with facial recognition technology to identify particular targets in crowds; or enhancing their operational capability to counterfeit documents and spread misinformation. It was a little bit futuristic at that time, because generative AI was not there, but two years passed, generative AI hit the floor, and we see some of those scenarios already becoming the reality that parliamentarians are trying to address today. One of our most recent reports, also available online, concerns terrorist use of cybercrime-as-a-service on the dark web: how cybercrime-as-a-service can be procured at a very cheap price and could cause massive effects against critical infrastructure or help terrorists raise money. And in terms of capacity building, we are engaging with member states to help them develop an understanding of the threats and risks at the national level in a structured manner, inviting all relevant parties to prioritize the risks that could become national risks, be it the use of deepfakes or of artificial intelligence, and to address them through policy responses: how to prevent those scenarios from happening, how to deny them, how to protect and recover once they happen, and how to prosecute. In our capacity building work, I would say it is always very good to have parliamentarians in these discussions. 
It’s not always happening, but in those cases where we had representatives from, let’s say, the committees on national security and defense or the committees on new technologies, it was a very good discussion, bringing all relevant parties together. When it comes to the opportunities of new technologies, the program is mostly focused on building law enforcement capacities. We help law enforcement to embrace open source intelligence: how to conduct investigations online, how to conduct dark web investigations, how to use facial recognition, how to use digital forensic techniques, how to run cryptocurrency investigations, how to seize cryptocurrencies, which is also a very difficult thing to do, and how drones can support counter-terrorism efforts. And in all these regards, to wrap it up, it is very important that the legal aspects are addressed. First of all, the use of new technology by counter-terrorism agencies shall be based on clear provisions of law, to ensure the principles of the rule of law and adherence to international law. Because if law enforcement agencies do not have a mandate to use those new technologies, it will not lead to the prosecution and adjudication of terrorist offenses, which is the end goal of any counter-terrorism agency: to reduce the number of threat actors that we need to deal with. Second, and here I am repeating other experts on the panel, any measures impacting or restricting human rights must be established by law, necessary and proportionate. Also, I think it is very important that the law establishes legal powers for review and redress that are independent from law enforcement agencies, so that if there is a concern that law enforcement agencies are not using these powers and new technologies properly, there are mechanisms to raise that and to resolve it. 
Also, we are seeing the increased use of advanced data collection, which is a very efficient way for law enforcement to address counter-terrorism, including the use of CCTV and big data, but these also need to be governed to prevent excessive information collection. And of course, as Jennifer mentioned, the prohibition of terrorist acts, because that is what gives law enforcement the powers to investigate. So it is very important that these new and evolving crimes are addressed in criminal laws, giving law enforcement a mandate to act on them. And legal arrangements to support cross-border cooperation are also very, very important, because terrorists have no borders, technologies have no borders, and data is everywhere. So parliamentarians have a very important role to play, and they are increasingly making efforts in this regard, which is appreciated by the law enforcement and counter-terrorism community. So thank you again for inviting me to this panel, and thank you very much.


David Alamos: Thank you. Thank you very much, dear Akvile, for your presentation. And let me highlight the important work that you and your unit are doing in serving and supporting member states on cybersecurity, artificial intelligence, and ICT in the prevention and countering of terrorism. We have just two more minutes, because we will need to close, as there is a new session at 1.15. But if there is any comment or question you would like to raise, please, in 30 seconds, I would be very grateful.


Audience: Thank you. Badil Badi from the Shura Council, Qatar. I thank everybody here. Unfortunately, law enforcement has used the word terrorism in many contexts for a long time. If you want to put someone in trouble, just use the word, and that is enough to put them into much trouble. And if you also support that with legal action, we are afraid to go deeper. So that is one point. And hopefully we can come to understand and define what terrorist or terrorism really means; the word is just widely used, and everybody misuses it. But terrorism, as Dr. Ahmed said in the beginning, means a lot; it means not only one thing but many other things, and we have seen it applied to hackers or whatever, and it is all called terrorist. So thank you.


David Alamos: Thank you very much, Excellency. If there are any other further questions, I would suggest that, after the event, you please reach out to our distinguished panelists. I would like to conclude by saying that we still have a lot of challenges. We need to keep working on strengthening legal frameworks, and we have UN Security Council Resolution 1373 as a guiding document for that, which has to be taken into consideration as a mandatory resolution from the Security Council. And let me just highlight the important role of parliamentarians, not only in developing legislation but also, as has been said, in allocating budgets and in conducting oversight functions. And especially, as has been reiterated on many occasions, and I would like to conclude with this, the importance of having human rights at the forefront of all our dialogues and decisions in these key matters. Let me conclude by thanking all of the distinguished panelists and experts who have accompanied us during today’s session, and all of you for having been with us and participating in this session. Thank you very much.


A

Ahmed Buckley

Speech speed

128 words per minute

Speech length

1152 words

Speech time

537 seconds

UN Security Council Resolution 1373 as foundational framework for international counterterrorism cooperation

Explanation

Resolution 1373 is the bedrock of international cooperation on counter-terrorism. It provides a framework for member states to work together despite definitional differences on terrorism.


Evidence

The resolution obliged member states to deny safe haven to terrorists and to bring the perpetrators of terrorist acts to justice.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Jennifer Bramlette


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Responsible for transposing international commitments into national laws

Explanation

Parliamentarians are responsible for transposing international commitments into national laws. They are the ones who make decisions on appropriations and budgetary allocations to face threats, based on credible threat assessments from security agencies.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Emanuele Loperfido


David Alamos


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


J

Jennifer Bramlette

Speech speed

125 words per minute

Speech length

1043 words

Speech time

497 seconds

Resolution 1373 requires operational information sharing between agencies

Explanation

Resolution 1373 initiated a requirement for states to share operational communications information. This set operational interactivity between law enforcement agencies, border control agencies, and other aspects of government that had not traditionally worked together.


Evidence

Ministries of Foreign Affairs, Interior, and Education started working together, sharing operational information that was traditionally held on the security side.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Resolution 1373 provides guidance on legal frameworks to criminalize terrorist acts

Explanation

Resolution 1373 lays out various components that states need to implement in order to bring terrorists to justice. It sets the groundwork for looking at legal frameworks and how states can criminalize terrorist acts.


Evidence

The resolution mandates states to prevent terrorism financing, prevent terrorism arming, and prevent the safe havening of terrorist groups.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


David Alamos


Agreed on

Importance of UN Security Council Resolution 1373


Need for digital literacy training to build societal resilience

Explanation

States should invest in digital and AI literacy training to build resilience in their populations. This training should cover how the internet and social media work, and how to interpret information to determine its truthfulness.


Evidence

This training should be provided from children all the way through to elders.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


D

David Alamos

Speech speed

142 words per minute

Speech length

1895 words

Speech time

796 seconds

Resolution 1373 needs to be implemented through national legislation by parliaments

Explanation

UN Security Council Resolution 1373 is a guiding document that has to be taken into consideration as a mandatory resolution from the Security Council. It needs to be implemented through national legislation by parliaments.


Major Discussion Point

The role of UN Security Council Resolution 1373 in countering terrorism


Agreed with

Ahmed Buckley


Jennifer Bramlette


Agreed on

Importance of UN Security Council Resolution 1373


Allocating budgets and conducting oversight of counterterrorism efforts

Explanation

Parliamentarians play a crucial role not only in developing legislation but also in allocating budgets and conducting oversight functions in counterterrorism efforts. This is particularly important in the context of using new technologies for counterterrorism.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


Emanuele Loperfido


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


UN entities providing capacity building support to member states

Explanation

UN entities are providing capacity building support to member states in their efforts to counter terrorist use of ICTs. This support is crucial in helping states develop the necessary capabilities to address the challenges posed by new technologies in the context of counterterrorism.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


K

Kamil Aydin

Speech speed

118 words per minute

Speech length

815 words

Speech time

411 seconds

AI enables sophisticated propaganda and automated recruitment by terrorists

Explanation

Artificial Intelligence is being leveraged by terrorist organizations to enhance their operations. This includes producing sophisticated propaganda and automating recruitment processes.


Evidence

Terrorist organizations such as Daesh, Al-Qaeda, PKK and far-right violent extremist groups are increasingly leveraging AI in their operations.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


Agreed on

Challenges posed by AI and ICTs in terrorism


E

Emanuele Loperfido

Speech speed

111 words per minute

Speech length

890 words

Speech time

478 seconds

Deepfakes pose risks of disinformation and eroding public trust

Explanation

The rise of deepfake technologies presents a troubling dimension in the fight against terrorism. These technologies could be leveraged by terrorist groups to spread disinformation, incite violence, or erode public trust.


Evidence

The ability to create convincing but fabricated audio and video content could have far-reaching impacts on social cohesion and national security if left unaddressed.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Kamil Aydin


Abdelouahab Yagoubi


Akvile Giniotiene


Agreed on

Challenges posed by AI and ICTs in terrorism


Need to balance security measures with human rights protections

Explanation

While AI can be a powerful tool in detecting threats and preventing radicalization, its use must always be balanced with respect for privacy and freedom of expression. This dual approach not only strengthens public trust but also ensures that AI innovation remains aligned with shared values of democracy and security.


Evidence

Italy has recently underlined the importance of ethics in AI governance by appointing a theologian and AI expert, a member of a United Nations committee, to lead national AI coordination.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


David Alamos


Akvile Giniotiene


Agreed on

Role of parliaments in addressing ICT/AI challenges


Differed with

Akvile Giniotiene


Differed on

Approach to regulating AI and ICTs in counterterrorism


Importance of public-private partnerships

Explanation

The resolution adopted by OSCEPA emphasized the importance of public and private partnerships in addressing the challenges of AI and ICTs in counterterrorism. This approach is crucial for developing effective strategies to counter terrorist use of new technologies.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


A

Abdelouahab Yagoubi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

AI and ICTs can enhance threat detection and analysis for authorities

Explanation

AI and emerging technologies play a pivotal role in the fight against terrorism and organized crime. They enable authorities to rapidly identify the most effective approaches and strategies.


Evidence

This includes the automatic analysis of vast amounts of data patterns and trends associated with the malicious use of technological tools.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Role of parliamentary assemblies in promoting knowledge sharing

Explanation

Parliamentary assemblies play a crucial role in promoting knowledge sharing on the use of AI and ICTs in counterterrorism. This includes disseminating news and analysis about trends related to technological advancement in various fields, including security and defense.


Evidence

PAM established a permanent global parliamentary observatory on AI and ICT, and began the publication of a daily and weekly digest to disseminate news and analysis about trends related to technological advancement.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


A

Akvile Giniotiene

Speech speed

143 words per minute

Speech length

1044 words

Speech time

436 seconds

Terrorists exploiting cybercrime-as-a-service on the dark web

Explanation

Terrorists are using cybercrime-as-a-service available on the dark web. This service can be procured at a very cheap price and could cause massive effects against critical infrastructure or help terrorists raise money.


Evidence

A recent report is available online regarding terrorist use of cybercrime as a service on the dark web.


Major Discussion Point

Challenges and opportunities of AI and ICTs in counterterrorism


Agreed with

Kamil Aydin


Emanuele Loperfido


Abdelouahab Yagoubi


Agreed on

Challenges posed by AI and ICTs in terrorism


Establishing legal mandates for law enforcement to use new technologies

Explanation

It’s crucial that law enforcement agencies have clear legal mandates to use new technologies in counter-terrorism efforts. Without such mandates, their actions may not lead to successful prosecution and adjudication of terrorist offenses.


Evidence

The use of new technology by counter-terrorist agencies should be based on clear provisions of law to ensure the principles of rule of law and adherence to international law.


Major Discussion Point

The role of parliaments in addressing ICT/AI challenges in counterterrorism


Agreed with

Ahmed Buckley


Emanuele Loperfido


David Alamos


Agreed on

Role of parliaments in addressing ICT/AI challenges


Differed with

Emanuele Loperfido


Differed on

Approach to regulating AI and ICTs in counterterrorism


Need for cross-border cooperation mechanisms

Explanation

Legal arrangements to support cross-border cooperation are crucial in countering terrorism. This is because terrorists and technologies have no borders, and data is everywhere.


Major Discussion Point

International cooperation on countering terrorist use of ICTs


Agreements

Agreement Points

Importance of UN Security Council Resolution 1373

speakers

Ahmed Buckley


Jennifer Bramlette


David Alamos


arguments

UN Security Council Resolution 1373 as foundational framework for international counterterrorism cooperation


Resolution 1373 requires operational information sharing between agencies


Resolution 1373 provides guidance on legal frameworks to criminalize terrorist acts


Resolution 1373 needs to be implemented through national legislation by parliaments


summary

Multiple speakers emphasized the crucial role of Resolution 1373 in providing a framework for international cooperation, information sharing, and legal guidance in counterterrorism efforts.


Challenges posed by AI and ICTs in terrorism

speakers

Kamil Aydin


Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


arguments

AI enables sophisticated propaganda and automated recruitment by terrorists


Deepfakes pose risks of disinformation and eroding public trust


Terrorists exploiting cybercrime-as-a-service on the dark web


summary

Several speakers highlighted the various ways terrorists are exploiting AI and ICTs, including for propaganda, recruitment, and cybercrime.


Role of parliaments in addressing ICT/AI challenges

speakers

Ahmed Buckley


Emanuele Loperfido


David Alamos


Akvile Giniotiene


arguments

Responsible for transposing international commitments into national laws


Need to balance security measures with human rights protections


Allocating budgets and conducting oversight of counterterrorism efforts


Establishing legal mandates for law enforcement to use new technologies


summary

Multiple speakers emphasized the critical role of parliaments in legislating, overseeing, and balancing security needs with human rights in the context of ICT/AI use in counterterrorism.


Similar Viewpoints

Both speakers emphasized the importance of building capacity, whether through public education or legal frameworks, to address the challenges posed by new technologies in counterterrorism efforts.

speakers

Jennifer Bramlette


Akvile Giniotiene


arguments

Need for digital literacy training to build societal resilience


Establishing legal mandates for law enforcement to use new technologies


Both speakers highlighted the importance of collaboration and knowledge sharing between different sectors and entities in addressing the challenges of AI and ICTs in counterterrorism.

speakers

Emanuele Loperfido


Abdelouahab Yagoubi


arguments

Importance of public-private partnerships


Role of parliamentary assemblies in promoting knowledge sharing


Unexpected Consensus

Dual nature of AI and ICTs in counterterrorism

speakers

Emanuele Loperfido


Abdelouahab Yagoubi


Akvile Giniotiene


arguments

Need to balance security measures with human rights protections


AI and ICTs can enhance threat detection and analysis for authorities


Establishing legal mandates for law enforcement to use new technologies


explanation

There was an unexpected consensus among speakers from different backgrounds on the dual nature of AI and ICTs in counterterrorism – recognizing both their potential benefits for authorities and the need for careful regulation to protect human rights.


Overall Assessment

Summary

The speakers generally agreed on the importance of UN Security Council Resolution 1373, the challenges posed by AI and ICTs in terrorism, and the crucial role of parliaments in addressing these challenges. There was also consensus on the need for capacity building, collaboration, and balancing security measures with human rights protections.


Consensus level

High level of consensus among speakers, suggesting a shared understanding of the complex issues surrounding ICT/AI use in counterterrorism. This consensus implies potential for coordinated international action, but also highlights the need for careful consideration of human rights and legal frameworks in implementing new technologies and strategies.


Differences

Different Viewpoints

Approach to regulating AI and ICTs in counterterrorism

speakers

Emanuele Loperfido


Akvile Giniotiene


arguments

Need to balance security measures with human rights protections


Establishing legal mandates for law enforcement to use new technologies


summary

While Loperfido emphasizes the need to balance security measures with human rights protections, Giniotiene focuses more on establishing clear legal mandates for law enforcement to use new technologies in counterterrorism efforts.


Unexpected Differences

Overall Assessment

summary

The main areas of disagreement revolve around the balance between security measures and human rights protections, as well as the specific approaches to addressing the challenges posed by AI and ICTs in counterterrorism.


difference_level

The level of disagreement among the speakers appears to be relatively low. Most speakers agree on the importance of addressing the challenges posed by AI and ICTs in counterterrorism, but they propose slightly different approaches or emphasize different aspects. This level of disagreement is not likely to significantly impede progress on the topic, but rather suggests a need for a comprehensive, multi-faceted approach that incorporates various perspectives.


Partial Agreements

Partial Agreements

Both speakers agree on the need to address the challenges posed by new technologies in counterterrorism, but they propose different approaches. Bramlette emphasizes digital literacy training for the public, while Giniotiene focuses on legal mandates for law enforcement.

speakers

Jennifer Bramlette


Akvile Giniotiene


arguments

Need for digital literacy training to build societal resilience


Establishing legal mandates for law enforcement to use new technologies



Takeaways

Key Takeaways

UN Security Council Resolution 1373 remains a foundational framework for international counterterrorism cooperation, especially regarding ICTs


AI and new technologies present both significant challenges (e.g. sophisticated propaganda, deepfakes) and opportunities (e.g. enhanced threat detection) for counterterrorism efforts


Parliaments play a crucial role in addressing ICT/AI challenges in counterterrorism through legislation, budget allocation, and oversight


International cooperation, including public-private partnerships and cross-border mechanisms, is essential for countering terrorist use of ICTs


Human rights protections must be balanced with security measures when developing counterterrorism strategies involving ICTs/AI


Resolutions and Action Items

Parliamentary Assembly of the Mediterranean elected to Presidency of the Coordination Mechanism of Parliamentary Assemblies on Counter-terrorism


OSCE Parliamentary Assembly adopted the Bucharest Resolution on Artificial Intelligence and the Fight Against Terrorism


PAM established a permanent global parliamentary observatory on AI and ICT


Unresolved Issues

How to effectively balance security measures with human rights protections in the digital sphere


Addressing the potential misuse of the term ‘terrorism’ in law enforcement and legislation


Developing comprehensive legal frameworks to govern the use of new technologies in counterterrorism efforts


Suggested Compromises

Adopting technology-neutral language in legislation to focus on criminalizing actions rather than specific tools


Investing in digital and AI literacy training to build societal resilience against online threats and misinformation


Establishing independent review and redress mechanisms for the use of new technologies by law enforcement agencies


Thought Provoking Comments

Despite all of our definitional differences on what is terrorism, who is a terrorist, or all our haranguing on these definitions, we were still, as an international community, able to make large strides on counter-terrorism cooperation. And the bedrock of that cooperation was UN Security Council Resolution 1373 and its descendants.

speaker

Dr. Ahmed Buckley


reason

This comment highlights the importance of international cooperation in counter-terrorism efforts, despite definitional challenges. It sets the tone for discussing practical approaches rather than getting bogged down in semantic debates.


impact

It framed the subsequent discussion around concrete actions and cooperation, rather than theoretical debates about definitions. This allowed for a more productive conversation focused on implementation and parliamentary roles.


Parliamentarians are of course the legislators, they are the ones responsible for transposing all of these international commitments into national laws, but they are also the dispensers of resources. They are the ones who make the correct decisions on appropriations and budgetary allocations to face the threats based on credible threat assessments from the security agencies.

speaker

Dr. Ahmed Buckley


reason

This comment succinctly outlines the crucial role of parliamentarians in counter-terrorism efforts, highlighting both their legislative and budgetary responsibilities.


impact

It shifted the focus of the discussion to the specific roles and responsibilities of parliamentarians, leading to more detailed explorations of how they can contribute to counter-terrorism efforts in practical ways.


As AI capabilities evolve, so does the potential for them to be used in ways that threaten peace and stability. For example, widely available AI-driven tools could enable individuals or groups to access technologies such as drones that could be misused for surveillance, targeted attacks, or other malicious purposes.

speaker

Honorable Emanuele Loperfido


reason

This comment introduces the dual-use nature of AI technologies and their potential misuse by malicious actors, highlighting a key challenge in the intersection of technology and security.


impact

It sparked a more nuanced discussion about the challenges of regulating and governing AI technologies in the context of counter-terrorism, leading to considerations of balancing security needs with ethical concerns and human rights.


Resolution 1373 set the groundwork by initiating a requirement for states to share operational communications information. Now, it seems like a pretty small mandate. But that set operational interactivity between law enforcement agencies, border control agencies, between aspects of government that had never traditionally worked together, usually operational information was held on the security side of the house.

speaker

Jennifer Bramlette


reason

This comment provides historical context and highlights the transformative impact of Resolution 1373 on inter-agency cooperation, which is crucial for effective counter-terrorism efforts.


impact

It deepened the discussion by emphasizing the importance of information sharing and inter-agency cooperation, leading to further exploration of how to enhance these aspects in the context of new technologies.


To regulate, legislate and deliver proper oversight of new technologies in countering terrorist domain, first, you need to understand what is the threat, how malicious actors can abuse new technologies for countering terrorism and what are the opportunities there for law enforcement and wider communities to use new technologies in this regard.

speaker

Akvile Giniotiene


reason

This comment emphasizes the importance of understanding both the threats and opportunities presented by new technologies before attempting to regulate them, highlighting the need for informed policymaking.


impact

It shifted the discussion towards the importance of technological literacy among policymakers and the need for ongoing education and collaboration between tech experts and legislators.


Overall Assessment

These key comments shaped the discussion by emphasizing the importance of international cooperation, the crucial role of parliamentarians, the dual-use nature of AI technologies, the need for inter-agency information sharing, and the importance of understanding both threats and opportunities before legislating. The discussion evolved from broad principles of counter-terrorism to specific challenges and opportunities presented by new technologies, with a consistent focus on the role of parliamentarians in navigating these complex issues. The comments collectively highlighted the need for a multifaceted approach that balances security concerns with ethical considerations and human rights, while also emphasizing the importance of technological literacy among policymakers.


Follow-up Questions

How can the UN Parliamentary Handbook on the Implementation of UN Security Council Resolution 1373 be improved and updated?

speaker

Dr. Ahmed Buckley


explanation

The handbook should be a living document that incorporates good practices from parliaments and evolves with new developments


How can parliaments develop more effective legislative frameworks and strategies to counter the abuse and misuse of AI and emerging technologies?

speaker

Honorable Abdelouahab Yagoubi


explanation

This is crucial for addressing the evolving threats posed by malicious actors using new technologies


How can states effectively balance security needs with human rights protections when implementing counter-terrorism measures using new technologies?

speaker

Jennifer Bramlette


explanation

This balance is essential to ensure that counter-terrorism efforts do not compromise fundamental freedoms


What legal mandates and policy frameworks are needed to enable law enforcement to effectively use new technologies for investigating and prosecuting terrorist offenses?

speaker

Akvile Giniotiene


explanation

Clear legal and policy foundations are necessary for law enforcement to leverage new technologies while adhering to the rule of law


How can parliamentarians improve their understanding of the threats and opportunities presented by new technologies in the counter-terrorism domain?

speaker

Akvile Giniotiene


explanation

A deeper understanding is crucial for effective legislation and oversight of counter-terrorism efforts involving new technologies


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #138 Empowering End Users Voices in Internet Governance


Session at a Glance

Summary

This discussion focused on empowering end-users’ voices in Internet governance through multi-stakeholder approaches. Participants emphasized the importance of including diverse perspectives, particularly from underrepresented groups, in shaping digital policies. They highlighted challenges such as digital divides, language barriers, and power imbalances that hinder meaningful participation.

Several speakers stressed the need to evolve multi-stakeholder models to be more inclusive and results-oriented. They suggested disaggregating stakeholder categories to better reflect diverse interests and improving mechanisms for filtering up local concerns to global forums. The role of governments in facilitating inclusion was debated, with some emphasizing their unique responsibilities while others cautioned against overreliance on traditional power structures.

Participants discussed strategies for engaging end-users, including citizen assemblies, opinion polls, and leveraging emerging technologies like AI for improved accessibility. However, concerns were raised about potential biases in AI systems and the need to involve underrepresented groups in technology development. The importance of creating channels for expression and empowering users to shape technologies’ future was emphasized.

The discussion touched on the changing digital landscape, particularly the impact of AI and the need for governance to keep pace. Speakers noted the challenges of balancing rapid innovation with inclusive decision-making processes. The upcoming WSIS+20 review was highlighted as a crucial opportunity to reaffirm and refine multi-stakeholder approaches in Internet governance.

Overall, the conversation underscored the complexity of ensuring meaningful end-user participation in Internet governance while adapting to technological changes and addressing systemic inequalities.

Keypoints

Major discussion points:

– The importance of including end-user voices in internet governance, while recognizing the challenges in defining and engaging diverse end-user groups

– The need to evolve and improve multi-stakeholder processes to be more inclusive, effective, and results-oriented

– The role of governments in internet governance and ensuring global agreements remain relevant amid rapid technological changes

– The potential of new technologies like AI to enhance participation, while also considering risks of perpetuating inequalities

– The importance of engaging youth and underrepresented groups in shaping the future of internet governance

Overall purpose:

The goal of the discussion was to explore ways to empower end-users’ voices in internet governance and improve multi-stakeholder processes to be more inclusive and effective in the rapidly changing digital landscape.

Tone:

The tone was largely constructive and collaborative, with participants building on each other’s ideas. There was a sense of urgency about the need to improve current approaches, balanced with optimism about potential solutions. The tone became more reflective towards the end as participants summarized key takeaways.

Speakers

– Pari Esfandiari: Moderator

– David Souter: Managing Director, ict Development Associates

– Carol Roach: Government stakeholder representative

– Olga Cavalli: Government stakeholder representative

– Amrita Choudhury: Civil society representative

– Wolfgang Kleinwächter: Expert in Internet governance

– Olivier Crepin-Leblond: Internet governance expert

– Ellen Helsper: Researcher on links between social and digital inclusion

– Sebastien Bachollet: Online moderator

– Yik Chan Chin: Summarizer of key takeaways

Full session report

Empowering End-Users’ Voices in Internet Governance: A Multi-Stakeholder Approach

This summary reflects the discussions held during an Internet Governance Forum (IGF) session focused on empowering end-users’ voices in Internet governance through multi-stakeholder approaches. The session featured a panel of experts, invited community leaders, and audience participation, exploring challenges and opportunities in creating more inclusive and effective Internet governance processes.

Setting the Context

The session began with polls to gauge audience perspectives on multi-stakeholder approaches and end-user participation in Internet governance. Participants, including digital policy experts, government representatives, civil society members, and researchers, emphasized the critical importance of the multi-stakeholder model in Internet governance while acknowledging its challenges.

Wolfgang Kleinwächter highlighted the historical context, referencing the NetMundial Plus10 and the Sao Paulo guidelines, which laid the foundation for multi-stakeholder Internet governance. He noted, “We have no clear procedures how multi-stakeholder collaboration works in practice.”

Key Themes and Challenges

1. Inclusion and Representation

A central theme was the importance of including diverse perspectives, particularly from underrepresented groups. David Souter highlighted the digital divide between governments and other stakeholders, while Olga Cavalli pointed out barriers such as language, finances, and lack of information. Ellen Helsper emphasized the underrepresentation of vulnerable groups and the Global South, noting, “Young people in these regions make up a majority of the population but are often excluded from governance discussions.”

Carol Roach cautioned against oversimplification, stating, “We need to stop looking at people as being one dimensional and review how we label boxes and how we label people.” This insight challenged current stakeholder categorizations and prompted consideration of more nuanced approaches to representation.

2. Evolving Multi-Stakeholder Models

Several speakers stressed the need to evolve multi-stakeholder models. David Souter argued for disaggregating and expanding stakeholder categories beyond the current model of four groups, suggesting this doesn’t capture the complexity of interests. He stated, “We need to be multisectoral in thinking about it. The internet is not the end in itself, in other words, it’s means to an end.”

Amrita Choudhury stressed the need to strengthen the legitimacy of civil society stakeholders beyond tokenism. Carol Roach proposed considering media as a separate stakeholder group, highlighting its unique role in shaping public opinion.

3. Role of Governments and Power Dynamics

The role of governments in facilitating inclusion was debated. Olga Cavalli noted that governments have unique responsibilities but must understand the multi-stakeholder approach. David Souter highlighted the need to address power imbalances between stakeholders, a point echoed by Ellen Helsper who specifically mentioned the role of big tech companies in shaping the Internet.

4. Artificial Intelligence and Internet Governance

The potential of AI to enhance participation was a significant point of discussion. Olivier Crepin-Leblond was optimistic about AI’s potential to overcome language barriers, stating, “AI will help me in that. And I’ll develop a tool for this for my own means.” However, Ellen Helsper cautioned about AI models potentially perpetuating existing inequalities, highlighting the complex relationship between technology and inclusion.

5. Engaging End-Users and Creating Channels for Expression

Participants discussed various strategies for engaging end-users, including citizen assemblies and opinion polls. Wolfgang Kleinwächter emphasized the importance of creating channels for everyone to express their opinions. Amrita Choudhury highlighted the importance of creating narratives to engage end-users on issues that affect them.

Ellen Helsper stressed the need to counter disempowering discourse around technology, while Yik Chan Chin called for self-motivation from end-users, presenting an unexpected difference in approach to end-user empowerment.

6. Capacity Building and Awareness-Raising

Several speakers emphasized the importance of capacity building and awareness-raising for end-users. Olga Cavalli highlighted the need to understand how new generations use information and media, stating, “We need to understand how the new generations are using information and media.”

7. National and Regional IGFs

The importance of national and regional IGFs in fostering local participation and addressing context-specific issues was discussed by several speakers, emphasizing their role in building a more inclusive Internet governance ecosystem.

Conclusion

The discussion underscored the complexity of ensuring meaningful end-user participation in Internet governance while adapting to technological changes and addressing systemic inequalities. It highlighted the need for more inclusive and representative Internet governance, improved multi-stakeholder processes, and careful consideration of the role of emerging technologies in both enabling and potentially hindering participation.

As the digital landscape continues to evolve rapidly, the conversation emphasized the urgency of addressing these challenges to create a more inclusive, effective, and forward-looking approach to Internet governance. The upcoming WSIS+20 review was highlighted as a crucial opportunity to reaffirm and refine multi-stakeholder approaches in this context.

Session Transcript

Pari Esfandiari: Well, good morning and welcome everyone. Whether you are joining us online or here in person, it’s a beautiful day in Riyadh, Saudi Arabia, which has the honor of hosting the IGF this year. I would like to take a moment to express my heartfelt gratitude to the IGF for its invaluable contributions to the Global Digital Governance Dialogue. This platform not only brings us together, but also empowers us to engage in meaningful conversations and contribute to shaping the future of digital governance. My name is Pari Esfandiari, and it’s an honor to moderate today’s critical discussion on empowering end-users’ voices in Internet governance. As you can see on the screen, joining me is Sebastien Bachollet, a well-known figure in Internet governance. He’s joining us virtually and will moderate the online comments and questions. I am also joined by a distinguished group of panelists, David, Carol, Olga, and Amrita. They are here and will provide their perspectives and thoughts. We are also joined by invited community leaders, Olivier and Wolfgang, who are here, and Ellen, who will join virtually, both to express their perspectives, but also to include you, the community, in this interactive session. We are also joined by Yik Chan Chin. She will summarize the key takeaways. As you can see, we have renowned leaders in the field of Internet governance. Their contributions speak for themselves, and they hardly need an introduction. We go now to the next slide. Before we dive in, let me briefly outline our agenda for the next 90 minutes. First, I will set the stage and introduce today’s topic. Then our panelists will address three core questions, offering their diverse and unique perspectives. This will be followed by invited community leaders sharing their responses, fostering a dynamic exchange of ideas. We will then open the floor to all participants for comments and questions. We will wrap up with reflections, a summary of takeaways, and closing remarks. 
So now let me take a moment to set the scene. As we gather here… Thank you. Thank you. Today, it’s clear that the Internet has evolved far beyond being just a tool or platform. It’s now the backbone of our interconnected world, driving economies, transforming societies, and deeply impacting personal lives. With its integration into nearly every aspect of life, governing the Internet has become an increasingly complex and critical task. This complexity is heightened by rising geopolitical tensions and the inherent friction between the Internet’s borderless nature and traditional nation-state frameworks. In this context, the need for inclusive global agreements, adaptable standards, and collaborative approaches is more urgent than ever. The evolution of Internet governance reflects a profound shift in the dynamics of power, influence, and collaboration in the digital age. Traditional multilateral and bilateral frameworks often struggle to keep pace with the rapid technological advancement and transnational challenges of the Internet. This is where the multi-stakeholder approach emerges as indispensable. Unlike conventional governance models dominated by governments, the multi-stakeholder model acknowledges the Internet as a shared global resource, requiring shared responsibilities and diverse representation, where governments, civil society, the technical community, academia, and the private sector work collaboratively to navigate this complex landscape. At the heart of this ecosystem are those who are impacted. Their perspectives are not only… valuable but fundamental to shaping an Internet that reflects the needs and aspirations of global communities. They bring critical first-hand insights into navigating the digital landscape, from addressing privacy concerns and ensuring accessibility to building trust and fostering innovation. Governance shaped by these lived experiences is more likely to be effective, trusted, and widely adopted. 
Conversely, excluding them risks governance being dominated by narrow interests, perpetuating inequalities and missing opportunities for meaningful progress. Yet, despite its necessity, as well as its amazing achievements, the multi-stakeholder approach faces serious challenges, as highlighted during NetMundial Plus10 earlier this year. These challenges include issues of representation, inclusivity, meaningful participation, inefficiency, and a perceived inability to deliver actionable results. These concerns underscore the need for reform, innovation, and a renewed commitment to making the multi-stakeholder approach work, not just in principle but in practice. The stakes have never been higher as we approach the WSIS Plus20 review, a pivotal moment to shape the future of Internet governance. Now, with this context in mind, I would like to engage with our audience by launching three quick polls on today’s key topics. You can… You have one minute to respond. Okay. This is poll number one. Okay. Hmm. Oh, you can’t… I think there’s… Yeah, we have 8%, one and one, and I think we now stop the polls and we continue. Could we end the polls, please? Yeah, we have 8%. 1%, 1%, and 1%. So could we please end the polls, because I cannot now change the slides. Okay, it’s changed. Okay, could we? So, sorry for this. So, now we have three overarching questions as shown on the screen. To delve into this discussion, I would like to begin with David. So, if David is online, the issue of inclusion of internet users has been underscored, but who exactly are we talking about, and what are the barriers here? Please limit the response to three minutes. So, I think I’d like to start by building on what you’ve just been saying, because to me, what matters about the internet and the work I do is on the development of digital policy,

David Souter: which includes at the moment, working for the United Nations on the 20 year review of the WSIS process. What matters most to me is issues around impact. And on the whole, internet governance has been largely led by digital insiders. So, by businesses, by the technical community, by governments that are government departments that are involved in the supply of the internet rather than its impact on society as a whole. So, the question here I think is particularly driven by the way the internet has evolved to be something that is now impactful across all areas of economy, society, and culture. So, the first part of the answer is. is actually not to do with the end users themselves, but to do with the expertise that is involved in internet governance discussions. I think that needs to be much, much more informed, at least as much informed by people whose expertise lies in those fields of impact rather than in the fields of the internet itself. So by environmental experts, by health specialists, by educators and so on. We don’t have sufficient space for that in internet governance. In terms of end users, they’re of course very diverse. And they’re the demand side rather than the supply side of the internet. So not just individuals, but also organizations, more businesses, trades unions, sports clubs, religious organizations, whatever. Not just organizations, but also individuals who are also very diverse from where they come in age, gender, education, in their requirements of the internet. Not just intensive users, but also occasional users. Not just those who want to take part in internet governance. We also need to understand the perspectives of those who don’t want to take part in the process. And not only users, because non-users are also severely affected these days by the impact of the internet on their lives and their societies. So there are ways to get a wider range of views like this. And just, maybe I’ll come back to these later. 
But I would particularly look at ways that do not just attract vested interests or insiders to the process. So a couple of things that might be considered here are the kind of household surveys or opinion polls that have been used a lot by Research ICT Africa and by Ofcom, the regulator in Britain. And citizens’ assemblies, which have been successful ways of bringing in a very wide diversity of views on controversial issues in some societies, for example in Ireland, as ways of ensuring that discussion is informed by everyone and not just by those who want to take part.

Pari Esfandiari: Thank you very much, David. And now Carol, you heard David’s comments and how he expanded the concept of the end-user. With your leadership experience, why do you think they remain invisible in multi-stakeholder processes?

Carol Roach: Thank you. Thanks, David. End-users are part of, or trying to be part of, the digital society. That means that they want to get involved in using technology for social reasons: education, health, employment. Or, if you look at governments or civil servants, they have to provide services online. So the barrier for the multi-stakeholder process, for the end-user, is that we don’t tend to notice that we may have missed an end-user within a group of persons. We tend to group them a lot. So you find that the barriers that you find offline are the same type of barriers that you would find online. So you might have an end-user that’s missed because of their economic standing, or they don’t have the capacity, or they may have some kind of disability, and therefore they’re not aware that they could be part of the multi-stakeholder process. They may also think that there’s some representative out there doing the work: it’s not me. And I think it’s a lack of awareness, because we tend to categorize people and put them in this labeled box, and sometimes I might be somebody that falls in more than one category. So therefore I’m not in a box anywhere; I’m totally left out. So I think we need to stop looking at people as being one-dimensional and review how we label boxes and how we label people.

Pari Esfandiari: Thank you very much, Carol. And with that, Olga, in your view, to what extent do the barriers lie in inclusion and how much are they rooted in a lack of participation?

Olga Cavalli: Thank you. Can you hear me? Yes. Thank you very much. First, thank you very much for inviting me. I’m very honored to be with all these very important people here in this room. I would like to build upon what Carol said, and I totally agree with you. Whether you work in a big company or a government or in a civil society organization, you’re always an end user. You have your own life, you learn, you communicate with your students or with your friends or with your family through the Internet. So we are always end users. That’s why I always find somehow weird this division; for example, in ICANN you have the end users in one place, and then you have that labeling thing that you mentioned. I think it’s a very interesting way of describing it and putting it into words. Barriers. The ones that always come very easily to mind: lack of resources to participate, which we all know is a problem, especially for developing economies and for people living far away from where the meetings are happening. This is the beauty of the rotation of the meetings, because you always have the possibility of having something closer to your home or in your own town. And then there’s the language barrier. I don’t have that deep insight into other regions, but in Latin America that is a big barrier. Many people are able perhaps to read English, but understanding a native English speaker is complicated. So that is a barrier which is important. But I would like to also stress another barrier, which I think is a lack of information. Sometimes people don’t know where to go. There is a diversity of spaces of participation; they don’t know how to direct their interests or which meeting they should be focused on. There are several, and sometimes they don’t know how. There are sources of funding, for example, that they don’t know about.
So I talk with my students about the ICANN fellowship, or some other fellowships from ISOC, to participate in the IGF or in ICANN, and they have no idea. So it’s communication, it’s information, and it’s also capacity building about how to participate, and how to participate in a meaningful way, in all these different spaces where we can make our voices heard. So it’s not only money, it’s not only resources; it’s also information, communication, and good networking to spread this news.

Pari Esfandiari: Thank you very much. And with that, we go to Amrita. Amrita, from a grassroots and civil society viewpoint, what are your thoughts on this?

Amrita Choudhury: Thank you. So if you look at the end user, and I’ll go to that question first: end user where? Different processes will have different people as the end user. A government can also become an end user of a process. So we need to be very clear who the end user is and what the impact is, as David was saying. So that’s one thing, and end-users are not homogeneous. If we think just bringing three people into the room when AI is being discussed covers the end-user: no. Who is impacted? What is the kind of impact? Do they understand it? That is important. For the grassroots level, as Olga was mentioning, one of the important things is capacity. Everyone at the grassroots does not have the same resources to understand what the global discussions are all about. Are we building that amount of capacity? Because the amount of learning involved is extremely high. Are we building it? And I think the Sao Paulo principles also speak a bit about that. The other thing, obviously, is finances. Resources are another thing. Even amongst grassroots NGOs, there may be bigger ones and there may be smaller ones. Are we making it equitable between developed and developing countries? I think there are many things which need to be looked into, many dimensions, apart from languages, skills, et cetera. I’ll leave it at that.

Pari Esfandiari: Thank you very much, Amrita. With that, I go to Carol. From your experience, what best practices ensure meaningful inclusion?

Carol Roach: A very good question. We tend to talk about inclusion all the time, but I don’t think we break it down to say: who’s not being included? We need to be able to identify and understand what their need is and why, and we go back again to thinking that persons are one-dimensional, and we’re not. So therefore, we need to look at a stakeholder management model, and there are so many models that we can apply, or something like stakeholder mapping, where, let’s say, we look at the interest level and the abilities of the person and we create a strategy based on that, because one strategy doesn’t fit everybody. So you really need to sit down and take stock of who the end user is, who we’re trying to reach, who we are missing. And another thing you need to do is to make it an iterative approach: okay, I tried this strategy. Who did I capture? Did I meet my objectives? If not, well, let me go back at it. Let me make a change to it. Let me see who I missed out, and then what’s my strategy to reach that person? And you just keep iterating so you can be more agile. Sometimes we write these big strategies on paper and we say, okay, that’s it, I’m done, let’s try to implement it. And it usually doesn’t work, or you don’t get the impact that you would want. Also, for stakeholder management, you can have a spectrum, because people will fall somewhere on a spectrum, and you can decide: okay, what are my different criteria on the spectrum? And you can create different strategies for them. It’ll require more resources, but if you want to be impactful, then you need to take the time to understand, as Amrita and everybody are saying, who really is the end user. Am I trying to reach the government, the public service? Am I trying to reach the persons that use public services? Who am I really trying to reach? And what is it that they’re interested in?
Sometimes we impose what we’re interested in onto what we think other persons are interested in.

Pari Esfandiari: Thank you very much. And with that, I go to Amrita. Amrita, you heard Carol. From your point of view, how could grassroots approaches better support inclusion?

Amrita Choudhury: I think, for inclusion at the grassroots level, as in not just from the grassroots, David gave an example of Africa, where there are community discussions happening. But training the trainers to work at the grassroots is important. For example, if I look at a country like India, with 1.2 billion population, just five community meetings will not be enough. You need language, you need to build that capacity. And it cannot be one-size-fits-all for all topics. If I’m, say, taking AI for good, which is a buzzword these days, and you want to use it: how is it helping in agriculture, or climate change, or even jobs, for that matter? You have to know who in that area is working. Mapping, as Carol mentioned: how do you build their skills? Are your interests and their interests aligning? And how do you get the feedback and take it up when decisions are being made? I think that’s also important, how you map. And it’s not going to be similar for all places, but I think building the capacity, having information flowing, when you give suggestions, how is it being used or unused, the transparency in the processes, those are important, and building accountability. For example, the problem that many point out when we talk about multi-stakeholderism is that we don’t have stakeholder accountability. Are we trying to bring in some accountability for what I am preaching? Thank you.

Pari Esfandiari: Thank you very much, Amrita. With that, I go to Olga. What role can governments play in including underrepresented voices?

Olga Cavalli: Thank you for the question. There is one thing in the multi-stakeholder concept that I usually point out: this confusion between equal footing and all stakeholders being equal. People say, oh, we all sit together and we all talk together, but the responsibilities of each stakeholder are different. And I think that the government has a particular and important role, because governments are responsible for security, for promoting the economy, for taking care of citizens, security on the streets and all that. So they have an important role, and I think we as members of the community have a big challenge in trying to make governments understand the beauty and the importance of building a real, open multi-stakeholder environment to interact with these multilateral meetings. So both are okay. And there is this fantasy that multi-stakeholder is easier. No, it’s much more difficult, because you have to bring everyone to the table and make sure all stakeholders really have a good, open dialogue. Multilateral is easier: you put all the representatives of governments together, they talk with their advisors, and that’s it, and they produce a document. That’s very important, but at the same time governments must understand that the inclusion of end users and other stakeholders in the dialogue is fundamental for these new technologies that are impacting society. So they are a very important stakeholder. I wouldn’t say more important than others, but they have a kind of gathering role for all parts of society.

Pari Esfandiari: Actually, that’s a great point, and it’s often overlooked. With that, I go to David. What strategies can help make multi-stakeholder processes more inclusive for underrepresented groups?

David Souter: Okay, so I think the starting point here, which applies not just to issues around the internet but to everything really, is that as a policymaker you want to engage with the people whose lives your policies impact upon. If you want to engage with them, you have to engage with them on the terms that have meaning for them and that encourage them to participate. So there are, I think, probably a couple of points here. First is that most people, and this includes most end users, don’t have the time, the inclination or sufficient interest to get deeply involved in the issues that are the priorities in most internet governance discussions. They’re not interested in how the technology works; they’re interested in what it does to them. And so the internet governance institutions, if they want to reach out to those whose lives are impacted, have to do so by starting from the point of view of what is important to them, what impacts matter to them, how their daily lives are affected, and then reach back from that to what the internet governance and technology questions are and how they should respond to those. The internet is not the end in itself, in other words; it’s a means to an end. We need to be multisectoral in thinking about it. The suggestions that I made earlier, I think, are trying to do that sort of reaching beyond. So the point of household surveys or of opinion surveys is to try and get to those people who would not naturally participate. And citizens’ assemblies, which I also mentioned, are a particularly effective way, I think, of doing that on complex issues over a period of time. What you do with those is you have a randomised selection, made representative of the population as a whole, of maybe 100-200 people, who over a period of time, with expert input, discuss an issue that is complex and difficult and challenging, and seek to reach consensus about it, which is a consensus of the opinion of society.
It’s been very helpful in a number of countries in dealing with issues that are highly contentious, such as those to do with reproductive rights, abortion and gay rights, for example, in Ireland. And I think that is a way of getting to the public, as opposed to the much easier thing that happens, which is internet governance insiders talking to themselves.
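[Editor's note] David's description of a citizens' assembly, a randomised selection of 100-200 people made representative of the population as a whole, is essentially stratified random sampling. A minimal illustrative sketch of that selection step, using a hypothetical `draw_assembly` helper and toy age-band data (not any real assembly's methodology), might look like this:

```python
import random
from collections import Counter

def draw_assembly(population, strata_key, size, seed=None):
    """Draw an assembly whose strata proportions mirror the population's.

    population: list of participant records (dicts)
    strata_key: function mapping a record to its stratum label
    size:       number of assembly members to select
    """
    rng = random.Random(seed)
    # Group the population by stratum (e.g. age band, region, gender).
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    total = len(population)
    assembly = []
    # Sample each stratum in proportion to its share of the population.
    # (Rounding means quotas may not sum exactly to `size` in general.)
    for label, members in strata.items():
        quota = round(size * len(members) / total)
        assembly.extend(rng.sample(members, min(quota, len(members))))
    return assembly

# Toy population of 2,000 people with an age-band stratum.
population = (
    [{"id": i, "age_band": "18-34"} for i in range(500)]
    + [{"id": i, "age_band": "35-64"} for i in range(500, 1400)]
    + [{"id": i, "age_band": "65+"} for i in range(1400, 2000)]
)
assembly = draw_assembly(population, lambda p: p["age_band"], size=100, seed=42)
print(Counter(p["age_band"] for p in assembly))
```

Real assemblies typically stratify on several attributes at once (age, gender, region, education), but the principle is the same: the random draw keeps out self-selected insiders while the quotas keep the group demographically representative.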

Pari Esfandiari: Thank you, David. And while I have you, maybe you could comment on one key challenge: the fast-changing digital landscape, and how the multi-stakeholder approach could adapt to it.

David Souter: The biggest challenge in the digital landscape at the moment is to do with frontier technologies, artificial intelligence, as we use that term, and other things too, where the pace of technological change is too fast for our institutional frameworks, the regulation and the governance, to deal with the uncertainties and risks that are associated with them. That makes it particularly important to understand the purposes of technological change as being about the common good, and so to understand the kind of long-term goals we might have for society as a whole, rather than seeing them as being about the good of technology itself.

Pari Esfandiari: Thank you, David. With that, I go to Carol. How can multi-stakeholder discussions stay flexible and responsive to digital changes?

Carol Roach: So if you’re talking about global agreements, there’s usually an argument between multilateral and multi-stakeholder. But what I tell persons, and it could be because I’m from the government stakeholder group, is that at the end of the day, the people vote for governments. They don’t vote for civil society. They don’t vote for technical companies. They vote for people who will represent them. So when it comes to global agreements, as Olga says, not all stakeholders are created equal all the time, every time. So in a case where you’re talking about negotiations for global agreements, the government is an important stakeholder. Now, they have the influence, but a lot of times they don’t have the interest. So what we need to do is to ensure that we raise the interest level. We need to keep the awareness up. Each country or state has a mission that will actually do the negotiations for them. So therefore, we need to find some way in which we can raise their awareness. We have to ensure that we do it constantly. We just can’t say, okay, wow, there’s an agreement coming up that has to be signed, let’s try to get some meetings with them. No, if you keep them constantly updated and aware, then they feel comfortable that you’re not just trying to pressure them into an agreement. So I think we just need to keep it constant. And as someone said, I can’t remember who, we need to make the stakeholders more accountable. So you have to be a part of it; you just can’t sit back. You have to play a part. You just can’t say, oh, look what they did. You have to be accountable.

Pari Esfandiari: Thank you very much. And with that, I go to Olga. How can governments ensure global agreements remain relevant amid rapid technological changes?

Olga Cavalli: That is a very interesting question and a very difficult one. Also, governments are not equal among themselves. It’s not the same for the government of a small developing country as for a global leader. So for developing economies, it’s a challenge, because in developing countries, and I live in one, the urgencies are different. There are many pressing things in the economy, strikes, or inflation, and other things that have to be solved in the short term, and they have a very strong impact on society. So when you go to them and say, hey, we need to talk about something about artificial intelligence: oh, Olga, what are you talking about? We don’t have time for that. But I think Carol made a very interesting point. We have to be consistent. We have to be able to present information in a way that they can quickly digest and use. You cannot give them 100 pages to read. Perhaps a brief document that opens their eyes to be aware of some negotiations that could be global but in the end will have an impact at the national level. We have seen that, for example, with new gTLDs. I’ve been talking about them with my government for decades, and once, the name of one of our regions in Argentina was taken as a TLD by a company: oh, it’s so good that you’re there. Okay, I’ve been talking about this for years. So it is a process, not a one-point thing. It’s going patiently to their advisors and to the government to tell them that there are global decisions that will someday have an impact at the national level, and they have to be aware of that. But it’s challenging, especially in developing economies.

Pari Esfandiari: Thank you very much. And with that, I go to Amrita. Amrita, how can grassroots and civil society voices help keep multi-stakeholder processes adaptable?

Amrita Choudhury: By giving regular and constructive feedback. Many times, as Olga rightly said, governments have sovereign interests they need to protect. However, in many developing countries, in the name of sovereign interests, the interests of end users or others are overridden. So end users, I would say civil society organizations, should continue to raise their voice and point out the things which need to be corrected, because at the end of the day, if you look at the internet or digital technologies, they impact everyone. And if the concerns are not taken up and deliberated in a nuanced way, no process or regulation can work. So that is why multi-stakeholder, why different stakeholders have to be there: it is not a question of having everyone at the table. It is to get the legitimate concerns and advantages coming to one point, so that when decisions are taken, all aspects can be heard, not necessarily adopted, but at least heard. And there is buy-in when you have to implement those things. So it is in the vested interest of a smart government, if they really want things to happen on the ground. So I think grassroots-level civil society has to keep on raising their voices and calling people out to make them more accountable. Thank you.

Pari Esfandiari: Thank you very much. And with that, we now go to our invited community leaders. We set the stage and you heard the panel. I would like you now to make two comments about what you have heard so far. Who wants to go first?

Wolfgang Kleinwachter: Yeah, thank you. Thank you very much. It’s an inspiring discussion, and it reminds me of debates we had nearly 30 years ago in the 90s, when all this was new and people came up with ideas for a cyber democracy. I haven’t heard too much in the last couple of years about cyber democracy, but in the 90s this was the catchword, and there was a question: what is cyber democracy? Some people said, okay, people with a passport are citizens, and internet users are now netizens, and they should have the same rights as citizens. And so the idea of elections came out, because the accountability question was raised already, in particular in the ICANN context, and we had this very interesting experiment in the year 2000 to give all internet users a right to participate in a global election. At this time it was for five directors of the ICANN board. So this was an incredible experience, and the conclusion from this election was that people who were first excited about this global election and global cyber democracy lost their illusions in the process and became more skeptical, while people who were skeptical in the beginning said, okay, this is something new; we could have reached a level of accountability also for stakeholder groups by continuing with the elections. The wise decision which was made by ICANN in the year 2002 was, you know, to find a mix between what the American people wanted and what the European people wanted: what in democratic theory is called representative democracy and participatory democracy. So there was a long debate about whether participatory democracy would remove or substitute representative democracy. And the outcome was: no, this brings additional value to the process. So that means participatory elements are important, in particular when representative democracy has reached a certain limit.
And insofar as user participation goes, it is an important element, you know, to bring more sustainability to decisions, to bring more voices, more perspectives to the policy development process. And then it depends on the issue, because we always distinguished between policy development and decision making. I think for decision making, you have to have a certain authority. But before a decision is made, the policy development process is even more important. So that means if you have a good, broad, open, inclusive policy development process, then the decision maker, at the end of the day, just rubber stamps the recommendation which comes out of the PDP. This is in the ideal world. But the problem is, and I remember the argument 30 years ago: do you really want to go for global elections? Do you want to have five billion people who go to the ballot box? How can you organize this? So there were also a few illusions and some dreams around it, and bringing it down to a real situation in 193 countries is really difficult if you have the wish to invite everybody to the process. So there is a natural barrier, and not only barriers like language, finance and things like that. So that means people who buy a car do not have to be engineers and do not have to understand how to build a car, but people have to understand the rules when they use the car. And so, when we speak about user involvement, the question is then where and what? So that means you have to be a little bit more specific. For me, and it’s my final word, the most important thing is that you have a channel for everybody where they can express their voice and make their position heard. And I think in a democracy, we have free media, we have all kinds of channels where people can express their voices and can participate in policymaking in their country.
And in our internet world, that’s why the national IGF is the best institutional framework you can have, because an IGF gives you an opportunity to bring everybody to a table; it’s like a round table discussion. A business person has a different perspective than a technical expert, civil society organizations have a different one again, and if governments are wise, they will listen to what’s going on there, and then everybody goes home and makes the decisions where they have the authority to make decisions. So this is a little bit idealistic; I’m an academic person, so I’m working with models, but I think you have to have a vision if you want to move forward into reality. Thank you.

Pari Esfandiari: Thank you very much. I see Olivier is nodding and agreeing with all the comments made. So maybe you would like to make comments.

Olivier Crepin-Leblond: Yeah, thank you very much, Pari. Olivier Crépin-Leblond speaking. And I agree with a lot of the things that were said in this session. Of course, having been involved with internet governance for quite some time, there are a lot of things that we are hashing out again and again, but we don’t seem to have solutions for them. Carol was mentioning the need not to put people in boxes, but it’s so easy to put people in boxes. It’s: oh, what stakeholder group are you? And then, there you go, you’ve got a label. We’ve dealt with those people; let’s deal with the others. That’s one of the things that we’ve been used to doing. Olga mentioned that there are big governments and small governments; you can’t just put all governments under the same banner. And of course, everyone is a user at the end of the day. Amrita mentioned that the learning barrier is really high. And I’ve got a thought about this, because yes, there is a learning barrier with everything. And of course, Wolfgang mentioned that you don’t need to know how to build a car to be able to operate one, but you do need to learn how to operate it. And I don’t forget, of course, David’s description of methods, which I find interesting, about the sampling of people: taking a representative sample of the population and then asking them questions. I’m a firm believer in technology. And I think that we are, at the moment, living through a fundamental change. The past year or two have brought a fundamental change in how everything is happening. First, we’re seeing this complete, crazy instability worldwide with regards to politics. Things that we would have never imagined are actually happening. Things that nobody has even forecast are happening. It seems that intelligence agencies worldwide are either on holiday or something, but they didn’t tell us that something was going to take place. And suddenly, you open the TV, and you think, oh, this has happened.
And you’re just thinking, oh, we’re living in this crazy reality TV show. And why is that? Well, I have no answer for this, but one thing that I do know is that there is a fundamental change in the way that we’re doing things that we need to embrace, and that’s the use of artificial intelligence. And that is a tool that is so powerful, I really think it will help us in our aim to make multi-stakeholder governance something that will succeed. Think of the various barriers that we have in front of us, for example, languages. We all speak different languages, and we have a common language that we’re using, which is English. We sometimes use interpreters, but that’s extremely expensive. I believe that AI with automatic interpretation will be able to help us greatly in this respect. Finances: well, okay, financing is still a huge problem, because we all feel the need to meet face-to-face. But with the technologies that we have and that are going to be developed, it’s going to be easier and easier to interact, not only in a Zoom room remotely, but with other tools as well. And when you start linking the physical world and the virtual world, that will make things a lot easier, because you could have a meeting with someone as a holographic image that you just put on your glasses and say, oh, by the way, I’m having a chat with some person in New York at the moment; sorry, I’ll talk to you in a second, I’ll just finish my chat. This sort of thing is inconceivable today, because AI is at the level where aviation was a hundred years ago. Now, a hundred years ago, if you ever go to the Udvar-Hazy Center, I think it’s near Washington DC, there’s a huge airplane museum, and you see some of the earliest instances of aviation and you think, there’s no way in hell that I would ever even think of going on one of these things, because it’s 99% sure you’ll kill yourself. And, you know, whoever wants to fly must be crazy.
And yet, the majority of us who have come from outside the country have flown into here, and we haven’t really thought twice about it. And that’s because, of course, aviation has got this whole history of improvements that have happened over the years. We are at the very early stage of artificial intelligence, and already we are able to summarize things using generative AI. We’re able to use it to take a complex idea that is presented in a professional paper by people who have written about a topic for the past 30 years, and who are able to use a certain jargon and a certain way of expressing themselves that is easy for them, but very difficult for newcomers. And we’re able to say, I don’t understand this, simplify it please. And the machine will do it for us. And it will, you know, write six pages. No way I’m reading six pages. Say it in one page. And it will do a pretty good job. Sometimes it’ll get it wrong. But it’s still very early days. It’s the days when you don’t want to go on that device that might jump over the cliff and kill you. In a couple of years’ time, all of these models are going to work better. And I really think… See, that’s the technology we have today. Yes. So that makes the point: we have very basic tools at the moment. The flight has crashed. Sebastien, are you able to hear? No, it seems technology has failed us. How ironic. Shall I speak in French instead, or another language? It’s very early on in the use of artificial intelligence. And I really believe in the tools that are currently being developed, and that we ourselves can develop, because AI allows us to develop our own tools too. I really believe that we will, as a group, as people, as end users, be able to develop tools for ourselves that will help us be better equipped for taking part in these discussions of Internet governance.
Whether it’s explanations on things that we don’t understand when somebody else talks about it. Whether it’s ways for us to express ourselves. Because there are some difficulties sometimes when you enter a place and you have to convey a story, convey a point. But you don’t quite know what language to use for that. And at the same time also being able to do exactly what I don’t do. Which is to make very short interventions and let other people speak as well. AI will help me in that. And I’ll develop a tool for this for my own means. And I’m sure you will all be able to develop your own tools that will help you and the people around you in taking part in these issues and these discussions.

Pari Esfandiari: Thank you very much, Olivier. And my apologies for the technical glitch we had. So with that, I go to Ellen. Ellen, could you please make your intervention on the conversation that has taken place so far?

Ellen Helsper: Yes, thank you very much. I hope you can hear me. I apologize for my voice. I’ve been ill, so it’s not that strong. I hope it’s okay. I’m actually quite glad to be following Olivier, because I’m going to give the exact counter-argument: while how everything happens might be changing, and we’ve seen everything change on several occasions throughout history in relation to technology, what doesn’t tend to change is the result, especially for people who are more vulnerable and underrepresented. We see that in digital spaces their voices are often less heard, and their experience is less represented, because especially with AI, the models that AI is built on are built on the lived experience of those who have been most present online and who’ve created most content, and those don’t tend to be the people who are underrepresented in society in general, and who have historically been systematically excluded. My work is in the links between social and digital inclusion, so what happens to vulnerable groups as societies become increasingly digital. What I find interesting in this discussion, and in the framing of this panel, and I would have to say I am in line also with much of what the other speakers have said, especially David Souter, is that it’s interesting that we talk about users, because the internet, and let’s not forget the internet is not just the infrastructure but all the applications and platforms that are on it, presents people with something as a fact that they then need to become engaged with. So it presents them in a way as passive in the creation of these technologies, having to get involved with something that wasn’t, from the beginning, designed by or for them.
So looking forward, and this is our experience in working with groups who tend to be underrepresented or have been excluded in various ways from society in general, and especially from more digital societies, there is often a kind of individual responsibilization: people need to get skills to get engaged, they need to become literate in how to use technologies and what these processes are, and this often feels quite exploitative for them. It feels passive for them as well. And I think this has also been mentioned before: there is a kind of mismatch about what the internet and internet governance are for, that that’s not understood, that the outcomes of internet governance or digitization in general are not presented in ways that have meaning or are relevant to a lot of the people that I work with. In my research, well, yes, I think it is definitely governments and other powerful stakeholders that should be held accountable for the outcomes that people get from this process of governance, but we should be thinking about what kind of internet and what kind of technology we want for the future, and that future should include all these experiences. And, including some of the work that I’ve done, there are two things, and my thoughts are a bit rambling because I’m following up on many very well made points earlier. But one of the two things I’ve been thinking about for the future, that I think we haven’t really discussed yet, is that there are many, many young people in the world, and actually young people and children especially make up the majority of the population in the Global South. And both children and the Global South in general are underrepresented in terms of lived experience on the ground. And they have a very hard time shaping this future that we’re going to be living in; they have a really hard time getting their voices heard at a higher level.
And when we talk, you know, there was talk about a level playing field, all stakeholders being involved. But in the end, even if we talk about local or national IGFs, there needs to be a mechanism for filtering up, and governments and governance bodies need to be held accountable for putting those mechanisms in place. So that’s through the forums that David Souter talked about, but also through civil society organizations that work very locally, that really understand the local impact of the way in which technologies are designed; these organizations should be involved and have a meaningful voice, so that it’s not the responsibility of the individuals who are really struggling to make their voices heard, but there’s a really clear process for that. I think also, and that’s the other point that I wanted to make, something that we haven’t mentioned but that’s obviously the big elephant in the room: internet governance cannot be talked about without talking about the huge power inequalities in terms of who is shaping the internet, its infrastructure, its content, its platforms, and whose data gets used and collected. We have not talked about the enormous sums of money and funding that come from the tech industry itself. We haven’t mentioned them necessarily as a stakeholder. They’re also not here at the table, but in the end, many governments around the world are truly beholden to what big tech companies from very specific parts of the world will allow them to do, or help them to do, by providing content, platforms and infrastructure. And I think we really need to talk about that, because in the design of these platforms and in the design of the content and the infrastructure, this is where we also see a huge under-representation.
So it’s not just about involving people as end-users and focusing on who is most likely to be advantaged; I would say especially people who tend to have been under-represented should be heard through some mechanism, without making them responsible for their voices being heard, by reaching out to them, as David Souter was saying. But we should also think about how governments, civil society and other stakeholders get more involved in making sure that in the design and the construction of the infrastructure and the content and the platforms, in these global tech companies and the global flows of money and funding, these voices are involved from the beginning, and not after the fact (here is a technology, how should we govern it?), but really thinking ahead. So we need to make sure that these patterns that I was talking about before, which we can see happening with AI right now, in terms of who is represented and who these technologies are designed around and made useful for, do not create a more unequal future in which these technologies are governed and designed in a way that doesn’t represent the best interests of these future generations of vulnerable populations. And I would say getting more young voices, young underrepresented voices, especially from parts of the world that have been underrepresented. And I don’t want to put people into boxes. My approach is always about understanding disadvantage, vulnerability or living in precarious conditions from an intersectional perspective, from a local perspective. But that requires accountability at the top for involving these voices and perspectives from the beginning, and not as the kind of tick-box exercise I think was mentioned before. So that would be my contribution.

Pari Esfandiari: Thank you very much. Thank you. With that, I think we go to Sébastien and open the floor for questions. Sébastien, the floor is yours. Sébastien?

Sebastien Bachollet: Thank you very much, Pari. We don’t have any questions yet in the chat. If people want to raise a question right now, that would be very useful. And maybe there are people in the room who would like to take the floor too.

Pari Esfandiari: Well, maybe why don’t we start with you? Maybe you could make your own comment.

Sebastien Bachollet: OK, I can. I can do that. Thank you very much. Thank you for all these exchanges; it’s quite interesting. I am Sébastien Bachollet; I was introduced by Pari at the beginning. Unfortunately, I am not with you, but a lot of my friends are there, and that’s good. It is a really interesting discussion. I would like to point out a few of the points you raised, and I will not pinpoint who said what. Artificial intelligence: yes, it’s an interesting tool. But is it built for end-users, and who is building it today? Do we need to trust them as we trust any of the other platforms? So yes, it could be one interesting tool, but it will depend on how the tool is set up. The second point is why we are talking about end-users here: because very often we don’t talk about them. I just want to give you a short story. When I started my first meeting in ICANN, I went to my government representative and they told me, why are you here? We don’t need an end-user voice. We are the voice of the citizens of the country; therefore, we are there for you. Go away. I went to the representative of the ccTLD and they told me, why are you here? We are gathering the users of the ccTLD of the country and we are the voice of end-users. You don’t need to be there. And so on and so forth. It happens that I am the only one still around, and they left. Literally, there is no representative from my government anymore in ICANN. Therefore, it’s important that we keep the voice of all the stakeholders if we want to have a multistakeholder reality. But don’t forget that for end-users, it’s not about gathering the billions of people around the world directly. We can’t do that; democracy doesn’t work like that. Therefore, it’s important that we also have places where we gather people.
Civil society and end-user organizations are really very important. And don’t forget that end-users are better organized in some parts of the world than in others, but they are organized in a lot of places in the world. Therefore, you can’t say, oh, civil society is everybody but they don’t have any organization. Yes, civil society has trouble financing participation, but we do have organizations. My next point is the question of equal stakeholders. I really feel that equal stakeholders are really important. At the end of the day, it’s not just governments who need to decide. And the São Paulo declaration is quite interesting for that, because it shows how we want to work between the two models. We don’t want to work and say, oh, the multi-stakeholder community will discuss and the multilateral process will decide. It couldn’t work like that; it needs to be more agile than that. And once again, the NetMundial+10 declaration was very interesting for that. Once again, thank you very much for your exchanges. I am sure that there is a lot more to say, and maybe some of the topics we are raising here during this discussion need to be taken into account in the next sessions, at the next IGF, whether the national IGF, regional IGF, or global IGF. And my last point: yes, we can’t discuss everything here, but a lot of things are discussed in other rooms within the IGF today and during these five days. And we need to take all that into account in our thinking. Pari, back to you.

Pari Esfandiari: Thank you very much. Thank you, Sébastien. With that, I would go to Yik Chan, please.

Yik Chan Chin: Thank you very much. I think it is a very inspiring and very interactive discussion, so I will just pick up some points from the previous discussion. I think we have a kind of debate at two levels. One is, as the first speaker said, we have to raise the bar on the demand side, not only the supply side. When we say demand side, we actually mean the individual users and also civil society. So the whole debate is about the digital divide between the government and the different stakeholders, including the users. There’s a digital divide, which can be a matter of finances, capacity building, or IT literacy. So I think this is one debate we have here. One side is about the digital divide, and the other side is about the role of the government. We talk about what the government’s role is in this multi-stakeholder process: should they understand more about the individuals and the different stakeholder groups? I think that’s the issue on the government side. But on the other hand, as end users, or civil society, or other stakeholders, we also have a responsibility, as Carol said. We need to raise the awareness of the government; it is not entirely up to the government, but also up to us as civil society or other sectors to influence the government. I think the last two speakers, Ellen and also, of course, Olivier, talked about the technology and how technology could enhance or empower us. I actually agree with him. But on the other hand, Ellen made a very interesting comment that we should get underrepresented groups involved without making them responsible for making their voices heard. I’m a bit cautious about this argument. For example, I have a son; he’s 18 years old. I think for him, I need him to be self-motivated to some extent.
I cannot take entire responsibility for his life and career. I think we need some kind of self-motivation in that respect. And I really appreciate Wolfgang’s point that the most important thing is to have a channel through which everybody can express their opinions and be heard. That channel is very important, and I think the IGF is a very important and crucial platform for us to have that kind of exchange. I think I’ll stop here. Thank you.

Pari Esfandiari: Thank you. Thank you very much. Amrita, do you want to make a comment? So on that point, I think we are arriving at the reflections. Each of you has one minute to reflect on what has been said. Maybe I’ll start with Olga.

Olga Cavalli: Thank you very much. A lot of very interesting thoughts; for me, there is no total conclusion. I think the way is the destination in all these multi-stakeholder processes. Something that came to my mind when Olivier was talking: I’m an engineer, and I was never considered part of the technical community, never ever. I don’t know why. Many times I tried to participate; no, no, no, you’re not part of it. But I’m an engineer. And there are usually a lot of lawyers in that stakeholder group. So I think we have to be careful about the society that we are interacting with, especially young people. As you said about your son, young people have a totally different way of using information and media. They don’t watch television; my son and my daughter just don’t have a television at home, and everything comes through the internet and through YouTube channels, different channels that inform them. So we have to understand how new generations will use information, so they can build upon these processes that we are building. We have to stay aware of what is happening with artificial intelligence and young people. And thank you for inviting me.

Wolfgang Kleinwachter: Thank you very much. You know, we have made certain progress in the last 25 years, because 25 years ago it was a question mark whether civil society and end users would be seen as stakeholders. In the middle of the 90s, it was a question mark. Today, and that’s the good news, civil society and users are recognized as an independent stakeholder group within the multi-stakeholder approach. The weak point, and that’s the bad news, is that this fact is partly misused by others, who use it just to show that you have a seat at the table but nothing to say, or you have weak representation and things like that. So what we are missing are procedures for how multi-stakeholder collaboration works in practice, both in negotiations, including intergovernmental negotiations and how far non-state actors are involved in them, and also in multi-stakeholder bodies. The procedures for interaction are not well defined, or are non-existent. In that respect, the NetMundial+10 multi-stakeholder guidelines, the São Paulo guidelines, are a step forward. They are not the final solution, but we now have clear criteria by which we can measure whether a collaboration can be labeled multi-stakeholder or not. So this is the next step, and I think we have to work in the next couple of years, in particular also in the context of the global IGF and the national and regional IGFs, to make it clearer, also for outsiders, how the multi-stakeholder approach works in practice. It should not be only a label which you put on a process and then, you know, use as an excuse for traditional power policy. No, it has to be different, but we do not yet have a full, clear understanding of what the multi-stakeholder approach means in practice. Thank you.

Olivier Crepin-Leblond: Yeah, I want to thank Ellen, actually, for bringing me back down to earth from my technological heights. I was thinking about a trip to India a few years ago, not recent actually. India has made incredible advances in technology and in spreading the use of mobile phones to a very large segment of its population. I remember being at the airport and a phone ringing repeatedly behind me; I turned around, and the lady who was sweeping on the side was using a smartphone, had received the call, and was speaking. And the tuk-tuk driver a little later was also on his smartphone. And I thought, wow, of course these people are able to use technology. Technology has reached a level where it’s affordable for them; there was a way for them to use the system. And I’m really hoping that the technologies that we have today around AI will be affordable and easy for people to use. And people, including those that Ellen was speaking about, the ones that are more disenfranchised, the young people, etc. I think young people have a faster capability to adapt than we do at our age, so I’m not too concerned about them. We just have to give them the chance. And giving a chance to those people that are currently not listened to, who are young and in deprived communities and so on, is not a burden for us but should be something of an asset, because they’re the ones that will also help with the change. Thank you.

Pari Esfandiari: Thank you very much. With that, I go to Ellen. Ellen, would you like to have your final reflection? One minute.

Ellen Helsper: Yes. Thank you, all of you, for following up on that. I couldn’t agree more. My final reflection actually is to position this governance discussion within a wider discourse that is going on in society at the moment, where we see a kind of disempowering discourse, a lot of what in academia we would call panics around technologies, where people feel that technology is running ahead of them. And I think one of the important things of the governance forum and other similar multi-stakeholder approaches is to try and counter this and give people the feeling again, and especially the groups that I work with, that there is still change to be made, that they can be involved, that they are not powerless in the face of the technological developments that are going on, and the documentaries that are out there about the terrible impacts that technology is having on our lives, and things like that. I think that is a really important step, without falling into undue technological optimism about creating a very rosy future. But it is important that we give back this feeling of empowerment and influence over the future of technologies. I don’t have the best way of doing that, but I think it should be a priority to make the internet ours, as in the world’s and the end users’, again, rather than leave it in the realm of dystopias or utopias that are governed by people who are very much not like most of the citizens of the world.

Pari Esfandiari: Thank you, very much, and with that we go to David. David, would you like to have your final reflection? One minute, please.

David Souter: Yes, okay. Let me come back to multi-stakeholderism; we tend to talk about multi-stakeholderism, don’t we? I think we have too simple a model of it, and we don’t sufficiently critique it. The purpose of multi-stakeholder involvement is to improve the quality of decision making and enable it to contribute more effectively to society. Sorry, can you hear me? Yes, we can. Sorry, it disappeared from my screen. To contribute more effectively to society, we need to pay more attention to a number of things. We need to pay much more attention to power structures and power imbalances, which Ellen was talking about. In particular, I think we need to recognise the vested interests within each and all of our stakeholder groups, and how they influence the discussions that we have about governance. We need especially, I think, to disaggregate the four stakeholder groups that we tend to talk about, or tend to have in our minds: government, business, civil society and the technical community. I think that’s far too simplistic. It doesn’t recognise fundamental differences such as that between the supply and demand sides of the Internet. So if you look around you in the meeting in Riyadh, ask yourselves how many businesses are here from the demand side of the Internet, businesses that make use of it to do other things, compared with how many are from the supply side, with their particular interests to pursue. And individual users are also much more complex. We need to consider them not just as consumers of the Internet, but also as citizens of their societies. There are differences between people here, but there are also differences within people in how they perceive their own context. We need to reflect on the diverse needs and priorities there, and the fact that they are often in conflict with one another. So there are conflicting needs and priorities for the Internet and its governance.
And then we need to reach out to that wider community of users in ways that they think are sufficiently relevant to them to bother taking part. In other words, if we want to hear from people, we need to listen to them and we need to create the opportunity for us to listen to them, which is also the opportunity for them to speak to us.

Amrita Choudhury: As reflections, I do agree with what Wolfgang mentioned: we have a seat at the table now, but it should not be tokenism. We need to strengthen it so that it is at least heard with legitimacy, and that’s where we need to work. And I also agree with Olga: if you want the next generation to get involved in these issues, you have to work and act with them the way they look at things. And another example I would give is that when it hits end-user interests, end users rise. In India we had Free Basics, which came in; there was a huge furore from the end-user community and civil society, and it was pushed back successfully. So when it matters, and when people understand that their interests are at stake, I think they act. So you have to create the narratives so that people understand what they would lose if they go along with it. And for the younger generation: they use technology and take it for granted, but what they miss out on, what the risks are, and what trade-offs they are making, I think you need to explain that to them. Thank you.

Pari Esfandiari: Carol?

Carol Roach: I agree with what David was saying. We need to evolve the multi-stakeholder processes that we have, and that’s something we’re trying to do with the IGF. Collaboration needs to be more effective, and we need to be more result-oriented. Not a result for one particular stakeholder; we need to come to some agreed set of objectives and then aim to meet those objectives. Each stakeholder has a different objective, but if we can come to an agreement, then it’s good. One of the persons from the media came to me and said, oh, I’m so glad that the IGF finally recognized the media as a stakeholder. And that came out of one of the meetings that we had. And when you look at it, you know, where does media fit? Are they private sector, are they civil society? They have a different angle, a different perspective, a different interest, and they have influence. So we do need to look at how we categorize stakeholders. We need to be more flexible, we need to evolve the model, and we need to look not only at the issue, but at the interest and the influence that persons, even end users, have.

Pari Esfandiari: Thank you very much. I think we had a very insightful conversation here, and as we conclude, I want to emphasize the critical role of the multi-stakeholder approach in navigating the complexities of a rapidly evolving digital landscape, and the importance of end users’ participation in shaping our common digital future. The upcoming WSIS+20 review is a pivotal opportunity to reaffirm this approach, ensuring that end users’ perspectives remain at the heart of internet governance decisions. With that, thank you all for your time and commitment to this shared mission. Thank you to our panelists, invited community leaders, and participants, both online and in person, for your engagement and thoughtful contributions. Together, let’s continue to advocate for an internet that reflects the needs and aspirations of all. Again, thank you to the support group, thank you to the technical team, and thank you to the IGF. And with that, we end this meeting. Thank you all.

D

David Souter

Speech speed

156 words per minute

Speech length

1368 words

Speech time

523 seconds

Digital divide between governments and other stakeholders

Explanation

David Souter highlights the gap in digital knowledge and capabilities between governments and other stakeholders in internet governance. This divide impacts the ability of different groups to participate effectively in discussions and decision-making processes.

Evidence

Working for the United Nations on the 20 year review of the WSIS process

Major Discussion Point

Challenges in Including End-Users in Internet Governance

Agreed with

Olga Cavalli

Carol Roach

Wolfgang Kleinwachter

Ellen Helsper

Amrita Choudhury

Agreed on

Need for more inclusive and representative internet governance

Power imbalances between stakeholders need to be addressed

Explanation

David Souter emphasizes the need to recognize and address power structures and imbalances within the multi-stakeholder model. He argues that these power dynamics significantly influence discussions and outcomes in internet governance.

Major Discussion Point

Role of Government and Other Stakeholders

Need to disaggregate and expand stakeholder categories beyond current model

Explanation

David Souter suggests that the current four-stakeholder model (government, business, civil society, technical community) is too simplistic. He argues for a more nuanced approach that recognizes fundamental differences within these groups, such as between supply and demand sides of the internet.

Evidence

Example of businesses from demand side vs supply side of the Internet at the Riyadh meeting

Major Discussion Point

Improving Multi-Stakeholder Processes

Agreed with

Wolfgang Kleinwachter

Carol Roach

Agreed on

Improving multi-stakeholder processes

Differed with

Amrita Choudhury

Differed on

Approach to engaging end-users

O

Olga Cavalli

Speech speed

150 words per minute

Speech length

1187 words

Speech time

474 seconds

Barriers like language, finances, and lack of information

Explanation

Olga Cavalli identifies several barriers to participation in internet governance, including language difficulties, financial constraints, and lack of information. She emphasizes that these barriers particularly affect developing economies and people living far from meeting locations.

Evidence

Example of language barrier in Latin America

Major Discussion Point

Challenges in Including End-Users in Internet Governance

Agreed with

David Souter

Carol Roach

Wolfgang Kleinwachter

Ellen Helsper

Amrita Choudhury

Agreed on

Need for more inclusive and representative internet governance

Governments have unique responsibilities but must understand multi-stakeholder approach

Explanation

Olga Cavalli argues that while governments have specific responsibilities, they need to understand and embrace the multi-stakeholder approach. She emphasizes the importance of governments recognizing the value of including diverse stakeholders in dialogue and decision-making.

Major Discussion Point

Role of Government and Other Stakeholders

Need to understand how new generations use information and media

Explanation

Olga Cavalli highlights the importance of understanding how younger generations consume and interact with information and media. She argues that this understanding is crucial for building effective internet governance processes that engage future generations.

Evidence

Example of her children not using traditional television

Major Discussion Point

Role of Technology in Empowering End-Users

C

Carol Roach

Speech speed

143 words per minute

Speech length

1137 words

Speech time

474 seconds

Need to avoid putting people in boxes/categories

Explanation

Carol Roach argues against categorizing people into rigid groups in internet governance discussions. She emphasizes that individuals often have multiple identities and interests that may not fit neatly into predefined stakeholder categories.

Major Discussion Point

Challenges in Including End-Users in Internet Governance

Agreed with

David Souter

Olga Cavalli

Wolfgang Kleinwachter

Ellen Helsper

Amrita Choudhury

Agreed on

Need for more inclusive and representative internet governance

Need for accountability from all stakeholders, not just governments

Explanation

Carol Roach emphasizes that all stakeholders, not just governments, should be held accountable in the multi-stakeholder process. She argues for a more balanced approach to responsibility and participation in internet governance.

Major Discussion Point

Role of Government and Other Stakeholders

Importance of being more results-oriented in collaboration

Explanation

Carol Roach advocates for a more results-oriented approach in multi-stakeholder collaboration. She suggests that stakeholders should agree on common objectives and work towards meeting these goals, rather than pursuing individual agendas.

Major Discussion Point

Improving Multi-Stakeholder Processes

Agreed with

David Souter

Wolfgang Kleinwachter

Agreed on

Improving multi-stakeholder processes

W

Wolfgang Kleinwachter

Speech speed

138 words per minute

Speech length

1107 words

Speech time

478 seconds

Importance of having channels for everyone to express opinions

Explanation

Wolfgang Kleinwachter emphasizes the critical need for channels that allow all individuals to express their opinions in internet governance. He argues that providing these channels is fundamental to ensuring inclusive and representative decision-making processes.

Major Discussion Point

Challenges in Including End-Users in Internet Governance

Agreed with

David Souter

Olga Cavalli

Carol Roach

Ellen Helsper

Amrita Choudhury

Agreed on

Need for more inclusive and representative internet governance

Need for clear procedures on how multi-stakeholder collaboration works in practice

Explanation

Wolfgang Kleinwachter calls for the development of clear procedures for multi-stakeholder collaboration in internet governance. He argues that without well-defined processes, the multi-stakeholder approach risks being misused or becoming merely symbolic.

Evidence

Reference to NetMundial plus 10 multi-stakeholder guidelines

Major Discussion Point

Improving Multi-Stakeholder Processes

Agreed with

David Souter

Carol Roach

Agreed on

Improving multi-stakeholder processes

E

Ellen Helsper

Speech speed

151 words per minute

Speech length

1541 words

Speech time

610 seconds

Underrepresentation of vulnerable groups and Global South

Explanation

Ellen Helsper highlights the persistent underrepresentation of vulnerable groups and the Global South in internet governance discussions. She argues that this lack of representation leads to decisions that may not reflect the needs and experiences of these communities.

Evidence

Mention of young people and children making up the majority of the population in the Global South

Major Discussion Point

Challenges in Including End-Users in Internet Governance

Agreed with

David Souter

Olga Cavalli

Carol Roach

Wolfgang Kleinwachter

Amrita Choudhury

Agreed on

Need for more inclusive and representative internet governance

Caution about AI models being built on experiences of those already most represented online

Explanation

Ellen Helsper warns about the potential bias in AI models used in internet governance. She points out that these models are often based on the experiences of those who are already well-represented online, potentially perpetuating existing inequalities.

Major Discussion Point

Role of Technology in Empowering End-Users

Differed with

Olivier Crepin-Leblond

Differed on

Role of technology in empowering end-users

Need to counter disempowering discourse around technology

Explanation

Ellen Helsper argues for the importance of countering disempowering narratives about technology. She suggests that governance forums should work to give people, especially marginalized groups, a sense of agency and influence over technological developments.

Major Discussion Point

Future Directions for Internet Governance

O

Olivier Crepin-Leblond

Speech speed

169 words per minute

Speech length

1463 words

Speech time

517 seconds

Potential of AI to help overcome language barriers and improve participation

Explanation

Olivier Crepin-Leblond discusses the potential of AI to address language barriers in internet governance. He suggests that AI-powered translation tools could significantly improve participation by making discussions more accessible to non-English speakers.

Major Discussion Point

Role of Technology in Empowering End-Users

Differed with

Ellen Helsper

Differed on

Role of technology in empowering end-users

Importance of making new technologies affordable and accessible to disenfranchised groups

Explanation

Olivier Crepin-Leblond emphasizes the need to make new technologies, including AI, affordable and accessible to disenfranchised groups. He argues that this is crucial for ensuring these groups can participate meaningfully in shaping the future of the internet.

Evidence

Example of widespread smartphone use in India, including by tuk-tuk drivers

Major Discussion Point

Role of Technology in Empowering End-Users

A

Amrita Choudhury

Speech speed

161 words per minute

Speech length

921 words

Speech time

342 seconds

Importance of creating narratives to engage end-users on issues that affect them

Explanation

Amrita Choudhury emphasizes the need to create compelling narratives that help end-users understand how internet governance issues affect them. She argues that this understanding is crucial for motivating meaningful participation from diverse user groups.

Evidence

Example of the Free Basics controversy in India

Major Discussion Point

Improving Multi-Stakeholder Processes

Differed with

David Souter

Differed on

Approach to engaging end-users

Need to strengthen legitimacy of civil society stakeholders beyond tokenism

Explanation

Amrita Choudhury argues for strengthening the role of civil society stakeholders in internet governance beyond mere tokenism. She emphasizes the importance of ensuring that civil society voices are not only included but also heard with legitimacy in decision-making processes.

Major Discussion Point

Future Directions for Internet Governance

Agreed with

David Souter

Olga Cavalli

Carol Roach

Wolfgang Kleinwachter

Ellen Helsper

Agreed on

Need for more inclusive and representative internet governance

P

Pari Esfandiari

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Upcoming WSIS+20 review as opportunity to reaffirm multi-stakeholder approach

Explanation

Pari Esfandiari highlights the upcoming WSIS+20 review as a crucial opportunity to reaffirm and strengthen the multi-stakeholder approach in internet governance. She emphasizes the importance of ensuring that end-users’ perspectives remain central to decision-making processes.

Major Discussion Point

Future Directions for Internet Governance

Agreements

Agreement Points

Need for more inclusive and representative internet governance

David Souter

Olga Cavalli

Carol Roach

Wolfgang Kleinwachter

Ellen Helsper

Amrita Choudhury

Digital divide between governments and other stakeholders

Barriers like language, finances, and lack of information

Need to avoid putting people in boxes/categories

Importance of having channels for everyone to express opinions

Underrepresentation of vulnerable groups and Global South

Need to strengthen legitimacy of civil society stakeholders beyond tokenism

Speakers agreed on the need to address various barriers to participation and ensure more diverse representation in internet governance processes.

Improving multi-stakeholder processes

David Souter

Wolfgang Kleinwachter

Carol Roach

Need to disaggregate and expand stakeholder categories beyond current model

Need for clear procedures on how multi-stakeholder collaboration works in practice

Importance of being more results-oriented in collaboration

Speakers agreed on the need to refine and improve multi-stakeholder processes to make them more effective and inclusive.

Similar Viewpoints

Both speakers emphasized the importance of understanding and including younger generations and underrepresented groups in internet governance discussions.

Olga Cavalli

Ellen Helsper

Need to understand how new generations use information and media

Underrepresentation of vulnerable groups and Global South

Both speakers highlighted the need for a more balanced approach to power and accountability among different stakeholders in internet governance.

David Souter

Carol Roach

Power imbalances between stakeholders need to be addressed

Need for accountability from all stakeholders, not just governments

Unexpected Consensus

Role of technology in addressing participation barriers

Olivier Crepin-Leblond

Ellen Helsper

Potential of AI to help overcome language barriers and improve participation

Caution about AI models being built on experiences of those already most represented online

While Olivier was optimistic about AI’s potential to improve participation, Ellen cautioned about potential biases. However, both recognized the significant role of technology in shaping participation, which was an unexpected area of alignment given their different perspectives.

Overall Assessment

Summary

The main areas of agreement centered around the need for more inclusive and representative internet governance, improving multi-stakeholder processes, and recognizing the role of technology in both enabling and potentially hindering participation.

Consensus level

There was a moderate level of consensus among speakers on the need for change and improvement in current internet governance processes. This consensus suggests a shared recognition of existing challenges and a willingness to explore new approaches, which could potentially lead to more inclusive and effective internet governance frameworks in the future.

Differences

Different Viewpoints

Role of technology in empowering end-users

Olivier Crepin-Leblond

Ellen Helsper

Potential of AI to help overcome language barriers and improve participation

Caution about AI models being built on experiences of those already most represented online

While Olivier Crepin-Leblond sees AI as a potential solution to overcome barriers in participation, Ellen Helsper cautions against the potential biases in AI models that could perpetuate existing inequalities.

Approach to engaging end-users

David Souter

Amrita Choudhury

Need to disaggregate and expand stakeholder categories beyond current model

Importance of creating narratives to engage end-users on issues that affect them

David Souter advocates for a more nuanced categorization of stakeholders, while Amrita Choudhury emphasizes the importance of creating compelling narratives to engage end-users.

Unexpected Differences

Responsibility for end-user participation

Ellen Helsper

Yik Chan Chin

Need to counter disempowering discourse around technology

We need some kind of a self-motivation in that respect

While not directly contradicting each other, Ellen Helsper’s emphasis on countering disempowering narratives and Yik Chan Chin’s call for self-motivation from end-users present an unexpected difference in approach to end-user empowerment. This highlights a tension between institutional responsibility and individual initiative in internet governance participation.

Overall Assessment

Summary

The main areas of disagreement revolve around the role of technology in empowering end-users, approaches to engaging end-users, and the balance of responsibilities between institutions and individuals in promoting participation.

Difference level

The level of disagreement among the speakers is moderate. While there are clear differences in perspectives and approaches, there is also a significant amount of common ground, particularly in recognizing the need for more inclusive and effective multi-stakeholder processes. These differences in viewpoints contribute to a rich discussion that highlights the complexity of internet governance issues and the need for diverse perspectives in addressing them.

Partial Agreements

Both speakers agree on the need for improved accountability and clarity in multi-stakeholder processes, but differ in their focus. Carol Roach emphasizes accountability from all stakeholders, while Wolfgang Kleinwachter stresses the need for clear procedures in collaboration.

Carol Roach

Wolfgang Kleinwachter

Need for accountability from all stakeholders, not just governments

Need for clear procedures on how multi-stakeholder collaboration works in practice

Both speakers agree on the need to address power imbalances and underrepresentation in internet governance, but they approach it from different angles. David Souter focuses on general power structures, while Ellen Helsper specifically highlights the underrepresentation of vulnerable groups and the Global South.

David Souter

Ellen Helsper

Power imbalances between stakeholders need to be addressed

Underrepresentation of vulnerable groups and Global South

Takeaways

Key Takeaways

The multi-stakeholder approach is critical for navigating the complexities of internet governance, but faces challenges in meaningful inclusion of end-users and underrepresented groups.

There is a need to evolve and improve multi-stakeholder processes to be more inclusive, results-oriented, and reflective of diverse stakeholder interests.

Governments play an important role but all stakeholders need to be held accountable in internet governance.

Technology like AI has potential to improve participation, but also risks perpetuating existing inequalities if not carefully implemented.

The upcoming WSIS+20 review is an important opportunity to reaffirm and strengthen the multi-stakeholder approach in internet governance.

Resolutions and Action Items

Work to develop clearer procedures for how multi-stakeholder collaboration functions in practice

Improve efforts to engage and include young people and underrepresented groups in internet governance processes

Explore ways to disaggregate and expand current stakeholder categories to better reflect diverse interests

Unresolved Issues

How to effectively balance power dynamics between different stakeholder groups

Best methods for including end-user perspectives without placing undue burden on individuals

How to ensure AI and other new technologies are developed and implemented in an inclusive manner

Specific mechanisms for improving accountability of all stakeholders in internet governance processes

Suggested Compromises

Combining elements of multilateral and multi-stakeholder approaches, as referenced in the São Paulo declaration

Using tools like citizen assemblies to gather input from a wider range of voices without requiring extensive time commitment from individuals

Developing targeted strategies to engage different stakeholder groups based on their interests and capacities

Thought Provoking Comments

We need to be multisectoral in thinking about it. The internet is not the end in itself, in other words, it’s means to an end.

speaker

David Souter

reason

This comment shifts the focus from technology to its societal impacts, challenging the technocentric view often prevalent in internet governance discussions.

impact

It broadened the scope of the discussion to include considerations of how internet governance affects various sectors of society and everyday lives of people.

We tend to group them a lot. So you find that the barriers that you find offline are the same type of barriers that you would find online.

speaker

Carol Roach

reason

This insight highlights how digital inequalities often mirror and amplify existing social inequalities, adding nuance to the discussion of inclusion.

impact

It prompted further discussion on the multifaceted nature of digital exclusion and the need for more nuanced approaches to inclusion.

AI will help me in that. And I’ll develop a tool for this for my own means. And I’m sure you will all be able to develop your own tools that will help you and the people around you in taking part in these issues and these discussions.

speaker

Olivier Crepin-Leblond

reason

This comment introduces a provocative perspective on how AI could potentially democratize participation in internet governance.

impact

It sparked a debate about the role of AI in governance processes, with subsequent speakers both building on and challenging this optimistic view.

We should be thinking about what kind of internet and what kind of technology we want for the future and that future should include all these experiences.

speaker

Ellen Helsper

reason

This comment reframes the discussion from reactive governance to proactive shaping of technology, emphasizing inclusivity.

impact

It shifted the conversation towards considering long-term visions and values in internet governance, rather than just immediate technical concerns.

We need to stop looking at people as being one dimensional and review how we label boxes and how we label people.

speaker

Carol Roach

reason

This insight challenges the oversimplification often present in stakeholder categorizations in internet governance.

impact

It led to further discussion on the complexity of user identities and the need for more nuanced approaches to representation in governance processes.

Overall Assessment

These key comments collectively shifted the discussion from a narrow focus on technical governance to a broader consideration of societal impacts, inclusion, and long-term vision. They challenged simplistic categorizations of stakeholders and users, emphasized the need for proactive shaping of technology’s future, and sparked debate about the potential role of AI in governance processes. The discussion became more nuanced, considering the multifaceted nature of digital inclusion and the complex interplay between online and offline inequalities. Overall, these comments pushed the conversation towards a more holistic, forward-looking, and inclusive approach to internet governance.

Follow-up Questions

How can we develop more effective procedures for multi-stakeholder collaboration in internet governance?

speaker

Wolfgang Kleinwachter

explanation

Wolfgang highlighted that while civil society and users are now recognized as stakeholders, clear procedures for how multi-stakeholder collaboration works in practice are still missing. Developing these procedures is crucial for the effectiveness of the multi-stakeholder approach.

How can we better disaggregate and represent the diverse interests within each stakeholder group?

speaker

David Souter

explanation

David argued that the current model of four stakeholder groups (government, business, civil society, technical community) is too simplistic and doesn’t capture the complexity of interests, especially the differences between supply and demand sides of the internet.

How can we create more effective channels for end-users to express their voices in internet governance?

speaker

Wolfgang Kleinwachter

explanation

Wolfgang emphasized the importance of having channels for everybody to express their opinions and be heard in internet governance processes.

How can we ensure AI and other emerging technologies are developed and governed in ways that represent the interests of underrepresented groups?

speaker

Ellen Helsper

explanation

Ellen raised concerns about AI models being built on the lived experiences of those most present online, potentially excluding vulnerable and underrepresented groups.

How can we better involve young people, especially from the Global South, in internet governance processes?

speaker

Ellen Helsper

explanation

Ellen pointed out that young people, particularly in the Global South, make up a majority of the population but are underrepresented in internet governance discussions.

How can we create more effective mechanisms for filtering up local and national concerns to global internet governance forums?

speaker

Ellen Helsper

explanation

Ellen suggested the need for better mechanisms to ensure local voices are heard at higher levels of internet governance.

How can we address the power inequalities in shaping the internet, its infrastructure, content, and platforms?

speaker

Ellen Helsper

explanation

Ellen highlighted the need to address the significant power imbalances in who shapes the internet, including the role of big tech companies.

How can we evolve the multi-stakeholder model to be more flexible and inclusive of diverse perspectives?

speaker

Carol Roach

explanation

Carol suggested the need to evolve the multi-stakeholder processes to be more effective, result-oriented, and inclusive of diverse perspectives, such as media.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #59 Towards a Greener Future with E-Waste Management

Session at a Glance

Summary

This discussion focused on the global challenge of e-waste management and potential solutions. The Digital Cooperation Organization (DCO) presented their initiative to develop a framework for addressing e-waste issues, emphasizing the need for collaboration between governments, businesses, and individuals. Key points included the rapid growth of e-waste, with projections showing it will more than double by 2030, and the low global recycling rate of only 20%.

Participants highlighted several barriers to effective e-waste management, including lack of consumer awareness, data privacy concerns, and the complexity of the supply chain. The importance of collection systems, consumer education, and achieving economies of scale in recycling were stressed. The discussion also explored the potential for reusing and redeploying electronics to bridge the digital divide in underserved communities.

Cross-border collaboration was identified as crucial for addressing e-waste challenges, particularly for smaller countries with limited resources. Participants discussed various initiatives, such as government-led collection programs and partnerships with the informal sector. The need for better data collection and standardized metrics for measuring e-waste was emphasized.

The DCO presented a framework for governments, focusing on regulation and policies, financial instruments, awareness and capability building, and infrastructure development. The discussion concluded with a call to action for all stakeholders to take responsibility and contribute to creating a more sustainable digital economy.

Key Points

Major discussion points:

– The growing problem of e-waste and its environmental/economic impacts

– Challenges around e-waste collection, consumer awareness, and data privacy concerns

– Potential for reusing and recycling e-waste to bridge the digital divide

– Need for collaboration between governments, private sector, and NGOs to address e-waste

– Developing policies, infrastructure, and economic incentives for e-waste management

The overall purpose of the discussion was to raise awareness about the e-waste challenge, gather input from diverse stakeholders on potential solutions, and promote collaboration to develop more effective e-waste management practices globally.

The tone of the discussion was informative and collaborative. It started out more formal with presentations on the e-waste issue, but became more interactive and participatory as attendees were encouraged to share their perspectives and ideas. There was an emphasis on collective responsibility and finding practical solutions together.

Speakers

– Alaa Abdulaal: Representative from the Digital Cooperation Organization (DCO)

– Syed Iftikhar: Representative from DCO

– Arianna Molino: Sustainability specialist at Kearney, collaborating with DCO

– Mohamed Mashaka: From United Republic of Tanzania

– Ayman Arbiyat: From Jordan

Additional speakers:

– Noia: From Tuvalu (Pacific island country)

– Dr. Nagwa: From the Academy in Egypt

– Abdul Aziz: From CST (KSA regulator), mentioned but did not speak directly

Full session report

E-Waste Management: Global Challenges and Collaborative Solutions

Introduction

The Digital Cooperation Organization (DCO) hosted a discussion on the critical global challenge of e-waste management, bringing together representatives from various countries and organizations. The interactive dialogue, which included audience participation through Slido polls, focused on the rapid growth of e-waste, projected to reach 74 million tons by 2030, and the current low global recycling rate of only 20%. Participants examined barriers to effective e-waste management and explored potential solutions, emphasizing the need for collaboration between governments, businesses, and individuals.

Key Challenges in E-Waste Management

1. Growing Environmental and Health Risks

Speakers, including Alaa Abdulaal from DCO and Arianna Molino from Kearney, highlighted the increasing volume of e-waste and its negative impacts on the environment and human health. E-waste was identified as a significant contributor to climate change and pollution.

2. Lack of Consumer Awareness

Mohamed Mashaka from Tanzania emphasized the critical issue of public awareness, noting that many citizens are unaware of the impact of e-waste on various initiatives. This lack of awareness was identified as a major barrier to proper e-waste disposal and management.

3. Data Privacy Concerns

Ayman Arbiyat from Jordan raised the issue of data privacy concerns when disposing of electronic devices, recognized as a significant obstacle preventing individuals from properly recycling their e-waste.

4. Complexity of Supply Chains

Arianna Molino highlighted the complexity of e-waste supply chains as a major challenge, based on participant responses. This complexity makes it difficult to track and manage e-waste effectively throughout its lifecycle.

5. Unique Challenges for Small Island Nations

An audience member from Tuvalu brought attention to the specific challenges faced by small island nations due to their geographical isolation, emphasizing the need for tailored solutions in different contexts.

Proposed Solutions and Strategies

1. Comprehensive E-Waste Strategies

Mohamed Mashaka called for the development of comprehensive e-waste strategies and guidelines to address the multifaceted nature of the problem.

2. Improved Data Collection and Measurement

Dr. Nagwa from Egypt highlighted the importance of accurate data collection and measurement in e-waste management, recognizing its critical role in effective policy-making and understanding complex e-waste supply chains.

3. Promoting Reuse and Repair

Arianna Molino advocated for promoting the reuse and repair of electronic devices to extend their lifespans and reduce e-waste generation. Audience members raised concerns about the quality and reliability of refurbished electronics.

4. Leveraging Technology and AI

An audience member suggested exploring the use of artificial intelligence and other technologies to support e-waste management.

5. Multi-stakeholder Collaboration

Speakers emphasized the importance of collaboration between governments, the private sector, and NGOs to address e-waste challenges effectively. The need for cross-border and regional collaboration was also highlighted.

6. Policy and Regulatory Frameworks

Participants discussed the need for e-waste-specific regulations and standards, including extended producer responsibility policies and financial incentives for proper e-waste management. The importance of harmonizing cross-border e-waste regulations was noted.

7. Consumer Education and Awareness Campaigns

Multiple speakers stressed the need for increased consumer awareness and education to address concerns and promote proper e-waste disposal.

8. Cross-border Initiatives

The discussion included potential initiatives for global e-waste management, such as global regulation and responsible recycling certification.

The Role of the Digital Cooperation Organization (DCO)

The DCO presented a framework for governments focusing on four key components:

1. Regulation and policies

2. Financial instruments

3. Awareness and capability building

4. Infrastructure development

This framework aims to guide governments in developing comprehensive strategies to address e-waste management challenges. The DCO emphasized its role in facilitating collaboration and knowledge sharing among member states to tackle the global e-waste problem.

Specific E-Waste Management Initiatives

Participants shared examples of ongoing e-waste management efforts, including:

– The KSA government working with the social sector to collect devices and ensure privacy

– Tanzania’s efforts to develop comprehensive e-waste guidelines

Closing Remarks

Alaa Abdulaal and Syed Iftikhar from DCO concluded the discussion by emphasizing the importance of collective action in addressing the e-waste challenge. They encouraged participants to take personal responsibility for proper e-waste disposal and to contribute to creating a more sustainable digital economy. The speakers reiterated the DCO’s commitment to supporting member states in developing effective e-waste management strategies and fostering international collaboration on this critical issue.

Session Transcript

Alaa Abdulaal: issue. As the digital economy continues to grow, connecting billions of people, it also has oppressing environmental consequences of this progress. The rapid growth of e-waste is really becoming more and more, with more consumers of electronics and mobile phones and home devices. All of this leaving us with this challenge that we want to tackle and we really need to look at it as a shared responsibility, not only by government but by individuals, by ourselves. It is the responsibility upon everyone in this room and even listening to us to take this challenge and really think about it. Because imagine that there is a lot of e-waste that is not being recycled. Look at the lost opportunity, not only from an economic perspective but also from an environmental aspect. There is a lost opportunity here of all of this e-waste not being recycled and having those devices reaching to places where there is a need for it. We are also now facing a challenge of affordability of devices. So why not seize this opportunity and look at this e-waste and see how it can be recycled, how it can be managed. And again, as I said, it’s not a government responsibility or a private sector responsibility. I believe it’s a shared responsibility on each individual. We are the ones who are consuming those electronics, those mobiles, those devices. What are we doing with them? If I ask to raise hand, how many devices do we have more than one in the room? Who has more than one device in the room? Yeah, a majority of the room is raising their hand. What did you do with your old devices? How many times are you buying new devices? So it’s just to think about this. And for us as the digital cooperation organization, because we are looking at how to have that inclusive and sustainable growth of the digital economy, we saw that this is an opportunity for us to gather stakeholders and to look at this challenge and see what we can do. And specifically from a cross-border e-waste management. 
Because DCO is really committed to this mission. And through our e-waste management program initiative, we aim to foster circular economy in the ICT sector, advance cross-border solution and leverage technology to mitigate environmental harm. And today in this workshop, we want to share our work and we want to hear from you and to give us insights on what we are doing. Because we believe in a multi-stakeholder approach. We believe in learning from other and listening from experts like the one in the room. And for us to have that comprehensive solution. Because again, we as DCO, we really believe that we want to give a fair opportunity for each person, each nation, each business to prosper in a cross-border inclusive digital economy. So thank you, everyone, for being here today. And looking forward to hear from you to be engaged in this interactive workshop. I want to give the floor to my colleague,

Syed Iftikhar: Dr. Sayed Iftikhar, to have his word. Thank you. Thank you, Ms. Salah, for giving more insight about DCO and particularly the e-waste management initiatives. So basically, first of all, I give some introduction about the DCO more in detail because DCO is established in November 2020 and we are aggressively working on different aspects. So DCO is a unique, multilateral, intergovernmental organization. And considering to support the stakeholders, particularly the government businesses and the individuals, for emerging areas, particularly in terms of digital economy. So one of the challenges is the sustainability. And we’re working, cooperating with our member states and countries of the world on how we tackle this challenge. So DCO, as per the structure, we have 16 member states. We also have observers, more than 40. These observers are from international organizations, private sectors, and NGOs as well. So we also have a partnership with international organizations like UN, World Economic Forum. So this is somehow our overall structure. So we have core functions. We are an information provider. We are educators. We are facilitators. So this is somehow the digital cooperation organization structure. As I said, we have 16 member states and we represent 800 million populations. And notably, the 70% of the population is youth. So as we have youth, and youth are more focused on the digital areas, particularly the digital devices. So that’s why we need to care about more on the sustainability aspect from environmental aspect, from the health aspects. Our member states have a GDP about USD 3.5 trillion. So you see on the screen first, you see a word DSA, Digital Space Accelerator. Basically, it’s a working group of think tank, researcher, policy makers, and individuals. And we gather this working group on different international forums. In this year, we also organized different roundtables. What exactly the objective of this DSA? 
DSA is to focus on what exactly the emerging issues in digital economy and how we solve these challenges. We keep all these stakeholders on board to discuss, to co-create the possible solution to tackle the emerging challenges in digital economy. And one of the challenges is sustainability. So why we keep this DSA program? Basically, it has impact. We want to create impactful solutions. Another, we want to present our organizations as a credible source of information. And we also want, because our name is more focused on the cooperation, we want to expand the cooperation as well. In this year, we have different topics, and one of the topics is on e-waste management, and particularly focus on the cross-border e-waste management. In 2024, DCO focused on sustainability, and from sustainability, we are focusing on e-waste management program. So this program, as I told you, the DCO is more focused to concentrate on to address these challenges through stakeholders. And how we manage this challenge? So there are different ways. One of the things is we want to reduce this e-waste, number one. Second is that we want to leverage the economic value of the e-waste. And definitely, we also focus on the digital inclusions. So the scope of this project is mainly to analyze the best practices and to utilize these practices to tackle the challenge of e-waste. And the second major objective is to co-create the framework, holistic framework. It covers all aspects, regulatory aspects, capacity building aspects, financial dimensions, and how we promote the digital inclusion. So this is also one of the key scope of this project. So I stop here. I invite our experts, Ari, to give you more insight about the detailed projects. Thank

Arianna Molino: you. Ari, please. Hello, everyone. Nice to meet you. I’m Arianna Molino. I’m a sustainability specialist at Kearney. And I’m helping and collaborating with the DCO on this, I think, very passionate and very interesting initiative. Because as also Ms. Ala was mentioning, it’s an urgent problem. We love technologies. I think we’re addicted to technologies. And our kids will definitely also use more and more of it. So if we want really to have a digital economy, we need to tackle these issues and making sure that it’s sustainable. So for the today’s session, what we would like to do, it’s really have an engagement with you, as mentioned, giving your expertise. And also I see I hope that from different parts of the world, it would be great to get your perspective. So on one side, we want to discuss the importance of e-waste in your countries or for your sector. The second objective would be to also try to link the environmental issues and the social benefits. Right? So how do you see the reuse of devices to tackle digital to bridge the digital divide? And then third, we believe in collaboration. I mean, it’s a little bit obvious because DCO is known for cooperation. But we really believe that you cannot do a work independent. It’s a complex ecosystem. If you start working and alone, you will definitely not reach economy of scales. Profitability is an issue. And so this is where we really encourage to have discussions and to understand your lesson learned. So through collaboration, what did you gain? And what did you learn? And also the challenges and how we can collaborate together. Miss Hala mentioned it’s national focus, but also cross-border. Because I think it would be great here in this forum, where it’s global, to understand if it’s possible to have collaboration between countries. Cross-border e-waste trade is under the Basel Convention. It’s regulated. And we definitely need to be responsible in how we trade e-waste. 
But also we believe that there is a lot of potential in order to leverage technologies and making sure that we don’t duplicate infrastructure around the world if we can work together to reach efficiency. So these are the main objectives. We wanted to have introduction, but I see that maybe we are a little bit too many. We will have a poll online, where we will start understanding from which country you are, from which also sector, et cetera. But I really encourage you, if you want to intervene, please do. Because really, we see that if you start talking to each other, at the end, it would be really amazing if you start bouncing ideas among you. We are here just to facilitate. But really, if you go out from this session with more energy and more hope for setting up a business or really scaling up your efforts in terms of policy, that would be amazing. I totally agree with Arianna. It is important. We are here to listen and hear from you. And without your interventions and feedback, this session will not be successful. So yes, I really encourage you to be interactive and share your insights with us. Yes. And of course, we have microphones. So we can also be a star with microphones and sending it around. So also to set up a little bit the agenda, after this brief introduction, we will talk about e-waste, probably you know already the basics. But just to remind us about the big numbers, the big pictures, to kind of phase out and understand what is the environmental impact, the social impact, but also the economic potential. And then looking into the value chain. Because we know that the issue is not just recycling, right? It’s like collection. It’s kind of sorting. It’s really taking the private sector together with the social sector and the informal one. Because we need to remember that at the end, each country is going through a certain level of involvement. So we really would like to see and discuss with you the value chain itself. 
Second part would be on digital divide and e-waste. I think we need to remind ourselves, but before recycling, we can reuse and redeploy, right? Because circular strategies, you need to close the loop early on. And it’s a great potential to use this in order to then create a new market, to give access to a potentially part of the global economy. Part of the population that cannot buy a new iPhone every two years, right? So we’ll talk a little bit about that. And finally, cross-border and the national collaboration. Here would be great. We will see if maybe we can split into groups or if we have enough energy. We can also see among you if you want to raise the hands in terms of potential ideas that you have grabbed or that you have also experienced and you are doing in your organization or country. So again, raise the hands if you want to intervene. I think everything is very valuable. So really, don’t shy away. So e-waste. This figure was already mentioned by Ms. Ala, but just to tell you about the growth, right? In 2010, it was estimated. Of course, there is no really precise data, but estimated $34 billion. This is projected to more than double in 20 years. So I’m not sure if you have kids or not, but I started to do the projection for my kids and was like, OK, if I don’t start to tackle it now, on one side, we will have plenty of waste, but also we will not have raw material. So I’m not sure if my kids will have a phone when it will be 30, 40. We can debate if it’s good or not for kids not to have phones, but in general, the growth is very, very scary. On the other side, you might say, OK, but if the recycling rates are going up, at least we are tackling it. Unfortunately, that is not really a good news because recycling rates are not going up as the e-waste is going up. So now, on average, in the world, we have almost 20% recycling rate. The ambition is to arrive to at least 60%, 80%. 
So just to clarify, we know that 100% definitely will not happen in five years, right? So the international organization are really trying to push their mission at least to 60% and 80%. And we will see what is the impact there. So if we then say, OK, why we are doing it? On one side, environment. If you do not recycle well, it’s not just that you pollute the air, the land, the soil, the water, but also there is risk for the well-being. So that is where you have the environmental impact that is affecting both the environment and the society. And here, there are estimates of 145 billion of CO2 emissions in the environment. Is it 50% of the global emission? No, but this really is contributing to it. And we have high potential to lower it down to making sure that we close the loop in this ICT sector. There are informal workers that are affected, 11 million. I want really to hear your opinion on informal sector, because talking with UNITAR and with different organizations, they will say, like, informal sector is great, because at the end, they are really embedded in the society. They are really working in different areas of the city. So they are not per se wrong, or you need really to dismantle it. But you need to help them to follow probably some compliance rule or any way to be careful of the environment and the people. So here, it was mainly the environmental impact. But then also, if we think about the social impact in terms of digital divide, and we try to close the loop early on, we really redeploy in just 1% of the smartphones. And we are like 5 billion smartphones in one year. 5 billion, right? Like, it’s just to let it sink. We can really help 50 million people, at least. So that is where we are in this forum. There is a lot of talks about bridging digital divide, how to bring internet in the rural areas, et cetera. And it is part of it. Another part is the device. 
Of course, if you have the device and you don’t have internet, I mean, you’re solving partially the issues. But in this workshop, it would be great to also get your opinion about this topic. Again, people are a little bit skeptical about it because they are saying, normally, you pretend it is a redeployed device and you send it to the south of the world. But in reality, it’s just e-waste, right? So also, how you make sure that you are doing this app cycle in the right way that touch the people in need. And then economy, because on one side, driven by the environment issues and the social causes, it’s important. But on the other side, I think it’s also great to remind us that we are talking about GDP impact. If you have more people affected in terms of health, they will go to the hospital. You have a cost on that. If you have impact on the soil, on long term, you will have also impact on agriculture, on your economy. If you have a climate change, you will have impact on the different disasters that we see now, where sometimes, especially if you have Islam or the different cities that are not set up for this climate disaster, you have a human impact and economic impact. So here in the Global E-Waste Monitor 2024, you can see an interesting calculation that at the moment, with this 20% recycling. rate, we are losing globally 37 billion. But as soon as we start recycling, we can have a net positive impact. Why? Because on one side you are creating a market. You’re creating economy activities. Then less people are healed. So less people go to the hospital and then have to pay for medical care. And then on the other side you have less impact on the environment. So less pollution and green gas emission. It is true that this type of concept relates more to government, right? Because at the end, the government is the one that will allocate funds for also the health care, funds for the environment, et cetera. 
So this is really to also arrive to government level and tell them, investing even more in the private sector, social sector doesn’t really stop into the e-waste management, but goes beyond at your national and global economy. So here we have done a couple of workshops. One in London last week, where we had more kind of private sectors and NGOs coming in and tell us their point of view on the policies and their challenges. We had also in Singapore and with also private sector and policymaker that we’re discussing how the e-waste is done in Malaysia and in Singapore. And finally, also here in Riyadh with the GCC countries where we focus mainly on kind of policies. And what we have identified are five main kind of best practices. On one side, collection system is important. We saw some countries that invested into recycling infrastructure, but they don’t have feedstock. So then you don’t have, you know, your economy will like profitability will not stand. So collection is important. Another point is if you want to, if you raise awareness with consumer, but don’t give them the access to recycle, to like bring the device, then, you know, like they are, okay, like you are telling me to do it, but I don’t know how to do it. So, you know, you lose the momentum. And so, yeah, the second one is consumer awareness. Once you have at least set up some access point, right? And as also Miss Hala was saying, consumer is also us. So I was also discussing with my colleagues that I’m not sure you, but I have a bag in my home with all the cables of the old phones, et cetera, that I, with this project, I started on saying, okay, where I have to put it, how, what is the impact, et cetera. But it’s not something that everybody knows. And especially the collection system, all this system will really change from Spain to Saudi, for example. Economy of scale. 
In a lot of countries, both private and public sector, told us that they want to do recycling facilities, but they don’t have enough feedstock to make it profitable. And that is where on one side, okay, if you increase the collection, most probably you will be kind of profitable. But on the other side, this is where you maybe need on one side to think broadly on collaboration and how potentially you can work with other countries or other part of municipality, et cetera, to then consolidate and making sure that you’re not setting up small facilities around the territory that no one is profitable, and then everybody needs to close or have subsidies, right? So economy of scale is important. Private sector. I think talking with the private sector, they really want to get involved. Of course, there are a lot of opinion about policies. Like, okay, this we need to change. This is about legacy technologies. It also has to be updated. This is too strict. This is too broad, of course. But I think what’s the key message that came from this workshop is that government alone cannot centralize, because e-waste is also spreading the territory. So you need to enable, but then the private sectors also have the appetite to come and then to build, you know, strong supply chain and value chain. And last but not least, regional collaborations. So here is where, you know, like, depending also on the regional policies and the regional relationship, you can really think on how I can work, for example, between Saudi Arabia and Oman to making sure that if I specialize on recycling on the one type of e-waste or one type of battery, then I will receive the feedstock, but then I don’t have to do the recycling of that type of material, because then I can collaborate and then I can send it there. So this exchange and this promotion of e-waste cross-border trade, I think it’s important. But also sharing lessons learned. Because there is no one system that fits all. 
First of all, we are very different in terms of society. Like doing some policies in Europe, it’s totally different doing policies in Ghana and doing policies here in Saudi Arabia, right? So this is where maybe collaborating between the GCC countries that can help saying, okay, what did you do that works? What did you do that didn’t work? So maybe can we try to test it together? Can I try to learn your governance, your policies, your EPR implementation? So this is really a call to action on collaboration. Now I will stop talking. Maybe before doing the Slido, I want just to open the floor for maybe reactions. I know that you need to be brave to be the first one talking, because everybody is like, as soon as I share, like someone wants, okay, perfect. We

Mohamed Mashaka: have a volunteer. Thank you. Hello, yes, my name is Mohamed Mashaka from the United Republic of Tanzania, which is an Eastern part, it’s an Eastern part, East African country back there. One of the areas that I think we are really facing as a challenge is the literacy to our citizens, because most of these citizens, they aren’t aware of the impact of this e-waste on different initiatives that they are doing. Maybe the issue is we are trying to look on the case studies whereby the countries have these strategies towards the e-waste, and probably the guidelines towards the safe usage of this ICT and all infrastructures so that they can be aware of that. So one of the biggest challenges that I’ve seen, which is actually coming across, is the literacy level of the people, and how effective we are really going to do it. So I think there is a need to have an e-waste strategy, and probably a guideline for the e-waste as well. And the awareness campaign that needs to go to the people, because as you have mentioned, we have the low-income people, the people who are actually in the informal sectors, they are the ones who are much more affected into this. So there is a need to increase much effort into it. So we really appreciate that. That’s the comment I wanted to add. Thank you. Thank you very much. Maybe we have someone that wants to…

Ayman Arbiyat : Hello. Good morning. My name is Ayman Arbiyat from Jordan. First of all, I am not an expert in e-waste, but I like the concept. So I would like to ask you about the best practices. In the last slide, please, because I have the same issue. I have some e-waste, but I don’t know where I can send it to any organisation, how we can collect what is the effective collection system. For the next best practice is about consumer awareness, and maybe I will raise some privacy concerns. Because, let’s see, when I talk to my wife or to my colleague, they say this phone has my photo, even if I delete it. But we still believe that this device still has many photos or many personal information, and for that we keep it in the home instead of recycling it. Thank you.

Audience: Anyone wants to… Yes. Okay. I wanted to say ladies first, but we are two ladies, so… Hello. My name is Noia. I’m from the small Pacific island country of Tuvalu. I… Very nice presentation. You know, the Pacific is very vulnerable to climate change, and you mentioned something about climate change and how e-waste contributes to solving all that. We have very unique challenges due to our geographic location and our isolation. We have very limited infrastructure and resources are constraints as well. My question is around the recycling and reprocessing. Are there any cost-effective recycling solutions suitable for small-scale and more decentralized systems in the Pacific? Also, can e-waste be repurposed or up-cycled locally to create some economic opportunities, in a way, because we are too far away from where the e-waste are processed and shipping is also a challenge for small Pacific Island countries. So reprocessing and up-cycle of those waste can be… We are looking at leveraging as an economic opportunity. Thank you. Thank you. My name is Dr. Nagwa. I’m from the Academy from Egypt. Actually, my question is related to if there is any intention in your organization to measure the e-waste, especially that what I noticed that all the figures presented are estimated figures and for globally. It’s worse, and it’s not a good thing, and it’s not a good thing. It’s worse, and it would be useful to think about following the methodology in order to measure the e-waste, as well as to see the impact when you set some policies or strategies and implementation of these policies. You can see how how there is improvement in place due to the implementation of these policies. So I think this is very important, not only for the Kingdom, I think for maybe for the region as well, because you can, after this, you can also share this with the others, whether it’s on international level or on the regional level as well. 
Thank you very much, and congratulations for such wonderful work for Engineer Ela and for you, Ariane. Thank you.

Alaa Abdulaal: Thank you. And also, thank you for all the different questions. I will try to address very briefly. I think then during the presentation we will see also other kind of example of initiatives, and I will also encourage you to reply to one another, right? So if you know some awareness campaign that works, please raise a hand and say in our country we did this and was successful, etc. So I think we have one question about how you do consumer awareness. Another one, it’s about collection. How does it work in data privacy? And then it’s how does it make, like, what are the different recycling processes that make sense for a small community or small economy? And then can you hear? Yeah. So consumer awareness. Short answer, it takes time. It’s not something that you can do from one day to another. I like the sentence repetition is communication. You need to hear one message once, twice, three times before really understanding and reaching all the population. I think what works well is government awareness campaign, but mainly working with the NGOs that are on the ground. That is where you need to really access different type of channels. So it’s, of course, internet. A lot of us are now on the web. It’s about also on the ground kind of workshops of this type of events that we are doing, but then going face to face with the informal sectors and try to make them aware of the challenges. Of course, the more on the ground you are, practical example you will need to give. You cannot go to a small area, a small village in in Zambia and say, you know, the impact of e-waste is 58 billion. That will not work, right? You can make an example of if you don’t recycle, then it will impact the health for the lungs, etc. So that might be a little bit more concrete example. Yeah, it takes time and you need to leverage a social sector with the different channels. Collection system. Great question. So on one side private sector, social sector and government. 
Private sector, now with some policies of EPR, of extended producer responsibilities, a lot of companies need to take your phone back. So the first thing is you want to give me the iPhone back. So if you want one, I’m talking about the iPhone, but sorry, Samsung, you want a phone, you need to give the other phone back. And this is the reverse logistics, right? We’ll not go into the discussion if it makes sense or not economically for the producers, but this is one channel. The second one, again for the private sector, they sometimes put the different boots where you can drop the technologies. Of course, if there is data security, we’ll talk later, you might not be comfortable just to put it in a box. But then there are other kind of cables where you might be fine with it. Social sector or kind of organization, now there are organization that on behalf of the producer will then organize the collections. So that is where it is important again, and that is spread on the territory. And then it’s very clear who does it and what is the purpose for this. So that is NGOs that of course is connected with a consumer awareness, but then accessibility. And then the last one is government. I saw that before here was Abdul Aziz that is from CST, from the KSA

Arianna Molino: regulator, and they did a great initiative to recycle my device when they tackled both of the questions you had. First of all, they saw that in KSA, but I guess in most of the countries, there is a lot of concern about privacy. Where my photo will go, right? Even if I delete it, I also don’t know. If I delete it, I’m not sure if they’re probably there, right? There’s always some magic gig that resuscitates stuff. At least when I delete, I hope that there is someone that will do for me. So they see that in some countries, government have the trust and so consumer are okay to handle the device to the government. And so they will reassure you that the data will be the first one that will be kind of erased in a secure way before handling over to another actor in the supply chain, right? Now, this doesn’t work in China, for example. China, they don’t trust to give to the government, right? They trust the private sector more than the government, correct. So then you have to understand in your country, what is the culture, right? I’m Italian. I’m not sure if I will trust my government, right? But again, also private sector. I’m questionable. But for example, in China, we know that private sector will have, you know, better trust. In the London workshop, there was a specifically one company that was dedicated on erasing data and make it secure. So that is where you need to build trust. You need to build the technologies, etc. But long story short, this is a pain point. This is a pain point that, yes, we need to be addressed. And we cannot just think that, you know, once you know where to throw your phone, you’re happy to do it. For the privacy perspective, there are some government policies as well. If we see some advanced countries, they have policies for when they export the e-waste, particularly the electronics. So they mention specific clauses, like they need to discard the hard disk, and this is where the data is stored. 
So usually in the government and national level policies, they mention categorically to don’t export the storage devices. Yeah, indeed. So it started with policies and then with the implementation, right? So, as always, then there are people that say how are you making sure that the policies really get enforced, right? That is another, you know, topic that we can discuss later. But then going maybe to the question of what small economies or small islands can do, I think upcycle and recycle is the first thing that needs to be done. First of all, because you close the loop earlier and you extend the life of the device. Of course, there are some cultures where secondhand is better accepted than others. And yeah, we know that in Africa, for example, they are really super good in it, right? While in Italy, to be honest, if I have something broken, I will always go to my grandpa because the whole generation knows how to do it. And I’m the one that is not really comfortable sometimes to fix, you know, different devices. So first of all, it’s upcycle and recycle. And then what recycling business is more profitable or makes sense? I would say focus on dismantling, because then you can extract the plastic. And then the plastic can go in any plastic recycling.

Audience: Or then extract in the glass. So the glass can then go not only on e-waste recycling, but can be on glass recycling for other industries. Probably you will not have the volume to do batteries that you will need to. But good news is that critical raw material are more and more a hot topic nowadays. So you can translate it in an economic advantage. I can sell you that. Because soon, if we continue like this, we will not have raw material. So in order to make batteries for a new e-vehicle, we will not have lithium, et cetera. So you will need to recycle. So this is where they will come to have kind of feedstock. Of course, logistics might be an issue. But that can be, again, something to evaluate, or at least to see the numbers in order to do the business case. And last but not least, data. So you’re talking with a consultant that needs to do data collection. So I feel the pain, right? So we try to go to different countries and different representatives and say, OK, we go to the source. You as a government that look at e-waste, can you please give us country data? And that is where you understand that, first of all, there is no common understanding of e-waste. The definition, you always have to step back and say, no, no, no, it’s not just phone, right? Like, six categories, et cetera. So, ah, OK. So this is where maybe it’s not one department. Maybe there are two, three departments. And then, to be honest, in this region, they are still kind of step towards it. KSA now, in the last two years, is launching one initiative for data collection, but also mapping all the different supply chain on waste in general. So that is where you can start having a little bit more granularity. But then you can see some efforts also using technologies, because, again, I think ICT can really be leveraged for this type of problems. 
And so they have platforms where maybe you’re not really tackling the data across the value chain, but you’re putting together maybe the collectors with the recyclers that has to exchange e-waste.

Arianna Molino: So at least you see all the transaction, and you start really understanding what’s going on, at least in that part of the supply chain. So this is happening. There are different countries that are doing it. And also e-waste trade, because they have to enter your border and exit. At least in that transaction, you can start to have control over it. Now, issue, classification. Some do not classify as e-waste, also because most of the time it’s illegal. They’re ban on export, ban on import. So they are sometimes wrongly defined as electronics. So that is where, of course, you will need more enforcement. But yeah, we are also thinking about it. So with Ms. Ala and Dr. Saeed, we are also trying to tackle this and maybe give some lesson learned in 2025. So conscious of time, but thank you for that. I hope we discuss. And of course, later on, we can also have a better, deeper discussion. Now, Dr. Saeed told me that you are a little bit late. Italian style, I’m talking a lot. So now, we are in IGF. So now, yes, you can take your phone. I think some have already anticipated this moment. I already have the phone. Slido, you can scan this code. And you can access it. Ideally, it will not fail. That would be magic. And the first question is pretty simple. I think everybody know the country where they are from. Now, if you are from one country and you’re representing another one, up to you what you want to put. Yeah, and we will see the different results. And I’m really happy that we see different country represented here. So Thailand, Maldives, Tonga, Netherlands. Maybe I know who is from Netherlands and Saudi. We have five participants. I think in this room, there are a little bit more than five. So I encourage everyone, if it’s possible, to participate. It’s anonymous, as you can see. So it’s really to start. Then later on, we will try to connect the different solutions, different issues to the. Sorry. Two value. Two Netherlands. Good. Rwanda, Tanzania. 
Perfect. OK, and you can join the poll also later; there will be a couple more questions. So the next question is: which sector are you representing, the private sector, the public sector, or the social (third) sector? This will also help us understand whether you’re more interested in policies, in solution implementation, or in awareness and collaboration for impact. Great. OK, we have the public sector, which is almost half of the audience, and then also the private sector. The social sector is a little smaller. So, if you know there are policymakers on one side and businesses on the other, I really encourage you to discuss, because that is where we see the most interesting debate. Policymakers need to arrive at some compromises in defining the regulations, but then you have here some of the businesses that really have to implement them and really feel the pain, right? And you’re from different countries, so it’s not that anyone is pointing at you specifically.

Great. So, e-waste. If you had to pick one or two words about e-waste after these 40 minutes of discussion, what would they be? There will probably be some repetitions; this is really to test which part of e-waste you are more concerned or passionate about. I like “infinity loops”. Wow, the “infinity loops” author needs to run this type of workshop, because that wording is very good for so many different awareness campaigns. And “used electronic material”, “collection”, “environmental justice”. I like that: just transition, environmental justice, “extend lifespan”. As I was mentioning, it’s not just a matter of recycling; we need to extend the life of the devices. Great. So reuse, recycle. Perfect. Now we want to test the issue as you see it: how important do you think e-waste is in your country?
And it’s not that you have to say “ah, it’s horrible” just because I did the presentation, right? If you feel that it’s being tackled, or that there’s not enough volume, really say so. So we have “not yet significant, but increasing” and “moderately significant”. You see, after this presentation it’s not that everybody thought it’s super significant. I think that is a good takeaway: the sense of urgency is not there yet, either because of the facts or because of perception. Of course, we need data to arrive at a conclusion, and we will have to understand this case by case for each country. So: it’s not significant yet, but it will come. Better to be prepared and proactive rather than reactive.

Right. So, here, about collaboration: are you collaborating or not? In other workshops this was a big debate. Some say: it’s super complex, I want to collaborate, but I don’t know with whom. Others say: I base my business on collaboration, because it’s very complex and I need to be interrelated with other organizations in order to be successful. Yeah, as in other workshops: “I don’t collaborate, but it would be great to do it.” And this is also for the policymakers. For example, Ghana is fairly mature in terms of policies; they have a lot of discussions with the informal sector and the private sector. Now, there are critics who say they made a lot of policies, but then the private sector is not able to implement them, or they cannot be enforced; still, there is this culture of workshops and working groups to enable collaboration. On the other hand, there are countries, like Oman, that are only starting to work with the private sector.
OK, great. So you want to start, and you have a lot of people here who are potentially passionate about it. You will probably not set up a waste trade between, say, Tanzania and the Netherlands; that would not be the most effective, also in terms of the emissions impact. But still, feel free afterwards to get to know each other and see if there is any potential option. Maybe going back: “I have already explored and activated some.” I think it’s telling that no one has a lot of active collaborations. This says a lot about the maturity of the system and how much we are still working in silos. I’m not sure if someone wants to talk about a collaboration they have set up so far: with whom, and mainly why. The driver is important. Why are you doing it? Is it because of economic profitability, is it operational because you need feedstock, or is it to design the solution together? No volunteers? OK, maybe later on someone will be adventurous.

Now, barriers. Barriers to scaling up collaboration. We said: maybe I want to start, maybe I want to do more. I think there was at least one person saying “I’m not interested”, but for those who would like to scale up collaboration: why are you not doing it? Is it because you don’t have funds, or there is no financial mechanism that helps you collaborate? Is there a lack of awareness or of policies? Do you lack data, so you don’t know where to turn? Lack of infrastructure and ICT enablers doesn’t seem to... OK, it’s jumping up, so I will hold off on conclusions until everybody has replied.
What we have seen here is that financial drivers and mechanisms are not really aligned with the profitability of the value chain. A comment we had in London last week was that, financially, it’s not really profitable to, for example, reuse and repair: it costs more to reuse and repair, so what you do is just recycle. And that is something that, as a government, you need to adjust, because you want to close the loop early on. Again, if someone wants to raise their hand with questions or comments, please do.

OK, so awareness. As with the first comment here, awareness is the first barrier, together with investment, and that is where, around the world, there is still a lot to do. I think we have just a couple of questions left, and then we can move on to the digital divide. What potential initiatives would you recommend? If it’s too long to write, we can also just open the floor for discussion. To give some examples: there was a collaboration in KSA where the government worked with the social sector, which would collect the devices, and the government would be the one guaranteeing that privacy is respected, and would then also connect with the producers; or telcos doing awareness campaigns for their consumers, telling them: you have a new device, a new modem, bring the old one back and I will give you a new one. That is one collaboration that we saw in KSA.

OK, so here: potential initiatives. One solution is, yes, collaborate, collaborate, collaborate. Another is about the reward system, and that is interesting because it links back to the financial side, the second challenge we discussed. As always in this economy, you need to have a reward in order to really be motivated, because in the end you need financial sustainability; you will not run it as a charity. That is why EPR sometimes works: you give the responsibility to the producers, at the front of the value chain, so they have to increase their prices or costs in order to manage the product until its end of life, and that can then fund a reward, where you give back your phone and get some money back. Of course, after three years of using a device, it will not be worth $100, but that small reward can be pennies and still be good enough for some people; in Germany, for example, you return bottles and get two or five cents back, and it still works for consumers.

“Using AI to support that.” I think we can also open the floor on how technologies and AI can help; I don’t know who wrote that, if he or she wants to elaborate. In the workshop in Singapore there was a lot of discussion on how to integrate AI and other technologies into companies’ warehouses, to understand the value of products as they depreciate, to bring them back, to optimize, and so on. That was quite a stimulating conversation. “Doing policies.” Yes. “Public awareness and behavioral change: launch campaigns to educate consumers, encourage a cultural shift” towards repair and reuse. Great, thank you everyone.

Now I want to link this to the digital divide; I think we have already mentioned it before. How can we promote e-waste management, and also promote the reuse of devices among the part of the population in need? If you think about the reuse of electronics, why would you, as a company or as a policymaker, do it? Because you think about the environment? Because of the social impact? Because of the brand and corporate social responsibility, so the governance side? Or because of the economics? So, ESG plus profitability. Here I really want to test the driver: why would you reuse, or encourage the reuse or repair of, a device? I think you can choose up to two.

So, one driver, to connect to the previous point on rewards, is CSR or brand: private sectors can really use that to communicate their commitment to sustainability. Of course, it should not be just greenwashing; it should be substantiated by real initiatives. But honestly, if this helps you as a business to be positioned in the market as someone who cares about the environment, I think it’s more than fair to leverage that. I’m surprised about the social impact: it’s very prominent here; at least a third chose it. Sometimes when you talk to people about e-waste, the social divide doesn’t come naturally to mind; if you start talking about reuse, they may start thinking about it. I think in the Global South the association is stronger; in Europe I don’t think it’s very prominent. So again, some cultural differences. Economics and job creation, together with the environment, are the least chosen. On one side, the environment is linked to awareness: you don’t know what the impact of reuse is, and that is where awareness will definitely help. On economics and job creation: a lot of companies were saying that it’s not profitable. Why should I do it, right? It’s better to recycle. I pay to collect the device, I have to repair it, I probably have to ship it around the world, and the margins are not high. So there are no economic incentives for it. That is where you need either policies or reward systems to make sure that people, and businesses, are incentivized. Great.
So, I’m going a little over. I think we have two more questions, and then maybe we could divide into groups; but given the time, and since I see there’s not a lot of sharing of ideas, we will see how to structure that. Here: rank your concerns and barriers about this topic. Are you concerned about the complexity of the value chain? About demand: I don’t really know how many repaired devices I will sell, so why should I promote it, why should I go into this business? Some are concerned about quality: if you buy a secondhand microwave, you may be a little worried that it will explode in your hands, or your kids’ hands. Others mentioned export and import, because sometimes you cannot export or import used devices or waste; that is where there are barriers to bringing devices from, for example, Europe to the Global South. There are also barriers around critical materials: there is now a new regulation in Europe that promotes critical-material recycling in Europe, so even if a device could be reused or repaired, it may stay in Europe because the critical materials take priority.

OK, so here the complexity of the supply chain is the winner, and the second is quality and reliability. The complexity, I think, comes from data, from awareness, and probably from the maturity of the market. On quality and reliability, you need to have standards, because without standards you don’t really know whether a device was recycled or repaired properly. For a phone you’re probably OK-ish, most will not explode in your hands, but for other devices you may not want to take the risk because of those concerns. And unethical trade and dumping. OK, we go to the third one. This is something that we have tested, especially for trade.
So, I can give away my device, but also my clothes, right? But then probably they will go to the landfill anyway. So do I trust it? It’s a matter of trust, of transparency, of awareness, of whether what is claimed will really happen. Great. So here again: initiatives, this time for cross-border e-waste. I think we can also talk on the microphone; it’s a lot to type. If you have already done something, or if you think there is an initiative that you believe is valuable, what could it be? For example, between collectors and producers. Maybe also thinking about Egypt, about the islands, or about KSA. What might it be? Maybe it’s lunchtime and everybody is looking forward to a break. I promise it’s almost done: the last 10 minutes of brainpower. I think there is one participant typing, so we will wait for the brave one. And I promise I will not call anyone out; you’re safe. Oh, two? OK.

To share one initiative: UNITAR told us to go really on the ground and follow the informal sector, the pickers, to collect the data. That was one of the points mentioned before: to really understand the complex supply chain, and so on. So, great: global regulation, and then responsible recycling certification. That is nice; certification or standards will reassure you that the quality is there. Having collection points, doing campaigns, and EPR. Yes. EPR is something that a lot of countries in the world are still not doing. And on global regulation, to connect to it: with the DCO we are working on a framework for governments, so targeting governments, not the private sector, and telling them what key components they need to put in place. So maybe ending with that. The key components of the framework are these. First, we believe governments need to think about regulation, policies, and strategies, of course based on data, because if you base them on a finger in the air, most probably they will not be that effective.
Second, financial instruments. Again, rewards: on one side you need to understand what comes into your pocket and what goes out, and you need to balance them. In order to really foster and give incentives to the private sector and the social sector, you need to have, for example, EPR strategies and EPR fees that make sense for the ecosystem. Then, awareness and capability building: awareness for consumers, but also for businesses, because sometimes businesses are not aware that a component is really e-waste, and they end up dumping it. On capability building, the government also needs to learn about this; it’s still a new topic somehow. For example, people in government need to understand vapes: there was a big debate last week in London, because vapes are electronic devices, their volumes are increasing fast, and legislators do not know how to manage that big topic. So again, capability building everywhere. And finally, infrastructure and technologies. You need infrastructure for recycling, and also for dismantling; it’s not just a matter of mechanical or chemical recycling, it’s also all the infrastructure for collection, dismantling, and so on. And technologies: ICT. I think in this forum, ICT can be a great tool.

With this, we were planning to have a discussion in groups, but based on the time, the setup of the room, and the participation, maybe I will just give the microphone to either Ms. Alaa or Dr. Saeed for their final remarks. Thank you everyone; you made it almost to one o’clock. Thank you very much for your active participation. It was very much a two-way communication, and we got a lot of information from you.

Syed Iftikhar: As we said, with this initiative we are co-creating the framework, and we will definitely incorporate your feedback into it. This framework is not only for the DCO member states; it is meant for all the countries of the world. So thanks again for your feedback and participation. Thank you.

Alaa Abdulaal: Thank you everyone. Just one thing: if we can get out of this session with the sense of responsibility that this challenge, and its solution, are in the hands of each one of us, then for us we will have achieved a great goal. This is a call to action: be part of the solution, to build a greener, more sustainable economic future for everyone. Thank you for joining us, and we hope to hear from you and your feedback. Thank you so much.

Alaa Abdulaal

Speech speed

150 words per minute

Speech length

1201 words

Speech time

479 seconds

Growing e-waste volumes pose environmental and health risks

Explanation

The rapid growth of e-waste is becoming a significant challenge due to increasing consumption of electronics and mobile devices. This issue has pressing environmental consequences and needs to be addressed as a shared responsibility.

Evidence

Billions of people are connected through the digital economy, leading to more consumers of electronics and mobile phones.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Mohamed Mashaka

Arianna Molino

Agreed on

E-waste is a growing environmental and social challenge

E-waste management requires shared responsibility across sectors

Explanation

Addressing the e-waste challenge is not solely the responsibility of governments or the private sector. It is a shared responsibility that involves individuals, governments, and businesses working together to tackle the issue.

Evidence

The speaker asks the audience to consider how many devices they own and what they do with old devices, emphasizing individual responsibility.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Syed Iftikhar

Arianna Molino

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Mohamed Mashaka

Speech speed

138 words per minute

Speech length

246 words

Speech time

106 seconds

Lack of consumer awareness about e-waste impacts

Explanation

One of the main challenges in addressing e-waste is the low level of awareness among citizens about its impact. Many people are not aware of how e-waste affects various initiatives and the environment.

Evidence

The speaker mentions that this is a challenge faced in Tanzania, an East African country.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Ayman Arbiyat

Arianna Molino

Agreed on

Consumer awareness and education are key to improving e-waste management

Need for comprehensive e-waste strategies and guidelines

Explanation

There is a need for countries to develop e-waste strategies and guidelines for the safe usage of ICT infrastructure. These strategies would help raise awareness and provide direction for managing e-waste effectively.

Evidence

The speaker suggests looking at case studies of countries that have implemented such strategies.

Major Discussion Point

E-waste management strategies

Differed with

Arianna Molino

Differed on

Approach to engaging informal sector workers in e-waste management

Ayman Arbiyat

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Data privacy concerns when disposing of devices

Explanation

Consumers have concerns about their personal data remaining on devices even after deletion. This fear of privacy breaches leads people to keep old devices at home instead of recycling them.

Evidence

The speaker mentions that people believe their phones still contain photos and personal information even after deletion.

Major Discussion Point

Consumer concerns and barriers

Agreed with

Mohamed Mashaka

Arianna Molino

Agreed on

Consumer awareness and education are key to improving e-waste management

Lack of accessible e-waste collection systems

Explanation

There is a lack of knowledge about where to send e-waste for proper disposal or recycling. This lack of accessible collection points hinders effective e-waste management.

Evidence

The speaker expresses uncertainty about where to send their own e-waste.

Major Discussion Point

Consumer concerns and barriers

Arianna Molino

Speech speed

134 words per minute

Speech length

6619 words

Speech time

2950 seconds

E-waste contributes to climate change and pollution

Explanation

E-waste has significant environmental impacts, contributing to CO2 emissions and pollution of air, land, soil, and water. This affects both the environment and human well-being.

Evidence

The speaker cites an estimate of 145 billion CO2 emissions related to e-waste.

Major Discussion Point

E-waste challenges and impacts

Agreed with

Alaa Abdulaal

Mohamed Mashaka

Agreed on

E-waste is a growing environmental and social challenge

Promoting reuse and repair to extend device lifespans

Explanation

Extending the lifespan of devices through reuse and repair is an important strategy in e-waste management. This approach helps close the loop earlier in the product lifecycle and can create new markets for refurbished devices.

Evidence

The speaker mentions that redeploying just 1% of smartphones could help 50 million people.

Major Discussion Point

E-waste management strategies

Need for cross-border and regional collaboration on e-waste

Explanation

Regional collaboration is crucial for effective e-waste management. Countries can work together to share lessons learned, implement policies, and create efficient recycling systems.

Evidence

The speaker gives an example of potential collaboration between Saudi Arabia and Oman for specialized recycling of different types of e-waste.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Alaa Abdulaal

Syed Iftikhar

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Engaging informal sector workers in e-waste management

Explanation

The informal sector plays a significant role in e-waste management in many countries. Rather than dismantling this sector, efforts should focus on helping informal workers follow compliance rules and environmental safety practices.

Evidence

The speaker mentions that 11 million informal workers are affected by e-waste management.

Major Discussion Point

Collaboration and stakeholder engagement

Differed with

Mohamed Mashaka

Differed on

Approach to engaging informal sector workers in e-waste management

Quality and reliability concerns with refurbished electronics

Explanation

Consumers have concerns about the quality and reliability of refurbished or repaired electronic devices. This perception can be a barrier to promoting the reuse of electronics.

Evidence

The speaker mentions that some people worry about refurbished devices malfunctioning or being unsafe.

Major Discussion Point

Consumer concerns and barriers

Agreed with

Mohamed Mashaka

Ayman Arbiyat

Agreed on

Consumer awareness and education are key to improving e-waste management

Complexity of e-waste supply chains

Explanation

The e-waste supply chain is complex, involving multiple stakeholders and processes. This complexity can be a barrier to effective e-waste management and collaboration.

Evidence

The speaker notes that this was identified as the top concern in a poll conducted during the session.

Major Discussion Point

Consumer concerns and barriers

Implementing extended producer responsibility policies

Explanation

Extended Producer Responsibility (EPR) policies are an important tool for managing e-waste. These policies make producers responsible for the entire lifecycle of their products, including end-of-life management.

Evidence

The speaker mentions that EPR is being implemented in various countries and can provide incentives for proper e-waste management.

Major Discussion Point

Policy and regulatory frameworks

Syed Iftikhar

Speech speed

130 words per minute

Speech length

653 words

Speech time

301 seconds

Importance of public-private partnerships for e-waste initiatives

Explanation

The Digital Cooperation Organization (DCO) emphasizes the importance of collaboration between governments, businesses, and individuals in addressing e-waste challenges. This multi-stakeholder approach is crucial for developing comprehensive solutions.

Evidence

The speaker mentions that DCO has 16 member states and over 40 observers from international organizations, private sectors, and NGOs.

Major Discussion Point

Collaboration and stakeholder engagement

Agreed with

Alaa Abdulaal

Arianna Molino

Agreed on

Multi-stakeholder collaboration is crucial for effective e-waste management

Developing financial incentives for proper e-waste management

Explanation

Financial instruments and incentives are crucial for fostering effective e-waste management. Governments need to understand the financial implications and balance costs and benefits to encourage private and social sector participation.

Evidence

The speaker mentions the need for EPR strategies and fees that make sense for the ecosystem.

Major Discussion Point

Policy and regulatory frameworks

Audience

Speech speed

151 words per minute

Speech length

723 words

Speech time

285 seconds

Small island nations face unique e-waste challenges due to isolation

Explanation

Small Pacific island countries face unique challenges in e-waste management due to their geographic isolation and limited resources. This situation requires innovative solutions for recycling and reprocessing e-waste locally.

Evidence

The speaker from Tuvalu mentions limited infrastructure and resource constraints as challenges.

Major Discussion Point

E-waste challenges and impacts

Importance of data collection and measurement for e-waste

Explanation

Accurate data collection and measurement of e-waste volumes are crucial for effective management and policy-making. Current figures are often estimates, which can hinder the development of targeted strategies.

Evidence

The speaker from Egypt notes that all figures presented in the session were estimated and global, suggesting a need for more precise local data.

Major Discussion Point

E-waste management strategies

Leveraging AI and technology for e-waste management

Explanation

Artificial Intelligence and other technologies can play a significant role in supporting e-waste management efforts. These technologies can help optimize processes and improve decision-making in the e-waste value chain.

Evidence

A participant suggested using AI to support e-waste management in response to a question about potential initiatives.

Major Discussion Point

E-waste management strategies

Need for e-waste-specific regulations and standards

Explanation

There is a need for specific regulations and standards governing e-waste management. These would help ensure proper handling, recycling, and disposal of electronic devices and components.

Evidence

A participant suggested implementing global regulations and responsible recycling certifications.

Major Discussion Point

Policy and regulatory frameworks

Harmonizing cross-border e-waste regulations

Explanation

There is a need for harmonized regulations across borders to facilitate effective e-waste management on a global scale. This would help address challenges related to e-waste trade and ensure consistent standards across countries.

Evidence

A participant suggested the need for global regulations in response to a question about cross-border e-waste initiatives.

Major Discussion Point

Policy and regulatory frameworks

Agreements

Agreement Points

E-waste is a growing environmental and social challenge

Alaa Abdulaal

Mohamed Mashaka

Arianna Molino

Growing e-waste volumes pose environmental and health risks

Lack of consumer awareness about e-waste impacts

E-waste contributes to climate change and pollution

Multiple speakers emphasized the increasing volume of e-waste and its negative impacts on the environment and human health, highlighting the urgent need for action.

Multi-stakeholder collaboration is crucial for effective e-waste management

Alaa Abdulaal

Syed Iftikhar

Arianna Molino

E-waste management requires shared responsibility across sectors

Importance of public-private partnerships for e-waste initiatives

Need for cross-border and regional collaboration on e-waste

Speakers agreed that addressing e-waste challenges requires collaboration between governments, businesses, individuals, and across borders.

Consumer awareness and education are key to improving e-waste management

Mohamed Mashaka

Ayman Arbiyat

Arianna Molino

Lack of consumer awareness about e-waste impacts

Data privacy concerns when disposing of devices

Quality and reliability concerns with refurbished electronics

Multiple speakers highlighted the need for increased consumer awareness and education to address concerns and promote proper e-waste disposal.

Similar Viewpoints

Both speakers emphasized the importance of involving multiple stakeholders, including the informal sector, in e-waste management efforts.

Alaa Abdulaal

Arianna Molino

E-waste management requires shared responsibility across sectors

Engaging informal sector workers in e-waste management

Both speakers highlighted the need for comprehensive strategies and incentives to guide and encourage proper e-waste management practices.

Mohamed Mashaka

Syed Iftikhar

Need for comprehensive e-waste strategies and guidelines

Developing financial incentives for proper e-waste management

Unexpected Consensus

Importance of data collection and measurement for e-waste

Audience

Arianna Molino

Importance of data collection and measurement for e-waste

Complexity of e-waste supply chains

There was unexpected consensus on the critical need for accurate data collection and measurement in e-waste management, with both the audience and speakers recognizing its importance for effective policy-making and understanding the complex e-waste supply chain.

Overall Assessment

Summary

The main areas of agreement included recognizing e-waste as a growing challenge, the need for multi-stakeholder collaboration, the importance of consumer awareness and education, and the necessity of comprehensive strategies and incentives for proper e-waste management.

Consensus level

There was a moderate to high level of consensus among speakers on the key challenges and necessary actions for e-waste management. This consensus suggests a shared understanding of the issues, which could facilitate the development of coordinated strategies and policies to address e-waste challenges globally.

Differences

Different Viewpoints

Approach to engaging informal sector workers in e-waste management

Arianna Molino

Mohamed Mashaka

Engaging informal sector workers in e-waste management

Need for comprehensive e-waste strategies and guidelines

While Arianna Molino suggests integrating informal workers into the e-waste management system, Mohamed Mashaka emphasizes the need for comprehensive strategies and guidelines, potentially overlooking the role of the informal sector.

Unexpected Differences

Focus on data collection and measurement

Arianna Molino

Audience

Complexity of e-waste supply chains

Importance of data collection and measurement for e-waste

While Arianna Molino focuses on the complexity of e-waste supply chains, an audience member unexpectedly emphasizes the importance of accurate data collection and measurement, highlighting a potential gap in the discussion.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to engaging informal workers, the balance between promoting reuse and addressing quality concerns, and the prioritization of data collection in e-waste management strategies.

Difference level

The level of disagreement among speakers is moderate. While there is general consensus on the importance of addressing e-waste challenges, there are differing perspectives on specific strategies and priorities. These differences highlight the complexity of e-waste management and the need for multifaceted approaches tailored to different contexts.

Partial Agreements

Both speakers agree on the need for better e-waste management, but while Alaa Abdulaal emphasizes shared responsibility across sectors, Ayman Arbiyat focuses on the specific issue of accessible collection systems.

Alaa Abdulaal

Ayman Arbiyat

E-waste management requires shared responsibility across sectors

Lack of accessible e-waste collection systems

Both recognize the importance of reuse and repair in e-waste management, but while Arianna Molino promotes it as a strategy, the audience raises concerns about the quality and reliability of refurbished devices.

Arianna Molino

Audience

Promoting reuse and repair to extend device lifespans

Quality and reliability concerns with refurbished electronics

Takeaways

Key Takeaways

E-waste volumes are growing rapidly, posing environmental and health risks globally

There is a lack of consumer awareness about e-waste impacts and proper disposal

E-waste management requires shared responsibility across government, private sector, and individuals

Collaboration and partnerships are crucial for effective e-waste management

Data collection and measurement are important for developing effective e-waste strategies

Promoting device reuse and repair can help extend lifespans and reduce e-waste

Small island nations face unique e-waste challenges due to geographic isolation

Consumer concerns like data privacy and device quality are barriers to proper e-waste disposal

Policy frameworks and financial incentives are needed to promote proper e-waste management

Resolutions and Action Items

DCO to develop a framework for governments on key components of e-waste management

Participants encouraged to take personal responsibility for proper e-waste disposal

DCO to incorporate participant feedback into their e-waste management framework

Unresolved Issues

How to effectively engage and regulate the informal e-waste sector

Specific strategies for small island nations to manage e-waste given their unique challenges

How to address the lack of profitability in device repair and reuse

Methods to improve data collection and measurement of e-waste volumes

How to harmonize cross-border e-waste regulations globally

Suggested Compromises

Balancing data privacy concerns with the need for proper e-waste disposal through secure data erasure services

Developing standards and certifications for refurbished electronics to address quality concerns

Implementing extended producer responsibility policies while ensuring economic viability for businesses

Thought Provoking Comments

One of the areas that I think we are really facing as a challenge is the literacy of our citizens, because most of these citizens aren’t aware of the impact of this e-waste on the different activities they are doing.

speaker

Mohamed Mashaka

reason

This comment highlights the critical issue of public awareness and education regarding e-waste, which is often overlooked in technical discussions.

impact

It shifted the conversation to focus more on the importance of public awareness campaigns and strategies for educating citizens about e-waste impacts.

Are there any cost-effective recycling solutions suitable for small-scale and more decentralized systems in the Pacific? Also, can e-waste be repurposed or up-cycled locally to create some economic opportunities?

speaker

Noia

reason

This question brings attention to the unique challenges faced by small island nations and introduces the idea of local economic opportunities through e-waste management.

impact

It broadened the discussion to consider solutions for different geographical contexts and economic scales, prompting thoughts on decentralized and localized approaches to e-waste management.

Is there any intention in your organization to measure e-waste? Especially since, as I noticed, all the figures presented are estimated figures, and at a global level only.

speaker

Dr. Nagwa

reason

This comment addresses a crucial gap in e-waste management – the lack of accurate measurement and data collection.

impact

It led to a discussion about the importance of data in policy-making and implementation, highlighting the need for better measurement methodologies in the e-waste sector.

Infinity Loops

speaker

Anonymous participant (via Slido)

reason

This concise response encapsulates the concept of circular economy in e-waste management, showing a deep understanding of the topic.

impact

While brief, this comment reinforced the importance of viewing e-waste management as a continuous cycle rather than a linear process, influencing the subsequent discussion on reuse and recycling.

Complexity of the supply chain is the winner. And in second place, quality and reliability.

speaker

Arianna Molino

reason

This summary of participant responses highlights the key challenges in e-waste management as perceived by the audience.

impact

It focused the discussion on addressing supply chain complexities and quality concerns in e-waste management, shaping the direction of potential solutions discussed.

Overall Assessment

These key comments shaped the discussion by broadening its scope from technical aspects to include crucial elements like public awareness, geographical context, data accuracy, circular economy principles, and supply chain challenges. They prompted a more holistic view of e-waste management, considering various stakeholders and contexts. The discussion evolved from a general overview to addressing specific challenges and potential solutions, emphasizing the need for collaborative, data-driven, and context-specific approaches to e-waste management.

Follow-up Questions

What are effective e-waste collection systems?

speaker

Ayman Arbiyat

explanation

Understanding effective collection systems is crucial for addressing the e-waste problem at its source.

How can privacy concerns related to data on electronic devices be addressed in e-waste collection?

speaker

Ayman Arbiyat

explanation

Addressing privacy concerns is essential for encouraging people to recycle their electronic devices.

What are cost-effective recycling solutions suitable for small-scale and decentralized systems in small island countries?

speaker

Noia from Tuvalu

explanation

Finding appropriate solutions for small island nations is important for global e-waste management.

How can e-waste be repurposed or up-cycled locally to create economic opportunities in small island countries?

speaker

Noia from Tuvalu

explanation

Exploring local economic opportunities from e-waste can incentivize better management practices.

How can we improve the measurement and data collection of e-waste globally and regionally?

speaker

Dr. Nagwa from Egypt

explanation

Accurate data is crucial for understanding the scale of the problem and measuring the impact of policies.

How can artificial intelligence be used to support e-waste management?

speaker

Unidentified participant (via Slido)

explanation

Exploring the potential of AI in e-waste management could lead to more efficient and effective solutions.

What are effective awareness campaigns and behavioral change strategies to encourage e-waste recycling and reuse?

speaker

Unidentified participant (via Slido)

explanation

Public awareness and behavior change are crucial for improving e-waste management practices.

How can we develop and implement global regulations and responsible recycling certifications for e-waste?

speaker

Unidentified participant (via Slido)

explanation

Standardized regulations and certifications could improve the quality and reliability of e-waste recycling globally.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.