WS #171 Mind Your Body: Pros and Cons of IoB

Session at a Glance

Summary

This discussion focused on the emerging field of Internet of Bodies (IoB) technologies and their implications for healthcare, cybersecurity, and society. Experts from various fields explored how IoB devices, which integrate technology with the human body, are revolutionizing medical care while also raising significant ethical and security concerns.

The medical benefits of IoB were highlighted, including improved diagnostics, remote patient monitoring, and personalized treatments. However, participants emphasized the need for robust cybersecurity measures to protect against potential hacking and data breaches, which could have severe consequences given the intimate nature of these devices.

The discussion also delved into the societal impacts of IoB technologies. Concerns were raised about potential inequality in access to these technologies, leading to a divide between “enhanced” and “unenhanced” individuals. The ethical implications of altering human biology and the risks of manipulation through IoB devices were also explored.

Participants stressed the importance of developing comprehensive regulatory frameworks to govern the development, implementation, and use of IoB technologies. They emphasized the need for collaboration between governments, private companies, and individuals to ensure responsible innovation and protect user rights.

The discussion concluded by acknowledging the inevitability of IoB advancements while emphasizing the critical need for ongoing research, public awareness, and proactive policymaking to harness the benefits of these technologies while mitigating potential risks.

Keypoints

Major discussion points:

– Medical applications and benefits of Internet of Bodies (IoB) devices, including AI-assisted diagnostics and remote patient monitoring

– Cybersecurity risks and privacy concerns associated with IoB devices

– Potential for IoB technologies to widen social and economic inequalities

– Ethical implications and need for regulation of IoB technologies

– Future impacts of IoB on human evolution and society

The overall purpose of the discussion was to explore the various implications – both positive and negative – of Internet of Bodies technologies as they become more prevalent. The speakers aimed to raise awareness about the potential benefits as well as risks that need to be considered and addressed.

The tone of the discussion was generally cautious and concerned, while still acknowledging the potential benefits of IoB technologies. Speakers highlighted exciting medical applications but also emphasized the need for careful regulation and consideration of ethical issues. The tone became more apprehensive when discussing future scenarios of human augmentation and potential societal divides, but remained analytical rather than alarmist.

Speakers

– Alina Ustinova: Moderator

– Lev Pestrenin: Deputy Department Head of Center for Diagnostics and Telemedicine of the Moscow Health Department

– Igor Sergeyev: Head Researcher of Federal Center for Applied Development of Artificial Intelligence

– Irina Pantina: JR Director of Positive Technologies

– Gabriella Marcelja: CEO of CJ Impact Ventures

– James Nathan Adjartey Amattey: Speaking from Ghana, expertise in technology and regulation

Additional speakers:

– Anna Kralina-Dias: Online moderator (mentioned but did not speak)

Full session report

Internet of Bodies (IoB) Technologies: Implications for Healthcare, Cybersecurity, and Society

This comprehensive discussion explored the emerging field of Internet of Bodies (IoB) technologies and their wide-ranging implications for healthcare, cybersecurity, and society. Experts from various fields, including medical diagnostics, artificial intelligence, cybersecurity, and venture capital, convened to examine how IoB devices, which integrate technology with the human body, are revolutionising medical care while simultaneously raising significant ethical and security concerns.

Medical Applications and Benefits

The discussion began on an optimistic note, with several speakers highlighting the potential medical benefits of IoB technologies. Lev Pestrenin, Deputy Department Head of the Center for Diagnostics and Telemedicine of the Moscow Health Department, emphasised that artificial intelligence and IoB could improve healthcare quality and access. This sentiment was echoed by Igor Sergeyev, Head Researcher of the Federal Center for Applied Development of Artificial Intelligence, who noted that IoB devices enable remote patient monitoring and early disease detection.

Gabriella Marcelja, CEO of CJ Impact Ventures, further elaborated on how IoB implants and wearables can enhance medical diagnostics and treatment. Specific examples of IoB devices mentioned included smart lenses, insulin pumps, and cochlear implants, illustrating the diverse range of applications in healthcare.

Cybersecurity Risks and Privacy Concerns

The conversation shifted to address the serious cybersecurity risks associated with IoB technologies. Irina Pantina, JR Director of Positive Technologies, warned that IoB devices are vulnerable to hacking, data breaches, and unauthorized access. She stressed the urgent need for robust cybersecurity measures to protect IoB users. Gabriella Marcelja concurred, highlighting the risks of surveillance and privacy violations posed by these technologies.

Alina Ustinova provided a thought-provoking insight: “Sometimes when we speak about internet of bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer.” This comment effectively shifted the discussion towards deeper consideration of the ethical concerns surrounding IoB technologies.

James Nathan Adjartey Amattey, speaking from Ghana, raised a unique challenge specific to internal IoB devices: “Now, if it’s an external software system, you know, there could be preventive measures that could be implemented, but if it’s in the body, and then the body is now being misconfigured, how do we reconfigure that?” This comment highlighted the complex technical and ethical challenges associated with updating or fixing internal IoB devices.

Ethical Implications and Societal Impact

The discussion then delved into the broader ethical and societal implications of IoB technologies. Gabriella Marcelja raised concerns about the potential for IoB to widen socioeconomic gaps and create "augmented elites". She cautioned, "So wealthy nations will adopt augmentation tech faster. So we will see a marginalized group of countries. And we definitely need to eventually ensure equal access if this is in the interest of the patient." Marcelja drew an analogy comparing IoB device access to organ transplant waiting lists, highlighting the potential for inequality in access to these technologies.

James Nathan Adjartey Amattey emphasized the potential for manipulation and social engineering through IoB devices, as well as their possible use as tools for government control or weapons. He stressed the importance of community awareness and education about IoB technologies, especially in developing countries.

Gabriella Marcelja also raised the possibility of a “gray economy” of fake implants emerging, further complicating the ethical landscape of IoB technologies.

Governance and Ethical Considerations

The need for comprehensive governance frameworks to address the ethical and societal challenges posed by IoB technologies emerged as a crucial point of discussion. Irina Pantina emphasised the importance of multi-stakeholder cooperation in governing IoB responsibly. She stated, “So establishing the rules for how to use these devices, how to let them enter the markets and how to be affordable for different groups of people. Without any dependencies.”

Lev Pestrenin highlighted the need for multidisciplinary teams and improved communication to address the challenges of IoB. This point underscored the complex nature of developing and implementing such rapidly advancing technologies.

Conclusion

The discussion concluded by acknowledging the significant potential benefits of IoB technologies in healthcare while emphasizing the critical need for ongoing research, public awareness, and proactive policymaking to mitigate potential risks. The speakers demonstrated a moderate level of consensus, particularly on the need for responsible development and governance of IoB technologies.

The conversation raised several thought-provoking questions for future consideration, including strategies for managing cybersecurity risks, preventing social inequality due to unequal access, maintaining personal autonomy in an increasingly connected world, and addressing the potential for manipulation and misuse of IoB devices. These questions highlight the need for continued interdisciplinary dialogue and research to address the complex challenges posed by Internet of Bodies technologies.

Session Transcript

Alina Ustinova: The devices are used for medical purposes, but because of this new technology, we should understand how it will affect not only us as human beings and our evolution, but also cybersecurity and emerging technology issues. So we will try to understand during this session how this technology will affect us. And we're joined by wonderful speakers here. I will introduce them all shortly, and they will try to answer your questions. So, here is how the discussion will go: we will be divided into blocks. The first block will be medical. Then we have a little Q&A session. Then we have the AI and emerging technology block, and then we'll have a little Q&A session. And then the last one is the cybersecurity block. But it depends on how it goes. And if you want to ask questions after the session, it's possible. The organizers asked us to finish a little bit earlier, so we'll finish at about 10:50. So thank you very much. So today we're joined by Lev Pestrenin, he's the deputy department head of the Center for Diagnostics and Telemedicine of the Moscow Health Department. We also have Igor Sergeyev, the head researcher of the Federal Center for Applied Development of Artificial Intelligence. He's online. Gabriella Marcelja, the CEO of CJ Impact Ventures. Irina Pantina, the JR director of Positive Technologies. James Amattey of Norrison IT, he's also with us online. And also we have a wonderful online moderator, Anna Kralina-Dias, who will help me in this session, I hope, and will ask the questions coming from the chat online. So we'll start with the medical block, and with understanding how these devices are used for medical purposes. So Lev, please start. The floor is yours. Thank you.

Lev Pestrenin: Thank you very much. Good morning, everyone. Thanks for coming. I am a researcher at the Moscow Center for Diagnostics and Telemedicine, and we implement artificial intelligence in healthcare, specifically in radiology. So why do I sit here? I am sure that artificial intelligence and the Internet of Bodies have something in common. Both are new technologies, and we want to benefit from them while also avoiding their risks. So today, using the example of artificial intelligence implementation in healthcare, I'd like to show how it is possible to really benefit and avoid the risks at the same time. One moment. Something is wrong with my presentation. Yes. Okay, thank you very much. So, in a few words, what does radiology in Moscow look like? As usual, patients undergo radiological studies, for example, a chest X-ray. Then all images from all hospitals of Moscow get into the data center, which is located in our Center for Diagnostics and Telemedicine. After that, radiologists describe these images and make reports, and in one, two, or sometimes a little more hours, the doctor and patient already have the results of the examination. Next slide, please. Okay, so this is possible due to the centralized healthcare system. It took several years to centralize the data and all radiological descriptions, and then it was possible to start the Moscow experiment in implementing artificial intelligence in healthcare. We started this experiment in 2020, and we had a lot of difficulties and a lot of questions at the beginning, because it's a new technology and there were no answers for how we could use it and avoid the different risks of artificial intelligence. But generally, we managed to overcome these risks of artificial intelligence in healthcare. So today I'd like to tell you about the three main components which were the key to this success. The first component is data.
High quality datasets are very important for artificial intelligence training and testing. What is a dataset? It is a set of radiological studies with reports, and using these datasets it is possible to train artificial intelligence to find some kind of pathology, like pneumonia or cancer, for example, and it is possible to test artificial intelligence. So for data, we follow one of the main principles: organized storage and organized collection of data. The second principle is monitoring. Artificial intelligence scientists, researchers, and developers all over the world have spoken about the power of artificial intelligence; some said that it is possible that artificial intelligence could be used and doctors would be left without work. But now we see that this is not so. Artificial intelligence is a great help for doctors. It can make measurements, it can help the doctor to find some pathology, but for now artificial intelligence cannot replace doctors at all. That's why it is necessary to monitor how artificial intelligence works. We developed and successfully use a life cycle for checking the quality of artificial intelligence. We do this regularly, every month, and we make an assessment of the artificial intelligence to be sure that the medical care is of high quality. This life cycle also helps us to improve the artificial intelligence, because we can find the mistakes of the AI services, change them, and improve their quality. And the third principle is ethics. These are the basic principles of medical ethics which have been used for many centuries, and we still follow them. Among the main principles are patient privacy, patient safety, and patient confidentiality. So artificial intelligence can improve the quality of healthcare, but without following ethics, I think it is impossible to use it, because it can bring more harm to patients than benefits.
So, due to these three components, high quality datasets, the monitoring of artificial intelligence, and ethical principles, we could provide for patients a very simple pathway for radiological studies. You see it here: as usual, the patient undergoes a radiological examination, then the doctor describes it and writes a report, and after that the patient and physicians get the results of the study. And here, at step three, you see that now radiologists get two images for every patient. The first is the native image, and the second is an image which has been processed by artificial intelligence. That is why it is possible to use artificial intelligence and benefit from it. And what are the key achievements, the key results, of the implementation of artificial intelligence? Here you see some facts, but generally speaking, there are three main achievements. The first is improved quality of healthcare. The second is eased access to health services. And the last is enhanced safety of patients. And I'd like to show you one more slide about how artificial intelligence works. You see, we have more than 50 AI solutions in Moscow, and they can detect different types of pathology on different types of studies, like X-ray, computed tomography, or MRI, for example. To conclude my presentation, I'd like to say that artificial intelligence, like the Internet of Bodies, is our possible future, which could help us to live longer and to be healthier during our lives. So, I'd like to invite you to visit our center to learn more about artificial intelligence in healthcare. It's very easy to visit us, and you can organize a visit. We will be happy to see you in our center. Thank you very much.
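The monthly quality-assessment life cycle described above, in which AI findings are compared against the radiologists' final reports, could be sketched roughly as follows. This is a minimal illustration only: the function names and the 0.9 quality threshold are assumptions, not the centre's actual system.

```python
# Minimal sketch of a monthly AI quality check (illustrative only):
# each case pairs what the AI flagged with what the radiologist confirmed,
# and a service that falls below an agreed threshold is sent back for
# improvement, matching the "find mistakes, then improve" loop.

def monthly_quality_check(cases, threshold=0.9):
    """cases: iterable of (ai_flagged, radiologist_confirmed) booleans."""
    tp = sum(1 for ai, doc in cases if ai and doc)      # confirmed detections
    fp = sum(1 for ai, doc in cases if ai and not doc)  # false alarms
    fn = sum(1 for ai, doc in cases if not ai and doc)  # missed findings
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return {
        "precision": precision,
        "recall": recall,
        "passed": precision >= threshold and recall >= threshold,
    }

# Example month: 8 confirmed detections, 1 false alarm, 1 missed finding.
cases = [(True, True)] * 8 + [(True, False), (False, True)]
report = monthly_quality_check(cases)
```

In this hypothetical month both precision and recall land just below the threshold, so the service would be flagged for retraining before continuing to process patient studies.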

Alina Ustinova: This was a very interesting presentation, I guess. People learned a lot about it. And maybe a small question before we move to the next speaker. In your opinion, from your experience, how do you think the introduction of these devices, or their integration into the human body, will affect human evolution? Like, for example, will we be part robots? Just a futuristic question.

Lev Pestrenin: Yes, thank you for the interesting question. I think you are right. Maybe in the future we will look a little bit like robots. We don't know all the technologies now; there is a very interesting thought that over a short period of time we tend to overestimate technologies, and over a longer period of time we tend to underestimate them. So in 20 or 30 years, maybe we will have bionic eyes or something like that, and yes, we would do our work faster, I think, and we can get a lot of benefits. But I'm sure that the use of all these devices should be controlled first of all by the doctors, because these devices should never be harmful to people.

Alina Ustinova: Okay, thank you very much for your answer. Now we move to the next speaker and we’ll try to understand how monitoring devices are actually making people’s lives easier, especially in the remote areas where the hospital is not an option. For some people, Igor, please, the floor is yours. Thank you.

Igor Sergeyev: Good morning, ladies and gentlemen. I will present the state of the home care sector in terms of the implementation of available diagnostics and vital sign monitoring devices. The COVID-19 pandemic became a catalyst for the development and production of individual devices for monitoring. The Russian Federation has an enormous territory, and this affects access to clinical institutions. Sometimes, for remote patients, these kinds of devices are the only solution for monitoring the patient's condition. Such devices include CO2, ECG, and respiration recorders, analyzers of abnormal glucose and cholesterol levels, and other devices. Innovative companies of the Russian Federation, including Skolkovo residents, have introduced developments in personalized medicine, implementing software solutions based on them, including artificial intelligence, to improve the level of diagnostics among the adult population. Currently, the registry of such solutions in the Russian Federation includes the neural network software system Care Mentor (website: http://carementor.ru); the software Sales (website: http://sales.ii); software artificial intelligence systems for the analysis of cardiology studies (website: t-stars.ru); a system for supporting medical decisions, the electronic clinical pharmacologist (website: www.sp.upb.com); and a software system for supporting medical decisions, Webiomed (webioMEDS.ii). The rapid development of devices for monitoring functional indicators began at the turn of the 2000s, while widespread implementation in the system of high-tech care began much later. Examples of such project implementations in the Russian Federation are Medicare and innovative professional intelligence systems.
The development and implementation of health indicator remote monitoring services will provide undeniable advantages for all interested participants, since the use of these services will allow us: to reduce the level of mortality and disability in the population through the early detection of developing cardiovascular and endocrine diseases in citizens at risk of disease acceleration; to provide continuous monitoring and analysis of the patient's health indicators and ensure emergency notification of the doctor in case of critical deviations in the patient's monitored indicators; to increase the number of patients under remote observation; to increase the quality of medical care for the population without the need for frequent visits to medical institutions; to reduce the costs of medical institutions by reducing the need for hospitalization; and to improve the medical knowledge base by processing and analyzing the accumulated amount of patient medical data. My report is completed. Thank you for your attention. I am ready for your questions.
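The alerting step Igor describes, where a critical deviation in a monitored indicator triggers an emergency notification to the doctor, could look roughly like this in its simplest form. The metric names and reference ranges below are illustrative assumptions, not clinical values from the talk.

```python
# Hypothetical sketch of threshold-based alerting in a remote-monitoring
# service: each incoming reading is compared against a reference range,
# and out-of-range values produce an alert for the supervising doctor.

NORMAL_RANGES = {  # illustrative limits only, not clinical guidance
    "heart_rate_bpm": (50, 110),
    "spo2_percent": (94, 100),
    "glucose_mmol_l": (3.9, 7.8),
}

def check_reading(metric, value):
    """Return an alert string if the reading is outside its range, else None."""
    low, high = NORMAL_RANGES[metric]
    if not low <= value <= high:
        return f"ALERT: {metric}={value} outside [{low}, {high}]"
    return None

readings = [("heart_rate_bpm", 128), ("spo2_percent", 97), ("glucose_mmol_l", 9.1)]
alerts = [a for a in (check_reading(m, v) for m, v in readings) if a is not None]
# Two of the three readings deviate, so two alerts would be raised.
```

A real service would add per-patient ranges, trend analysis, and a notification channel to the clinic, but the core decision is this range check.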

Alina Ustinova: Thank you for the presentation. I guess you're right that sometimes, when medical services are not available, it is better to have a device that can track your medical condition, so that you'll be able to send the data to a healthcare institution to understand what kind of illness you have. And that brings another risk, which we're going to talk about: the cybersecurity risk. Because we can talk about data security, but what about human security? For example, sometimes when we speak about the Internet of Bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer. So the main question is how to deal with that. Please, Irina, if you can say something: how can we manage the cybersecurity risk turning into a cyber-human risk, maybe something like that?

Irina Pantina: Thank you, Alina. Let me take my headphones off. Let me start with an introduction to the types and classification of Internet of Bodies devices. I would look at this from the perspective of whether a device can influence you as a human, or your body, or not. The first type of devices that also need to be protected are so-called wearable devices. Their main feature is to send data to central storage, to a central data warehouse, and the main risk arising from these devices and their data transfer is data leakage. All the risks that we know about data leaks are applicable to this group of things. Another group is more interesting from a cybersecurity perspective: the group of devices that can interact with a central processor and that can exert a controllable influence on your body. For example, we do not expect that a cochlear implant could change the tone and send a signal to your ear, even if you are deaf, but it can be controlled, and it could change your behavior. The next example, a device that Igor mentioned, is the insulin pump. When you can change the dosage remotely, then you can influence the body and cause different injuries to it. And all of you remember the recent example of the September terrorist attack with smartphones and pagers in Lebanon and Syria. That is an example of how a transmitted signal can damage big groups of people. And if we are talking about widely spread technologies for people with different types of diseases, those people could be interesting and very important targets for attacks, for different types of criminals.
And you know the example of when these types of cyber risks were not yet fixed: one of the leaders of the US, who had a pacemaker as an implant, asked to turn off its remote management when he took that very high-level office, due to the risk of being attacked and hacked by criminals. So, from this perspective, we should talk about two types of influence: data leakage, and total control leading to an unacceptable event. We name it an unacceptable event, and we have to prevent such events using different types of organizational, technical, and instrumental features and practices. And let's talk about what we get if we do not think about cybersecurity risks in terms of the Internet of Bodies. The first thing is a credibility gap among the final users, the end users, of these technologies. As Igor mentioned about providing healthcare for remote regions, for different types of people who can then get healthcare fast and quite close to their place of living, it has become a vital matter for them, and it raises a lot of questions. And we have a choice between fixing these risks, facing them, and finding ways to work with them, or, on the other hand, ignoring them and facing the problems of lacking healthcare and the impossibility of helping people live their best lives. And as Alina mentioned in her introductory speech, the Internet of Bodies is in a sense part of the Internet of Things topic, and that is a good point for us, because we already know a lot about what to do with the Internet of Things from different points of view, in terms of what every actor has to do in managing cyber risks. And as we are here on the United Nations floor and we share the multistakeholder framework, I have some inputs for every participant and every group of participants from this perspective. Well, what can the producers and providers of the Internet of Bodies do on this topic?
The first one is to treat cybersecurity as a vital question, as a very important point. As we think about the governance, environmental, and social points that are important for every company, cybersecurity is crucial for the company as well. So if we place it in the same row, we can talk about the ESGC concept, the ESGC framework, as a good practice for managing companies. Another point is to test the devices they provide to end users, to consumers, using different practices and inviting the best analysts worldwide. For that, we have some well-known platforms, bug bounty platforms, that can help to test in circumstances quite close to the real attacks and techniques that criminals can use. This helps to improve the systems; it helps to make them more sustainable and usable for end users. Another group is cybersecurity providers. They have their own responsibility for improving their own skills and expertise in testing the Internet of Bodies, with a focus on its specifics: that it is not just something implemented somewhere far away, but something implemented in an individual's body, with an influence and an impact on the person. Their second responsibility is to provide specific solutions for protection; these could be software or hardware items, or processes and organizational features. All the instruments that we have should be included in the consulting services offered by the experts in cybersecurity. And the third pillar is governments and authorities. I think they have to do what they are used to doing: establishing the rules for how to use these devices, how to let them enter the markets, and how to make them affordable for different groups of people, without any dependencies. And to conclude,
I want to make one final point: if we do not take into account the cyber risks, or if we underestimate them in this exact topic, the Internet of Bodies, then we will have to pay a very high price, and that is human life. Thank you.
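Irina's two-group classification, wearables that only transmit data versus devices that can act on the body, could be captured in a small data model like the one below. The class and field names are illustrative assumptions, not an established taxonomy.

```python
# Rough sketch of the two risk classes described above: devices that only
# transmit data carry a data-leakage risk, while devices with an actuator
# add the risk of unauthorized control and physical harm.

from dataclasses import dataclass

@dataclass
class IoBDevice:
    name: str
    transmits_data: bool   # sends readings to central storage
    can_actuate: bool      # can physically influence the wearer's body

def risk_profile(device: IoBDevice) -> list[str]:
    risks = []
    if device.transmits_data:
        risks.append("data leakage / privacy breach")
    if device.can_actuate:
        risks.append("unauthorized control / physical harm")
    return risks

fitness_band = IoBDevice("fitness band", transmits_data=True, can_actuate=False)
insulin_pump = IoBDevice("insulin pump", transmits_data=True, can_actuate=True)
```

On this model, the insulin pump inherits both risk classes, which matches the point that actuating implants need protection against unacceptable events, not only against data leaks.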

Alina Ustinova: You covered a lot of topics that are actually at risk today. And just to shoot a quick question: you mentioned several cases where devices were hacked and very bad things happened, where human lives were lost. Can you talk a little bit about human sanity? For example, there was an episode of Black Mirror where a person was wearing lenses that recorded his memories, and he got caught up in replaying them over and over. So is the point to regulate the usage of IoB devices by humans themselves, so they don't harm themselves with them, or is it a matter of free will, and we should leave the fate of humans in their own hands, just like that, I think?

Irina Pantina: Thank you for a very important question. I would say that individuals, on the one hand, are rather smart, and they can understand what impact and influence different devices have on their lives from the positive side of usage, but they underestimate the risks. From my perspective, there must be an open discussion of these topics, started by the experts, and there must be a balance of views between the enthusiasts who want to try these devices and those who understand the risks, so as to discover the things that we do not know at the moment. Some risks arise and become clear only after a period of usage, after entering different situations. And in this sense, so-called ethical hackers can play a very important role, because they can test from unpredictable angles, from unpredictable points of view, to identify the risks, to find ways to fix them, and, as a result, to provide secure solutions. Thank you.

Alina Ustinova: Thank you very much. Yes, I guess you're right on that point. And now we also need to cover another risk that is not always properly recognized when we speak about using these devices. When we use devices like a phone or a tablet, we do not really see the difference between different brands of phone, or maybe their usage, but if we put a device into the human body, it will be seen. So the main question is: won't some IoB devices cause a segregation of people? For example, one person will have a very costly, very high-priced device within their body, and they will have advantages in their life that a person with a cheaper device doesn't. So how can we avoid this? Gabriella, if you can share your point of view with us.

Gabriella Marcelja: Yes, thank you. Thank you very much for inviting me to this panel. So, in terms of segregation, you are actually hitting the jackpot with this question, because we will have a two-tiered future: the enhanced people, and the unenhanced, those who keep living a regular, normal life as we do right now. When we talk about body augmentations powered by AI and Internet of Bodies technologies, such as neural implants, biometric prosthetics, and so on, these technologies could actually widen the existing gap between socioeconomic classes. This is something that we need to understand, because the enhanced will definitely have advantages in intelligence and strength, and maybe in health. So this is for sure raising a few ethical questions going forward. Perhaps we should also think about how to answer some of the topics related to access to augmentation: should body enhancement be treated as a basic right or a luxury? Topics about authenticity: will humanity actually lose the line between natural and synthetic existence? You also have topics related to decision autonomy, as was mentioned by the colleague here before: who actually decides what an acceptable body augmentation is? Is it the government, corporations, individuals,
or doctors? So all of these are actually topics that need to be discussed, because we need to think about patient-centered healthcare going forward. Who is the ultimate decision maker? Of course it's us, the individuals, but the doctors are the ones with the knowledge. On the other hand, they have the knowledge of the body, while the technology is most probably a privately owned entity which knows the technology, so they will sell the ideas to the doctors. So it's a very complex setting, I would say, and the ecosystem will of course be regulated by the government. So this is something that we need to think about in general when it comes to this future socioeconomic integration that is going to happen; there is no way out of it. Whenever you get new technologies and try to do something new, for sure some impact, one way or another, is going to happen. Here we can also mention the so-called AI-powered workers. We're not talking about robots, but about enhanced humans that could outperform normal workers like us right now. So eventually you won't need a lot of vitamins, or, I don't know, anti-stress pills or vitamin D every day; perhaps we will eventually be a little bit faster and smarter in that sense. We also have the question of access to health-augmenting implants. This could of course raise the question of wealth, so it will create divides: the augmented elite, if you will, and the unprivileged bio-traditionalists, let's put it that way. Of course you can always pick, but if you want to power up, perhaps some people will choose a new way of existence. And we don't know how this is going to be perceived. Is it going to be perceived as something cool? Or is it going to be perceived as, oh, you're sick? So it's a little bit of this type of thinking that we could discuss in general.
And I can't say I have been on many panels discussing these kinds of issues, so I am very glad we keep talking about them. In general, global inequality will definitely be a topic: wealthy nations will adopt augmentation technology faster, and we will see marginalized groups of countries. We definitely need to ensure equal access wherever it is in the interest of the patient. On this point, if I may, I would continue with the harm these technologies can do, because we need to understand that IoB devices, smart lenses for example, are capable of recording everything you see. That could revolutionize healthcare and law enforcement, but it also poses major threats. On one side there are surveillance and privacy risks: such lenses could easily become covert surveillance tools, and without regulation they could record people without their consent and enable governments and companies to track citizens' every move. We already have devices doing all of this; putting them on or in our bodies will only amplify it. They could be used to create deepfakes by manipulating the video content recorded through these lenses, and to enable any type of exploitation you can think of, including corporate manipulation. The corporate world is focused on profit; a company needs profit to be sustainable, otherwise it has to lay people off, and that is no way to do business. So recording users' environments could, of course, be used to deliver ultra-targeted ads. Meta, with Facebook, Instagram, and similar platforms, already treats the data we generate as its actual capital.
But this is also an open door to blackmail and social control by hackers who gain access to footage and exploit individuals' private moments. In this context we definitely need to think about the moral implications and about comprehensive governance strategies: mandatory ethical approval of devices, strict licensing, penalties for any misuse, and transparent algorithms with oversight of how data is stored, shared, and accessed. AI still has big problems to solve here, so we hope in the intelligence of the experts and analysts working on the black-box problem and on the biases AI creates. In any case, this raises an ecosystem of cybersecurity risks, AI-driven privacy violations, and cybercrime that needs to be monitored and discussed before we become guinea pigs without a clear understanding of where this can go and what we should do. As individuals and as patients, we will surely be the ultimate decision makers; we will surely sign papers saying we understand the risks; there is no other way out of this. But at the same time, the supply chain, from the idea to the development and then to the implantation in the body, needs to be monitored, and so do manufacturing and repair. If something breaks, where do you go? "My implant is not working, I'm glitching, where do I go?" Will we have repair centers, like the ones where you take your phone? That part of the supply chain will need to be thought through together with all the ethical implications at hand.
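One way to ground the call above for transparency about how data is stored and shared is a tamper-evident audit log of every access to IoB device data. The sketch below is purely illustrative: a minimal hash-chained log in Python, where the event fields and the chaining scheme are assumptions for demonstration, not any existing standard.

```python
import hashlib
import json
import time

class AuditLog:
    """Tamper-evident, append-only log of data-access events.

    Each entry's hash covers the previous entry's hash, so any
    later modification of an earlier record breaks the chain.
    """

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, actor, device_id, action):
        entry = {
            "ts": time.time(),
            "actor": actor,       # who accessed the data
            "device": device_id,  # which implant or wearable
            "action": action,     # e.g. "read", "share", "export"
            "prev": self._last_hash,
        }
        payload = json.dumps(entry, sort_keys=True).encode()
        entry["hash"] = hashlib.sha256(payload).hexdigest()
        self._last_hash = entry["hash"]
        self.entries.append(entry)
        return entry["hash"]

    def verify(self):
        """Recompute the chain; False if any entry was altered."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev"] != prev:
                return False
            payload = json.dumps(
                {k: e[k] for k in ("ts", "actor", "device", "action", "prev")},
                sort_keys=True,
            ).encode()
            if hashlib.sha256(payload).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record("clinic-42", "lens-001", "read")
log.record("vendor-api", "lens-001", "share")
assert log.verify()
log.entries[0]["actor"] = "attacker"  # tampering breaks the chain
assert not log.verify()
```

A regulator-facing system would of course need real identity binding and external anchoring of the chain; the point here is only that "who accessed what, and when" can be made verifiable rather than taken on trust.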

Alina Ustinova: You actually covered a lot of the things we are trying to understand about these new emerging technologies. You mentioned that we have phones and use them every day of our lives, and that putting devices inside our bodies will definitely change things. So I will go to James and start with this question: James, how can a person be offline if a device implanted in them keeps them constantly online? And please share your view on how the Internet of Bodies will develop in the future.

James Nathan Adjartey Amattey: Yes, thank you very much. My name is James, and I am speaking from Ghana; I hope I am coming through clearly. The Internet of Bodies is not new, but it is one of the emerging components of the Internet of Things, embodying the integration of chips into everyday devices to track, collect, and analyze data. In health, for example, we look at three phases of care: preventive, curative, and protective. Take a disease like asthma: a doctor wants to know what the patient's triggers are, when they happen, and how often, and not all of that consultation can be done in the hospital. There are also autoimmune diseases with no known cure, and the use of IoB can help doctors, hospitals, and health facilities determine what causes them, what the triggers are, and what can be done to prevent them. Unfortunately, there is life after the hospital, and the person has a life beyond the condition they face, so it is very difficult to track them constantly and put them online, and there is a cost to doing so. There are risks: other members of the panel have spoken about cybersecurity risks and data risks, but I would look more at social risks. For example, there is the problem of dependence: how do we make sure the patient does not become too dependent or over-reliant on the implant, unless it is an artificial limb that actually helps the patient walk? There is, for instance, a class of pills with cameras and sensors that stay in the body and record internal activity, and we have to ask ourselves: how long is that allowed to stay in the body?
How much influence does it have on things like genome editing and DNA programming, and how much can that alter the unique character of the person? And what kind of data are these things sending? We have CRISPR-Cas9, a form of genome editing derived from a bacterial immune system that can modify the DNA of living cells. So there is already research, and there are elements of IoB, that could touch that component of your DNA. Such modification could then lead to changes in character, behavior, and influences, with very long-term effects. One of those issues is manipulation and social engineering, what we might call hyper-personalization syndrome. Today, if you and a friend have a conversation on WhatsApp or another social media platform, you can jump onto a different platform and immediately see an ad for exactly what you were talking about. Now imagine that moving into your body, actually understanding what you are feeling at this very moment. That could have very serious consequences. When we talk about suicide and mental health, we have to ask how far a device could manipulate and influence suicidal thoughts, and how far it could push a person toward them. This real-time modification can lead to real-time manipulation, where the person appears offline but an internal, online modification is taking place. And then we have what we call data breaches.
Data breaches happen when data that is supposed to be stored in a particular place is accessed by an unauthorized third party, whether through hacking, brute force, or spyware. If it is an external software system, preventive measures can be implemented; but if it is in the body, and the body itself is being misconfigured, how do we reconfigure it? We also have to look at updates. Updates are a very thin line in human-computer interaction. If your phone updates on its own, the update can change the behavior of applications, the data they use, and the permissions they have. So for the Internet of Things and the Internet of Bodies, we have to design update mechanisms and frameworks that do not have adverse effects on the body. We have to look at compatibility, at biology, at device versions, and those things are currently not regulated. The health field may be a bit different, and I am not a doctor, but from a national regulatory perspective I would like to see a comprehensive study and a comprehensive law that covers devices, their updates, and the changes those updates bring to the body. Now, all of this real-time tracking of a person who is online, even though the person exists offline, can lead to predictive profiling. In analytics today we do what we call persona definition, looking at the psychographics of a person: what they think, what they feel, and what influences their buying decisions.
With IoB and DNA programming, companies or people with malicious intentions could use the data our bodies provide through these devices to profile us, and then use predictive analysis to determine what someone is most likely to buy. That takes it from device-based marketing strategies to tweaking devices that are inside your body. For example, the new wave of wearable devices for the eye can act on the pupil and iris and send different signals to the brain, and this leads to what we call attacks on cognitive security. Once you bring the brain into the picture, you have an attack on the person's cognition: exploitation of cognitive biases, and brain-computer interface vulnerabilities. Take the chip implanted into the brain to allow differently abled people to interact with computers and move things. How do we balance re-enabling and integrating that person with keeping them independent of the device, with the personal drive and the ability to turn it on and off? These are some of the things we should look at: how can a person turn off the devices that are in them, how much power do they have over those devices, is it manufacturer versus patient and who wins, and if the government becomes a third party between the manufacturer and the patient, who wins the ability to determine how, when, and where their data is used?
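James's earlier point about update mechanisms can be made slightly more concrete: at a minimum, an implant should refuse any firmware image that fails an integrity check against a key provisioned at manufacture. The following is a minimal sketch, with the caveat that a real device would use asymmetric signatures (e.g. Ed25519) and secure hardware; the key, function names, and package format here are hypothetical.

```python
import hashlib
import hmac

# Verification key provisioned at manufacture time.
# (An HMAC stands in for a real asymmetric signature here,
# purely to keep the sketch within the standard library.)
DEVICE_KEY = b"example-provisioned-key"

def sign_update(firmware: bytes, key: bytes = DEVICE_KEY) -> bytes:
    """Manufacturer side: tag the firmware image."""
    return hmac.new(key, firmware, hashlib.sha256).digest()

def apply_update(firmware: bytes, tag: bytes, key: bytes = DEVICE_KEY) -> bool:
    """Device side: refuse any image whose tag does not verify."""
    expected = hmac.new(key, firmware, hashlib.sha256).digest()
    if not hmac.compare_digest(expected, tag):
        return False  # reject: tampered or unsigned update
    # ... flash the firmware only after verification succeeds ...
    return True

fw = b"implant-firmware-v2"
tag = sign_update(fw)
assert apply_update(fw, tag)             # genuine update accepted
assert not apply_update(fw + b"x", tag)  # modified image rejected
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` check can leak timing information to an attacker probing the update interface. The regulatory gap James describes is precisely that nothing today mandates even this baseline for body-resident devices.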

Alina Ustinova: Thank you very much. You covered a lot of issues, and it was very interesting to listen to you. You mentioned genome editing, which is a very interesting aspect that no one had raised before. Before we move to the Q&A session, I have a question for every speaker. Yesterday Henry Kissinger said a very interesting thing: that to survive, humanity eventually needs a symbiosis with AI. Maybe you can comment. Do you agree with this point of view? Do you think we will eventually become part AI, part human? And then we move to Q&A. Just a short answer. Who wants to start? Irina, please.

Irina Pantina: Let me start; let me be the first. I believe that humans are smart enough to apply AI in the fields where it has a positive impact rather than a negative one, and we have enough power, at different levels, to fix the negatives. From time to time, I am sure, there will be examples of misuse of this technology, but in parallel we will have examples of how to fix it.

Alina Ustinova: Does anyone want to add something and share their opinion? Gabriella?

Gabriella Marcelja: I'll be very quick. I think this is a philosophical question: where do we want to go as humans? At the level of Kissinger, at the level of global strategists, people are already trying to understand what comes next, so perhaps this is the way to go. I think it is an inevitable evolution of the technology. Once we reach a certain limit of development, whether as a country or a company, people need to sit, think, and decide where to go next. It is a matter of understanding the possible futures and whether we like those options or not. At this moment, if everyone is feeling excited about this, I think humanity wants to try it out without knowing what will happen. It is the curiosity inside us that wants to keep moving: let me see what happens, even though we most probably will not be happy with the result. It is human nature to keep pushing into the unknown even when we already have everything. We definitely need to fix the basic problems of the world first, and there is still quantum computing with all the cybersecurity issues that come with it; we are far from getting all the AI chips running, and there are so many things left for humans to do. But I think this is an additional curiosity that people simply want to see happen, and I guess we will live long enough to see what happens.

Alina Ustinova: Okay, if no one has anything to add, we are moving to the questions. If you have a question, please raise your hand, and if anyone online has a question, please write it and Anna will read it to our speakers. Do we have any questions? No one? Okay, then I will ask a question I am very interested in. We have covered a lot of aspects, but I like futuristic works of art, especially games and movies, and they often show a very gloomy future if we adopt these technologies. In many games, if you have the richer medical insurance and the costly implants, you get the medical attention you need and you live longer than the person who does not. Maybe it is a very cynical question, but will the Internet of Bodies eventually lead to a situation where companies, not governments, decide who lives and who dies, depending on what kind of implants a person has in their body? Are we moving from government regulation to company regulation of this technology? If anyone has anything to add, you are welcome.

James Nathan Adjartey Amattey: Yes, if I may. I think it has to go hand in hand. We cannot regulate only the companies and leave out the governments; governments also have to regulate themselves. In this era of wars and interruptions to global peace, governments could use some of these technologies to reprogram people and give them extra abilities, so it is a matter of limiting how far we can go, because we do not want an open playing field where anybody can do anything. On one hand, you regulate the companies producing these things; on the other hand, the government has to regulate itself to keep from using this as a weapon rather than a tool. A knife in the kitchen is used for cutting vegetables, but put the knife into the hands of a killer and it becomes a dangerous tool. The technology in itself is not dangerous, but once it ends up in the hands of people with malicious intent, it acquires that dangerous element. So I think we need co-regulation: governments at the national level, and the international bodies they are part of, like the UN, the ITU, and other international regulatory bodies, have to co-regulate, together with the governments that form part of them, to make sure the technology does not get out of hand. Thank you.

Alina Ustinova: Gabriella, you wanted to add?

Gabriella Marcelja: Just very quickly, I want to draw a parallel, because your question, who will live and who will die and who decides, describes something that is already happening. Think of the waiting list in hospitals for an organ transplant. It is the same thing right now: there is a line and you have to wait, whether you are a billionaire or have nothing. The organ-transplant parallel is very similar; augmentation just means fixing a problem rather than receiving a new organ. And from how I see it, I will not say which sector will come up with this, but I can already imagine that we will have fake implants. In the gray economy, people will sell you the same product under a different name; it is inevitable. People will try to copy, they will try to do business one way or another, and they will produce harm. And the pitch will be: you do not have money for the good-quality product? No problem, here is a cheap version; maybe you die tomorrow, but this is your chance of getting augmented. I think this will happen if we go into this sector, because even with organ transplants today there is a line, and my specialization is in criminal law, so I understand the unfortunate situation underneath: we do have criminal organizations who work on skipping the line, if you will. So this will happen.

And I think it is already happening in the system we have now, with the problems we currently have. The problem is just going to be augmented, let's put it that way.

Alina Ustinova: Lev, you wanted to add something?

Lev Pestrenin: Yes, thank you. I would like to add something about control. I think you are right: we already see that private companies have a lot of control over our phones and devices, and sometimes that control is much stronger than what governments can provide. So there should be a balance, with control coming from all sides, from all participants in this process: private companies first, governments second, and people third. As was said yesterday in a panel discussion, knowledge is power, and it can help us survive in the future. So we should learn about new technologies and be aware of how to control and manage them.

Alina Ustinova: Do we have any questions now? Yes, you have one? Can you please pass the mic to the person? Thank you. Really interesting topics.

Audience: Given the work I do, I often think about the risks and challenges of controlling these types of technologies, as well as harnessing their power. So my question is: do we feel there is enough understanding of the technology to create the committees, organizations, and frameworks that would let us make the most of what IoB can enable, while still protecting the individuals who will ultimately bear the risks of the implants as well as the benefits? Do you know what I mean? I want to understand: do we, as an international community, have the knowledge and capacity to harness the benefits while being able to understand and control the risks of the technology?

Alina Ustinova: You are basically asking: do we know enough to do no harm? Does anyone want to answer?

James Nathan Adjartey Amattey: Maybe I can answer that a bit. I think it depends on where you are coming from. For example, I live in Ghana, I am from Africa, and when it comes to IoB we are mostly consumers: most of the wearables we get are imported, and they are often seen as a lifestyle thing. There is, for example, the new trend of BBLs that is seen as a lifestyle choice, but people do not fully understand the technological implications, where this comes from, or where it can lead. So I think we need to put more effort into community awareness and education about what is truly at stake and how best to move forward. Because, as was said earlier, some of these things are inevitable; they will definitely happen. It is just a matter of whether we are ready when they happen, or whether we allow them to overtake us and then have to play catch-up on things like regulation and controlling the spread and production of these devices. Much like with social media, we relaxed a bit and then had to play catch-up, and you can see a lot of that in the AI space as well. We want to stay ahead of the trend, which is very difficult, I must say, but we have to start somewhere. We have to hope we stay ahead, because once this overtakes us, it is actually part of the human body, and it will be very difficult to get rid of just by a policy or a regulation.

Alina Ustinova: Thank you. Lev, if you want to add something.

Lev Pestrenin: Thank you. Is it okay? I think it is no longer possible to hold all the knowledge in the world; there are too many different specialties. Take artificial intelligence: if doctors use AI, they alone cannot be confident that it is safe, and engineers alone cannot be confident either, because these are multidisciplinary projects and technologies. If we, the people, want to have some control on our side, the future belongs to multidisciplinary teams, groups, and committees, and it even helps to have friends with knowledge in different fields. Communication is our key opportunity to survive.

Alina Ustinova: Thank you very much. I guess we are out of time, so thank you for a wonderful discussion; we covered a lot of aspects. If you want to exchange contacts with the listeners, please feel welcome. Thank you to our online speakers who joined us today as well, and have a good time here. Thank you very much.

L

Lev Pestrenin

Speech speed

87 words per minute

Speech length

1282 words

Speech time

879 seconds

AI and IoB can improve healthcare quality and access

Explanation

Lev Pestrenin argues that artificial intelligence and Internet of Bodies technologies can enhance the quality of healthcare and improve access to medical services. He suggests that these technologies can assist doctors in diagnostics and treatment, leading to better patient outcomes.

Evidence

Example of AI implementation in radiology in Moscow, where AI processes images to help radiologists make diagnoses faster and more accurately.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Igor Sergeyev

Gabriella Marcelja

Agreed on

IoB technologies can improve healthcare

IoB raises issues of human autonomy and decision-making

Explanation

Lev Pestrenin highlights the potential impact of IoB technologies on human autonomy and decision-making. He suggests that as these technologies become more integrated into our bodies and lives, questions arise about who ultimately controls the devices and the data they generate.

Major Discussion Point

Ethical and social implications of IoB

Balancing innovation and risk mitigation for IoB is challenging

Explanation

Pestrenin acknowledges the difficulty in balancing the potential benefits of IoB innovations with the need to mitigate associated risks. He emphasizes the importance of multidisciplinary approaches to address these challenges effectively.

Evidence

Mentions the need for multidisciplinary teams and committees to ensure comprehensive understanding and control of IoB technologies.

Major Discussion Point

Future development and regulation of IoB technologies

I

Igor Sergeyev

Speech speed

93 words per minute

Speech length

386 words

Speech time

248 seconds

IoB devices enable remote patient monitoring and early disease detection

Explanation

Igor Sergeyev contends that Internet of Bodies devices allow for remote monitoring of patients’ vital signs and health indicators. This enables early detection of diseases and health issues, particularly beneficial for patients in remote areas with limited access to healthcare facilities.

Evidence

Mentions various devices such as ECG recorders, glucose and cholesterol level analyzers that can be used for remote patient monitoring.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Lev Pestrenin

Gabriella Marcelja

Agreed on

IoB technologies can improve healthcare

G

Gabriella Marcelja

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

IoB implants and wearables can enhance medical diagnostics and treatment

Explanation

Gabriella Marcelja argues that IoB implants and wearable devices can significantly improve medical diagnostics and treatment. These technologies can provide real-time health data, enabling more accurate and timely medical interventions.

Evidence

Mentions neural implants and biometric prosthetics as examples of IoB technologies that can enhance medical capabilities.

Major Discussion Point

Medical applications of Internet of Bodies (IoB) technologies

Agreed with

Lev Pestrenin

Igor Sergeyev

Agreed on

IoB technologies can improve healthcare

IoB may widen socioeconomic gaps and create “augmented elites”

Explanation

Marcelja raises concerns that IoB technologies could exacerbate existing socioeconomic inequalities. She suggests that access to advanced IoB enhancements might be limited to wealthy individuals, creating a divide between ‘augmented elites’ and those without such enhancements.

Evidence

Discusses the potential for a ‘tiered future’ where enhanced individuals have advantages in intelligence, strength, and health over unaugmented individuals.

Major Discussion Point

Ethical and social implications of IoB

IoB technologies pose risks of surveillance and privacy violations

Explanation

Marcelja highlights the potential for IoB devices to be used as surveillance tools, posing significant risks to privacy. She argues that without proper regulation, these technologies could enable unauthorized tracking and data collection.

Evidence

Mentions smart lenses as an example of IoB technology that could record everything a person sees, potentially leading to privacy violations and surveillance risks.

Major Discussion Point

Cybersecurity risks of IoB technologies

Agreed with

Irina Pantina

Agreed on

IoB technologies pose cybersecurity risks

I

Irina Pantina

Speech speed

107 words per minute

Speech length

1250 words

Speech time

698 seconds

IoB devices are vulnerable to hacking and data breaches

Explanation

Irina Pantina emphasizes that IoB devices are susceptible to hacking and data breaches. She argues that these vulnerabilities could lead to serious consequences, including potential harm to users’ health and well-being.

Evidence

Cites examples of potential attacks on medical devices like insulin pumps and pacemakers, which could have life-threatening consequences if hacked.

Major Discussion Point

Cybersecurity risks of IoB technologies

Agreed with

Gabriella Marcelja

Agreed on

IoB technologies pose cybersecurity risks

Cybersecurity measures must be implemented to protect IoB users

Explanation

Pantina stresses the importance of implementing robust cybersecurity measures to protect IoB users. She argues that a multi-stakeholder approach involving producers, cybersecurity providers, and governments is necessary to ensure the safety and security of IoB technologies.

Evidence

Suggests practices such as rigorous testing of devices, development of specific security solutions, and establishment of regulatory frameworks.

Major Discussion Point

Cybersecurity risks of IoB technologies

Differed with

James Nathan Adjartey Amattey

Differed on

Approach to regulating IoB technologies

Multi-stakeholder cooperation is needed to govern IoB responsibly

Explanation

Pantina advocates for a collaborative approach to governing IoB technologies. She argues that cooperation between various stakeholders, including producers, cybersecurity providers, and governments, is crucial for responsible development and implementation of IoB.

Evidence

Outlines specific responsibilities for different stakeholders, such as producers implementing cybersecurity measures, cybersecurity providers developing specialized solutions, and governments establishing regulatory frameworks.

Major Discussion Point

Future development and regulation of IoB technologies

J

James Nathan Adjartey Amattey

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Regulation is needed to ensure equal access and prevent misuse of IoB

Explanation

James Nathan Adjartey Amattey argues for the necessity of regulation to ensure equitable access to IoB technologies and prevent their misuse. He emphasizes the importance of balancing innovation with risk mitigation to protect users and society at large.

Evidence

Discusses the potential for IoB technologies to be used as weapons or tools for manipulation if not properly regulated.

Major Discussion Point

Ethical and social implications of IoB

Differed with

Irina Pantina

Differed on

Approach to regulating IoB technologies

Public awareness and education about IoB is crucial

Explanation

Amattey stresses the importance of raising public awareness and providing education about IoB technologies. He argues that this is essential for informed decision-making and responsible use of these technologies.

Evidence

Cites examples from Ghana where imported wearables are seen as lifestyle products without full understanding of their implications.

Major Discussion Point

Future development and regulation of IoB technologies

Agreements

Agreement Points

IoB technologies can improve healthcare

Lev Pestrenin

Igor Sergeyev

Gabriella Marcelja

AI and IoB can improve healthcare quality and access

IoB devices enable remote patient monitoring and early disease detection

IoB implants and wearables can enhance medical diagnostics and treatment

The speakers agree that Internet of Bodies technologies have significant potential to enhance healthcare through improved diagnostics, remote monitoring, and better access to medical services.

IoB technologies pose cybersecurity risks

Irina Pantina

Gabriella Marcelja

IoB devices are vulnerable to hacking and data breaches

IoB technologies pose risks of surveillance and privacy violations

Both speakers highlight the cybersecurity vulnerabilities associated with IoB technologies, emphasizing the risks of hacking, data breaches, and privacy violations.

Similar Viewpoints

Both speakers emphasize the need for robust regulatory frameworks and security measures to protect users and ensure responsible development of IoB technologies.

Irina Pantina

James Nathan Adjartey Amattey

Cybersecurity measures must be implemented to protect IoB users

Regulation is needed to ensure equal access and prevent misuse of IoB

Both speakers highlight the importance of addressing potential social inequalities and raising public awareness about the implications of IoB technologies.

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

Unexpected Consensus

Multi-stakeholder approach to IoB governance

Irina Pantina

Lev Pestrenin

James Nathan Adjartey Amattey

Multi-stakeholder cooperation is needed to govern IoB responsibly

Balancing innovation and risk mitigation for IoB is challenging

Regulation is needed to ensure equal access and prevent misuse of IoB

Despite coming from different backgrounds (cybersecurity, healthcare, and regional perspective), these speakers unexpectedly agree on the need for a collaborative, multi-stakeholder approach to governing IoB technologies, balancing innovation with risk mitigation.

Overall Assessment

Summary

The main areas of agreement include the potential of IoB to improve healthcare, the need for robust cybersecurity measures, the importance of regulation and multi-stakeholder governance, and the necessity of addressing potential social implications.

Consensus level

There is a moderate level of consensus among the speakers, particularly on the need for responsible development and governance of IoB technologies. This consensus suggests a shared recognition of both the potential benefits and risks associated with IoB, implying a cautious but optimistic approach to future development and implementation of these technologies.

Differences

Different Viewpoints

Approach to regulating IoB technologies

Irina Pantina

James Nathan Adjartey Amattey

Cybersecurity measures must be implemented to protect IoB users

Regulation is needed to ensure equal access and prevent misuse of IoB

While both speakers agree on the need for regulation, Pantina emphasizes cybersecurity measures and a multi-stakeholder approach, while Amattey focuses more on ensuring equal access and preventing misuse through government regulation.

Unexpected Differences

Perspective on the inevitability of IoB adoption

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

While not explicitly stated as a disagreement, there seems to be an unexpected difference in perspective on the inevitability of IoB adoption. Marcelja appears to view the adoption of IoB as somewhat inevitable, focusing on its potential consequences, while Amattey emphasizes the need for education and awareness, implying that adoption can be more controlled or guided.

Overall Assessment

Summary

The main areas of disagreement revolve around the approach to regulating IoB technologies, the balance between innovation and risk mitigation, and the potential social implications of IoB adoption.

Difference level

The level of disagreement among the speakers appears to be moderate. While there is general consensus on the potential benefits and risks of IoB technologies, speakers differ in their emphasis on specific aspects and proposed solutions. These differences reflect the complex and multifaceted nature of IoB technologies, highlighting the need for interdisciplinary approaches and continued dialogue to address the challenges and opportunities presented by IoB.

Partial Agreements

Both speakers recognize the potential social implications of IoB technologies, but they propose different solutions. Marcelja highlights the risk of widening socioeconomic gaps, while Amattey emphasizes the need for public awareness and education to address these issues.

Gabriella Marcelja

James Nathan Adjartey Amattey

IoB may widen socioeconomic gaps and create “augmented elites”

Public awareness and education about IoB is crucial

Takeaways

Key Takeaways

Resolutions and Action Items

Unresolved Issues

Suggested Compromises

Thought Provoking Comments

Now we see that artificial intelligence, like Internet of Bodies, it’s our possible future, which could help us to live longer and to be healthier during our life.

speaker

Lev Pestrenin

reason

This comment frames AI and IoB technologies in a positive light as tools for improving human health and longevity, setting an optimistic tone for the discussion.

impact

It prompted further exploration of the potential benefits and risks of these technologies in healthcare and beyond.

Sometimes when we speak about internet of bodies, we need to understand that if something is integrated into your body, that means that you are a computer yourself and you can be hacked as a computer.

speaker

Alina Ustinova

reason

This insight highlights a key security concern with IoB devices, framing humans with implants as potentially hackable systems.

impact

It shifted the discussion towards cybersecurity risks and ethical concerns around IoB technologies.

So establishing the rules for how to use these devices, how to let them enter the markets and how to be affordable for different groups of people. Without any dependencies.

speaker

Irina Pantina

reason

This comment emphasizes the need for regulatory frameworks and accessibility considerations for IoB technologies.

impact

It broadened the conversation to include policy and equity issues surrounding IoB adoption.

So wealthy nations will adopt augmentation tech faster. So we will see a marginalized group of countries. And we definitely need to eventually ensure equal access if this is in the interest of the patient.

speaker

Gabriella Marcelja

reason

This insight raises important concerns about global inequality in access to IoB technologies.

impact

It prompted discussion of the potential for IoB to exacerbate existing social and economic divides.

Now, if it’s an external software system, you know, there could be preventive measures that could be implemented, but if it’s in the body, and then the body is now being misconfigured, how do we reconfigure that?

speaker

James Nathan Adjartey Amattey

reason

This comment highlights unique challenges of IoB devices compared to external technologies, particularly around updates and fixes.

impact

It deepened the discussion on technical and ethical challenges specific to internal IoB devices.

Overall Assessment

These key comments shaped the discussion by highlighting both the potential benefits and significant risks of Internet of Bodies technologies. The conversation evolved from initial optimism about health benefits to deeper exploration of cybersecurity concerns, regulatory needs, global inequality issues, and unique technical challenges of internal devices. This progression reflected a nuanced, multi-faceted examination of IoB’s implications for individuals and society.

Follow-up Questions

How will the integration of Internet of Bodies devices affect human evolution?

speaker

Alina Ustinova

explanation

This question explores the long-term implications of IoB technology on human biology and society.

How can we manage cybersecurity risks related to Internet of Bodies devices?

speaker

Alina Ustinova

explanation

This addresses the critical need for protecting individuals from potential hacking or unauthorized access to their implanted devices.

How can we prevent segregation or inequality caused by different levels of access to IoB devices?

speaker

Alina Ustinova

explanation

This question raises concerns about potential social and economic divides created by advanced medical technologies.

How can a person be offline if a device implanted in them keeps them constantly online?

speaker

Alina Ustinova

explanation

This explores the implications of constant connectivity on privacy and personal autonomy.

How can we ensure ethical approval, strict licensing, and penalties for misuse of IoB devices?

speaker

Gabriella Marcelja

explanation

This area of research is crucial for developing comprehensive governance strategies for IoB technologies.

How can we create a balance between re-enabling differently abled people through IoB devices and maintaining their independence?

speaker

James Nathan Adjartey Amattey

explanation

This question addresses the ethical considerations of enhancing human capabilities while preserving individual autonomy.

Will Internet of Bodies technologies eventually lead to private companies regulating who lives and dies based on implant access?

speaker

Alina Ustinova

explanation

This explores the potential shift in power dynamics between governments, companies, and individuals in healthcare decision-making.

Do we have enough understanding of IoB technologies to create effective frameworks for harnessing benefits while protecting individuals?

speaker

Audience member

explanation

This question addresses the readiness of the international community to regulate and manage IoB technologies.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

Open Forum #22 Citizen Data to Advance Human Rights and Inclusion in the Di

Session at a Glance

Summary

This discussion focused on the importance of citizen data in fostering inclusive digital environments and promoting human rights. Experts from various fields shared insights on engaging marginalized communities in data governance and digital transformation processes.

The panel emphasized the need for meaningful citizen participation across the entire data value chain to ensure digital systems and policies reflect diverse experiences and priorities. They highlighted tools and frameworks, such as the Copenhagen Framework and the Digital Rights Check, designed to promote responsible use of citizen-generated data and assess human rights risks in digital projects.

Participants stressed the importance of involving women, girls, persons with disabilities, and other marginalized groups in data governance practices. Examples were shared of how citizen data initiatives have improved accessibility and representation for these communities in digital spaces and policy-making processes.

The discussion also touched on the role of national statistical offices in integrating citizen-generated data into official data systems, particularly for addressing data gaps in areas like gender-based violence and disaster impact assessment. Ethical concerns and potential risks associated with data collection and sharing were addressed, with emphasis on the need for safeguards and standards.

Collaboration between civil society, government bodies, human rights institutions, and the private sector was identified as crucial for maximizing the impact of citizen data initiatives. The panel concluded by highlighting the need for increased investment in inclusive citizen participation in digital spaces and the importance of capturing and communicating the real-world impact of citizen data projects to garner further support and resources.

Keypoints

Major discussion points:

– The importance of citizen-generated data for inclusive digital transformation

– Tools and frameworks for ensuring human rights and ethics in digital data collection

– Involving marginalized groups like women, persons with disabilities, and children in data governance

– Challenges and opportunities for national statistical offices to incorporate citizen data

– The role of public-private partnerships in advancing citizen data initiatives

The overall purpose of the discussion was to explore how citizen-generated data can foster inclusion and human rights in public services and policies in the digital age. The panelists shared experiences and insights on involving citizens, especially marginalized groups, in data production and governance processes.

The tone of the discussion was collaborative and solution-oriented. Panelists spoke enthusiastically about the potential of citizen data while also acknowledging challenges. There was a sense of urgency about the need to make digital spaces and data processes more inclusive. The tone remained positive and constructive throughout, with panelists building on each other’s points and offering concrete suggestions for advancing the field of citizen data.

Speakers

– Papa Seck: Chief of the Research and Data Section at UN Women, member of the Steering Committee of the Collaborative on Citizen Data

– Dr. Hem Raj Regmi: Deputy Statistician in the Nepal National Statistical Office

– Joseph Hassine: AI for Social Good, Google.org

– Line Gamrath Rasmussen: Senior Advisor on Human Rights and Tech at the Danish Institute for Human Rights

– Bonnita Nyamwire: Research Director at Pollicy

– Elizabeth Lockwood: Representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Development, member of the Steering Committee of the Collaborative on Citizen Data

Additional speakers:

– Howie Chen: Works at the UN Statistics Division

– Dina: Audience member from Brazil

Full session report

Expanded Summary of Discussion on Citizen Data for Inclusive Digital Transformation

This discussion brought together experts from various fields to explore the importance of citizen-generated data in fostering inclusive digital environments and promoting human rights. The panel focused on engaging marginalized communities in data governance and digital transformation processes, emphasizing the need for meaningful citizen participation across the entire data value chain.

Opening Statements and Key Themes:

1. Importance of Citizen Data for Inclusive Digital Transformation

Panelists strongly agreed on the critical role of citizen-generated data in creating inclusive digital environments. Papa Seck, Chief of the Research and Data Section at UN Women, emphasized that citizen data is essential for fostering inclusive digital spaces. Bonnita Nyamwire, Research Director at Pollicy, argued that citizen data helps identify and address systemic biases, highlighting her work with women politicians and women in media through Pollicy’s programs. Joseph Hassine from Google.org stressed its importance for accurately understanding world challenges and informed policymaking. Elizabeth Lockwood, representing the UN Stakeholder Group of Persons with Disabilities, highlighted how citizen data can fill critical gaps in information about marginalized groups, citing an example of how such data helped address barriers for persons with disabilities during the COVID-19 pandemic. Dr. Hem Raj Regmi, Deputy Statistician in the Nepal National Statistical Office, viewed citizen data as an alternative source for areas with data gaps, noting its potential cost-effectiveness and representativeness compared to traditional data sources.

2. Ensuring Human Rights and Ethical Standards in Citizen Data

Line Gamrath Rasmussen, Senior Advisor at the Danish Institute for Human Rights, stressed the importance of a human rights-based approach throughout the data collection process. She introduced tools such as the Copenhagen Framework and the Digital Rights Check, designed to promote responsible use of citizen-generated data and assess human rights risks in digital projects. Elizabeth Lockwood highlighted the importance of data confidentiality and protection, while also discussing the Digital Accessibility Rights Evaluation Index, which assesses digital accessibility across countries. Joseph Hassine addressed the need to consider potential harms from data sharing and misuse, emphasizing the importance of data validation and standardization.

3. Meaningful Participation of Marginalized Groups

Panelists agreed on the crucial importance of involving marginalized groups in data governance and collection processes. Bonnita Nyamwire emphasized the need to involve women and girls in data governance practices. Elizabeth Lockwood stressed the importance of ensuring accessibility for persons with disabilities, arguing that organizations of persons with disabilities should lead or co-lead citizen data initiatives. Dr. Hem Raj Regmi highlighted the focus on marginalized communities and population groups in Nepal’s efforts to incorporate citizen data, mentioning Nepal’s new statistics act from 2022 and plans to implement citizen-generated data for violence and disaster impact measurement.

4. Collaboration and Capacity Building

The discussion underscored the importance of cross-sector collaboration and capacity building to advance citizen data initiatives. Bonnita Nyamwire emphasized the need for collaboration between government, civil society, and the private sector. Elizabeth Lockwood called for strengthening capacity for inclusive citizen data, while Dr. Hem Raj Regmi suggested starting with small-scale pilots at municipal or district levels before national implementation. Joseph Hassine stressed the importance of capturing and communicating the impact of citizen data initiatives to attract more resources and support, highlighting Google.org’s focus on data for informed policymaking and more accurate/inclusive AI tooling.

5. Challenges and Opportunities for National Statistical Offices

Dr. Hem Raj Regmi provided insights into the challenges and opportunities for national statistical offices in incorporating citizen data, highlighting its potential as a cost-effective and potentially more representative alternative to traditional data sources.

6. Role of Public-Private Partnerships

Joseph Hassine provided perspectives on how private sector entities like Google.org can contribute to AI for social good and support citizen data projects.

Closing Recommendations:

In a final “lightning round,” panelists offered key recommendations:

– Line Gamrath Rasmussen: Promote and encourage adoption of the Copenhagen Framework on citizen data

– Bonnita Nyamwire: Develop and promote tools to ensure compliance with ethical and human rights standards in data collection

– Elizabeth Lockwood: Strengthen capacity and increase investments in inclusive citizen participation in digital spaces

– Dr. Hem Raj Regmi: Engage in monitoring and implementation of the Global Digital Compact

– Joseph Hassine: Capture and communicate the impact of citizen data initiatives to attract more resources and support

Unresolved Issues and Audience Questions:

Several issues remained unresolved, including strategies for including children and older adults in data initiatives, balancing data confidentiality with openness and accessibility, and addressing the digital divide to ensure offline alternatives for data participation.

Closing Remarks:

Howie Chen, representing the UN Statistics Division, provided closing remarks, emphasizing the importance of the discussion in advancing inclusive digital transformation.

The overall tone of the discussion was collaborative and solution-oriented, with panelists building on each other’s points and offering concrete suggestions for advancing the field of citizen data. The discussion concluded by highlighting the need for increased investment in inclusive citizen participation in digital spaces and the importance of capturing and communicating the real-world impact of citizen data projects to garner further support and resources.

Session Transcript

Papa Seck: I’m Papa Seck, and I’m the Chief of the Research and Data Section at UN Women. I’m also wearing another hat today as a member of the Steering Committee of the Collaborative on Citizen Data. The Collaborative was established last year to foster partnerships, build capacity and promote the responsible use of citizen-generated data for inclusive and sustainable development. UN Women was the inaugural chair of the Collaborative, together with the UN Statistics Division, and we’re really happy to organise this session today. Colleagues, in the digital era, the participation of citizens in data-driven processes is essential for fostering inclusive, equitable digital environments that can serve the diverse needs of all communities and community members. Since Sunday, day zero of this forum, we’ve heard repeatedly the reasons why the data that feeds AI algorithms, for example, needs to be representative, but also needs to be scrutinised, and this absolutely necessitates citizens’ inclusion and engagement. The newly adopted Global Digital Compact considers inclusivity as a cornerstone for a fair and equitable digital future. Meaningfully engaging all citizens in data production and use is important to ensure that digital systems and policies reflect their unique experiences and priorities, and this paves the way, of course, for more inclusive digital transformation. To put that another way, there will be no transformation without inclusion and diversity, and this panel today really reflects that. So we have distinguished speakers from civil society, a human rights institution, a national statistical office and the private sector, who will share with us their experiences today on the ways the citizen data movement can really help foster inclusion and human rights in public services and policies in the digital age.
So the session will also explore how the recently launched UN Collaborative on Citizen Data and the Copenhagen Framework on citizen data could support marginalized individuals and communities in this endeavor. So we have five distinguished speakers today. We have Line Gamrath Rasmussen, who’s the Senior Advisor on Human Rights and Tech at the Danish Institute for Human Rights. Joining me in the room is Bonnita Nyamwire, who’s the Research Director of Pollicy. And online again, we have Elizabeth Lockwood, who’s the representative of the UN Stakeholder Group of Persons with Disabilities for Sustainable Development. We have Dr. Hem Raj Regmi, who’s the Deputy Statistician in the Nepal National Statistical Office. And last but not least, Mr. Joseph Hassine, who works on AI for Social Good at Google.org. We will start with a first round of questions, one question for each of the panelists. And what I ask you is: can you please briefly describe for us your experience on citizen data, human rights and digital transformation? How does this come across in your work? It will be the same question for all of you, and I kindly ask each of you to stick to the allocated time of two minutes. So Line, let me start with you. Okay, so while we sort that out, Bonnita, I’ll go to you then. Thank you so much.

Bonnita Nyamwire: So at Pollicy, where I work, we are based in Kampala in Uganda, and our team is spread across the African continent. So our work is deeply rooted in empowering citizens through data. We’ve led initiatives that leverage citizen-generated data to advocate for improved digital services, inclusivity, and accountability. And by integrating data-driven approaches, we’ve explored the intersection of human rights and digital transformation, ensuring that marginalized voices, especially women and girls, are heard in policy-making processes. So our work mostly focuses on improving digital technology services for women and girls across the African continent. And so a significant focus of our work has been ensuring that digital transformation does not exacerbate inequalities, especially gender inequalities, but rather creates opportunities for more inclusive and equitable societies. This therefore includes extensive research that we have done as Pollicy in various African countries on issues like technology-facilitated gender-based violence and the need for safer digital ecosystems, and other research on citizen engagement in data governance processes, as well as the application of gender data in data governance.

Papa Seck: Thank you. Papa, back to you. Great. Thank you very much, Bonnita. Line, should we try again, or have you been able to unmute? We still cannot hear you, but let me see if we can try to fix it with the technician in the room. So, in the meantime, let me go to you, Elizabeth. Sorry, Elizabeth, are you speaking? I am. Now I can unmute. Apologies. I couldn’t unmute.

Elizabeth Lockwood: Thank you so much. Apologies. Thank you so much, Papa. I, along with Papa, I’m also one of the Steering Committee members of the Collaborative on Citizen Data, so I’ll speak to that and then I’ll speak to my other role. One of the key outputs of the Collaborative is our Copenhagen Framework, which highlights the importance of meaningful citizen participation across the entire data value chain. It outlines principles for citizen data and the necessary enabling environment. And a central objective of the framework is to integrate citizens’ perspectives into broader discussions with the national data ecosystem, including topics such as digital transformation, artificial intelligence, and data governance. The human rights-based approach of data is also central to the framework, ensuring open, transparent, inclusive, participatory, confidential, ethical, and other approaches. As the stakeholder group of persons with disabilities, we really focus on data led by organizations of persons with disabilities, citizen data in particular, to fill critical data gaps, to provide evidence, to influence policies, and ensure data reflect reality. This includes data to address the increasing digital divide that disproportionately affects persons with disabilities who live in, 80% live in the Global South, and of that group, 90% do not have sufficient access to assistive technologies that they require. And I’ll talk about that a bit more. Thank you, Papa.

Papa Seck: Great. Thank you very much, Elizabeth. Line, shall we try again? Yes, I hope you can hear me now. Yes, now it’s good. Thank you. Perfect. Thank you so much, and thank you for waiting.

Line Gamrath Rasmussen: Yeah, so working in this space of technology and human rights, there’s always this dual relationship where tech creates these wonderful opportunities for promoting and protecting human rights, but it also creates, can cause serious harm to human rights. So at the Danish Institute, we obviously take a human rights-based approach, so that means we work on the responsibilities of states and businesses to use and deploy technology in a way that’s human rights compliant. But I find that sometimes we forget ourselves as human rights professionals that when we use technology, we also have to think about how we make sure that it’s happening in a way that is not causing harm to human rights. So that’s why we’re working with this human rights impact assessment methodology where you actually try and assess the risks and the impacts that the technology will have on the users involved in your projects. And that’s why we made a few toolkits on human rights impact assessment in the digital space, and also a tool called the Digital Rights Check that we’ll talk about a little bit more later. Thank you. Great. Thank you very much.

Papa Seck: Over to you, Dr. Hem.

Dr. Hem Raj Regmi: Thank you, Chair, distinguished delegates, ladies and gentlemen. Good evening to everyone. Yes, we all know that these data are required for almost everyone from individuals to the governments, policy makers to the decision makers, researchers to the business people. We also know that these data do not come automatically. We need to invest a lot in the production of the data. Some data can be produced with a little bit less efforts like managing administrative data or MIS, management information system. But the data outside the system are quite costly, particularly the censuses or surveys or even the studies. Even the governments like Nepal have not been able to fund sufficiently to produce sufficient amount of the data to monitor national plans as well as to report for the SDG indicators which are the international obligations agreed by the governments. So we are looking for the alternative data sources which may be a little bit cost effective, which may be more reliable, which may be more representative. These non-traditional data sources may be different types, for example, big data may be there supported by the AI and then all these computers and then with high velocities. But the capacity is always limited to manage this big data. That is why the best alternative data source that we assume may be the citizen generated data. Given these constraints, the citizen generated data may be the best alternatives for the data production, for the data users, not only for the governments, even for the broad population. 
data user society. That is why we have collaborated with the UN citizen data group, the Collaborative on Citizen Data, to focus on producing some data, not all, particularly on the social side: for example, violence, household-level violence, even gender-based violence, which are quite common, and to report that type of information. Or maybe disaster-related information, if a disaster occurs in any place. For example, Nepal is prone to many types of disasters, and in that situation, if we can develop some mechanism to report these data from the citizens, or on the issues that we are discussing over here, like human rights violations, then these data sources may be invaluable for the governments, even for the national statistics office, so that we can claim that we have been able to provide the data to the data users, the national data users, the global data users, as well as those who are left behind, as per the theme of the SDGs. That is why we are collaborating with the citizen data forum and trying to advocate this Copenhagen Framework; a few issues are already there in the statistics sector of Nepal. So yes, I can go later on. Papa, thank you.

Papa Seck: Yeah, exactly. Thank you very much. Sorry, we’re a little bit limited in time, but I’ll come back to you with the second question. So, last but not least, over to you, Joseph.

Joseph Hassine: Thank you, thanks for having me today. I'm grateful to participate. In my role at Google.org, I look at this from the lens of a funder and external partner: through Google's philanthropy, I'm responsible for helping non-profit organizations and civil society leverage AI and technology toward digital transformation and, ultimately, more positive societal outcomes. Specifically, my team funds nonprofits to build with AI in fields like health, where AI can accelerate diagnostics; education, where it can be a supportive tool for teachers; or food security, where AI can help predict famine to enable earlier, more effective response, just as a few examples. All of this work requires data that is accurate, accessible, and inclusive, and thus we also fund work to encourage a more open data ecosystem as a whole, such as through our partnership with the UN Statistics Division to build UN Data, a tool that uses Google's Data Commons to provide an open-source platform for exploring vast amounts of UN data in a single interface with natural-language search. These projects are all toward the goal of a more open data ecosystem that gives us a clearer and more accurate understanding of the world and the challenges we're facing. So I'm looking forward to talking more about this work and how it connects to the work of the collaborative on citizen data that others have mentioned. Thank you for having me.

Papa Seck: Thank you. Thank you very much, Joseph. And thanks to all of you. Each of you has highlighted one area, just as an example of why the work on citizen data is so rich. For the second round, I'll have specific questions for each of you. In five minutes, Line, can you tell us a little more, and go deeper, on what you've shared with us? You've touched upon some of the tools that the Danish human rights institution offers to prevent and mitigate human rights risks related to digital projects and solutions in the citizen data context. Can your tools be used to assess whether data collection using mobile apps, or the development of platforms to showcase data, is at risk of violating human rights? Over to you. Yes, sorry for that small recess.

Line Gamrath Rasmussen: Yeah, definitely, indeed, we do develop these tools. I shared in the chat a link to what we call the Digital Rights Check. It's something we developed together with GIZ to have a tool for digital-for-development projects, where the staff involved in these projects could have a sort of risk management guide to the human rights risks or impacts of a particular project. The tool gives you several entry points: you can enter as technical development cooperation, as an investor, or as others. It's an online questionnaire that helps you identify potential issues and some corresponding actions that can help you rectify or mitigate the risks you identify. It helps users consider technology-specific and application-specific risks: what kind of technology are you using? Is it AI? Is it cloud services, et cetera? And then also the context-specific risks, as they relate, for example, to data protection regulation in the context or country where you are deploying the technology. As a human rights-based tool, it also pays attention to vulnerable and marginalized groups and makes you consider accessibility issues. So it really takes you around the whole process of identifying the different risks, and also makes you consider stakeholder engagement and how you think about transparency and accountability. It also gives you case studies and further readings you can link to, and in the end you get a final results page with the risks identified and a sort of human rights action plan that you can use to follow up and really act on the risks you have identified.
This is an open source tool that is open to everybody, so if anybody would like to adapt it to their own context or their own projects, that's perfectly feasible. And in the true spirit of privacy by design, the data is not stored; it is deleted as soon as you finish the questionnaire. So I would welcome you to explore it, and also give us feedback if there's anything you would like to see in the tool, and I hope it can be useful for many of your projects. Thank you.
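The flow Line describes, an entry point, technology- and context-specific questions, and a final page of identified risks with follow-up actions, can be pictured as a simple rules-driven questionnaire. The sketch below is purely illustrative and is not the Digital Rights Check's actual implementation: the questions, risks, and mitigations are invented examples.

```python
# Illustrative sketch of a questionnaire-driven human rights risk check.
# All question texts and risk/mitigation pairings below are hypothetical,
# not the real content of the Digital Rights Check.

QUESTIONS = [
    ("uses_ai", "Does the project use AI or automated decision-making?",
     ("Algorithmic bias against marginalized groups",
      "Conduct a bias audit and document training data provenance")),
    ("cloud_hosted", "Is personal data stored with a third-party cloud provider?",
     ("Data access by parties outside the project's control",
      "Review the provider's jurisdiction and data protection terms")),
    ("collects_sensitive", "Do you collect sensitive data (health, GBV, ethnicity)?",
     ("Severe harm to individuals if data is breached or misused",
      "Minimize collection, encrypt at rest, and restrict access")),
]

def run_risk_check(answers: dict) -> list:
    """Return (risk, mitigation) pairs for every question answered 'yes'.

    In the spirit of privacy by design, nothing is stored: the function
    is pure, and the answers live only for the duration of the call.
    """
    report = []
    for key, _question, (risk, mitigation) in QUESTIONS:
        if answers.get(key, False):
            report.append((risk, mitigation))
    return report

# A project that uses AI and collects sensitive data gets two action items,
# forming a small "human rights action plan" for follow-up.
plan = run_risk_check({"uses_ai": True, "collects_sensitive": True})
for risk, action in plan:
    print(f"RISK: {risk}\n  ACTION: {action}")
```

The real tool layers context-specific questions and case studies on top of this basic shape, but the core idea is the same: answers map deterministically to identified risks and corresponding actions, and no answers are retained.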

Papa Seck: Thank you very much, and thank you also for sharing the tool. I've looked at it myself and it's really, I think, an excellent product, so I encourage all of you to do the same. Now over to you, Bonnita. You've extensively researched the intersection of women, girls, and technology. Could you please share your insights on why it is essential to involve women and girls in data governance practices? And in what ways can they be meaningfully involved?

Bonnita Nyamwire: Thank you so much, Papa. Involving women and girls in data governance is essential because their perspectives, needs, and experiences are oftentimes overlooked in decision making, and yet they are very important. The research we have conducted at Pollicy with women and girls shows, for instance, that they are disproportionately affected by issues like online gender-based violence, or technology-facilitated GBV, data privacy breaches, and other kinds of online injustice and discrimination. We have done research looking at women in politics, women in the media, and women human rights defenders across several countries on the African continent. So why do they need to be involved in these processes? One, involving them helps to identify and address systemic biases, ensuring fairer systems and policies. Including their perspectives also ensures that their unique challenges, such as the ones I've already mentioned, TFGBV and algorithmic discrimination, are not overlooked but rather prioritized in governance practices, with safeguards developed for them. Involving women and girls also ensures fair representation in the digital age, because as data becomes central to governance and development, excluding women and girls perpetuates their marginalization. Their inclusion is therefore vital for achieving gender parity in leadership and decision-making roles in the digital era, but also for achieving Sustainable Development Goal 5. And how do we involve women and girls in a meaningful way? Meaningful participation can take various forms, ranging from involving them in participatory workshops, where women co-design data governance policies with other stakeholders, ensuring that solutions are truly reflective of their realities. We have done this at Pollicy with women politicians and with women in the media on our different programs.
We have a program for women politicians called Vote Women that we have implemented in Uganda, Tanzania, and Senegal, and we have seen their involvement help to protect them online and improve their wellbeing in digital spaces. We also have another program, still running, Future of Work for Women in the Media, where we are building their resilience on online platforms and making sure their voices are amplified. Another element of meaningful involvement is the need to invest in digital literacy programs to improve women's digital skills, because our research, across the different studies we've done, has shown that women lack digital skills. So involving them through capacity building, especially digital literacy and skills-building initiatives, will enable their participation in any online programs. Another is to ensure that their representation on data policy boards and in leadership and decision-making bodies is improved, which will further ensure that they are key stakeholders in decision-making processes, whether at the community, national, or global level. So it is important that women and girls are involved in decision-making processes at different levels. Another is to create safe spaces for dialogue where their voices can influence both national and global data governance policies; these are some of those spaces, like here where we are at the IGF, and several others. We have seen the involvement of women in these spaces actually improve their participation and amplify their voices in data governance. Meaningful involvement also requires removing barriers to participation by embedding gender and intersectionality lenses at every stage of policy development and digital transformation. Very important is this point on embedding an intersectional lens at every stage of the data governance process, so that no one is left behind.
Then lastly, it is also important to foster collaboration and accountability in the tech ecosystem to prioritize the needs and rights of women and girls. This will involve collaboration with women's and girls' rights networks and organizations, as well as government departments that work on gender issues. Thank you very much.

Papa Seck: Great. Thank you very much, Bonnita. At UN Women, and this was tasked recently by the UN Statistical Commission to several agencies, we are developing a new framework for the measurement of technology-facilitated violence. I'm increasingly convinced that citizen data has to be central to this, because this violence takes so many different forms, and at this IGF we've heard several examples where measurement becomes really tricky if you don't have inclusion. So thank you very much, and I will definitely be looking at some of the tools and work you've done in this area, because this has to be part of our global efforts to develop meaningful measurement of the phenomenon. So, Elizabeth, we turn to you. With your work on disability statistics, could you give us an example of how citizen data helps to ensure that digital tools are inclusive?

Elizabeth Lockwood: Yes, thank you, Papa. I have three brief examples from partners. First, during the COVID pandemic we had very little data on persons with disabilities and their experiences during the pandemic globally. As a result, NGOs and organizations of persons with disabilities gathered data themselves, using citizen data, to understand the barriers and solutions for persons with disabilities. The findings the stakeholder group of persons with disabilities collected indicated that persons with disabilities faced barriers in accessing digital technology in many vital areas, which in many cases was critical for their survival. This included lack of access to fast internet connections, lack of financial means to purchase data packages for devices, and lack of captions and sign language interpretation for those daily news briefings we all had. And since governments really weren't supporting this, organizations of persons with disabilities came in, supported their members, shared the information, and advocated to their governments. In the case of deaf organizations, captions were added, national sign language was added in many cases, and these continue today in emergency settings. This isn't universal, and there are still many hurdles to jump, but it's important to recognize. Another interesting point: accessibility on digital platforms such as Zoom increased for persons with disabilities, but when the pandemic lessened, this actually got worse, because it wasn't a priority for the general population. My second example is large-scale: the Digital Accessibility Rights Evaluation Index, a benchmarking tool developed by the Global Initiative for Inclusive ICTs (G3ict) for advocates, governments, civil society, and others to track how accessible ICT is in different countries around the world.
The data collection is based on a set of questionnaires and was done in cooperation with Disabled Peoples' International and other organizations of persons with disabilities. It has been documented in 137 countries across 8 regions of the world, representing 90% of the world's population. The findings are from 2018 and, most recently, 2020. If you look at the index scores on the very nice online platform, you can see global and regional rankings, peer economic development group rankings, and implementation rankings, and for many countries you can compare 2018 to 2020. Most countries have improved, but not all, so that's something I really recommend you look at. My final example is from the European Blind Union, which has done significant advocacy around accessible voting. Feedback was collected from blind and partially sighted members on their barriers to using digital voting systems and election materials, and this turned into advocacy campaigns. As a result, this advocacy has led to improved compatibility with screen readers and other accessibility enhancements in voting in the European region. In closing, it is only by ensuring that organizations of persons with disabilities are leading or co-leading citizen data initiatives that digital tools will be inclusive, reflecting the reality and needs of the communities themselves. Thank you.
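The comparisons Elizabeth describes on the index platform, ranking countries by score and seeing which improved between the 2018 and 2020 rounds, can be sketched with a few lines of code. This is only an illustration of that kind of analysis; the country names and scores below are made up, not actual DARE Index data.

```python
# Illustrative sketch of score ranking and round-over-round comparison,
# in the style of the DARE Index platform. All values are hypothetical.

scores = {
    # country: (2018 score, 2020 score) on a hypothetical 0-100 scale
    "Countria": (55, 63),
    "Exampleland": (70, 68),
    "Samplestan": (40, 52),
}

# Global ranking for the most recent round, highest score first.
ranking_2020 = sorted(scores, key=lambda c: scores[c][1], reverse=True)

# Countries that improved between rounds (as noted, most did, but not all).
improved = sorted(c for c, (s2018, s2020) in scores.items() if s2020 > s2018)

print("2020 ranking:", ranking_2020)
print("Improved since 2018:", improved)
```

Regional and peer-group rankings work the same way: filter the score table to a subset of countries first, then sort.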

Papa Seck: Thank you very much, Elizabeth. Great work. I've followed this, and I think you and I have also had conversations on some of the work we are doing, because in the space of statistics there's still more that can be done, particularly on disability. We've had several of these conversations, and I think the collaborative is definitely well placed to enrich that work. So, Dr. Hem, over to you.

Papa Seck: You are considering, as you mentioned, the use of digital tools to collect data on gender-based violence, as an example. Could you please let us know how you plan to engage with communities to ensure that the tools are inclusive? I would also add: in this space there are obviously lots of ethical concerns when it comes to collecting data on gender-based violence. How do you aim to address those concerns?

Dr. Hem Raj Regmi: Thank you, Papa, once again. The statistical system in most countries has been governed by the Fundamental Principles of Official Statistics for many years. The statistics acts, rules, and regulations are largely based on those principles, which were promulgated by the UN some 20 to 30 years ago. Now the situation is changing, and we need to change our statistical systems to include other dimensions, particularly citizen-generated data. Luckily, in Nepal we promulgated a new Statistics Act in 2022, almost two years ago, and there are a few provisions under which we can use this type of data as an alternative data source. It's not yet streamlined, but it can be an alternative to what we produce right now through censuses, surveys, and studies. There is a provision called the survey clearance system, which is quite similar to the Copenhagen Framework, in that different modalities are suggested for producing data: led by CSOs, the civil society organizations; in collaboration between the NSO and CSOs; or led by the NSO. These modalities, not exactly the same as the framework's but similar, exist in the Statistics Act of Nepal promulgated in 2022. We plan to use those modalities to generate data in the future: for example, data on marginalized people, and data for sectors where a data gap exists.
If there is a data gap, and national-level surveys or studies are not able to produce data at that level for the marginalized people we are discussing here, for areas that are quite remote or for particular segments of society that are marginalized, then in that situation we plan to use the Copenhagen Framework to produce data for those sectors. Recently we held a national-level workshop on implementing citizen-generated data, and we decided on the two areas I mentioned in my previous statement. One is violence, particularly gender-based violence and domestic violence. The other is disaster, particularly the impact of disasters rather than the hazards and risks: when a disaster occurs, the economic impact, the impact on livelihoods and on people's lives, even on crops, livestock, or infrastructure. These two areas may serve as pilots for implementing the Copenhagen Framework in Nepal. If confidence in, and the reliability of, the data can be raised through these pilot activities, then we hope that in the future we can take citizen-generated data as an alternative data source that may become a major part of the official data system, just as censuses, surveys, and official MIS are the major parts of the official data system right now. Citizen-generated data may be another source in the data system, which can help us particularly with marginalized people: women, children, displaced people, people affected by disasters. Yes, that's the plan.

Papa Seck: Great. Thank you very much.

Papa Seck: I really look forward to seeing this work. Maybe one piece of advice I can also offer is that a lot of work on citizen data and violence has been done in Ghana in particular. I would definitely advise you to talk to them and learn from their experiences, and I'd be happy to make the connection. Joseph, again, all our gratitude for the strong support for the work of the collaborative on citizen data. My question to you is: what motivated you to invest in this space? What social challenges, in the context of technological innovation, do you hope to address through your investments in citizen data? Thank you.

Joseph Hassine: We're thrilled to be able to support, and so grateful to be part of this work in a small way.

Joseph Hassine: I can zoom out a little bit. Google.org's work on data has historically focused on two things: data that enables more informed policymaking or decision-making, and data that enables more accurate and inclusive AI tooling; sometimes both. The funds we provide might be used to build a new data analysis platform, create advocacy tools that allow folks to communicate with policymakers on critical issues, or collect data in different languages and contexts in order to improve the accessibility of AI models. Citizen-generated data in particular is foundational to all of these efforts, because at the end of the day it helps give us a more inclusive and accurate picture of the world around us, which is critical for any tooling built upon that data. At the same time, data is only helpful if it's used to inform some action through a new understanding of a community or a problem. And to make data useful, from my perspective, some amount of validation and standardization is critical to ensure that these individual small- or large-scale citizen data efforts are seen as trustworthy and usable, and ultimately able to spur positive change. It's not dissimilar to the Digital Rights Check a colleague mentioned: you need an expert in the space to create a set of standards others can follow, so that we can look at the data produced by these entities and know it meets some benchmarks. That type of work is part of what makes us excited to support the collaborative on citizen data, because I think the collaborative is filling that critical role, through the Copenhagen Framework and through pilot efforts, to ensure there is reliability in this citizen data ecosystem as it continues to grow.
So we view that kind of central leadership as really critical to build capacity and create standards in the space, and we're thrilled to be able to support the collaborative to continue to build upon that.

Papa Seck: Great, thank you very much, and again, thanks for your strong support.

Papa Seck: So we still have some time, and I would like to now open up for questions, both from those in the room and online. Yes, please, and please introduce yourself.

Audience: Hello, everyone. I'm Dina, and I'm from Brazil. First of all, thank you for this amazing meeting and the information shared. My question is: how can data-driven human rights initiatives also include children and older adults? Women and people with disabilities were mentioned, but I would like to also hear about these communities, and it would be amazing if you could also mention inspiring examples regarding them.

Audience: Thank you so much for that discussion; that was actually really helpful. I had a slightly different question. When you talk about building really high-quality, high-value data sets that are used for better policymaking and for citizens' good, there's also a lot of harm, I think, you can do with these kinds of data sets. When you have these data sets and you're sharing them with the aim of doing public good, if you're sharing them widely, or if they sit anywhere that's accessible, there's nothing to stop a company or any other actor from using the data to exploit certain kinds of societal problems or divides and make things worse. You already have political consultancies that do this when it comes to elections. So sharing public data, yes, it could lead to public good, but there's also, I think, potential for public harm, especially now with so many AI systems being deployed. How would you prevent against that? How would you ensure that all of this data collection is only being used for good? Thank you.

Papa Seck: Do we have any questions online?

Speaker: Not at this moment.

Papa Seck: Thanks. Do any of you want to take the questions that were asked?

Bonnita Nyamwire: I can go.

Papa Seck: Yes, please. Go ahead.

Bonnita Nyamwire: Yeah, I can go on the children one. That is what I talked about: using an intersectional lens in gathering citizen data. When you do things using an intersectional lens, you're able to see who has been left out and who has been included, even across the different categories. Because if you're focusing on children, they also have different categories, and the same applies to women and girls and to the older adults you talked about. I know there are organizations doing work on children being online. Plan International is doing work on that; they have done research on cyberbullying of young girls online. UNICEF is also doing a lot of work on that, and ChildFund's different offices globally are also doing work around children's rights online. Yes, thank you.

Papa Seck: Great, thank you. Anyone else, for the second question on the harm?

Elizabeth Lockwood: I can go ahead. This is Elizabeth.

Papa Seck: Yes, please go ahead.

Elizabeth Lockwood: Just briefly: in the Copenhagen Framework we do have principles that guard human rights; it is the central core theme of the framework. So that can be applied to address the very good question that was asked from the audience. I also think it's important that data be confidential and protected, but in other cases open and accessible. So we need that balance, and it's really important to look at that balance and to monitor and retain it. Thank you.

Papa Seck: And Dina, just to add to your question: as part of the collaborative, we have various organizations working on different issues. Obviously gender, women and girls, is an important dimension of it, but it's not the only one; other organizations are working on various other dimensions, and I think that's really what makes the collaborative so rich.

Joseph Hassine: Papa, if I may, I would just add.

Papa Seck: Yes, please go ahead, Joseph.

Joseph Hassine: Just one point on the second question. There are organizations doing really interesting work in ensuring that data collection efforts are equitable and fair. One particular area I've seen is indigenous languages: indigenous communities can benefit from leveraging AI tools, but the tools are often not available in native languages. At the same time, AI could be a tool for preserving native languages, many of which are unfortunately becoming extinct in the US and elsewhere. But of course that has really meaningful risks associated with ownership of the data, and with whether these communities want their data ingested by such systems. So these questions arise at a large scale, but also at a more issue-specific scale: how do we make the data appropriate and safe in this particular instance? I've spoken to organizations who are, for example, leading efforts to define, if you're collecting indigenous data, a kind of charter and constitution for how that data can be used, how communities should be compensated, how they should be included in the process, and what ownership they should have of the data moving forward. I think work like that is really critical to that second question of how we avoid some of the risks here.

Papa Seck: Great, thank you. Thank you very much, Joseph. So we have time for just one final lightning round for all the panelists.

Papa Seck: And just I think in no more than a minute, what would you advise the collaborative to do to enrich its work in the digital space? Just a piece of advice for the collaborative that we can take into consideration.

Line Gamrath Rasmussen: So let’s start with Lene. Yes, thank you. One piece of advice, that’s hard, but I think one thing we have to realize is not just consider the end result. So we might have great citizen data that’s inclusive and that really reflects the society we’re in, but we also have to get it in a way, the process has to be human rights-based as well. And that means that we have to think about the human rights-based principles of participation, accountability, non-discrimination, empowerment, and legality. And also remember that this human rights due diligence, if you wanna call it, is ongoing. It’s not something you do once and then you’re sorted and then the rest you do for the next five years is fine. You have to keep doing and keep assessing who might be harmed by the products or services that you’re using and how you’re using. And then one final thing is maybe also this thing about considering offline alternatives as maybe the only option for some people to participate in and be empowered by this, that we cannot just, even if we have like universal access, even if it’s affordable, there will be people that we cannot reach online. So we have to think about and be serious about offline alternatives as well. Thank you. Yeah, thank you. The point about the digital divide is actually, I think, quite central to I think all of the discussions in order to make sure that we don’t leave anyone behind. Bonita, can I go to you?

Bonnita Nyamwire: Thank you, Papa. For me, the one piece of advice would be that all key stakeholders doing work in the digital ecosystem, looking at all these issues, should not work in silos. They should work together, because you find government is doing this, civil society is doing the same thing, and the private sector is doing the same thing, when instead they could all work together, leveraging each other's efforts and the structures and systems each of those stakeholders has, to address some of these issues. For instance, on the safety issues one of the participants asked about, it would mean looking at what government has, what the tech companies have, what civil society has, and then working together to sort out most of these issues, and also to ensure we do not leave anyone behind. Working in silos, we may forget some people; working together, we will not leave anyone behind, across all the categories that have been talked about: the women, the girls, the children, the older adults, the persons with disabilities. Thank you.

Papa Seck: Great, thank you very much. Elizabeth, over to you.

Elizabeth Lockwood: Thank you, Papa. I think the collaborative needs to be in the conversation; we need to be part of this work, and meaningfully part of it. One way is to engage in the monitoring and implementation of the Global Digital Compact. I think that's a very good way we can really be part of this as a collaborative. We should also strengthen capacity and invest in inclusive citizen data and participation in the digital space, especially for the marginalized groups we've been talking about. And I also echo that we should work cohesively and collaboratively instead of in silos. Thank you.

Papa Seck: Great, thank you very much. Dr. Hem.

Dr. Hem Raj Regmi: Thank you, Papa. Yes, in my understanding, citizen data are not going to replace the existing official data immediately, but in the future we can think about it. So the focus should be on areas where there is a data gap. Since COVID, the data collection system for official data has already changed: we have moved from PAPI, the paper-based approach, to CAPI, the computer-assisted approach, and even telephone surveys have become quite common, so we can link this information with the digital world. My idea is: let's start from a small area, maybe a municipality, or with a small theme, maybe for a district or at most a province. Let's not immediately expect that citizen-generated data can fill the gap at the national level. Let's start from a municipality, or from marginalized areas and communities, from particular societal segments by caste or ethnicity, or with women, children, or the elderly. Then we can integrate these data with the national data system, and they will be available digitally and in different forms. Yes, that's my opinion.

Papa Seck: Great. Thank you very much, Dr. Hem. Joseph, over to you.

Joseph Hassine: Thank you. I view this through the lens of: how do we get more resources to the collaborative? To that end, the work that's been done already to pilot some of these efforts is critical. And a critical next step, which we often miss in the data space, is capturing the impact this is having and telling that story. We'll ultimately need those throughlines of what data an organization created, what decision that data led to, and what impact that had on a community of people. The better we get at capturing and telling those stories, the more we'll be able to find support for this work and continue to scale it.
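Joseph's "throughline" of data, decision, and impact can be made concrete as a minimal record structure that an organization might keep for each initiative. This is only a sketch of that reporting idea; the field names and the example entry are hypothetical, not an actual Google.org or collaborative schema.

```python
# Minimal sketch of the data -> decision -> impact "throughline".
# The structure and the example below are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Throughline:
    dataset: str      # what data the organization created
    decision: str     # what decision that data informed
    impact: str       # what change resulted for the community

    def story(self) -> str:
        """Render the chain as a one-line narrative for reporting."""
        return (f"Data: {self.dataset} -> Decision: {self.decision} "
                f"-> Impact: {self.impact}")

example = Throughline(
    dataset="Citizen-reported flood damage survey (hypothetical)",
    decision="Municipal budget reallocated to embankment repair",
    impact="Fewer households displaced the following season",
)
print(example.story())
```

Keeping even this small amount of structure per pilot makes it possible to aggregate the stories Joseph describes when making the case for further support.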

Papa Seck: Great. Thank you very much. And this wasn’t part of the plan, but I will make it as a prerogative to as a chair’s prerogative. So I would like to put on the spot, Ms. Howie Chen, who’s at the UN Statistics Division and who’s really been, I think, at the forefront of this work and including driving this session. So I don’t think we can close without giving you the floor, Howie. Please, over to you.

Howie Chen: Thank you, Papa, for putting me on the spot. No, it's really been a great session. Thank you so much, first of all, Papa, for being such a great moderator, and to all the speakers. Great to see you over there, Bonnita, and all the speakers; without all the help from you, this would not be possible. Thanks to Francesca and her team for making the interpretation happen. And of course, the collaborative is a collaborative: we all work together, so I'm really grateful for all the great work that we've done together. And thank you, Papa, for co-leading the collaborative with us. We look forward to continuing the conversation.

Papa Seck: Great. Thank you very much, Howie. And really, again, thanks to all of you for a rich conversation.

Papa Seck: There are many takeaways and, obviously, I won't summarize, but for me some of the key points that came up really are around meaningful citizen participation in the data value chain and ensuring that digital systems and policies address the diverse needs of marginalized and underrepresented communities. We also need to foster inclusivity and equity in the digital era. Citizen data initiatives, as we've heard, such as the Copenhagen Framework, are vital for integrating citizens' perspectives into digital transformation. We also need them for promoting transparency and safeguarding human rights in data governance processes. We've also heard, and seen, how partnerships between civil society, national statistical offices, human rights institutions, academia and the private sector can really help to amplify the effectiveness of citizen data in creating innovative policy solutions. And here, given the richness of the work in this area, I think the sky is really the limit. In terms of action points, I noted three. One is promoting and encouraging the adoption and implementation of the Copenhagen Framework on citizen data, to ensure meaningful citizen participation and the integration of marginalized communities in data governance. Two, we need to develop and promote tools such as those that were highlighted here today, and include the relevant stakeholders, to ensure compliance both with ethical and human rights standards and with data standards. And three, we need to strengthen capacity and increase investments in inclusive citizen participation in data and digital spaces, ensuring that marginalized communities and population groups are also included. So with that, we'll close here. Thank you very much to all of you for a great session.


Papa Seck

Speech speed

131 words per minute

Speech length

2013 words

Speech time

921 seconds

Citizen data essential for fostering inclusive digital environments

Explanation

Papa Seck emphasizes the importance of citizen data in creating inclusive digital environments. He argues that citizen participation in data-driven processes is crucial for addressing diverse community needs.

Evidence

Mentions the Global Digital Compact considering inclusivity as a cornerstone for a fair and equitable digital future.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation


Bonnita Nyamwire

Speech speed

128 words per minute

Speech length

1185 words

Speech time

554 seconds

Citizen data helps identify and address systemic biases

Explanation

Bonnita Nyamwire argues that involving women and girls in data governance is essential to identify and address systemic biases. This ensures fairer systems and policies that reflect their unique challenges and experiences.

Evidence

Mentions research conducted on women in politics, media, and human rights defenders across African countries, highlighting issues like online gender-based violence and technology-facilitated GBV.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Involving women and girls in data governance practices

Explanation

Nyamwire emphasizes the importance of involving women and girls in data governance to ensure their perspectives and needs are not overlooked. She argues that their inclusion is vital for achieving gender parity in leadership and decision-making roles in the digital era.

Evidence

Mentions programs like Vote Women and Future of Work for Women in the Media, implemented in various African countries to improve women’s wellbeing in digital spaces.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Meaningful participation of marginalized groups

Including children and older adults in data initiatives

Explanation

Nyamwire highlights the importance of using an intersectional lens in citizen data gathering to include different categories of people, including children and older adults. She emphasizes the need to consider various demographics in data initiatives.

Evidence

Mentions organizations like Plan International, UNICEF, and Child Fund working on children’s rights online and cyberbullying for young girls.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Importance of cross-sector collaboration

Explanation

Nyamwire advises that all key stakeholders in the digital ecosystem should work together rather than in silos. She argues that collaboration between government, civil society, and private sector can leverage each other’s efforts and structures to address issues more effectively.

Major Discussion Point

Collaboration and Capacity Building


Joseph Hassine

Speech speed

162 words per minute

Speech length

973 words

Speech time

359 seconds

Citizen data critical for accurate understanding of world challenges

Explanation

Joseph Hassine emphasizes the importance of citizen-generated data in providing a more inclusive and accurate picture of the world. He argues that this data is foundational for building tools and informing policymaking.

Evidence

Mentions Google.org’s focus on data that enables more informed policymaking and decision-making, as well as data that enables more accurate and inclusive AI tooling.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Elizabeth Lockwood

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Addressing potential harms from data sharing and misuse

Explanation

Hassine acknowledges the potential risks associated with data sharing and misuse. He emphasizes the need for careful consideration of data ownership and usage, especially in sensitive contexts like indigenous language preservation.

Evidence

Mentions organizations working on charters and constitutions for data use, compensation, and ownership when collecting indigenous data.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Line Gamrath Rasmussen

Elizabeth Lockwood

Agreed on

Ensuring human rights and ethical standards in citizen data

Capturing and communicating impact of citizen data initiatives

Explanation

Hassine advises focusing on capturing and communicating the impact of citizen data initiatives. He argues that demonstrating the real-world effects of data-driven decisions is crucial for garnering support and scaling these efforts.

Major Discussion Point

Collaboration and Capacity Building

Differed with

Dr. Hem Raj Regmi

Differed on

Approach to implementing citizen data initiatives


Line Gamrath Rasmussen

Speech speed

154 words per minute

Speech length

831 words

Speech time

322 seconds

Tools needed to assess human rights risks in digital projects

Explanation

Line Gamrath Rasmussen emphasizes the need for tools to assess human rights risks in digital projects. She argues that these tools help identify potential issues and provide actions to mitigate risks in technology deployment.

Evidence

Mentions the Digital Rights Check tool developed by the Danish Institute for Human Rights, which helps identify human rights risks in digital projects.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Elizabeth Lockwood

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data

Need for human rights-based approach in data collection process

Explanation

Rasmussen stresses the importance of a human rights-based approach throughout the data collection process. She argues that principles of participation, accountability, non-discrimination, empowerment, and legality should be considered continuously.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Elizabeth Lockwood

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data


Elizabeth Lockwood

Speech speed

138 words per minute

Speech length

903 words

Speech time

389 seconds

Citizen data can fill critical data gaps on marginalized groups

Explanation

Elizabeth Lockwood argues that citizen data, particularly data led by organizations of persons with disabilities, is crucial for filling critical data gaps. This data provides evidence to influence policies and ensure data reflects reality for marginalized groups.

Evidence

Mentions examples of NGOs and Organizations of Persons with Disabilities gathering data during the COVID pandemic to understand barriers and solutions for persons with disabilities.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Dr. Hem Raj Regmi

Agreed on

Importance of citizen data for inclusive digital transformation

Ensuring accessibility for persons with disabilities

Explanation

Lockwood emphasizes the importance of ensuring digital tools and platforms are accessible for persons with disabilities. She argues that organizations of persons with disabilities should lead or co-lead citizen data initiatives to ensure inclusivity.

Evidence

Mentions examples like the Digital Accessibility Rights Evaluation Index and advocacy by the European Blind Union leading to improved accessibility in voting systems.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Bonnita Nyamwire

Dr. Hem Raj Regmi

Agreed on

Meaningful participation of marginalized groups

Importance of data confidentiality and protection

Explanation

Lockwood highlights the need for balance between data confidentiality and accessibility. She argues that while data needs to be protected, it should also be open and accessible in certain cases.

Major Discussion Point

Ensuring Human Rights and Ethical Standards in Citizen Data

Agreed with

Line Gamrath Rasmussen

Joseph Hassine

Agreed on

Ensuring human rights and ethical standards in citizen data

Need to strengthen capacity for inclusive citizen data

Explanation

Lockwood advises strengthening capacity and increasing investments in inclusive citizen participation in digital spaces. She emphasizes the importance of including marginalized communities and population groups in these efforts.

Major Discussion Point

Collaboration and Capacity Building


Dr. Hem Raj Regmi

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 seconds

Citizen data as alternative source for areas with data gaps

Explanation

Dr. Hem Raj Regmi proposes citizen data as an alternative source for areas with data gaps, particularly for marginalized communities. He argues that citizen-generated data can complement official data systems, especially in remote areas or for specific societal segments.

Evidence

Mentions Nepal’s new statistics act from 2022 which includes provisions for using citizen-generated data as an alternative data source.

Major Discussion Point

Importance of Citizen Data for Inclusive Digital Transformation

Agreed with

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Agreed on

Importance of citizen data for inclusive digital transformation

Focusing on marginalized communities and population groups

Explanation

Regmi emphasizes the importance of focusing citizen data efforts on marginalized communities and specific population groups. He argues that this approach can help fill data gaps for underrepresented segments of society.

Evidence

Mentions plans to use the Copenhagen framework to produce data on gender-based violence and disaster impacts in Nepal.

Major Discussion Point

Meaningful Participation of Marginalized Groups

Agreed with

Bonnita Nyamwire

Elizabeth Lockwood

Agreed on

Meaningful participation of marginalized groups

Starting with small-scale pilots before national implementation

Explanation

Regmi advises starting with small-scale pilots of citizen data initiatives before national implementation. He suggests focusing on specific areas or themes at the municipal or district level to integrate citizen-generated data with the national data system gradually.

Major Discussion Point

Collaboration and Capacity Building

Differed with

Joseph Hassine

Differed on

Approach to implementing citizen data initiatives

Agreements

Agreement Points

Importance of citizen data for inclusive digital transformation

Papa Seck

Bonnita Nyamwire

Joseph Hassine

Elizabeth Lockwood

Dr. Hem Raj Regmi

Citizen data essential for fostering inclusive digital environments

Citizen data helps identify and address systemic biases

Citizen data critical for accurate understanding of world challenges

Citizen data can fill critical data gaps on marginalized groups

Citizen data as alternative source for areas with data gaps

All speakers emphasized the crucial role of citizen data in creating inclusive digital environments, addressing systemic biases, and filling data gaps, particularly for marginalized groups.

Meaningful participation of marginalized groups

Bonnita Nyamwire

Elizabeth Lockwood

Dr. Hem Raj Regmi

Involving women and girls in data governance practices

Ensuring accessibility for persons with disabilities

Focusing on marginalized communities and population groups

Speakers agreed on the importance of involving marginalized groups, including women, girls, persons with disabilities, and other underrepresented communities, in data governance and collection processes.

Ensuring human rights and ethical standards in citizen data

Line Gamrath Rasmussen

Elizabeth Lockwood

Joseph Hassine

Tools needed to assess human rights risks in digital projects

Need for human rights-based approach in data collection process

Importance of data confidentiality and protection

Addressing potential harms from data sharing and misuse

Speakers emphasized the need for tools and approaches to assess and mitigate human rights risks in digital projects, ensure ethical data collection processes, and address potential harms from data sharing and misuse.

Similar Viewpoints

Both speakers emphasized the importance of collaboration and effectively communicating the impact of citizen data initiatives to garner support and scale efforts.

Bonnita Nyamwire

Joseph Hassine

Importance of cross-sector collaboration

Capturing and communicating impact of citizen data initiatives

Both speakers advocated for building capacity and starting with smaller-scale initiatives before expanding to larger implementations of citizen data projects.

Elizabeth Lockwood

Dr. Hem Raj Regmi

Need to strengthen capacity for inclusive citizen data

Starting with small-scale pilots before national implementation

Unexpected Consensus

Inclusion of offline alternatives in digital initiatives

Line Gamrath Rasmussen

Need for human rights-based approach in data collection process

While most speakers focused on digital solutions, Rasmussen unexpectedly emphasized the importance of considering offline alternatives for those who cannot be reached online, highlighting a unique perspective on inclusivity.

Overall Assessment

Summary

The speakers largely agreed on the importance of citizen data for inclusive digital transformation, the need for meaningful participation of marginalized groups, and the importance of ensuring human rights and ethical standards in data collection and use.

Consensus level

High level of consensus among speakers, with strong agreement on core principles. This suggests a unified approach to promoting inclusive citizen data initiatives, which could lead to more effective implementation and policy development in this area.

Differences

Different Viewpoints

Approach to implementing citizen data initiatives

Dr. Hem Raj Regmi

Joseph Hassine

Starting with small-scale pilots before national implementation

Capturing and communicating impact of citizen data initiatives

Regmi advocates for starting with small-scale pilots at municipal or district levels, while Hassine emphasizes the importance of capturing and communicating the impact of initiatives to scale efforts.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement were minor and primarily focused on implementation strategies rather than fundamental principles.

Difference level

The level of disagreement among speakers was low. Most speakers agreed on the importance of citizen data for inclusive digital transformation and the need to ensure human rights and ethical standards. The minor differences in approach do not significantly impact the overall consensus on the topic’s importance and general direction.

Partial Agreements

Partial Agreements

Both speakers agree on the importance of protecting rights in data collection, but Rasmussen focuses on a continuous human rights-based approach throughout the process, while Lockwood emphasizes the need for balance between confidentiality and accessibility.

Line Gamrath Rasmussen

Elizabeth Lockwood

Need for human rights-based approach in data collection process

Importance of data confidentiality and protection


Takeaways

Key Takeaways

Citizen data is essential for fostering inclusive digital environments and addressing systemic biases

Human rights and ethical standards must be ensured when collecting and using citizen data

Meaningful participation of marginalized groups (women, persons with disabilities, children, etc.) is crucial in data governance

Cross-sector collaboration and capacity building are needed to advance citizen data initiatives

The Copenhagen Framework provides important principles for citizen data collection and use

Resolutions and Action Items

Promote and encourage adoption of the Copenhagen Framework on citizen data

Develop and promote tools to ensure compliance with ethical and human rights standards in data collection

Strengthen capacity and increase investments in inclusive citizen participation in digital spaces

Engage in monitoring and implementation of the Global Digital Compact

Capture and communicate the impact of citizen data initiatives to attract more resources and support

Unresolved Issues

How to fully prevent potential harms from data sharing and misuse

Specific strategies for including children and older adults in data initiatives

How to balance data confidentiality/protection with openness and accessibility

Addressing the digital divide to ensure offline alternatives for data participation

Suggested Compromises

Start with small-scale pilots (e.g. at municipality level) before national implementation of citizen data initiatives

Focus citizen data efforts on areas with existing data gaps rather than replacing all official data sources

Develop charters or constitutions for data use when working with specific communities (e.g. indigenous groups) to address ownership and compensation concerns

Thought Provoking Comments

We are looking for the alternative data sources which may be a little bit cost effective, which may be more reliable, which may be more representative. These non-traditional data sources may be different types, for example, big data may be there supported by the AI and then all these computers and then with high velocities. But the capacity is always limited to manage this big data. That is why the best alternative data source that we assume may be the citizen generated data.

speaker

Dr. Hem Raj Regmi

reason

This comment introduces the idea of citizen-generated data as a cost-effective and potentially more representative alternative to traditional data sources, highlighting a key advantage of this approach.

impact

This set the stage for much of the subsequent discussion about the potential and challenges of citizen-generated data, framing it as a promising solution to data gaps.

Involving women and girls in data governance is essential because their perspectives, their needs, their experiences are oftentimes overlooked in decision making and yet they are very important.

speaker

Bonnita Nyamwire

reason

This comment highlights the importance of inclusivity in data governance, specifically focusing on the often-overlooked perspectives of women and girls.

impact

It shifted the conversation to focus more explicitly on issues of inclusivity and representation in data collection and governance, leading to further discussion of marginalized groups.

It’s only by ensuring that organizations of persons with disabilities are leading or co-leading citizen data initiatives that digital tools will be inclusive, reflecting the reality and needs of the communities themselves.

speaker

Elizabeth Lockwood

reason

This comment emphasizes the critical importance of having marginalized communities lead data initiatives about themselves, rather than just being subjects of data collection.

impact

It deepened the conversation about inclusivity by suggesting a more active role for marginalized communities in the data collection process, moving beyond just representation to leadership.

We might have great citizen data that’s inclusive and that really reflects the society we’re in, but we also have to get it in a way, the process has to be human rights-based as well. And that means that we have to think about the human rights-based principles of participation, accountability, non-discrimination, empowerment, and legality.

speaker

Line Gamrath Rasmussen

reason

This comment introduces the important perspective that the process of data collection itself must be rights-based, not just the end result.

impact

It added complexity to the discussion by highlighting that ethical considerations need to be integrated throughout the entire data collection process, not just in how the data is used.

Overall Assessment

These key comments shaped the discussion by progressively broadening and deepening the conversation around citizen-generated data. The discussion moved from identifying citizen data as a potential solution to data gaps, to exploring how to make such data truly inclusive and representative, to considering the ethical implications of the entire data collection process. This progression reflects a nuanced and multifaceted approach to the topic, considering practical, ethical, and rights-based perspectives.

Follow-up Questions

How can data-driven human rights initiatives include children and older adults?

speaker

Dina (audience member)

explanation

The discussion focused on women and people with disabilities, but including children and older adults is important for comprehensive human rights initiatives.

How can we prevent the misuse of public data sets by malicious actors?

speaker

Audience member

explanation

While data sets can be used for public good, there’s potential for exploitation. Safeguards are needed to ensure data is only used for beneficial purposes.

How can we balance the need for data confidentiality and protection with the need for open and accessible data?

speaker

Elizabeth Lockwood

explanation

This balance is crucial for maintaining data integrity while also ensuring its usefulness and accessibility.

How can we ensure equitable and fair data collection efforts, particularly for indigenous communities?

speaker

Joseph Hassine

explanation

There are unique challenges and risks associated with collecting data from indigenous communities, requiring special considerations for data ownership and use.

How can we develop offline alternatives for data collection to include those who cannot be reached online?

speaker

Line Gamrath Rasmussen

explanation

Even with universal access, some people may not be reachable online, making offline alternatives crucial for inclusive data collection.

How can different stakeholders in the digital ecosystem work together more effectively instead of in silos?

speaker

Bonnita Nyamwire

explanation

Collaboration between government, civil society, and private sector is necessary to address digital issues comprehensively and ensure no one is left behind.

How can the Collaborative on Citizen Data engage in monitoring and implementing the Global Digital Compact?

speaker

Elizabeth Lockwood

explanation

This engagement could be a significant way for the Collaborative to be meaningfully involved in shaping digital policies and practices.

How can we start implementing citizen-generated data at smaller scales (e.g., municipality level) before scaling to national levels?

speaker

Dr. Hem Raj Regmi

explanation

Starting small could be an effective way to integrate citizen-generated data into national data systems gradually.

How can we better capture and communicate the impact of citizen data initiatives?

speaker

Joseph Hassine

explanation

Demonstrating the real-world impact of citizen data projects is crucial for attracting more resources and support for this work.

Disclaimer: This is not an official record of the session. The DiploAI system automatically generates these resources from the audiovisual recording. Resources are presented in their original format, as provided by the AI (e.g. including any spelling mistakes). The accuracy of these resources cannot be guaranteed.

WS #162 Overregulation: Balance Policy and Innovation in Technology


Session at a Glance

Summary

This workshop focused on balancing AI regulation and innovation, exploring how to foster technological advancement while ensuring safety and ethical standards. Panelists from diverse backgrounds discussed various regulatory approaches to AI governance, including risk-based, human rights-based, principles-based, rules-based, and outcomes-based models. They emphasized the need for flexible, adaptable regulations that can keep pace with rapid technological changes.


Key issues addressed included the role of AI in combating child sexual abuse material (CSAM), the importance of human rights in AI governance, and the challenges of implementing AI in healthcare. Panelists stressed the need for context-specific regulations, noting that a one-size-fits-all approach may not be suitable across different regions and sectors.


The discussion highlighted the importance of public participation in developing AI policies and the need for capacity building and digital literacy. Panelists shared examples of how AI has been used innovatively during crises like the COVID-19 pandemic, demonstrating the potential benefits of flexible regulatory approaches.


The workshop also touched on the challenges of regulating AI without stifling innovation, with some arguing that policies might be preferable to strict regulations in certain cases. The importance of considering local needs and existing regulatory frameworks when developing AI governance strategies was emphasized.


Overall, the discussion underscored the complexity of AI regulation and the need for a balanced approach that protects human rights and public safety while allowing for technological progress and innovation.


Keypoints

Major discussion points:


– Balancing AI regulation and innovation


– Different approaches to AI governance (e.g. risk-based, human rights-based, principles-based)


– Challenges of regulating AI, including privacy concerns and potential misuse (e.g. for child exploitation)


– Need for AI literacy and capacity building, especially in developing countries


– Importance of considering local context when developing AI policies


Overall purpose:


The goal of this discussion was to explore how to effectively regulate AI technologies in a way that promotes innovation while also protecting public safety and ethical standards. The panelists aimed to share diverse perspectives on AI governance approaches from different regions and sectors.


Tone:


The overall tone was thoughtful and constructive. Panelists acknowledged the complexity of the issues and the need to balance different priorities. There was general agreement on the importance of regulation, but also caution about over-regulation stifling innovation. The tone remained analytical and solution-oriented throughout, with panelists offering nuanced views on different regulatory approaches.


Speakers

– Nicolas Fiumarelli: Moderator, represents the Latin American and Caribbean group from the technical community


– Natalie Tercova: Chair of the IGF in the Czech Republic, member of ICANN, vice facilitator on the board of the ISOC Youth Standing Group, PhD candidate focusing on digital skills of children and adolescents


– Paola Galvez: Tech policy consultant, founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, team leader at the Center for AI and Digital Policy


– Ananda Gautam: Represents the Youth Coalition on Internet Governance, global AI governance expert


– James Nathan Adjartey Amattey: From the private sector in Africa, focuses on innovation and impact on regulatory practices


– Osei Manu Kagyah: Online moderator


Additional speakers:


– Agustina: Audience member from Argentina


Full session report

AI Regulation and Innovation: Striking a Balance


This workshop explored the complex challenge of balancing AI regulation with innovation, bringing together experts from diverse backgrounds to discuss various approaches to AI governance. The discussion, moderated by Nicolas Fiumarelli, highlighted the need for flexible, adaptable regulations that can keep pace with rapid technological changes while ensuring safety and ethical standards.


Key Themes and Discussions


1. Approaches to AI Regulation


Paola Galvez, a tech policy consultant, stated that we are past the question of whether to regulate or not, and now the focus is on how to regulate. She outlined several regulatory approaches, including:


– Risk-based


– Human rights-based


– Principles-based


– Rules-based


– Outcomes-based


Galvez emphasized that regulation should not stifle innovation and stressed the importance of human rights-based approaches to AI governance, while acknowledging the implementation challenges these approaches face.


Ananda Gautam, representing the Youth Coalition on Internet Governance, advocated for flexible, principle-based approaches that can foster innovation while protecting rights. He offered a historical perspective, noting that if the internet had been heavily regulated in its early days, it might not have developed into the tool we use today.


Natalie Tercova, a researcher whose work focuses on healthcare and child safety, proposed a risk-based approach, suggesting that high-risk AI applications should undergo rigorous review, while low-risk innovations could proceed under lighter regulatory requirements.


2. Balancing Innovation and Safety


James Nathan Adjartey Amattey, from the private sector in Africa, pointed out that the COVID-19 pandemic demonstrated the need for innovation over rigid regulation in times of crisis. He argued that certain regulatory frameworks are necessary for innovation to flourish, challenging the notion that regulation and innovation are inherently opposed.


3. Context-Specific Regulation


Panelists stressed the importance of developing context-appropriate AI governance, particularly for developing countries. Paola Galvez cautioned against simply copying EU regulations, arguing that local needs and existing regulatory frameworks should inform AI governance strategies.


4. AI Literacy and Capacity Building


James Nathan Adjartey Amattey highlighted the need for AI literacy programs for regulators, developers, and users to understand the risks and benefits of AI technologies. Paola Galvez emphasized that digital skills development is key to leveraging AI’s potential, particularly in developing countries.


5. Ethical Concerns and Human Rights


Natalie Tercova raised the issue of AI’s dual use in both creating and detecting child sexual abuse material (CSAM), highlighting the complex balance between leveraging AI for child protection and ensuring privacy rights. She discussed the challenges of using AI to detect CSAM while also acknowledging its potential role in generating such content.


6. Public Participation and Multi-stakeholder Collaboration


Panelists agreed on the importance of public participation in developing AI policies. Paola Galvez emphasized that multi-stakeholder collaboration is crucial for creating effective and inclusive AI governance frameworks.


Audience Questions and Responses


The session included a brief Q&A period, where audience members raised questions about:


– The role of AI in addressing climate change


– Strategies for promoting responsible AI development


– The potential for AI to exacerbate existing inequalities


Due to time constraints, not all questions could be addressed in depth, but panelists provided brief responses highlighting the need for continued research and dialogue on these topics.


Unresolved Issues and Future Directions


Several unresolved issues emerged from the discussion, including:


1. How to effectively balance privacy and safety in AI-powered content moderation


2. The extent of responsibility for AI developers and companies for the effects of their technologies


3. How to address the growing AI divide between developed and developing countries


The discussion highlighted the need for continued dialogue and collaboration to address these complex challenges.


Conclusion


The workshop concluded with a group photo of the panelists and moderator. Throughout the session, speakers emphasized the importance of flexible, context-specific governance strategies that can adapt to rapid technological changes and address the unique needs of different regions and sectors. The diverse perspectives shared by the panelists provided valuable insights into the ongoing challenges and opportunities in AI regulation and innovation.


Session Transcript

Nicolas Fiumarelli: worries. We invite everyone to sit at the main table if you want, so you can be more engaged in the session. Yes, we have one speaker who is stuck on the route, in the Uber, but we will start the session and he will join later. So okay, good morning and good afternoon, everyone, including those online. It’s a great pleasure to welcome you all to this workshop called Over-Regulation: Balancing Policy and Innovation in Technology, under the sub-theme of Harnessing Innovation and Balancing Risk in the Digital Space. My name is Nicolas Fiumarelli and I will be moderating the session. I represent the Latin American and Caribbean group from the technical community, and it’s a privilege to be among such a distinguished group of panelists and participants. I am very glad that we have so many people in the room, and I think the session title is very interesting for you. You know, we are in an era when we need to decide whether to regulate or not to regulate, so this is a hot topic nowadays. Technology and innovation have always been drivers of societal progress. However, the fast-paced evolution of digital technologies, especially artificial intelligence, presents unique challenges. So how can we foster innovation without stifling it through over-regulation? How do we ensure safety and ethical standards while allowing technology to reach its full potential? These are some of the critical questions we are going to address today, but this requires collective deliberation, so you are all invited to share your ideas as we aim to address them. The session will be conducted, as you know, in this roundtable format to encourage equal participation and interaction among our esteemed panelists and the audience. To set the stage, I will briefly introduce our panelists. Following this, each of the panelists will take a moment to introduce themselves and share their motivation for participating in this session. 
Afterwards, we will dive into the core discussion, addressing some of the key policy questions. Toward the end, we will open the floor for questions from the audience, both online and on-site, moderated by our colleague, Osei. So let’s meet our panelists. First, Natalie, Natalie Tercova. She’s the chair of the IGF in the Czech Republic, also a member of ICANN, and vice facilitator on the board of the ISOC Youth Standing Group. You know, the ISOC Youth Standing Group, together with the Youth Coalition on Internet Governance, every year organizes youth-led sessions to bring young voices to the Internet Governance Forum. She is also a PhD candidate, focusing on the digital skills of children and adolescents and their online opportunities and risks, with an emphasis on online safety, online privacy, and AI. She has recently contributed to a report on AI usage in medical spheres, exploring the challenges of deploying AI technologies in health care. Additionally, her work includes critical research on the role of AI in addressing child sexual abuse


Natalie Tercova: material, called CSAM. So Natalie, will you please introduce yourself further and share your motivation for joining this session? Thank you so much, Nicolas. Can anyone hear me well? Perfect. So as you said, thank you for summing it up so perfectly. I am representing the academia stakeholder group. I am a researcher. It’s my day job. And I was recently very much focused on the AI and how it can impact the critical topics I’m focusing on in my research, which on one side is the health system and, for instance, also finding health information online, how people trust health oriented information provided by, for instance, AI driven chatbots and so forth. And on the other side, I’m also invested in the topic of CSAM, as you mentioned. So harmful content focusing on children and also abuse of such materials, depicting children in intimate scenarios where AI is perceived more as a double edged sword. So I hope to tell more about this during the session as I feel this is a crucial topic. Thank you for having me.


Nicolas Fiumarelli: Thank you so much, Natalie. This is, as I said at the beginning, a hot topic, right? We are here to discuss whether it is good to regulate or not to regulate. There are several factors that will bring us to think about regulation, but on the other hand, you know, this could undermine human rights, like access to information, among others. And there are several ways to regulate. So we are looking forward to deep-diving into these kinds of things, maybe to get good outcomes and some key takeaways on how policymakers can actually find a solution for these kinds of issues. Now I will introduce Paola, Paola Galvez. She is Peruvian and a tech policy consultant dedicated to advancing ethical AI and human-centric digital regulation globally. She holds a Master of Public Policy from the University of Oxford and serves as the founding director of IDON AI Lab, UNESCO’s lead AI national expert in Peru, and a team leader at the Center for AI and Digital Policy. Paola brings a unique perspective from her work at the OECD and UNESCO on international AI governance. You must all be hearing about UNESCO these days, because countries around the world are actually developing national AI strategies and UNESCO has the RAM, the Readiness Assessment Methodology. Drawing on her experience with the UNESCO AI RAM in Peru, she will provide some insights into balancing regulatory safeguards and fostering innovation on a global scale. So Paola, could you introduce yourself and tell us about your motivation for this workshop?


Paola Galvez: Good morning, everyone. Thanks so much, Nicolás. Thank you all for joining this session. I think it’s really a critical discussion to be having, but I will put a different opinion here. I don’t think the question is anymore whether to regulate or not to regulate; we’re past that, in my opinion. What we’re working on now is how to regulate, right? Let me go one step back, because you asked me to introduce myself a bit, and you did so well. Thanks so much, Nicolás. Just to give a bit of an overview, my perspective here comes from a background starting in the private sector. I used to work at Microsoft for almost five years, and it was me saying, innovation must come; please do not prevent innovation in my country, which is a developing one. So I was advocating for self-regulation, but that was back in 2013, when artificial intelligence was just starting in my country. I mean, in other countries it was way more developed, but the topic at the moment was cloud computing. So, just to mention, that was my first version. Then I worked in the government. I advised the Secretary of Digital Transformation of Peru, and that was an absolutely meaningful role, because I contributed to the AI National Strategy and I led the Digital Skills National Strategy. So that changed my view a bit, and I understood how it is to work in the government and what the challenges are inside. I’m not saying it is good or justifying anything, but it happens. Then I paused my career and went to Oxford to study, and that’s what brought me to international organizations. At the moment, I’m an independent consultant working for UNESCO and the OECD, contributing to my country, because I just finished the UNESCO AI Readiness Assessment Methodology; I can tell more about it later. And I also founded IDON AI Lab, Idonia in Spanish, trying to make AI benefit everyone through evidence-based digital regulation and capacity building directed to women. 
Now, going to the topic and what motivated me when Nicolás came with the idea of having this discussion: I was all in from moment zero, because I thought, yes, now I have a global perspective, I live in Paris at the moment, and this question is not only happening in developing countries. I speak with a lot of startups in Paris, and even they don’t know how to implement the EU AI Act, so it’s been crazy and there are a lot of doubts. So I’d like to start, and just leave it here, by giving three key questions to open this conversation. First, we need to think: what is the public policy problem? What we want to regulate comes from that very first question. Regulation is needed to address public policy problems and fundamental and collective rights, so let’s find the most adequate solution by finding the problem; that’s the first step. Second, when to regulate? Map available regulatory instruments, because most of us have consumer codes or consumer law; not every country has a data protection law, but that’s a good start, right? Intellectual property laws are in place, so let’s see what we have, and assess the feasibility of adopting something new, because bringing a bill to Congress, and here we have somebody working in the Parliament of Argentina at the table, for instance, takes a lot of time. So if we can start enforcing the laws that we already have in place, that’s a good start. And the third question is how to regulate: identifying a combination of AI regulatory approaches. I’m happy to tell you more about this, Nicolas; there are several approaches to regulating AI at the moment, and there is no single best one, but we need to find the right combination according to our context. Thank you.


Nicolas Fiumarelli: Thank you, Paola. Just summarizing that: the first question is what is the public policy problem, then when to regulate, and finally how to regulate. Okay. So, now I will introduce Ananda, here on my left. He was stuck in the Uber in traffic, but he made it. Thank you, Ananda, for being with us. Ananda represents the Youth Coalition on Internet Governance and is a global AI governance expert. He has extensive insights into the global regulatory impacts on technological innovation. So, Ananda, please share more about your work and what you bring today to our discussion.


Ananda Gautam: Thank you, Nicolas, and all the panelists. I’m so honored to be here with you, with some hiccups of course, anyway. So I mostly work with young people: capacity building of young people, helping people start youth initiatives in their countries, and how we bring young people to global internet governance, and not only global, how we engage in their capacity building at regional and national levels. That is my major focus right now. I am also working on different AI policies. Paola and myself joined the K-Dev together; I think Sabah is also here, Sabah was also part of our cohort. We have been learning about how the developing AI landscape is affecting us, and my perspective is mostly concerned with developing nations, because I come from the global south. And within the global south, Nepal itself is a very difficult area, and we have many challenges. We have outdated legislation, and some recent legislation, like the EU AI Act, has extraterritorial jurisdiction; it is not only applied in the EU region, and the so-called Brussels effect is affecting legislation worldwide. So countries like Nepal are also trying to build their own AI policies, but what I have been focusing on is how we build the capacity of those developing nations, so that they can build comprehensive AI policies that will actually leverage the power of AI in developing those nations, and, of course, how we build the capacity of young people to engage in AI governance processes. Another thing is, while we talk about the digital divide, we have now been seeing an AI divide, you know, people having access to AI and people not having access to AI. So my focus would be on how we build the capacity of other stakeholders so that we can eliminate this divide. I think I’ll go into other things in the second round. Thank you, Nicolas.


Nicolas Fiumarelli: Thank you, Ananda. Just summarizing: this is a great issue to address for developing nations, right, because there are difficult areas, as you mentioned. Not every country is prepared, and each faces different challenges while updating its legislation; every legislation is different in every country. And in the light of the AI Act, which is a mandatory thing, there is this question of extraterritorial jurisdiction, because, you know, the internet by nature is without frontiers. So it’s very difficult sometimes to regulate these kinds of things. In my opinion, as someone from the technical community working at the core of the internet, IP addresses are not tied to countries, so sometimes it’s not easy to see how legislation from one country can affect the way we regulate on the internet because, as I said at the beginning, the internet is by nature trans-frontier, as His Excellency from Saudi Arabia said at the opening ceremony. And I mentioned we are in the AI divide now, so this is a new concept that we need to take up, and we need to see how to leverage the power of AI, as you say, Ananda, and how to engage the young people who are using AI a lot, right? So now I am going to introduce James, who is our speaker online. If the technical team can put James on the screen, that would be great. James comes from the private sector in Africa, where he focuses on innovation and its impact on regulatory practices. He is going to share examples of how African innovations navigate regulatory challenges and thrive in the face of adversity. So James, you are with us there. Please introduce yourself for the people here on site, we have a full room, and share what drives your interest in this discussion, please.


James Nathan Adjartey Amattey: So thank you very much, Nicolas, for that introduction. My name is James Amattey. I am from Ghana, and I basically come from a background of product management and software development, where we’ve had to create both consumer and enterprise products for education, insurance, and also banking. And I’ve realized in that space that innovation does not happen in a vacuum. There are certain regulatory frameworks that need to happen for innovation to come to the forefront. Now, my friend Ananda was stuck in traffic; he was complaining about Uber. And Uber is one of those innovations that came about through, should I say, the advancement of technology. And one of those advancements is something we call GPS tracking. Without GPS tracking, and without it being embedded in phones, we would not have something called Uber. And without the internet, a platform like Uber would not be able to locate our friend and get him to the site. These are some of the things that we tend to lose sight of sometimes, because when we try to build software and do digital transformation, we think, oh, it’s all about doing customer research and market research, but in reality there are certain societal and regulatory things that need to happen to trigger innovation. So, for example, I just gave the example of Uber. Now, there are certain examples around regulation. When we look at the internet, for example, how it came about was the breakup of AT&T, and that led to the widespread infrastructure development that brought forth the internet. Right, so without that, we would not have things like broadband. You know, it’s not directly correlated, but the regulation that led to the breakup of AT&T, especially in the United States, is what, you know, sort of brought forth advancements in the development of the internet as we know it today. 
Now there are certain times when policy tries to get in the way of innovation. And I would use one of our local laws as an example; we call it the E-Levy. It is a very notorious regulation that came to bring a tax on, should I say, financial transactions done online. Now, the problem with that was that Ghana was at the beginning of digital financial literacy, so a lot of people were only now beginning to transact online. So the law in and of itself was not bad, but I think the timing and the implementation of it did not have a lot of, should I say, public approval. And, you know, according to reports, there were times when the government of Ghana missed revenue targets from taxation, and the utilization of mobile money also reduced because of the tariffs. So sometimes regulation may have a good idea, a good intention, but it is the implementation of the regulation that makes it difficult for innovation to come forth. So I think as much as regulation is important, we also have to look at the timing of the regulation, especially in Africa, where we are mostly now catching up to a lot of innovation. We do not have a lot of homegrown solutions, so most of the solutions we use are imported. So we have to take our time with regulation and make sure that there is enough understanding, enough appreciation, and enough, should I say, use cases for the technology before we try to regulate it. Now, I do understand that sometimes the timing of regulation is important. So as much as we do not want heavy regulation, we also do not want late regulation, where the technology runs ahead of the society and it is very difficult for us to control it. So thank you very much for the floor. Over to you, Nicolas.


Nicolas Fiumarelli: Thank you so much, James. So you also touched on this idea that regulatory frameworks sometimes need to happen for innovation, right? But a population needs to be prepared. Innovation is not bad, as you say, but we need public approval; there are different cultures and different social contexts in different countries. And while you mentioned that mobile money usage reduced because of the tariffs, on the other hand, it could be difficult for someone who doesn’t know how to use the technology to use that money online, right? So the implementation of the regulation is sometimes a challenge, and so is getting all the people to understand, or to have this digital financial literacy, as you called this concept. So finally, we have Osei, Osei Manu Kagyah, here at the table. He’s our online moderator and is taking all the questions from the chat, and he will also direct the people here on site to take the mic. So Osei will ensure active participation for our virtual audience and on site. So Osei, please introduce yourself and tell us about your role in internet governance and in supporting this session.


Osei Manu Kagyah: Hello, thank you very much. This topic is such an important subject matter, balancing policy and innovation. I’ll be your online moderator and will also help moderate on site. So if you have any question, just raise your hand, or if you are joining us online, just put your question in the chat box. As a public interest technologist, this topic is very, very interesting to me. I love how you put it: how do we go about the regulation? The issue of whether to regulate, I think we’ve moved past it. Is regulation a silver bullet? And if we are going about it, how do we approach it? These are the nuances we hope to delve into. So we are very excited for this conversation. Bring all your inputs and we’ll help dive deeper. Thank you very much. Over to you, Nicolas.


Nicolas Fiumarelli: Thank you so much, Osei. So we will now begin the core discussion of today’s workshop, focusing on critical policy questions. Each speaker will have approximately three to five minutes to respond to the questions directed at them. So, policy question number one: what specific regulations currently hinder the adoption of AI, and how can these be reformed to promote technological advancement while ensuring safety? Natalie, specifically for you: your recent work on child sexual abuse material focuses also on AI’s role in the creation of this material, right? So can you explain what CSAM is? Where do you see the need for regulation there?


Natalie Tercova: Thank you so much, Nicolas. Some of you may have joined my lightning talk, which I delivered on day one, if I’m not wrong, so I hope I will not repeat myself here. I feel that extending the discussion on CSAM and how it affects youth, survivors, and overall society and the wellbeing of those involved is now at its next step when we talk about AI and how it comes into play, deepening the crucial aspects of this issue that we are focusing on. So first, let me just start by saying what it is. Usually we talk about child pornography, or these types of terms that are more known to us. Right now we are focusing more on the shortcut CSAM, which stands for child sexual abuse material, because it is broader and it also allows us to cover material that can be manipulated through, for instance, technologies using AI models and so forth. Recently, for instance, in the Czech Republic, where I come from, we saw a big prevalence of images that were partially based on already existing materials that were not harmful and were spread online, sometimes by the child themselves, sometimes by a caregiver or a teacher who captured moments somewhere on the school property. However, then AI stepped in, or someone using an AI tool, to make the person naked, for instance. And suddenly such already existing material was abused and transformed into CSAM. That is why we are now also focusing on AI in relation to CSAM. With this, I just want to highlight that, now that AI has been introduced, we have lots of discussions on how it can be a potential for harm, unfortunately. However, some people also perceive it as having potential for detection. And there is this clash: can AI tools and new emerging technologies be something that helps us tackle this issue, or is it going to make everything way worse? This is what I want to bring to this debate. 
Just to give you an idea of how prevalent CSAM is, I have a few stats. In 2023, over 50,000 websites hosting CSAM material were detected and taken down worldwide. However, we also have to be mindful that this is just the tip of the iceberg. There are so many things that we just don’t know about, which can be in closed forums, somewhere on the deep web, and so forth. So this is just the tip of the iceberg, and it is already so alarming. If we look at specific types of materials, such as pictures and videos, around 85 million were reported globally in the year 2021. So this is a very alarming number. We have to talk about deepfake technologies and how perpetrators can use some, let’s say, advanced editing tools to manipulate images and videos so that they are suddenly used as CSAM. And this is not just for those who see some form of excitement in these types of materials; there is also big money involved, because they know that there are people who are willing to buy such materials. So we are now trying to find a balance: how we can still ensure that people are using the new technologies, which actually do have a big potential for helping us tackle all sorts of harmful content, not just CSAM, while also protecting our privacy and the privacy of the most vulnerable ones, in this case, children. Another thing is that right now we are talking a lot about potential software that can detect when CSAM is around. However, this can also go back to grooming, which is the act of the perpetrator slowly manipulating the child, and this usually happens through text. But that would mean that some form of software would read the text we are typing somewhere, and that is a big clash between privacy and safety. 
So here I am also excited to hear your opinions on this issue, where we can find a sweet spot, where we can find a good balance between the right to use these technologies in our advance, use all the opportunities that AI and other emerging technologies can bring us, but also to minimize the risks that are involved with it. Thank you, Nico. Thank you, Natalie. You touched on some interesting things because


Nicolas Fiumarelli: as you mentioned, there is money involved here; there are people who want to buy this kind of content. There is also the question of how to ensure balance, right, because there are people using technology for good. AI brings a lot of innovation, you know, also for creativity; if you see the advancements nowadays, you see that it is very useful for a lot of work. If you look at the discussions of the Policy Network on Artificial Intelligence on job replacement, it’s something that is happening worldwide, and the digital divide is increasing because of this; it’s called the AI divide, as Ananda said. But how do we protect privacy as well? There are solutions, as you mentioned, software to detect this material, but that is more on the surveillance side, right? How do we avoid these kinds of practices? In my opinion, some countries have an approach where maybe they can install software on all mobile phones, or have an agreement with the mobile companies (there are only two or three firms, not so many), so maybe it’s easy for them to deploy this technology. But on the other side, it’s an attack on privacy and on human rights. So it’s difficult to balance these critical issues, such as violence, child pornography, etc. And it’s not a new thing, right? Over the past years of talking about these issues, there have been different views pulling in opposite directions. So thank you for touching on these issues as an introduction. Now let’s go to another area, policy question number two: how can policymakers, and we have policymakers here, and regulatory bodies design more flexible regulations that can adapt to rapid technological advancement, as we were saying, but without compromising ethical standards and public safety? And here we are also touching on ethics, right? 
So we can mention bias, we can mention copyright, different issues that are not on the same page as privacy but are related to these emerging technologies. So Paola, now it is your turn. What are, in your opinion, the prevailing regulatory approaches to AI governance you have seen? And is there a particular model, among all the models in the UNESCO framework and other documents out there, that stands out as the most conducive to encouraging innovation? Thank you, Nicolas. Indeed,


Paola Galvez: there are different approaches, but just so we are all on the same page, I will mention some of them; they are not exclusive, right? Policymakers can mix them when deciding. I would like to explain five of them. This is not an exhaustive list, there could be more, but we’ve seen risk-based, human rights-based, principles-based, rules-based, and outcomes-based approaches. These are what I’m about to explain; you can read more in the report called Generative AI Governance from the WEF, published this year. The risk-based approach is the most common one; we’ve seen that the European Union adopted it. It really optimizes regulatory resources because it focuses efforts on the areas with higher risks and minimizes burdens on low-risk areas. As for advantages, it allows regulatory frameworks to be flexible and adaptable when circumstances are changing, but the challenge is that risk assessments are complex. At the moment we see that the AI Office is developing the guidelines; there is no single model for how the risk assessment should be done, so I’ve seen different ones being developed in the market. But how do we know, how can we be sure, that a given risk assessment is the correct one, right? Is there one? We’re still in that process. Second is the human rights-based approach, which should be, in my opinion, the best one. Why? Because with this technology we are seeing, and Natalie mentioned it, you also, Nicolas, that these artificial intelligence technologies are reproducing society’s biases, deepening inequalities, plus various other challenges. However, we cannot afford not to be tech optimists. AI is a reality, it is with us, and it holds tremendous promise. 
I do believe that when it is used wisely, it really holds potential to help achieve the SDGs and to make us more efficient, and no, we won’t be replaced, at least from what I’ve seen so far. But human rights are at stake, and this human rights-based approach means being grounded in international human rights law, which we already have. The advantage is that the scope is not limited: AI systems and the whole AI lifecycle must fall under this regulation and be developed and deployed in a way that respects and upholds human rights. There’s no doubt. What are the challenges of this approach? There is some complexity and opacity in AI systems; we all know these systems are called black boxes. It also comes with the complexity that human rights protections are sometimes broadly worded and hard to interpret, so we need lawyers specializing in international human rights law. And that is what is lacking, in my opinion, in the discussion of these AI laws, because we are not really having these people at the table. And there is no example of hard law on this at the moment; the Council of Europe Framework Convention on AI, Human Rights, and the Rule of Law is the one putting human rights at the center, but it is not mandatory, and we’ve seen that the adherence process is under way. So let’s see how it goes. It sets basic principles and global standards on what we want in terms of AI, but it is not hard law applying in our countries. The principles-based approach is the one most countries are adopting: the US, with the Executive Order on Safe, Secure, and Trustworthy AI; Singapore. What is it? It sets out just fundamental principles, right? Fairness, accountability. And it is intended to foster innovation. So, to your question, the principles-based approach could be the one that avoids stifling innovation while protecting human rights, in a sense, with these principles. Fairness, right? No harm. But it’s not complete. 
Then rules-based is the Chinese approach, with the China Interim Measures for the Management of Generative AI as an example. It is very rigid, with high compliance costs, but it lays out detailed rules, so it really doesn’t leave much space for interpretation. That is what applies in China at the moment. And the outcomes-based approach is what the Japanese government is applying, because it focuses on achieving measurable AI-related outcomes. It also intends to foster innovation and compliance, but it has limited control over the process, because how can you really measure the outcomes? It can be very vague, right? I would just like to finish by saying that there is a risk of a Brussels effect, with our Latin American countries trying to do what the European Union has done. And it is very important to say that our countries are not Europe; we don’t have the same context. So doing a copy-paste is not the solution. We can take best practices if they are already in place, but it is very hard to really see the results of the implementation of the EU AI Act, because it is still very new and, as you know, it is being enforced in phases. So we cannot tell at the moment. And if you can take one thing away from these five approaches, let’s remember that any regulation we want to approve in our countries must go through a public participation process, a meaningful one, meaning sitting all the parties at the table and discussing what their needs are and how they think they can be part of the solution. The Readiness Assessment Methodology, which I can tell you more about later, has this public consultation process and brings to the table the opinion of the public and the citizens. Thank you, Nicolás.


Nicolas Fiumarelli: Thank you, Paola, for that detailed analysis of the five different approaches. I took some notes, so we will have good takeaways from that, and also from some notable comments you made on each of them. On the same line, I will now ask Ananda: what are global examples of flexible AI governance that can inspire policymakers nowadays?


Ananda Gautam: So, I’d reflect on something, because our title is about over-regulation and balancing innovation. Let’s go back to the 1970s. If the internet had been regulated at the beginning, in the first three decades before WSIS started, we couldn’t have the internet we are using today; we wouldn’t be talking or discussing internet governance anymore. So, regulation is not always the best form of what we call governance. Nicolas asked me what the best example is: maybe UNESCO. Its AI ethics framework is one of the greatest examples, which has been endorsed by, I think, more than 90 countries, maybe 100 now, and WSIS has been working on a second version of its ethical AI concept. These show how principles can bind people rather than legislation. So, if you ask me whether it is legislation or policy, I would go for a policy-based approach, which would actually harness the power of AI rather than regulating it into what we call over-regulation. Policy could actually promote business without undermining human rights. Like Paola mentioned, the human rights-based approach is a must, and I also reflect on what Nathalie said: while AI is being developed, there will be both sides, pros and cons. If we take the example of cybersecurity, scammers are using AI to manipulate people, and phishing attacks have been taken to another level with the use of AI. At the same time, cybersecurity tools are being developed that use AI to detect attack patterns faster, and sometimes automated countermeasures are applied. There are many tools developed by Palo Alto and other leading cybersecurity companies that employ AI to detect those threats. So, these are some examples. I think we are at a very early stage of development. 
We have just seen the power of generative AI since ChatGPT exploded, and there are so many GPTs available now; we have only begun to see what generative AI can do. One thing is that, while people are using AI, we should be very clear in legislation or policy about how it will be used by the public. Today school children are using AI, ChatGPT or other generative AI, to add to their knowledge. Will it give them the right knowledge or not? That is very crucial. These kinds of considerations are very important and need to be taken into account; they are covered by different frameworks being developed, but we also need to consider, according to the national context, how people will leverage these things. If something is generated by AI, how do people distinguish it? Maybe we can call it AI literacy. People need to know what they are using and what the consequences are, and they need to be able to distinguish what is generated by AI from what is original. I think those are the baselines we need to focus on. I’ll stop here.


Nicolas Fiumarelli: I like your ideas, Ananda, but I think that recognizing whether something is AI-generated nowadays is more complex than anything you can think of. There is also the complexity of these approaches, because we want to balance flexibility, enforceability, and practicality. Risk-based methods demand nuanced assessments; human rights-based approaches face challenges with opacity and legal gaps; principles-based frameworks often lack enforcement mechanisms. So we have problems with each approach. The outcomes-based model emphasizes measurement but may struggle, as Paola said, with contextual adaptation, particularly in diverse regions, as you say, in Nepal, for example. Together, these approaches highlight, I think, the need for multidisciplinary collaboration and tailored strategies to address AI’s multifaceted risks and opportunities effectively. Going now to our online panelist: James, we would like to see your face. From your experience, how has regulatory flexibility impacted African innovation, in your opinion? James, you’re muted.


James Nathan Adjartey Amattey: Okay, sorry. Yeah, thank you very much. I think your question is very interesting, because let’s take COVID as an example. COVID really catalyzed, or should I say highlighted, the need for innovation over regulation. During COVID, there was little regulation; it was all innovation. And with that innovation, we were able to control the cost, the spread, and, should I say, the lifestyle change that came with COVID, right? So what we want is not a situation where only emergencies allow us to be flexible with laws; we want to adopt a lifestyle of having that flexibility while keeping guard, being on watch, right? It’s like having a security man: you do not hope that a thief attacks you, but he is there for when the thief comes. So I like the idea of policies over regulations: frameworks, constructive ways of doing things that guide people on how to do things properly, versus inhibiting what they can and cannot do. Of course, there are times when you have to do the latter, but as we are currently in the experimental phase of innovation, especially with AI, it is paramount that we allow it to spread its wings so we know what is possible and what is not. In the African context, COVID really allowed this. For example, we had autonomous drones delivering COVID shots, PPE, and face masks to remote locations. We had trackers that were used to identify COVID hotspots and design responses for them. And there are several other examples. We’ve also had issues of flooding in Ghana, where most of the work I do, especially in open data, has helped us use AI to identify roads so that relief could get to victims of the dam spillage in 2023. 
And we’ve done a lot of work around public health and the correlation of health data using mobile apps. All of these things have been possible through innovation. So I think innovation and regulation should be teammates rather than competitors over who is right and who is superior; we should collaborate more. Innovation should not be an afterthought, and regulation should not be an afterthought; rather, we could build these frameworks into our innovation pipelines and our regulatory pipelines. Thank you very much.


Nicolas Fiumarelli: Thank you so much, James. Due to time constraints, we are reaching the end of our session, so I will ask a condensed question of all the panelists; you have one minute each to answer. How can successful examples of AI applications and the international frameworks we were discussing inform regulatory strategies that balance innovation with safeguards, addressing societal impacts such as job displacement and critical needs like healthcare and industrial automation? May we start with Paola, drawing from your experience leading the UNESCO AI Readiness Assessment? Yes.


Paola Galvez: The question was very long, so I’ll do a wrap-up while answering. First, think about local needs: what regulations do we have in place, and how can we complement them? Sometimes, and this is a personal opinion, we need an umbrella regulation like the AI Act, not guidelines; it must be mandatory. Why? Because the country needs to have a position on what it wants AI to do and to be for its citizens. What is the country’s position on lethal autonomous weapons? I think that should be a yes or no, right? That could be mandatory, and that is a prohibition or not. Surveillance, right? Are we using AI for safety and security? Let’s be mindful that it can target migrants or other communities that are vulnerable, or minorities, so it is very important to be careful when we use it. And that is taking a position; that means regulation, that’s law. On the other hand, please, let’s invest in capacity development. Digital skills are key for using AI, because as a country we will never be able to leverage the full potential of this technology if we don’t help our citizens understand it and use it at its best. Thank you. This is very condensed, but I’m happy to speak later.


Nicolas Fiumarelli: Thank you so much, Paola. Maybe Nathalie, if you want to make a one-minute contribution on the healthcare part, where you are the expert, please.


Natalie Tercova: Of course, I’ll try to be very brief. I very much agree that it depends on the specific case. We sometimes have discussions about what we should do in healthcare, but this is such a broad concept. Ethical considerations such as patients’ privacy, the protection of patients’ data, and minimizing bias in algorithms when it comes to treatment and healthcare are, from my perspective, non-negotiables, and we definitely have to take these into account when we talk about healthcare. However, diagnostic tools that can assist doctors with critical conditions carry much higher risks and stakes than, for instance, AI tools or other technologies used for administrative scheduling systems, such as setting a timeline for certain operations. So again, it is so broad, and we have to take into consideration the level of risk involved. In light of this, I believe that high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements; then we can really grow, focus more on innovation, and make these things faster and more effective. So again, it’s about balance. I don’t want to dive into more detail, but I’m of course happy to talk about it more, because we recently conducted robust research in Central Europe on AI usage in healthcare. We were also asking people whether they use it for their own questions, for instance whether they ask ChatGPT, “I have this issue, this is bothering me,” and whether they trust what the AI is telling them. Because one thing is the usage: people can just be experimenting with it, overall excited about these opportunities, yet mindful that sometimes what is recommended to them is not the best. 
So we have some very, I would say, interesting insights, and I’m happy to talk more about this over coffee.


Nicolas Fiumarelli: Thank you. Thank you so much, Nathalie. We have only a one-hour session, so you can reach Nathalie at the coffee break and continue that conversation. Going to Osei: do we have any questions online? And maybe we have one question on-site; the first to raise a hand can ask it. Okay. Okay, so it’s not really a question, this is a


Osei Manu Kagyah: suggestion from someone he talked about human rights being at the core of the conversation as said by the UN but I will have a question for all of us to mull over. I think the initiation or say conception of these policy questions it starts from how that lack of trust between multi-stakeholder the various multi-stakeholders and a good example is argument a this is an argument a I won’t ask debate about this the UK secretary of state for science and technology Peter Kyle argues that tech companies or say companies, should be treated like states because their investments in innovation exceed that of governments. Argument B was some few weeks back where a former Dutch member of the European Parliament argued that focus should be to strengthen the primacy of democratic governance and oversight and not to show humility. She argued further to highlight the need for self-confidence on the part of democratic government to make sure that these companies, these services, are taking their proper role within a rule of law-based system and are not overtaking it. Obviously, I think argument B sounds persuasive, but then how do we ensure stakeholders that do have a say in there? So the conception or the initiation of all these policy conversations is the lack of trust I have noticed. But if you have any question on-site, please do raise your hand. Please be snappy because we have privacy as I speak.


Audience: Yes, thank you so much. My name is Agustina, I’m from Argentina, and I have a quick question. So, when regulating AI or technology, what I found is that we have different layers or aspects: we have the users, we have the developers, and, regarding AI, the training of the AI. In this sense, I see that users are already, so to say, as in the physical world, punished by the law. But on the other side, do you think that, for instance, companies should be responsible for what they develop, and should those who train the models also be responsible for the effects this has, or not?


Nicolas Fiumarelli: Maybe some of the panelists want to answer the question, and then we go to the last question here, right? Okay. Who wants to answer? Or shall we go directly to the next question?


Audience: Thank you, Moderator. Mine is on the topic of this discussion. Have we really reached the stage of overregulation, given that, with the advent of ICTs and now the arrival of emerging technologies, we have seen regulation playing catch-up? Technology is usually ahead of regulation. So have we reached the stage of overregulation yet? Do you have a question as well? That will be the last one.


James Nathan Adjartey Amattey: Yes, I think I can answer this a bit. Okay, James, you can answer and then wrap up. Yes. So yes, there is a risk of innovation going ahead of regulation, but it all boils down to AI literacy, until we truly come to an understanding of what AI really is and what AI is not, right? For example, if you ask a lot of people what AI is, most of them will answer ChatGPT, but ChatGPT is just one use case of AI; it is not AI in and of itself. So we need to build literacy programs for regulators, for developers, and for users, so we can understand what our intersecting interests are. Then we can look at those intersecting interests and tailor AI solutions to our own use cases. Most of my work for next year will revolve around AI literacy and building literacy programs to help proliferate that knowledge of what AI truly is and what it is not, what it can do, what it should do, and what it should be allowed to do. And then we can take it from there. Thank you very much. Happy to connect online; my name is James, you can find me on LinkedIn.


Nicolas Fiumarelli: OK, thank you for your time, James, and for your valuable contribution. Thank you, everyone, for the engaging discussion, and sorry to those who were still in the queue; we are six minutes over time. Today we explored these critical aspects of AI regulation and innovation, drawing insights from diverse regions and sectors, as you have seen. A special thanks to our panelists for their valuable contributions, and to our audience for your active participation. Thank you, and enjoy the rest of the IGF. Thank you, online audience. We might take a photo at the front, if you want. Come, everybody. I think we’re good.


P

Paola Galvez

Speech speed

144 words per minute

Speech length

1840 words

Speech time

764 seconds

Regulation is necessary but should not stifle innovation

Explanation

Paola Galvez argues that regulation is needed, but the focus should be on how to regulate rather than whether to regulate. She emphasizes the importance of balancing regulatory safeguards with fostering innovation.


Evidence

Paola mentions different regulatory approaches such as risk-based, human rights-based, principles-based, rules-based, and outcomes-based.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Ananda Gautam


Differed on

Approach to AI regulation


Human rights-based approaches are crucial but face implementation challenges

Explanation

Galvez emphasizes the importance of human rights-based approaches in AI regulation. However, she notes that these approaches face challenges due to the complexity and opacity of AI systems, as well as the broad wording of human rights protections.


Evidence

She mentions the Council of Europe AI Convention on AI, Human Rights, and the Rule of Law as an example of a human rights-based approach.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Copy-pasting EU regulations is not appropriate for developing countries

Explanation

Galvez warns against the ‘Brussels effect’ where Latin American countries might try to copy EU regulations. She emphasizes that context matters and that developing countries have different needs and circumstances compared to Europe.


Major Discussion Point

Developing Context-Appropriate AI Governance


Public participation is crucial in developing AI policies

Explanation

Galvez stresses the importance of involving all stakeholders in the development of AI policies. She argues for a meaningful public participation process that includes all parties in discussions about their needs and potential solutions.


Evidence

She mentions the UNESCO AI Readiness Assessment Methodology as an example of a process that includes public consultation.


Major Discussion Point

Developing Context-Appropriate AI Governance


Digital skills development is key to leveraging AI’s potential

Explanation

Galvez emphasizes the importance of investing in capacity development and digital skills. She argues that countries cannot leverage the full potential of AI technology without helping citizens understand and use it effectively.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Ananda Gautam


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building


Local needs and existing regulations should inform AI governance

Explanation

Galvez suggests that countries should consider their local needs and existing regulations when developing AI governance frameworks. She argues for complementing existing regulations rather than creating entirely new ones.


Evidence

She mentions the need for an umbrella regulation that defines a country’s position on key AI issues like autonomous weapons and surveillance.


Major Discussion Point

Developing Context-Appropriate AI Governance


A

Ananda Gautam

Speech speed

139 words per minute

Speech length

873 words

Speech time

374 seconds

Flexible, principle-based approaches can foster innovation while protecting rights

Explanation

Gautam advocates for policy-based approaches over strict legislation. He argues that this approach can promote business without undermining human rights and allows for more flexibility in governance.


Evidence

He cites the UNESCO AI ethics framework and the WSIS ethical AI concept as examples of principle-based approaches.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Balancing regulation and innovation


Differed with

Paola Galvez


Differed on

Approach to AI regulation


AI divide between countries needs to be addressed

Explanation

Gautam highlights the growing AI divide between countries, particularly affecting developing nations. He emphasizes the need to build capacity in these nations to leverage AI’s power effectively.


Major Discussion Point

Developing Context-Appropriate AI Governance


Capacity building in developing nations is crucial

Explanation

Gautam stresses the importance of building capacity in developing nations to engage in AI governance processes. He argues that this is necessary for these countries to develop comprehensive AI policies that leverage AI’s power effectively.


Major Discussion Point

Building AI Capacity and Literacy


Public understanding of AI-generated content is important

Explanation

Gautam emphasizes the need for public understanding of AI-generated content. He argues that people need to be able to distinguish between AI-generated and original content, which he refers to as AI literacy.


Evidence

He mentions the use of AI by school children and the need for them to understand if they are getting the right knowledge.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


James Nathan Adjartey Amattey


Agreed on

Importance of AI literacy and capacity building


J

James Nathan Adjartey Amattey

Speech speed

129 words per minute

Speech length

1561 words

Speech time

723 seconds

COVID-19 highlighted need for innovation over rigid regulation

Explanation

Amattey uses the COVID-19 pandemic as an example of how innovation can thrive with less regulation in times of crisis. He argues for maintaining this flexibility in normal times, balancing innovation with necessary safeguards.


Evidence

He cites examples of autonomous drones delivering COVID supplies and AI-powered trackers identifying hotspots during the pandemic.


Major Discussion Point

Balancing AI Regulation and Innovation


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Balancing regulation and innovation


AI literacy is needed to understand risks and benefits

Explanation

Amattey emphasizes the importance of AI literacy for regulators, developers, and users. He argues that understanding what AI is and isn’t is crucial for tailoring AI solutions to specific use cases and addressing intersecting interests.


Evidence

He mentions his future work will focus on building AI literacy programs.


Major Discussion Point

Building AI Capacity and Literacy


Agreed with

Paola Galvez


Ananda Gautam


Agreed on

Importance of AI literacy and capacity building


N

Natalie Tercova

Speech speed

159 words per minute

Speech length

1312 words

Speech time

492 seconds

AI can be used to both create and detect child sexual abuse material

Explanation

Tercova discusses the dual role of AI in relation to child sexual abuse material (CSAM). She highlights how AI can be used to create deepfake CSAM, but also how it can be used to detect and combat such material.


Evidence

She cites statistics on the prevalence of CSAM, mentioning 50,000 websites hosting CSAM in 2023 and 85 million reported materials in 2021.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Risk-based regulation allows focus on high-risk AI applications

Explanation

Tercova advocates for a risk-based approach to AI regulation in healthcare. She argues that high-risk applications should undergo more rigorous review, while low-risk innovations can proceed under lighter regulatory requirements.


Evidence

She contrasts high-risk diagnostic tools with lower-risk administrative scheduling systems in healthcare.


Major Discussion Point

Balancing AI Regulation and Innovation


Patient privacy and data protection are non-negotiable in healthcare AI

Explanation

Tercova emphasizes that patient privacy, data protection, and minimizing bias in algorithms are non-negotiable aspects of AI use in healthcare. She argues that these ethical considerations must be taken into account regardless of the specific application.


Major Discussion Point

Addressing AI Risks and Ethical Concerns


Agreements

Agreement Points

Balancing regulation and innovation

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


COVID-19 highlighted need for innovation over rigid regulation


summary

The speakers agree that while regulation is necessary, it should be flexible enough to allow for innovation. They advocate for approaches that balance safeguards with the ability to innovate.


Importance of AI literacy and capacity building

speakers

Paola Galvez


Ananda Gautam


James Nathan Adjartey Amattey


arguments

Digital skills development is key to leveraging AI’s potential


Public understanding of AI-generated content is important


AI literacy is needed to understand risks and benefits


summary

The speakers emphasize the crucial role of AI literacy and capacity building in enabling effective use and governance of AI technologies.


Similar Viewpoints

Both speakers stress the importance of considering local context and needs when developing AI governance frameworks, particularly for developing nations.

speakers

Paola Galvez


Ananda Gautam


arguments

Local needs and existing regulations should inform AI governance


Capacity building in developing nations is crucial


Both speakers emphasize the importance of protecting human rights and privacy in AI applications, while acknowledging the challenges in implementing these protections.

speakers

Natalie Tercova


Paola Galvez


arguments

Patient privacy and data protection are non-negotiable in healthcare AI


Human rights-based approaches are crucial but face implementation challenges


Unexpected Consensus

Risk-based approach to AI regulation

speakers

Paola Galvez


Natalie Tercova


arguments

Regulation is necessary but should not stifle innovation


Risk-based regulation allows focus on high-risk AI applications


explanation

Despite coming from different backgrounds (policy and healthcare), both speakers advocate for a risk-based approach to AI regulation, suggesting a broader consensus on this strategy across sectors.


Overall Assessment

Summary

The main areas of agreement include the need for balanced regulation that doesn’t stifle innovation, the importance of AI literacy and capacity building, and the necessity of considering local contexts in AI governance.


Consensus level

There is a moderate to high level of consensus among the speakers on key issues. This suggests a growing recognition of common challenges and potential solutions in AI governance across different sectors and regions. However, the specific implementation details and priorities may still vary, indicating the need for continued dialogue and collaboration.


Differences

Different Viewpoints

Approach to AI regulation

speakers

Paola Galvez


Ananda Gautam


arguments

Regulation is necessary but should not stifle innovation


Flexible, principle-based approaches can foster innovation while protecting rights


summary

While both speakers emphasize the importance of balancing regulation and innovation, Galvez argues for a more structured regulatory approach, while Gautam advocates for a more flexible, principle-based approach.


Unexpected Differences

Role of COVID-19 in shaping AI regulation

speakers

James Nathan Adjartey Amattey


Paola Galvez


arguments

COVID-19 highlighted need for innovation over rigid regulation


Copy-pasting EU regulations is not appropriate for developing countries


explanation

While not directly contradictory, these arguments present an unexpected difference in perspective on how crises and external influences should shape AI regulation in developing countries.


Overall Assessment

summary

The main areas of disagreement revolve around the approach to AI regulation, the balance between innovation and safeguards, and the consideration of local contexts in developing AI governance frameworks.


difference_level

The level of disagreement among the speakers is moderate. While there are differing perspectives on specific approaches to AI regulation, there is a general consensus on the need for balanced governance that protects rights while fostering innovation. These differences highlight the complexity of developing effective AI governance frameworks that can address diverse global needs and contexts.


Partial Agreements

Partial Agreements

Both speakers agree on the importance of human rights and privacy in AI regulation, but they differ in their focus. Galvez discusses broader human rights challenges, while Tercova emphasizes specific healthcare-related privacy concerns.

speakers

Paola Galvez


Natalie Tercova


arguments

Human rights-based approaches are crucial but face implementation challenges


Patient privacy and data protection are non-negotiable in healthcare AI


Similar Viewpoints

Both speakers stress the importance of considering local context and needs when developing AI governance frameworks, particularly for developing nations.

speakers

Paola Galvez


Ananda Gautam


arguments

Local needs and existing regulations should inform AI governance


Capacity building in developing nations is crucial


Both speakers emphasize the importance of protecting human rights and privacy in AI applications, while acknowledging the challenges in implementing these protections.

speakers

Natalie Tercova


Paola Galvez


arguments

Patient privacy and data protection are non-negotiable in healthcare AI


Human rights-based approaches are crucial but face implementation challenges


Takeaways

Key Takeaways

AI regulation is necessary but should be flexible to avoid stifling innovation


A human rights-based approach to AI governance is crucial but faces implementation challenges


Context-appropriate AI governance is needed, especially for developing countries


Building AI literacy and capacity across stakeholders is essential


Risk-based approaches can help focus regulation on high-risk AI applications while allowing innovation in lower-risk areas


Public participation and multi-stakeholder collaboration are important in developing AI policies


Resolutions and Action Items

Develop AI literacy programs for regulators, developers, and users


Invest in digital skills development to leverage AI’s potential


Consider local needs and existing regulations when developing AI governance frameworks


Implement rigorous review processes for high-risk AI applications in healthcare


Unresolved Issues

How to effectively balance privacy and safety in AI-powered content moderation


The extent of responsibility for AI developers and companies for the effects of their technologies


Whether the current state of AI regulation constitutes over-regulation or under-regulation


How to address the growing AI divide between developed and developing countries


Suggested Compromises

Adopt principle-based frameworks to provide guidance without rigid rules


Implement lighter regulatory requirements for low-risk AI innovations


Use policies and guidelines instead of strict legislation where possible


Balance innovation and regulation by treating them as collaborative rather than competitive forces


Thought Provoking Comments

I don’t think the question is any more whether to regulate or not to regulate; we’re past that, in my opinion. What we’re working on now is how to regulate, right?

speaker

Paola Galvez


reason

This comment shifts the framing of the discussion from debating regulation itself to focusing on implementation approaches. It challenges the premise of the session title and sets a new direction.


impact

This reframing influenced subsequent speakers to focus more on specific regulatory approaches and implementation challenges rather than debating regulation in principle.


There are certain regulatory frameworks that need to happen for innovation to come to the forefront.

speaker

James Nathan Adjartey Amattey


reason

This insight highlights the complex relationship between regulation and innovation, suggesting they can be complementary rather than opposed.


impact

It prompted discussion of specific examples where regulation enabled or catalyzed innovation, adding nuance to the debate.


We have to take into consideration the level of risk that is involved. In light of this, I believe that high-risk applications should undergo more rigorous review before they come into practice, while low-risk innovations can proceed under lighter regulatory requirements.

speaker

Natalie Tercova


reason

This comment introduces a nuanced, risk-based approach to regulation that balances innovation and safety concerns.


impact

It shifted the conversation towards more granular considerations of how to tailor regulatory approaches to different AI applications and risk levels.


There are different approaches, but just so we are all unified around one idea, I will mention some of them; they are not mutually exclusive, right? Policymakers can use a mix of them when deciding.

speaker

Paola Galvez


reason

This insight highlights the complexity of AI governance and the potential for hybrid regulatory approaches.


impact

It led to a more detailed discussion of various regulatory models (risk-based, human rights-based, principles-based, etc.) and their respective strengths and weaknesses.


If the internet had been regulated at the beginning, in the first three decades before WSIS started, we couldn’t have the internet we are using today. We couldn’t even be talking about or discussing internet governance.

speaker

Ananda Gautam


reason

This historical perspective provides a cautionary tale about over-regulation stifling innovation.


impact

It prompted reflection on balancing regulation with allowing space for technological development and innovation.


Overall Assessment

These key comments shaped the discussion by moving it from a binary debate about regulation vs. non-regulation to a more nuanced exploration of different regulatory approaches, their impacts on innovation, and the need to balance multiple concerns including safety, ethics, and technological progress. The discussion evolved to consider risk-based frameworks, the role of soft law and principles, and the importance of context-specific approaches tailored to different applications and regions. There was a general consensus that some form of governance is necessary, but disagreement on the exact form it should take and how to implement it effectively without stifling innovation.


Follow-up Questions

How can we find a balance between privacy and safety when using AI to detect child sexual abuse material (CSAM)?

speaker

Natalie Tercova


explanation

This is a crucial issue as it involves the tension between protecting children and maintaining individual privacy rights.


How can we ensure that risk assessments for AI systems are accurate and reliable?

speaker

Paola Galvez


explanation

This is important for implementing effective risk-based regulatory approaches to AI governance.


How can we improve AI literacy among policymakers, developers, and users?

speaker

James Nathan Adjartey Amattey


explanation

This is crucial for informed decision-making and effective regulation of AI technologies.


How can we design regulatory frameworks that are flexible enough to adapt to rapid technological advancements?

speaker

Nicolas Fiumarelli


explanation

This is important to ensure regulations remain relevant and effective as AI technology evolves.


How can we balance the need for innovation with the protection of human rights in AI development and deployment?

speaker

Paola Galvez and Ananda Gautam


explanation

This is critical for ensuring AI benefits society while minimizing potential harms.


How can developing nations build comprehensive AI policies that leverage the power of AI while addressing their specific challenges?

speaker

Ananda Gautam


explanation

This is important for ensuring equitable global development of AI technologies and policies.


How can we distinguish between AI-generated and human-generated content, and what are the implications for AI literacy?

speaker

Ananda Gautam


explanation

This is crucial for addressing potential misuse of AI and ensuring informed consumption of information.


How can we design AI regulations that take into account local needs and existing regulatory frameworks?

speaker

Paola Galvez


explanation

This is important for creating effective and context-appropriate AI governance.


How should responsibility be allocated among AI developers, companies, and users for the effects of AI systems?

speaker

Audience member (Agustina from Argentina)


explanation

This is crucial for establishing accountability in AI development and use.


Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #262 Innovative Financing Mechanisms to Bridge the Digital Divide


Session at a Glance

Summary

This discussion focused on innovative financing mechanisms to bridge the digital divide, particularly in developing countries. The panelists explored why traditional approaches to telecommunications infrastructure have failed to achieve universal access goals and how community-centered connectivity solutions can address this gap.

Carlos Rey-Moreno provided historical context, noting that despite recommendations dating back to the 1980s, private sector investment alone has been insufficient to close the digital divide. He emphasized the need for public finance and support for local, community-centered initiatives. Other speakers highlighted the unique advantages of community networks, including their flexibility to adapt to local conditions and ability to operate sustainably at small scales.

Regulatory challenges were discussed, with Dr. Emma Otieno sharing Kenya’s experience in creating an enabling environment for community networks through licensing frameworks and capacity building. Jane Coffin stressed the importance of regulators re-imagining financing models and gathering more data to support these initiatives.

The role of Universal Service Funds (USFs) was examined, with suggestions for making them more transparent, efficient, and inclusive to support smaller local connectivity providers. Panelists also discussed the need for innovative risk assessment models to attract investment in community networks.

Gender considerations were addressed, with Talant highlighting the Women in Digital Economy Fund as an example of targeted support for closing the gender digital divide. The discussion concluded with practical advice on building sustainable community networks and the importance of knowledge sharing between communities.

Overall, the panel emphasized the need for a multi-stakeholder approach, blending public and private financing, and adapting policies and regulations to support community-centered connectivity solutions as a complement to traditional infrastructure approaches.

Keypoints

Major discussion points:

– The persistent digital divide and failure of traditional approaches to achieve universal access

– The potential of community-centered connectivity initiatives to bridge the digital divide

– The need for innovative financing mechanisms and enabling policy/regulatory environments

– The role of regulators in supporting community networks through licensing, capacity building, etc.

– Addressing gender gaps and inclusion in digital connectivity efforts

The overall purpose of the discussion was to explore innovative financing mechanisms and policy approaches to support community-centered connectivity initiatives as a way to bridge the digital divide, especially in underserved areas.

The tone of the discussion was largely constructive and solution-oriented. Speakers shared examples, case studies and recommendations with a sense of urgency about addressing connectivity gaps. There was an emphasis on reimagining traditional approaches and taking calculated risks to support new models. The tone became more interactive and practical during the Q&A portion at the end.

Speakers

– Risper Arose: Africa Regional Capacity Building Coordinator for the Local Access Networks (LocNet) initiative

– Emma Otieno: Representing Women International Digital Inclusivity Network; Communication Authority of Kenya

– Carlos Rey Moreno: Co-manages the LocNet initiative; focuses on the policy and regulatory environment for community-centered connectivity initiatives

– Jane Roberts Coffin: Speaking in personal capacity; 28 years of experience working with communities, international financial institutions, and organizations focused on connectivity

– Lilian Chamorro: Part of Colnodo, an NGO in Colombia working with community networks

Full session report

Expanded Summary: Innovative Financing Mechanisms for Bridging the Digital Divide

This discussion, organized by the Local Access Network, focused on innovative financing mechanisms to bridge the digital divide, particularly in developing countries. The session explored why traditional approaches to telecommunications infrastructure have failed to achieve universal access goals and how community-centred connectivity solutions can address this gap.

Session Structure and Participants

Risper Arose, the moderator from the Association for Progressive Communications (APC), introduced the session structure:

1. A keynote presentation by Carlos Rey-Moreno

2. A panel discussion with experts in the field

3. An interactive Q&A session with the audience

Keynote Presentation

Carlos Rey-Moreno, Senior Advisor on Community Networks at APC, provided historical context and current challenges in his keynote. He highlighted that despite recommendations dating back to the 1980s, private sector investment alone has been insufficient to close the digital divide. Rey-Moreno emphasized the need for public finance and support for local, community-centred initiatives. He also mentioned the ongoing WSIS+20 review and the task force on financial mechanisms, stressing the importance of community-centred approaches in addressing connectivity gaps.

Panel Discussion

The panel featured experts from various backgrounds:

– Dr. Emma Otieno, Director of Licensing, Compliance and Standards at the Communications Authority of Kenya

– Jane Coffin, Senior Advisor at ISOC

– Lilian Chamorro, Researcher at Colnodo

– Talant Sultanov, Co-founder of the Internet Society Kyrgyzstan Chapter

Key points from the panel discussion included:

1. Regulatory Approaches:

Dr. Emma Otieno shared Kenya’s experience in creating an enabling environment for community networks. She highlighted specific support measures:

– Flexible licensing frameworks

– Capacity building programs

– Guidelines for community network operators

– Reforms to universal service funds to support smaller providers

2. Reimagining Financing Models:

Jane Coffin emphasized the need for regulators to:

– Re-evaluate risk assessment for local connectivity projects

– Gather more data to support community network initiatives

– Explore blended finance approaches combining public and private funding

3. Sustainability Strategies:

Lilian Chamorro discussed sustainability strategies for community networks, including:

– Adapting to local conditions and needs

– Lowering costs through community involvement

– Knowledge sharing between community networks

4. Real-World Impact:

Talant Sultanov shared an example from Kyrgyzstan where a small investment in community connectivity catalyzed broader development. A village connected to the internet through a community network saw improved communication with relatives abroad and was able to advocate for other essential infrastructure improvements.

5. Gender Considerations:

The Women in Digital Economy Fund was highlighted as an example of targeted support for closing the gender digital divide. The fund focuses on:

– Supporting women-led initiatives in the digital economy

– Providing resources for skills development and entrepreneurship

Audience Interaction

During the Q&A session, Kossi Amessinou from Benin inquired about the Women in Digital Economy Fund’s eligibility criteria and geographic scope. Panelists provided information on how to access the fund and its current focus areas.

Conclusion

The discussion emphasized the need for a multi-stakeholder approach, blending public and private financing, and adapting policies and regulations to support community-centred connectivity solutions. There was a growing recognition of the potential of these alternative models to address the persistent digital divide, complementing traditional infrastructure approaches. The panel highlighted the importance of flexible regulatory frameworks, innovative financing mechanisms, and community involvement in developing sustainable connectivity solutions for underserved areas.

Session Transcript

Risper Arose: Good afternoon, everyone. It’s my absolute pleasure to welcome you all to this important session. My name is Risper Arose, and I serve as the Africa Regional Capacity Building Coordinator for the Local Access Networks (LocNet) initiative, a collective effort led by the Association for Progressive Communications in partnership with grassroots communities and support organizations in Africa, Asia, and Latin America and the Caribbean. We aim to directly support meaningful community-centered connectivity initiatives while contributing to an enabling ecosystem for their emergence and growth. It has been an enriching and insightful week participating at the Internet Governance Forum 2024 here in Riyadh, and today I have the privilege of moderating our discussion on a topic that lies at the heart of digital inclusion: innovative financing mechanisms to bridge the digital divide. In this digital age, ensuring universal access to telecommunication infrastructure remains a significant challenge, particularly in developing countries. Despite significant public and private investments, traditional approaches to telecommunication infrastructure, while impactful in certain respects, have failed to achieve universal access goals, even for basic voice connectivity, for over two decades. However, innovative technology solutions have emerged as a powerful alternative. These solutions are rewriting the narrative, driven by distinct investment priorities. These providers not only connect underserved communities; they also foster social and economic development. They represent a vital part of the micro, small and medium enterprise (MSME) ecosystem, which is the backbone of developing countries and has been largely overlooked by traditional large-scale network operators. They remind us that connectivity is more than a utility: it is a foundation for empowerment and progress. 
And with all this said, they still face funding constraints and regulatory challenges that hinder their sustainability, scalability, and impact. Efforts to engage traditional commercial financial institutions that fund communication infrastructure have surfaced three intrinsic difficulties that need to be addressed: limited scale, high real and perceived levels of risk, and lower returns on investment. To address these constraints, there is a strong need to create an enabling and flexible policy, regulatory and financing environment that encourages the emergence of more innovative regional and local investment models for community-centered connectivity providers, which by extension allows them to expand and operate cost-effectively. In this respect, to improve the balance between profit maximization and reaching universal access, the time has come to fully review where investments are made and how effective they are at addressing the challenges of digital inclusion. That is just a brief introduction. For our session today, we will explore the interplay of policy, regulation and financing in fostering innovative connectivity solutions that bridge the digital divide. Our discussion will showcase new and innovative financing mechanisms investing in small-scale infrastructures that are already successfully supporting community-centered solutions. I am thrilled to have a distinguished panel of experts and practitioners with us today. Without further ado, I will give them each less than a minute to introduce themselves, starting with those joining us online. I’ll give the floor to Dr. Emma Otieno.

Emma Otieno: Thank you for the opportunity. In the event that I’m not able to keep both the voice and the video, I’ll sometimes be switching the video off, but I’m glad to join this very, very important session. My name is Dr. Emma Otieno, currently based in Kenya. I’m on this call representing a non-profit organization known as the Women International Digital Inclusivity Network, whose abbreviation is written in French. Back in Kenya I also work for the regulator, the Communications Authority of Kenya. At the moment I’m specializing in matters of digital inclusivity, and I’m particularly passionate about digital gender inclusivity. I’m happy to be on this call.

Risper Arose: Thank you so much, Dr. Emma. We are also very happy to have you as part of the panelists. Next on, I’ll go to Dr. Carlos Rey Moreno. Carlos?

Carlos Rey Moreno: Hi everyone there in the room and also online. My name is Carlos Rey Moreno. I’ve been co-managing the LocNet initiative that Risper mentioned at the beginning, focusing on policy and regulation, that is, on creating an enabling environment for community-centered connectivity initiatives. Lately, I’ve been doing quite some work around financial mechanisms as part of that, and it will be a pleasure to share that with you in this session. I’m joining from Spain; it would have been amazing to be there with you, but it was impossible this time around. Thank you again.

Risper Arose: Thank you so much, Carlos. Next on, we’ll hear from Jane Coffin.

Jane Roberts Coffin: Good evening, good afternoon. My name is Jane. I’m joining you from the United States, and I’m speaking in my personal capacity, not for my employer, based on my experience over the last 28 years and, most recently, my work with communities around the world, international financial institutions, and international organizations focused on connectivity, and on how we can look at financing smaller community-based networks and/or small ISPs. A pleasure to join you today. Again, I’m speaking in my personal capacity, and none of the information I’ll be speaking about today is non-public; it’s all public information based on my experience. Thank you very much.

Risper Arose: Thank you so much, Jane. And thanks for clarifying that. Now I’ll give this chance to Talant to start us off. You can introduce yourself. Thanks so much.

Talant Sultanov: My name is Talant and I’m wearing two hats here. One is as co-founder of the Internet Society Kyrgyz Chapter, within which we’ve launched several community networks in Kyrgyzstan. I am also representing the Global Digital Inclusion Partnership, which is a member of a consortium with GSMA implementing a project called the Women in Digital Economy Fund. I’ll talk about that as well. Thank you.

Risper Arose: Thank you, Talant; happy to have you on this panel. Last but not least, we’ll hear from Lilian Chamorro.

Lilian Chamorro: Hello to all, and thanks for joining this session. My name is Lilian Chamorro. I’m part of the team at Colnodo, an NGO based in Colombia, in South America. We have been working with community networks for many years, but more directly helping communities to have their own infrastructure since approximately 2017, and we have been involved in different projects with different partners and allies. I’m going to share with you some of the experience that we have in Colombia right now.

Risper Arose: Thank you so much, Lilian. Looking forward to those experiences and to engaging with them. Maybe just a brief word on the structure of our session today. We’ll have a keynote presentation to set the stage, followed by a dynamic panel discussion with the speakers you’ve heard from, and afterwards we’ll open the floor to questions and contributions. I encourage each of you to engage with your questions, thoughts, and insights; we really hope to make this session interactive and enriching. As we begin, I’d like to leave all of us with a question to think about and reflect on: how can we reimagine financing models that can empower local connectivity providers and achieve universal access? With that said, I’d now welcome Dr. Carlos Rey-Moreno to give us a keynote presentation. Carlos, over to you.

Carlos Rey Moreno: Hello. Thank you, Risper. It’s a pleasure to be with so many nice people on this panel. Let me actually share a screen, and let me know if you can see it. Those online? I can see you, Carlos. Yep. Yes, we can see your screen. Okay, sure. So yeah, I wanted to go back to the introductory remarks from Risper and why we are talking about this, right? I also want to frame this presentation in the context of the WSIS Plus 20 review that is taking place at the moment, to be concluded next year. There is a reality, which is the continued inability to meet universal service aspirations, demonstrating that to ensure the WSIS vision of a people-centered, inclusive, and development-oriented information society, where everyone can create, access, utilize, and share information, we cannot leave it solely to traditional telecom incumbent operators to solve, right? All the actors need to participate because, as Risper was saying, after 20 years there is a massive, persistent digital divide that those business models are not able to close, right? Over this time, we’ve seen a shift in focus from access to telephony, to broadband internet, and now to meaningful connectivity, which underscores that changing landscape. But throughout that changing landscape, the absence of a business case that meets the profitability requirements of those operators continues to pose a challenge for these players to offer services that can bridge the digital divide in remote and rural areas with small populations and low incomes. And despite being here today, despite many fora, especially after COVID, discussing the issue of the persistent digital divide, this has been a longstanding challenge. I mean, this was already flagged in the Maitland report in 1985: it was known that the private sector alone was not gonna be able to close the digital divide, right? 
Still, in that context, donors and many international financial institutions started to withdraw from the area in the early 90s, because it was thought that a fully for-profit model was gonna be able to do that, and that private investment was gonna find enough return to finance this capital-intensive industry; private capital was stepping in, right? But later on, as part of WSIS, a task force on financial mechanisms was created to look precisely at what could be done, as it was realized that private capital could not do it alone. Ideas such as the Digital Solidarity Fund were studied; the Digital Solidarity Fund was created, maybe not as intended initially but with voluntary contributions, which did not quite solve the problem. One of the things that the task force on financial mechanisms highlighted back in 2004, 2005 was the vital role of public finance as well, right? Because, you know, from this graph from the World Bank from 2002 that we have seen in so many places, there is an affordability frontier, a market gap, and then an access gap that the market isn’t gonna be able to cover, where public intervention is gonna be necessary, right? That report and the task force on financial mechanisms influenced many discussions and actually led countries to create universal service funds and implementing agencies to utilize and implement them. Some countries, such as the US, promulgated universal service funds even before this type of report, but ever since this report, many other agencies or funds have been created. A report from the ITU mentioned that in 2022, 42% of its members had a USF agency or fund. Whether the adoption of these strategies has worked efficiently or inefficiently is up for debate. In some countries it has had a massive positive impact. 
In other countries there have been issues with disbursement and with many other things. I believe other speakers during the discussions will be touching precisely on USFs, but they were there to look at the problem from the perspective that private finance was not gonna be able to close the digital divide, right? But there were other findings from the task force on financial mechanisms that were incorporated in the Tunis Agenda that I wanted to highlight here. One was helping to accelerate the development of domestic financial instruments, including by supporting networking initiatives based on local communities and strengthening capacities to enhance the potential of securitized funds and utilize them effectively, right? The Tunis Agenda was adopted in 2005. And this is the Pact for the Future that was adopted in September 2024, that is, three months ago, where the digital divide is recognized in its first objective, right? There is a persistent digital divide, and closing that digital divide needs to be accelerated in order to meet the Sustainable Development Goals. And there are two commitments there that I want to use to guide the presentation today. One is the development of innovative and blended financial mechanisms and incentives, including in collaboration with governments, multilateral development banks, and relevant international organizations and the private sector; again, kind of saying that private and public actors and multilateral development banks are part of the solution, and blended financing is part of the solution. But also that there is a need to invest in local network initiatives, right? It was said in 2005, and it is said again in 2024, 20 years after, in order to provide safe and secure network coverage for all areas, including rural, remote and hard-to-reach areas, right? So why no change? Why are we still discussing this 20 years later, right? 
I think, from our analysis, and this is touching on the submission that APC made to the Commission on Science and Technology for Development on the WSIS Plus 20 review, that few resources have gone to developing countries. The WSIS didn’t include a financial mechanism per se, a fund, mainly as a result of donor countries not wanting to make additional financial commitments, but also because of the lingering impact of structural adjustment, a trend in development aid to discourage global South governments relying on aid from investing in public sector infrastructure and services, and the debt burden after a period of debt forgiveness early in the century; that current debt crisis is at the center of many of the issues that we are seeing in developing countries as well. And there is the idea that financial mechanisms are not just to address infrastructure issues; there is also a need to invest in human capacity and digital public services, right? Infrastructure without capacity would not solve this issue. But also because, regardless of private finance, public finance, or many other instruments, the logic has been to use all that finance to support traditional incumbent operators, their return on investment and their for-profit business model, to do something that they are struggling with: their business model is not able to provide the return on investment that they are seeking in those areas, right? Again, something different needs to be done. Pretty much 99% of USF money has gone to the very same operators that were using private finance for their operations. 
But also, on public finance over and beyond USFs: the multilateral banks committed only around 1%. This is a study from the Alliance for Affordable Internet: around 1% of multilateral development banks' cumulative commitments in low- and middle-income countries over the period from 2012 went to ICTs, about $5 billion over that period. There have been some changes. There are some initiatives, such as the World Bank including ICTs as part of their priorities, the European Commission's Team Europe and Global Gateway, the G7-led Partnership for Global Infrastructure and Investment, and the Digital Silk Road. There are many, many more investments, but still those investments are going to the very same partners, they are far less than what the ITU and others consider necessary, and they are going to the very same places, funding in many cases similar things to what the private sector and private finance are funding: 5G, LEO satellites, submarine cables. They tend to focus on profitable markets that maximize returns for their shareholders. They are not going into supporting initiatives or interventions that could potentially close the digital divide and address the issues that the 30% on the wrong side of the persistent digital divide are facing. As Risper was saying, and I think many others have been saying over the years, there are existing initiatives showing that there are other ways of doing this: initiatives, or business cases, that are focused on decentralized, local and community-centered models driven by completely different investment imperatives. It is not only that they need to be sustainable, that they need some revenue streams to be sustainable; they have a different bottom line, around social and environmental concerns, that moves their interest, not only the pursuit of profit and return for their investors. 
They are part of the micro, small and medium-sized businesses that are, as Risper was alluding to, the lifeblood of so many economies in the global South, but it is only very recently that they have been taking part in the telecommunications sector, and they are struggling to be part of a telecommunications sector that is built for traditional operators with a national footprint. These community-centered connectivity providers can operate and be self-sustainable at a very small scale, and they have a far more diverse range of ownership and operating models. By being community-centered instead of profit-centered, as I was saying, they are able to use all their financial mechanisms to reduce their costs and to center on the communication needs of the community, rather than the profit their shareholders are seeking. Over and beyond the initiatives, we have seen that the ITU and all its members have reached consensus, both at the World Telecommunication Development Conference, in the bridging-the-digital-divide resolution, that this type of complementary access solution is needed, and at the Plenipotentiary Conference. Again, all the member states are agreeing that we need to look at complementary access solutions and enable them to close the digital divide. And the ITU, in the recent Universal Service Financing Efficiency Toolkit, is putting community broadband networks forward as part of the solutions that need to be considered. Because, when we are looking at innovative financial mechanisms, those that are able to do the same at a lower cost are a financial mechanism in and of themselves. APC is coming up with a financial assessment tool to compare last-mile connectivity providers, and our initial finding is that they are considerably cheaper. 
And not only considerably cheaper, but they bring along social-inclusion impacts toward achieving meaningful connectivity that the incumbent traditional operators are not able to match. So how are they able to do this? Well, the investment comes from the users themselves. They are also able to tap into other non-returnable support, such as subsidies and grants, or donations from people who align with that social mission, as well as public budgets, and mechanisms that private companies also use, such as recovering the cost of hardware in the sale price, or private finance. But in their sustainability model they also include other elements, such as barter transactions, action-based subsidies, membership fees and others, that private operators are not able to access. And that is how they are able to provide pricing that is below market price: sometimes based on cost recovery, and sometimes even free of charge, depending on these other contributions to the capital investment and the sustainability model that members and other socially aligned actors allow them to achieve. So it is not only that there are recommendations being incorporated in policy in some countries, and not only that there are existing initiatives able to do this at a lower cost; it is that financial institutions themselves, such as the Inter-American Development Bank and the Asian Development Bank, are in their publications recommending that governments look at this as a financial mechanism in and of itself, as a way of closing the digital divide, because traditional operators are not able to do so. 
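The pricing logic described above, where user contributions, subsidies and in-kind support can push the tariff down to cost recovery or below, can be sketched as simple arithmetic. This is a minimal sketch; the function name and every number in it are hypothetical assumptions for illustration, not data from the session:

```python
# Illustrative cost-recovery tariff for a community-centered network.
# All figures are hypothetical; nothing here comes from the talk.

def monthly_tariff(capex, capex_years, monthly_opex,
                   monthly_subsidies, monthly_in_kind, households):
    """Monthly price per household needed to recover costs, after
    subtracting grants/subsidies and in-kind contributions
    (volunteer labour, barter, donated equipment)."""
    monthly_capex = capex / (capex_years * 12)   # amortize capital cost
    net_cost = monthly_capex + monthly_opex - monthly_subsidies - monthly_in_kind
    return max(net_cost, 0) / households          # can fall to zero

# A for-profit operator must also recover cost of capital plus a margin;
# a community network only needs net_cost, so its tariff can sit below
# market price, or reach zero when contributions cover everything.
price = monthly_tariff(capex=20_000, capex_years=5, monthly_opex=800,
                       monthly_subsidies=300, monthly_in_kind=200,
                       households=100)
print(f"${price:.2f} per household per month")
```

The point of the sketch is the last term: because subsidies and in-kind contributions enter the cost side directly, the break-even tariff is structurally lower than one that must also fund shareholder returns.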
The Broadband Commission, in its 2021 report looking at financing models for bridging broadband gaps, recommends that governments explore the options and feasibility of funding allocations to empower smaller providers such as community networks, recommends that the potential beneficiaries of USFs include community networks, and proposes an international fund through which less scalable projects, such as community networks, could be funded, one that could act as a clearing house for knowledge and best practices and potentially provide loans and other resources at concessional terms. We have countries such as Argentina that are supporting community networks: $3 million from the USF budget for 2020 to 2022 was made available to community networks. And one could say: but no, we cannot support community networks, because bigger operators are doing a great job at that. One hundred percent; that is why we are referring to complementary solutions. That is why Argentina allocated 0.63% of its budget to community networks, so it could continue supporting the other operations with the remaining 99.37% of its budget. It is not that what we and others are proposing should be front and center of these initiatives. It is about complementing, about testing things that have not been tested, and supporting initiatives that have not been supported. So, coming to the proposed solutions, and I am coming to an end: one is about capacity building. Again, I am trying to speak to the WSIS, to the Task Force on Financial Mechanisms, because some of the findings are the same. We propose a new set of financial mechanism policies that could provide an information service: access to independent advice on how to evaluate the information that many governments and regulators are bombarded with by the private sector. 
Such information prevents them from looking at other solutions and prevents them from negotiating as equals. Such advice is also relevant to other stakeholder groups, such as community-based initiatives and their partners. Because, as the Task Force on Financial Mechanisms said, building human resource capacity and knowledge at every level is central to achieving the WSIS objectives. Then, diversifying the ecosystem. We have some countries, such as Kenya, where there is national recognition, even a license, for community-centered initiatives, but that is the exception rather than the rule. So the recognition that exists at the ITU, and that comes from the multilateral development banks and other institutions, needs to be included and streamlined: licensing processes that reduce licensing fees and make other enablers available, such as spectrum, lighter reporting burdens, and access to backbone. If it is illegal to do this, how is someone going to invest in these networks? There are many investors ready and willing to invest in them if they were legal. So please consider that an enabling financial mechanism, or a prerequisite for one. And this needs to be accompanied by awareness-raising and capacity-building programs to develop the pipeline of initiatives that could use these instruments effectively. And this starts, again, with a finding from the WSIS Task Force on Financial Mechanisms, which said that policy and regulatory incentives and more open access policies are also needed for private investment, civil society organizations and community networks to contribute to expanding ICT to rural and low-income populations, to address the bottom-of-the-pyramid population. That is to say, this was said in 2004. The other recommendations that I am making were said in 2004. 
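The Argentina figures quoted earlier ($3 million representing 0.63% of the USF budget) can be sanity-checked with back-of-envelope arithmetic. The implied total budget below is my own calculation from those two numbers, not a figure stated in the session:

```python
# Back-of-envelope check of the Argentina USF allocation quoted above.
alloc_cn = 3_000_000        # USD made available to community networks, 2020-2022
share_cn = 0.63 / 100       # stated share of the USF budget

total_budget = alloc_cn / share_cn   # implied total USF budget
share_rest = 1 - share_cn            # share left for traditional programmes

print(f"Implied total USF budget: ${total_budget:,.0f}")      # ~ $476,190,476
print(f"Share left for other operations: {share_rest:.2%}")   # 99.37%
```

The arithmetic underlines the speaker's point: less than one percent of the fund was enough to pilot a complementary model while leaving the existing programmes essentially untouched.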
If we do not include them, as in 2005, then in 2024 and 2025 we are not going to be able to implement the GDC. We are not going to be able to invest in local network initiatives in a way that closes the digital divide. And two more points on innovative financing mechanisms. For that investment in local network initiatives to be effective, new ways of making smaller financial products available are needed. The multilateral development banks recommend supporting this, but they do not know how to do it, because the minimum investment they can make is $1 million, $2 million, $3 million. These are small operators; there is no need for $1 million. They could function with far less. But we understand that making those smaller deals available carries the same transaction costs. So we need to look at reducing those transaction costs, per loan, per grant, per whatever instrument we are using, as well as finding ways of de-risking those investments so the rates are lower. There are donors out there with guarantee pools who are willing to use these to de-risk investments and reduce the interest rates. There are ways of working with local, public financial institutions that could benefit from those concessional loans and pass them on in smaller products with better rates for these social-mission-driven operators. We could create pools of projects, with standardized processes and documentation that reduce duplication of effort, to again reduce the transaction costs. And this is not new; it has been tested in many other sectors. It could take the form of the international fund proposed by the UN Broadband Commission. The GDC talks about blended finance, and in that blended finance, grants could be used in places where it is impossible to find a return on investment, or to start initiatives off, and to cover some of the transaction costs I was alluding to above. 
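The transaction-cost argument above, that many small loans only become viable once appraisal and documentation are standardized and pooled, can be illustrated with a toy model. The function names and all cost figures are assumptions for illustration, not from the talk:

```python
# Toy model of the transaction-cost argument: many small loans processed
# individually vs. pooled under one standardized process.
# All cost figures are illustrative assumptions.

def total_transaction_cost(n_loans, per_deal_cost):
    """Cost when each loan is appraised and documented separately."""
    return n_loans * per_deal_cost

def pooled_transaction_cost(n_loans, pool_setup_cost, marginal_cost):
    """Cost when projects are pooled with standardized documentation:
    one setup cost, then a small marginal cost per loan."""
    return pool_setup_cost + n_loans * marginal_cost

individual = total_transaction_cost(50, per_deal_cost=40_000)
pooled = pooled_transaction_cost(50, pool_setup_cost=200_000, marginal_cost=5_000)
print(individual, pooled)  # 2000000 450000
```

Under these assumed numbers, pooling 50 small deals cuts overhead from $2 million to $450,000; the fixed setup cost is paid once instead of being duplicated for every borrower, which is the mechanism the speaker is pointing at.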
And there is interest in development finance, right? The UN Financing for Development Conference is taking place next year, after 10 years; it could look into all of this and put operators that are focused on development, not on profit, at the center of this investment in telecommunications. But there is a risk, and we are seeing this already, of profit-seeking companies eyeing development finance, which we need to be aware of and attentive to, so that development finance is used for development, not for profit and increasing shareholder value. And this is the last comment, which was also mentioned by Risper: we need to improve the balance between profit maximization and the goal of reaching universal and meaningful access. The time has come to review where those investments are going and how effective they are at targeting universal and meaningful connectivity. We really invite the stakeholders to implement the solutions that we propose in this presentation. We are open to discussing, to sharing our lessons, to working together on this. But really, development should be put at the center, and not profit maximization. Thank you very much.

Risper Arose: Thank you so much, Carlos, for setting the tone for this discussion. That was a brilliant presentation, and it has shown us why we need this discussion now and why it is important to start thinking about innovative financing mechanisms for bridging the digital divide. You have mentioned a lot of the work that is already happening, in terms of global processes, policy recommendations, and the tools that have been created. Community-centered connectivity stands as a viable complementary solution that needs more focus, visibility and support in bridging the digital divide. This really sets the tone for the next segment of this session. Now we will move into the panel discussion and hear from the different speakers lined up. We will start with Lilian and Talant, who are here with me. For this segment we will look at framing the challenge: what are the main barriers to universal access, and why have traditional infrastructure approaches failed? We will start by looking into your various experiences and expertise from the work you have done. And while at it, perhaps you can talk about why community-centered connectivity solutions are uniquely suited to address this gap. We can start with you, Lilian.

Lilian Chamorro: Thank you, Risper and Carlos. Well, something we have been talking about is that, for the private sector, rural communities are not something they want to address, because those are low-income communities that cannot pay high prices for communication services, and because there are few people in remote areas. So the investments that companies make in those spaces are difficult: it is hard for them to recover the money they invest, because there are few people. Also, the cost of deploying infrastructure in some rural areas is very, very high, because you need transportation, you need security, and sometimes basic services like electricity are not available, so they have to implement other infrastructure too, not only telecommunications infrastructure. The cost of deploying telecommunications infrastructure in rural areas is very high, and the low income and small population in those areas are not enough to cover it. And that is without taking into account other factors, for example territories that are difficult to access. In Colombia, we have areas where the violence, but also the geography, makes it very hard to reach. So it is not easy to address, both for implementing infrastructure and for sustaining it when you need to repair something; it is not so easy to get to those areas. On the other side, governmental programs have also failed to sustain initiatives. They invest a lot of money putting up antennas and other infrastructure, but when the resources end, the projects also end. And we have a lot of infrastructure that is unused and getting old in many rural areas. 
You can see antennas, you can see a lot of infrastructure that is not used in rural areas, because the government program finished and no one could sustain that equipment. About the other question, why community-centered connectivity solutions are suited for this: I think one of the principal things is the flexibility of these kinds of initiatives, flexibility to adapt to the diverse and unique conditions that every community has. It is rare to see community networks that work the same across many communities; each community network is different and has its own characteristics, because the communities can establish their own infrastructure, adapting to the geographic conditions but also to the needs they have. Also, the sustainability model they adopt is different and depends on the context, the income the community has, and also the traditional ways they have to exchange services. Additionally, the communities can establish their own governance processes to define how the community network works and what it is for. And one more reason, though not the only one: the possibility to share basic costs, for example connectivity, or the transportation of technical people to help fix a problem. The community can share the cost and has different ways to afford it; for example, they invite people to their house. It is not just about money; it is also about how the community organizes itself to have the services it needs and to sustain the community network.

Talant: Thanks so much for the opportunity to speak about community networks. I wanted to share a story of how a small investment in a community network helped unlock major investments in infrastructure. Just before coming to the IGF, I was reading the news in Kyrgyzstan, and there was a story that a very remote village, Zardali, had been connected to electricity. The Minister of Energy personally came to the village to announce that the village now has electricity. Just a few weeks before that, the Minister of Transport came to the village and said, I am going to provide you a road, because this village had no road, no electricity, no internet. And before that, a mobile operator came and installed a mobile tower. All of this was happening month after month in a village that previously nobody had heard of; if you heard of it, it would be once in a decade, maybe when some disaster happened. And the president himself flew to the village on a helicopter to say that we are going to help the village with infrastructure. A year before all of this, the village had been connected to the internet as a community network, with a small grant from the Internet Society Foundation. The very first thing the villagers did was, of course, connect to their relatives around the world, saying, now we have connectivity, please call us, we can call you. And the second thing, they started making videos of the village, saying, Mr. President, Mr. Prime Minister, we don't have a road, we don't have electricity, we don't have mobile connectivity, please help. And, like Carlos was saying, it wasn't millions of dollars that were required to connect this village; it took maybe $10,000 to provide internet. But with this small investment, the villagers were able to attract hundreds of thousands of dollars from the government for all of this infrastructure. So we were really excited that we, as the Internet Society Kyrgyzstan Chapter, helped the village get connected. 
And this was an interesting case study, which is not PPP as we know it; I counted six Ps. It was a partnership of the public sector, the government; a partnership with the private sector, the ISPs who provided the spectrum; a partnership with the provincial or municipal government; a partnership with the people who live in the village; and finally a partnership of civil society organizations and international donors like the ISOC Foundation, the European Union and the US Embassy, who provided the small funds that allowed us to connect this village to the internet. And the way we learned how to do community networks was actually at an IGF like this, many years ago in Guadalajara. That is where we learned that there is such a movement of community networks. And I think there is a discussion today that next year there will be a decision: do we need the IGFs? And of course, for the villagers of Zardali, yes, we need the IGFs, because they make a real impact on people on the ground. So this is my brief introduction. Hopefully later on I can share information about potential financing opportunities for such initiatives to close the digital divide, especially the gender digital divide. I wanted to talk about the Women in Digital Economy Fund, and I think we have colleagues online who are working with women-led initiatives. This fund can provide support in terms of financing, in terms of technology, technical assistance and know-how, and in terms of policy and regulations. So thank you. Back to you.

Risper Arose: Thank you so much, Talant. Such a great story, and it shows what community connectivity can do, even in terms of attracting funding for these sorts of initiatives. And Lilian, thank you so much for highlighting the community aspect and the governance of these networks, and how, by pooling resources together, communities can sustain this network infrastructure and even reduce the cost of this type of community-centered connectivity. Next, I will jump into the segment where we will hear regional insights on innovative policies and regulation. With us in the room we have Dr. Emma Otieno and also Jane Coffin. As I give you the floor, perhaps you can talk to us about what role regulators play in enabling flexible, community-oriented licensing frameworks, and how universal service fund mechanisms can be made more transparent, efficient and inclusive to support smaller local connectivity providers. We can start with Jane and then finish with Dr. Emma.

Jane Coffin: Thank you very much. It was very inspiring hearing everyone speak, and it has struck me that when you start to think about a localized approach to providing connectivity, there is an almost obvious corollary: you need a localized approach to policy, regulation and financing. That means looking at, as Carlos has said, different ways of assessing risk, and different ways of bringing blended finance, meaning different pots of money, together at different times, with different funders and different financial instruments. It is just very logical that you would look at this differently now. And as Carlos has noted, the 1985 Maitland Commission Report, known as the Missing Link report, laid this all out for us years ago. The regulatory and policy environment has changed so much over time, and it probably does need to be recalibrated to the local circumstances that we are all seeing. How do we change our policies and regulations to fit and adapt, making sure that different-sized networks, which have different demand- and supply-side economics than larger network investments, are accommodated? How do we take a look at this again? I think, and Dr. Emma will probably have a great deal to say here too, that we have to take some risks as policymakers and regulators and rethink how we are looking at local connectivity, infrastructure assessment, feasibility studies, and what is out there in a country, and re-approach and re-imagine not only that policy and regulatory environment. Because the investors want to see data. They want to see the facts. They want to know how many people are not connected, where that community is located, and the potential estimated cost of connecting it, whether through satellite technology, fixed infrastructure, fixed wireless infrastructure, or wireless solutions alone. They want to see the facts. 
They want to understand. But we also need some of those larger investment organizations, and some of the localized investment entities, to re-look at the demand- and supply-side issues, at the regulatory and policy frameworks in existence, and at what needs to change on the regulatory, policy and financial sides to match the local connectivity challenge. One thing I would say is that regulators and policymakers have a fabulous tool in their hands, called a notice of inquiry, or an information-gathering process. You can put out a notice and ask different organizations to provide you with information, and you can pull all that information in to reassess, realign and reimagine what you need to do. Look, the G20 this year, through the Digital Economy Working Group, has acknowledged that innovative financing is super important to connectivity; that is from the Brazil process this year, and it is rolling into South Africa in 2025. So there is a recognition at the highest levels of government, of global organizations and of financial organizations that the innovative approach to financing, the blended finance approach that Carlos indicated, needs to be looked at, and that you have to have the policy and regulatory corollary to match. It is not just, oh, we will bring in lots of money, but how do we bring in money at a certain point, locally, with a different vision of risk as well. There is an old way of looking at risk that is a bit, I am going to say this out loud and this is my personal opinion, colonial. It is a traditional type of banking approach. We have to de-risk investment differently, at the local level, with the local facts and the data, so that investors have confidence; and they are local investors too, who might be more locally bought in to the solutions that could be provided. 
So really, we are re-imagining how we bring investment in. Talant has just talked about a network in Kyrgyzstan. That network is a completely different economic, policy and regulatory solution, based on what types of technologies can be brought in and how the local people can be brought in to help with that connectivity solution. The investors are going to look at this differently, and the government, from a governance perspective, is going to look at this differently as well. So it is about matching the local sensibility and the local factors in place, and taking a step back. And I am going to circle back around; sorry, it is a little late here, so my brain is probably a little circular. Please, as regulators and policymakers, give yourselves a little breathing room to re-imagine, to take a step back and work with the public, with the different financial institutions, with the ITU and the development finance institutions, but also with this multi-stakeholder community. You also have APC, ISOC and others, where you can come in and get that information from all of the organizations who know the local circumstances, and of course some of the financiers and investors who know the different types of blended finance instruments that can be brought in. So we really are talking about taking a step back, gathering more data, looking at the new models being brought in, and looking at the sheer fact that, if we have not been able to solve some of these connectivity problems since 1985, we have seen how community networks can solve them. We have seen how regulators can come in and re-imagine their USFs, re-imagine their licensing, and bring in community and other complementary networks. You really can create a match with the financing side now, looking at the different demand and supply sides. I am going to stop talking, so Dr. Emma 
can jump in here, but I am really excited to hear more from the panelists and others. I think we really do have an amazing opportunity to rethink, step back and gather more data. And for regulators and policymakers: use that tool of the notice of inquiry to gather more information. Thank you.

Emma Otieno: Thank you very much, Jane, Carlos and the rest of the speakers, who have made very pertinent points on this very important subject. I will just augment the many points that have already been stated very aptly. In terms of how the regulator can be an enabler for community-oriented networks, for their licensing and their existence, I will use the case of Kenya, since, as Carlos said, Kenya has made a bit of progress, very passionately and very intentionally, in supporting community-centered networks and connectivity. One of the things, and this is where Kenya started, and it should be a good example for other regulators, is to take the issue of access gaps in the market, both connectivity gaps and usage gaps, very seriously. That is where Kenya started to create the appetite and the justification for where to place community-centered networks, and for how to fight for their place in the regulatory and policy frameworks. In 2021, an updated access gap study was undertaken, and it went further, taking a microscopic view of the true gap, because by that time population coverage had already gotten upwards of 78%. So we are indeed in the true-gap space of what Carlos shared as the access gap model. And the beauty of a setup like Kenya's is that the regulator is also the board responsible for the Universal Service Fund. 
So these speak to each other very seamlessly: when there is empirical data that the Universal Service Fund brings to the table, the regulatory side swings in, and they are able to ask how these findings can be used to update the market structure. That is how we found a place to review the market structure and start a process of incorporating community-centered networks into the regulatory framework in Kenya. In 2020 the first licensing framework was adopted, and since then several community networks have been licensed. So that is another form of regulatory support that has been extended to enable these sorts of networks. Moving on, the regulator should also move away from being the police toward being an enabler, as the question that has been asked suggests. For instance, there has been a lot of deliberate handholding from the Kenyan regulatory setup: the regulator reaching out and really nurturing the community networks and their constituents, speaking to institutions like APC's LocNet and other capacity-building facilitators, and championing courses and capacity-building opportunities that are actually centered on supporting community network operators. We are seeing that their capacity is a bit low, and resource-wise they are also not very enabled. So the regulator comes to the table, speaks for them, and champions their rights: let us allocate, let us prioritize, let us ensure we do not have a conversation about the various market segments, the various kinds of support and the various challenges without including the community-centered networks. So that has really been an enabling role that the regulator has played from this perspective. 
And even this year we have had several collaborative initiatives between the capacity-building arm of the African telecommunications sector, LocNet and APC, the regulator, and the ministry, to really ensure that the capacity of the constituents in this space is being built. So there are practical examples of that enabling angle. Further to that, there is the issue of making that space very clear. Beyond the mere existence of the licensing framework, there need to be additional guidelines, things like handbooks, guidelines on how to roll out networks in different typologies. Some previous speakers have talked about the various challenges facing these connectivity initiatives: in some areas the challenge is terrain, in others it is security-related, and in others it is simply affordability and financial support. So it is about coming up with guidelines that incorporate the feedback from these operators, so that people who are interested in coming in to fill this gap can be guided uniformly in how to go about it, even in terms of accessing support. What is the procedure? Where do you start? When it comes to the license, beyond the framework being there, how do you start? Whom do you talk to? Where do you go for support? We find that they are not very exposed, because most of them are indeed from the community, from rural areas, so they really need a lot of guidance, broken down to a granular level, to enable them. Again, speaking of the Kenyan situation, the process of augmenting the guidelines to support them is ongoing. And there is also the issue of sustainability, which we find to be very critical and which has to be at the center of this discussion all the time. 
And if a regulator is coming in from the angle of supporting community networks: when you have licensed a community network, you do not want to license it to go and die. So what other discussions can be put on the table from the policy perspective, bringing on board the other agencies as collaborators? For instance, the feedback we are getting from the community networks in Kenya is that the revenue agency, seeming to want to expand its revenue base, would actually go after anybody who seems to be putting up a business. So the regulator tries to speak for them, because they cannot go as individuals to speak at those tables. You are speaking for them and saying: this is a new model which works like this, we should be exempted, talking about exemptions in terms of taxation, clear exemptions that are actually enacted in the law. But also speaking to other players, like the Ministry of Finance and the central bank: what avenues can come on board to assist these community networks? Especially as we see that they have to leverage the usage gap, so that they come up with products that can create that sustainability interface between the connectivity and keeping the business running. So sustainability is a very major issue to champion through the various forums that can listen to whatever is being proposed as avenues and strategies for enabling the sustainability of these community networks. I will quickly speak to two or three points that touch on transparent, efficient, inclusive ways of supporting this using the Universal Service Fund. And we have to rethink as regulators, I think at the Universal Service Funds mostly, what other approaches can be used to finance community-centered connectivity or networks. 
A key point here, speaking from the experience in Africa: I find that most Universal Service Funds use subsidies as the key support to telecommunications operators. And we find that, even in the Kenyan situation, a subsidy to a community-centered network might not be the right fit, because it is not for profit, unlike the other categories of licensees. So we should start to move away from the traditional way of support and think wider: even if it is a government managing the Universal Service Fund, there can be grants, frameworks for grants, which require a bit of change in the laws, like the public financial management laws, and issues like loaning. The other day we were having a conversation with Brazil, and they have a very good model where the Universal Service Fund offers loans to these small operators, the community-centered networks. That is a collaboration, a very major shift in law, bringing together the ICT ministry, the Ministry of Finance and the banks, so that the Universal Service Fund can be used to offer loans. There are many other models that, if time allowed, I would speak about. Then there is also the issue of really engaging the community-centered connectivity networks and the players in that space, so that you get feedback that can improve the decisions being made. And then also strategies: for instance, in the current strategy under development for the Universal Service Fund in Kenya, community-centered connectivity has been prioritized, with very clear targets, like by when, how many do they want to support, what framework should be put in place, what kind of stakeholder engagement and mapping should be undertaken. And then of course, monitoring and evaluating how that is being implemented. 
Then finally, the last one is really collaborating with agencies, governments or other bodies, like APC and the networks, the people who have the global perspective, to champion community-centered connectivity, so that they can support the areas of research and regulators can have updated data to inform decisions, both on the side of the universal service fund and on the side of regulation. They hold the part of the pie that makes decisions that can impact this, but they require updated data, and they may not have the resources or the expertise in all instances to be able to make a decision. So thank you very much. Let us see what other opportunities there may be to keep on discussing this; there is so much about it.

Risper Arose: Thank you. I appreciate it. Brilliantly said. Thank you so much to our two presenters. Dr. Emma, thank you for coming in and talking from the regulatory perspective, and Jane, thank you so much for joining at this hour and for sharing and really underscoring the importance of regulators re-imagining financing for initiatives like community-centered connectivity solutions, gathering more data, and working with different stakeholders, really a multi-stakeholder approach to supporting community-centered connectivity solutions. Dr. Emma, thank you so much for also highlighting what has been happening in Kenya in terms of the regulator hand-holding the community networks and what that looks like in creating an enabling environment for community-centered connectivity solutions, and of course for highlighting the role of the USF and the alternative models and frameworks that can make a universal service fund transparent and inclusive in supporting community-centered connectivity. I am just cognizant of time, and because of that we will jump into questions and answers and have a discussion around that as we round up this very engaging conversation. So I will start with the online audience. Are there any questions online?

Carlos Rey Moreno: There were some comments, I believe made when Talant was speaking, around how we ensure that the energy perspective is addressed as one of the critical enablers for Internet governance. There were others requesting Jane to expand further on the alternative risk perspective, and then a request around where best-practice resources for this topic could be found or created; I can answer that one in the chat, and then maybe we can develop procedures or guidelines as highlighted by Dr. Emma. So those questions, for Jane to expand on the risk perspective and maybe for Talant to touch on the energy perspective around Internet governance as a critical enabler, were the ones that appeared in the chat so far.

Risper Arose: Thank you. Thank you so much, Carlos. Yes, so we can start with Jane, and then we can have Talant come in, and then see if we can finish up.

Jane Roberts Coffin: Absolutely, and I'll be very quick so we can fit in more people. I think what we'd have to say is we'd have to take a look at the traditional risk checklists and risk matrices that are generally used for investment, look at how we would adapt them, and look at our own risk tolerance, the specific market or area we're talking about, the feasibility of developing that infrastructure and building it out, and the different regulatory and policy parameters in a country. Reassessing risk also means recognizing the complications when people look at risk in a country and say, oh, well, I'm not sure we can actually do business here. Well, if we're talking about a different focus, from a community-based perspective, with the backing of different government organizations in the country and different communities of interest, the way you're going to look at the tolerance for risk and your checklist for risk changes. And if it's a social investment risk model, which is different, you're going to have different factors that you would use, from governance, policy and regulatory angles, and then even from a more systemic financial perspective, where there are some very different models used for assessing risk. So this might be one of those topics that would be really great for another IGF. One of the speakers has typed in the chat about what some of the best practices might be for reassessing risk in a country, and how we can work and speak with donors at a different level, speak their language from an investment perspective, but reassess how that would work with local community involvement. So I'll stop there, but it's more a question. 
And so when we get to the question of whether there are traditional ways of assessing risk that are tied to grants and loans, or how a government might look at a project coming into its country, or how a local multi-stakeholder approach to building connectivity would be put together, we'd have to start to take apart those traditional models and rebuild a different type of risk framework.

Risper Arose: Thank you again. Thanks so much.

Talant: I think it’s a great question. Thank you for asking about the governance, because it’s a really good segue into a publication I wanted to bring to your attention. It’s called Policy Actions to Close the Gender Digital Divide. We are here today to speak about innovative financing mechanisms for the digital divide generally, and the organization where I’m working is now focusing on the gender digital divide, because it’s a major topic globally. This initiative is called the Women in Digital Economy Fund, which was launched by USAID, the Bill and Melinda Gates Foundation and other donors, and which started as a $50 million fund and is now growing. This initiative identified three areas where a lot of work has to be done. One is financing: there are funds set apart to support women-led organizations, or organizations working on closing the gender digital divide. The second area is the know-how, how to do this kind of work; so there is an opportunity for technical assistance to organizations. And finally, the third major area is policy and regulations. There, myself and my colleague Vakas, who is sitting here in this room as well, are working with governments that are interested and willing to do reforms to close the gender digital divide. In this publication we have collected interesting, promising practices from around the world. This will be an annual publication, and hopefully next year we can see a Colombia case and a Kenya case, which would enable other governments to get inspired and to work on closing the gender digital divide. And finally, I just wanted to say there are five core areas where the work is focused. One is access and affordability. Second is relevant digital tools and services. Third is digital skills and literacy. Fourth is safety and security online. 
And finally, the fifth, as Jane was mentioning, is data and insights, which are crucial for policymakers and regulators to make decisions. Thank you.

Risper Arose: Thank you so much, Talant. Absolutely, we can’t finish without talking about how we are mainstreaming gender whilst talking about financing mechanisms for these types of initiatives. And thanks for mentioning what WEDEF is doing. Now I’ll hand it back to the onsite participants. Let me see how many questions we have in the room. So, a couple of hands. Okay, yes, go ahead. Okay, thank you very much.

Audience: Thank you to all the panelists. This session was very comprehensive. Please let me introduce myself: I’m Kosi, I’m a senior; I’m from Benin, from the Ministry of Economy and Finance, but I also lead an NGO called Women Be Free. We provide training and employment for ladies and women in our country. When we talk about community networks, it is something very important for us. I want to know, step by step, what is the process to build a good community network, one where we don’t have problems making it sustainable over time? That is my first question. The second one concerns the ladies and women I lead: how can I put partnership on the table, directly, organization to organization, to bring knowledge to other countries like mine in Benin? Can we make a partnership and provide knowledge directly to our different communities? How can we do it now? Third one: the fund we are talking about, is it available for every region of the world, or is it specifically for some regions and some kinds of stakeholders? Thank you very much.

Risper Arose: Thank you. Thank you so much for those three questions. I will give the mic to Carlos. Carlos, I’ve seen you have shared some of the links in the chat, thank you so much for sharing those. You can come in, and then we can also hear from Lilian and Talant. You have something? Okay, go ahead. Yeah.

Carlos Rey Moreno: On the first question, I hope someone in the room can share with you some of the links that are on the chat, because I believe they provide some of those guidelines. I haven’t changed anything about my setup; can you hear me? I can hear you, Carlos, you’re loud and clear. Yeah, we can also hear you. Sorry, we still can’t hear you, just give me one minute, one second.

Risper Arose: Can you speak now, Carlos? Let’s hear.

Carlos Rey Moreno: Hello, hello. No, no, we can hear Carlos. I think, Risper, it’s on your side; we can hear. Thanks, Dr. Emma, it might be something in the room, I mean with the audio setup of the room. Yes, it’s on our side, we’re trying to figure it out. So you can hear me somehow, or you can hear me through the... I’m confused now. Okay, we can’t hear Carlos in the room, so we are sorting that out

Risper Arose: with the technicians but meanwhile we can hear from Lillian and Talant and then we’ll get back to you Carlos. Well I think the the question about the sustainability is the big question

Lilian Chamorro: No, because I think we in Colombia have a methodology where we try to have the community appropriate the network from the very beginning of the process. Then we have a series of steps to create a group that sustains the operation, and also the users of the community network, when there is a problem, can fix the network. I think that is one of the things that makes the network sustainable over time: to have a group of people in the community that leads the operation. I think that is so important, to try to find those persons who like the technology but also like to serve the community, and to get them engaged with the community network and sustaining its operation. And then the other part could be to find a financial method to sustain the expenses of the community network, the way to sustain the connectivity, but to be creative in that. Sometimes people organize festivals or things like that, which makes it easier for them to cover the expenses, or in other communities they just put in a contribution, an equal contribution, to sustain the expenses of the community. So it’s different in each community. I don’t know. Okay, I think it’s okay for me.

Risper Arose: Thank you so much. Talant, briefly. Thank you. In terms of building community

Talant: networks, I think I could point out two organizations that are very strong at it. One is the Internet Society, and we have colleagues here, and I think there are step-by-step guides on building community networks. The other one is the Association for Progressive Communications, APC, and Carlos, I think, could probably share some links, so very good resources. And in our case, in Kyrgyzstan, engaging the local community has been the biggest factor; it has to be a bottom-up interest and approach. And in terms of the fund, in your question, this fund is available for organizations around the world, in the Global South. But of course, the highest priority will be the areas where the gender divide is the biggest challenge. Yes, and finally, I would like to say

Lilian Chamorro: that it’s important to exchange experiences between communities. For us, it has been key to have encounters where people from different communities can learn about the experience of other communities and exchange their problems, because they say, OK, this makes me see that it’s not just my problem, that maybe many of us have the same problem, and many of us can find solutions for some of the problems. So the exchange of experiences, I think, is so important.

Risper Arose: Thank you so much, Lilian. I don’t know if we have one minute for Carlos. Carlos, you can come in now. Hello, hello, hello. Can you hear me? No? Can you hear me in the room? If you cannot hear me in the room... I can hear you online. Yeah, I guess it’s fine. OK, bye bye. Thank you for everything. Risper, I just wanted to draw attention to a huge conference in June and July. Unfortunately, Carlos, we can’t hear you for now. I don’t think we can hear them either, Jane. Yeah, now it seems we can’t hear them, and they can’t hear us online. Hello, IGF, can you hear us? Hello, hello. Can you hear our private conversation here? Yeah, we need to have a separate workshop. Exactly. You know what they’re doing? They’re using different frequencies in each of the rooms. Talk about tricky. They did this in Baku in 2012, and it’s good. They have an open ceiling, that’s why it’s hard to... Let me try to see if somebody... But it looks like the session has ended anyway. Oh, has it? Oh, yeah, you’re right. I mean, it was due to finish two minutes ago, so... Let me try to call somebody who’s in the room to just advise us what’s happening. Sure, thank you. Dr. Emma? Not speaking; let me try another one. For some reason, I was seeing the person in the room, but somehow the call is not connecting. I think we can just chat and say we’re dropping off. Yeah, maybe we can write it in the chat. I think that’s a great idea. All right. Well, it was great to hear your voices. Likewise. I owe you an email, Jane, so I will be writing to you soon. Same here. Great to hear your voice, Dr. Emma. Yeah, yeah. Great to hear you, Jane. Yeah, we’ll connect again. Definitely. Okay, good to hear you. Bye-bye, Carlos. Bye-bye, Dr. Emma. Thank you so much.


Lilian Chamorro

Speech speed

115 words per minute

Speech length

972 words

Speech time

503 seconds

High costs and low returns in rural areas

Explanation

Rural communities present challenges for traditional telecom operators due to low population density and income levels. The cost of deploying and maintaining infrastructure in these areas is high, while the potential revenue is low.

Evidence

Example of Colombia where geographic and security challenges make it difficult to access some rural areas

Major Discussion Point

Challenges in achieving universal connectivity

Agreed with

Carlos Rey Moreno

Risper Arose

Agreed on

Traditional approaches have failed to achieve universal connectivity

Flexibility to adapt to local conditions and needs

Explanation

Community networks can be tailored to the specific geographic, economic, and social conditions of each area. This allows for more effective and sustainable connectivity solutions.

Evidence

Experience in Colombia where each community network is unique and adapted to local circumstances

Major Discussion Point

Benefits of community-centered connectivity solutions

Agreed with

Carlos Rey Moreno

Risper Arose

Speaker

Agreed on

Community networks offer unique benefits and flexibility


Carlos Rey Moreno

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Failure of traditional approaches and business models

Explanation

Traditional telecom operators and their profit-driven models have been unable to close the digital divide in remote and rural areas. This persistent gap demonstrates the need for alternative approaches to connectivity.

Evidence

Reference to the WSIS Plus 20 review and the continued inability to meet universal service aspirations after 20 years

Major Discussion Point

Challenges in achieving universal connectivity

Agreed with

Lilian Chamorro

Risper Arose

Agreed on

Traditional approaches have failed to achieve universal connectivity

Differed with

Emma Otieno

Differed on

Role of traditional telecom operators

Lower costs through community involvement

Explanation

Community-centered connectivity initiatives can operate at lower costs by leveraging local resources and community participation. This makes them more sustainable in areas where traditional operators struggle to be profitable.

Evidence

Mention of APC’s financial assessment tool showing community networks are considerably cheaper

Major Discussion Point

Benefits of community-centered connectivity solutions

Agreed with

Lilian Chamorro

Risper Arose

Speaker

Agreed on

Community networks offer unique benefits and flexibility

Blended finance approaches combining public and private funding

Explanation

Innovative financing mechanisms that combine different sources of funding are needed to support community networks. This includes blending public, private, and development finance to create sustainable funding models.

Evidence

Reference to recommendations from the Broadband Commission and development banks

Major Discussion Point

Innovative financing mechanisms

International funds to support smaller-scale projects

Explanation

There is a need for international funding mechanisms specifically designed to support smaller-scale connectivity projects. These funds should be able to provide smaller amounts of financing with more flexible terms.

Evidence

Proposal for an international fund as suggested by the UN Broadband Commission

Major Discussion Point

Innovative financing mechanisms


Risper Arose

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Lack of sustainable funding for community networks

Explanation

Community-centered connectivity providers face funding constraints that hinder their sustainability and scalability. Traditional financing models are not well-suited to these smaller, locally-focused initiatives.

Major Discussion Point

Challenges in achieving universal connectivity

Agreed with

Lilian Chamorro

Carlos Rey Moreno

Agreed on

Traditional approaches have failed to achieve universal connectivity

Empowerment of local communities

Explanation

Community networks not only provide connectivity but also foster social and economic development. They empower local communities by giving them control over their communication infrastructure.

Major Discussion Point

Benefits of community-centered connectivity solutions

Agreed with

Lilian Chamorro

Carlos Rey Moreno

Speaker

Agreed on

Community networks offer unique benefits and flexibility


Speaker

Speech speed

155 words per minute

Speech length

1154 words

Speech time

445 seconds

Energy and infrastructure limitations in remote areas

Explanation

Remote areas often lack basic infrastructure like electricity, which is crucial for telecommunications. This creates additional challenges and costs for providing connectivity in these regions.

Evidence

Example of a remote village in Kyrgyzstan that lacked electricity and roads before getting internet connectivity

Major Discussion Point

Challenges in achieving universal connectivity

Ability to attract further infrastructure investments

Explanation

Community networks can serve as catalysts for attracting additional infrastructure investments to remote areas. By demonstrating demand and impact, they can encourage government and private sector investment in other essential services.

Evidence

Story of how a small community network in Kyrgyzstan led to subsequent investments in electricity, roads, and mobile coverage

Major Discussion Point

Benefits of community-centered connectivity solutions

Agreed with

Lilian Chamorro

Carlos Rey Moreno

Risper Arose

Agreed on

Community networks offer unique benefits and flexibility

Engagement of local communities in network development

Explanation

Successful community networks require active engagement and interest from local communities. A bottom-up approach ensures that the network meets local needs and has community support for long-term sustainability.

Evidence

Experience from Kyrgyzstan where community engagement was cited as the biggest factor in successful network development

Major Discussion Point

Multi-stakeholder collaboration

Gender-focused funding initiatives

Explanation

There are specific funding initiatives aimed at addressing the gender digital divide. These programs provide financial support, technical assistance, and policy guidance to organizations working on improving women’s access to digital technologies.

Evidence

Mention of the Women in Digital Economy Fund, which started as a $50 million initiative and is growing

Major Discussion Point

Innovative financing mechanisms


Emma Otieno

Speech speed

157 words per minute

Speech length

2867 words

Speech time

1092 seconds

Creating flexible licensing frameworks for community networks

Explanation

Regulators need to develop licensing frameworks that accommodate community networks. This involves reviewing market structures and creating specific categories for community-centered connectivity providers.

Evidence

Example of Kenya adopting a community network license framework in 2020

Major Discussion Point

Regulatory and policy changes needed

Differed with

Carlos Rey Moreno

Differed on

Role of traditional telecom operators

Reforming universal service funds to support smaller providers

Explanation

Universal Service Funds need to be adapted to support community networks and other small-scale connectivity providers. This may involve moving beyond traditional subsidy models to include grants, loans, and other financing mechanisms.

Evidence

Discussion of Kenya’s efforts to update their Universal Service Fund strategy to prioritize community networks

Major Discussion Point

Regulatory and policy changes needed

Developing guidelines and capacity building for community networks

Explanation

Regulators should provide clear guidelines and support capacity building for community network operators. This includes creating handbooks, offering training, and providing guidance on various aspects of network deployment and management.

Evidence

Mention of collaborative initiatives in Kenya involving the regulator, ministry, and organizations like APC to build capacity for community networks

Major Discussion Point

Regulatory and policy changes needed

Partnerships between regulators, communities and support organizations

Explanation

Effective support for community networks requires collaboration between regulators, local communities, and supporting organizations. Regulators can play a role in facilitating these partnerships and advocating for community networks.

Evidence

Example of the Kenyan regulator reaching out to organizations like APC Locknet to support community networks

Major Discussion Point

Multi-stakeholder collaboration

Collaboration with researchers to gather data on impact

Explanation

Regulators need updated data to inform their decisions on community networks. Collaborating with research organizations can provide valuable insights on the impact and effectiveness of community-centered connectivity initiatives.

Major Discussion Point

Multi-stakeholder collaboration


Jane Roberts Coffin

Speech speed

0 words per minute

Speech length

0 words

Speech time

1 second

Reimagining risk assessment for local connectivity projects

Explanation

Traditional risk assessment models need to be adapted for community-based connectivity projects. This involves considering different factors and developing new risk tolerance frameworks that account for social impact and community involvement.

Major Discussion Point

Regulatory and policy changes needed

Agreements

Agreement Points

Traditional approaches have failed to achieve universal connectivity

Lilian Chamorro

Carlos Rey Moreno

Risper Arose

High costs and low returns in rural areas

Failure of traditional approaches and business models

Lack of sustainable funding for community networks

The speakers agree that traditional telecom operators and their profit-driven models have been unable to close the digital divide in remote and rural areas due to high costs, low returns, and lack of sustainable funding models.

Community networks offer unique benefits and flexibility

Lilian Chamorro

Carlos Rey Moreno

Risper Arose

Speaker

Flexibility to adapt to local conditions and needs

Lower costs through community involvement

Empowerment of local communities

Ability to attract further infrastructure investments

The speakers concur that community-centered connectivity solutions provide unique benefits such as flexibility, lower costs, community empowerment, and the ability to catalyze further infrastructure investments.

Similar Viewpoints

Both speakers advocate for innovative financing mechanisms that combine different sources of funding, including reforming universal service funds to better support community networks and smaller providers.

Carlos Rey Moreno

Emma Otieno

Blended finance approaches combining public and private funding

Reforming universal service funds to support smaller providers

Both speakers emphasize the need for regulatory changes to accommodate community networks, including flexible licensing frameworks and adapted risk assessment models.

Emma Otieno

Jane Roberts Coffin

Creating flexible licensing frameworks for community networks

Reimagining risk assessment for local connectivity projects

Unexpected Consensus

Multi-stakeholder collaboration for community networks

Emma Otieno

Speaker

Partnerships between regulators, communities and support organizations

Engagement of local communities in network development

There was an unexpected consensus on the importance of multi-stakeholder collaboration, with both a regulator perspective (Emma Otieno) and a community network implementer perspective (Speaker) emphasizing the need for partnerships and community engagement.

Overall Assessment

Summary

The speakers generally agreed on the limitations of traditional approaches to connectivity, the unique benefits of community networks, the need for innovative financing mechanisms, and the importance of regulatory changes to support these initiatives.

Consensus level

There was a high level of consensus among the speakers, which suggests a growing recognition of the potential of community-centered connectivity solutions to address the persistent digital divide. This consensus implies that there may be increasing support for policy and regulatory changes to enable these alternative models, as well as for developing new financing mechanisms tailored to community networks.

Differences

Different Viewpoints

Role of traditional telecom operators

Carlos Rey Moreno

Emma Otieno

Failure of traditional approaches and business models

Creating flexible licensing frameworks for community networks

Carlos Rey Moreno argues that traditional telecom operators have failed to close the digital divide, while Emma Otieno suggests creating flexible frameworks that could potentially include traditional operators alongside community networks.

Unexpected Differences

Overall Assessment

Summary

The main areas of disagreement revolve around the role of traditional telecom operators and the specific mechanisms for financing community networks.

Difference level

The level of disagreement among speakers is relatively low. Most speakers agree on the importance of community networks and the need for innovative financing, but differ slightly in their proposed approaches. This suggests a general consensus on the topic, with variations in implementation strategies.

Partial Agreements

All speakers agree on the need for innovative financing mechanisms, but propose different approaches: Carlos suggests international funds, Emma focuses on reforming universal service funds, and Jane emphasizes reimagining risk assessment.

Carlos Rey Moreno

Emma Otieno

Jane Roberts Coffin

Blended finance approaches combining public and private funding

Reforming universal service funds to support smaller providers

Reimagining risk assessment for local connectivity projects

Similar Viewpoints

Both speakers advocate for innovative financing mechanisms that combine different sources of funding, including reforming universal service funds to better support community networks and smaller providers.

Carlos Rey Moreno

Emma Otieno

Blended finance approaches combining public and private funding

Reforming universal service funds to support smaller providers

Both speakers emphasize the need for regulatory changes to accommodate community networks, including flexible licensing frameworks and adapted risk assessment models.

Emma Otieno

Jane Roberts Coffin

Creating flexible licensing frameworks for community networks

Reimagining risk assessment for local connectivity projects

Takeaways

Key Takeaways

Traditional approaches to connectivity have failed to achieve universal access goals, especially in rural and remote areas

Community-centered connectivity solutions offer a flexible, cost-effective alternative to bridge the digital divide

Regulatory and policy changes are needed to enable and support community networks

Innovative and blended financing mechanisms are required to fund smaller-scale connectivity projects

Multi-stakeholder collaboration is crucial for developing sustainable community connectivity initiatives

Resolutions and Action Items

Regulators should re-examine licensing frameworks to accommodate community networks

Universal service funds should be reformed to support smaller, local connectivity providers

Develop guidelines and capacity building programs for community network operators

Explore blended finance approaches combining public, private and development funding

Increase data collection and research on the impact of community networks

Unresolved Issues

Specific steps for building sustainable community networks in different contexts

How to effectively assess and mitigate risks for community connectivity projects

Mechanisms for knowledge sharing between community networks across regions

Ways to integrate gender considerations into community network financing and development

Suggested Compromises

Allocate a small percentage of universal service funds to community networks while continuing support for traditional operators

Develop tiered licensing systems with reduced fees and requirements for small-scale community providers

Create blended financing models that combine commercial investment with development funding and community resources

Thought Provoking Comments

Despite significant public and private investments, traditional approaches to telecommunication infrastructure, while impactful in certain respects, have failed to achieve universal access goals, even for basic voice connectivity for over two decades.

speaker

Risper Arose

reason

This comment sets up the key problem the discussion aims to address and challenges the effectiveness of traditional approaches.

impact

It framed the entire discussion around the need for innovative solutions and alternative financing models.

We need to improve the balance between profit maximization and the goal of reaching universal and meaningful access, right? And the time has come to review where those investments are going, right? And how effective they are when they are targeting universal and meaningful connectivity.

speaker

Carlos Rey Moreno

reason

This comment directly challenges the profit-driven model of traditional telecom investments and calls for a paradigm shift.

impact

It sparked discussion about alternative models focused on social impact rather than just financial returns.

By being community-centered instead of profit-centered, as I was saying, they are able to use all their financial mechanisms to reduce their costs and center on the communication needs of the community rather than the profit that their shareholders are seeking.

speaker

Carlos Rey Moreno

reason

This insight highlights a key advantage of community-centered networks over traditional profit-driven models.

impact

It led to further discussion about the unique benefits and sustainability models of community networks.

While all of this was happening, a year before that, the village was connected to the internet as a community network with a small grant from Internet Society Foundation. And the very first thing that the villagers did was, of course, first to connect to their relatives around the world, saying that now we have a connectivity. Please call us. We can call you. And the second, they started making videos of the village, saying that, Mr. President, Mr. Prime Minister, we don’t have a road. We don’t have electricity. We don’t have mobile connectivity. Please help.

speaker

Talant Sultanov

reason

This real-world example powerfully illustrates how a small investment in community connectivity can catalyze broader development.

impact

It provided concrete evidence of the potential impact of community networks, shifting the discussion from theoretical to practical.

So if you’re, and that means looking at, as Carlos has said, different ways of assessing risk, different ways of bringing blended finance, meaning different pots of money together at different times with different funders and different instruments, financial instruments.

speaker

Jane Roberts Coffin

reason

This comment introduces the concept of blended finance and new risk assessment models, offering a practical approach to financing community networks.

impact

It opened up discussion on specific financial mechanisms and risk models that could support community-centered connectivity solutions.

For instance, there’s been a lot of deliberate handholding that has come from the Kenyan regulatory setup that the regulator is reaching out and really nurturing the community networks and the constituents.

speaker

Dr. Emma Otieno

reason

This insight provides a concrete example of how regulators can actively support community networks.

impact

It shifted the discussion towards the role of regulators in enabling community networks, leading to further exploration of policy and regulatory approaches.

Overall Assessment

These key comments shaped the discussion by challenging traditional telecom investment models, highlighting the unique advantages of community-centered networks, providing real-world examples of their impact, introducing innovative financing concepts, and exploring the role of regulators in enabling these networks. The discussion evolved from identifying the problem to exploring concrete solutions and policy approaches, with a strong focus on the social impact and sustainability of community-centered connectivity initiatives.

Follow-up Questions

How can we reassess and adapt traditional risk assessment models for community-based connectivity initiatives?

speaker

Jane Roberts Coffin

explanation

Traditional risk assessment models may not be suitable for community-based initiatives, and new approaches are needed to properly evaluate and support these projects.

What are the best practices for creating resources and guidelines on implementing community networks?

speaker

Dr. Emma Otieno

explanation

Developing clear, granular guidelines can help community networks navigate the process of setting up and operating, especially in rural or underserved areas.

How can energy infrastructure be integrated into Internet governance discussions as a critical enabler?

speaker

Online participant (via chat)

explanation

Energy infrastructure is crucial for connectivity initiatives, particularly in remote areas, and needs to be considered in Internet governance frameworks.

What are the step-by-step processes to build a sustainable community network?

speaker

Audience member (Kossi Amessinou from Benin)

explanation

Detailed guidance on establishing and maintaining community networks is needed, especially for those new to the concept.

How can partnerships be formed to directly transfer knowledge about community networks between organizations and countries?

speaker

Audience member (Kossi Amessinou from Benin)

explanation

Facilitating knowledge transfer between experienced organizations and those starting out could accelerate the development of community networks in new areas.

What are the eligibility criteria and geographic scope for the Women in Digital Economy Fund?

speaker

Audience member (Kossi Amessinou from Benin)

explanation

Clarification on the fund’s availability and criteria is important for organizations seeking support for gender-focused digital initiatives.

How can Universal Service Funds be adapted to better support community-centered networks?

speaker

Dr. Emma Otieno

explanation

Exploring alternative models for Universal Service Funds, such as grants or loans, could provide more effective support for community networks.

What policy and regulatory changes are needed to create an enabling environment for community networks?

speaker

Carlos Rey Moreno

explanation

Identifying necessary policy changes can help remove barriers and create supportive frameworks for community networks.

How can we improve data collection and research to inform decision-making around community networks?

speaker

Dr. Emma Otieno

explanation

Better data and research are crucial for regulators and policymakers to make informed decisions about supporting community networks.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #278 Digital Solidarity & Rights-Based Capacity Building

WS #278 Digital Solidarity & Rights-Based Capacity Building

Session at a Glance

Summary

This panel discussion focused on the concept of digital solidarity and its implementation in global digital policy. Jennifer Bachus from the U.S. State Department introduced digital solidarity as a framework for international cooperation on digital issues, emphasizing human rights and multi-stakeholder approaches. Panelists from various sectors discussed the opportunities and challenges of digital solidarity.

Key themes included the importance of multi-stakeholder collaboration, the need to balance digital sovereignty with international cooperation, and the role of the UN and other international forums in promoting digital rights. Panelists highlighted the importance of inclusive infrastructure, data privacy, and cybersecurity in advancing digital solidarity. They also stressed the need for capacity building and support for civil society organizations in developing countries.

The discussion touched on upcoming international processes, including the WSIS+20 review and the future of the Internet Governance Forum (IGF). Panelists emphasized the need to strengthen these mechanisms and ensure they remain inclusive and rights-respecting. The conversation also addressed challenges such as internet shutdowns, surveillance, and the potential misuse of cybercrime legislation.

Participants debated the hosting of international forums like the IGF in countries with problematic human rights records. They also discussed the impact of sanctions on international technical cooperation in cybersecurity. The panel concluded with a call for continued dialogue and collaboration to advance digital solidarity and address emerging challenges in the digital sphere.

Keypoints

Major discussion points:

– The concept of digital solidarity and how it relates to digital sovereignty

– The importance of multi-stakeholder approaches and collaboration in digital governance

– Challenges and opportunities for operationalizing digital solidarity, especially in developing countries

– The role of the IGF, WSIS, and other multilateral processes in advancing digital cooperation

– Balancing cybersecurity needs with human rights and privacy concerns

Overall purpose:

The discussion aimed to explore the concept of digital solidarity, its importance in global digital governance, and how it can be operationalized through multi-stakeholder collaboration and international processes.

Tone:

The overall tone was collaborative and solution-oriented, with panelists offering constructive ideas and acknowledging challenges. There was a sense of urgency about addressing digital divides and governance issues. The tone became slightly more critical during audience questions, but remained respectful and focused on problem-solving.

Speakers

– Jennifer Bachus: U.S. Department of State, Bureau of Cyberspace and Digital Policy (moderator)

– Nashilongo Gervasius: Public interest technology expert, media and communications lecturer at Namibia University of Science and Technology, founding president and board member of the Internet Society Namibia chapter

– Jason Pielemeier: Executive Director of the Global Network Initiative (GNI)

– Robert Opp: Chief Digital Officer at UNDP (United Nations Development Programme)

– Susan Mwape: Founder and Executive Director of Common Cause Zambia

Additional speakers:

– Barbara: From Nepal (audience member)

– Alexander Savnin: From Russian Federation, civil society representative (audience member)

– Hala Rasheed: Public policy and human rights expert representing Alnahda Society (audience member)

Full session report

Digital Solidarity in Global Digital Policy: A Comprehensive Overview

This panel discussion, moderated by Jennifer Bachus from the U.S. State Department, explored the concept of digital solidarity and its implementation in global digital policy. The panel brought together experts from government, civil society, and international organizations, including Jason Pielemeier (participating remotely), Nashilongo Gervasius, Robert Opp, and Susan Mwape.

Defining Digital Solidarity

Jennifer Bachus presented digital solidarity as a concept embraced by the US government, rooted in the International Cyberspace and Digital Policy Strategy launched in May. This approach promotes cooperation while respecting rights, contrasting with digital sovereignty approaches that can potentially undermine economic and security objectives.

Nashilongo Gervasius, a lecturer and member of the Internet Society Namibia chapter, emphasized that digital solidarity should align with both global and regional ambitions, suggesting a nuanced approach to sovereignty concerns.

Multi-stakeholder Approaches and Collaboration

A key theme throughout the discussion was the critical role of multi-stakeholder collaboration in addressing digital challenges. Jason Pielemeier emphasized that multi-stakeholderism is at the core of digital solidarity, allowing diverse actors to come together and be “stronger than the sum of their parts”.

Robert Opp highlighted UNDP’s role in creating spaces for multi-stakeholder dialogues, focusing on digital policies and strategies, use of technology, and capacity building. Susan Mwape stressed the important role that civil society plays in multi-stakeholder engagement and provided examples of how citizens can participate in promoting digital solidarity, such as advocacy campaigns and supporting ethical digital platforms.

Challenges in Implementing Digital Policies

The discussion revealed several challenges in implementing digital policies, particularly in developing countries:

1. Resource constraints in enforcement

2. Potential risks to privacy posed by cybercrime conventions

3. The impact of sanctions on technical cooperation and solidarity

4. The need to consider local context and ongoing reforms in policy dialogues

Internet Governance Forums and Processes

The panel devoted significant attention to the role of the Internet Governance Forum (IGF) and other multilateral processes. There was broad agreement on the need to renew and strengthen the IGF mandate. Robert Opp suggested that both the IGF and WSIS processes need to become more mainstream and integrated with other global issues.

The discussion also revealed tensions surrounding the IGF, particularly regarding host country selection. Jason Pielemeier shared that his organization chose not to attend the current IGF in person due to concerns about the host government’s human rights record, sparking a debate about the benefits and risks of holding such forums in countries with problematic human rights situations.

Balancing Digital Sovereignty and Solidarity

The discussion highlighted the need to balance cybersecurity needs with human rights and privacy concerns. Jason Pielemeier raised concerns about the potential misuse of cybercrime legislation, while audience members emphasized the challenges faced by developing countries in enforcing digital policies.

Infrastructure and Capacity Building

Susan Mwape emphasized that infrastructure and data privacy are key components of digital solidarity. Jennifer Bachus acknowledged the challenges posed by lack of connectivity or digital capacity, linking these issues to the broader goal of achieving sustainable development. Robert Opp shared insights on how the COVID-19 pandemic prompted a shift from techno-optimism to a more holistic, rights-centered approach to digital solutions in development contexts.

Future Directions and Initiatives

Jennifer Bachus highlighted several US government initiatives:

1. A focus on responsible AI development and governance

2. Efforts to address the proliferation of commercial spyware

3. A $3 million initiative to build capacity for international stakeholders to engage in multilateral processes, especially related to AI

Conclusion

The panel concluded with a call for continued dialogue and collaboration to advance digital solidarity. Key outcomes included support for renewing the IGF mandate, agreement to hold a virtual reunion in early 2025, and acknowledgment of the need to address unresolved issues such as balancing digital sovereignty with solidarity, potential misuse of cybercrime conventions, and mitigating the impact of sanctions on technical cooperation.

The discussion underscored the complexity of achieving digital solidarity in a diverse global context, highlighting the need for ongoing dialogue, compromise, and innovative approaches to advance shared goals in digital governance. Jennifer Bachus emphasized the US government’s openness to feedback and criticism, reinforcing the collaborative spirit of the discussion.

Session Transcript

Jennifer Bachus: Okay. Let’s go. Okay. Channel 2. Okay. Good morning. It’s very tough to be in the morning, although I don’t know, Jason, must be the middle of the night for you. So thanks for joining us in the middle of the night. So excited to have everyone here. For those of you who don’t know me, my name is Jennifer Bachus. I am the number two in the State Department’s Cyberspace and Digital Policy Bureau. We proposed this workshop because we believe in the critical importance of digital solidarity, working together to address digital policy in a rights-respecting manner. For this workshop, we’re going to start with introductory remarks and then have three rounds of questions and then we’ll finish with some audience questions. Before we get into the introductory remarks, I’m going to try to do my best to introduce our panelists here. My apologies if I miss the parts of your bio you’re the most proud of; please correct me. First of all, I’m so excited to introduce, and oh, I should also say I’m really terrible with pronouncing names, so again, my apologies. Nashilongo Gervasius, I hope that is correct, who is a public interest technologist with extensive experience in our field, a media and communications lecturer at Namibia University of Science and Technology and the founding president and board member of the Internet Society Namibia chapter. There’s a whole bunch of other stuff here, which hopefully she will talk about as we go through this workshop. Remotely, we have Jason Pielemeier, and again, thank you for joining us remotely, who leads the Global Network Initiative, which is a dynamic multi-stakeholder human rights collaboration which builds on consensus for the advancement of freedom of expression and privacy among technology companies, academics, human rights, and press freedom groups. He previously served as the deputy director and policy director and is now the executive director and had previous experience working at the State Department. 
and so seems to understand sort of those of us that are at the State Department. Very pleased and excited to have Robert Opp, who’s the Chief Digital Officer at UNDP. For those of you who don’t know UNDP, but I really hope you do know UNDP, it’s the global sustainable development organization which works across 170 countries with more than 17,000 staff, which is actually a very close description of the State Department. We were talking about our similarities there. He is leading the agency’s digital transformation, which is an organization-wide effort to harness the power of new technology to improve the lives of those furthest behind. And last, but really not least, is Susan Mwape, who is Founder and Executive Director of Common Cause Zambia, which is an organization that seeks to promote citizens’ participation in various government processes, leveraging technology to enhance public accountability and resource tracking. And really excited to have all of you here today, and having been able to engage with all of you separately in various situations, I know you are gonna be amazing panelists here today. Because I’ve decided to play both the role of moderator and speaker, congratulations to all of you here for getting to hear me do both. I will start with my brief opening remarks, which are mostly to sort of situate us in our vision on digital solidarity and what it means to the United States and how we’re working to advance this concept. You know, digital solidarity is a concept that had been out there a little bit, but which we really fully embraced when we published the US International Cyberspace and Digital Policy Strategy back in May of this past year at RSA, which is a cybersecurity conference, and our Secretary of State launched it. Digital solidarity is the idea that we have a willingness to work together on shared goals, stand together, help partners build capacity, and provide mutual support. 
My colleague Stuart here has excerpts from our strategy. If you want to know more about this, I will try to go through some of it, but we did not bring our printed strategies with us; he has a one-pager for those of you who want to understand it and see the excerpts. Essentially, it’s about framing partnerships and building like-minded coalitions so people around the world can use technologies to achieve a more secure, resilient, inclusive, and prosperous digital future. We have through digital solidarity, of course, a never-ending firm commitment to multi-stakeholder, rights-based, interoperable approaches to Internet governance, digital policy processes, and the design, development, and use of emerging technology. Simply put, digital solidarity is rooted in working together to seize the promise of technology while countering the risks. I think we all recognize the urgency of us promoting digital solidarity. There are just too many stakeholders that cannot fulfill that promise, so we need to really come together to do so. We recognize the lack of connectivity or digital capacity, and I know all of you are working on this, and these things can hinder the ability to fully participate in the digital economy and challenge our collective ability to achieve the sustainable development goals. Having attended one panel and participated in another, there was a lot of talk this week about connecting the unconnected, and I think we’re all coming together around this idea. We also recognize the huge financing gap to achieve the sustainable development goals. The financing gap right now, I think you all know, is estimated at around $4 trillion. It’s a lot of money. Cyber threats from criminals and other bad actors, such as ransomware attacks, are another issue that my office focuses on. It’s another thing we have to continue to push back against, and we are pleased to also host a side event on the Counter Ransomware Initiative. 
We of course see authoritarian governments who continue to increase efforts to undermine the multi-stakeholder, rights-based approach to Internet governance and digital policy processes, including across multilateral fora, recognizing where we are here today, which of course puts at risk the future of an open, interoperable, secure, and reliable Internet. We know that various actors are increasingly misusing technologies, especially emerging technologies, in ways that undermine the development goals of emerging economies and human rights and democracy. But we always like to be a little bit optimistic in our organization, which is why we have the concept of digital solidarity. And we know that many people here today are seized with the urgency of building these international coalitions to build digital and cyber capacities to counter those threats and harness the benefits of technology. And to this end, the United States does and will continue to support a global, multi-stakeholder, rights-respecting approach to Internet governance, digital policy processes, like the WSIS Plus 20 Review, which has been the topic of many conversations this week, and emerging technologies such as AI. We recognize, as I said, governments can’t do it alone. We need a broad array of stakeholders who help us by using their expertise to inform and drive action on these issues and participate meaningfully in various fora, such as this one. Thank you very much for your long travels here, as well as working closely with allies and partners to ensure digital technologies are designed, developed, and used in a responsible and rights-respecting manner. I just wanted to say, and this came up in a discussion we had yesterday, the U.S. Department of State and USAID are working very closely together with leading tech companies, as well as civil society, academia, and partner governments to ensure as many people as possible can benefit from safe, secure, and trustworthy emerging technologies. 
I will also here note work we’re doing on AI, and I do have a handout as well on our AI programs for those of you that are interested. Over the past few years, we’re very proud that our foreign assistance budgets for these issues have more than tripled, still not enough, but in my opinion, every year you just keep trying to do a little bit more and a little bit better. So we’ve already been engaged with over 140 countries around the world and are now poised to dramatically increase our bilateral and multilateral cooperation. We’re going to get into examples later because I really feel like I’ve been speaking for way too long, but I’m going to try to turn to our panelists now. So I will start with our first panelist, Nashilongo. Can you please, why don’t you give us your opening remarks, please? Thanks.

Nashilongo Gervasius: Thank you very much for being here, and I appreciate the putting together of this panel, and thank you, Ms. Bachus, for leading this conversation. I think my very first introduction is really recognizing the importance of partnership, the importance of collaboration, particularly for many of the things that we face in an increasingly digitalized world. From engaging policy matters, whether at a local, regional or global level, to dealing with real issues that face society, and this is where, you know, it places us as civil societies, but also academia, in dealing with issues of skilling, and issues of cybercrime and cybersecurity at local level. So many of these things are increasingly becoming an issue that one cannot deal with by themselves, even just at local level. It is these networks of collaborators, of supporters, of partners who are able to provide the necessary funding to assist us to carry out necessary research that produces the evidence to compel policymaking at local level, but who also help us get into rooms like this, that many of our partners and our collaborators at local level can only dream of being in. I think we might not have this term, solidarity, as broadly as the US government has made it intentional to embrace at local level, at regional level, but I think it is an important concept. It is an important approach to be working together for all of us, I think. Thank you.

Jennifer Bachus: Thanks for that. Next, actually, I’m going to go to Jason. Thanks again for being with us at this ungodly, probably early, late hour for you.

Jason Pielemeier: Thanks, Ms. Bachus. Can you hear me okay? Wonderful. Well, thank you very much. It’s a pleasure to be part of this panel. It is either very early or very late here, and I hope I can manage to stay engaged. I hope you’ll forgive me if I have a few yawns over the course of the panel, but I’m really interested to hear from the other panelists and to be a part of this conversation. Just to quickly introduce myself, I’m Jason Pielemeier. I’m the Executive Director of the Global Network Initiative, GNI, which is the world’s leading multi-stakeholder initiative committed to fostering respect for freedom of expression and privacy in the technology sector. GNI brings together over 100 members, including academics, civil society organizations, investors, and tech companies from around the world to work together. And our members do that by sharing information about challenges to freedom of expression and privacy stemming from overbroad government regulations, policies, and demands, and working to support each other in pushing back on those scenarios. And we do this in four primary ways. First, through policy engagement. So we speak collectively on behalf of our broad membership to illustrate how diverse stakeholders from disparate regions, often critical of one another in other spaces, can nevertheless share common positions on a remarkably broad range of topics, from telecom regulations to AI safety. Second, we foster safe spaces for learning across our membership, through which companies can confidentially share insights into the challenges they face in different jurisdictions. And civil society and others can present research and recommendations to help companies understand risks and make more responsible decisions. Third, we facilitate a unique accountability process for our member companies’ efforts to implement the GNI principles and implementation guidelines, which we refer to collectively as our GNI framework. 
So information is shared about how they implement that framework in a regular manner, and those efforts are independently reviewed and assessed. And these assessments allow companies to share, again confidentially, non-public information about their internal policies, structures, and systems, as well as the kinds of challenges they face in upholding their responsibility to respect free expression and privacy rights in the face of government pressures and demands. And finally, we work to share insights and good practices, as well as recommendations gleaned from. these internal member-facing processes with our outside partners, including governments, multilateral bodies, and other companies that are not yet members. So that’s a little bit about how we work. And at the core of all of that is this concept of multi-stakeholderism, the idea that different actors with different backgrounds and expertise can come together and be stronger than the sum of their parts.

Robert Opp: Okay. Thank you. Well, it’s a pleasure to be here. As Jennifer said, I’m Robert Opp. I come from the United Nations Development Program, and I’m Chief Digital Officer there. And maybe just as a little bit of an overview of our work: we are, as Jennifer said, the UN development arm, present in 170 countries worldwide. I think our digital work has definitely accelerated over the last several years, particularly in the wake of the COVID pandemic, when countries around the world really started to accelerate their own digital transformation and their efforts to build digital infrastructure. And that has meant that we as a United Nations development organization need to look at what that actually means in terms of choices made every day by governments and communities and others when it comes to embracing technology. And I would characterize it this way: prior to the COVID pandemic, large parts of the development community, and I’m talking about the conventional mainstream development community, were rather techno-optimist, looking at digital solutions as, well, I’ll sprinkle an app here and I’ll put a database there and this will result in magical development results. But COVID really was an inflection point for us in the understanding of how we need to move from being very solutions-oriented and somewhat fragmented into being more holistic and strategic in the way we use digital solutions. But very importantly, moving from that moment of techno-optimism into an understanding of the risks and the importance of putting people’s rights at the center of whatever we do in technical or digital solutions. And so as we work with countries around the world, we find that the work that we get requested to support with breaks down into three big areas. The first is digital policies and strategies, where countries are looking to see how they can better support and govern the use of digital solutions.
So that might mean data protection laws, privacy laws and strategies, et cetera, and misinformation, information integrity, that sort of thing. The second area of requests is around the use of technology. So this is where technologies like digital public infrastructure, digital identity, digital payments platforms, and things like that come in. Countries request support in those. And then the third area is capacity building, where it’s building the competence and capacity to be able to leverage those systems and those platforms. In all of that, our starting point is the individual person and that person’s rights. And in our digital strategy, we have a set of seven guiding principles. The very first one is that we put human rights at the center of what we do in digital. And so when we talk about digital solidarity, when we talk about digital rights, this is absolutely fundamental for us. And I would just finish by saying, when we work with countries, we often see that they’re in a big rush to put in place digital platforms and solutions, because we know how urgent these things are. We know how quickly technology is evolving. We feel that we need to get ahead with the right kind of advice when it comes to best practices, sharing lessons learned, cooperation mechanisms between countries, or regional cooperation mechanisms. Countries tend to make the choices that are in favor of inclusion and rights, but they need the right frameworks, the right learnings, the right practices shared among them. And so that’s one of the fundamental elements for us that we look at as a UN organization.

Susan Mwape: All right, so thank you very much for having me on this panel. As a way of introduction, Common Cause Zambia is an organization that was established in 2013, and our role is to promote citizens’ participation in governance processes, and so we do this through providing empowerment programs, building capacity, and also just trying to provide tools that citizens can use to hold their leaders accountable. We have a wide range of programs that we undertake, starting from research in terms of our technology program, research around policy, research around the state of digital rights in the country, and also internet freedom. We also do capacity building. We undertake advocacy work as an organization, but also collectively at national level and also international level. So over the years, we’ve built different levels of stakeholder engagement. We have what we call a cyber network, which is a membership of organizations that work at community level, so we’re looking at community organizations, community-based media, community radio stations, and traditional leaders as well, because they play a very significant role. We do this to try and just bridge the digital divide that exists in the face of the reality of where technology is taking us. So we don’t do this on our own. We collaborate with the government in doing this, our IT regulator. We carry law enforcement agencies with us into the communities, and we found that it is very effective. And as much as we talk about the digital divide, there are serious issues that happen in these communities, and we’ve noticed that they have been able to hold, for instance, law enforcement officers accountable on issues of fraud and things like that, that involve things like mobile money, which is very popular now. We also engage at national level. 
We co-established a digital rights network, and it’s a multi-stakeholder platform that brings together different organizations to the table where we assess issues of policy and various issues, and we fight collective causes. I will talk a bit more about some of the work that we have done, but basically that is what Common Cause Zambia does. Thank you.

Jennifer Bachus: Thanks to all of you. So, we’re going to go to an interactive discussion now in what’s a little unconventional: I am posing myself questions. We were jokingly saying that I might just jump seats to signal when I’m the moderator and when I’m not, but we’ve decided that’s maybe a little too interactive for this hour of the morning. And the question I got actually came partially from the panelists. The question is essentially: what do I say to those who advocate for digital sovereignty or data sovereignty instead of digital solidarity, and are these concepts mutually exclusive? I want to start by underscoring the commitment of the United States to the positive economic benefits that come from preserving openness while protecting privacy, promoting safety, and mitigating harms. And I thought, Robert, your comments on this question about guardrails are really incredibly important, and we believe that you can have both digital solidarity and guardrails. Those two things are not mutually exclusive. But what we see when we hear talk of digital sovereignty is oftentimes really an idea of protectionism: the idea of blocking access to markets, unduly preventing cross-border data flows, preferencing domestic manufacturers and service providers. And we see this as potentially undermining what is critically important when it comes to interoperability, security, and market access. So we see that the rise of this digital sovereignty or data localization narrative, including, we’ll acknowledge, from very close partners and allies, has the potential to undermine key economic and cybersecurity objectives, and essentially the possibility to limit the potential of economic, social, and individual exchanges that the growing digital economy and cyberspace make possible.
So, you know, over the last two and a half years, there’s been a lot of discussion about Ukraine, and even though we’re not currently sitting in Europe, I’m going to start by talking about Ukraine in terms of the value of digital solidarity. Just before Russia’s full-scale invasion of Ukraine, the Ukrainian government changed its laws to allow government data to be stored on the cloud. They had refused to do so before, so to be clear, we had lots of engagements with them, but they felt like their data was going to be more secure if they could look at the servers. It’s a very common thing: if I can see it, I feel like it’s more secure. But in reality, this in many cases undermines cybersecurity. And because of this very last-minute change, U.S. cloud service providers were able to safely and securely store Ukrainian data abroad. This protected the data from Russia’s brutal attacks. It allowed the government to continue serving its people, regardless of where they ended up in Europe and around the world. When governments erect barriers to the free flow of data, for example, or fail to take advantage of global cloud services for the sake of protectionism, it demonstrably increases costs, slows innovation, and weakens cybersecurity. We need to continue to make sure that data can flow seamlessly and securely across borders, because this is critically important as the backbone of our digital economy. We recognize the very, very real concerns that many countries have over affordable and sustainable digital investments, the lack of which, ultimately, we know can undermine their sovereignty. But the solutions proposed, such as data localization, network usage fees, and other market access barriers, ultimately, as I said, can undermine economic and security objectives. These false solutions essentially contribute to this idea that you have increased control, but in reality, they often cause real damage.
There are other, better ways to address these concerns. We in the United States have embraced the Global Cross-Border Privacy Rules, CBPR. It’s a mouthful. It’s a system of certifications that can ensure privacy protections travel with the data while at the same time facilitating cross-border data flows, crucial to supporting digital trade, international transactions, and other critical business needs. The Global CBPR Forum demonstrates that countries can come together to protect privacy and democratic principles while fostering economic openness, interoperability, and integration. I think it’s also worth noting that oftentimes, some of the narratives around digital sovereignty are really about localizing data so that governments can have better access to that data, ultimately to undermine the privacy of their citizens. We have been engaging with UNDP on digital public infrastructure, and we really strongly support the UNDP-led Universal DPI Safeguards Framework, which is designed to promote the protection of members of vulnerable groups online, including children, to protect privacy and human rights, multi-stakeholderism, and fair competition, and to guard against cybersecurity vulnerabilities, because, again, it’s great to talk about DPI, but if you don’t have those safeguards, DPI creates potential vectors for inappropriate access to information. So we think that there is a middle ground where you can have protection and safeguards but don’t need to sacrifice digital solidarity: the idea that we’re going to work together, and that we need to work together, to advance digital safeguards and tools that mitigate the potential harms and ensure technology is developed, used, and governed consistently with human rights and democratic values.
And we’ve developed so many tools to do this in the past year, which includes, of course, as always, promoting multi-stakeholder, rights-respecting approaches, including related to AI and information integrity. Some of those tools will be in a capacity-building toolkit we will share in the workshop report that results from the session. And with that, every once in a while, so for those of you in the room, I did learn this yesterday: you’ve got to point your little thing at one of the lights. If you didn’t know that, that’s my little, yeah, that one too. So every once in a while, I lose my mic. So anyway, now you get the question, which is: as a civil society leader, how do you think about the opportunities and challenges of stakeholder collaboration? How can we foster more effective collaboration across different stakeholders and groups? And please. This one, right?

Nashilongo Gervasius: So thank you again, Ms. Bachus, for this new level of engagement. So in terms of collaboration, I think, once again, there are so many opportunities for organizations like ours, which work closely with grassroots organizations. But in engaging across all levels, at the national level, from policy to implementation, we find a lot of challenges in making meaningful contributions or even finding meaningful ways of enforcing policies across sectors. But the opportunities are key: many of them allow us to engage on core issues at different levels and find common ground. There are also key challenges, as I’ve mentioned before, in creating meaningful collaboration and even participation. Access to stakeholder platforms is one; resource accessibility and limitations are another. In many of the engagements that we have, we find ourselves siloed from the technology companies, the private sector, in platforms like this, for instance. I don’t know, maybe by observation, I haven’t seen many people from your usual tech companies, your Meta, your Google, in these platforms, really getting together and answering those questions together. But I also wanted to engage on the issue of digital sovereignty and whether this is mutually exclusive or not. I mean, we recognize, for instance, from the African Union, and this is maybe something that’s a bit closer to home for African civil societies, that the AU has a digital transformation strategy, 2020 to 2030. It prioritizes digitally enabled socioeconomic development to stimulate job creation, reduce poverty, and reduce inequalities, just amongst others, and also deals with issues of delivery of goods and services, and that’s how the AfCFTA, for instance, becomes a bit more relevant. And the concept of sovereignty could be controversial; it needs balance in terms of engagement with issues of access and issues of control, right?
With that said, that doesn’t mean sovereignty cannot operate alongside or be aligned with solidarity; you can have one and still have the other, because the common goals are broader issues that affect everybody globally. Issues of privacy, that’s a human rights issue, and it cuts across whatever market we have. And so, yes? This is much better, right? Okay, there is a real concern over affordable and sustainable digital investments; the lack of these can undermine sovereignty altogether. And I think for many of the countries in the African region, and I don’t want to speak for everyone, the issue is investment in infrastructures, which we are addressing with the UNDP and DPIs. Control, whether of the infrastructure itself or of the data held on that infrastructure, can pose serious challenges that filter into cybercrime and cyber warfare, which can tend to go beyond the control of any nation, but it also provides more opportunities, right? Opportunities such as data. Sorry, it’s the microphone, yeah. Again, on data, we’re looking at how we find value in that kind of data at the local level, where we can entrust researchers and innovators to use it, particularly if it is, you know, accrued through public funding, to innovate and find solutions for local challenges, for instance. And many of these initiatives still require investment, require support, and that can also be enhanced through this digital solidarity concept that we are talking about, yes.

Jennifer Bachus: I’m gonna turn to Jason online. So here’s your question. As the leader of a multi-stakeholder organization, what does the multi-stakeholder approach offer for digital solidarity, and where can multi-stakeholder approaches be strengthened? I’d appreciate your thoughts on that.

Jason Pielemeier: Yeah, thanks. So the cyberspace and digital policy strategy that the State Department has put out talks about digital solidarity as recognizing that all who use digital technologies in a rights-respecting manner are more secure, resilient, self-determining, and prosperous when they work together to shape the international environment and innovate at the technological edge. And it goes on to note that the State Department can’t accomplish its objectives in this strategy without strong partnerships with the private sector, civil society, and technical communities. So I think the strategy acknowledges both the core importance of international law and human rights as a framework for bringing disparate actors together, both different countries, multilaterally and bilaterally, as well as different stakeholders in different kinds of multistakeholder spaces and processes. And really, the fuel that allows digital solidarity to work, and that allows this kind of collaboration across countries and across stakeholders, is trust. Trust is really the critical ingredient. And trust is something that, as the famous adage goes, is difficult to build and very easy to lose, right? So the US government over the years, going back to my time at the State Department, has pretty consistently tried to articulate an approach to international tech policy that centers human rights and brings in diverse stakeholders. And we’ve done a lot through financial support, done a lot through our multilateral engagement, including things like the Freedom Online Coalition, but there’ve been many bumps in the road as well. It’s not a straight line, it’s not a linear process. And I think that in some ways the barriers are only getting higher and the challenges more intractable, as you were alluding to in your remarks, Ms. Bachus.
It’s really great to see the State Department doubling down and recommitting to this kind of approach. And I think there’s a real thirst and a desire among many other governments and states, as well as other non-state actors, for this kind of approach rooted in human rights and this concept of solidarity. I want to just note a process that I was privileged to take part in earlier this year, the NetMundial Plus 10 process. So NetMundial was a conference that was organized a decade ago by the government of Brazil and its Internet Steering Committee to bring together a really diverse range of stakeholders in Sao Paulo to talk about the importance of multi-stakeholderism. And this was in the wake of Edward Snowden and his revelations, at a moment of pretty low trust. And notwithstanding that atmosphere and context, NetMundial, I think, was seen by many as a very successful moment where the multi-stakeholder community really was able to assert itself and put forward some important principles for how Internet governance can be most effectively carried out. Fast forward to 10 years later: as the multilateral and international community was preparing for the WSIS Plus 20 review and the Global Digital Compact was being negotiated, many of the same actors who organized NetMundial came back together and organized a reprise in Sao Paulo in April. And I was a civil society representative on the high-level expert group for the NetMundial Plus 10 conference. I wasn’t a part of the original process; I was actually in the State Department 10 years ago when the original NetMundial took place. So it was really interesting this time to be on the civil society side and see how that process worked, the very diverse views across not only civil society but other stakeholder groups that nevertheless coalesced around, I think, a very strong outcome document.
At the core of this NetMundial Plus 10 outcome document, which builds on the original NetMundial document, are a set of guidelines referred to as the Sao Paulo Multi-Stakeholder Guidelines. And I’ll just very quickly talk through some of the key guidelines, because I think they really underscore how trust can be built and how multi-stakeholder processes should work. The first of them reminds us all that we need to be mindful of power asymmetries between diverse stakeholders. In particular, I think we need to pay attention to the resource constraints that were mentioned earlier, which can keep people from being able to effectively participate. Civil society, in particular, I think, often feels under-resourced compared to governments and compared to the private sector. The second guideline focuses on informed and deliberative discussion, so really making sure that there is equal access to information and that there’s open space for deliberation. The third focuses on treating stakeholders fairly and equitably. The fourth centers the rule of law and respect for international human rights principles. The fifth talks about the value of linguistic diversity and the need to respect and enable that. The sixth focuses on the shared responsibility to uphold accountability and transparency across these kinds of processes. We often see actors nodding towards concepts of accountability and transparency, but really meaningfully building them into these processes is critical, especially for those moments when trust is tested. I won’t go through the rest, just because there are 13 of them in total, but you get a sense from the ones I’ve talked about of the level of granularity and detail in these principles. They’re really born out of extensive experience that stakeholders in the technical community, academia, civil society, and the private sector, whether through standard-setting bodies or governance bodies, have learned and crystallized.
And I really wanna recommend those principles as sort of a playbook whereby different actors, whether at the national level, regionally, or internationally can use these principles to help build multi-stakeholderism and foster trust so that digital solidarity can prosper and sort of be the force that helps us push back against these more sovereignty-focused or kind of self-interested or national interest-focused approaches.

Jennifer Bachus: Thanks. So Robert, from your point of view, what are the advantages and opportunities or challenges of using the UN in multilateral fora?

Robert Opp: Yeah, well, Jason has done a beautiful job of describing the elements of multi-stakeholderism that are so important for what we’re talking about here, ensuring a rights-based approach to digital solutions, et cetera. And I think we have opportunities and we have challenges, as your question puts out there. On the one hand, we have a multilateral system that has just delivered a Global Digital Compact that puts rights at the center and has quite strong language about the need for cooperation. It even has language in there about multi-stakeholderism, but it is a multilateral, or an intergovernmental, agreement. And there were some struggles over how best to incorporate multi-stakeholderism into that process, and some dissatisfaction on the part of some groups that it wasn’t a more multi-stakeholder-oriented process and agreement. But it is the intergovernmental system that we have, the truly global one, and it has sent a strong signal in certain directions. I think it’s also fair to say, though, that the multilateral system delivers, in the form of the WSIS Forum and the IGF in particular, a very large and inclusive multi-stakeholder platform that has been going for 20 years. And so I do think this shows the UN can create some space, and the multilateral system can create space, for multi-stakeholder dialogues. And I do think it’s important that we maintain those spaces as we go forward. It’s probably more important than ever that we continue this tradition that started 20 years ago of ensuring that the voices of individuals, civil society, the private sector, and others are able to come together and talk about our digital future. That should continue. And I think the UN does make it possible to have those kinds of platforms in place. So I don’t need to say more than that. I mean, I think there’s more that we could do on the intergovernmental side.
And I know that there’s a lot of goodwill, when we do these agreements, to make that possible. So there’s more room to go, but we need to preserve what we have and make it better as well.

Jennifer Bachus: Susan, as the founder of Common Cause Zambia, which seeks to promote citizen participation, how do you suggest citizens participate in this concept?

Susan Mwape: Thank you very much. I think that there are so many ways in which citizens can participate in promoting digital solidarity, and it’s a range. Part of the efforts that citizens can undertake we have already seen in other platforms, but I would point to advocacy and awareness campaigns. Citizens can do a lot of that to ensure that they promote the concept, using social media platforms to raise awareness. We have seen that happen in Zambia. I’ll give you an example from 2021, when we were heading towards elections and young people, the youth of Zambia, were dissatisfied with how government was conducting government business. And so they decided to hold a protest, and the government threatened to break their bones and would not permit them; they needed to get a police permit to do that, but the government would not grant it. So they opted to hold their protest online. We woke up on that day to maximum police presence on the ground, chasing after these young people who, once threatened, had told their colleagues to stand down and decided to hold the protest in the bush and just live-stream it. The impact of that was that they had more than 12,000 people view that stream. They had more outreach, more impact, and there was also a lot of solidarity around it. So in as much as they were getting their message across, they were able to use digital platforms to do so. Another strategy citizens can use is supporting digital rights and privacy, and this can be done through joining different movements. We’ve seen that there’s a lot of effectiveness in working collaboratively.
A very quick example that comes to mind for me is the Keep It On campaign led by Access Now, which is a global campaign of different organizations that push back against internet shutdowns, and it has really, really created a lot of impact in the sense that it provides an opportunity for global communities to understand what’s going on, but also be able to lend a voice and provide that solidarity to push back against internet shutdowns. I would also look to supporting just ethical digital platforms, and I think that we all talk about using digital platforms in so many ways, but supporting ethical platforms is one way in which I think we can also stand up against bad practices that exist and also just push back on the platforms that do not serve the needs of citizens. So I think these are some of the ways that we can do that.

Jennifer Bachus: Thanks for that. So I think the next question, which we’ll all answer, so I’ll just put it out there, is: how can we more effectively operationalize this concept in 2025? What are some examples of practical approaches and tools? I know we’re running a little behind, so I’m going to run pretty quickly through some of those that we in the U.S. government are using and some of the work that we’re doing. To start with, we have done a lot on cybersecurity, working on approaches to securing cyberspace. We are working on law enforcement collaboration to build secure and resilient ICT infrastructure and governance and effective incident detection and response, recognizing that all of this needs to be in line with international law and reinforce norms of responsible state behavior in cyberspace. On AI, there’s just so much going on in this space. I could probably spend an entire panel on that, and I’m sure there’s been more than one panel on AI this week. We did launch the Partnership for Global Inclusivity on AI, which, again, I have some paperwork on if people are interested, and which brings together the Department of State and some of the largest tech companies, galvanizing more than $100 million to help unlock AI’s potential. We established a Group of Friends of AI for Sustainable Development with our co-chair, Morocco. We are gathering to share best practices and figuring out ways to collaborate. With USAID, we launched an AI for development funders collaborative and an AI and global development playbook. Just a second on commercial spyware, which is an issue that we have also tackled very significantly in 2024 and which will continue in 2025.
Really, it’s a whole-of-government strategy, which includes things like regulating the US government’s use of commercial spyware through an executive order, promoting accountability using economic sanctions, export controls, and visa restrictions, and then working diplomatically with partner countries to address this. And I would just say that as we undertook the GDC, and I think it was recognized that there were many complaints from the multi-stakeholder community on the GDC, we in the US government really thought it was important to engage with civil society and with the multi-stakeholder community, including from the global South, to promote multi-stakeholderism. We had lots of civil society roundtables. We did what we tend to do: we incorporated those consultations into what we were going to do. And I know we’re going to talk about WSIS Plus 20, so I will leave our forward-looking work on WSIS Plus 20 to the next question, and I will hand it over to you.

Nashilongo Gervasius: Yes, so in amplifying the operationalization of digital solidarity in 2025, we should be looking at raising common voices on key issues. I think Susan has mentioned issues of standing up against repressive regimes shutting down the internet, for instance. So how do we collaborate? How do we stand together when countries or communities that we join in partnership with through digital solidarity are faced with issues of surveillance, for instance, you know, or issues of internet shutdowns? Really, that requires a lot of coordination. So we’re seeing digital solidarity being operationalized through collaboration, standing together, and creating regional harmonization, whether it is coming through the regional forums, such as the regional IGFs or the sub-regional IGFs, and getting to platforms like this. The other is creating evidence, and this is possibly one of my favorites: really getting down and creating knowledge by researching, providing that evidence, and bringing it to platforms like this, whether it is the IGF or the WSIS, and actually saying, look, this is actually what it is. This is actually what it looks like at the local level. But also capacity building, right? We appreciate opportunities like this because once we get back home, you really go with a different perspective. You really go with a more informed voice that is able to tell the private sector, that is able to convince your law enforcement, to say, you know what, this concept of a smart city, much as you find it effective for your law enforcement purposes, is a contravention of human rights. You are putting people at risk. And so capacity building, I think, remains very key. And also, I mean, we are faced with so many issues related to surveillance.
I am possibly happy to be engaging with one of the projects that we are doing through the Digital Rights Network for Africa, a project led by the University of California, I think Irvine’s School of Law, where we are really researching how surveillance is taking shape within Africa and how that is seen by government, but also how civil society and all the other stakeholders at the local level are engaging with that. And really, again, comprehensively looking at issues of digital rights, asking, currently with the support that comes through the solidarity, that human rights online must be recognized as true human rights, whether constitutionally or through other means. We should also be able to work together through other existing mechanisms. I think of the UPR: yesterday we had a quick conversation, and we just found out that many of our colleagues are involved in the UPR mechanism or the assessments at the local level. And many of us have been able to do so effectively because we’ve got partners, a lot of solidarity colleagues who stand with us and say, look, we know that much of the time, when governments put that report together themselves, they decide who gets in the room. But we found ways and means to create our own reviews, particularly on digital rights. I think this was the case with the Internet Society during our last review, where we really presented. And we also found that even government was very receptive when we came and said, look, we know you looked at human rights, issues of vulnerable communities, but this is what we’re looking at from a digital rights perspective: issues of privacy, freedom of expression online, access to information, just amongst others, and making sure that we’ve got the right frameworks that are human rights-respecting and that put people first. So there’s a lot of work that we foresee for ourselves in 2025 and even beyond.
Again, it requires all of us, and it requires partners who know better, who are possibly better resourced, to help us carry and bring our voices to platforms like this. Thank you.

Jennifer Bachus: Thanks for that. Smart cities is another topic where, yeah, I have the same surveillance concerns. And I think there are very good things that can come out of smart cities, but we should all be a little concerned about them. So very glad to hear you’re raising your voice on that issue as well. Over to you, Jason.

Jason Pielemeier: Yeah, thanks. So, yeah, I really wanna echo Nashilongo’s and Susan’s comments earlier as well about the way in which these interconnecting technologies can both enable free expression and freedom of association, can enable connectivity where it didn’t exist before, can promote voice where it may have been stifled, but also how, when they’re misused, these technologies can be very, very dangerous to those who are trying to raise their voices and to call governments to account. So in that context, I think it’s important to note, referencing also my earlier comments about trust, that the IGF itself is a really great enabler of digital solidarity. And it’s not just this conference that’s happening right now, it’s the national and regional level IGFs that Nashilongo referred to, it’s the intersessional work, the dynamic coalitions, the policy networks. I mean, this is a process, a community that’s been built over decades, and it’s incredibly valuable. It brings so much expertise, a tremendous repository of knowledge and experience together. But for those same reasons, it’s important to recognize that hosting the IGF in countries where those same human rights that we’ve been referring to are systematically repressed creates a real challenge. I am not with you in person because my board made a decision that we would not attend this IGF in person as an organization, because we had real concerns about safety and security, as well as concerns about the human rights track record of the host government.
And so we put out a statement that indicates our ongoing support for the IGF and the IGF community, but also raises concerns about the hosting decisions that the IGF and the broader UN have made, and the need to ensure that IGF is hosted in countries where the community feels safe, where there is trust, and where we can have these robust conversations without fear of reprisals, without fear of unwarranted surveillance, without fear of censorship. So I think, you know, this is an incredible mechanism that is available to the community. It is a medium for building trust and fostering digital solidarity. But we have to take care of this resource that is the IGF, and that means having some difficult conversations. And those conversations are going to come up in the forthcoming WSIS process, which includes, of course, the question of renewal and the mandate of the IGF. So I really hope that we can use that process not only to extend the mandate, but to strengthen the IGF, including how decisions about hosting are made going forward.

Jennifer Bachus: Thanks. Over to you, Robert.

Robert Opp: Yeah. Sure. I want to talk about two different levels. Now I’m doing – I don’t know what’s going on with the microphone. I think we are in an unprecedented time right now in terms of the technical ability for the exchange of information and data, and the need for greater data as we look at artificial intelligence systems. There are parts of the Global Digital Compact that talk about data governance. At a technical level, I think we need to be thinking more than ever about cross-border flows of data and information, interoperability of systems, and so on. And this is, again, a bit behind UNDP’s interest in the space of digital public infrastructure: how can we really create greater economic and social prosperity coming out of the use of these digital platforms, which then of course have to be accompanied by policy and legal protections with a very careful focus on people’s rights. Because, as we said, and I don’t wanna be repetitive, but I think it’s worth emphasizing, we should not be, and as a UN agency we are not, in the space of offering technology support to countries without the accompanying governance mechanisms, policies, legal protections, et cetera. Because, as Jason and other panelists have been saying, without those they present real risk. So in 2025, our intention is to continue to pursue the benefits of data availability and interoperability, and better data governance that takes a global cooperation approach, but doing that in a safe way that really puts people’s rights at the heart of it.

Jennifer Bachus: And over to you.

Susan Mwape: Thank you. And for me, I think it’s to talk about looking forward to how we can operationalize digital solidarity. And so I just had a few thoughts about that. I think, first of all, there was a lot of talk from Jason and yourself, Jennifer, around DPI. And so I think that building inclusive infrastructure to access technology will be helpful. We’ve talked a lot about the digital divide and how things are moving forward, but until we have infrastructure that is inclusive of those that have been left behind, this whole journey is going to be challenged, because after all, solidarity is best with as many numbers as you can get, without leaving anyone behind. So infrastructure is very important. Also strengthening data privacy and security. Now, Nashilongo talked about smart cities. We have a number of countries that are struggling with bad data protection laws, which are nonexistent in some countries. So that is also something of great importance for us to move forward with. I think that we also need to begin to think of platform designs for solidarity as well, where we can do this in a more open space. Jason raised a very valuable concern around, for instance, hosting of the IGF, and so when we think about those that are unable to travel, if we have platforms that enable them to effectively participate, it is helpful. When we talk about the cost of the internet, for instance, I come from Zambia, a country that has one of the highest costs where data is concerned, so not many people would be able to participate in an event like this. We are struggling with issues of load shedding for endless hours, and so that creates a barrier in itself. Finally, I know that we are running out of time.
I think there’s also a need for support to NGOs, civil society, and other stakeholders to look at how we can bring the private sector on board, maybe providing incentives for things like corporate social responsibility that supports digital solidarity. Thank you.

Jennifer Bachus: Thanks for that. And public-private partnership is absolutely a cornerstone of what the U.S. works on, because it’s absolutely accurate that the amount of resources the U.S. government is ever going to be able to put into this is going to be dwarfed by the amount of private sector resources. And I’m always really heartened when I talk to a lot of tech companies and I hear about the academies they have, the work they’re doing; getting the word out on that, continuing to promote it, continuing to support it, is something I try to do whenever I meet with them. And I will say, not just because you both come from Africa, that I’ve had the opportunity to travel through Africa and see really impressive work across a wide range of programs, and, not because you’re both women, particularly work focused on bridging the gender digital divide. In some cases, it’s so interesting the way they’ve thought out things like transportation, childcare, and food, because it’s one thing to say, we’re gonna empower you, we’re gonna train you. It’s another thing to say, we’re gonna do it in a way that’s gonna work in the circumstances that you live in. And it is, I think, incredibly important to understand that if you, for cost reasons, have to locate the thing you’re doing in a rural place, a place that’s not in the center, you have to get people there, right? What do you do if the people participating in the program get pregnant? Can you figure out a way, when they go on parental leave, when they have to be home, or have to be home for their children? I mean, just thinking through all these elements is really complicated. And the best programs are the ones where the private sector has come together with civil society, who say, well, you need to think about this thing in our country. And I think that’s incredibly powerful.
And again, I went off track, and we’re supposed to talk a little about how we find ourselves situated between the GDC and WSIS, and the question of the extension of the IGF mandate and what all this means. And I personally have listened to lots of conversations. I acknowledge, as I’ve acknowledged to many of you already, that I’m not the expert on these things in the US government. Luckily, I have experts on all of these processes. And just to say, the US government will continue to support the IGF, and we will continue to support multi-stakeholder approaches to internet governance. We know that this is a particularly pivotal year for the IGF as we look at WSIS Plus 20. I also was given the opportunity by my team to announce that we recently launched a three-million-dollar initiative to build the capacity of international stakeholders, including civil society and governments of developing countries, to engage more meaningfully in multilateral development and governance processes, particularly related to AI, because this is a demand that we keep hearing. And we hope that this could potentially include facilitating greater engagement in the IGF in Norway, which, let’s be clear, is not gonna be cheap, as well as the WSIS Plus 20 review and the GDC implementation. I’m happy to say more on that, and also to note that I’ve had the opportunity to meet with some of you that we’ve supported in traveling around the world to various events, including, interestingly, Jason at NetMundial+10, the UN General Assembly, and of course this week. So in early 2025, after the IGF virtual workshop report comes out, we would like to host a virtual reunion of this panel to just follow up and see how things are going, because it’s great to meet here, but it’s even better if we continue to meet and build on these engagements. So I will turn it to you all for your thoughts as we continue to look to some of these multilateral processes.

Nashilongo Gervasius: Thank you. Again, appreciating this engagement in the nature that it’s taking, we appreciate the US government’s commitment to funding and to making sure many of the efforts taking place locally are realized, but also participation in platforms like this. With that, we also recognize that, as we engage on this concept of digital solidarity and the support that the U.S. government is, for instance, able to put forth, many different countries are at very different levels of development or even engagement on digital issues. Many countries are already thinking about how to review their policies on cybersecurity, for instance, making sure they are aligned to the UN treaty on cybercrime, yet you also have countries that just do not have laws or policies in place. So we have many of those challenges of alignment, and I hope that digital solidarity provides us opportunities for sharing lessons learned, for ensuring that policies at some level reflect the ambitions, whether these are the global ambitions or the regional ambitions, for instance the Malabo Convention and many others, like this year’s Lomé Declaration that looks at cybercrime. So we have those challenges, and as we move forward, we should use these platforms to align interests, but also to align policies and other emerging framework requirements. And then, from a GDC perspective, and I think this is something that you’ve mentioned, we hope that WSIS Plus 20 and maybe the IGF would really shape clear reporting mechanisms for the GDC. The ambition looks good to have, but it is maybe a bit too broad and does not give us specificity in terms of how we deliver, or how we focus and say, this is what we have delivered on this particular point.
And then, of course, we heard, I think, from Robert on issues of cross-border flows, and I really appreciate the UNDP effort, as you’ve mentioned, making sure that in promoting platforms and channels where this data is saved or distributed, we’re also promoting safe containment of this data, so that it does not lead to putting people at risk, whether these are your usual human rights defenders or just ordinary citizens. So there is much work that we see ourselves doing, and this platform, I think, will help us with that. It also goes with the issues of interoperability that, again, we face in many of the countries: we have systems that don’t work, that don’t talk to each other. Why doesn’t your national ID system talk to your election system, and solve many of the issues that come with the credibility of elections, making processes like that very seamless? And so it is in the health sector too. I think the UNDP’s effort through digital public infrastructure is helping there. On this conversation from UNDP, though, I only saw those conversations happening in Namibia this year, and as a person that has been engaging, maybe through little and limited means, I’m asking, why are we starting only now? Particularly because, again, we need to really align those efforts, what’s happening locally but also what is happening globally. And in my closing here, it is really the support that we need in the policymaking process. I find myself at the danger of repeating myself: really supporting the policymaking processes, but also churning out easier mechanisms for civil society, and also other stakeholders, to really say, this policy is legitimate, this policy puts people first, this policy respects human rights, just amongst others. Yes, thank you.

Jennifer Bachus: Over to you, Jason.

Jason Pielemeier: Thanks. Yeah, I’ll try and be brief. I know we’re running out of time. I wanted to highlight something Nashilongo referenced, which is the UN Cybercrime Convention, which is poised to be finalized this month and creates a framework that intends to enhance and facilitate more collaboration to address cybercrime. And that is certainly a good thing. We know that cybercrime is a scourge that needs to be addressed. However, we have been pretty consistent as GNI, along with many of our members, both private sector and civil society, in pointing out some of the real potential challenges that this convention could pose by creating a sort of sanctioning system that allows countries to put pressure on private companies and their employees domestically to require them to hand over data and to violate user privacy, and allows countries to continue the types of digital transnational repression that we are increasingly seeing around the world. And so there’s going to be a lot of work that needs to be done, including under the framework of digital solidarity, to ensure that the Cybercrime Convention is used appropriately and not misused. As GNI, we are very committed to continuing to help civil society and private sector actors come together with rights-respecting governments to shepherd this process, hopefully in a more rights-respecting manner.
With WSIS Plus 20, we are engaged and very hopeful that, as Nashilongo said, WSIS can remain the central mechanism, and that the WSIS Action Lines can continue to provide the type of framework that is needed to ensure more collaboration, and more access and support for those who are falling behind the digital divide, as well as to renew the mandate of the IGF. The NetMundial+10 outcome document that I referenced earlier calls for a 10-year renewal of the IGF, to give it the sustainability and predictability it needs as we continue to deal with all kinds of new technological challenges and opportunities. It also calls for strengthened funding and resourcing for the IGF, including improving the process for selecting host countries going forward, which I mentioned earlier. So the WSIS conversation will be an important one, and we really think the GDC needs to be a process that supports and feeds into WSIS and doesn’t duplicate it, doesn’t create new separate tracks for similar conversations, which would allow for potential conflicts and also duplicate the burden for those of us trying to participate meaningfully in these various processes. So yeah, lots of important processes and conversations to be had in 2025. And as GNI we’re very much looking forward to participating in them together with many of you.

Jennifer Bachus: Thanks, Jason. And we look forward to working with you on issues around the Cybercrime Convention, because we also recognize the potential for misuse of the convention and are guarding against it, as well as on these questions of the future of the IGF. And I will say you’ve done admirably in the middle of the night, and I think you got all your points in. Congrats on that, and with that I’ll turn to Robert.

Robert Opp: Yeah, I would agree with that. So it sounds like we all believe the IGF and WSIS should be strengthened. I have a slightly different take on the whole evolution of this, which is that 20 years ago, when WSIS and the IGF were first created, let’s face it, ICT for development was a bit of a niche community. And just as digital has become more mainstream in our lives, if you look at just the use of personal devices and all of the kinds of issues that are coming around that, I think the IGF and WSIS need to be expanded and strengthened in a direction that makes them more mainstream. In a good way, not in a bad way, if I can put it that way. I think that we need to see more integration with some of the other issues that we have out there. Environmental sustainability is one that I find missing here. There are some issues around children and gender violence and things that are somewhat here but not super well represented. So I think that this is the path for us, because these issues are absolutely crucial to the lives of everyone. And if you look at the number of connected people worldwide and the trajectory of that, it’s more important than ever that we see this as such a central platform.

Susan Mwape: All right, so I think for me, I will just conclude by saying that, looking forward, I am of the view that multi-stakeholderism is a very important aspect of digital solidarity. And it is my hope that we will be able to localize these concepts to the lowest level, because when we talk about digital solidarity, it means different things to different people. And so we have to find a way of ensuring that it’s interpreted as widely as possible and taken forward. I really hope that the mandate of the IGF will be renewed, because I think that the IGF is one of those platforms that is really, really relevant and very important, being a platform that provides multi-stakeholder engagement where people have a chance to lend their voice to all these different conversations that happen at the local and international levels. So that would be my hope, and I think those would be my parting words. Thank you.

Jennifer Bachus: Great, I happen to know there are a couple of questions in the room. So what we’re gonna do, because I think we have three minutes, is at least get those questions down. We will do our best to answer them, and if we don’t get all the way through the answers, we promise to engage with you after the session. So for anyone who wants to ask a question, we’re gonna have them all asked at the same time, and then we’ll do our best.

Audience: This is Barbara, I’m from Nepal, for the record. I have a question. We talked about digital solidarity and also about digital sovereignty. In most cases, in particular in developing countries, resources are very limited and enforcement is very tough, especially towards digital platform providers. So they talk about digital sovereignty because law enforcement faces very tough times when enforcing cybercrime issues and other similar criminal cases, or economic crimes as well. So where can we find a solution to the digital sovereignty issue, compared with digital solidarity, while avoiding the strict data localization legislation they talked about? What could be an alternative governance model for that kind of situation? Thank you. Well, a few more notes, but also questions. Alexander Savnin from the Russian Federation, not representing government, more civil society and operations. I will turn to Jason and say: this process was created in Tunisia, which was a really authoritarian country, and democratic forces and organizations used it to meet people, to build capacity, and to build solidarity then. As a citizen of the Russian Federation in opposition to my government, I was really sad that the IGF has been moved to Norway. I had a lot of hopes for such organizations, such people, even the State Department, to come and to talk to people. So please pass my blame to your board and to the boards of other organizations. But again, if I criticize my government, it doesn’t mean I will not criticize or ask questions of the US government. Because when we are talking of solidarity, well, most solidarity in cybersecurity is built on the technical community, people working together, and so on. And I would like to criticize, or ask a question of, the US government: your sanctions are actually breaking solidarity. The latest example is that a few Russia-based Linux kernel supporters were removed, and that actually broke solidarity.
One of them actually lived in the United States for years and worked for Amazon. But again, some kind of sanctions. Another example, especially related to cybersecurity, is that sanctions are actually economic measures which were imposed before, even before what you call the brutal attack and what I, as a Russian citizen based in the Russian Federation, have to call the special military operation. Huge Russian cybersecurity companies, Kaspersky and Positive Technologies, were sanctioned and actually removed from all these processes of communication within the technical community. And, well, I don’t believe that I’m saying this, but I have to note that it is not only the Russian government that breaks solidarity on cybersecurity by not joining the Budapest Convention, and I definitely understand why they don’t do this, but also, well, governments including the United States may do so too. So please answer: what could be done to improve things, or to stop breaking technical solidarity even through these good-looking measures? Thank you. Yeah. Hi everyone. So I’ll try to keep it brief. I just have a comment. My name is Hala Rasheed. I’m a public policy and human rights expert representing Alnahda Society. So a little bit of context as to Saudi civil society: it’s a 60-year-old women’s non-governmental nonprofit with UN ECOSOC consultative status, and we also lead the Saudi delegation to the W20. I appreciate Jason’s remarks about the IGF being hosted in Saudi Arabia, but I do have some comments. For these dialogues to achieve their objectives, it is essential to approach them with an open mind and a clear understanding of the current legal frameworks, enacted policy reforms, and local context. Let us ensure that historical inaccuracies and biases do not cloud our judgment going into 2025. Thank you very much.

Jennifer Bachus: So, well, you can hear me; I’ve lost my mic, or my ability to hear. I just wanna say, look, we were talking on our way in today, and of course the United States government is always ready and willing to take any feedback. I understand the IGF right after Snowden was quite complicated for the US government, but we show up, we listen, we try to respond. That’s what we’re here to do. We will not pretend that everything we do is perfect; if we don’t hear the feedback, we make bad policy. So I recognize your points, and it is important. On the question of the non-responsiveness of platforms, look, it’s something I personally hear all the time. We push them very hard to be more responsive, but criminalizing speech, as you know, is of course at the same time very problematic. I don’t know if they really wanna shoo us out, but I wanna give the panelists the opportunity to say at least a couple of words if they feel obliged.

Robert Opp: Hello, super quickly, just on your situation as well: digital sovereignty does not equal cybersecurity. What I mean is, we can’t assume that acting in ways that we think are protective is actually safer.

Jennifer Bachus: Before we get kicked out of the room, thank you to all of you for your participation. Thank you to the panelists. And please, if you haven’t read our strategy, I encourage you to. Thank you for your participation and we look forward to continuing these conversations. And thank you, Jason, from the other side of the world. Thank you.

J

Jennifer Bachus

Speech speed

159 words per minute

Speech length

4189 words

Speech time

1573 seconds

Digital solidarity promotes cooperation while respecting rights

Explanation

Jennifer Bachus emphasizes that digital solidarity involves working together on shared goals while respecting human rights. It aims to create a more secure, resilient, inclusive, and prosperous digital future.

Evidence

The US International Cyberspace and Digital Policy Strategy promotes digital solidarity as a framework for partnerships and like-minded coalitions.

Major Discussion Point

Digital Solidarity and Sovereignty

Agreed with

Jason Pielemeier

Susan Mwape

Robert Opp

Nashilongo Gervasius

Agreed on

Importance of multi-stakeholder approaches

Digital sovereignty can undermine economic and security objectives

Explanation

Jennifer Bachus argues that digital sovereignty often leads to protectionism, blocking market access, and preventing cross-border data flows. This can undermine interoperability, security, and economic objectives.

Evidence

Example of Ukraine changing laws to allow cloud storage of government data, which protected it from Russian attacks.

Major Discussion Point

Digital Solidarity and Sovereignty

Differed with

Nashilongo Gervasius

Differed on

Digital sovereignty vs. digital solidarity

J

Jason Pielemeier

Speech speed

127 words per minute

Speech length

2346 words

Speech time

1108 seconds

Digital solidarity requires trust-building across stakeholders

Explanation

Jason Pielemeier emphasizes that trust is crucial for digital solidarity to work effectively. He argues that building trust across countries and stakeholders is challenging but essential for collaboration.

Evidence

Reference to the NetMundial+10 process as an example of multi-stakeholder collaboration in a low-trust environment.

Major Discussion Point

Digital Solidarity and Sovereignty

Multi-stakeholder collaboration is critical for addressing digital challenges

Explanation

Jason Pielemeier argues that bringing together diverse actors with different backgrounds and expertise can lead to stronger outcomes. He emphasizes the importance of multi-stakeholderism in addressing complex digital issues.

Evidence

Description of GNI’s work in bringing together academics, civil society organizations, investors, and tech companies to address challenges to freedom of expression and privacy.

Major Discussion Point

Multi-stakeholder Approaches

Agreed with

Jennifer Bachus

Susan Mwape

Robert Opp

Nashilongo Gervasius

Agreed on

Importance of multi-stakeholder approaches

IGF mandate should be renewed and strengthened

Explanation

Jason Pielemeier advocates for a 10-year renewal of the IGF mandate to provide sustainability and predictability. He argues for strengthened funding and improved processes for selecting host countries.

Evidence

Reference to the NetMundial+10 outcome document calling for IGF renewal and strengthening.

Major Discussion Point

Internet Governance Forums and Processes

Agreed with

Susan Mwape

Robert Opp

Agreed on

Renewal and strengthening of IGF

Hosting decisions for IGF need careful consideration

Explanation

Jason Pielemeier raises concerns about hosting IGF in countries with poor human rights records. He argues that this creates challenges for safety, security, and open dialogue.

Evidence

GNI’s decision not to attend the IGF in person due to concerns about the host government’s human rights track record.

Major Discussion Point

Internet Governance Forums and Processes

Differed with

Alexander Savnin (Audience)

Differed on

Hosting of Internet Governance Forum (IGF)

Cybercrime convention poses potential risks to privacy

Explanation

Jason Pielemeier expresses concerns about the UN Cybercrime Convention potentially allowing countries to pressure companies to hand over user data. He argues this could lead to violations of user privacy and digital transnational repression.

Evidence

GNI’s consistent warnings about the potential challenges posed by the convention.

Major Discussion Point

Challenges in Implementing Digital Policies

S

Susan Mwape

Speech speed

147 words per minute

Speech length

1325 words

Speech time

538 seconds

Infrastructure and data privacy are key for digital solidarity

Explanation

Susan Mwape emphasizes the importance of building inclusive infrastructure for accessing technology and strengthening data privacy and security. She argues these are crucial for operationalizing digital solidarity.

Evidence

Reference to high internet costs and load shedding in Zambia as barriers to participation in digital events.

Major Discussion Point

Digital Solidarity and Sovereignty

Civil society plays important role in multi-stakeholder engagement

Explanation

Susan Mwape highlights the role of civil society in promoting citizen participation and holding leaders accountable. She argues for the importance of localizing concepts like digital solidarity to the lowest level.

Evidence

Description of Common Cause Zambia’s work in promoting citizens’ participation in governance processes and building capacity.

Major Discussion Point

Multi-stakeholder Approaches

Agreed with

Jennifer Bachus

Jason Pielemeier

Robert Opp

Nashilongo Gervasius

Agreed on

Importance of multi-stakeholder approaches

IGF provides important platform for multi-stakeholder engagement

Explanation

Susan Mwape expresses hope for the renewal of the IGF mandate, emphasizing its importance as a platform for multi-stakeholder engagement. She argues that IGF allows people to lend their voice to various conversations at local and international levels.

Major Discussion Point

Internet Governance Forums and Processes

Agreed with

Jason Pielemeier

Robert Opp

Agreed on

Renewal and strengthening of IGF

Robert Opp

Speech speed

147 words per minute

Speech length

1507 words

Speech time

613 seconds

UN/UNDP can create spaces for multi-stakeholder dialogues

Explanation

Robert Opp highlights the role of UN and UNDP in creating platforms for multi-stakeholder dialogues. He emphasizes the importance of putting human rights at the center of digital initiatives.

Evidence

Reference to UNDP’s work in supporting countries with digital policies, strategies, and capacity building.

Major Discussion Point

Multi-stakeholder Approaches

Agreed with

Jennifer Bachus

Jason Pielemeier

Susan Mwape

Nashilongo Gervasius

Agreed on

Importance of multi-stakeholder approaches

IGF and WSIS need to become more mainstream and integrated

Explanation

Robert Opp argues that IGF and WSIS need to expand and strengthen in a direction that makes them more mainstream. He suggests integrating more issues like environmental sustainability, children’s rights, and gender violence.

Evidence

Comparison of the niche ICT for development community 20 years ago to the current mainstream nature of digital issues.

Major Discussion Point

Internet Governance Forums and Processes

Agreed with

Jason Pielemeier

Susan Mwape

Agreed on

Renewal and strengthening of IGF

Nashilongo Gervasius

Speech speed

119 words per minute

Speech length

2097 words

Speech time

1054 seconds

Digital solidarity should align with regional and global ambitions

Explanation

Nashilongo Gervasius emphasizes the need for digital solidarity to align with both global and regional ambitions. She argues for the importance of sharing lessons learned and ensuring policies reflect these ambitions.

Evidence

Reference to the African Union’s digital transformation strategy and regional declarations on cybercrime.

Major Discussion Point

Digital Solidarity and Sovereignty

Differed with

Jennifer Bachus

Differed on

Digital sovereignty vs. digital solidarity

Multi-stakeholder approaches need to consider power asymmetries

Explanation

Nashilongo Gervasius highlights the importance of being mindful of power asymmetries between diverse stakeholders. She argues for the need to support policymaking processes and create mechanisms for civil society to assess policy legitimacy.

Evidence

Reference to resource constraints that can keep people from effectively participating in civil society.

Major Discussion Point

Multi-stakeholder Approaches

Agreed with

Jennifer Bachus

Jason Pielemeier

Susan Mwape

Robert Opp

Agreed on

Importance of multi-stakeholder approaches

Audience

Speech speed

137 words per minute

Speech length

614 words

Speech time

267 seconds

Developing countries face resource constraints in enforcement

Explanation

An audience member raises the issue of resource limitations in developing countries, particularly in enforcing digital policies. They question how to balance digital sovereignty concerns with digital solidarity.

Evidence

Reference to challenges in enforcing cybercrime issues and other criminal cases.

Major Discussion Point

Challenges in Implementing Digital Policies

Sanctions can undermine technical cooperation and solidarity

Explanation

An audience member argues that sanctions imposed by the US government can break technical solidarity in cybersecurity. They suggest that such measures can hinder communication and cooperation in the technical community.

Evidence

Examples of Russian-based Linux kernel supporters being removed and sanctions on Russian cybersecurity companies.

Major Discussion Point

Challenges in Implementing Digital Policies

Local context and reforms need consideration in policy dialogues

Explanation

An audience member emphasizes the importance of approaching digital policy dialogues with an open mind and understanding of local context. They argue for the need to consider current legal frameworks and enacted policy reforms.

Major Discussion Point

Challenges in Implementing Digital Policies

Agreements

Agreement Points

Importance of multi-stakeholder approaches

Jennifer Bachus

Jason Pielemeier

Susan Mwape

Robert Opp

Nashilongo Gervasius

Digital solidarity promotes cooperation while respecting rights

Multi-stakeholder collaboration is critical for addressing digital challenges

Civil society plays important role in multi-stakeholder engagement

UN/UNDP can create spaces for multi-stakeholder dialogues

Multi-stakeholder approaches need to consider power asymmetries

All speakers emphasized the importance of multi-stakeholder approaches in addressing digital challenges and promoting digital solidarity. They agreed that collaboration across different sectors and stakeholders is crucial for effective governance and policy-making in the digital realm.

Renewal and strengthening of IGF

Jason Pielemeier

Susan Mwape

Robert Opp

IGF mandate should be renewed and strengthened

IGF provides important platform for multi-stakeholder engagement

IGF and WSIS need to become more mainstream and integrated

These speakers agreed on the importance of renewing and strengthening the Internet Governance Forum (IGF) mandate. They view IGF as a crucial platform for multi-stakeholder engagement and believe it should be expanded to address a broader range of issues.

Similar Viewpoints

Both speakers expressed concerns about how certain approaches to digital sovereignty or cybercrime prevention could potentially undermine privacy, security, and economic objectives.

Jennifer Bachus

Jason Pielemeier

Digital sovereignty can undermine economic and security objectives

Cybercrime convention poses potential risks to privacy

Both speakers emphasized the importance of considering local and regional contexts in implementing digital solidarity, particularly in terms of infrastructure development and policy alignment.

Susan Mwape

Nashilongo Gervasius

Infrastructure and data privacy are key for digital solidarity

Digital solidarity should align with regional and global ambitions

Unexpected Consensus

Challenges in implementing digital policies

Jennifer Bachus

Jason Pielemeier

Audience members

Digital sovereignty can undermine economic and security objectives

Cybercrime convention poses potential risks to privacy

Developing countries face resource constraints in enforcement

Sanctions can undermine technical cooperation and solidarity

Despite representing different perspectives (government, civil society, and audience), there was an unexpected consensus on the complexities and potential negative consequences of implementing certain digital policies. This highlights a shared recognition of the challenges in balancing security, privacy, and economic interests in the digital realm.

Overall Assessment

Summary

The main areas of agreement centered around the importance of multi-stakeholder approaches, the need to strengthen and renew the IGF mandate, and the recognition of challenges in implementing digital policies. There was also consensus on the need to balance digital sovereignty concerns with international cooperation and human rights considerations.

Consensus level

The level of consensus among the speakers was moderately high, particularly on the importance of multi-stakeholder engagement and the need for inclusive digital development. This consensus suggests a shared understanding of the complexities in digital governance and the need for collaborative approaches. However, there were also areas of divergence, particularly in how to balance national interests with global cooperation, indicating that while there is agreement on broad principles, the specifics of implementation remain contentious.

Differences

Different Viewpoints

Digital sovereignty vs. digital solidarity

Jennifer Bachus

Nashilongo Gervasius

Digital sovereignty can undermine economic and security objectives

Digital solidarity should align with regional and global ambitions

Jennifer Bachus argues that digital sovereignty often leads to protectionism and undermines economic and security objectives, while Nashilongo Gervasius suggests that digital solidarity should align with both global and regional ambitions, implying a more balanced approach to sovereignty concerns.

Hosting of Internet Governance Forum (IGF)

Jason Pielemeier

Alexander Savnin (Audience)

Hosting decisions for IGF need careful consideration

IGF should be held in countries with human rights concerns to promote dialogue

Jason Pielemeier expresses concerns about hosting IGF in countries with poor human rights records, while Alexander Savnin argues that holding IGF in such countries can be beneficial for building capacity and solidarity.

Unexpected Differences

Impact of sanctions on digital solidarity

Jennifer Bachus

Alexander Savnin (Audience)

Digital solidarity promotes cooperation while respecting rights

Sanctions can undermine technical cooperation and solidarity

While Jennifer Bachus promotes digital solidarity as a means of cooperation, Alexander Savnin unexpectedly raises the issue of US sanctions undermining technical solidarity in cybersecurity, which was not directly addressed by the panelists.

Overall Assessment

Summary

The main areas of disagreement revolve around the balance between digital sovereignty and solidarity, the approach to hosting international forums like IGF, and the practical implementation of digital solidarity principles in the face of geopolitical realities.

Difference level

The level of disagreement among the speakers is moderate. While there is general consensus on the importance of digital solidarity and multi-stakeholder approaches, there are significant differences in how these concepts should be implemented and balanced against national interests and human rights concerns. These disagreements highlight the complexity of achieving digital solidarity in a diverse global context and suggest that further dialogue and compromise will be necessary to advance shared goals in digital governance.

Partial Agreements

All speakers agree on the importance of digital solidarity and multi-stakeholder approaches, but they emphasize different aspects: Jennifer Bachus focuses on rights-respecting cooperation, Jason Pielemeier on trust-building, Susan Mwape on infrastructure and privacy, and Robert Opp on the UN’s role in facilitating dialogues.

Jennifer Bachus

Jason Pielemeier

Susan Mwape

Robert Opp

Digital solidarity promotes cooperation while respecting rights

Digital solidarity requires trust-building across stakeholders

Infrastructure and data privacy are key for digital solidarity

UN/UNDP can create spaces for multi-stakeholder dialogues

Takeaways

Key Takeaways

Digital solidarity promotes cooperation while respecting rights and is seen as preferable to digital sovereignty approaches

Multi-stakeholder collaboration is critical for addressing digital challenges and fostering digital solidarity

Internet governance forums like IGF need to be strengthened and made more inclusive

There are challenges in implementing digital policies, especially for developing countries with resource constraints

Trust-building across stakeholders is essential for digital solidarity

Resolutions and Action Items

The US government announced a $3 million initiative to build capacity for international stakeholders to engage in multilateral processes, especially related to AI

The panel agreed to hold a virtual reunion in early 2025 to follow up on progress

Participants expressed support for renewing and strengthening the IGF mandate

Unresolved Issues

How to balance digital sovereignty concerns (e.g. law enforcement needs) with digital solidarity approaches

How to address potential misuse of the UN Cybercrime Convention

How to make IGF and WSIS processes more inclusive and representative

How to mitigate the impact of sanctions on technical cooperation and solidarity

Suggested Compromises

Implementing digital policies with strong safeguards and human rights protections

Strengthening IGF while improving the process for selecting host countries

Integrating digital governance discussions with other key issues like environmental sustainability

Thought Provoking Comments

We believe through digital solidarity that it’s the idea that we have a willingness to work together on shared goals, stand together, help partners build capacity and provide mutual support.

speaker

Jennifer Bachus

reason

This comment introduces and defines the key concept of ‘digital solidarity’ that frames the entire discussion.

impact

It set the tone for the conversation and provided a framework for the other panelists to discuss collaboration and partnership in the digital space.

We recognize the lack of connectivity or digital capacity, and I know all of you are working on this, and these things can hinder the ability to fully participate in the digital economy and challenges our collective ability to achieve the sustainable development goal.

speaker

Jennifer Bachus

reason

This comment highlights a critical challenge in achieving digital solidarity and links it to broader development goals.

impact

It prompted discussion of specific challenges and inequalities in digital access and capacity throughout the conversation.

We need to continue to push back against, and we are pleased to also host a side event on the Counter Ransomware Initiative. We of course see authoritarian governments who continue to increase efforts to undermine the multi-stakeholder rights-based approach to Internet governance and digital policy processes, including across multilateral fora, recognizing where we are here today, which of course puts at risk the future of an open, interoperable, secure, and reliable Internet.

speaker

Jennifer Bachus

reason

This comment introduces the tension between digital solidarity and authoritarian approaches to internet governance.

impact

It sparked discussion about the challenges of maintaining open internet governance in the face of authoritarian pressures.

I think we might not have this term, solidarity, as broadly as the US government has made it intentional to embrace at local level, at regional level, but I think it is an important concept. It is an important approach to be working together for all of us, I think.

speaker

Nashilongo Gervasius

reason

This comment provides a perspective from outside the US on the concept of digital solidarity.

impact

It broadened the discussion to consider how digital solidarity is understood and implemented in different contexts globally.

And at the core of all of that is this concept of multi-stakeholderism, the idea that different actors with different backgrounds and expertise can come together and be stronger than the sum of their parts.

speaker

Jason Pielemeier

reason

This comment introduces the important concept of multi-stakeholderism as central to digital solidarity.

impact

It shifted the discussion towards considering how different stakeholders can collaborate effectively in digital governance.

But COVID really was an inflection point for us in the understanding of how we need to move from being very solutions-oriented and somewhat fragmented into being more holistic and strategic in the way we use digital solutions. But very importantly, moving from that moment of techno-optimism into an understanding of the risks and the importance of putting people’s rights at the center of whatever we do in technical or digital solutions.

speaker

Robert Opp

reason

This comment highlights a significant shift in thinking about digital solutions in development contexts.

impact

It prompted discussion about the need for rights-based approaches and consideration of risks in digital development.

I am not with you in person because my board made a decision that we would not attend this IGF in person as an organization, because we had real concerns about safety and security, as well as concerns about the human rights track record of the host government.

speaker

Jason Pielemeier

reason

This comment raises important issues about the tension between promoting digital solidarity and concerns about human rights in host countries.

impact

It sparked discussion about the challenges of hosting international forums in countries with problematic human rights records.

Overall Assessment

These key comments shaped the discussion by introducing and defining the concept of digital solidarity, highlighting challenges in achieving it (such as digital divides and authoritarian pressures), emphasizing the importance of multi-stakeholder approaches, and raising critical questions about how to balance promoting digital solidarity with concerns about human rights. The discussion evolved from defining broad concepts to exploring specific challenges and tensions in implementing digital solidarity in practice.

Follow-up Questions

How can we address the challenges of digital sovereignty while promoting digital solidarity?

speaker

Barbara from Nepal

explanation

This is important to find a balance between law enforcement needs in developing countries and avoiding strict data localization laws that could hinder digital solidarity.

How can the IGF process be made more inclusive for people from countries with authoritarian governments?

speaker

Alexander Savnin

explanation

This is important to ensure the IGF can continue to serve as a platform for building capacity and solidarity in challenging political environments.

How can US sanctions be adjusted to avoid breaking technical solidarity in cybersecurity?

speaker

Alexander Savnin

explanation

This is important to maintain global cooperation on cybersecurity issues despite political tensions.

How can we ensure dialogues about digital rights and governance take into account current legal frameworks, policy reforms, and local context in different countries?

speaker

Hala Rasheed

explanation

This is important to have more productive and accurate discussions about digital rights and governance globally.

How can the IGF and WSIS processes be expanded and strengthened to address more mainstream issues?

speaker

Robert Opp

explanation

This is important to ensure these processes remain relevant and address crucial issues like environmental sustainability and gender violence in the digital context.

How can we localize the concept of digital solidarity to make it meaningful at the lowest levels?

speaker

Susan Mwape

explanation

This is important to ensure that digital solidarity is understood and implemented effectively in diverse local contexts.

How can we improve the process for selecting host countries for the IGF?

speaker

Jason Pielemeier

explanation

This is important to ensure the safety and inclusivity of the IGF for all participants.

How can we ensure that the Global Digital Compact (GDC) supports and feeds into the WSIS process without creating duplication?

speaker

Jason Pielemeier

explanation

This is important to avoid conflicting processes and reduce the burden on participants trying to engage in multiple forums.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.

WS #42 Combating misinformation with Election Coalitions

Session at a Glance

Summary

This discussion focused on the role of election coalitions in combating misinformation during elections worldwide. Panelists from Google, fact-checking organizations, and journalism backgrounds shared insights on forming and operating these coalitions. They emphasized the importance of collaboration between diverse stakeholders, including media outlets, fact-checkers, and civil society groups, to address misinformation effectively.

The speakers highlighted successful coalition models from various countries, such as Comprova in Brazil and Facts First PH in the Philippines. They stressed the need for building trust among coalition members and maintaining neutrality in leadership. The discussion also touched on the challenges of sustaining momentum beyond election periods and adapting to different cultural and political contexts.

Participants explored the role of technology companies like Google in supporting these coalitions, while also addressing concerns about potential conflicts of interest and the impact of government pressure. The conversation included debates on terminology, with some questioning the use of “misinformation” and suggesting a focus on specific harms instead.

The panel addressed the emergence of AI-generated content and its implications for election integrity, noting both potential risks and opportunities for leveraging AI in fact-checking efforts. They also discussed strategies for engaging young people and non-voters in the fact-checking process.

Overall, the discussion underscored the complexity of combating election-related misinformation and the importance of multi-stakeholder approaches. Panelists agreed that while challenges remain, election coalitions represent a promising model for promoting information integrity and supporting democratic processes globally.

Keypoints

Major discussion points:

– The importance and effectiveness of election coalitions in combating misinformation

– Challenges in maintaining momentum and addressing critiques of election coalitions

– The role of technology companies like Google in supporting election coalitions

– Concerns about government pressure and content moderation in relation to misinformation

– The need for clear policies, transparency, and relationship-building in election coalitions

The overall purpose of the discussion was to explore the role and impact of election coalitions in combating misinformation, sharing best practices and lessons learned from various global examples. The speakers aimed to highlight the importance of collaboration between journalists, fact-checkers, and other stakeholders in promoting election integrity.

The tone of the discussion was generally informative and collaborative, with speakers sharing insights from their experiences. However, it became more pointed and critical when audience members raised concerns about content moderation, government pressure, and the role of large tech companies. The panelists responded professionally to these challenges, maintaining a constructive dialogue while acknowledging the complexity of the issues raised.

Speakers

Speakers from the provided list:

– Mevan Babakar – News and Information Credibility Lead for MENA for Google

– Daniel Bramatti – Investigative journalist from Brazil

– David Ajikobi – Nigeria editor for Africa Check

– Alex Walden – Global Head of Human Rights for Google

– Jim Prendergast – Moderator

– Lena Slachmuijlder – Search for Common Ground and the Council on Tech and Social Cohesion

– Milton Mueller – Internet Governance Project at Georgia Tech

– Claes de Vreese – University of Amsterdam and executive board of the European Digital Media Observatory

Full session report

Election Coalitions and Combating Misinformation: A Global Perspective

This discussion brought together experts from various fields to explore the role of election coalitions in combating misinformation during elections worldwide. Speakers from Google, fact-checking organizations, and journalism backgrounds shared insights on forming and operating these coalitions, emphasizing the importance of collaboration between diverse stakeholders to address misinformation effectively.

Introduction to Election Coalitions and the Elections Playbook

Mevan Babakar, News and Information Credibility Lead for MENA at Google, introduced the concept of election coalitions and presented the Elections Playbook, a comprehensive guide developed to help organizations form and maintain effective coalitions. The playbook outlines two main models for election coalitions: collaborative approaches, where multiple organizations work together, and independent approaches, where a single organization leads the effort.

Key Examples of Election Coalitions

Several successful election coalitions were highlighted during the discussion:

1. Electionland: A U.S.-based coalition that brings together multiple newsrooms to monitor and report on election integrity issues.

2. Comprova: A Brazilian coalition of media organizations that collaboratively fact-check election-related claims.

3. Facts First PH: A Philippine coalition that introduced the MESH concept, combining fact-checking with in-depth explanatory journalism.

These examples demonstrate the diverse approaches to coalition-building across different cultural and political contexts.

Strategies for Combating Misinformation

Speakers discussed various innovative approaches to address misinformation:

1. Pre-bunking: Babakar introduced this proactive strategy to inoculate against expected false narratives before they become viral. She noted successful pre-bunking efforts in Europe.

2. Context-based fact-checking: Daniel Bramatti, an investigative journalist from Brazil, emphasized that fact-checking should add context rather than censor speech.

3. Media literacy: David Ajikobi, Nigeria editor for Africa Check, highlighted the importance of media literacy efforts to engage youth.

4. AI-assisted fact-checking: Babakar discussed the potential of leveraging AI tools to scale fact-checking efforts, while also noting the challenges posed by AI-generated content.

Challenges and Considerations

Despite the overall agreement on the importance of coalitions, speakers acknowledged several challenges:

1. Building trust: Babakar noted that building relationships and trust among coalition members takes time but is critical for success.

2. Funding: Maintaining long-term financial support for coalitions was identified as a significant challenge.

3. Balancing diverse interests: Ajikobi highlighted the difficulties in managing diverse media organizations within coalitions.

4. Leadership: Bramatti stressed the importance of choosing neutral leadership to ensure coalition credibility.

5. Government pressure: Alex Walden, Global Head of Human Rights for Google, pointed out the need to navigate government pressure and legal challenges.

6. Local context: Speakers emphasized the importance of understanding and adapting to local contexts when forming coalitions, particularly in countries with limited civil society or media infrastructure.

The Role of Technology Platforms

The discussion touched on the role of technology platforms in election integrity:

1. Content moderation: Walden emphasized the need for platforms to balance content moderation with free speech concerns.

2. Transparency: Speakers called for increased transparency around content moderation policies and government removal requests.

3. Industry-wide collaboration: Claes de Vreese from the University of Amsterdam suggested that platforms should collaborate on industry-wide coalitions to address election integrity issues collectively.

Evaluating Impact and Future Directions

Speakers discussed various approaches to evaluating the impact of misinformation and coalition efforts:

1. Harm-based framework: Babakar proposed focusing on specific harmful narratives rather than all misinformation, using a harm-based framework to determine when intervention is warranted.

2. Measuring concrete harms: Ajikobi agreed on the importance of measuring tangible impacts, such as election interference percentages.

3. Online and offline impacts: Bramatti highlighted the need to consider both digital and traditional media impacts, particularly noting the importance of radio in African contexts.

Unresolved Issues and Future Considerations

Several unresolved issues emerged from the discussion, including:

1. Balancing content moderation with free speech concerns

2. Determining appropriate thresholds for platform intervention on misleading content

3. Addressing the challenges posed by AI-generated content

4. Adapting coalition models to diverse global contexts

Conclusion

The discussion underscored the complexity of combating election-related misinformation and the importance of multi-stakeholder approaches. While challenges remain, election coalitions represent a promising model for promoting information integrity and supporting democratic processes globally. The conversation highlighted the need for continued dialogue, collaboration, and innovation in addressing the evolving landscape of misinformation in elections, with a particular emphasis on building trust, adapting to local contexts, and leveraging technology responsibly.

Session Transcript

Jim Prendergast: Good morning, everyone. I think we’ll get started. Let me just get to the screen, the appropriate screen. So thanks, everybody, for coming. Good morning, good afternoon, good evening. Whether you’re joining us in person or virtually, welcome. My name is Jim Prendergast and I’m your moderator for today’s session, which is titled Combating Misinformation with Election Coalitions. If this isn’t the session you thought it would be, we’d like you to stay anyway. So 2024 was a watershed year for elections. The UN called it a super year for elections. Sixty-plus countries held elections this year. I believe that’s an all-time record. At a time when elections around the globe are increasingly vulnerable to the spread of misinformation, the stakes couldn’t have been higher. Disinformation campaigns not only undermine electoral integrity, but they also erode trust in institutions, diminish civic participation, and in some cases, polarize societies. But there’s good news, and that’s what we want to talk about. Today, we’re going to focus on the role of election coalitions, essentially partnerships between governments, civil society, the private sector, and fact-checkers, in countering the rising tide of misinformation. These coalitions have emerged as a promising approach to build trust, promote credible information, and strengthen election resilience. But their effectiveness depends on a lot of factors, including strong coordination, shared resources, and clear strategies. I’m excited to be joined by a great group of experts on this topic who bring a diverse set of perspectives and extensive experience to the table, both in person and virtually. First off, let me introduce Alex Walden, who’s the Global Head of Human Rights for Google. She’s seated here at the table with me. Wave to everybody online. Mevan Babakar, who’s the News and Information Credibility Lead for MENA for Google. She is joining us from London.
David Ajikobi, he’s the Nigeria editor for Africa Check. He is also remote. And then, finally, Daniel Bramatti, who’s joining us from Brazil. He’s an investigative journalist. And he wins the prize for the earliest time zone as a presenter. Before we begin, just a couple of things to point out. There are our speakers. Our session is going to start off with a couple of brief presentations from our speakers. I’ll kick it off with a couple of questions. But we really want this to be interactive. We want this to be highly participatory. So for those of you online and those of you in person, we really encourage questions and conversation and discussion. With that, let’s get started. Alex, could you sort of kick us off and help set the scene with explaining why you think election coalitions are important?

Alex Walden: Sure, thanks. I think this has been a banner year for global elections. And Google has taken it seriously in all of these dozens of elections that have happened around the world this year. And so it’s timely for us to be having this conversation, reflecting on the successes of the approaches that industry and our partners have had, and also looking forward to what we need to do to strengthen those. So I’m really glad we’re having this conversation today. I also think it’s appropriate that we would be having this conversation at the IGF, where we are all focused on the multi-stakeholder model and the importance of that. Everything about what we’re doing here at IGF is focused on the necessity of government and civil society and companies working together to ensure that we’re all realizing the benefits of what technology can deliver, and that those relationships and that working together should also inform how we address problems that come before us. That’s true across many types of issues, and in particular, it’s true for elections. And so I think my colleagues across the panel today are the best experts to demonstrate and talk through the ways that we’ve seen these successes. But at Google, we have billions of people who come to our products every day. And in particular, in the election context, people are coming to find information, including information about where to vote. And so we have an obligation and responsibility to make sure that we are doing the best to deliver information to those users. But for the things that are not necessarily in our power to change entirely, it is incumbent upon us to engage with the rest of the ecosystem to ensure that there is integrity in the way that we’re delivering information to all these billions of users around the world in the election context. So again, I’ll stop there.
I think Google’s really excited to be having this conversation and hear the input from everybody in the room and online about how we do this work going forward.

Jim Prendergast: Great. Thanks a lot, Alex. I’d now like to turn to Mevan, who’s going to explain to us a project she worked on, something she developed called the Elections Playbook. And Mevan, I’ll be driving the slides for you, so just let me know when you want to advance.

Mevan Babakar: Perfect. Thank you very much. Can you all hear me? Excellent. OK, great. Hi, everyone. I’m Mevan. I work at Google as well. I actually work in trust strategy now across knowledge and information. So that touches on Search, that touches on Ads and other products that we have. But previously, I used to work in the Google News Initiative. And before Google, I worked in fact-checking for a decade. I used to work at Full Fact, the UK’s independent fact-checking charity. In my time at Full Fact and also at Google, I saw the power of election coalitions. And one of the things that became very clear to me is that election coalitions are actually quite a magical way of scaling the work of journalists and campaigners around the world, especially during elections. So I’m going to talk to you today a little bit about the short history of election coalitions, a research project that we’ve done specifically to capture some of the learnings from around the world, how you can form and organize an election coalition, and some of the lessons learned from all of those interviews that we did as well. We’ve got 10 minutes. It’ll be a bit of a whistle-stop tour. But if anyone has any questions, feel free to just jump in and ask them. So next slide, please. So 2024 was a very big election year, as Jim mentioned. More than two billion people voted in over 60 different countries. But as we all know, misinformation is, unfortunately, a big part of elections and has been around for as long as elections, and probably longer. There are lots of ways to combat mis- and disinformation, but there is no silver bullet. There are very subtle and sometimes not so subtle nuances between countries that make quite a big difference in how you would combat misinformation.
Things like public broadcasting, community participation, press freedoms, all of these things actually necessitate a specific country-level intervention. Over the past decade, journalists and fact-checkers have come together to form these election coalitions. What they essentially are, just as a very top line, is when journalists, fact-checkers, community organizations, sometimes lawyers, sometimes researchers, join forces and share resources or share the impact of the work that they do during a specific event like an election. So it might be actually sharing the resources of their media monitoring, the actual research that they do. It might be sharing the learnings of the actual fact-checks or the journalism. It might be sharing the impact or scaling the actual outcomes of the work itself. One of the earliest examples was Electionland in 2016. This was a U.S. coalition that was set up with 1,100 journalists working together. It was a nationwide effort to cover voting rights and election administration in 2016. So there was a narrative going around that basically the election was rigged, and that narrative is one that still exists today, but key claims come up each time that the election was rigged. Historically, at least in 2016, the newsrooms were primarily focused on reporting the outcome of what happened on election day and the run-up to the political ins and outs. Voting issues were sort of relegated to secondary coverage. So a bunch of journalists and newsrooms came together and started Electionland to kind of combat that, especially because in the US, the election laws vary drastically from state to state, and even county by county. So no national newsroom was at the time in a position to cover election administration through a wide lens. So all these newsrooms came together.
And actually, one of the things that they did that was quite new at the time was using social media to alert the local newsrooms and journalists that were taking part to specific claims that were coming up around the election being rigged, so that they could localize specific narratives and specific claims to certain regions. And on top of that, they had 1,100 journalists immediately and authoritatively rebut some of the pieces of misinformation that were coming out. It kind of showed at the time that news organizations could work together, and that they can collaboratively serve as a watchdog for this crucial democratic moment that was taking place. In 2016, the Electionland project won an Online Journalism Award for its work. And since 2016, there have been at least eight more coalitions, I think probably more like 12 at this point. And they have operated not just for single elections, sometimes across multiple elections. Like Comprova, for example, which we’ll hear more about later, has run for multiple years now. And more recently in 2024, although their logo isn’t on this slide, we’ve had the Shakti coalition in India, which is about 40 organizations coming together. And in the EU, Elections24Check, which was 45 organizations across Europe, working across 37 countries, who published 3,000-plus fact checks around the EU elections. I think it really shows that when newsrooms come together, or when fact-checkers and community groups come together, the impact can scale quite drastically. There’s something quite special in that model. Next slide, thanks. So Google has a long history of supporting these election coalitions, and we wanted to understand how to effectively build them and what they should look like to serve the needs of voters in the countries in the run-up to elections.
But more importantly, there had already been so much learning from the past decade, and it felt a bit like every single time everyone was starting from scratch. So we wanted to run a six-month research project and talk to all of the election coalitions that had come before, to understand exactly what the best ways of setting them up are, what all the lessons learned over the past decade are, and how we can effectively build them going forward. We ended up talking to 15 global experts, and the countries that we touched on were France, Brazil, Argentina, Mexico, Nigeria and the Philippines, as well as the US. One of the key things that has come out of the learnings is that there’s really no one-size-fits-all approach to building a successful election coalition, because each country is very unique in how it’s set up. There are often different election laws, different voting systems, and different news consumption habits like radio, TV, social media. If these things are turned up or down, you’ll need to change how you do your monitoring. There are also different types of misinformation taking place. Sometimes there are one-off instances of claims that are more honest mistakes, where someone has misinterpreted something, while sometimes there are types of misinformation and disinformation that are direct foreign interference, and of course these things need different approaches. But having said that, there are some things that are shared across all the successful election coalitions. And by asking the right questions, we can start to build something much quicker and much more viable. So the first of the stages and needs in election coalitions is to identify the need: actually understand what it is that you’re trying to do in the first instance. Although they’re called election coalitions, and a lot of them are around mis- and disinformation around elections, the model is broader.
That model has also been extended to pandemics, for example the COVID-19 pandemic and others; epidemics in local regions kind of share the same model. So figure out what the need is that you’re trying to meet specifically. It’s also important to identify the lead. A specific organization often takes charge of the larger coalition, not necessarily as the spokesperson, but as the organizing lead for any kind of coalition to take place. And this is one of the things that came out of the interviews: it’s really important that the organization that takes the lead in that country context is seen as neutrally as possible, or as balanced as possible. Because a really key point of the election coalitions is that you want a broad spectrum of actors and journalists that meet the needs of voters. And depending on how polarized that ecosystem is, you might want to use it as a means of building trust in institutions, or building trust in journalism or fact-checking, or whatever it is that’s happening in those countries. So having that as a key aim really, really helps, and identifying a lead that is as neutral as possible helps build that bridge. Defining membership, whether it’s formal or informal, and whether you’re focusing on subject experts or technology partners: these are all very important steps, and things to formalise before the actual coalition comes together. And then I think the next two are very important, actually. Implementing capacity building programmes is especially important for an election coalition when there are multiple media organisations working together, because historically, those media organisations work against one another. They’re competitors. And I think that what they’re doing here is actually quite unique. They’re coming together, sharing resources, sometimes sharing outputs. And they’re working in a much more collaborative way.
So trust building is an incredibly important part of these election coalitions. And trust isn’t something that is earned overnight. It is earned through example, it’s earned through case studies, it’s earned through the experience of working with one another. And the more times that you can bring people in the election coalition together in person, the better it will be for that. And then on top of that, making sure that people have the same skills and resources available to them. Developing clear coalition policies is key. There are actually two models for election coalitions that I have seen so far; we call them the collaborative approach and the independent approach. In the collaborative approach, the organisations actually share resources to do the media monitoring together, they check together, they edit together; it actually becomes one mega newsroom. And they publish the final outputs of the pieces across the multiple media organisations as well. That’s the collaborative approach. We see the independent approach more often when people don’t necessarily have the trust to jump in together yet. In the independent approach, there’s no commitment to share the output, so often people will share the media monitoring side of things, but then do the check or the article through their own independent editorial processes. And then that’s shared across newsrooms or across a platform, and organizations can choose whether to publish it or not, so there’s no commitment to share it. But still, there’s a lot of value there in understanding what the shared narratives happening in that country are, and what the gaps are that still need to be filled across the ecosystem. And then other things like figuring out the branding of the coalition, the code of ethics and standards, and correction policies are incredibly important when many newsrooms come together. Next slide, please.
So some of the key things that came out were about preparation: starting early and planning for scale is incredibly important. With an election coalition, you can’t start too early. There’s a lot of prep to do for them, and the sooner you build trust, the better. Diversity and collaboration is a really key part. We’ve already mentioned that that scale and that breadth of partners is very important. But often you have a layer of journalists, and then that intersects with the community as well. And so in some places, you actually get media and civil society organizations taking part as well. And that’s an opportunity to go even broader and more diverse in trying to get the stories out there. And finally, context. So actually understanding how the context of your own country might be changing. In some cases, for example, there might be a growth in AI misinformation, and understanding whether you have tools across your election coalition to actually be able to combat it. Next, please. I’m going to just quickly touch on two case studies. One of those was CrossCheck in France. In 2017, it brought together over 30 organizations, and it was led by Agence France-Presse (AFP), who took on the editorial leadership of it. They had 37 partners, and across the videos that they all shared, they had 1.2 million video views in total. And they published hundreds of articles between them. Grégoire Lemarchand, the chief editor of the digital verification program at AFP, said: “This is, for us, one of the biggest wins in AFP history. CrossCheck will always be special, personally. Sometimes I meet colleagues who took part in this project, and they say, do you remember CrossCheck? That was so great.” And I think that’s a really key part: the trust that it builds across journalists is really important, and it lives beyond the election coalition too. Next slide, please. Then we have FactsFirstPH, which is one of my favorites. Sorry, everyone else.
But they had 131 partners working together, and they published 1,400 fact checks. And Gemma Mendoza, the head of research and strategy at Rappler, said: “The thing with these is that these are experiments. I wouldn’t say FactsFirstPH was perfect. At the time we were experimenting.” And the reason why they wanted to experiment was because there was a huge challenge. And it’s true, there is a huge challenge. And even when we look at these numbers, 131 partners and 1,400 fact checks, it might not feel like it’s big enough to meet the scale. But I think one of the important things we need to remember is that with misinformation, there are often just a handful of narratives that are the most well-known and well-seen narratives and that cause the most harm. And actually, if we focus efforts on those narratives and those pieces that are being seen the most or are the most harmful, you can go quite a long way to interrupting the flow of misinformation in each country. Next slide, please. I think one of the things that the Rappler team did very, very well in the Philippines with FactsFirstPH is that they introduced something called the MESH. They had all of these authoritative information sources in blue: journalists, expert institutions, fact-checkers. These were actually producing the research. And then in red, the MESH: they actually had over 100 orgs that were separate from the information providers. And these were influencers, NGOs, communities, trusted people in their communities. And they would then go out and share the outputs of the election coalition more broadly. And I thought that was a really amazing model; the kind of impact that had in the Philippines in terms of building trust and showing people that there was an answer to misinformation was actually very, very powerful. And then just more broadly on top of that, there was also research, taking all the learnings from that, and then finally, accountability and change.
There are some actors around the world that take the outputs of the authoritative information that’s being found or introduced into the world to combat harms, and actually use it as evidence to hold people accountable, for example in the International Criminal Court sometimes, or in legal cases. And I think that that’s also a really important part of the misinformation challenge. It’s not just about combating misinformation. It’s also about looking for that systemic change that might improve the system overall. I’m going to leave it there. I appreciate I’ve been talking for a long time, but I’m really pleased to share this with you all today. If you want to learn more about all of the case studies and go into depth on any of this stuff, there’s the Election Coalitions playbook that we’ve published alongside Anchor Change. And there’s a podcast as well with Claire Wardle and Daniel, who’s here today, where you can get a summary of everything. So please download it, enjoy it, use it. And if you ever make an election coalition, get in touch. Thank you very much.

Jim Prendergast: Great, thank you very much, Mevan. I think I know the answer to this question before I ask it, but I’ve noticed at least in the room, some people are taking camera shots of the slides. Are you willing to share them with anybody who would like them? Yeah, perfect. That’s what I figured. Okay, great. So we’ve already had a couple of examples, case studies of country election coalitions. I’m going to now ask David to share his experience with election coalitions in Africa. Good morning, David. Are you there?

David Ajikobi: Hi, everyone. Can you hear me? Can you see me?

Jim Prendergast: We can, we can hear and see you, and you’re looking great.

David Ajikobi: Greetings from Lagos, Nigeria. So I think Mevan has already sort of set the tone for the conversation, so I just wanted to add a few things. Largely for us at Africa Check, we are the continent’s first independent fact-checking organisation. And what we’ve sort of done with election coalition work is to also help other countries, around their elections, to set up the coalitions. It sounds very pretty when Mevan was saying it, but it can be very problematic, particularly in a continent like Africa where, historically, the media is often owned by either politically exposed people or politicians or by government. So what we’ve been able to do essentially is to say, look, we would bring everybody to the table, and we’ll have a common interest: we want our elections to hold, we want integrity in our elections, we do not want disinformation to be the third candidate or the fourth candidate in African elections. And so far so good; we’ve actually established some successes. And I’ll give you an example. I just got back from Ghana, where we were able to sort of foster an election coalition, comprising Dubawa Ghana, FactSpace and other partners. And what we essentially saw was this. Traditionally, people did things in their own corner, right? But coming together, we were able to form a formidable front. And we saw how that panned out in the last elections in Ghana, which brought about the election of John Mahama as the new president of the country. So much so that the inclusive nature of the collaboration also helped, because, for example, in Ghana, there were situation rooms in Accra, in the south, and there were disinformation monitoring rooms in Tamale, which is in the north.
So what that did was that we were able to map out not just the actors, but also the patterns we were seeing from region to region. And I can say the same thing about the Senegal elections that brought about Bassirou Diomaye Faye, where we had an election coalition, and we had the same thing even in South Africa. If you followed what happened in the South African elections, where for the first time we had a government of national unity, we saw how disinformation played out in all of that. I’ll give you my own context in Nigeria, where we had elections in 2023, which was dubbed one of the biggest elections in Africa, with more than 50 million voters. Having a situation room, having an election coalition, was very valuable, not just for fact-checking or debunking alone, but also for prebunking, because we already had experience from the 2019 elections, where we did coalitions, so we knew what kind of election disinformation would spread. On election day, we would have things like, oh, one candidate has stepped down for another candidate, so we could actually debunk that. But also, just like what Mevan said, we were also able to use the coalitions to introduce AI tools, new tools, capacity building. So for example, the tool developed by Full Fact in collaboration with Africa Check and Chequeado was provided for free to the coalition members in Ghana, Nigeria, and practically all the countries that had election coalitions in Africa. And for us, that is a very, very big step, because naturally, those individual organizations might not be able to access that. But with the coalitions, we were able to give it to them for free, onboard them for free, in collaboration with Full Fact. So for us, it wasn’t just about the election. It was the opportunity to collaborate at the largest scale. And I’ll give an example. In my context in Africa, radio plays a very important role.
So you cannot talk about election coalitions without talking about the impact of radio. If you look at the structure that Mevan presented, with the collaborative model, what we did was that, apart from the fact-checkers, the CSOs, and the media agencies, we also partnered with radio stations across the continent when we were doing election work. What that did was that our content was able to reach a lot more people, including people who were in news deserts or in underserved communities; this is what we would call media inclusion. So for us, that was very, very important. And we think that, with more elections coming around in 2025, it’s an opportunity to connect and do more. Thank you very much.

Jim Prendergast: David, thank you very much. Turn your camera back on for a second, because I want everybody to note that David gets bonus points for color coordinating his outfit with the theme color of the IGF. I also want to thank you for calling out the importance of radio. I think so many people are focused on the next technology and the future of technology and where these problems are happening that sometimes, myself included, we forget about what’s already there, or how different environments consume news. So your comments about radio are certainly hitting home with me, and I’m sure with others. So Daniel, you’ve got some experience with this in Brazil, and I believe we got you up in the middle of the night to share it with us. So why don’t you tell us how it went for you and some lessons learned there.

Daniel Bramatti: Well, thanks for having me. I am the editor of Estadão Verifica, the fact-checking unit of the O Estado de São Paulo newspaper, and also a member of the advisory board of the IFCN, the International Fact-Checking Network. I’m going to talk about the largest, most successful and most durable collaborative project involving journalism in the history of the Brazilian press, which of course is the Comprova project. The origin: at the beginning of 2018, the Brazilian Investigative Journalism Association, Abraji, was invited to organize and coordinate a coalition of 24 media outlets to combat misinformation and disinformation in that year’s presidential elections. I was then president of Abraji. The invitation came from the researcher Claire Wardle, then head of First Draft and author of the famous 2017 report Information Disorder. Google was one of the sponsors of the project. I have to say that at first, not all media outlets showed enthusiasm for the project. Of course, the news market is very competitive in Brazil, and there was no culture of collaboration between different companies here. But gradually, resistance was broken down, mainly because there was great concern about the impact of disinformation campaigns during the presidential race. Everyone knew that the challenge of containing disinformation was too great to be faced in isolation. And all decisions related to the project were made through consensus building, without imposing directions or rules. Even the name of the project was chosen by the participants themselves. In Portuguese, comprova means to verify or to check, and it also sounds like the words com prova, which mean with proof. So there is a wordplay here. An important decision we made was to limit our verification work to content generated by social media users. We didn’t check the candidates’ speeches or statements.
As one of the candidates lied a lot more than the others, it was probable that he would be the most contradicted, and many media outlets were concerned about the possibility of conveying the idea that they were against this candidate or that they wanted to benefit his opponents. The vast majority of the media outlets invited to take part in Comprova did not have fact-checking units in their newsrooms. So dozens of journalists had to be trained using the methodology provided by First Draft. These professionals were from TV stations, radio stations, newspapers, magazines, and digital native media; organizations of different sizes that reached different audiences in different parts of Brazil. In essence, Comprova put journalists from different companies to work together to debunk misleading content. And the final result was only published after a cross-checking process, meaning that at least three media outlets not involved in the original fact-checking had to give their approval to the work done by the other colleagues. In addition to working together, another important aspect was the amplifying power of the media outlets involved. The fact-checks were almost always published by all 24 participants in the project. After our first face-to-face meeting in May 2018, Comprova was officially launched in June, during the Congress of Abraji. And in August, we started publishing our first fact-checks. The election campaign, which ended in October, confirmed our worst fears. There was a huge circulation of misleading content, and this content generated enormous engagement with a public that wasn’t prepared to deal with the problem. We had a lot of work, but also a lot of enthusiasm. All the work was done remotely, and I’m talking about two years before the pandemic here. To coordinate our activities, we used a WhatsApp group. The amount of messages exchanged in this group was immense.
In six months, around 50 journalists exchanged more than 18,000 messages in the group. And I did a word count on these messages and found that more than 315,000 words were written. For comparison, that’s more text than any book of the Harry Potter saga. So we learned some lessons. Number one, a shared purpose motivates journalists much more than competition. Number two, horizontal collaboration works best if there is central coordination. Number three, the role of the central coordinator, the project editor, is not to give orders like a boss, but to act as a diplomat who seeks to build consensus and break down resistance when needed. And we learned, fundamentally, that fact checking is hard, very hard. Sometimes it took us days to get the information we needed to disprove a piece of content that clearly had been created in minutes. We managed to publish around 12 fact checks per week, or 147 in total. Organizers and participants were very satisfied with this experience. And as a result, Comprova did not end in 2018, as originally planned. The consortium remains active to this day, with the mission of fact checking rumors related to public policies, health, climate change, and other topics. We also worked together during the pandemic, fact checking false rumors about vaccines and the virus, and during the electoral campaigns of 2020, 2022 and 2024. The number of participants grew beyond the original 24. Our work in 2022 was especially important, because in that year there was a wave of attacks on the integrity of the Brazilian electoral process. There was a lot of content citing false vulnerabilities in the electronic voting machines and suggesting that there was fraud to benefit one of the candidates. We didn’t know at the time, but many of these rumors were created and spread by state actors, by intelligence agents from the Brazilian government, with the aim of destabilizing our democracy.
Recent investigations by the Brazilian Federal Police have revealed that we almost suffered a coup d’état that year, and that the disinformation campaigns were part of the plan. We still have democracy in Brazil, and I don’t want to exaggerate our role, but I think I can say without fear of being wrong that journalism contributed to this result. Thank you very much.

Jim Prendergast: Great, Daniel, thank you. So, unlike many of the sessions you may have been at during the IGF to date, we have this room, both physically and the Zoom room, until 11 a.m., so that’s 45 minutes we have set aside for comments, questions and discussion. As I told you at the outset, we wanted this to be as interactive and engaging as possible. I actually see we have a question already online, but while I get my act together on that, I’m going to throw one out to the group, to let folks in the room think about it. One of the things that I was struck by, and Danny, you talked about this, is pivoting from an election to other things like the pandemic, where you’re doing fact-checking. I guess for everybody: how do you keep the momentum going? You know, I’m biased. I just came out of a national and congressional election in the U.S. where we were bombarded nonstop with election ads and all sorts of stuff. And frankly, we’re tired. I can’t imagine how journalists feel coming off of a cycle like that. How do you keep the momentum going, both from elections and for other issues, you know, either state or local, or other events like a pandemic? So whoever wants to take that first, go ahead.

Daniel Bramatti: I can go first. In the Comprova case, the media outlets that participate in the project are the same, but not the journalists. We rotate the team so that more people can get together and learn from the others. So basically, we have a fresh team working together every year. So this is not a problem to us.

Jim Prendergast: Mevan or David, any comments on that one? Go ahead, Mevan.

Mevan Babakar: Sure. From my experience of being in election coalitions myself before Google, and being a journalist for a long time, I think it’s really important to look after yourself in those situations and to look after the team. And it’s important to also step away when it becomes too much. The emotional burden that a lot of people take on in these situations is quite high. When we talk about elections, that’s one experience, but a lot of people are fact-checking during conflict, in war zones, or doing work where you end up seeing things that are quite harmful yourself. And I think the wellbeing of the team and the people is actually the thing that must be preserved and looked after beyond anything else. So I wanted to recommend a handbook that was written about vicarious trauma and how to look after people in a newsroom specifically. I’m going to put a link to it in the chat, but it has really great recommendations for how to look after journalists, newsrooms, and campaigners so that they don’t experience vicarious trauma through the work that they do. And I think it’s a really great resource for answering that question.

Jim Prendergast: Great, thank you. David, I saw you flash your camera on. You wanna weigh in?

David Ajikobi: Yeah, so for us, we had a very interesting case in 2019. In 2019, we had an election coalition, and it was sort of midwifed by a foreign organization. And because the reasoning, the thinking behind it was not so clear to some media partners, what happened was that, you know, they expected to be paid. And I can tell you that when they got paid and the money dried up, people left the conversation. Only Africa Check, Dubawa, and FactCheck Hub, three IFCN members, stuck to that goal. But what we did with the 2023 elections in Nigeria was to say, look, we wanted people who understood the role of media in a democracy like Nigeria. For example, my country has had decades of military rule. So having elections done properly, and the outcome of the election handled properly, in a country like this is very important, and the media has a role to play in that. And by doing so, we had partners who were committed to that. In the election coalition in Nigeria in 2023, there was no single external funding. In fact, there was no funding at all. What we did was to collapse our individual election work into the coalition work, so we were all equal stakeholders, right? And also, just to speak to that point, Nigeria has off-season elections. We’ve had state elections in about four or five states in 2023, and we’ve come together again to set up the coalition. Africa Check set up the one in Lagos; Dubawa and FactCheck Hub set up the one in Abuja, for example. And we’re seeing across the continent that there’s a lot of funding and sponsorship and support coming from Google initiatives and other funders for the coalitions. But we think that if we have journalists and fact-checkers and media partners who have a common understanding of the role of the media in democracies, it wouldn’t be a problem.
It’s not been a problem so far in Africa and some of the election coalitions that we have coordinated. So I think that’s one of the key successes we’ve had.

Jim Prendergast: Thanks a lot, David. So we do have an online question, which I’ll read out. It is from Hazel Bitanya. I hope I got that right. Do you have any experience or thoughts in involving children or young people who are non-voters but would like to contribute to the discourse, either as fact-checkers or part of disinformation campaigns or as the target audience of these campaigns? Who would like to take that?

Mevan Babakar: I can jump in really quickly. I haven’t seen any young people being included in an election coalition specifically, but maybe David and Daniel will know more than I do on this. I do know that there have been media literacy efforts that include young people, for sure. One that comes to mind is the Teen Fact-Checking Network from MediaWise, which is run out of Poynter in the US; they would actually go and work with teenagers and teach them what it even looks like to fact-check, what a fact-check is, how you can go out and check something that you see on social media. And a while ago now, Chequeado in Argentina used to run a really big schools network of fact-checking as well, and they had a series of videos that actually had nothing to do with politics. It was a lot about: you have seen a post on the internet about your friend, and someone is pretending that your friend has done something they haven’t. They set it up almost like a series where you were a detective trying to figure out what had happened, and you could use a reverse image search and run a couple of searches that would actually help you get contextual information. It really helped the young people who took part not just learn those skills, but also to ask the right questions. And I think that’s a really important part of it. It’s not necessarily about learning fact-checking through the lens of politics; it’s actually just being critical in your day-to-day when you see something, and I think that questioning part of it is the most important. I’ll link to those two projects as well in the chat so you can see them.

Jim Prendergast: Thanks. David, did you want to add something?

David Ajikobi: Yes, I want to add quickly that specifically for election coalition work, we involved the Nigerian Union of Campus Journalists. These are students who are campus journalists, based on campuses, pretty much like press clubs, you know. We invited them to the situation rooms when we were doing the elections, which would typically last for a week or two, to see how we were operating every day and how the fact-checking process works in newsrooms and in that context. Then two, we also had student volunteers, students who would come and say, oh, can we join you guys? So that was very important. Beyond the coalition work, at Africa Check we also tried to do what we call the Finnish model, where we were trying to reach school learners who were below voting age. We had a project sponsored by a UN agency, where we went to schools to actually teach them the basics of fact-checking, with very simple exercises. And we incorporated games, because we think it’s easier to catch young people’s attention with games and things like that. And we’re seeing that the feedback has been very fantastic, particularly because these school learners will turn 18 and will be the next batch of voters in 2027. We thought that raising their critical thinking skills now would help them navigate election information when the next wave of elections comes up in Africa. Thank you.

Jim Prendergast: Great, thank you. We’ve got a couple of questions in the room. So I’m going to pass this microphone to the woman to… Hopefully, it’s still working.

Lena Slachmuijlder: Yeah. Hi, my name is Lena Slachmuijlder. I’m with Search for Common Ground and the Council on Tech and Social Cohesion. I just want to congratulate all of you, because this is exemplary work. It also aligns with what Google has signed on to, the Voluntary Election Integrity Guidelines for tech companies, which IFES worked on with you and many other industry partners, and where there’s hopefully momentum to try and put these kinds of things into practice. So I just want to really acknowledge that. It’s hard work. It’s good work. But I have four questions that I want to raise, and I’m very curious to hear what you think. The first is that there is a lot of evidence about Google’s ad policy monetizing mis- and disinformation. So while the fact-checking work is critical, you actually have an upstream driver of misinformation that doesn’t seem to be discussed in these kinds of conversations. Secondly, we see how generative AI is very quickly taking over search. That includes your own AI summaries, plus all of the competition between the AI companies, which could seriously disrupt what Google has done so well over the years in terms of upranking higher-quality information. It’s been a big point of credibility, but this risks disintegrating. Number three is that these coalitions, and the examples from Indonesia and other places, are excellent. But we had, what, 60 to 80 elections? And we work in places that are struggling, that are conflict-affected. These kinds of coalitions don’t happen in places like Chad. They don’t happen in places where the distrust between civil society and government is so incredibly deep that it’s difficult. So the question is: how does Google act when in fact there isn’t a coalition? Do you try and take the initiative?
And the last question is similar in the sense that I believe Google was part of the pre-bunking effort in Europe to try to tackle misinformation through pre-bunking. And if I’m not mistaken, it was an initiative that you took. But you haven’t taken that initiative in all the other places, notably in the global south, where we have similar issues. And sometimes the consequences of misinformation in these conflict-affected societies is deadly.

Jim Prendergast: Thank you very much. There’s a lot to digest there. Mevan, do you want to take the first shot at some of it, what you can? I guess one thing I would ask, and it was one of the questions I actually had for David because he used the term first: pre-bunking. Up until this week, I’d never heard that word before. So explain that as part of your answer.

Mevan Babakar: Sure. Let me first say, I think those are all very important questions, and I’m grateful that they have a forum to be asked in. Pre-bunking is when there is a narrative that is trending in a country, or a series of claims that add up to a narrative, that might be seen at the sharp end by a news outlet, et cetera. Instead of dealing with it after it’s been published and after it’s actually trending and viral, pre-bunking deals with it beforehand. So for example, in the UK, I know from my years of fact-checking that every single election there’s going to be a claim that comes up around day one of voting, or the day before voting, that says if you use a pencil to mark your X on your piece of paper, then your vote is invalid. That’s a claim that comes up every single time. It’s not true. But it comes up, and it’s sometimes used to disenfranchise people. Another claim that comes up will say: if you’re voting for this party, you vote on this day; if you’re voting for this other party, you vote on that day. It might feel innocuous, but these are things that we know are going to come up, and sometimes they can cause harm and might disenfranchise specific populations. So instead of dealing with that piece of misinformation after it’s actually gone viral, a pre-bunk will warn people that this is going to happen, maybe weeks and months in advance. It will say that one of the tactics we’ve seen is these kinds of claims being used to disenfranchise people, or it will teach people about straw-man arguments, or the kinds of tactics and manipulations that take place, so that people are sort of inoculated or vaccinated against the misinformation when they see it. So that’s a pre-bunk. I think a really important part of a pre-bunk is that it’s not Google trying to push this out there.
It’s actually the community organizations that have the relationships. And I think this goes to some of the questions. In a lot of these cases, and this is why we do the work on election coalitions, it’s not just one organization pushing out a narrative. It’s communities identifying misinformation that affects them, and then those same communities being empowered to combat that misinformation themselves. Because it’s one thing for somebody in that community to fix it; it’s another thing for an external party to come in and say this is how it should be. And we both know which one is going to engender more trust. I think that’s a really important part of this puzzle. In the case of pre-bunking, it’s still a relatively new effort, and it’s one that’s led by Jigsaw. Just last week, or at the beginning of December, they graduated pre-bunking into the real world and handed it over to a series of community organizations. The idea is that those community organizations will be the ones that further it and grow it. And that includes people from across the globe, so it shouldn’t be just an EU-centric effort. It should be something that exists around the world. But it is resource-intensive, and it requires infrastructure. I think that part of the election coalitions’ work is building the infrastructure for things like pre-bunking to jump off of, because having that layer of community organizations and journalists working together is the scaffolding we need for things like pre-bunking to actually take effect. Your other questions were about how Google acts when there isn’t an election coalition, or when that kind of infrastructure isn’t already in place, like you mentioned with Chad, for example. I think that’s a really important question.
And I also think there’s an element of that that’s about understanding what the prerequisites for an election coalition are. In some cases, yes, it does require community organizations to already exist; it requires services; it requires a certain number of media organizations to be present. And in the cases when those things aren’t present, we have societal conversations, we have societal challenges that we need to tackle. That’s not something Google should do in isolation. It’s something we all need to talk about together, and Google plays a part in it. So it’s about figuring out how to create those structures in a completely different environment. And then finally, on Gen AI and ads. On Gen AI, I think it’s a really important question, and actually a lot of my work at Google these days is about building tools to support fact-checkers working with Gen AI. It’s important to say that a lot of fact-checkers are really excited about using Gen AI and AI tools, and I think that’s sometimes missed as a part of this conversation. The scale of the misinformation that already exists is quite high, and I think we’re all aware that manual efforts alone are not enough to fix it. So there’s an opportunity to use AI to actually help the battle. That’s not to replace anybody, but to support the efforts. And David already mentioned the Full Fact AI tools that are used by over 50 organizations around the world now. They help fact-checkers spot repeat instances of misinformation and do some primary checking. It doesn’t take the fact-checker or journalist out of the equation, but it supercharges them to do more and more at scale. And I think that’s really important.
And then, still on Gen AI: the EU election coalition that I mentioned, Elections24Check, we actually funded them to do some research with the 3,000 fact-checks they did this time around, to tag the ones that involved Gen AI. And it was surprisingly low, the number of instances of Gen AI that caused harm in an election cycle. I’m not saying that it won’t cause harm; obviously, it has the potential to. That’s an obvious thing. But I also think it’s interesting to consider that at this moment in time, it’s not doing that. So how can we actually find instances where pre-bunking might help with Gen AI? In Taiwan, for example, one of the ministers put out deepfakes of themselves ahead of an election to inoculate the population against them. And I think that’s a really interesting case study. But having said that, the potential harms of Gen AI are still quite high, and there are a lot of efforts at the moment across Google to combat potential harm from AI, especially around election information. So there’s something called SynthID. This is watermarking, where we actually add a little signature into any image that’s generated by Google, and we would be able to flag in any of our tools whether an image is AI-generated or not. We’re also part of something called the C2PA coalition, which is an industry standard for assessing where information has come from, the provenance of information. Those are being added into our tools right now, so if you see an image, you’d be able to say this is where it came from. And beyond provenance, we’re also working on a series of tools that are about giving people more context. So when you see a piece of misinformation, or when you see that it’s AI-generated, you can also go to things like About This Image or About This Page on Search, which tell you: how old is this image? Where has it come from? Who first published it?
Are there any fact checks about it? Encouraging people to do lateral reading around it is a really important part of this. So that’s the user-intervention side of it. And then finally, there is a whole host of policies that remove thousands and thousands of ads every single day. Whether or not those thresholds are in exactly the right places, that’s a conversation that’s constantly being had, and it changes in each country and with different laws and regulations. But I think it’s an important challenge. The thing I’d like to leave you all with is that this isn’t a thing where there’s just one answer. It’s different in every single country, it’s different at every single threshold, the context keeps changing, the tools keep changing. And actually, it’s something where the see-saw of it is hopefully going in the right direction, but we do keep see-sawing, if that makes sense. I’ve spoken a lot, but I’ll leave it there. Thanks.
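[Editor's note: Mevan describes AI tools that help fact-checkers "spot repeat instances of misinformation." As a rough illustration of that general idea only, and not of Full Fact's actual system, whose implementation is not described in this session, the sketch below flags posts that closely resemble an already fact-checked claim using simple normalized string similarity. The `find_repeat_claims` helper, the example claims, and the 0.8 threshold are all hypothetical.]

```python
from difflib import SequenceMatcher
import re

def normalize(text: str) -> str:
    """Lowercase, strip punctuation, and collapse whitespace so
    superficial edits don't hide a repeated claim."""
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", "", text.lower())).strip()

def similarity(a: str, b: str) -> float:
    """Similarity ratio in [0, 1] between two normalized claims."""
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def find_repeat_claims(post: str, checked_claims: list[str],
                       threshold: float = 0.8) -> list[str]:
    """Return previously fact-checked claims the post closely matches."""
    return [c for c in checked_claims if similarity(post, c) >= threshold]

# Hypothetical claims, in the spirit of the examples discussed in the session.
checked = [
    "Votes marked in pencil are invalid",
    "Party A supporters vote on Tuesday, Party B on Wednesday",
]
matches = find_repeat_claims("Votes marked in PENCIL are invalid!!", checked)
```

[Production claim-matching systems typically rely on multilingual embeddings or learned matchers rather than character-level similarity; this only illustrates the shape of the pipeline: normalize, compare, threshold.]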

Jim Prendergast: Yeah, you’re entitled to another sip of tea there. So great exchange. Lena knows the routine because she’s asked questions before. So when you do ask a question, which we have a couple in the room, just please identify yourself and your affiliation. So I’ll turn it over to Milton.

Milton Mueller: Thanks, Jim. I’m Milton Mueller. I’m at the Internet Governance Project at Georgia Tech. I want to begin by challenging the term misinformation. I’m at a sort of computer science, algorithmically driven university, and the term tends to encourage the idea that misinformation is something that has a signature you can just recognize and somehow kick out of the bit stream. And I think the Google speaker was very perceptive in pointing out that it’s really narratives, it’s interpretations. And I don’t know why we don’t just say false or misleading information, because that makes it clear that when you interfere in these discourses, you are essentially setting yourself up as an arbiter of truth. I love this idea of coalitions of journalists coalescing to do fact-checking, because that is fully in line with the liberal democratic idea of the role of the press in a free society. You are not forcing anybody to do anything. You are simply responding to bad speech with correct speech, or good speech. But there’s an elephant in the room here that I hope to see addressed and that I want to ask you about, and I’m sure the Google people are very aware of this. There’s a high degree of concentration of communication and discourse around platforms, and as a result, the stakes of contestation over what those platforms suppress and what they promote are raised very high. In particular, when governments get involved in trying to influence those decisions, you get problems. You also get problems with perceptions of bias from the platforms, which are well known as being situated in liberal California and Silicon Valley, not exactly in red-state territory.
And perhaps the Hunter Biden laptop story is a perfect example of where you think you’re suppressing misinformation, but you’re actually responding to political pressure from people who think that a certain amount of information, which might actually be true, is going to harm the chances of their favored candidate in the election. So I’m concerned about how you set the threshold for where you actually intervene in these false or misleading narratives; I don’t want to use the word misinformation. And I’m particularly concerned about how you handle the role of government. We have a series of court cases, like Murthy v. Missouri, which went all the way to the Supreme Court. We have state legislation in Florida which is trying to regulate the way you make these decisions and impose common-carrier obligations. So it’s great to have these journalistic coalitions, but legally and economically, this issue is a lot more fraught than you’ve made it out to be here. And I’d like to know how you handle those situations, particularly when the government is an interested actor in the outcome of an election. What happens when you get pressure from governments to suppress information that may be damaging to them, or that may be an extension of their policy?

Alex Walden: I just don’t know if you wanted to pass the mic to somebody else and take more than one at a time.

Jim Prendergast: Sorry yeah we’ll take a couple of questions from the room and then we can sort of bounce them around.

Claes de Vreese: I think there’s sound now; that’s a good start. Good morning, everybody. My name is Claes de Vreese. I’m from the University of Amsterdam and also on the executive board of the European Digital Media Observatory. I have three quick questions. I really appreciate the eye for the local context in which these coalitions are built, but I wonder if you could speak a little more about how you choose which partners are in or outside of these coalitions, since there are so many new and relatively unknown actors when building these coalitions in different election contexts. So: how to build them, and which partners are in or outside of scope. The second question: what, in your Google playbook, is the best advice you give the coalitions in dealing with critiques they might get that they’re trying either to stifle free speech or to intervene in the elections, which is a common critique coming from different vantage points in different kinds of elections? And the third question: how is Google trying to be proactive in building a coalition that would also have multiple big tech platforms at the table, so that you would see a coalition driven by an industry interest rather than by one or two individual companies? Those would be my three questions. Thank you.

Jim Prendergast: Okay, great. So, Mevan, I hope you’ve refreshed your palate, because I think a lot of those are directed at you, but not all of them. Do you want to start us off, and then we’ll maybe work Daniel and David into the conversation?

Mevan Babakar: Sure. I’m actually out of tea, unfortunately; I feel like I need a top-up. But I just want to make it really clear, based on that last question, that these election coalitions are not Google-run election coalitions. These are communities of journalists and fact-checkers and social organizations that have come together, created the coalition, and then gone looking for funding, and Google just happens to be one of the parties that have funded it. I think that’s a really important part of this. These are important people in their own communities, in their own countries, coming together to build something that actually serves the voters of those countries, and the only thing Google has really done is support them, either with a resource or with funding, and run this research project to collect some of the learnings from them, so that if another group of organizations comes to us with an election coalition, we can say: hey, here are the lessons of the other coalitions that have come before you, and you can learn from them. So I really want to make that very clear. We don’t choose who the partners in an election coalition are. It’s not us picking and choosing; it’s the organizations themselves coming up with their own ecosystem, their own collaboration, their own policies, their own membership models, their own capacity-building programs. And I think that’s a really important part. You had a question about how they deal with the challenge of being accused of censorship and stifling free speech. I’m going to take off my Google hat for one second and put on my fact-checker hat. When I used to work at Full Fact, the UK’s fact-checking charity, we used to get a lot of these “fact checks are censoring speech” kinds of conversations. And our response at the time, which is still used, was: fact-checking is the free-speech response to misinformation. We’re not actually taking anything down as fact-checkers; we’re adding more context, we’re giving you more information so that you can make up your own mind. That’s how we used to deal with it as fact-checkers, but Daniel and David can probably give you a much better answer for where things are these days.

Jim Prendergast: Milton, we’ll get a response from Daniel and David first, then we’ll cycle back to you.

Daniel Bramatti: I think that Mevan’s response is perfect. I really contest this idea that fact-checking is censorship. You have to have content moderation on platforms; you have content moderation regarding violence, regarding pornography, regarding other things, and you also have to have content moderation regarding the flow of bad information, information that contributes to polluting our media ecosystem, our information ecosystem. There were so many questions; I just want to briefly mention how we decided who enters the coalition. In Comprova, at the beginning, our goal was to reach a large percentage of the Brazilian public. So we invited to the table all the big players in the media here. And also, we tried to balance different media organizations according to their editorial orientation, more to the left, more to the right, and also contemplating local players. So it was a very diverse group, in my opinion. And since then, since we decided to keep Comprova going, all the new participants ask to enter the coalition. We decide collaboratively, and everybody has veto power over whether we give the OK to that applicant. And to this day, to my knowledge, we have never closed the door to anybody.

Jim Prendergast: So coming back to you, Mevan, on Milton’s question about thresholds and how you determine when you take action and when you don’t. Did I capture that right?

Alex Walden: I can jump in just quickly on the one about the challenges of how we engage with governments, and government pressure, and how that does or does not impact us. And then I’ll kick it over to you, Mevan, to talk more about the definition of misinformation and the challenges around that. Although I will say on that piece, having worked on this from the beginning, when fake news was the term, and then we all decided fake news was not the right term, there are still many conversations happening all over about mis-, dis-, and mal-information. I think we at a company sort of have to just land on something and figure out how to operationalize it while we’re managing these harms. And we will also continue to be part of conversations around what’s the appropriate lexicon for describing what is just abuse or exploitation of the services that we’re providing. But when it comes to the challenges of government: on the one hand, obviously, we are deeply committed to partnering with governments across a lot of the work that we do, and increasingly so. But I also agree with you that governments and parties are interested parties in the outcomes of elections, and so we have to be mindful of the role that we have in engaging neutrally. Really, that gets back to the importance of us having clear and robust policies in place, to make sure that we are consistently addressing any of these issues as they come to us. So on the one hand, that’s about having clear product policies. How do we define election misinformation or misrepresentation, or the variety of other things that might come up? How do we ensure that that’s clear? And then we really do have to enforce that consistently, across all of our policies, in every country. And then also being clear about when, for legal reasons, we might need to remove something under a national law.
And so it’s perfectly legitimate for any government to say this content violates our local law, here’s an order, and you need to remove it. In that case, we would evaluate it under our standards. If, under our analysis, the request is consistent with the local law and we’ve received the appropriate process from the proper authority, we may remove that content. That would then be something we put in our transparency report, making clear to everyone that we complied with a national legal requirement. I think those things are what we rely on to make sure that we can be consistent everywhere we operate. It’s true that we will get pressure from governments, but we have that framework to fall back on.

Jim Prendergast: Thank you, Alex. Mevan?

Mevan Babakar: I think that was a great summary. The only thing I would add on terms is that, like Alex, I’ve seen a lot of work done on the different terms in this space, and sometimes different terms are helpful for different things. But the way that I like to think about it personally, the way that makes it very real and reminds me and others of the importance of this work, is through the lens of harms. There is some really excellent work being done at the moment by a professor called Peter Cunliffe-Jones at the University of Westminster to develop a harms framework specifically for mis- and disinformation. And the European Fact-Checking Standards Network, well, a couple of organizations in Europe, are looking to actually start using that in a meaningful way, to highlight the harms of misinformation in a completely different way. Because it’s one thing to say there’s misinformation, and it’s another thing to say there has been 5% election interference in this country, right, or vaccine misinformation that has led to this specific harm. That is the level of granularity we need to get to. At that point, we bypass some of the issues with the words, and we get to those claims and those narratives and those harms. And it’s only at that level of detail that we can start to understand what interventions are meaningful and who should take them. Should that intervention come from a government? From a platform? From the community or the people affected? Because actually, I think we need interventions at all of those levels. Thanks.

Jim Prendergast: Alex, did you have anything to add? No? Okay. I’ve just been shown the, we have five minutes to wrap up sign from our helpful tech support in the room. What I’m going to do is ask each of our panelists, I guess, if you had one piece of advice that you could offer people who might be interested in either participating in or starting their own elections coalition, what would that piece of advice be? And then a call to action coming out of today, what would you like to see happen? We’ll start with David, please.

David Ajikobi: The first thing I’ll say is that it helps if you’re not all seeing things from the same perspective. I also have to tell you that Google does not have all the answers on any election campaign that we have in Africa. What has helped us in Africa is that we were able to develop expertise very early on. Many times the coalitions are coordinated by AFP members, with transparency of funding and transparency of election policies in the things we check. And I’m trying to say you can’t check everything, so we have a methodology for what we look at, because we want all the bodies involved to be very transparent around the elections. I think that’s where we should be coming from, and that’s really the challenge that we have. Thank you.

Jim Prendergast: Great. Thank you, David. Daniel, turn to you.

Daniel Bramatti: Yes, sorry, my camera was off. My advice is: choose wisely the organization that is going to coordinate your coalition. As Mevan said, it has to be an organization that is, if not neutral, as neutral as possible: equidistant in terms of political stance, independent from parties, from government, and from private-sector pressure. I know that sometimes it’s difficult to find an organization with those characteristics, but it is essential to gain trust and to lead the work.

Jim Prendergast: Great, thank you, Daniel. Mevan?

Mevan Babakar: I’d say the relationships are everything in election coalitions. So similar kind of to Daniel’s points, but it’s really important to not underestimate the amount of work it will take to actually build those relationships across those media organizations and across those people. I think that when you get a group of people like that coming together who trust each other, that’s when something special can happen, but that takes time. And I’d also add, because relationships take time to build, maybe it’s not the first election that’s the best one, maybe it’s the second one or the third one. And I think that Comprova is a very good example of that. And I think that as those relationships build, so does the opportunity and the scale of those coalitions. Yeah, thank you.

Jim Prendergast: Great, thank you. And then finally here in the room, Alex. No? Okay. So I want to thank everybody. It was a great presentation, some very good questions, some very pointed questions, frankly, and a good discussion. For those in the room who want a copy of the slides, just come see me; I’ve already sent them to somebody online who wanted them. I do want to thank our panelists, who all got up at varying degrees of the middle of the night to join us, and Alex for joining us here in person. Thanks to everybody for showing up in person and online. More to come. And for those in the room, Mevan has been dropping links to various information into the chat, so when the recording is posted on the IGF website, be sure to go back; that information will be waiting for you. Thank you very much, everyone, and enjoy the rest of your day.

Daniel Bramatti: Thank you so much. Bye.

Mevan Babakar

Speech speed: 155 words per minute
Speech length: 5698 words
Speech time: 2200 seconds

Coalitions allow journalists to collaborate and scale impact

Explanation: Election coalitions enable journalists and fact-checkers to work together and share resources. This collaboration allows them to have a greater impact in combating misinformation during elections.

Evidence: Examples of successful coalitions like Electionland in 2016 and Comprova in Brazil were provided.

Major Discussion Point: The importance and effectiveness of election coalitions

Agreed with: Daniel Bramatti, David Ajikobi, Alex Walden (on the importance of election coalitions)

Pre-bunking can inoculate against expected false narratives

Explanation: Pre-bunking involves warning people about potential misinformation before it spreads. This strategy can help inoculate the public against false narratives that are likely to emerge during elections.

Evidence: Example of pre-bunking claims about invalid votes marked with pencils in UK elections.

Major Discussion Point: Strategies for combating misinformation

Focus on specific harmful narratives rather than all misinformation

Explanation: Instead of trying to address all misinformation, coalitions should focus on the most harmful and widespread narratives. This targeted approach can be more effective in mitigating the impact of disinformation.

Major Discussion Point: Evaluating the impact of misinformation and coalitions

Agreed with: David Ajikobi (on focusing on specific harmful narratives)

Differed with: Milton Mueller (on the approach to addressing misinformation)

Daniel Bramatti

Speech speed: 110 words per minute
Speech length: 1439 words
Speech time: 778 seconds

Coalitions build trust across media organizations

Explanation: Election coalitions help build trust and collaboration between competing media organizations. This trust is crucial for effective fact-checking and information sharing during elections.

Evidence: Experience with the Comprova project in Brazil, which brought together 24 media outlets.

Major Discussion Point: The importance and effectiveness of election coalitions

Agreed with: Mevan Babakar, David Ajikobi, Alex Walden (on the importance of election coalitions)

Choosing neutral leadership is essential for coalition credibility

Explanation: The organization coordinating an election coalition should be as neutral as possible. This neutrality is crucial for maintaining credibility and trust among participants and the public.

Major Discussion Point: Challenges and considerations in forming election coalitions

Fact-checking adds context rather than censoring speech

Explanation: Fact-checking is not censorship but rather a way to provide additional context and information. This approach allows people to make informed decisions without restricting free speech.

Major Discussion Point: Strategies for combating misinformation

David Ajikobi

Speech speed: 162 words per minute
Speech length: 1770 words
Speech time: 652 seconds

Coalitions help combat disinformation in African elections

Explanation: Election coalitions have been effective in combating disinformation during elections in various African countries. These coalitions bring together fact-checkers, media organizations, and civil society groups to address misinformation.

Evidence: Examples of successful coalitions in Ghana, Senegal, and Nigeria were provided.

Major Discussion Point: The importance and effectiveness of election coalitions

Agreed with: Mevan Babakar, Daniel Bramatti, Alex Walden (on the importance of election coalitions)

Media literacy efforts can engage youth

Explanation: Engaging young people through media literacy programs can help prepare future voters to navigate misinformation. These efforts can include teaching basic fact-checking skills and critical thinking.

Evidence: Example of a project sponsored by the UN agency to teach fact-checking in schools.

Major Discussion Point: Strategies for combating misinformation

Measure concrete harms like election interference percentages

Explanation: To evaluate the impact of misinformation, it’s important to measure concrete harms such as the percentage of election interference. This approach provides a more tangible understanding of the effects of disinformation.

Major Discussion Point: Evaluating the impact of misinformation and coalitions

Agreed with: Mevan Babakar (on focusing on specific harmful narratives)

Alex Walden

Speech speed: 176 words per minute
Speech length: 891 words
Speech time: 302 seconds

Coalitions are crucial for delivering credible information to users

Explanation: Election coalitions play a vital role in ensuring that credible information reaches users during elections. These partnerships help address the challenges of misinformation that individual organizations may struggle to tackle alone.

Major Discussion Point: The importance and effectiveness of election coalitions

Agreed with: Mevan Babakar, Daniel Bramatti, David Ajikobi (on the importance of election coalitions)

Coalitions must navigate government pressure and legal challenges

Explanation: Election coalitions face challenges in dealing with government pressure and legal issues. It’s important to have clear policies and consistent enforcement to maintain neutrality and credibility.

Major Discussion Point: Challenges and considerations in forming election coalitions

AI tools can help scale fact-checking efforts

Explanation: Artificial intelligence tools can assist in scaling up fact-checking efforts. These tools can help identify repeat instances of misinformation and support primary checking, allowing fact-checkers to work more efficiently.

Major Discussion Point: Strategies for combating misinformation

Platforms must balance content moderation and free speech

Explanation: Tech platforms face the challenge of balancing content moderation with preserving free speech. Clear policies and consistent enforcement are crucial in addressing this challenge.

Major Discussion Point: The role of tech platforms in election integrity

Differed with: Milton Mueller (on the role of tech platforms in content moderation)

Transparency in platform policies and government requests is key

Explanation: Transparency in platform policies and government content removal requests is essential. This transparency helps maintain trust and accountability in content moderation processes.

Evidence: Mention of transparency reports that disclose compliance with national laws.

Major Discussion Point: The role of tech platforms in election integrity

Milton Mueller

Speech speed: 136 words per minute
Speech length: 524 words
Speech time: 229 seconds

Concentration of discourse on platforms raises stakes of moderation

Explanation: The high concentration of communication on a few major platforms increases the importance of content moderation decisions. This concentration raises concerns about the power of platforms to influence public discourse.

Major Discussion Point: The role of tech platforms in election integrity

Differed with: Alex Walden (on the role of tech platforms in content moderation)

Balance addressing misinformation with preserving free speech

Explanation: There is a need to balance efforts to combat misinformation with protecting free speech. Interventions in public discourse raise concerns about platforms becoming arbiters of truth.

Evidence: Example of the Hunter Biden laptop story and potential political pressure in content moderation decisions.

Major Discussion Point: Evaluating the impact of misinformation and coalitions

Differed with: Mevan Babakar (on the approach to addressing misinformation)

Claes de Vreese

Speech speed: 163 words per minute
Speech length: 242 words
Speech time: 88 seconds

Platforms should collaborate on industry-wide coalitions

Explanation: Tech platforms should work together to form industry-wide coalitions to address election integrity issues. This collaboration could lead to more consistent and effective approaches to combating misinformation.

Major Discussion Point: The role of tech platforms in election integrity

Agreements

Agreement Points

Importance of election coalitions

Speakers: Mevan Babakar, Daniel Bramatti, David Ajikobi, Alex Walden

Arguments: Coalitions allow journalists to collaborate and scale impact; Coalitions build trust across media organizations; Coalitions help combat disinformation in African elections; Coalitions are crucial for delivering credible information to users

Summary: All speakers emphasized the significance of election coalitions in combating misinformation and enhancing election integrity through collaboration and resource sharing.

Focus on specific harmful narratives

Speakers: Mevan Babakar, David Ajikobi

Arguments: Focus on specific harmful narratives rather than all misinformation; Measure concrete harms like election interference percentages

Summary: Both speakers advocated for targeting specific harmful narratives and measuring concrete impacts rather than addressing all misinformation.

Similar Viewpoints

Speakers: Daniel Bramatti, Alex Walden

Arguments: Fact-checking adds context rather than censoring speech; Platforms must balance content moderation and free speech

Summary: Both speakers emphasized the importance of balancing fact-checking and content moderation with preserving free speech, viewing fact-checking as a way to add context rather than censor.

Speakers: Mevan Babakar, Alex Walden

Arguments: Pre-bunking can inoculate against expected false narratives; AI tools can help scale fact-checking efforts

Summary: Both speakers discussed innovative strategies to combat misinformation, including pre-bunking and AI tools, highlighting the need for proactive and scalable approaches.

Unexpected Consensus

Importance of neutrality in coalition leadership

Speakers: Daniel Bramatti, David Ajikobi

Arguments: Choosing neutral leadership is essential for coalition credibility; Coalitions help combat disinformation in African elections

Summary: Despite coming from different regions, both speakers emphasized the importance of neutral leadership in election coalitions, suggesting a shared understanding of coalition dynamics across diverse contexts.

Overall Assessment

Summary: The main areas of agreement included the importance of election coalitions, the need to focus on specific harmful narratives, the balance between fact-checking and free speech, and the potential of innovative strategies like pre-bunking and AI tools.

Consensus level: There was a high level of consensus among the speakers on the core principles and strategies for combating misinformation in elections. This consensus suggests a growing understanding of effective practices in election integrity efforts across different global contexts. However, there were nuanced differences in approaches and emphases, reflecting the diverse challenges faced in different regions and the need for context-specific solutions.

Differences

Different Viewpoints

Role of tech platforms in content moderation

Speakers: Milton Mueller, Alex Walden

Arguments: Concentration of discourse on platforms raises stakes of moderation; Platforms must balance content moderation and free speech

Summary: Milton Mueller expressed concerns about the concentration of communication on platforms and their power to influence public discourse, while Alex Walden emphasized the need for clear policies and consistent enforcement to balance content moderation with free speech.

Approach to addressing misinformation

Speakers: Mevan Babakar, Milton Mueller

Arguments: Focus on specific harmful narratives rather than all misinformation; Balance addressing misinformation with preserving free speech

Summary: Mevan Babakar advocated for focusing on the most harmful and widespread narratives, while Milton Mueller emphasized the need to balance efforts to combat misinformation with protecting free speech.

Unexpected Differences

Terminology and framing of misinformation

Speakers: Milton Mueller, Mevan Babakar

Arguments: Balance addressing misinformation with preserving free speech; Focus on specific harmful narratives rather than all misinformation

Summary: Milton Mueller unexpectedly challenged the term ‘misinformation,’ suggesting it encourages algorithmic solutions, while Mevan Babakar proposed focusing on specific harmful narratives. This difference in framing the issue highlights the complexity of addressing misinformation.

Overall Assessment

Summary: The main areas of disagreement centered around the role of tech platforms in content moderation, approaches to addressing misinformation, and the balance between combating false information and preserving free speech.

Difference level: The level of disagreement was moderate. While speakers generally agreed on the importance of election coalitions and fact-checking, they differed on specific strategies and the role of various actors. These differences reflect the complex nature of addressing misinformation in elections and highlight the need for continued dialogue and collaboration among stakeholders.

Partial Agreements

Speakers: Mevan Babakar, Daniel Bramatti, David Ajikobi

Arguments: Coalitions allow journalists to collaborate and scale impact; Coalitions build trust across media organizations; Coalitions help combat disinformation in African elections

Summary: All speakers agreed on the importance of election coalitions, but they emphasized different aspects: Mevan focused on scaling impact, Daniel on building trust, and David on combating disinformation in specific contexts.

Speakers: Alex Walden, Mevan Babakar

Arguments: AI tools can help scale fact-checking efforts; Focus on specific harmful narratives rather than all misinformation

Summary: Both speakers agreed on the need to scale fact-checking efforts, but they proposed different approaches: Alex emphasized AI tools, while Mevan suggested focusing on specific harmful narratives.


Takeaways

Key Takeaways

Election coalitions are an effective way for journalists and fact-checkers to collaborate and scale their impact in combating misinformation

Building trust and relationships between coalition members is crucial but takes time

Pre-bunking and media literacy efforts can help inoculate against expected false narratives

Tech platforms play an important but complex role in election integrity, balancing content moderation with free speech concerns

Measuring concrete harms from misinformation is more useful than focusing on all potential misinformation

Resolutions and Action Items

Google to continue supporting election coalitions through funding and resources

Fact-checkers to focus on measuring and highlighting specific harms from misinformation

Coalition members encouraged to build long-term relationships beyond single election cycles

Tech platforms to increase transparency around content moderation policies and government removal requests

Unresolved Issues

How to form effective coalitions in countries with limited civil society or media infrastructure

Balancing the need for content moderation with concerns about censorship and free speech

How to handle government pressure on platforms to remove content during elections

Determining appropriate thresholds for platform intervention on misleading content

Suggested Compromises

Using a harm-based framework to determine when intervention on misinformation is warranted, rather than blanket policies

Platforms collaborating on industry-wide coalitions to address election integrity, rather than individual company efforts

Balancing fact-checking with adding context, rather than removing content outright

Thought Provoking Comments

Pre-bunking is when there is a narrative that is trending in a country, or there’s like a series of claims that add up to a narrative that might be seen at the sharp end of a news outlet, et cetera. And instead of dealing with it after it’s been published and after it’s actually trending and viral, pre-bunking deals with it beforehand.

Speaker: Mevan Babakar

Reason: This introduces the concept of pre-bunking, which is a proactive approach to combating misinformation that many participants were unfamiliar with.

Impact: It sparked further discussion about proactive strategies for addressing misinformation and led to examples being shared, like the Taiwan minister creating fake deep fakes to inoculate the population.

I want to begin by challenging the term, misinformation. I’m in a sort of a computer science, algorithmically-driven university. And the term tends to encourage the idea that misinformation is something that has a signature that you can just recognize and somehow kick out of the bit stream.

Speaker: Milton Mueller

Reason: This comment challenges a fundamental assumption underlying much of the discussion and pushes for more precise language.

Impact: It shifted the conversation to consider the complexities of defining and identifying misinformation, leading to discussions about narratives and interpretations rather than just false information.

There’s a high degree of concentration of communication and discourse around platforms. And as a result of that, contestation over what those platforms suppress and what they promote is the stakes are raised very high. And in particular, when governments get involved in trying to influence those decisions, you get problems.

Speaker: Milton Mueller

Reason: This comment highlights the broader context and potential risks of platform-based approaches to combating misinformation.

Impact: It led to a discussion about the role of governments, potential biases, and the challenges of setting thresholds for intervention.

I think that sometimes different terms are helpful for different types of things. But I think the way that I like to think about it personally that makes it very real and reminds me and others of the importance of this work, is thinking about it through the lens of harms.

Speaker: Mevan Babakar

Reason: This reframes the discussion of misinformation in terms of concrete harms rather than abstract definitions.

Impact: It shifted the focus towards more tangible impacts and metrics, suggesting new ways to approach and measure the effects of misinformation.

Overall Assessment

These key comments shaped the discussion by challenging assumptions, introducing new concepts, and reframing the issue of misinformation. They moved the conversation from a focus on technical solutions and coalitions to a more nuanced consideration of the complexities involved in defining, identifying, and addressing misinformation. The discussion evolved to consider broader societal impacts, the role of various stakeholders including governments and platforms, and the importance of focusing on concrete harms rather than abstract definitions.

Follow-up Questions

How can election coalitions be formed in conflict-affected countries where cooperation between civil society and government is difficult?

Speaker: Lena Slachmuijlder

Explanation: This is important to understand how to implement election integrity efforts in challenging political environments.

How does Google act when there isn’t an election coalition in a country?

Speaker: Lena Slachmuijlder

Explanation: This explores Google’s role and responsibilities in countries lacking established election integrity infrastructure.

Why hasn’t Google taken the initiative to implement pre-bunking efforts in the Global South, similar to what was done in Europe?

Speaker: Lena Slachmuijlder

Explanation: This addresses potential disparities in misinformation prevention efforts between different regions.

How do platforms set the threshold for when to intervene in false or misleading narratives?

Speaker: Milton Mueller

Explanation: This is crucial for understanding how platforms balance free speech concerns with misinformation prevention.

How do platforms handle pressure from governments to suppress information that may be damaging to them or that may be an extension of their policy?

Speaker: Milton Mueller

Explanation: This explores the complex relationship between platforms, governments, and information control during elections.

How do election coalitions choose which partners to include or exclude?

Speaker: Claes de Vreese

Explanation: This is important for understanding how coalitions maintain credibility and effectiveness.

What are the best practices for election coalitions to deal with critiques that they are trying to stifle free speech or intervene in elections?

Speaker: Claes de Vreese

Explanation: This addresses a common challenge faced by fact-checking and anti-misinformation efforts.

How is Google trying to be proactive in building a coalition that would include multiple big tech platforms?

Speaker: Claes de Vreese

Explanation: This explores the potential for broader industry cooperation in addressing election misinformation.

Disclaimer: This is not an official session record. DiploAI generates these resources from audiovisual recordings, and they are presented as-is, including potential errors. Due to logistical challenges, such as discrepancies in audio/video or transcripts, names may be misspelled. We strive for accuracy to the best of our ability.